This course is aimed at Ph.D. students at the IT University of Copenhagen.
To be offered: Second half of Spring term 2017
Credit: 2 ECTS
Time and Location: Tuesdays, 13:00-14:00. First meeting: April 4.
Enrollment: 15–20 students
Instructor: Thore Husfeldt
The course will be offered as a 2-ECTS seminar, reading-group style, aimed at Ph.D. students with different epistemological backgrounds.
Our learning objectives are straightforward. After taking the course, you should be able to:
- Remain vigilant for bullshit contaminating your information diet.
- Recognize said bullshit whenever and wherever you encounter it.
- Figure out for yourself precisely why a particular bit of bullshit is bullshit.
- Provide a statistician or fellow scientist with a technical explanation of why a claim is bullshit.
We will be astonished if these skills do not turn out to be among the most useful, and most broadly applicable, of those that you acquire during your university education.
A second, important objective of this course is to serve as a playground/warm-up for the design of an undergraduate course in ITU’s upcoming Data Science education. Which parts of this set of ideas work in a higher-education context? Which skills are useful? What is teachable, and how can that be tested?
Preliminary schedule and readings
Each of the lectures will explore one specific facet of bullshit. For each week, a set of required readings is assigned. For some weeks, supplementary readings are also provided for those who wish to delve deeper.
At least in the beginning, we follow the UW course religiously.
- Introduction to bullshit
- Spotting bullshit
- The natural ecology of bullshit
- Statistical traps
- Big data
- Publication bias
- Predatory publishing and scientific misconduct
- The ethics of calling bullshit
- Fake news
- Refuting bullshit
- More to be announced
Lecture 1. Introduction to bullshit. What is bullshit? Concepts and categories of bullshit. The art, science, and moral imperative of calling bullshit. Brandolini’s Bullshit Asymmetry Principle.
- Harry Frankfurt (1986) On Bullshit. Raritan Quarterly Review 6(2)
- G. A. Cohen (2002) Deeper into Bullshit. In Buss and Overton, eds., Contours of Agency: Themes from the Philosophy of Harry Frankfurt. Cambridge, MA: MIT Press.
- Philip Eubanks and John D. Schaeffer (2008) A kind word for bullshit: The problem of academic writing. College Composition and Communication 59(3): 372-388
Lecture 2. Spotting bullshit. Truth, like liberty, requires eternal vigilance. How do you spot bullshit in the wild? Effect sizes, dimensions, Fermi estimation, and checks on plausibility. Claims and the interests of those who make them. Forensic data analysis: GRIM test, Newcomb–Benford law.
- Carl Sagan (1996) The Fine Art of Baloney Detection. Chapter 12 in The Demon-Haunted World.
- Case studies: Food stamp fraud, 99% caffeine-free
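The GRIM test mentioned above is simple enough to sketch in a few lines: a mean of n integer-valued responses can only take values of the form k/n, so a reported mean that rounds to nothing of that form is arithmetically impossible. A minimal Python sketch (the function name and the example numbers are ours, for illustration):

```python
def grim_consistent(reported_mean, n, decimals=2):
    """GRIM test: can a mean of n integer-valued responses, reported
    to `decimals` places, actually arise as k/n for some integer k?

    For n below 10**decimals the nearest integer sum is the only candidate.
    """
    k = round(reported_mean * n)  # the implied (integer) sum of responses
    return round(k / n, decimals) == round(reported_mean, decimals)

# A mean of 5.19 from 28 whole-number survey responses is impossible:
print(grim_consistent(5.19, 28))  # False
print(grim_consistent(5.18, 28))  # True: 145/28 rounds to 5.18
```

Finding GRIM-inconsistent means in a paper does not prove fraud, but it does flag numbers that cannot have been computed from the data as described.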
Lecture 3. The natural ecology of bullshit. Where do we find bullshit? Why news media provide bullshit. TED talks and the marketplace for upscale bullshit. Why social media provide ideal conditions for the growth and spread of bullshit.
- Gordon Pennycook et al. (2015) On the reception and detection of pseudo-profound bullshit. Judgment and Decision Making 10:549-563
- Adrien Friggeri et al. (2014). Rumor Cascades. Proceedings of the Eighth International AAAI Conference on Weblogs and Social Media
Lecture 4. Causality. One common source of bullshit data analysis arises when people ignore, deliberately or otherwise, the fact that correlation is not causation. The consequences can be hilarious, but this confusion can also be used to mislead. Regression to the mean pitched as a treatment effect. Selection masked as transformation.
- Robert Matthews (2000) Storks deliver babies (p = 0.008). Teaching Statistics 22:36-38
- Case study: Traffic improvements
- Karl Pearson (1897) On a Form of Spurious Correlation which may arise when Indices are used in the Measurement of Organs. Proceedings of the Royal Society of London 60: 489–498. For context see also Aldrich (1995).
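Regression to the mean pitched as a treatment effect is easy to reproduce in simulation. The sketch below (all numbers invented for illustration) "treats" the worst performers on a noisy test and watches them improve on retest with no intervention whatsoever:

```python
import random

random.seed(1)

# Each subject has a stable underlying skill; each test adds fresh noise.
skill = [random.gauss(100, 10) for _ in range(100_000)]
before = [s + random.gauss(0, 10) for s in skill]
after = [s + random.gauss(0, 10) for s in skill]

# Select the bottom decile on the first test for a sham "treatment"...
cutoff = sorted(before)[len(before) // 10]
treated = [(b, a) for b, a in zip(before, after) if b < cutoff]

mean_before = sum(b for b, _ in treated) / len(treated)
mean_after = sum(a for _, a in treated) / len(treated)

# ...and the group "improves" markedly, purely by regression to the mean.
print(f"before: {mean_before:.1f}  after: {mean_after:.1f}")
```

The selected group was partly chosen for bad luck on the first test; on the retest their luck is average again, so their scores drift back toward the population mean with no treatment at all.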
Lecture 5. Statistical traps and trickery. Base-rate fallacy / prosecutor’s fallacy. Simpson’s paradox. Data censoring. Will Rogers effect, lead-time bias, and length-time bias. Means versus medians. Importance of higher moments.
- Simpson’s paradox: an interactive data visualization from VUDlab at UC Berkeley.
- Alvan Feinstein et al. (1985) The Will Rogers Phenomenon — Stage Migration and New Diagnostic Techniques as a Source of Misleading Statistics for Survival in Cancer. New England Journal of Medicine 312:1604-1608.
- Case studies: Musicians and mortality, Track records
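Simpson's paradox can be verified with nothing but arithmetic. Using the classic kidney-stone figures (Charig et al. 1986), treatment A beats treatment B within each stone size yet loses overall, because A was assigned the harder, large-stone cases more often:

```python
# Successes / cases per (treatment, stone size), from Charig et al. (1986).
a = {"small": (81, 87), "large": (192, 263)}
b = {"small": (234, 270), "large": (55, 80)}

def rate(successes, total):
    return successes / total

def overall(groups):
    return rate(sum(s for s, _ in groups.values()),
                sum(t for _, t in groups.values()))

for size in ("small", "large"):
    # A is better in every stratum...
    print(size, f"A: {rate(*a[size]):.0%}  B: {rate(*b[size]):.0%}")

# ...yet worse in aggregate: 78% vs 83%.
print("overall", f"A: {overall(a):.0%}  B: {overall(b):.0%}")
```

The lurking variable (stone size) affects both the choice of treatment and the outcome; aggregating over it reverses the comparison.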
Lecture 6. Data visualization. Data graphics can be powerful tools for understanding information, but they can also be powerful tools for misleading audiences. We explore the many ways that data graphics can steer viewers toward misleading conclusions.
- Edward Tufte (1983) The Visual Display of Quantitative Information. Chartjunk: vibrations, grids, and ducks (Chapter 5).
- Tools and tricks: Misleading axes
Lecture 7. Big data.
- Jevin West (2010) How to improve the use of metrics: learn from game theory. Nature 465:871-872
- Peter Lawrence (2007) The mismeasurement of science. Current Biology 17:R583-R585
- danah boyd and Kate Crawford (2011) Six Provocations for Big Data. A Decade in Internet Time: Symposium on the Dynamics of the Internet and Society.
- Cathy O’Neil (2016) Weapons of Math Destruction. Crown.
Lecture 8. Publication bias. Even a community of competent scientists all acting in good faith can generate a misleading scholarly record when — as is the case in the current publishing environment — journals prefer to publish positive results over negative ones. In a provocative and hugely influential 2005 paper, epidemiologist John Ioannidis went so far as to argue that this publication bias has created a situation in which most published scientific results are probably false. As a result, it’s not clear that one can safely rely on the results of some random study reported in the scientific literature, let alone on Buzzfeed.
- John Ioannidis (2005) Why most published research findings are false. PLOS Medicine 2:e124.
Lecture 9. Predatory publishing and scientific misconduct.
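Ioannidis's core argument rests on a one-line calculation. If R is the prior odds that a tested claim is true, then a field running studies with power 1−β at significance level α produces true positives at rate (1−β)R and false positives at rate α, so the share of "significant" findings that are actually true is PPV = (1−β)R / ((1−β)R + α). A quick sketch (the example odds are chosen for illustration):

```python
def ppv(R, power=0.8, alpha=0.05):
    """Share of 'significant' findings that are true, given prior odds R
    that a tested claim is true (Ioannidis 2005, ignoring bias terms)."""
    return power * R / (power * R + alpha)

# Even well-powered tests of long-shot hypotheses mostly yield false positives:
print(f"{ppv(1/10):.0%}")    # fairly plausible claims: about 62%
print(f"{ppv(1/1000):.1%}")  # exploratory long shots: under 2%
```

Adding Ioannidis's bias term and lowering the power only makes these numbers worse, which is the basis for the paper's title.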
- Fake academe, looking much like the real thing. New York Times, Dec. 29, 2016.
- Adam Marcus and Ivan Oransky (2016) Why fake data when you can fake a scientist? Nautilus November 24.
Lecture 10. The ethics of calling bullshit. Where is the line between deserved criticism and targeted harassment? Is it, as one prominent scholar argued, “methodological terrorism” to call bullshit on a colleague’s analysis? What if you use social media instead of a peer-reviewed journal to do so? How about calling bullshit on a whole field that you know almost nothing about? PubPeer. Principles for the ethical calling of bullshit. Differences between being a hard-minded skeptic and being a domineering jerk.
- Alan Sokal (1996) A physicist experiments with cultural studies. Lingua Franca 6:62-64.
- Jennifer Ruark (2017) Anatomy of a hoax. Chronicle of Higher Education.
- Robert Service (2014) Nano-Imaging Feud Sets Online Sites Sizzling. Science 343:358.
- Susan Fiske (2016) Mob Rule or Wisdom of Crowds? APS Observer, preliminary draft. Also read the accompanying commentaries.
- Michael Blatt (2016) Vigilante Science. Plant Physiology 169:907-909.
Lecture 11. Fake news. Fifteen years ago, nascent social media platforms offered the promise of a more democratic press through decentralized broadcasting and a decoupling of publishing from advertising revenue. Instead, we got sectarian echo chambers and, lately, a serious assault on the very notion of fact. Not only did fake news play a substantive role in the November 2016 US elections, but recently a fake news story actually provoked nuclear threats issued via Twitter.
- New York Times, Nov. 25, 2016.
- Donath, Judith (2016) Why fake news stories thrive online. CNN Opinion.
Lecture 12. Refuting bullshit. Refuting bullshit requires different approaches for different audiences. What works for a quantitatively skilled professional scientist won’t always convince your casually racist uncle on Facebook, and vice versa.
- John Cook and Stephan Lewandowsky (2012) The Debunking Handbook.
- Craig Bennett et al. (2009) Neural correlates of interspecies perspective taking in the post-mortem Atlantic Salmon: An argument for multiple comparisons correction.
- Case study: Gender gap in 100 m times
Additional readings @ ITU
Here is some additional material that Thore finds useful and enlightening:
- Weapons of Math Destruction by Cathy O’Neil, Crown, 2016. A nontechnical collection of various examples of machine-learned decision making mechanisms and possible impacts.
- Thinking, Fast and Slow by Daniel Kahneman, Farrar, Straus and Giroux, 2013. Highly readable, best-selling introduction to the Kahneman–Tversky theory of judgment and decision making. Includes highly operational mechanisms for bias avoidance in groups.
- Moral Tribes: Emotion, Reason, and the Gap Between Us and Them by Joshua Greene, Penguin (2013). The social psychology of decision making as a marker of group membership. Very strong utilitarian perspective, some neuroscience.
- The Righteous Mind: Why Good People Are Divided by Politics and Religion by Jonathan Haidt, Vintage, 2013. The moral motivations for decisions and biases.
- Explaining Postmodernism: Skepticism and Socialism from Rousseau to Foucault by Stephen Hicks, Ockham’s Razor, 2011. Incredibly hostile but remarkably clear and consistent overview of postmodernism. (It would be useful to contrast this book with a similarly clear book that actually likes postmodernism.)