Tetlock and the Technocrats
How basic scorekeeping revealed the limits of expertise—and the case for thinking as a skill
From my new essay in Sportspredict:
In August 2005, around a month after Philip Tetlock published Expert Political Judgment, John Ioannidis published “Why Most Published Research Findings Are False,” kicking off what we would later come to call the replication crisis. In retrospect, Tetlock’s project was always a close cousin. Just as the replication crisis revealed that enormous swaths of modern science, even whole subdisciplines, aren’t demonstrably real, Tetlock revealed that most experts’ pretensions to special understanding of the world do not survive basic scorekeeping.
The narratives have the same moral: In the modern West, prestige has been a decidedly overrated proxy for epistemic competence. Like Ioannidis, Tetlock used the tools of social science to find the limits of social science.
Yet Tetlock’s critique was not nihilistic. Fortunately for epistemology in general, hidden in the rubble of expertise was the suggestion of something more robust. Though Tetlock found most experts incompetent, some still reliably outperformed others. And perhaps as famous as his finding that expertise was fake was his discovery that these better forecasters shared certain non-obvious intellectual habits.
Read the whole thing there.

