Questions for you:
- When evaluating predictions, forecasts, or strategic plans, do you systematically track both hits and misses, or do you remember confirming examples whilst forgetting disconfirming ones?
- Given vague strategic guidance (“focus on innovation,” “customer-centric approach”), could it be matched to any outcome after the fact, making it impossible to evaluate whether the strategy actually worked?
- When someone claims they “predicted” an outcome, do you check whether their prediction was specific and falsifiable, or could it be retroactively matched to multiple different outcomes?
- In reviewing past advice or predictions that seemed accurate, how specific were they actually? Could you find equally plausible matches to completely different events?
Organisational applications:
Specificity and falsifiability in strategic planning: Vague predictions fit any random event – “the spectre will emerge in the South” could match countless occurrences, given years to find connections. Strategic plans often suffer from Nostradamus syndrome: goals so general they can be declared successful regardless of outcomes. “Increase customer satisfaction” or “drive innovation” can be matched to any initiative after the fact. Build falsifiable strategies with specific, measurable predictions: a target such as “increase NPS from 42 to 55 by Q4” can actually be evaluated. Vague guidance provides rhetorical cover for any outcome, preventing learning from actual results.
Confirmation bias and selective memory: Humans naturally look for meaningful connections whilst overlooking misses. This makes vague predictions seem accurate – we remember the hits (“they predicted the financial crisis!”) whilst forgetting the countless misses, and the vagueness that made the match trivial in the first place. Combat this by systematically tracking all predictions: record forecasts before outcomes are known, evaluate accuracy against specific criteria, calculate hit rates rather than relying on memorable examples, and maintain a prediction register to prevent selective memory. Most “accurate predictors” have terrible track records when all their predictions are counted.
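To make the idea of a prediction register concrete, here is a minimal sketch of how one might be kept in code rather than in a notebook. Everything in it – the Prediction and PredictionRegister names, the fields, and the simple hit-rate calculation – is an illustrative assumption about how you could structure such a register, not a prescribed tool.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class Prediction:
    """One entry in a prediction register, recorded BEFORE the outcome is known."""
    claim: str                      # specific, falsifiable statement
    made_on: date                   # when the forecast was recorded
    resolve_by: date                # deadline by which it must be judged
    success_criterion: str          # what exactly counts as a hit
    outcome: Optional[bool] = None  # filled in only once the deadline has passed

@dataclass
class PredictionRegister:
    predictions: list[Prediction] = field(default_factory=list)

    def record(self, prediction: Prediction) -> None:
        self.predictions.append(prediction)

    def resolve(self, index: int, hit: bool) -> None:
        """Judge a prediction against its stated criterion, not a retrofitted one."""
        self.predictions[index].outcome = hit

    def hit_rate(self) -> float:
        """Hit rate over ALL resolved predictions, not just the memorable ones."""
        resolved = [p for p in self.predictions if p.outcome is not None]
        if not resolved:
            return 0.0
        return sum(p.outcome for p in resolved) / len(resolved)


# Example usage with the hypothetical NPS target mentioned above
register = PredictionRegister()
register.record(Prediction(
    claim="NPS will rise from 42 to at least 55",
    made_on=date(2026, 1, 15),
    resolve_by=date(2026, 12, 31),
    success_criterion="Q4 NPS survey result of 55 or higher",
))
register.resolve(0, hit=False)  # judged against the criterion written in January
print(f"Hit rate: {register.hit_rate():.0%}")
```

The point of the sketch is the discipline it enforces, not the code: the claim, the deadline, and the success criterion are all fixed before the outcome exists, and the hit rate is calculated over every resolved prediction rather than the ones people happen to remember.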
Post-hoc rationalisation and narrative construction: In hindsight, connections between predictions and events seem obvious and inevitable. Nostradamus didn’t predict specific events – interpreters retroactively matched vague quatrains to whatever happened. Organisations do this constantly: strategic plans are retroactively interpreted to match actual outcomes, leaders claim they “always knew” directions that emerged randomly, and consultants declare predictions accurate through creative reinterpretation. Prevent this by requiring specific, time-bound predictions before events, prohibiting retroactive interpretation, and comparing predictions to what actually happened rather than what can be construed to match.
Testing prediction quality through specificity: Good predictions must be specific enough that they could be wrong. “Political upheaval will occur” is unfalsifiable – something can always be found to match it. “The incumbent party will lose the next election” is falsifiable and testable. When evaluating advice, forecasts, or strategic guidance, demand specificity: What exactly will happen? By when? Under what conditions would you admit being wrong? If a prediction can be matched to multiple contradictory outcomes, it’s not a prediction – it’s a Nostradamus quatrain, providing the illusion of insight without any actual forecasting value.
Further reading
Prediction, forecasting, and vagueness
Superforecasting by Philip E. Tetlock and Dan Gardner – demonstrates that the best forecasters make specific, falsifiable predictions and update them as new evidence arrives, whilst poor forecasters use vague language that allows retroactive reinterpretation, showing that specificity is essential for evaluable forecasting.
Expert Political Judgment by Philip E. Tetlock – a 20-year study tracking expert predictions, showing that famous pundits perform no better than chance, largely because their predictions are vague enough to be declared correct regardless of outcomes.
Future Babble by Dan Gardner – examines why expert predictions consistently fail, showing how vague language, selective memory, and post-hoc rationalisation create an illusion of forecasting accuracy where none exists.
Confirmation bias, pattern-seeking, and selective memory
The Believing Brain by Michael Shermer – explains how people form beliefs first and then selectively find confirming evidence, demonstrating why vague predictions seem accurate: we remember the hits whilst forgetting the misses.
Thinking, Fast and Slow by Daniel Kahneman – discusses confirmation bias and the availability heuristic, showing that memorable confirming examples dominate judgement over systematic evidence, which explains why vague predictions create false confidence.
The Black Swan by Nassim Nicholas Taleb – argues that people construct narratives explaining random events after the fact, showing how vague predictions are retroactively matched to outcomes through narrative flexibility rather than actual forecasting skill.
Post-hoc rationalisation and narrative construction
Mistakes Were Made (But Not by Me) by Carol Tavris and Elliot Aronson – examines self-justification, showing how people retroactively reinterpret predictions and decisions to appear correct, demonstrating the mechanism that makes vague predictions seem accurate through creative reinterpretation.
The Halo Effect by Phil Rosenzweig – a business book showing how management principles are retroactively fitted to company outcomes, demonstrating that vague strategic guidance can be declared successful regardless of results through narrative construction.
Fooled by Randomness by Nassim Nicholas Taleb – explores how people mistake random outcomes for meaningful patterns and retroactively construct narratives explaining chance events, showing why vague predictions appear prescient when matched to random occurrences.
Interactive exhibit
If you’d rather make your predictions online than on paper, this little app will help, and it can even add them to your online diary… https://experiments.randomthebook.com/nostradamus/
About the image
A fairly cheap crystal ball, found on the internet.
Photo montage by Matt Ballantine, 2026
