Questions for you:
- Where in your work do you treat forecasts as more reliable than they actually are, and what decisions depend on that false confidence? How would you measure that?
- Can you think of a situation where your organisation invested heavily in one predicted outcome, when building resilience across multiple scenarios would have been more useful? How can you stop that happening again?
- How do you currently communicate uncertainty in planning and forecasting? Do you give point estimates when ranges would be more honest?
Organisational applications:
Distinguishing determinism from predictability: the story makes a point that is easy to miss: the atmosphere is not random. The physics governing it follows deterministic laws. What makes long-term weather forecasting impossible is not randomness in the system but the impossibility of measuring initial conditions precisely enough to run those deterministic equations forward in time. Tiny measurement errors grow exponentially until the forecast and the real atmosphere part company.
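You can see that mechanism in miniature with a few lines of code. This sketch is mine, not from the story: the step size, starting points, and one-part-in-a-million nudge are arbitrary illustrative choices. It integrates Lorenz's equations twice from two almost identical starting points:

```python
# Sensitive dependence on initial conditions, using the Lorenz system
# (the equations behind the butterfly effect). All numeric choices here
# are illustrative, not taken from the story.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz equations one small Euler step."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

# Two trajectories: identical except for a one-in-a-million nudge.
a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-6, 1.0, 1.0)

for step in range(1, 5001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 1000 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * 0.01:5.1f}  separation = {gap:.6f}")
```

Run it and the separation climbs from microscopic to the full width of the attractor. Same equations, same code, no randomness anywhere: the unpredictability comes entirely from the starting point.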
Many of the systems your organisation operates in share this structure. Markets, supply chains, and competitive dynamics are not purely random; they follow patterns and respond to forces. But they are sensitive enough to initial conditions and to small perturbations that confident point predictions beyond a short horizon are unlikely to hold. Treating them as fundamentally unpredictable and planning accordingly is more accurate than treating them as predictable systems with noise that better data will eventually tame.
Scenario planning as the organisational equivalent of the cone of uncertainty: Hurricane forecasters do not abandon prediction because it is imperfect. They produce a cone of uncertainty: a range of plausible tracks that widens as the forecast extends further into the future, and they make operational decisions that span that range rather than optimising for a single predicted landfall.
This is a useful model for organisational planning under genuine uncertainty. Rather than producing a single forecast and building a plan around it, scenario planning generates a small number of meaningfully different futures and tests strategic options against each. The goal is not to identify which scenario will occur, but to identify options that remain sound across several scenarios. Organisations that do this consistently, and yours could be one of them, tend to make better decisions in volatile conditions than those that treat their best forecast as an immutable plan.
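If you want to see why the cone widens, a few lines of simulation are enough. This sketch is entirely illustrative: the growth rate, volatility, and starting value are numbers I have invented, not estimates from any real plan. It runs ten thousand forecasts of a single metric and prints the range the middle 80 per cent of runs fall into each year:

```python
# A toy "cone of uncertainty": instead of one forecast, run many
# simulations with small random shocks and report the spread.
# All parameters are invented for illustration.
import random

random.seed(42)
RUNS, YEARS = 10_000, 5
GROWTH, VOLATILITY = 0.08, 0.10  # assumed mean growth and yearly noise

paths = []
for _ in range(RUNS):
    value, path = 100.0, []
    for _ in range(YEARS):
        value *= 1 + GROWTH + random.gauss(0, VOLATILITY)
        path.append(value)
    paths.append(path)

for year in range(YEARS):
    outcomes = sorted(p[year] for p in paths)
    low, mid, high = (outcomes[int(RUNS * q)] for q in (0.1, 0.5, 0.9))
    print(f"year {year + 1}: 80% of runs fall between "
          f"{low:6.1f} and {high:6.1f} (median {mid:6.1f})")
```

The band gets wider every year, exactly as the hurricane cone does: not because the model gets worse, but because small differences have had more time to compound.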
The compounding cost of small errors in long-range planning: the story's mathematical point about error amplification has a direct organisational parallel. A business plan built on a revenue forecast that is five per cent optimistic, a cost forecast that is five per cent understated, and a timeline that is ten per cent compressed produces compounding divergence from reality that becomes serious over a three-to-five-year horizon. The individual assumptions each look reasonable; the aggregate effect is a plan that bears little resemblance to what actually happens.
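The arithmetic is worth seeing rather than taking on trust. Here is a rough sketch using the paragraph's own biases; the starting revenue, cost base, and growth rates are invented purely to make the numbers concrete:

```python
# Checking the compounding claim with the paragraph's own biases.
# Base figures (revenue 100, costs 80) and growth rates are invented.

REV_OPTIMISM = 1.05    # revenue forecast 5% high, every year
COST_OPTIMISM = 0.95   # cost forecast 5% low, every year
TIME_SLIP = 1.10       # a 10% compressed timeline: work takes 1.1x,
                       # modelled here as a flat haircut on revenue

plan_rev = real_rev = 100.0
plan_cost = real_cost = 80.0

for year in range(1, 6):
    plan_rev *= 1.10    # the plan assumes 10% revenue growth
    plan_cost *= 1.05   # ...and 5% cost growth
    real_rev *= 1.10 / REV_OPTIMISM      # reality: growth was overstated
    real_cost *= 1.05 / COST_OPTIMISM    # reality: costs were understated
    plan_profit = plan_rev - plan_cost
    real_profit = real_rev / TIME_SLIP - real_cost
    print(f"year {year}: planned profit {plan_profit:7.1f}, "
          f"plausible actual {real_profit:7.1f}")
```

With these invented figures, the plan shows profit climbing steadily towards year five while the plausible actual slides into loss. Each assumption shaves only a few points, but because next year's figures are built on this year's, the gap widens every year rather than staying constant.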
Overall, these issues are not primarily forecasting problems; they are a planning-culture problem. Organisations that require confident point estimates to approve projects create incentives for forecasters to produce them, whether or not the underlying uncertainty warrants it. Building explicit uncertainty ranges into planning processes, and normalising the acknowledgement that a five-year plan is substantially a set of informed guesses, tends to produce better outcomes than treating the plan as a reliable prediction.
Further reading
On chaos theory, sensitive dependence, and the limits of prediction:
The Black Swan: The Impact of the Highly Improbable by Nassim Nicholas Taleb. Taleb’s account of why forecasting fails in complex systems covers the same mathematical territory as the butterfly effect, with particular attention to the organisational and financial consequences of treating inherently unpredictable systems as if they were forecastable.
Chaos: Making a New Science by James Gleick. The most readable account of how chaos theory developed and what it means, covering Lorenz’s original atmospheric work that gave rise to the butterfly effect concept, and the broader implications for prediction in complex systems.
On forecasting, uncertainty, and how to communicate it honestly:
Superforecasting: The Art and Science of Prediction by Philip Tetlock and Dan Gardner. Tetlock’s research identifies what distinguishes more accurate forecasters from less accurate ones, with practical implications for how organisations can improve their forecasting practice without pretending that genuine uncertainty can be eliminated.
The Signal and the Noise: The Art and Science of Prediction by Nate Silver. Silver’s chapter on weather forecasting is directly relevant to this story, covering how meteorologists have improved their communication of uncertainty through probabilistic forecasts, and what other domains can learn from that approach.
On robust decision-making under uncertainty:
Antifragile: Things That Gain from Disorder by Nassim Nicholas Taleb. Taleb’s argument for building systems that benefit from volatility rather than merely surviving it is the most direct organisational response to the insight that chaotic systems cannot be predicted but can be prepared for.
Thinking in Bets: Making Smarter Decisions When You Don’t Have All the Facts by Annie Duke. A practical framework for making decisions honestly under conditions where the outcome cannot be known in advance, which is the situation the butterfly effect story describes.
About the image
A photo of a Clipper Butterfly I took back in 2008 at a butterfly farm at Blenheim Palace.
Photo montage (2026) and original photograph (2008) by Matt Ballantine.
