Questions for you:
- What are the most influential success stories in your industry or field, and how much do you know about the equally capable people and organisations that tried the same approach and failed?
- When your organisation studies competitors or seeks inspiration from other companies, does it systematically include failed examples, or only visible successes?
- Can you think of a decision your organisation has made that was justified partly by citing a successful precedent, without accounting for how many similar attempts did not succeed?
Organisational applications:
Survivorship bias in organisational learning: Case study culture — learning from successful companies, celebrated leaders, and high-profile innovations — is structurally biased toward survivors. The companies that appear in business school curricula, management books, and conference keynotes are there because they succeeded, not because their stories are more instructive than the ones that failed. In high-variance domains, the visible successes may have followed identical strategies to invisible failures, differing only in timing, market conditions, or chance.
Organisations that base strategic decisions primarily on what successful companies have done are working from an incomplete sample in a predictable way: they are seeing the distribution of outcomes among survivors, not across all attempts.
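The gap between the survivor distribution and the distribution across all attempts is easy to demonstrate with a small simulation. The numbers below are purely illustrative assumptions (not data from the story): every venture follows the same strategy, and outcomes differ only by chance.

```python
import random

random.seed(42)

# Illustrative assumption: 10,000 ventures all follow the *same*
# strategy. Each outcome is pure chance, drawn from a distribution
# in which the average attempt loses money.
N = 10_000
outcomes = [random.gauss(mu=-0.10, sigma=0.40) for _ in range(N)]

# Only ventures that at least broke even remain visible afterwards —
# these are the ones that end up in case studies and keynotes.
survivors = [r for r in outcomes if r >= 0.0]

mean_all = sum(outcomes) / len(outcomes)
mean_survivors = sum(survivors) / len(survivors)

print(f"Attempts: {N}, visible survivors: {len(survivors)}")
print(f"Mean return across all attempts: {mean_all:+.2f}")
print(f"Mean return among survivors:     {mean_survivors:+.2f}")
```

Studying only the survivors makes a strategy with a negative expected return look comfortably profitable, which is exactly the incomplete-sample problem described above.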
Hiring and talent assessment under survivorship bias: The story is direct about the implications for hiring: track records in high-variance domains select for luck as well as skill, and treating impressive records as reliable signals of superior ability overweights the role of chance.
This does not mean ignoring track records, but it does mean examining them more carefully — asking about the variance in the environment, whether the outcomes were typical of peers in similar conditions, and what the person did differently rather than simply what they achieved. Structured interviews that focus on decision processes and reasoning rather than outcomes alone are a partial corrective, because the quality of reasoning is more attributable to the individual than to the outcome they happen to achieve.
Making the invisible visible in post-mortems and retrospectives: One practical response to survivorship bias in organisational learning is to deliberately seek out failure data alongside success data. This means commissioning retrospectives on projects that were quietly abandoned rather than only on those that were completed, studying competitors that exited the market rather than only those that thrived, and maintaining internal records of initiatives that did not proceed alongside those that did.
The objective is not to cultivate pessimism but to calibrate the base rate: to understand what proportion of attempts like this one typically succeed, so that success stories can be interpreted in context rather than as evidence of universal replicability.
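A worked illustration of why the base rate matters, using the coin-flipping fund-manager example common in this literature (the figures are hypothetical, not from the story): if many people attempt something where success in any given period is a coin flip, chance alone guarantees a handful of flawless track records.

```python
# Hypothetical illustration: how many perfect track records does
# pure chance produce? Assume 1,000 fund managers, each with an
# independent 50% chance of a "winning" year, over 5 years.
managers = 1_000
p_win_year = 0.5
years = 5

# Expected number of managers who win every single year by luck:
# 1000 * 0.5^5 = 31.25
expected_perfect_records = managers * p_win_year ** years
print(f"Expected perfect {years}-year records from chance alone: "
      f"{expected_perfect_records:.0f}")
```

Without the base rate, each of those roughly thirty managers looks like compelling evidence of skill; with it, a perfect five-year record is about what luck alone would deliver.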
Further reading
On survivorship bias and the invisible graveyard:
The Black Swan: The Impact of the Highly Improbable by Nassim Nicholas Taleb. Taleb’s account of the silent evidence problem — the systematic absence of failure data from the information we use to make decisions — is the most rigorous available treatment of the survivorship bias argument the story makes.
Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets by Nassim Nicholas Taleb. Taleb’s earlier book covers the specific application to financial markets and career success, with sustained attention to how traders and investors mistake luck for skill by examining only the survivors of random selection processes.
On learning from failure and making the invisible visible:
The Signal and the Noise: The Art and Science of Prediction by Nate Silver. Silver’s account of how base rates are systematically ignored in favour of vivid individual cases is directly relevant to the survivorship bias problem: the correction is to ask what proportion of similar attempts succeed, not whether this particular success story is inspiring.
Thinking in Bets: Making Smarter Decisions When You Don’t Have All the Facts by Annie Duke. Duke’s framework for separating decision quality from outcome quality, and for evaluating the quality of a decision by examining the reasoning rather than the result, is the most practical available corrective to survivorship-biased performance assessment.
On talent evaluation and the limits of track records:
Noise: A Flaw in Human Judgement by Daniel Kahneman, Olivier Sibony and Cass R. Sunstein. The chapters on hiring and performance evaluation cover the evidence that track records in high-variance domains are poor predictors of future performance, and what structured assessment approaches do better.
The Success Equation: Untangling Skill and Luck in Business, Sports, and Investing by Michael Mauboussin. Mauboussin’s framework for assessing how much of the variance in outcomes in different domains is attributable to skill versus luck provides the quantitative basis for the story’s argument, with practical guidance on adjusting for survivorship bias in talent assessment.
About the image
This is an illustration from https://apps.dtic.mil/sti/citations/ADA245827 showing battle damage to an F-4 fighter jet. The origin story of survivorship bias is the wartime practice of examining damage on returning aircraft to decide where to add armour — with the now-famous insight that the armour belonged where returning planes were not hit, because aircraft struck in those places did not return.
Montage: Matt Ballantine, 2026
