Questions for you:
- Can you think of a book, film, or piece of music you consider a classic? How confident are you that it succeeded on merit rather than timing, circumstance, or the right person happening to encounter it?
- How do you currently evaluate creative or cultural output in your organisation, and how much of that evaluation is shaped by what or who has already succeeded rather than what has genuine quality?
- When you hire or commission creative work, how do you account for the fact that past success in high-variance fields may reflect luck as much as skill? If you don’t, how might you account for that in future?
- If you wanted to find creative work by “unlucky” or overlooked creators, authors, or makers, what techniques would you use?
Organisational applications:
Survivorship bias in how organisations learn from success: a cascade of social proof that begins with essentially random early events can compound the perceived value of something into canonical status.
Organisations apply the same faulty logic when studying successful products, campaigns, or strategies and reverse-engineering lessons from them. The problem is structural: the equally good ideas that failed, through bad timing, being pitched in the wrong room, or being proposed in a meeting where the sponsor was absent, are invisible. What looks like a pattern of success is a pattern of survival. Before treating any internal success story as a model to replicate, it is worth asking seriously how much of the outcome was within anyone’s control, and how many similar attempts failed without generating a case study.
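The cascade mechanism described above can be made concrete with a minimal simulation. The sketch below is illustrative only, not a model from the page: ten identical works compete in a market where each consumer picks a work with probability proportional to its current popularity, so random early picks compound into large gaps. Resetting the market with a different random seed typically crowns a different "classic" each time, even though nothing about the works themselves has changed. All names and parameters here are invented for illustration.

```python
import random

def simulate_market(n_works=10, n_choices=1000, seed=None):
    """Simulate one cultural market of identical works.

    Each consumer chooses a work with probability proportional to its
    current popularity plus one (so unknown works still get a chance).
    Early random choices therefore compound: a work that happens to be
    picked first is more likely to be picked again.
    """
    rng = random.Random(seed)
    popularity = [0] * n_works  # every work starts equally unknown
    for _ in range(n_choices):
        weights = [p + 1 for p in popularity]
        choice = rng.choices(range(n_works), weights=weights)[0]
        popularity[choice] += 1
    return popularity

# "Reset" the same market a few times: same works, different winners.
for run in range(3):
    result = simulate_market(seed=run)
    winner = result.index(max(result))
    print(f"run {run}: winner = work {winner}, "
          f"share = {max(result) / sum(result):.0%}")
```

The point of the sketch is that the winner in any single run looks, after the fact, like the obviously superior work, yet the identity of the winner is determined entirely by early random draws. This is the same structure as the cultural-market experiments discussed under further reading below.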
The randomness of cultural canonisation and its effect on commissioning decisions: The page notes that many classics achieved their status through historical accident rather than intrinsic superiority. This matters practically for any organisation that commissions, curates, or recommends creative work. Editors, buyers, and programme commissioners who rely heavily on proven formulas, sequels, genre conventions, and the track records of established names are implicitly treating past random successes as predictive signals.
The evidence suggests they are not particularly reliable ones. Publishing’s rejection rate for work that subsequently became canonical is a fairly direct measure of expert prediction failure in high-variance creative domains. That does not mean commissioning judgement is worthless, but it does suggest that portfolios with more diversity and tolerance for novelty will outperform those that optimise for resemblance to past winners.
Managing talent assessment in high-variance creative roles: The page makes the point that in highly competitive creative fields, the skill difference between the top ten per cent and the top fraction of a per cent is often marginal, and that visible success reflects lucky timing as much as superior ability. This creates a specific problem for talent assessment.
An organisation that consistently hires on the basis of track record in high-variance roles will over-index on people who happened to be in the right place at the right time, and systematically overlook those who were not. This is not an argument for ignoring track records entirely, but for weighing the circumstances behind them more carefully: how much of the variance in this person’s environment was within their control, and how does their performance look relative to that context rather than in absolute terms?
What specific advantages can your organisation draw from this phenomenon? Firstly, what do you think of the page’s hypothesis? If it holds, there is a great deal of valuable work, and there are potential employees, available to you who are under-appreciated and potentially under-priced, partly or wholly due to random factors. How can your organisation take advantage of that?
Further reading
On chance and success in creative industries:
The Drunkard’s Walk: How Randomness Rules Our Lives by Leonard Mlodinow. Mlodinow’s account of how randomness shapes careers and reputations covers the publishing and entertainment industries directly, with clear explanations of why expert prediction of creative success performs so poorly.
Outliers: The Story of Success by Malcolm Gladwell. Gladwell’s examination of the hidden factors behind exceptional success, including the role of timing, cultural background, and arbitrary opportunity, is a readable account of why the most visible achievers are not simply the most talented.
On survivorship bias and what the failures can teach:
The Signal and the Noise: The Art and Science of Prediction by Nate Silver. Silver’s treatment of prediction failure across multiple domains includes sustained attention to how selective exposure to successful outcomes distorts our sense of what is predictable and what is not.
Thinking, Fast and Slow by Daniel Kahneman. Kahneman’s account of the halo effect and narrative fallacy explains the cognitive mechanisms that turn random early success into perceived intrinsic quality, and why those perceptions are so difficult to dislodge once established.
On cultural markets, social influence, and the amplification of random early advantage:
Everything is Obvious: How Common Sense Fails by Duncan J. Watts. Watts’s research on social contagion and cultural markets is the most rigorous available account of why hit-making is so unpredictable, based partly on experiments that reset the same cultural market multiple times and observed how random early patterns produced entirely different outcomes each time.
The Black Swan: The Impact of the Highly Improbable by Nassim Nicholas Taleb. Taleb’s discussion of Extremistan, domains where outcomes are dominated by a small number of disproportionately large events, captures the structural reason why bestseller lists look the way they do and why predicting them remains so difficult.
About the image
The wonderful second-hand book market that opens up underneath the arches of Waterloo Bridge on the South Bank of the Thames in London.
Photo montage and photo by Matt Ballantine, 2026
