Random the Book

Random the Book: Matt Ballantine and Nick Drage's experiment in serendipity and chance.


Why is seven the most likely roll?

Questions for you:

  • Can you think of a situation where you have been counting outcomes when you should have been counting the underlying combinations that produce them? Where might this error be affecting how you assess probability in your own life?
  • When you estimate the likelihood of something happening, do you instinctively enumerate the ways it could come about, or do you start from a general sense of whether it feels common or rare?
  • Is there a decision you are facing right now where the apparent number of options is misleading you, because those options are not equally likely?

Organisational applications:

Outcomes versus mechanisms: a common analytical error: The story’s central lesson is that the number of possible outcomes tells you nothing about how probability is distributed across them. This error, treating a list of outcomes as if each were equally likely without examining the underlying mechanisms that produce them, is pervasive in organisational risk assessment and planning.

A project risk register that lists ten possible failure modes and implicitly treats each as equally probable is making exactly the same mistake as the person who sees eleven possible dice totals and assumes each is equally likely. The correction is to ask, for each outcome, how many distinct pathways lead to it. The outcome with the most routes, the equivalent of seven, is the one most worth planning for, regardless of how it sits in a list.
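The dice version of this correction takes a few lines to verify: enumerate the 36 equally likely combinations of two dice and count how many produce each total. This is a minimal sketch, not anything from the book itself:

```python
from collections import Counter

# Count how many of the 36 equally likely (die1, die2) pairs produce each total.
counts = Counter(d1 + d2 for d1 in range(1, 7) for d2 in range(1, 7))

# Eleven totals are possible (2 through 12), but they are not equally likely:
# seven is produced by six distinct combinations, two and twelve by one each.
most_common_total, routes = counts.most_common(1)[0]
print(most_common_total, routes)   # 7 6
print(routes / 36)                 # probability 1/6, not 1/11
```

The same loop, with failure modes in place of dice faces, is the shape of the correction for a risk register: count pathways, not list entries.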

Decomposing complex outcomes into components: The story notes that this way of breaking things down — looking at the individual components of a random process rather than the outcome — is the bedrock of probability modelling today. In practice, this means that when estimating the likelihood of a complex organisational outcome, the most reliable approach is to decompose it into its constituent parts, assess each separately, and then combine them.

A product launch’s probability of success is not a single judgement; it is a function of the probability that the market exists, the product solves the right problem, the timing is right, execution meets plan, and competitors do not move first. Each component has its own likelihood, and their combination is typically more instructive than a holistic guess. The discipline of decomposition is also a useful check on overconfidence — complex outcomes with many required conditions are generally less likely than they feel.
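The launch example can be sketched numerically. The component probabilities below are invented for illustration, and the multiplication assumes the conditions are roughly independent, which real components rarely are exactly:

```python
# Illustrative component probabilities (assumed values, not from the book).
components = {
    "market exists": 0.9,
    "product solves the right problem": 0.7,
    "timing is right": 0.8,
    "execution meets plan": 0.75,
    "competitors do not move first": 0.85,
}

# If the conditions are roughly independent, the chance that ALL of them
# hold is the product of the individual probabilities.
overall = 1.0
for p in components.values():
    overall *= p

print(f"{overall:.2f}")  # 0.32: far lower than any single component suggests
```

This is the overconfidence check in miniature: five conditions that each feel likely combine to an overall chance of roughly one in three.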

Why base rates matter more than individual cases: The story’s deeper implication is that understanding the distribution of outcomes requires knowing the whole sample space, not just the outcome you are focused on. This is the base-rate problem in applied form: if you are evaluating a specific outcome without knowing how the underlying combinations are distributed, you are essentially guessing at the probability without adequate information.

Organisations that establish base rates — what proportion of projects like this succeed, what proportion of hires in this role perform well, what proportion of similar initiatives achieve their targets — are doing what Cardano and Galileo did: enumerating the full sample space before estimating the probability of any particular outcome. It is more work than intuition but produces substantially more reliable estimates.
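Establishing a base rate is mechanically simple once outcomes are recorded. A minimal sketch, using invented historical data purely for illustration:

```python
# Hypothetical track record of past initiatives (invented data).
past_projects = [
    ("platform migration", True),
    ("platform migration", False),
    ("platform migration", True),
    ("platform migration", False),
    ("platform migration", False),
    ("new product line", True),
    ("new product line", False),
]

def base_rate(project_type):
    """Proportion of past projects of this type that succeeded."""
    outcomes = [succeeded for kind, succeeded in past_projects if kind == project_type]
    return sum(outcomes) / len(outcomes)

print(base_rate("platform migration"))  # 0.4: the starting estimate,
# before any adjustment for the specifics of the case in front of you
```

The discipline is in the record-keeping, not the arithmetic: without the full list of comparable past cases, the sample space, there is no base rate to compute.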

Further reading

On probability, combinations, and the sample space:

Struck by Lightning: The Curious World of Probabilities by Jeffrey Rosenthal. Rosenthal covers combinations, sample spaces, and basic probability through accessible examples that extend the dice argument into everyday contexts, making the mathematical discipline concrete for non-specialists.

The Drunkard’s Walk: How Randomness Rules Our Lives by Leonard Mlodinow. Mlodinow’s account of how probability intuition fails covers the same conceptual ground as the story — the gap between how outcomes feel and how they are distributed — with a wide range of illustrative examples beyond dice.

On decomposition, base rates, and structured probability estimation:

Superforecasting: The Art and Science of Prediction by Philip Tetlock and Dan Gardner. Tetlock’s account of how good forecasters break complex questions into components and assess each separately is the applied version of the story’s dice decomposition — the same discipline of enumeration, moved from games to real-world predictions.

Thinking in Bets: Making Smarter Decisions When You Don’t Have All the Facts by Annie Duke. Duke’s framework for making probability estimates explicit and examining their components is directly relevant to the organisational applications: the goal is to replace holistic gut-feel assessments with structured decomposition of the underlying combinations.

On risk assessment, outcome enumeration, and avoiding the equal-probability error:

Against the Gods: The Remarkable Story of Risk by Peter L. Bernstein. Bernstein’s history of probability covers the Cardano and Galileo contributions the story references, and explains why the shift from outcome counting to combination enumeration was such a significant intellectual advance.

Noise: A Flaw in Human Judgement by Daniel Kahneman, Olivier Sibony and Cass R. Sunstein. The chapters on risk assessment and structured decision-making cover how unstructured probability judgements systematically fail — partly for the same reason the dice intuition fails — and what analytical disciplines correct for this.

About the image

There are two stories in the book about why the number seven appears so frequently. The images for both are identical bar the colour schemes, which is my idea of a visual joke…

Illustration by Matt Ballantine, 2026