Random the Book

Random the Book: Matt Ballantine and Nick Drage's experiment in serendipity and chance.


How would you tell if a source of randomness was random?

Questions for you:

  • When you encounter a surprising pattern in data at work, what is your default assumption: that something caused it, or that it might simply be what randomness looks like?
  • How would you go about testing whether a process in your organisation that is supposed to be unbiased actually is?
  • Can you think of a situation where you treated a random fluctuation as a signal and acted on it? What happened?

Organisational applications:

Applying randomness testing logic to process audits: The story describes a battery of tests used to verify that a random number generator is actually producing unpredictable output, checking for uniform distribution, absence of patterns, and independence between successive values. The same logic applies when auditing any organisational process that is supposed to be unbiased.
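As an illustrative sketch only (the story doesn't specify which tests its battery uses), two of the checks described above — uniform distribution and independence between successive values — might look like this in Python. The bin count and the rough chi-square threshold in the comment are assumptions for the example:

```python
import random
from collections import Counter

def uniformity_chi_square(values, bins=10):
    """Chi-square statistic for how evenly values in [0, 1) fill the bins.
    For 10 bins (9 degrees of freedom), a statistic far above ~17 would be
    surprising for genuinely uniform data."""
    counts = Counter(min(int(v * bins), bins - 1) for v in values)
    expected = len(values) / bins
    return sum((counts.get(b, 0) - expected) ** 2 / expected for b in range(bins))

def lag1_correlation(values):
    """Correlation between each value and its successor; close to zero
    when successive draws are independent."""
    x, y = values[:-1], values[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

random.seed(42)
sample = [random.random() for _ in range(10_000)]
chi2 = uniformity_chi_square(sample)   # modest for a decent generator
corr = lag1_correlation(sample)        # near zero for independent draws
```

The same two functions would run unchanged over, say, a year of interview-panel assignments or customer allocations, once those are encoded as values in [0, 1).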

A meritocratic hiring process, a performance rating distribution, and a customer allocation system: each can be subjected to analogous tests. Are outcomes uniformly distributed where they should be? Are there suspicious runs or clusters? Do results from one period correlate with results from the next in ways that suggest a hidden variable? Treating process audits as randomness tests, rather than as checks on whether rules were followed, often surfaces problems that rule-compliance reviews miss.

The cost of acting on noise: The story notes that truly random sequences can fail randomness tests by producing clumps that look suspicious. The organisational equivalent is a run of good or bad results that looks meaningful but falls within the bounds of normal random variation.

The problem is that organisations rarely have a worked-out sense of what normal variation looks like in a given metric, so any notable run tends to trigger a causal explanation and an intervention. The intervention then becomes part of the next period’s data, contaminating the baseline and making the underlying trend harder to read. Building explicit confidence intervals around key metrics and treating results within those intervals as noise rather than signal is a more disciplined approach than reacting to every departure from expectations.
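One common way to make that discipline concrete is a control-limit band: estimate a mean and standard deviation from the metric's history and react only to values outside the band. A minimal sketch, with hypothetical weekly figures:

```python
import statistics

def control_limits(history, sigmas=3.0):
    """Mean ± k·sigma band estimated from a metric's history.
    Results inside the band are treated as ordinary variation."""
    mean = statistics.fmean(history)
    sd = statistics.stdev(history)
    return mean - sigmas * sd, mean + sigmas * sd

def flag_signals(history, new_values, sigmas=3.0):
    """Return (value, is_signal) pairs; only out-of-band values count as signals."""
    lo, hi = control_limits(history, sigmas)
    return [(v, not (lo <= v <= hi)) for v in new_values]

# Hypothetical weekly figures for some key metric.
history = [102, 98, 95, 104, 101, 97, 99, 103, 96, 100]
flags = flag_signals(history, [94, 107, 130])
# 94 and 107 fall inside the band (noise); 130 falls outside (a signal).
```

The width of the band (here three standard deviations, a common control-chart convention) is a judgment call; the point is that it is decided before the run of results appears, not after.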

Distinguishing genuine anomalies from expected clumpiness: The story’s central irony, that you can’t simply assume a sequence is non-random because it looks patterned, has a direct counterpart in fraud detection and quality control. The absence of expected randomness is often more informative than its presence: financial data that is suspiciously smooth, survey results that cluster too neatly, or audit samples that show too few anomalies can all indicate manipulation.

The story connects to the Benford’s Law page elsewhere in the book, where the natural distribution of leading digits in real-world data is used to detect fabricated figures. The general principle is that both too much pattern and too little pattern are informative; the question is always what genuine randomness in this particular context should look like, and whether what you observe is consistent with that.
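A minimal sketch of a leading-digit check in this spirit, using simulated data rather than real figures: Benford's Law expects digit d to lead with probability log10(1 + 1/d), so a chi-square distance from that distribution separates data spanning several orders of magnitude (roughly Benford-like) from flat, uniformly distributed figures, a common fabrication pattern:

```python
import math
import random
from collections import Counter

def leading_digit(x):
    """First significant digit of a positive number."""
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

def benford_chi_square(numbers):
    """Chi-square distance between observed leading-digit counts
    and the frequencies Benford's Law predicts."""
    counts = Counter(leading_digit(n) for n in numbers if n > 0)
    n = sum(counts.values())
    stat = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)
        stat += (counts.get(d, 0) - expected) ** 2 / expected
    return stat

random.seed(1)
# Log-uniform data spans several orders of magnitude: roughly Benford-like.
natural = [math.exp(random.uniform(0, 10)) for _ in range(5_000)]
# Uniform three-digit figures give each leading digit equal weight: far from Benford.
uniform = [random.uniform(100, 999) for _ in range(5_000)]
```

Here `benford_chi_square(uniform)` comes out dramatically larger than `benford_chi_square(natural)`: the suspiciously even spread of leading digits, not any visible pattern, is what gives the flat data away.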

Further reading

On statistical testing and the detection of randomness:

The Drunkard’s Walk: How Randomness Rules Our Lives by Leonard Mlodinow. Covers the statistical properties of random sequences, including why genuine randomness produces clusters and runs that consistently strike human observers as non-random, which is the core irony the story describes.

Randomness by Deborah J. Bennett. A concise history of how randomness has been understood and tested, including the development of formal statistical tests for randomness and the philosophical difficulties in defining what a random sequence actually is.

On fraud detection and the absence of expected randomness:

Forensic Analytics: Methods and Techniques for Forensic Accounting Investigations by Mark J. Nigrini. Nigrini developed many of the practical applications of Benford’s Law to fraud detection. The book covers how the expected statistical properties of genuine data, including its natural randomness, can be used to identify manipulation.

How to Lie with Statistics by Darrell Huff. An older text, but still the most accessible account of how statistical presentation can obscure or manufacture patterns, including the ways in which data that looks clean and orderly should sometimes be treated with more suspicion than data that looks messy.

On separating signal from noise in organisational data:

The Signal and the Noise: The Art and Science of Prediction by Nate Silver. Silver’s account of forecasting across multiple domains is substantially about the difficulty of distinguishing genuine signal from random noise, with practical examples from weather, economics, and sports.

Noise: A Flaw in Human Judgement by Daniel Kahneman, Olivier Sibony and Cass R. Sunstein. Directly relevant to the organisational applications: covers the extent to which apparent patterns in human judgement data, including performance ratings and hiring decisions, reflect noise rather than signal.

About the image

This is what a random number generator looks like. It’s strangely dull and feels like it should be a bit more Heath Robinson…

Photo montage by Matt Ballantine, 2026