Questions for you:
- Where in your organisation do fixed, predictable processes create vulnerabilities that a determined adversary — whether a competitor, a fraudster, or a bad actor — could map and exploit?
- Can you think of an audit, compliance check, or inspection regime that follows a predictable schedule, allowing those being monitored to prepare for it specifically rather than maintaining genuine compliance?
- Where does your organisation’s predictability serve legitimate purposes — building trust, enabling planning, meeting obligations — and where is it simply an unexamined habit?
Organisational applications:
Predictable thresholds as exploitable attack surfaces: The story’s central insight is that a fixed response threshold is not just a rule — it is information an adversary can use to determine how far they can probe before triggering consequences. This applies well beyond cybersecurity.
A procurement fraud detection system that always flags transactions above a specific value will be circumvented by splitting purchases below that threshold, a known pattern in public-sector fraud. An audit schedule that visits sites on a fixed annual cycle allows preparation that obscures underlying compliance. A performance management system that only escalates after three formal warnings provides a roadmap for gaming the process. In each case, the predictability of the response creates the vulnerability. Randomising the trigger, the frequency, or the threshold eliminates the adversary’s ability to operate within safely mapped limits.
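The contrast between a fixed and a randomised threshold can be sketched in a few lines. This is an illustrative toy, not a real fraud-detection system: the threshold values and the probability curve are invented for the example.

```python
import random

FIXED_THRESHOLD = 10_000  # a fixed flag value an adversary can map by probing

def flags_fixed(amount):
    """Fixed rule: anyone who learns the threshold can split a large
    purchase into sub-threshold chunks and evade it with certainty."""
    return amount > FIXED_THRESHOLD

def flags_randomised(amount, rng=random):
    """Randomised rule: each transaction is reviewed with a probability
    that rises with its value, so there is no safe ceiling below which
    a probe is guaranteed to pass."""
    review_probability = min(1.0, amount / 20_000)
    return rng.random() < review_probability

# An adversary splitting a 30,000 purchase into four 7,500 payments
# evades the fixed rule entirely...
split = [7_500] * 4
assert not any(flags_fixed(a) for a in split)

# ...but against the randomised rule each chunk still carries a 37.5%
# chance of review, so the odds of the whole scheme passing unflagged
# are low.
chance_all_pass = (1 - 7_500 / 20_000) ** 4
print(f"Chance all four split payments evade review: {chance_all_pass:.1%}")
# prints "Chance all four split payments evade review: 15.3%"
```

The point is not the particular probability curve but the structural change: under the fixed rule the adversary's evasion is deterministic, while under the randomised rule every probe carries risk, however small the amount.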
The human randomness problem: The story on the companion page notes that humans are poor random number generators — we avoid repetition, prefer certain numbers, and follow unconscious patterns. This matters practically for any organisation trying to implement genuinely randomised audit, monitoring, or inspection processes.

A manager who "randomly" selects which calls to review will not produce a random sample; the selection will be skewed by recent events, personal relationships, and availability. Operationally meaningful randomisation requires actual random selection mechanisms rather than human judgement, which is why organisations with serious compliance requirements use computer-generated random samples rather than discretionary spot checks.
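A computer-generated audit sample of the kind described above can be sketched with Python's standard library. The function name and the idea of logging the seed are illustrative choices, not a prescribed procedure: recording the seed makes the draw unpredictable in advance but reproducible afterwards, so the sample can be verified as genuinely random rather than discretionary.

```python
import random
import secrets

def audit_sample(population, k, seed=None):
    """Select k items uniformly at random from population.

    If no seed is supplied, one is drawn from the operating system's
    entropy source, so nobody being audited can predict the draw.
    Returning the seed alongside the sample lets the selection be
    replayed and checked later.
    """
    if seed is None:
        seed = secrets.randbits(64)
    rng = random.Random(seed)
    return seed, sorted(rng.sample(population, k))

# Draw 10 of 500 recorded calls for review.
calls = [f"CALL-{n:04d}" for n in range(1, 501)]
seed, reviewed = audit_sample(calls, k=10)

# The logged seed reproduces the identical sample on a later re-check.
_, replay = audit_sample(calls, k=10, seed=seed)
assert replay == reviewed
```

The design choice worth noting is the separation of unpredictability (the entropy-sourced seed) from verifiability (the logged seed): a discretionary spot check offers neither, while this gives both.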
When predictability is the right choice: The child development story elsewhere in the book makes the opposite argument: that unpredictable authority is harmful to children, who need consistency to develop appropriately. That tension is real and worth sitting with. The answer is not that unpredictability is always better but that its value depends entirely on the relationship and the purpose.
Randomised security responses work precisely because the adversary has no legitimate expectation of consistent treatment — their goal is to exploit, and removing their ability to plan is the aim. Relationships built on trust and mutual obligation — with employees, partners, customers, and children — work differently. Randomising responses in those contexts damages the foundation of the relationship rather than improving outcomes. The skill is in identifying the type of relationship you are in before deciding whether predictability or unpredictability better serves your purpose.
Further reading
On game theory, mixed strategies, and the value of unpredictability:
The Art of Strategy: A Game Theorist’s Guide to Success in Business and Life by Avinash Dixit and Barry Nalebuff. The most accessible account of game theory for non-specialists, covering mixed strategies — the formal game-theoretic basis for randomised responses — with clear explanations of when and why unpredictability produces better outcomes than consistent behaviour.
Thinking Strategically: The Competitive Edge in Business, Politics, and Everyday Life by Avinash Dixit and Barry Nalebuff. An earlier and somewhat more technical treatment of the same material, with direct coverage of randomised strategies in competitive contexts including sports, negotiation, and security.
On cybersecurity, adversarial thinking, and randomised defence:
The Art of Intrusion by Kevin Mitnick. Mitnick’s account of how social engineers and hackers probe and exploit predictable systems is the practitioner version of the story’s argument — a detailed account of how fixed thresholds and predictable responses are identified and worked around by determined adversaries.
Antifragile: Things That Gain from Disorder by Nassim Nicholas Taleb. Taleb’s argument for systems that benefit from volatility rather than merely tolerating it extends the unpredictability argument: a system that is genuinely randomised in its responses not only prevents exploitation but becomes harder to attack the more it is tested.
On the limits of unpredictability and when consistency is right:
Noise: A Flaw in Human Judgement by Daniel Kahneman, Olivier Sibony and Cass R. Sunstein. Kahneman and colleagues argue extensively for reducing unwanted variability in professional judgement — essentially arguing for more predictability in contexts where consistency is valuable. Reading this alongside the story makes the tension between beneficial and harmful unpredictability concrete.
Thinking in Bets: Making Smarter Decisions When You Don’t Have All the Facts by Annie Duke. Duke’s account of when to commit to a strategy and when to remain flexible is relevant to the question of when organisations should be predictably consistent and when they should introduce deliberate randomness.
About the image
A rabbit that I photographed outside its warren in Bushy Park in South West London.
Photo montage and photo by Matt Ballantine, 2026
