Questions for you:
- Think of something you believed worked — a diet, a study technique, a management approach — where the people who tried it or promoted it were self-selected. How confident are you that it actually worked, rather than that motivated people tried it?
- Have you ever been part of a pilot programme or trial at work and wondered whether your results were typical? What might have been different about you and your group compared with everyone else?
- Before randomised trials became standard, medicine was dominated by anecdote and authority. Where in your life do you still rely on those two things in situations where you could, in principle, test your assumptions with something closer to a randomised trial?
Organisational applications:
Selection bias in organisational evaluation: The story’s core problem — doctors unconsciously assigning healthier patients to the experimental treatment — maps directly onto how organisations evaluate interventions. Training programmes are typically offered to willing participants, which means they select for motivated employees who would likely have improved regardless. New management practices are piloted in teams led by enthusiastic early adopters, whose results reflect their enthusiasm as much as the practice.
Mentoring programmes attract people who already seek out development. In each case, the people who experience the intervention are systematically different from those who do not, in ways that make the outcomes look more positive than they would be in a general rollout. The RCT insight is not that you must randomise everything, but that you must account for selection effects before drawing conclusions.
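The selection effect described above can be made concrete with a small simulation. This is an illustrative sketch, not anything from the text: it assumes a workforce where individual "motivation" improves outcomes on its own, and an intervention with zero true effect. The self-selected comparison still shows a large apparent benefit; randomisation makes it vanish.

```python
import random

random.seed(42)

# Illustrative assumption: each employee has a latent "motivation" score
# that improves outcomes by itself. The programme's true effect is zero.
employees = [random.gauss(0, 1) for _ in range(10_000)]

def outcome(motivation: float, treated: bool) -> float:
    true_effect = 0.0  # the intervention does nothing in this model
    return motivation + (true_effect if treated else 0.0) + random.gauss(0, 1)

# Self-selected pilot: mostly high-motivation people opt in.
volunteers = [m for m in employees if m > 0.5]
others = [m for m in employees if m <= 0.5]
biased_gap = (sum(outcome(m, True) for m in volunteers) / len(volunteers)
              - sum(outcome(m, False) for m in others) / len(others))
print(f"self-selected comparison: {biased_gap:+.2f}")  # large, but spurious

# Randomised assignment: motivation is balanced across the two groups.
random.shuffle(employees)
treated, control = employees[:5000], employees[5000:]
fair_gap = (sum(outcome(m, True) for m in treated) / len(treated)
            - sum(outcome(m, False) for m in control) / len(control))
print(f"randomised comparison:    {fair_gap:+.2f}")  # close to zero
```

The spurious gap here comes entirely from who opted in, which is exactly the pattern the training, pilot, and mentoring examples share.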
A/B testing as applied RCT logic: The most direct organisational application of RCT methodology is A/B testing — randomly assigning users, customers, or employees to different conditions and measuring differential outcomes. The story’s principle is exactly the logic behind A/B testing: random assignment distributes all unmeasurable differences equally across groups, so any remaining difference in outcomes can be attributed to the intervention rather than to pre-existing characteristics of who received it.
Organisations that rigorously use A/B testing in digital product development but revert to anecdote and authority when evaluating internal processes are applying a double standard with no principled basis. The same randomisation logic that improves product decisions can improve HR, operational, and strategic decisions.
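As a sketch of how random assignment works in practice, here is one common pattern (the experiment name and user ids are hypothetical, not from the text): hashing a stable user id gives deterministic, effectively random assignment without storing a lookup table, and over many users the split approaches 50/50, so unmeasured differences are distributed roughly equally across the groups.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "onboarding-v2") -> str:
    """Deterministically assign a user to variant A or B.

    Hashing the (experiment, user) pair means the same user always sees
    the same variant, and different experiments randomise independently.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "B" if int(digest, 16) % 2 else "A"

# The same user always lands in the same group.
assert assign_variant("user-123") == assign_variant("user-123")

# Across many users the split is close to even, which is what balances
# pre-existing characteristics across the two conditions.
groups = [assign_variant(f"user-{i}") for i in range(10_000)]
share_b = groups.count("B") / len(groups)
print(f"share assigned to B: {share_b:.3f}")
```

The comparison of outcomes between the two groups then follows the same logic as the clinical trial: any systematic difference can be attributed to the variant rather than to who happened to receive it.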
The placebo effect in organisational interventions: The story’s point — that people improve simply because they believe they are receiving treatment, and that randomisation equalises this across groups — has a direct organisational parallel. When a team is told it is piloting a new approach, performance often improves, partly because the team knows it is being observed.
This is the Hawthorne effect, and it operates in organisational settings as reliably as the placebo effect operates in clinical ones. Evaluations that do not account for this will attribute improvement to the intervention when some or all of it would have occurred simply because a group was being paid attention to. Building comparison conditions that receive equivalent attention, without the specific intervention being evaluated, is the practical corrective.
Further reading
On the history and methodology of randomised controlled trials:
The Drunkard’s Walk: How Randomness Rules Our Lives by Leonard Mlodinow. Mlodinow covers the development of the RCT and its significance for separating genuine effects from random noise, with accessible explanation of why non-random comparison groups are so systematically misleading.
The Beautiful Cure: The New Science of Human Health by Daniel M. Davis. Davis’s account of immunology research includes coverage of how clinical trials work and why randomisation is essential to the validity of the conclusions they produce, told through the specific lens of immune system research.
On selection bias, the Hawthorne effect, and evaluation design:
Thinking, Fast and Slow by Daniel Kahneman. Kahneman’s treatment of regression to the mean and the systematic errors in evaluating interventions without proper control conditions covers the organisational version of the placebo problem directly.
Noise: A Flaw in Human Judgement by Daniel Kahneman, Olivier Sibony and Cass R. Sunstein. The chapters on evaluation and measurement cover how selection effects and attention effects systematically distort organisational assessments of whether interventions are working.
On A/B testing, experimentation, and organisational learning:
Experimentation Works: The Surprising Power of Business Experiments by Stefan Thomke. Thomke’s account of how rigorous experimentation transforms organisational decision-making is the most direct available treatment of RCT logic applied to business contexts, covering both the methodology and the cultural conditions that make it work.
The Lean Startup by Eric Ries. Ries’s framework for validated learning through structured experiments draws on the same logic as the RCT: the goal is to design tests that can distinguish genuine effects from confounding factors, rather than to accumulate anecdotal evidence that feels compelling but proves nothing.
About the image
A packet of paracetamol. This image is loosely inspired by the sleeve of the New Order single Fine Time.
Photo montage and photo by Matt Ballantine, 2026
