Questions for you:
- When you assign probabilities to outcomes (70% chance of success, 30% risk of failure), do you recognise these numbers represent your current state of knowledge rather than objective properties of the world?
- In disagreements about “how likely” something is, do you frame these as differing beliefs based on different information, or do you assume one person must be objectively wrong?
- When new information arrives, do you systematically update your probability assessments (Bayesian updating) or do you anchor on your initial estimates regardless of contradictory evidence?
- Looking at past predictions where you assigned confidence levels, if you were consistently incorrect, did you recognise that you had miscalibrated? How would you know if you had? Can you make a habit of tracking your own predictions in future?
Questions for your organisation:
Bayesian thinking and belief updating: Probability isn’t cosmic truth – it’s a mental model reflecting current knowledge. Someone flipping a coin creates different probabilities for observers (50/50 if you haven’t seen it) versus the flipper (100% or 0% if they looked). This Bayesian approach treats probability as quantified ignorance – your best guess given what you know now, updated as information arrives. Build organisational cultures where probability estimates are explicit and expected to change with new evidence, and where those estimates are then rated against actual outcomes. Fields as diverse as medicine and professional gambling benefit from working through a cycle of assessing what you know, making best guesses, and updating continuously.
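That assess–guess–update cycle can be sketched in a few lines of code. This is a minimal illustration of Bayes’ theorem, not anything from the text itself: the scenario, the `bayes_update` function, and all the numbers in it are hypothetical, chosen only to show how a belief shifts as evidence arrives.

```python
# A minimal sketch of Bayesian updating: start with a prior belief,
# then revise it as each new piece of evidence arrives.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior P(hypothesis | evidence) via Bayes' theorem."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Hypothetical scenario: we think a project has a 50% chance of success.
# A positive pilot result is 80% likely if the project is sound,
# but only 30% likely if it is not.
belief = 0.5
belief = bayes_update(belief, 0.8, 0.3)   # first positive pilot: ~0.73
belief = bayes_update(belief, 0.8, 0.3)   # second positive pilot: ~0.88
print(round(belief, 3))
```

The point is the shape of the process, not the numbers: the estimate is never “done”, it simply reflects the evidence seen so far.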
Information asymmetry and decision-making: Two people facing identical situations can rationally assign different probabilities based on different information. A product manager with customer insight assigns a higher probability to feature success than an executive who sees only aggregate data. A security analyst, aware of specific threats, assesses risks differently from an operations team that only reacts to successful attacks. Don’t assume disagreement means someone is irrational – different probability assessments often reflect legitimate information asymmetries. If your organisation is explicit about these differing insights, you can create mechanisms for information sharing, make underlying assumptions explicit, and recognise that reconciling probability estimates requires sharing underlying knowledge, not just arguing about numbers.
Probabilistic communication and interpretation: When someone says “70% chance of rain,” that number isn’t carved into reality – it reflects their best guess based on models, observations, and judgement. Build a shared understanding of probabilistic language within your organisation: what does “likely” mean numerically? The CIA’s “Words of Estimative Probability” standardised this; use it as a starting point. Without calibration, “high risk” means vastly different things to different people.
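Calibration only becomes real once predictions are logged and scored. One standard way to do that is the Brier score (mean squared error between stated probabilities and what actually happened); the prediction log below is entirely made up for illustration.

```python
# A sketch of calibration tracking: record each forecast as a
# (probability, outcome) pair and score it with the Brier score.
# Lower is better; always guessing 50% scores 0.25.

def brier_score(forecasts):
    """Mean squared error between probabilities and outcomes (0 or 1)."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical prediction log: (probability assigned, what happened)
log = [
    (0.9, 1),  # "very likely" - happened
    (0.7, 1),  # "likely" - happened
    (0.7, 0),  # "likely" - didn't happen
    (0.2, 0),  # "unlikely" - didn't happen
]
print(round(brier_score(log), 4))
```

A habit this simple – a shared spreadsheet would do – gives “likely” and “high risk” a track record rather than leaving them as vibes.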
Further reading
Bayesian thinking, subjective probability, and belief updating
- Thinking in Bets by Annie Duke – poker champion’s framework for probabilistic decision-making treating all predictions as bets reflecting current knowledge, emphasising importance of updating beliefs as information arrives rather than defending initial positions.
- The Theory That Would Not Die by Sharon Bertsch McGrayne – history of Bayesian statistics showing how treating probability as degree of belief based on evidence revolutionised decision-making from codebreaking to medical diagnosis.
- Superforecasting by Philip E. Tetlock and Dan Gardner – research on best forecasters showing they think probabilistically, update beliefs systematically with new evidence, and calibrate predictions against outcomes – demonstrating probability as trainable skill.
Calibration, probabilistic judgement, and forecasting
- Expert Political Judgment by Philip E. Tetlock – demonstrates experts are poorly calibrated probabilistic forecasters, overconfident and resistant to updating beliefs, showing importance of tracking predictions against outcomes.
- How to Measure Anything by Douglas W. Hubbard – practical guide to quantifying uncertainty using probability distributions, emphasising calibration training and Bayesian updating for business decision-making.
- The Signal and the Noise by Nate Silver – examines when probabilistic thinking improves predictions (weather, poker) versus when it fails (earthquakes, economics), showing importance of calibration and honest uncertainty assessment.
Expected utility, risk perception, and decision theory
- Against the Gods by Peter L. Bernstein – history of probability and risk management including Bernoulli’s expected utility theory showing probability assessment depends on personal circumstances – same random outcome feels different depending on what you stand to lose.
- Thinking, Fast and Slow by Daniel Kahneman – discusses probability judgement biases showing people systematically misassign probabilities through representativeness heuristic, availability bias, and overconfidence, arguing for deliberate probabilistic reasoning.
- The Black Swan by Nassim Nicholas Taleb – argues people systematically underestimate probability of rare events and overestimate ability to predict, showing limitations of probabilistic thinking for extreme outcomes.
About the image
It’s getting increasingly challenging to find coins lying around the house, because we don’t use them very often at all these days. This is a two pence piece, ready for tossing (or spending).
Photo montage and photo by Matt Ballantine, 2026
