Understanding Judgment Under Uncertainty
Every day you make decisions without having all the facts. Judgment under uncertainty is the process of estimating probabilities and outcomes when information is incomplete or ambiguous. This isn't just about trivial choices; it shapes financial investments, medical diagnoses, weather forecasts, and political decisions.
Because we rarely have perfect information, our brains rely on shortcuts and rough estimates. Sometimes these work well. Other times, they lead us systematically astray. The rest of this section covers the main shortcuts (heuristics), how subjective probability works, and the specific biases that distort our thinking.

Role of Heuristics
Heuristics are mental shortcuts or rules of thumb that let you make judgments quickly without analyzing every piece of available data. They reduce cognitive load and speed up decision-making, but they can also produce predictable errors.
Three heuristics dominate the research literature:
- Availability heuristic — You judge how likely something is based on how easily examples come to mind. Plane crashes get heavy news coverage, so people tend to overestimate the risk of flying compared to driving, even though driving is statistically far more dangerous. The ease of recall substitutes for actual frequency data.
- Representativeness heuristic — You assess how likely something is by comparing it to a mental prototype. If someone is described as quiet, organized, and detail-oriented, you might guess "accountant" over "salesperson" because the description represents the accountant stereotype. The problem is that this ignores how many salespeople vs. accountants actually exist (base rates).
- Anchoring and adjustment heuristic — An initial piece of information (the "anchor") disproportionately influences your final estimate. In salary negotiations, the first number put on the table tends to pull the final agreement toward it, even if that number was arbitrary. People adjust away from the anchor, but usually not enough.
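The insufficient-adjustment pattern can be sketched numerically. This is a toy model, not an empirical result: the anchors, the true value, and the 50% adjustment rate are illustrative assumptions chosen to show how different arbitrary anchors pull final estimates in different directions.

```python
def anchored_estimate(anchor, true_value, adjustment_rate=0.5):
    """Toy model: the estimate starts at the anchor and moves toward
    the true value, but only part of the way (insufficient adjustment)."""
    return anchor + adjustment_rate * (true_value - anchor)

TRUE_VALUE = 100  # illustrative "correct" answer

# Two arbitrary anchors, one low and one high
for anchor in (20, 180):
    estimate = anchored_estimate(anchor, TRUE_VALUE)
    print(f"anchor {anchor} -> estimate {estimate}")
# Both estimates land between the anchor and 100: the final
# judgment is pulled toward whichever arbitrary number came first.
```

With a low anchor of 20 the estimate lands at 60; with a high anchor of 180 it lands at 140, even though the underlying question is the same.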

Subjective Probability in Judgment
Subjective probability is your personal belief about how likely an event is, shaped by your knowledge, experience, and intuition. It contrasts with objective probability, which is calculated from statistical data or formal models.
- Subjective probability varies between individuals. Two doctors might estimate different likelihoods for the same diagnosis based on their clinical experience.
- Objective probability stays the same regardless of who calculates it. A fair coin has a 0.5 chance of heads no matter what you believe.
Subjective probability directly shapes how you perceive risk and make choices. One formal framework for working with it is Bayesian reasoning, which describes how you should update your beliefs when you receive new evidence:

P(A | B) = P(B | A) × P(A) / P(B)

Here, P(A | B) is the updated probability of A given that B occurred, P(A) is your prior belief, and P(B | A) is how likely the new evidence is if A were true. In practice, people often update their beliefs less than Bayes' theorem prescribes, or they ignore prior probabilities altogether.
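A Bayesian update is straightforward to compute directly. In this sketch the specific probabilities (a prior of 0.30, likelihood 0.80, and overall evidence probability 0.50) are illustrative numbers, not values from the text:

```python
def bayes_update(prior, likelihood, evidence_prob):
    """Posterior P(A|B) = P(B|A) * P(A) / P(B)."""
    return likelihood * prior / evidence_prob

# Illustrative: prior belief P(A) = 0.30, P(B|A) = 0.80, P(B) = 0.50
posterior = bayes_update(prior=0.30, likelihood=0.80, evidence_prob=0.50)
print(round(posterior, 2))  # 0.48 -- the evidence raised the belief from 0.30
```

A perfectly rational agent would move from 0.30 to 0.48 here; the research described below suggests real people typically move less than that, or skip the prior entirely.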
Biases in Uncertain Decisions
Heuristics become problematic when they produce consistent, identifiable biases. Here are the major ones you need to know:
- Confirmation bias — Seeking out information that supports what you already believe while ignoring contradictory evidence. A person who thinks a medication works may pay attention only to days they felt better and dismiss days they didn't.
- Overconfidence bias — Overestimating your own knowledge or abilities. Studies show that when people say they're "99% sure" of an answer, they're wrong about 20–30% of the time. Entrepreneurs frequently underestimate the risk of business failure because of this bias.
- Framing effect — The way information is presented changes your decision, even when the underlying facts are identical. Telling patients a surgery has a "90% survival rate" leads to more consent than saying it has a "10% mortality rate." Same data, different choice.
- Conjunction fallacy — Judging a specific combination of events as more probable than one of its component events alone. In Tversky and Kahneman's classic "Linda problem," participants rated "Linda is a bank teller and active in the feminist movement" as more likely than "Linda is a bank teller." That violates a basic rule of probability: a subset can never be more probable than the set containing it.
- Base rate neglect — Ignoring general statistical information in favor of specific case details. If a disease affects 1 in 10,000 people and a test is 99% accurate, a positive result still means the actual chance of having the disease is only about 1%. People routinely overlook that base rate.
- Hindsight bias — After learning an outcome, believing you "knew it all along." Following a stock market crash, people claim the signs were obvious, even though few predicted it beforehand.
- Availability cascade — A self-reinforcing cycle where a belief gains credibility simply because it's repeated. The more people talk about a risk on social media, the more real it seems, which triggers even more discussion, regardless of the actual evidence.
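The base rate figure in the disease-test example above can be verified with Bayes' theorem. The 1-in-10,000 prevalence and 99% accuracy come from the text; treating "99% accurate" as both the sensitivity and the true-negative rate (so a 1% false-positive rate) is a simplifying assumption:

```python
def posterior_given_positive(prevalence, sensitivity, false_positive_rate):
    """P(disease | positive test) via Bayes' theorem."""
    # Total probability of a positive result: true positives + false positives
    p_positive = (sensitivity * prevalence
                  + false_positive_rate * (1 - prevalence))
    return sensitivity * prevalence / p_positive

p = posterior_given_positive(prevalence=1 / 10_000,
                             sensitivity=0.99,
                             false_positive_rate=0.01)
print(f"{p:.1%}")  # 1.0% -- far below the 99% many people intuitively guess
```

The false positives among the 9,999-in-10,000 healthy people swamp the true positives among the 1-in-10,000 sick ones, which is exactly the base rate that gets neglected.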