Prosecutor's Fallacy

aka Conditional Probability Fallacy · Transposed Conditional · Inverse Fallacy

Confusing the probability of the evidence given innocence with the probability of innocence given the evidence.

WHAT IT IS

The glitch, explained plainly.

Imagine you know that almost all dogs wag their tails when happy. You see a creature wagging its tail in the bushes. 'It must be a dog!' you say. But lots of other animals wag their tails too, and there are way more non-dogs in the world than dogs. Just because happy dogs almost always wag doesn't mean that tail-wagging almost always means dog. Mixing up those two things is the Prosecutor's Fallacy.

The Prosecutor's Fallacy occurs when someone conflates two fundamentally different conditional probabilities: the probability of observing certain evidence assuming a hypothesis is true, and the probability that the hypothesis is true given that the evidence has been observed. In legal settings, this manifests as equating the rarity of a forensic match among innocent people with the probability that a matching defendant is guilty. The fallacy ignores the prior probability (base rate) of the hypothesis being true before the evidence was introduced, which can drastically change the correct conclusion. Although named for its prevalence in prosecution arguments, the error appears across medicine, epidemiology, machine learning, and everyday interpersonal reasoning whenever conditional probabilities are transposed without applying Bayes' theorem.
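
The repair is to invert with Bayes' theorem, weighting the evidence by the prior. A minimal sketch in Python; the numbers here are purely illustrative, not from any real case:

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """P(H | E) via Bayes' theorem, from P(E | H), P(E | not H), and the prior P(H)."""
    joint_h = p_evidence_given_h * prior
    joint_not_h = p_evidence_given_not_h * (1 - prior)
    return joint_h / (joint_h + joint_not_h)

# Hypothetical case: the evidence always appears if the hypothesis is true,
# appears 1 time in 500,000 otherwise, and the prior pool of plausible
# suspects is 1,000,000 people (so the prior is 1 in 1,000,000).
print(posterior(prior=1 / 1_000_000,
                p_evidence_given_h=1.0,
                p_evidence_given_not_h=1 / 500_000))
# ~0.33: about a 1-in-3 chance the hypothesis is true, a far cry from the
# near-certainty the transposed reading ("1 in 500,000 chance of innocence") implies.
```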

SOUND FAMILIAR?

Where it shows up.

  1. A forensic analyst testifies that the odds of a random person matching the DNA found at a crime scene are 1 in 500,000. The prosecutor tells the jury this means there's only a 1-in-500,000 chance the defendant is innocent. The jury convicts based primarily on this statistic, without considering how many people in the city could also match (a worked count follows this list).
  2. A company's fraud detection algorithm flags an employee's expense report because only 0.1% of legitimate reports trigger this particular pattern. The compliance officer concludes there's a 99.9% chance the report is fraudulent, without considering that thousands of legitimate reports are processed monthly and many could coincidentally trigger the flag.
  3. A doctor tells a patient that a screening test for a rare genetic condition is 99% accurate and their result came back positive. The patient is devastated, believing they almost certainly have the condition. The doctor fails to mention that because the condition affects only 1 in 10,000 people, the vast majority of positive results in the general population are actually false positives.
  4. An ecologist studying coral reef collapses notices that all 15 reefs that collapsed in the past decade showed a particular chemical signature beforehand. She publishes a paper claiming this signature is a reliable early warning signal, not accounting for the fact that she only selected reefs that had already collapsed — many reefs with the same signature may have remained healthy and were never studied.
  5. During a debate about hiring practices, a manager argues that because 80% of employees who were eventually terminated had gaps in their résumés, any candidate with a résumé gap is highly likely to underperform. A colleague points out that the vast majority of employees with résumé gaps are actually successful, and the manager is confusing the direction of the statistic.
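
A back-of-the-envelope count for the first scenario; the city size is a hypothetical chosen for illustration:

```python
# Hypothetical: a metropolitan area of 5,000,000 people, and a DNA profile
# that a random innocent person matches with probability 1 in 500,000.
city_population = 5_000_000
p_random_match = 1 / 500_000

expected_innocent_matches = city_population * p_random_match
print(expected_innocent_matches)  # 10.0

# About 10 innocent people are expected to match. With no other evidence,
# a matching defendant is one of roughly 11 candidates, so the probability
# of guilt given the match is nearer 1 in 11 than 499,999 in 500,000.
```
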
IN DIFFERENT DOMAINS

Where it shows up at work.

The same glitch looks different depending on the terrain. Finance, medicine, a relationship, a team — same mechanism, different costume.

Finance & investing

Fraud detection systems flag transactions based on patterns that match known fraud, but because fraudulent transactions are rare compared to legitimate ones, the vast majority of flagged transactions are false positives. Investigators who treat every flag as near-certain fraud waste resources and may freeze innocent accounts, confusing the probability of matching a fraud pattern given innocence with the probability of fraud given a match.

Medicine & diagnosis

Clinicians and patients frequently misinterpret positive screening results for rare conditions, assuming a positive test means near-certain disease. Because the base rate of many screened conditions is very low, even highly accurate tests produce far more false positives than true positives in the general population, leading to unnecessary anxiety, invasive follow-up procedures, and overtreatment.

Education & grading

When standardized tests identify students as 'gifted' or 'at-risk' based on score thresholds, educators sometimes treat the classification as definitive. They confuse the test's sensitivity (how well it detects the trait if present) with the predictive value (how likely the trait is present given the score), leading to misplacement of students, especially in large or diverse populations where base rates vary.

Relationships

People invert conditional probabilities in interpreting social signals — for example, assuming that because distant behavior is common when someone is upset, any distant behavior must mean that person is upset. This ignores the many other explanations for the behavior and the low base rate of actual conflict, creating unnecessary anxiety and accusations.

Tech & product

Machine learning classifiers are often evaluated by accuracy or recall, but when deployed to detect rare events (spam, malware, anomalous behavior), even small false-positive rates generate floods of false alerts. Teams that focus on the model's accuracy rate without considering the base rate of the target event can grossly overestimate the reliability of positive predictions, leading to alert fatigue and wasted engineering effort.
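
A rough sketch of the alert arithmetic, using made-up traffic numbers (one million events per day, 0.1% genuinely malicious):

```python
# Hypothetical daily traffic for a rare-event detector.
events_per_day = 1_000_000
base_rate = 0.001            # fraction of events that are actually malicious
recall = 0.99                # P(flagged | malicious)
false_positive_rate = 0.01   # P(flagged | benign)

true_alerts = events_per_day * base_rate * recall                      # 990
false_alerts = events_per_day * (1 - base_rate) * false_positive_rate  # 9,990

precision = true_alerts / (true_alerts + false_alerts)
print(f"{true_alerts:.0f} true alerts, {false_alerts:.0f} false alerts, "
      f"precision = {precision:.1%}")
# ~9% of alerts are real: ten false alarms for every genuine hit, from a
# model that looks excellent by recall and accuracy alone.
```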

Workplace & hiring

HR analytics that flag employees as flight risks based on behavioral patterns may confuse the probability of matching the pattern given an employee stays with the probability of leaving given a match. Because most employees stay, flagging systems can overwhelm managers with false alarms, damaging trust and wasting retention resources.

Politics & media

Media coverage of rare but dramatic events (terrorism, mass shootings) can lead the public to commit this fallacy: because nearly all such events involve a certain profile, people assume that anyone matching the profile is dangerous, ignoring that the overwhelming majority of people fitting the profile are harmless. This drives discriminatory profiling policies and disproportionate fear.

HOW TO SPOT IT

Ask yourself…

  • Am I treating the probability of the evidence under one explanation as if it were the probability of that explanation being true?
  • Have I considered the base rate — how common or rare is the thing I'm trying to detect in the overall population?
  • Am I confusing 'most X have feature Y' with 'most things with feature Y are X'?
HOW TO DEFEND AGAINST IT

The playbook.

  • Always ask: 'What is the base rate?' Before interpreting any match, flag, or positive result, find out how common the target condition is in the relevant population.
  • Practice inverting the question: When someone says 'the probability of X given Y is Z,' explicitly ask 'but what is the probability of Y given X?' and verify they are not the same.
  • Use natural frequencies instead of percentages: Instead of '99% accurate test,' think '1 out of 100 healthy people will test positive, and only 1 out of 10,000 people has this disease.' Then count: out of 10,000 people tested, how many positives are true? (The sketch after this list runs that count.)
  • Draw a 2x2 contingency table or use a frequency tree to visualize true positives, false positives, true negatives, and false negatives before drawing conclusions.
  • When presented with a single statistic in a high-stakes decision, demand the complementary statistic: what are the odds under the alternative hypothesis?
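
Running the natural-frequency count from the third bullet, assuming for simplicity that the test also catches the one true case:

```python
# Cohort of 10,000 people; prevalence 1 in 10,000; 1 in 100 healthy people
# tests positive anyway (the bullet's reading of a "99% accurate" test).
cohort = 10_000
sick = cohort // 10_000              # 1 person actually has the condition
healthy = cohort - sick              # 9,999 people do not

true_positives = sick                # assume the test catches the true case
false_positives = healthy // 100     # 99 healthy people still test positive

p_sick_given_positive = true_positives / (true_positives + false_positives)
print(f"{true_positives} true positive vs {false_positives} false positives; "
      f"P(sick | positive) = {p_sick_given_positive:.0%}")  # 1%
```

The same counts fill the 2x2 table from the next bullet: 1 true positive, 99 false positives, 9,900 true negatives, and (under these assumptions) 0 false negatives.
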
FAMOUS CASES

In history.

  • Sally Clark (1999, UK): A mother was wrongfully convicted of murdering her two infants after an expert witness testified the odds of two SIDS deaths in one family were 1 in 73 million, which was misinterpreted as the probability of her innocence. Her conviction was overturned in 2003.
  • Lucia de Berk (2003, Netherlands): A nurse was convicted of multiple murders based on a statistical calculation suggesting a 1 in 342 million probability that her shifts would coincide with so many patient deaths by chance. Her conviction was overturned in 2010 after the statistical reasoning was discredited.
  • O.J. Simpson trial (1995, USA): Both prosecution and defense engaged in variants of the fallacy — the defense argued that since only 1 in 2500 abusive husbands murder their wives, Simpson's history of abuse was irrelevant, ignoring the conditional probability given that the wife had already been murdered.
  • Barry George trial (2001, UK): George was convicted of murdering TV presenter Jill Dando partly based on firearm discharge residue found in his pocket. The prosecution argued the improbability of innocent contamination without comparing it to the probability under the guilty hypothesis. His conviction was overturned in 2007.
WHERE IT COMES FROM
Academic origin

William C. Thompson and Edward L. Schumann coined the term in their 1987 paper 'Interpretation of Statistical Evidence in Criminal Trials: The Prosecutor's Fallacy and the Defense Attorney's Fallacy,' published in Law and Human Behavior.

Evolutionary origin

In small ancestral groups where threats had high base rates, the difference between 'if predator, then tracks' and 'if tracks, then predator' was often negligible: with a tiny relevant population and a common threat, the two conditional probabilities nearly coincide. The brain evolved to track simple covariation rather than to perform formal probability inversion, which was rarely needed when base rates were known intuitively through direct experience.

IN AI SYSTEMS

How the machines inherit it.

Machine learning classifiers trained to detect rare events (fraud, disease, security threats) are highly susceptible to this fallacy when their outputs are interpreted. A model with 99% accuracy detecting a 0.01% base-rate event will produce overwhelmingly more false positives than true positives, yet operators routinely treat positive predictions as near-certain. And when models are evaluated retrospectively on datasets where the target event has already been selected for, the resulting performance metrics can be badly inflated: a direct analog of the coral reef example above, in which only reefs that had already collapsed were studied before claiming predictive power.
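
Working through that paragraph's own numbers, and assuming '99% accuracy' translates to a 1% false-positive rate with perfect recall:

```python
base_rate = 0.0001          # 0.01% of events are true targets
false_positive_rate = 0.01  # the 1% of negatives a "99% accurate" model flags

n = 1_000_000                                          # events scored
true_pos = n * base_rate                               # 100 real events
false_pos = n * (1 - base_rate) * false_positive_rate  # ~9,999 false alarms

print(f"precision = {true_pos / (true_pos + false_pos):.2%}")  # ~0.99%
# Roughly ninety-nine out of every hundred positive predictions are wrong.
```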

