Subadditivity Effect

aka Unpacking Effect · Subadditive Probability Judgment

Judging the whole as less likely than the sum of its parts — the itemized pieces add up to more than the whole they compose, sometimes to more than 100%.

WHAT IT IS

The glitch, explained plainly.

Imagine someone asks you how many toys you have. You say 'maybe 20.' But then they ask, 'How many action figures? How many stuffed animals? How many Legos? How many puzzles?' When you count each type separately, you end up saying numbers that add to 35. The pieces somehow feel like more than the whole, just because you thought about each one by itself.

When people estimate the likelihood of a broad category (e.g., 'dying from natural causes'), they tend to assign it a lower probability than the combined probabilities they would give to its specific subcategories (e.g., cancer, heart disease, stroke, other natural causes) when those subcategories are listed explicitly. This violates basic probability axioms, since the whole must logically equal the sum of its mutually exclusive, exhaustive parts. The effect is driven by the fact that explicitly listing subcategories—'unpacking' them—increases their psychological salience and makes each feel more concrete and imaginable, thereby inflating their individual probability estimates. The effect is robust across domains and has been replicated consistently in studies of health risks, financial forecasts, legal judgments, and everyday frequency estimates.
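The axiom violation is easy to see with numbers. A minimal sketch, using the 58% / 73% figures from the cause-of-death study (the three subcategory values are illustrative, chosen only so they sum to 73%):

```python
# For mutually exclusive, exhaustive parts A1..An of an event A,
# probability theory requires P(A) = P(A1) + ... + P(An).
# Typical unpacked judgments break this constraint.

packed = 0.58                    # judged P("death from natural causes")
unpacked = [0.18, 0.22, 0.33]    # judged P(cancer), P(heart attack), P(other)

# Subadditivity: the unpacked estimates overshoot the packed whole.
assert sum(unpacked) > packed
print(f"unpacked sum {sum(unpacked):.2f} exceeds packed {packed:.2f}")
```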

SOUND FAMILIAR?

Where it shows up.

  1. A risk analyst asks a team to estimate the probability that their new product launch will fail. The team says 15%. The analyst then asks them to separately estimate the probability of failure due to manufacturing defects, marketing missteps, supply chain disruptions, and regulatory hurdles. The individual estimates sum to 42%. The team doesn't revise either set of numbers, accepting both as reasonable.
  2. An insurance salesperson presents a customer with two policies: one covering 'travel problems' for $80 and another itemizing coverage for lost luggage, flight cancellation, medical emergencies abroad, and theft for $140. The customer chooses the more expensive itemized policy, feeling it addresses more risk, even though both policies cover the exact same set of events.
  3. A prosecutor, during closing arguments, walks the jury through three specific scenarios of how the defendant could have committed the crime. The jurors rate the likelihood of guilt higher than a control group that was simply told the defendant committed the crime in some manner. Both groups saw the same evidence.
  4. A project manager asks her team to estimate how long the full project will take. They say 6 months. She then asks them to estimate durations for each of the five major phases individually. The phase estimates sum to 9.5 months, but no one notices the inconsistency because each phase estimate felt carefully reasoned.
  5. A charity fundraiser tests two donation appeals. One says 'Help fight disease in developing nations.' The other says 'Help fight malaria, tuberculosis, cholera, and other diseases in developing nations.' Donors rate the second cause as more urgent and worthy of larger contributions, even though the first description encompasses the exact same scope of work.
IN DIFFERENT DOMAINS

Where it shows up at work.

The same glitch looks different depending on the terrain. Finance, medicine, a relationship, a team — same mechanism, different costume.

Finance & investing

Investors and analysts tend to assign higher cumulative risk to a portfolio when individual risk factors (interest rate changes, currency fluctuation, sector downturns, regulatory changes) are listed separately than when asked about 'overall market risk.' This leads to over-hedging or excessive diversification in response to itemized risks that seem larger in aggregate than the packed whole.

Medicine & diagnosis

Patients asked about the likelihood of 'side effects' from a medication give lower estimates than patients who are presented with a list of specific side effects (nausea, headache, dizziness, fatigue). This can lead to inflated risk perception and medication non-adherence when side effects are itemized in detail on packaging or during informed consent.

Education & grading

Students estimating the probability of failing a course give a lower figure than when asked separately about failing due to poor exam scores, missed assignments, low participation, and a bad final project. This can lead to underestimation of overall academic risk when threats are considered holistically, or panic when they are itemized.

Relationships

When someone considers the general question 'Will this relationship end?' they estimate low probability. But when they separately consider incompatible life goals, communication problems, financial stress, and family conflicts, the itemized probabilities sum to a much higher figure, potentially amplifying anxiety about the relationship.

Tech & product

Product teams listing specific failure modes (server crash, data corruption, API timeout, authentication error) during risk reviews produce inflated total risk estimates compared to assessing 'system failure' as a single category. Conversely, presenting users with a single 'security' label feels less comprehensive than listing 'encryption, two-factor authentication, fraud detection, and privacy controls,' influencing perceived product value.

Workplace & hiring

During performance reviews, listing specific shortcomings (missed deadlines, poor communication, low initiative, weak technical skills) makes an employee's overall performance seem worse than a summary statement of 'needs improvement.' Managers who unpack negatives may unknowingly inflate the perceived severity of an employee's issues.

Politics & media

Political campaigns and media outlets exploit this effect by itemizing specific threats (terrorism, cyberattacks, pandemics, economic collapse) rather than referring to 'national security risks' in the abstract. The unpacked list makes the total threat seem more imminent and severe, which can be used to justify policy positions or generate engagement.

HOW TO SPOT IT

Ask yourself…

  • Am I estimating the probability of a broad category, or have I been presented with an itemized breakdown that might be inflating my total estimate?
  • If I add up my individual probability estimates for each subcomponent, do they sum to more than the probability I would assign to the category as a whole?
  • Is someone strategically unpacking a risk or opportunity into vivid subcategories to make it seem larger or more urgent than it actually is?
HOW TO DEFEND AGAINST IT

The playbook.

  • After estimating individual components, explicitly sum them and compare to your overall estimate for the same category. If they diverge significantly, investigate why.
  • Before being presented with an itemized breakdown, first anchor yourself with a holistic estimate of the packed category.
  • Ask whether the person presenting itemized risks or opportunities has an incentive to make the total seem larger (e.g., an insurance salesperson, a prosecutor, a fundraiser).
  • Use reference classes and base rates from statistical databases rather than relying on subjective estimates of subcategories.
  • Practice normalizing: after estimating parts, force yourself to allocate a fixed 100% budget across all subcategories to check for internal consistency.
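The sum-and-compare and normalizing steps above can be sketched as a quick coherence check. This is a minimal illustration, not a prescribed tool; the failure modes and all the numbers are hypothetical, echoing the product-launch example:

```python
# Coherence check for probability estimates of mutually exclusive
# failure modes of a single event. All figures are made up.

def check_and_normalize(whole_estimate, part_estimates):
    """Compare the packed estimate with the sum of its unpacked parts,
    then rescale the parts so they are consistent with the whole."""
    total = sum(part_estimates.values())
    ratio = total / whole_estimate  # > 1.0 signals subadditivity
    normalized = {name: p * whole_estimate / total
                  for name, p in part_estimates.items()}
    return total, ratio, normalized

# Packed estimate: 15% chance the launch fails.
whole = 0.15
# Unpacked estimates, elicited one failure mode at a time.
parts = {"manufacturing": 0.12, "marketing": 0.10,
         "supply chain": 0.11, "regulatory": 0.09}

total, ratio, normalized = check_and_normalize(whole, parts)
print(f"parts sum to {total:.0%} vs whole of {whole:.0%} (x{ratio:.1f})")
print({name: round(p, 3) for name, p in normalized.items()})
```

A divergence this large (42% vs 15%) doesn't say which set of numbers is right; it says the two elicitations are inconsistent and at least one needs revisiting.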
FAMOUS CASES

In history.

  • Tversky and Koehler's 1994 cause-of-death study, where participants judged the probability of dying from 'natural causes' at 58%, while the sum of probabilities for cancer, heart attack, and other natural causes totaled 73%.
  • Insurance industry practices of selling itemized coverage plans at higher premiums than equivalent bundled policies, exploiting the effect that unpacked risks feel larger.
WHERE IT COMES FROM
Academic origin

Amos Tversky and Derek J. Koehler, 1994, in their paper 'Support Theory: A Nonextensional Representation of Subjective Probability' published in Psychological Review.

Evolutionary origin

In ancestral environments, survival depended on detecting and responding to specific, concrete threats rather than abstract categories of danger. A brain that becomes more alert when hearing 'snake, spider, and scorpion' than when hearing 'dangerous animals' was better at mobilizing targeted defensive responses. This bias toward concreteness ensured that vivid, identifiable risks received disproportionate attentional resources, even if it distorted holistic probability assessment.

IN AI SYSTEMS

How the machines inherit it.

AI systems trained on human-generated probability estimates can inherit subadditive patterns if training data reflects human judgments rather than calibrated statistical outputs. Recommendation and risk-scoring algorithms that decompose categories into subcategories may produce inflated cumulative risk scores. Additionally, when LLMs are asked to estimate probabilities of events, they may exhibit subadditivity if they generate estimates for parts independently without enforcing normalization constraints, producing outputs where subcategory probabilities sum to more than the whole.
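One mitigation is to post-process independently generated subcategory estimates so they satisfy additivity. A minimal sketch, assuming the subcategories form an exhaustive partition of the outcome space; the raw numbers are hypothetical stand-ins for what a model might return when asked about each cause separately:

```python
# Rescale independently elicited estimates so an exhaustive partition
# sums to 1. Raw values are hypothetical model outputs.

raw = {"cancer": 0.30, "heart disease": 0.25,
       "stroke": 0.10, "other natural causes": 0.18,
       "unnatural causes": 0.40}   # sums to 1.23: subadditive as a whole

scale = 1.0 / sum(raw.values())
coherent = {cause: p * scale for cause, p in raw.items()}

print(round(sum(coherent.values()), 6))  # ~1.0 after rescaling
```

Simple rescaling preserves the relative ordering of the estimates while restoring coherence; it does not, of course, fix any miscalibration shared by all of them.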



