Severity Bias

aka Outcome Severity Effect · Severity-Responsibility Link · Severity Effect

Assigning more blame for identical behavior when the outcome happens to be worse, even though nothing about the decision itself differed.

WHAT IT IS

The glitch, explained plainly.

Imagine two kids throwing a ball in the house. Both throw it the exact same way. One kid's ball bounces harmlessly off a pillow, and the other kid's ball breaks a fancy vase. Even though they did the exact same thing, the kid who broke the vase gets in way more trouble. We punish people based on how bad things turned out, not just on what they actually did.

Severity Bias describes the systematic distortion in moral and causal judgment whereby the seriousness of a negative outcome inflates perceived blame, negligence, and deserved punishment for the person involved, independent of their actual intentions, knowledge, or decision quality. When an identical risky action results in a catastrophic outcome versus a minor or neutral one, observers retroactively judge the actor as more reckless, less competent, and more deserving of sanction — even though nothing about the actor's behavior differed. This bias operates as a powerful confound in legal, medical, and organizational settings, where identical conduct can be deemed acceptable when outcomes are benign but negligent when outcomes are tragic. The effect is robust across contexts and persists even among professionals trained to separate process from outcome.

SOUND FAMILIAR?

Where it shows up.

  1. Two nurses on different shifts each accidentally give a patient a slightly higher dose of pain medication than prescribed — the same dosing error with the same drug. On one shift the patient recovers normally; on the other, the patient has a rare allergic reaction and ends up in the ICU. The hospital board reviews both cases and fires the second nurse for gross negligence while giving the first nurse only a verbal warning.
  2. A city planner approves a bridge design that uses standard safety margins. Years later, an unusual storm causes the bridge to collapse, killing three people. An internal investigation concludes the planner was incompetent and should have anticipated extreme weather, even though dozens of other bridges built to the same standard survived the storm without incident.
  3. A financial advisor recommends the same moderately aggressive portfolio to two clients with identical risk profiles. One client's portfolio drops 40% during a market crash, while the other sells just before the crash for unrelated personal reasons. The first client sues the advisor for reckless advice; the second client still speaks highly of the advisor's competence.
  4. A teacher allows students to conduct a chemistry experiment that carries a small, known risk of producing fumes. In one class, the ventilation works properly and nothing happens. In another class with the same setup, a ventilation fan fails and a student gets lightheaded. Parents of the second class demand the teacher be disciplined for 'dangerous negligence,' while parents of the first class never raise the issue.
  5. During a product launch meeting, a VP decides to skip an extra round of user testing to meet the deadline — a common and accepted trade-off at the company. When the product launches successfully, the decision is never questioned. When the same VP makes the same call on a different product and it ships with a critical bug that costs $2 million, the board cites the skipped testing as evidence of poor leadership.
IN DIFFERENT DOMAINS

Where it shows up at work.

The same glitch looks different depending on the terrain. Finance, medicine, a relationship, a team — same mechanism, different costume.

Finance & investing

Investors and regulators judge fund managers as more negligent or incompetent when risky but reasonable investment decisions result in large losses versus small gains, even when the strategy was identical. Financial advisors face lawsuits not for the quality of their advice but for the magnitude of losses that luck produced.

Medicine & diagnosis

Physicians face dramatically different evaluations of identical clinical decisions depending on patient outcomes. Doctors who make standard-of-care decisions are judged as negligent when patients die or suffer severe complications but competent when patients recover — driving defensive medicine and over-testing to avoid post-hoc blame.

Education & grading

Teachers who allow the same reasonable level of physical activity or independence in the classroom face disproportionate blame and disciplinary action when a student is injured versus when no injury occurs, leading to excessively risk-averse policies that limit learning opportunities.

Relationships

Partners judge each other's decisions — about finances, parenting, or travel plans — based on how things turned out rather than on the reasoning at the time. A spouse whose choice to delay a doctor visit coincides with a worsening condition is blamed for negligence, while the same delay with a spontaneous recovery goes unnoticed.

Tech & product

Engineering teams that ship code with known acceptable risks are scrutinized and blamed after outages or security breaches but praised for speed and pragmatism when nothing goes wrong, creating an inconsistent culture around risk tolerance and technical debt.

Workplace & hiring

Hiring managers who take a chance on unconventional candidates are praised as visionary when the hire excels but criticized as reckless when the hire underperforms, discouraging innovative talent acquisition. Post-incident reviews focus disproportionately on scapegoating individuals involved in severe failures rather than examining systemic process weaknesses.

Politics & media

Policy decisions are evaluated primarily by their outcomes rather than their rationale. Leaders who enact reasonable policies face public outrage and calls for resignation when those policies coincide with bad outcomes (economic downturns, natural disaster casualties), while the same policies are seen as wise when outcomes happen to be favorable.

HOW TO SPOT IT

Ask yourself…

  • Am I judging this person's decision differently than I would if the outcome had been better or worse?
  • Would I call this behavior 'negligent' if nothing bad had happened?
  • Am I letting the severity of the consequence drive my assessment of the decision-maker's competence or intent?
HOW TO DEFEND AGAINST IT

The playbook.

  • Before evaluating someone's decision, mentally imagine the same decision leading to the best possible outcome — would you still see it as negligent?
  • Separate the evaluation into two explicit steps: first assess the quality of the decision process given what was known at the time, then note the outcome as a separate, independent variable.
  • In organizational post-mortems, adopt a 'process audit' framework that evaluates whether procedures were followed, not whether outcomes were favorable.
  • Use pre-commitment: write down your assessment of a decision's quality before learning the outcome whenever possible.
  • Ask yourself: 'If ten people made this exact same decision, how many would have gotten a bad outcome?' to calibrate whether the outcome was genuinely foreseeable.
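The last question in the playbook can be made concrete with a small simulation. The sketch below assumes an illustrative per-decision failure probability of 5% (the number and the function name are hypothetical, not from the text): ten people make the exact same call, and only luck decides who draws the bad outcome.

```python
import random

def simulate_outcomes(p_bad=0.05, n_actors=10, trials=10_000, seed=42):
    """Simulate many groups of actors making the identical decision.

    p_bad is the assumed per-decision probability of a bad outcome.
    Every actor's process is the same; only luck differs. Returns the
    average number of bad outcomes per group of n_actors.
    """
    rng = random.Random(seed)
    total_bad = 0
    for _ in range(trials):
        # Each actor independently draws the same outcome lottery.
        total_bad += sum(rng.random() < p_bad for _ in range(n_actors))
    return total_bad / trials

avg = simulate_outcomes()
print(f"Average bad outcomes per 10 identical decisions: {avg:.2f}")
```

On these assumptions roughly one actor in twenty draws the bad outcome, so blaming that one actor more harshly than the other nineteen is blaming the lottery, not the decision.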
FAMOUS CASES

In history.

  • Medical malpractice litigation patterns consistently show that identical medical decisions result in vastly different jury verdicts depending on whether the patient survived or died, as documented in multiple studies of anesthesiology and surgical outcomes.
  • The investigation and public blame following the Challenger space shuttle disaster focused heavily on individual decision-makers, despite the fact that the same risk-acceptance culture had persisted through many successful launches without scrutiny.
  • The 2008 financial crisis led to severe condemnation of risk strategies that were standard practice and widely accepted across the industry for years when outcomes were positive.
WHERE IT COMES FROM

Academic origin

Elaine Walster (1966) first demonstrated the severity-responsibility link in her seminal paper 'Assignment of Responsibility for an Accident' in the Journal of Personality and Social Psychology. The concept was further developed by Kelly Shaver (1970) through the defensive attribution hypothesis and synthesized in Robbennolt's (2000) meta-analytic review of outcome severity and responsibility judgments.

Evolutionary origin

In ancestral environments, severe outcomes — a death, a serious injury, a destroyed food cache — demanded swift identification of responsible agents to deter future dangerous behavior. Erring on the side of harsher blame for worse outcomes served a protective function: it discouraged risk-taking that could threaten group survival and signaled strong norms against carelessness. The cost of under-blaming a genuinely negligent actor (allowing future harm) was far greater than the cost of over-blaming an unlucky one.

IN AI SYSTEMS

How the machines inherit it.

Machine learning models trained on human judgment data (legal sentencing datasets, performance reviews, incident reports) inherit severity bias by learning to weight outcome severity as a predictor of negligence or fault. AI systems used in legal or insurance contexts may systematically assign higher risk scores or blame assessments to actors associated with severe outcomes, perpetuating the bias at scale without the possibility of case-by-case human correction.
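A minimal sketch of how the bias enters training data (all names, counts, and probabilities below are illustrative assumptions): every synthetic incident has the same process quality, but a severity-biased labeler blames severe outcomes far more often. Any model fit to these labels would learn outcome severity as a predictor of fault.

```python
import random

def biased_training_data(n=1000, seed=0):
    """Synthetic incident reports labeled by a severity-biased reviewer.

    Every incident reflects the SAME decision process; only outcome
    severity differs (pure luck). The hypothetical human labeler blames
    severe outcomes far more often -- the pattern a model inherits.
    """
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        severe = rng.random() < 0.5        # outcome severity: coin flip
        p_blame = 0.7 if severe else 0.1   # biased labeling policy
        blamed = rng.random() < p_blame
        data.append((severe, blamed))
    return data

def blame_rate(data, severe):
    """Fraction of incidents labeled 'blamed' at a given severity."""
    rows = [blamed for s, blamed in data if s == severe]
    return sum(rows) / len(rows)

data = biased_training_data()
print(f"Blame rate, severe outcomes: {blame_rate(data, True):.2f}")
print(f"Blame rate, benign outcomes: {blame_rate(data, False):.2f}")
```

Since process quality never varies in this data, any gap between the two rates that a model learns is severity bias, reproduced at scale.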
