Conservatism Bias

aka Conservatism in Belief Revision · Bayesian Conservatism · Belief Perseverance Conservatism

Updating beliefs in the right direction when shown new evidence, but not nearly enough — clinging to the old view.

Illustration: Conservatism Bias
WHAT IT IS

The glitch, explained plainly.

Imagine you think your friend always picks vanilla ice cream. Then someone tells you he actually picked chocolate the last five times. Instead of saying 'Oh wow, he likes chocolate now!' you think 'Well, he still probably likes vanilla... maybe he just tried chocolate a few times.' You barely change your mind even though the new clues are really strong.

Conservatism bias describes the systematic tendency of individuals to give disproportionate weight to their existing beliefs, predictions, or base rates when confronted with new, relevant evidence. Unlike outright denial or confirmation bias (which selectively seeks supportive evidence), conservatism bias acknowledges the new data but fails to adjust beliefs by the magnitude that rational, Bayesian updating demands. This leads to sluggish, incremental revisions where large shifts are warranted, causing people to remain anchored to outdated assessments. The effect is especially pronounced when new information is abstract, statistical, or complex, as opposed to vivid and concrete, because the cognitive effort required to fully integrate abstract data into existing mental models is substantial.
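The gap between what Bayes' rule prescribes and what a conservative updater does can be made concrete. A minimal sketch, with illustrative numbers (not drawn from any particular study):

```python
def bayes_posterior(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability after evidence with the given likelihood ratio
    P(evidence | hypothesis) / P(evidence | not hypothesis)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# You start 80% confident in a hypothesis; new evidence is six times
# more likely if the hypothesis is FALSE (likelihood ratio 1/6).
prior = 0.80
rational = bayes_posterior(prior, 1 / 6)
print(round(rational, 2))  # 0.4 — confidence should drop sharply
```

A conservative updater shown the same evidence typically lands somewhere around 0.6 to 0.7: moved in the right direction, but well short of the 0.40 that the arithmetic demands.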

SOUND FAMILIAR?

Where it shows up.

  1. Maria, a financial analyst, initially valued a tech company at $120 per share based on a thorough review. Over the following quarter, the company released two strong earnings reports that significantly beat expectations. When asked for an updated target, Maria only revised her estimate to $125, even though her own models, when rerun with the new data, suggested $145.
  2. Dr. Patel diagnosed a patient with a mild respiratory infection during the first visit. Two weeks later, new blood work and imaging showed markers strongly suggesting an autoimmune condition. Dr. Patel ordered a follow-up in a month rather than immediately revising the diagnosis, telling colleagues the original infection was probably still the main issue.
  3. A hiring committee initially rated a candidate as average after reviewing their resume. During the interview, the candidate demonstrated exceptional problem-solving skills, provided outstanding references, and presented an impressive portfolio. In the post-interview scoring, the committee only bumped the candidate's rating from 6/10 to 7/10, still passing on the hire.
  4. An intelligence agency assessed a foreign regime as stable based on years of analysis. Over six months, field agents reported surging protests, military defections, and economic collapse. The agency's quarterly report acknowledged the new data but still concluded that regime change was 'unlikely in the near term,' barely shifting from its prior assessment.
  5. Jake estimated a 10% probability that his startup's new product would fail before launch. After a beta test revealed that 40% of users encountered a critical bug and the lead engineer resigned, Jake revised his failure estimate to only 15%, reasoning that the core product concept was still strong and the original analysis had been thorough.
IN DIFFERENT DOMAINS

Where it shows up at work.

The same glitch looks different depending on the terrain. Finance, medicine, a relationship, a team — same mechanism, different costume.

Finance & investing

Investors systematically under-react to corporate events such as earnings announcements, dividend changes, and stock splits. When a company reports earnings substantially above or below expectations, stock prices adjust, but typically not enough — a pattern known as post-earnings-announcement drift — because investors fail to fully incorporate the magnitude of the new information into their valuation models, remaining anchored to prior estimates.

Medicine & diagnosis

Clinicians may cling to an initial diagnosis even as new test results and symptoms accumulate that point toward a different condition. The initial diagnostic impression becomes a cognitive anchor, and subsequent evidence is under-weighted, leading to delayed treatment adjustments. This is particularly dangerous in conditions with evolving presentations, such as cancers initially misidentified as benign conditions.

Education & grading

Teachers form early impressions of student ability based on initial assignments or classroom behavior. When students subsequently improve or decline in performance, teachers tend to adjust their expectations and grading patterns more slowly than the objective evidence warrants, perpetuating early assessments long after they have become inaccurate.

Relationships

People form initial impressions of a partner's character traits early in a relationship and then under-adjust those impressions when faced with contradicting evidence over time. A partner who was initially perceived as reliable may continue to receive the benefit of the doubt long after repeated unreliable behavior, because the prior belief about their character is slow to update.

Tech & product

Product teams anchor on initial user research or feature assumptions and fail to sufficiently pivot when A/B test results or usage analytics present strong contradicting signals. This leads to continued investment in features that data shows users are abandoning, because the team under-weights the new behavioral data relative to their original design thesis.

Workplace & hiring

Managers anchor on first impressions formed during onboarding or early performance reviews. When an employee's performance dramatically shifts — either improving or deteriorating — the manager's formal evaluations and promotion decisions lag behind the actual performance trajectory, reflecting the original assessment more than current reality.

Politics & media

Voters and media consumers form early impressions of political candidates or policies and then under-react to new information such as policy reversals, scandal revelations, or updated economic data. Polling shifts tend to be smaller than the magnitude of new events would predict, as partisans especially resist updating priors that conflict with their established political narratives.

HOW TO SPOT IT

Ask yourself…

  • Am I giving this new evidence the full weight it deserves, or am I treating it as a minor footnote to my existing view?
  • If I had no prior opinion and saw only this new evidence, how different would my conclusion be from what I currently believe?
  • Would a neutral outsider, seeing the same new data, adjust their view more dramatically than I just did?
HOW TO DEFEND AGAINST IT

The playbook.

  • Use a 'clean-slate' exercise: ask yourself what you would conclude if you had no prior opinion and were seeing all the evidence for the first time right now.
  • Quantify your update: assign explicit probability estimates before and after receiving new evidence, then compare your actual adjustment to what Bayes' theorem would prescribe.
  • Appoint a designated devil's advocate in team settings whose role is to argue the case for the new evidence against the established view.
  • Set pre-commitment thresholds: before receiving new data, define in advance what evidence would cause you to change your mind and by how much.
  • Seek out independent, diverse perspectives from people who do not share your prior beliefs to counteract the inertia of your existing model.
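The 'quantify your update' move above can be turned into a quick self-audit. A sketch, using made-up numbers for Jake's case from the examples: compare your actual belief shift to the Bayes-prescribed shift in log-odds, where updates combine additively.

```python
import math

def log_odds(p: float) -> float:
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def conservatism_ratio(prior: float, stated_posterior: float,
                       likelihood_ratio: float) -> float:
    """Fraction of the Bayes-prescribed log-odds shift actually made.
    1.0 = calibrated, below 1.0 = conservative, above 1.0 = over-reaction."""
    prescribed_shift = math.log(likelihood_ratio)
    actual_shift = log_odds(stated_posterior) - log_odds(prior)
    return actual_shift / prescribed_shift

# Jake, roughly: prior failure estimate 10%, stated posterior 15%, and
# evidence we (hypothetically) judge ten times more likely if the product
# is headed for failure (likelihood ratio 10).
ratio = conservatism_ratio(0.10, 0.15, 10.0)
print(round(ratio, 2))  # 0.2 — only a fifth of the warranted update
```

The likelihood ratio is the subjective input here; even a rough guess exposes order-of-magnitude under-reaction like Jake's.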
FAMOUS CASES

In history.

  • The slow institutional response to the 2008 financial crisis, where credit rating agencies and major banks were slow to downgrade mortgage-backed securities despite mounting evidence of systemic default risk, reflecting under-reaction to new negative information.
  • NASA's Challenger disaster in 1986, where engineers and managers failed to sufficiently update their risk assessments about O-ring failure despite accumulating evidence from prior cold-weather launches.
  • Post-earnings-announcement drift, a well-documented market anomaly where stock prices continue drifting in the direction of an earnings surprise for months after the announcement, indicating that the market collectively under-reacts to the initial news.
WHERE IT COMES FROM
Academic origin

Ward Edwards, 1966–1968. Edwards formalized the concept in his landmark bookbag-and-poker-chip experiments and published the key paper 'Conservatism in Human Information Processing' in 1968 (in B. Kleinmuntz, Ed., Formal Representation of Human Judgment). Phillips and Edwards (1966) also published 'Conservatism in a Simple Probability Inference Task' in the Journal of Experimental Psychology.
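The bookbag-and-poker-chip setup is easy to reproduce numerically. In a typical version (the 70/30 split used here is the commonly cited parameterization; exact figures varied across experiments), one bag holds 70% red chips, the other 30%, and subjects watch chips drawn from a randomly chosen bag:

```python
def bookbag_posterior(reds: int, blues: int, p_red: float = 0.7) -> float:
    """P(predominantly-red bag | sample), starting from even prior odds.
    Each red chip multiplies the odds by p_red / (1 - p_red), each blue
    chip by the reciprocal, so only the red-minus-blue count matters."""
    lr = (p_red / (1 - p_red)) ** (reds - blues)
    return lr / (1 + lr)

# A sample of 8 red and 4 blue chips:
posterior = bookbag_posterior(8, 4)
print(round(posterior, 3))  # 0.967
```

Shown samples like this, Edwards' subjects typically reported probabilities closer to 0.7 or 0.8 — revised in the right direction, but by a fraction of what Bayes' theorem prescribes. That gap is the conservatism he named.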

Evolutionary origin

In ancestral environments, prior beliefs were usually built from direct, repeated personal experience and were generally reliable. Rapidly abandoning well-tested priors based on a single new data point could be dangerous — a hunter who ignored years of experience about a safe trail because of one anomalous report might walk into a predator's territory. Weighting accumulated experience heavily protected against noise and deception in an environment where information sources were unreliable.

IN AI SYSTEMS

How the machines inherit it.

Machine learning models trained on historical data can exhibit conservatism-like behavior when fine-tuned or updated with new data, particularly when regularization techniques heavily penalize deviation from pre-trained weights. In recommendation systems, models may be slow to update user preference profiles when user behavior shifts, continuing to serve recommendations based on outdated patterns. LLMs trained with reinforcement learning from human feedback can also reflect conservatism if evaluators systematically under-weight novel or surprising information in their ratings.
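The regularization mechanism can be sketched with a toy one-parameter model. Here fine-tuning minimizes the new-data loss plus an L2 penalty pulling the weight back toward its pre-trained value; the penalty strength is an arbitrary illustrative value, but the qualitative effect — the updated weight stalling between old and new — is the general behavior:

```python
def fine_tune(w_pretrained: float, w_new_optimum: float,
              penalty: float, lr: float = 0.1, steps: int = 1000) -> float:
    """Minimize (w - w_new_optimum)^2 + penalty * (w - w_pretrained)^2
    by gradient descent. The minimum is a weighted average of the two
    targets, so a large penalty keeps w near the pre-trained value."""
    w = w_pretrained
    for _ in range(steps):
        grad = 2 * (w - w_new_optimum) + 2 * penalty * (w - w_pretrained)
        w -= lr * grad
    return w

# New data says the weight should move from 0.0 to 1.0.
print(round(fine_tune(0.0, 1.0, penalty=0.0), 3))  # 1.0  — full update
print(round(fine_tune(0.0, 1.0, penalty=3.0), 3))  # 0.25 — conservative
```

The closed-form minimum is w_new_optimum / (1 + penalty) when starting from zero — the same shrinkage-toward-the-prior shape as conservative belief updating.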

FREE FIELD ZINE

10 glitches quietly running your life.

A free field-zine PDF — ten cognitive glitches named and illustrated, with a defense move for each. Plus the weekly Glitch Report on Fridays — one bias named, two spotted in the wild, one defense move. Unsubscribe any time.


LAUNCH PRICE

Train against your blindspots.

50 cards are free to preview. Buyers unlock the rest of the deck plus the interactive training — unlimited Spot-the-Bias Quiz, Swipe Deck with spaced repetition, My Blindspots, Decision Pre-Flight, the Printable Deck + Cheat Sheets, and the Field Guide e-book. $29.50 (regularly $59).

Unlock the full deck

Everything below — yours forever. Pay once, use across every device.

Half-off launch — limited to the first 100 readers. Auto-applied at checkout.
$59 $29.50
one-time payment · lifetime access
  • All interactive digital cards — search, filter, flip, shuffle on any device
  • Five training modes — Spot-the-Bias Quiz, Swipe Deck, Pre-Flight, Blindspots, Journal
  • Curated Lenses + Decision Templates + Defense Playbook
  • Printable Deck PDFs + Field Guide e-book + Cheat Sheets + Anki Export
  • Every future improvement, included
Unlock  $29.50

30-day refund · no questions asked
