Confirmation Bias

aka Confirmatory Bias · Myside Bias · Verification Bias

Seeking, interpreting, and remembering information in ways that confirm what you already believe.

WHAT IT IS

The glitch, explained plainly.

Imagine you decide your neighbor is mean. After that, every time they don't wave at you, you think 'See? They're mean!' But every time they do wave, you think 'They're just pretending.' You only notice the things that prove what you already think, and you ignore or explain away anything that doesn't match.

Confirmation bias operates across three distinct stages of information processing: biased search (selectively seeking evidence that supports existing beliefs), biased interpretation (construing ambiguous evidence as supporting one's position), and biased recall (preferentially remembering information consistent with one's views). This triad of distortions means that even when two people encounter identical evidence, each may walk away more convinced of their opposing positions — a phenomenon known as attitude polarization. The bias is not limited to strongly held beliefs; it can affect how people test even mundane hypotheses, as people tend to employ a 'positive test strategy' — asking questions whose expected answers would confirm rather than disconfirm their current theory. Crucially, confirmation bias is largely unconscious: people genuinely believe they are being objective while systematically filtering reality through the lens of their preconceptions.

SOUND FAMILIAR?

Where it shows up.

  1. A detective becomes convinced early in the investigation that the husband is the murderer. Over the next two weeks, she interviews dozens of witnesses. She spends significant time following up on tips implicating the husband but dismisses two alibis and a contradictory forensic report as 'probably unreliable' without investigating them further.
  2. A venture capitalist believes a startup will succeed because she loves the founders. During due diligence, she reads the financial projections carefully and highlights every positive metric, but when a consultant's report flags serious market risks, she decides the consultant 'doesn't understand the sector' and moves forward with the investment.
  3. A teacher believes a certain student is gifted. When reviewing the student's mixed test results at the end of the semester, she attributes the high scores to the student's brilliance and the low scores to 'off days' or poorly designed questions — never reconsidering her original assessment of the student's ability.
  4. A product manager is convinced that users want a dark-mode feature. She designs a survey with questions like 'How much would dark mode improve your experience?' and 'Would you use the app more with dark mode?' She never asks open-ended questions about what users actually struggle with, and presents the overwhelmingly positive survey results as validation.
  5. A climate-change skeptic and a climate activist both read the same 200-page IPCC report. The skeptic highlights passages about modeling uncertainties and concludes the science is unsettled. The activist highlights passages about rising temperatures and concludes the evidence is overwhelming. Both feel more confident in their original positions after reading the same document.
IN DIFFERENT DOMAINS

Where it shows up at work.

The same glitch looks different depending on the terrain. Finance, medicine, a relationship, a team — same mechanism, different costume.

Finance & investing

Investors who are bullish on a stock tend to seek out analyst reports and news articles that support their position while dismissing bearish analysis as uninformed or biased, leading to overconcentration in losing positions and delayed recognition of downturns.

Medicine & diagnosis

Physicians who form an early diagnostic hypothesis may selectively order tests that confirm the suspected condition while neglecting to rule out alternative diagnoses, contributing to diagnostic error — particularly with conditions that share overlapping symptoms.

Education & grading

Teachers who form early impressions of students as high- or low-performing tend to interpret subsequent academic work through that lens, grading ambiguous answers more favorably for 'good' students and more harshly for 'weak' ones, reinforcing the original assessment.

Relationships

Partners who believe their relationship is deteriorating selectively attend to their partner's negative behaviors as evidence of decline while overlooking gestures of affection, creating a self-reinforcing cycle of dissatisfaction and withdrawal.

Tech & product

Product teams that are emotionally invested in a feature tend to design user research that validates their solution rather than genuinely testing whether it solves a real problem, leading to the launch of features users don't need or want.

Workplace & hiring

Hiring managers who form a positive first impression of a candidate in the first 30 seconds of an interview tend to spend the remaining time asking softball questions that let the candidate confirm that impression, while asking tougher questions of candidates who made a weaker first impression.

Politics & media

Voters preferentially consume news from outlets that align with their existing political views and dismiss reporting from opposing outlets as biased or fake, deepening partisan divides and making compromise increasingly difficult.

HOW TO SPOT IT

Ask yourself…

  • Am I actively searching for information that could prove me wrong, or only looking for evidence that supports what I already believe?
  • If someone I disagreed with presented this exact same evidence, would I evaluate it differently?
  • Have I given the strongest counterargument the same careful attention I gave the arguments in my favor?
HOW TO DEFEND AGAINST IT

The playbook.

  • Practice 'steelmanning': Before dismissing an opposing view, force yourself to articulate the strongest possible version of that argument as if you had to defend it.
  • Actively seek out a 'designated dissenter' — ask a trusted person to argue against your position before you commit to a decision.
  • Use a pre-mortem technique: Imagine your decision failed spectacularly, then work backward to identify what evidence you might be ignoring right now.
  • Apply the 'Consider the Opposite' protocol: Before finalizing a judgment, list three reasons your belief could be wrong.
  • Track your prediction accuracy in a decision journal to build calibration awareness and reveal patterns of selective evidence gathering.
FAMOUS CASES

In history.

  • The 2003 Iraq War intelligence failure, where analysts selectively emphasized evidence supporting the existence of weapons of mass destruction while discounting dissenting intelligence assessments.
  • The Challenger Space Shuttle disaster (1986), where NASA engineers and managers focused on data supporting the safety of launching in cold weather while downplaying warnings from Morton Thiokol engineers about O-ring failure.
  • The prosecution of wrongful convictions such as the Central Park Five case (1989), where investigators focused on coerced confessions and ignored DNA evidence and timeline inconsistencies pointing to a different perpetrator.
WHERE IT COMES FROM
Academic origin

Peter Cathcart Wason, 1960. Formalized through his '2-4-6 Task' experiment published as 'On the failure to eliminate hypotheses in a conceptual task' in the Quarterly Journal of Experimental Psychology. The concept was comprehensively reviewed by Raymond S. Nickerson in 1998.
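The logic of the 2-4-6 task can be sketched in a few lines of code. In Wason's experiment, participants saw the triple (2, 4, 6), typically hypothesized a rule like 'add 2 each time,' and then tested only triples their hypothesis predicted would pass — the positive test strategy. The sketch below (rule and hypothesis functions are illustrative, not Wason's materials) shows why this never falsifies the hypothesis, while a single 'expected-no' probe does:

```python
# Wason's 2-4-6 task, sketched: the experimenter's hidden rule is simply
# "strictly ascending", but the (2, 4, 6) example invites a narrower guess.

def hidden_rule(triple):
    """The experimenter's actual rule: any strictly ascending triple."""
    a, b, c = triple
    return a < b < c

def fits_hypothesis(triple):
    """The participant's narrower hypothesis: steps of exactly +2."""
    a, b, c = triple
    return b - a == 2 and c - b == 2

# Positive test strategy: probe only triples the hypothesis says are "yes".
positive_tests = [(2, 4, 6), (10, 12, 14), (1, 3, 5), (100, 102, 104)]
# Every probe is confirmed, so the (wrong) hypothesis is never challenged.
print(all(hidden_rule(t) for t in positive_tests))  # True

# Disconfirming probe: a triple the hypothesis predicts should be rejected.
probe = (1, 2, 3)
print(fits_hypothesis(probe))  # False -> hypothesis says "no"
print(hidden_rule(probe))      # True  -> experimenter says "yes": hypothesis falsified
```

Only the probe the participant expects to fail carries any power to falsify — which is exactly the kind of test the positive test strategy never generates.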

Evolutionary origin

In ancestral environments, rapidly categorizing stimuli based on prior experience was critical for survival. If rustling bushes previously indicated a predator, treating all future rustling as dangerous — and ignoring the times it was just the wind — was an asymmetrically safe error. Building on and confirming existing mental models of the environment allowed for faster decision-making and group cohesion, where shared beliefs enabled coordinated action against threats without the costly delay of re-evaluating every assumption from scratch.

IN AI SYSTEMS

How the machines inherit it.

Machine learning models can exhibit confirmation bias when trained on historically biased datasets, reinforcing existing patterns rather than discovering novel ones. For example, a hiring algorithm trained on past successful hires — predominantly from one demographic — will preferentially score similar candidates higher, perpetuating the original bias. In LLMs, confirmation bias emerges when models are fine-tuned or prompted in ways that favor certain viewpoints, producing outputs that echo training data distributions rather than reflecting balanced evidence. Recommender systems amplify this further by creating algorithmic filter bubbles that confirm users' existing preferences and beliefs.
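The hiring-algorithm feedback loop above can be reduced to a toy sketch. The data and scoring function here are invented purely for illustration — a scorer that rewards resemblance to past hires, with no notion of merit, mechanically reproduces whatever skew the history contains:

```python
from collections import Counter

# Toy history: past "successful hires", skewed toward one school (illustrative data).
past_hires = ["StateU", "StateU", "StateU", "StateU", "TechU"]

# A naive scorer that rewards resemblance to past hires -- no notion of merit.
school_freq = Counter(past_hires)

def score(candidate_school):
    # Fraction of past hires sharing the candidate's school.
    return school_freq[candidate_school] / len(past_hires)

print(score("StateU"))  # 0.8 -- favored simply for matching history
print(score("TechU"))   # 0.2
print(score("CityU"))   # 0.0 -- never hired before, so never scored well
```

Real models are more elaborate, but the failure mode is the same: the objective is 'look like what we confirmed before,' so the system searches, interprets, and recalls exactly as a biased human would.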

Read more on Wikipedia
FREE FIELD ZINE

10 glitches quietly running your life.

A free field-zine PDF — ten cognitive glitches named, illustrated, with a defense move for each. Plus the weekly Glitch Report on Fridays — one bias named, two spotted in the wild, one defense move. Unsubscribe any time.

LAUNCH PRICE

Train against your blindspots.

50 cards are free to preview. Buyers unlock the rest of the deck plus the interactive training — Spot-the-Bias Quiz unlimited, Swipe Deck with spaced repetition, My Blindspots, Decision Pre-Flight, the Printable Deck + Cheat Sheets, and the Field Guide e-book. $29.50 (regularly $59).

Unlock the full deck

Everything below — yours forever. Pay once, use across every device.

Half-off launch — limited to the first 100 readers. Auto-applied at checkout.
$59 $29.50
one-time payment · lifetime access
  • All interactive digital cards — search, filter, flip, shuffle on any device
  • Five training modes — Spot-the-Bias Quiz, Swipe Deck, Pre-Flight, Blindspots, Journal
  • Curated Lenses + Decision Templates + Defense Playbook
  • Printable Deck PDFs + Field Guide e-book + Cheat Sheets + Anki Export
  • Every future improvement, included
Unlock  $29.50

30-day refund · no questions asked
