Algorithmic Aversion

aka Algorithm Aversion · Anti-Algorithm Bias · Machine Aversion

Rejecting algorithmic advice even when it outperforms human judgment, especially after witnessing one error.

WHAT IT IS

The glitch, explained plainly.

Imagine you have two friends helping you pick apples. One friend (the robot) picks 95 great apples and 5 bad ones. The other friend (the human) picks 80 great apples and 20 bad ones. But the moment you see the robot pick one bad apple, you say 'I knew it! Robots can't pick apples!' and go back to the friend who's actually worse at it. You forgive your human friend's mistakes because you think they'll learn, but you never forgive the robot for the same mistake.
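The arithmetic behind the analogy can be made explicit. A minimal Python sketch — the accuracy figures come from the apple story above; everything else is illustrative:

```python
# Accuracy figures from the apple analogy; all other numbers are illustrative.
ROBOT_ACCURACY = 0.95
HUMAN_ACCURACY = 0.80

def expected_bad_apples(accuracy: float, picks: int) -> float:
    """Expected number of bad apples over a given number of picks."""
    return picks * (1 - accuracy)

picks = 100
print(round(expected_bad_apples(ROBOT_ACCURACY, picks)))  # robot: 5 bad apples
print(round(expected_bad_apples(HUMAN_ACCURACY, picks)))  # human: 20 bad apples
# Switching away after one robot error quadruples your expected mistakes.
```

The one bad apple you witnessed is already priced into the robot's 95% — abandoning it trades an expected 5 mistakes per 100 picks for 20.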

Algorithmic aversion describes the paradoxical behavioral pattern in which people refuse to rely on statistical or algorithmic predictions after observing them make even minor errors, while simultaneously tolerating far larger and more frequent errors from human experts. This aversion is amplified for tasks perceived as subjective—such as hiring, medical decisions, or moral judgments—where people believe algorithms lack the empathy, intuition, or contextual understanding required. Critically, the bias is asymmetric: people hold algorithms to a standard of near-perfection while granting humans a generous margin of error, because they believe humans can learn and adapt from mistakes in ways algorithms cannot. The effect intensifies as the stakes of a decision increase, creating a tragic pattern in which people are most likely to reject superior algorithmic advice precisely when getting the decision right matters most.

SOUND FAMILIAR?

Where it shows up.

  1. A hospital introduces an AI diagnostic tool that correctly identifies early-stage cancers at a 94% accuracy rate, compared to radiologists' 88%. After the AI misses one tumor that a doctor catches during a routine review, the hospital's chief of medicine pushes to discontinue the AI system entirely, despite its overall superior track record.
  2. A hedge fund manager reviews the performance of a quantitative trading algorithm that has outperformed his team's picks by 12% over three years. After one quarter where the algorithm underperforms the market by 3%, he overrides it and returns to human-driven stock selection, telling colleagues the model 'doesn't understand market psychology.'
  3. A hiring manager uses an AI screening tool that identifies candidates who perform 20% better on the job on average. When she discovers the tool rejected a candidate she personally thought was excellent, she abandons the tool for all future hiring rounds, even though her own interview-based selections have a higher turnover rate.
  4. A logistics company's route-optimization software consistently reduces delivery times by 15%. After a snowstorm causes the software to recommend a route that turns out to be impassable, the dispatcher switches back to manually planning all routes, reasoning that 'only a human can account for weather.' He doesn't consider that the software's overall record still vastly exceeds his own route planning.
  5. A lawyer uses an AI-powered legal research tool that surfaces relevant case precedents faster and more comprehensively than manual research. After the tool once returns a case that was later overruled—something the lawyer caught during review—she stops using the tool, even though she herself had missed an even more critical precedent in an earlier case she researched manually. She reasons that she 'needs to understand the reasoning behind each citation,' despite the tool providing explanatory summaries.
IN DIFFERENT DOMAINS

Where it shows up at work.

The same glitch looks different depending on the terrain. Finance, medicine, a relationship, a team — same mechanism, different costume.

Finance & investing

Investors and portfolio managers frequently override quantitative models after a single losing quarter, reverting to discretionary stock-picking that statistically underperforms index-tracking algorithms. The aversion is strongest after market downturns, when the emotional sting of algorithmic losses feels less tolerable than equivalent human-made losses.

Medicine & diagnosis

Patients and physicians resist AI-assisted diagnostics—especially in radiology, dermatology, and pathology—despite evidence that these systems match or exceed specialist accuracy. The resistance intensifies for life-or-death decisions, where people feel that only a human can bear moral responsibility for an error.

Education & grading

Teachers and administrators resist algorithmic tools for student placement, grading, or early-warning dropout detection, believing that human intuition captures nuances that data cannot. This persists even when algorithmic predictions of student success are demonstrably more accurate than teacher judgment.

Relationships

People strongly prefer human matchmakers or their own intuition over dating algorithms, viewing romantic compatibility as inherently subjective and resistant to quantification. Even users of dating apps may distrust the matching algorithm while trusting their own profile-swiping instincts, which are often driven by superficial cues.

Tech & product

Users abandon recommendation engines, spam filters, or autocomplete features after a single salient failure, reverting to manual processes that are slower and less accurate. Product teams struggle with the paradox that making algorithmic errors more visible (for transparency) can accelerate user abandonment.

Workplace & hiring

HR departments resist algorithmic resume screening or performance evaluation tools after any publicized error, preferring interview-based assessments that introduce well-documented biases like the halo effect. Managers distrust algorithmic scheduling and forecasting tools the moment they produce a visibly wrong output.

Politics & media

Voters and the public distrust algorithmic content curation and moderation on social media, perceiving automated decisions about what news to show or which content to remove as lacking nuance and fairness—even when human moderators make similar or more frequent errors. This distrust fuels demands for 'human oversight' regardless of comparative accuracy.

HOW TO SPOT IT

Ask yourself…

  • Am I holding this algorithm to a standard of perfection that I would never apply to a human doing the same task?
  • Did I lose trust in this tool because of one visible error, while ignoring its overall track record compared to the human alternative?
  • Am I rejecting algorithmic advice because I genuinely believe a human will do better, or because it just feels more comfortable to have a person in charge?
HOW TO DEFEND AGAINST IT

The playbook.

  • Before overriding an algorithm, write down its track record versus human performance over the last 20+ decisions—not just the one error you remember.
  • Apply the 'identical error' test: if a human colleague had made this same mistake, would you fire them or just move on? Give the algorithm the same grace.
  • Insist on base-rate comparisons: how often does the algorithm err versus the human alternative, in absolute numbers, not anecdotes?
  • Request or build a 'modification interface'—research shows that allowing even tiny adjustments to algorithmic outputs dramatically increases willingness to use them.
  • For subjective tasks, remind yourself that human judgment in these domains is also riddled with well-documented biases—the alternative to an imperfect algorithm is not a perfect human.
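The first and third items in the playbook amount to a simple tally. A minimal sketch, assuming hypothetical decision logs where `True` marks an error:

```python
def error_rate(decisions: list[bool]) -> float:
    """Fraction of logged decisions that were errors (True = error)."""
    return sum(decisions) / len(decisions)

# Hypothetical logs of the last 20 decisions each.
algorithm_log = [False] * 19 + [True]   # 1 error in 20 -> 5% error rate
human_log = [False] * 16 + [True] * 4   # 4 errors in 20 -> 20% error rate

# The 'identical error' test in base-rate form: compare records, not anecdotes.
keep_algorithm = error_rate(algorithm_log) < error_rate(human_log)
print(keep_algorithm)  # True: one salient error doesn't change the base rates
```

Writing the comparison down — even this crudely — forces the single memorable error into the same denominator as everything else.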
FAMOUS CASES

In history.

  • Paul Meehl's landmark 1954 review showed statistical models outperforming clinical judgment across multiple domains, yet the medical profession continued to resist actuarial methods for decades.
  • Nate Silver's FiveThirtyEight model correctly predicted all 50 states in the 2012 U.S. presidential election, yet public and pundit trust in poll-aggregation algorithms remained fragile and declined sharply when the 2016 election produced an outcome the models had rated unlikely.
  • Autonomous vehicle development has faced disproportionate public backlash after rare fatal accidents, despite human drivers causing vastly more fatalities per mile driven.
WHERE IT COMES FROM
Academic origin

Berkeley J. Dietvorst, Joseph P. Simmons, and Cade Massey at the Wharton School, University of Pennsylvania, coined and formalized the term in 2015, published in the Journal of Experimental Psychology: General.

Evolutionary origin

Humans evolved to calibrate trust through social signals—reading intentions, emotions, and accountability in other agents. Survival depended on judging whether a fellow tribesperson was reliable based on eye contact, tone, and reciprocity. Delegating decisions to an entity that provides no social feedback loop (no explanation, no apology, no body language) violates deeply wired trust-assessment circuits that expect agency and intentionality in decision-makers.

IN AI SYSTEMS

How the machines inherit it.

Algorithmic aversion creates a feedback loop that degrades AI systems: when users override or ignore algorithmic recommendations, the resulting human-generated data pollutes training sets, teaching models to mimic suboptimal human decisions. Additionally, organizations underinvest in AI tools because adoption metrics are low, creating a self-fulfilling prophecy where algorithms never get the data or iteration cycles needed to improve. AI systems may also be deliberately 'dumbed down' or made less transparent to match human expectations, sacrificing accuracy for perceived trustworthiness.

FREE FIELD ZINE

10 glitches quietly running your life.

A free field-zine PDF — ten cognitive glitches named, illustrated, with a defense move for each. Plus the weekly Glitch Report on Fridays — one bias named, two spotted in the wild, one defense move. Unsubscribe any time.


LAUNCH PRICE

Train against your blindspots.

50 cards are free to preview. Buyers unlock the rest of the deck plus the interactive training — Spot-the-Bias Quiz unlimited, Swipe Deck with spaced repetition, My Blindspots, Decision Pre-Flight, the Printable Deck + Cheat Sheets, and the Field Guide e-book. $29.50 (regularly $59).

Unlock the full deck

Everything below — yours forever. Pay once, use across every device.

Half-off launch — limited to the first 100 readers. Auto-applied at checkout.
$59 $29.50
one-time payment · lifetime access
  • All interactive digital cards — search, filter, flip, shuffle on any device
  • Five training modes — Spot-the-Bias Quiz, Swipe Deck, Pre-Flight, Blindspots, Journal
  • Curated Lenses + Decision Templates + Defense Playbook
  • Printable Deck PDFs + Field Guide e-book + Cheat Sheets + Anki Export
  • Every future improvement, included
Unlock  $29.50

30-day refund · no questions asked
