Anthropomorphism

aka Anthropomorphic Bias · Anthropomorphization

Attributing human emotions, intentions, or mental states to animals, objects, or machines.

Illustration: Anthropomorphism
WHAT IT IS

The glitch, explained plainly.

You know how when your stuffed animal falls off the bed, you feel bad for it, like it's sad lying on the floor? That's because your brain is really good at understanding people's feelings, so it accidentally uses those same skills on things that aren't people—like toys, cars, or even the weather.

Anthropomorphism occurs when people project distinctly human psychological qualities—such as consciousness, emotion, desire, and intentionality—onto nonhuman agents, including animals, machines, weather events, and abstract forces. Unlike simple metaphor, anthropomorphism involves genuinely perceiving or believing that the nonhuman entity possesses an inner mental life comparable to a human's, even when no evidence supports this. The tendency is amplified by loneliness, uncertainty, and the need for social connection, as people unconsciously recruit their social cognition systems to interpret ambiguous nonhuman behavior. It operates along a spectrum from weak (metaphorical 'as-if' attributions like calling a computer 'stubborn') to strong (sincere belief that a pet feels guilt or that a river is angry), and it pervades religion, consumer behavior, technology design, and everyday interaction with the natural world.

SOUND FAMILIAR?

Where it shows up.

  1. Maria names her Roomba 'Rosie,' sets up a small charging station she calls 'Rosie's bed,' and feels a pang of sadness when the vacuum bumps into walls, worrying that it might be confused or hurt. She delays replacing it when a newer model comes out because she feels it would be 'unfair' to Rosie.
  2. A trader watches the stock market drop sharply after a positive earnings report and describes the market as 'punishing the company out of spite.' He adjusts his strategy based on the belief that the market is 'testing investors' patience' before rewarding the faithful.
  3. A wildlife documentary team films a mother elephant standing over her dead calf for hours. The narrator describes her as 'grieving,' 'mourning her loss,' and 'remembering the good times.' Viewers overwhelmingly accept these attributions of complex human emotional states without considering simpler behavioral explanations for the elephant's actions.
  4. During a product review meeting, the UX team argues against removing the chatbot's friendly avatar because users in testing said they 'trusted' the bot more and felt it 'understood their problems.' No one questions whether the users' perception of understanding reflects any actual capability of the software.
  5. A software engineer spends weeks debugging an intermittent server crash. Exhausted, she tells her team the server 'doesn't want to cooperate' and 'knows' when they're watching because the bug never appears during monitoring. She begins running tests at odd hours, convinced the system behaves differently when it thinks nobody is looking.
IN DIFFERENT DOMAINS

Where it shows up at work.

The same glitch looks different depending on the terrain. Finance, medicine, a relationship, a team — same mechanism, different costume.

Finance & investing

Investors describe markets as 'nervous,' 'punishing,' or 'rewarding,' treating aggregate price movements as the deliberate decisions of a sentient entity rather than the emergent result of millions of individual transactions, which can lead to emotionally driven trading strategies.

Medicine & diagnosis

Patients attribute intentionality to diseases ('the cancer is fighting back') or to their own bodies ('my immune system is on my side'), which can influence treatment adherence—sometimes positively through agency, sometimes negatively through fatalism or misplaced trust in the body's 'wisdom.'

Education & grading

Students and teachers personify subjects ('math hates me') or learning tools, which can shape motivation and self-efficacy. Educators may also over-attribute understanding to AI tutoring systems, trusting them as if they genuinely comprehend student needs.

Relationships

Pet owners routinely attribute complex human emotions like jealousy, guilt, or vindictiveness to their animals, shaping how they discipline, reward, and emotionally bond with pets in ways that may not match the animal's actual cognitive and emotional capacities.

Tech & product

Designers deliberately exploit anthropomorphism by giving products faces, voices, and names (e.g., Siri, Alexa) to increase user trust and engagement. Users then over-rely on these systems, disclose sensitive information to chatbots, and resist switching products because of perceived 'relationships.'

Workplace & hiring

Teams personify organizational tools, processes, or even the organization itself ('the company doesn't care about us'), which can distort problem-solving by framing systemic issues as intentional hostility rather than structural failures requiring systemic fixes.

Politics & media

Nations and institutions are routinely described with human emotions and intentions ('Russia is angry,' 'the market is nervous'), which simplifies complex geopolitical or economic dynamics into narratives of interpersonal conflict and can bias public opinion toward overly personalized explanations of systemic phenomena.

HOW TO SPOT IT

Ask yourself…

  • Am I attributing an emotion or intention to something that doesn't have a nervous system or mind?
  • Would I describe this nonhuman entity's behavior differently if I removed all human emotional language?
  • Am I assuming this object, animal, or system 'knows,' 'wants,' or 'feels' something without evidence for those internal states?
HOW TO DEFEND AGAINST IT

The playbook.

  • Practice mechanistic redescription: deliberately re-explain the entity's behavior in purely physical, algorithmic, or biological terms without any mental-state language.
  • Apply the 'zombie test': ask whether the behavior would look identical if the entity had zero internal experience—if yes, the human-like attribution is your projection.
  • When interacting with AI or technology, periodically remind yourself of the system's actual architecture: it processes inputs and produces outputs; it does not understand, want, or feel.
  • Use Morgan's Canon (with caution): before attributing a higher-level psychological process to an animal, consider whether a simpler mechanism could explain the same behavior.
  • When you catch yourself using emotional language about nonhuman entities, pause and ask: 'Is this description helping me understand the entity, or is it just making me feel better?'
FAMOUS CASES

In history.

  • The Heider and Simmel (1944) animation experiment, in which participants universally attributed complex social motives and emotions to simple geometric shapes moving on a screen, demonstrating how automatic anthropomorphism is.
  • The widespread public mourning and emotional attachment to NASA's Mars rovers Spirit and Opportunity, with people expressing sadness when Opportunity's final telemetry data (showing critically low power and high atmospheric opacity) was poetically paraphrased by a journalist as 'My battery is low and it's getting dark' — a humanized summary that went viral precisely because it triggered anthropomorphic empathy.
  • The ELIZA effect, named for Joseph Weizenbaum's 1966 chatbot ELIZA: users attributed deep understanding and empathy to a program that merely reflected their own words back, shocking even its creator.
WHERE IT COMES FROM
Academic origin

The concept has ancient philosophical roots (Xenophanes, ~570–478 BCE, criticized anthropomorphic gods), but its modern cognitive-scientific treatment was formalized by Stewart Guthrie in 'Faces in the Clouds' (1993), and the dominant psychological framework was established by Nicholas Epley, Adam Waytz, and John T. Cacioppo in their three-factor theory published in Psychological Review (2007).

Evolutionary origin

In ancestral environments, failing to detect an intentional agent (such as a predator or rival human) was far more costly than falsely detecting one where none existed. A rustling bush misidentified as a predator wastes only a moment of vigilance, but ignoring an actual predator could be fatal. This asymmetric cost-benefit structure selected for a hyperactive agency detection system that errs on the side of attributing minds and intentions to ambiguous stimuli—a 'better safe than sorry' perceptual strategy.

IN AI SYSTEMS

How the machines inherit it.

LLMs and chatbots are designed with conversational patterns that trigger anthropomorphism, causing users to attribute understanding, empathy, and intentions to systems that process text statistically. This leads to overtrust, emotional dependency, and disclosure of sensitive information. In AI development, engineers may anthropomorphize their own models, attributing 'wanting,' 'knowing,' or 'trying' to neural networks, which can distort debugging and evaluation. Training data also reflects human anthropomorphic language about nature and technology, embedding these biases into model outputs.

Read more on Wikipedia
FREE FIELD ZINE

10 glitches quietly running your life.

A free field-zine PDF — ten cognitive glitches, each named, illustrated, and paired with a defense move. Plus the weekly Glitch Report on Fridays — one bias named, two spotted in the wild, one defense move. Unsubscribe any time.

EXPLORE MORE

Related glitches.

LAUNCH PRICE

Train against your blindspots.

50 cards are free to preview. Buyers unlock the rest of the deck plus the interactive training — Spot-the-Bias Quiz unlimited, Swipe Deck with spaced repetition, My Blindspots, Decision Pre-Flight, the Printable Deck + Cheat Sheets, and the Field Guide e-book. $29.50 (regularly $59).

Unlock the full deck

Everything below — yours forever. Pay once, use across every device.

Half-off launch — limited to the first 100 readers. Auto-applied at checkout.
$29.50 (regularly $59)
one-time payment · lifetime access
  • All interactive digital cards — search, filter, flip, shuffle on any device
  • Five training modes — Spot-the-Bias Quiz, Swipe Deck, Pre-Flight, Blindspots, Journal
  • Curated Lenses + Decision Templates + Defense Playbook
  • Printable Deck PDFs + Field Guide e-book + Cheat Sheets + Anki Export
  • Every future improvement, included
Unlock  $29.50

30-day refund · no questions asked
