Intentional Stance

aka Intentional Systems Theory

Automatically interpreting the behavior of people, animals, or objects as driven by beliefs and desires, even when it isn't.

WHAT IT IS

The glitch, explained plainly.

Imagine you see a ball rolling toward you. Your first thought isn't 'the wind pushed it'—it's 'that ball is coming to get me!' Your brain is like a detective that always assumes someone did something on purpose, even when it was just an accident or just how a machine works. You have to stop and think harder to realize nobody meant anything by it.

The Intentional Stance describes our deep-seated cognitive default of explaining and predicting behavior by attributing mental states—beliefs, desires, goals, and intentions—to the entity we observe. Originally formulated by philosopher Daniel Dennett as a predictive strategy, it becomes a bias when applied reflexively and inappropriately: we treat thermostats as 'wanting' warmth, algorithms as 'deciding' to show us content, and strangers' neutral actions as deliberately aimed at us. This over-attribution is automatic and fast, requiring effortful cognitive override to recognize that behavior may be accidental, mechanistic, or purely situational. The bias intensifies under cognitive load, time pressure, or emotional arousal, as the slower, analytic system that could generate non-intentional explanations is suppressed.

SOUND FAMILIAR?

Where it shows up.

  1. After a coworker accidentally deletes a shared file, Marcus immediately concludes she did it to sabotage his project, despite her history of being technologically clumsy and her genuine apologies. He begins documenting her 'pattern of intentional interference' for HR.
  2. A customer service chatbot gives Sarah an incorrect answer about her refund status. She writes a furious complaint letter accusing the company of 'programming their bot to deceive customers and deny legitimate claims,' attributing strategic deception to what was a simple retrieval error.
  3. When a self-driving car abruptly brakes at a green light due to a sensor glitch, a nearby pedestrian feels the car 'decided' to stop for him and waves a thank-you, then later tells friends the car was 'being cautious and considerate.'
  4. A researcher notices that her recommendation algorithm keeps surfacing articles about a sensitive medical topic she searched once. She concludes the algorithm is 'trying to worry her' and 'knows what gets under her skin,' rather than recognizing it as a simple pattern-matching function optimizing for engagement metrics.
  5. During a board meeting, the company's automated fraud detection system flags several legitimate transactions from a new market. The CEO interprets the system's flags as evidence that 'the AI has developed a conservative philosophy about international expansion' and argues they need to 'convince it' rather than simply adjusting the threshold parameters.
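To see how little 'philosophy' there is to convince in the board-meeting example, here is a minimal, hypothetical sketch of a threshold-based fraud flag. The function name, scores, and threshold values are all invented for illustration; real systems are more elaborate, but the point stands: the 'opinion' is a number compared against another number.

```python
# Hypothetical sketch: a "fraud detector" is often just a risk score
# compared against a threshold. There is no philosophy to argue with;
# changing one parameter changes the "opinion".

def flag_transaction(risk_score: float, threshold: float = 0.8) -> bool:
    """Flag a transaction as suspicious when its risk score crosses a threshold."""
    return risk_score >= threshold

# Transactions from an unfamiliar market tend to score high on naive features.
new_market_scores = [0.82, 0.85, 0.79]

# With the default threshold, two of three legitimate transactions are flagged.
flags_before = [flag_transaction(s) for s in new_market_scores]

# After recalibrating the threshold for the new market, none are flagged.
flags_after = [flag_transaction(s, threshold=0.9) for s in new_market_scores]
```

No negotiation, no worldview: the 'conservative philosophy' dissolves into a single default argument.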
IN DIFFERENT DOMAINS

Where it shows up at work.

The same glitch looks different depending on the terrain. Finance, medicine, a relationship, a team — same mechanism, different costume.

Finance & investing

Traders and investors frequently attribute intentionality to market movements, interpreting a stock price decline as 'the market punishing the company' or algorithmic trading patterns as deliberate manipulation, when these movements often reflect aggregate statistical dynamics without any coordinating agent.

Medicine & diagnosis

Patients commonly interpret bodily symptoms as their body 'fighting back' or 'sending a message,' and may attribute purposeful decision-making to diseases ('the cancer is clever'), which can distort treatment adherence when patients attempt to negotiate with or outsmart their illness rather than following evidence-based protocols.

Education & grading

Teachers may interpret a student's off-task behavior as deliberate defiance rather than the result of confusion, distraction, or executive function difficulties, leading to punitive responses when supportive scaffolding would be more effective.

Relationships

Partners routinely over-attribute intentionality to each other's ambiguous behaviors—interpreting a forgotten anniversary as a deliberate signal of disinterest, or reading hostile intent into a neutral facial expression—escalating conflicts that originated from benign oversights.

Tech & product

Users develop mental models of software as having goals and preferences, saying things like 'the app doesn't want me to find this setting.' This leads designers to anthropomorphize their own systems during development, attributing user-like motivations to algorithms and overlooking purely mechanical explanations for unexpected behavior.

Workplace & hiring

Employees commonly interpret organizational decisions—restructurings, policy changes, office moves—as personally targeted actions by management rather than systemic responses to business constraints, fueling resentment and conspiracy thinking within teams.

Politics & media

Voters and commentators attribute coordinated intentional strategies to political opponents' every misstep, interpreting gaffes, logistical errors, and bureaucratic inertia as deliberate schemes, which deepens partisan distrust and makes good-faith negotiation harder.

HOW TO SPOT IT

Ask yourself…

  • Am I assuming this entity did something on purpose, and could the behavior be explained by accident, mechanism, or randomness instead?
  • Am I attributing beliefs or desires to something that may not have a mind—a system, an algorithm, an animal, or even a person acting on autopilot?
  • Would I explain this behavior differently if I knew the full mechanical or situational context behind it?
HOW TO DEFEND AGAINST IT

The playbook.

  • Practice the 'Three Explanations' rule: before settling on an intentional explanation, force yourself to generate one accidental and one mechanistic/situational explanation for the same behavior.
  • Ask 'What would a camera see?' to strip away mental-state attributions and focus on the observable physical sequence of events.
  • When attributing intent to technology, pause and reframe: 'What rule or algorithm could produce this output without any goals or awareness?'
  • Use Hanlon's Razor as a deliberate check: 'Never attribute to malice that which is adequately explained by ignorance, accident, or system design.'
  • In interpersonal conflicts, explicitly ask the other person about their intent before acting on your assumption.
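The 'what rule or algorithm could produce this output without any goals or awareness?' reframe from the playbook can be made concrete with a toy sketch. The recommender below is hypothetical and invented for illustration: it resurfaces a sensitive topic purely because a frequency count says so, with no model of the user's feelings at all.

```python
# Hypothetical sketch: a recommender that ranks topics purely by past
# engagement counts will keep resurfacing a sensitive topic the user once
# clicked. The "worrying" recommendation is just the argmax of a frequency
# table; no beliefs, no desires, no awareness.

from collections import Counter

def recommend(click_history: list[str], top_n: int = 1) -> list[str]:
    """Return the most-clicked topics; pure counting, nothing more."""
    counts = Counter(click_history)
    return [topic for topic, _ in counts.most_common(top_n)]

history = ["gardening", "medical_condition", "medical_condition", "recipes"]
top_pick = recommend(history)
```

The mechanistic explanation ('a counter saw two clicks') fully accounts for the behavior that the intentional explanation ('it knows what gets under her skin') was invented to cover.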
FAMOUS CASES

In history.

  • The widespread public attribution of deliberate malice to IBM's Deep Blue after it defeated Garry Kasparov in 1997, with Kasparov himself accusing the machine of making 'human-like' moves and suggesting human intervention.
  • Post-9/11 intelligence failures were partially attributed to analysts over-interpreting ambiguous signals as evidence of coordinated intentional plots, while under-weighting systemic noise and bureaucratic dysfunction.
  • Public reaction to the 2010 'Flash Crash' in US stock markets, where many attributed purposeful manipulation to algorithmic trading systems that were actually responding to feedback loops without any intentional design to crash markets.
WHERE IT COMES FROM
Academic origin

Daniel Dennett introduced the concept in his 1971 essay 'Intentional Systems' and formalized it in his 1987 book 'The Intentional Stance.' The cognitive bias dimension—the automatic over-attribution of intentionality—was empirically demonstrated by Evelyn Rosset in 2008.

Evolutionary origin

In ancestral environments, quickly inferring the intentions of other agents—predators, rivals, potential allies—was critical for survival. The cost of a false positive (assuming a rustling bush was a predator when it was just wind) was far lower than a false negative (assuming the predator was just wind). This asymmetric error cost favored an overactive agency and intention detection system, making the intentional stance the brain's default interpretive mode.

IN AI SYSTEMS

How the machines inherit it.

AI systems trained on human-generated text inherit the intentional stance in their language patterns, routinely describing other systems, markets, and natural phenomena using intentional vocabulary ('the model wants,' 'the algorithm tries to'). Additionally, users adopt the intentional stance toward AI itself, attributing beliefs and desires to LLMs, which distorts expectations about AI reliability and fuels both over-trust and conspiratorial fear about AI 'goals.'

FREE FIELD ZINE

10 glitches quietly running your life.

A free field-zine PDF — ten cognitive glitches named, illustrated, with a defense move for each. Plus the weekly Glitch Report on Fridays — one bias named, two spotted in the wild, one defense move. Unsubscribe any time.


LAUNCH PRICE

Train against your blindspots.

50 cards are free to preview. Buyers unlock the rest of the deck plus the interactive training — Spot-the-Bias Quiz unlimited, Swipe Deck with spaced repetition, My Blindspots, Decision Pre-Flight, the Printable Deck + Cheat Sheets, and the Field Guide e-book. $29.50 (was $59).

Unlock the full deck

Everything below — yours forever. Pay once, use across every device.

Half-off launch — limited to the first 100 readers. Auto-applied at checkout.
$29.50 (was $59)
one-time payment · lifetime access
  • All interactive digital cards — search, filter, flip, shuffle on any device
  • Five training modes — Spot-the-Bias Quiz, Swipe Deck, Pre-Flight, Blindspots, Journal
  • Curated Lenses + Decision Templates + Defense Playbook
  • Printable Deck PDFs + Field Guide e-book + Cheat Sheets + Anki Export
  • Every future improvement, included
Unlock  $29.50

30-day refund · no questions asked
