Observer-Expectancy Effect

aka Experimenter-Expectancy Effect · Expectancy Bias · Experimenter Effect

A researcher's or authority's expectations unconsciously influencing the behavior of those being observed, producing self-fulfilling results.

WHAT IT IS

The glitch, explained plainly.

Imagine you hand a friend a plant and say, 'This one is really special — it's supposed to grow really fast.' Without meaning to, your friend gives it a little extra water, puts it in the sunniest spot, and checks on it more often. The plant actually does grow faster — but not because it was special. It grew faster because your friend treated it differently based on what you told them. That's what happens whenever someone expects a certain result: they accidentally make it come true without knowing they did anything.

The observer-expectancy effect occurs when someone holding authority or conducting an evaluation unconsciously transmits their pre-existing beliefs to the people they are assessing, causing those people to behave in ways that confirm the original expectation. This transmission happens through subtle, often imperceptible channels: micro-expressions, tone of voice, differential attention, leading questions, or body language shifts that the observer is entirely unaware of producing. The effect is particularly insidious because it creates a closed feedback loop — the observer sees the expected outcome, which reinforces their original belief, and they never realize they manufactured the very evidence they are interpreting. Unlike deliberate fraud or conscious steering, the observer-expectancy effect operates below the threshold of awareness, making it one of the most persistent threats to objectivity in any evaluative context.

SOUND FAMILIAR?

Where it shows up.

  1. Dr. Patel is running a clinical trial for a new antidepressant. Although the study is single-blind, she knows which patients received the real drug. During follow-up interviews, she unconsciously nods encouragingly and asks more open-ended questions to patients in the treatment group, while being more clipped and clinical with the placebo group. The treatment group reports significantly more improvement than the placebo group.
  2. A psychology student is assigned five lab rats and told they have been specially bred for maze intelligence. She handles them more gently, speaks to them in a soothing tone, and patiently repositions them when they stall. Her rats learn the maze significantly faster than those of a classmate who was told his rats were bred from a dull line. Both sets of rats were actually identical.
  3. Marcus, a venture capitalist, reads a glowing profile of a startup founder before their pitch meeting. During the presentation, he leans forward attentively, laughs at the founder's jokes, and asks constructive 'how might we scale this?' questions rather than adversarial ones. After the meeting, he tells his partners the founder was exceptionally compelling — never considering that his own behavior during the meeting elevated the quality of the pitch he received.
  4. A wine competition judge is told that the next flight of wines comes from prestigious Bordeaux estates. She takes longer with each sip, finds more complex notes in her tasting journal, and scores the wines an average of eight points higher than she scored nearly identical wines presented without provenance information earlier that day.
  5. A data scientist is validating a machine learning model she spent six months building. When reviewing borderline classification cases, she consistently resolves ambiguous examples in favor of the model's prediction, subtly inflating accuracy metrics. She genuinely believes she is being objective because each individual judgment feels defensible in isolation, but her cumulative decisions systematically favor the hypothesis that her model works well.

IN DIFFERENT DOMAINS

Where it shows up at work.

The same glitch looks different depending on the terrain. Finance, medicine, a relationship, a team — same mechanism, different costume.

Finance & investing

Analysts who expect a company to outperform tend to interpret ambiguous earnings data more favorably, ask more optimistic questions during earnings calls, and write reports that selectively highlight confirming metrics — which can then influence investor behavior in ways that temporarily prop up the stock price, seemingly validating the original forecast.

Medicine & diagnosis

Physicians who suspect a particular diagnosis may unconsciously conduct more thorough examinations of relevant symptoms while overlooking contradictory signs, or ask leading questions that steer patients toward confirming the expected condition. In drug trials, unblinded investigators may rate subjective outcomes more favorably for the treatment group through differential warmth or attention.

Education & grading

Teachers who are told certain students are 'gifted' or 'at-risk' unconsciously adjust their teaching behavior — offering more wait time, richer feedback, and greater encouragement to expected high-performers while providing less challenging material and fewer opportunities to students expected to struggle, thereby widening achievement gaps that appear to validate the original labels.

Relationships

When someone is told by mutual friends that a new acquaintance is 'really warm and fun,' they approach that person with more openness and enthusiasm, which elicits genuinely warmer and more engaging behavior from the other person, seemingly confirming the description.

Tech & product

Product teams that expect a new feature to improve engagement may unconsciously design A/B tests with subtle advantages for the treatment group — better onboarding flows, more prominent placement, or more polished visuals — then attribute the resulting improvement to the feature itself rather than the differential treatment.

Workplace & hiring

Managers who have already formed an impression of an employee's potential unconsciously assign more visible projects, provide more developmental feedback, and advocate more strongly for those they expect to succeed, creating performance differentials that appear to confirm the manager's original assessment during reviews.

Politics & media

Pollsters and journalists who expect a certain electoral outcome may frame questions in ways that elicit confirming responses, selectively quote respondents who match their narrative, or give more airtime to evidence supporting their prediction — subtly shaping public opinion toward the expected result.

HOW TO SPOT IT

Ask yourself…

  • Am I already expecting a particular outcome from this person or situation before I've gathered evidence?
  • Could my behavior — tone, body language, question framing — be subtly communicating what I want to see happen?
  • If someone who expected the opposite result were observing this same situation, would they reach a different conclusion?

HOW TO DEFEND AGAINST IT

The playbook.

  • Implement double-blind protocols wherever possible so that evaluators do not know which condition or group they are assessing.
  • Standardize all instructions, interactions, and measurement procedures in writing before any evaluation begins.
  • Use automated data collection and objective metrics to minimize opportunities for subjective judgment.
  • Preregister hypotheses and analysis plans before collecting data to prevent post-hoc rationalization.
  • Assign different people to run experiments, collect data, and analyze results so no single person's expectations can contaminate the entire pipeline.
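The blinding step in the playbook above can be made concrete in code. A minimal Python sketch, assuming outcome records are stored as simple dicts with a `condition` field; `blind_conditions` is a hypothetical helper written for illustration, not a standard library function:

```python
import random

def blind_conditions(records, seed=None):
    """Replace real condition names with opaque codes so the analyst
    scoring outcomes cannot tell treatment from control.
    Returns the blinded records and a key for unblinding later."""
    rng = random.Random(seed)
    conditions = sorted({r["condition"] for r in records})
    codes = [f"group_{i}" for i in range(len(conditions))]
    rng.shuffle(codes)  # random assignment of code to condition
    key = dict(zip(conditions, codes))
    blinded = [{**r, "condition": key[r["condition"]]} for r in records]
    return blinded, key

records = [
    {"id": 1, "condition": "treatment", "score": 7},
    {"id": 2, "condition": "placebo", "score": 5},
]
blinded, key = blind_conditions(records, seed=42)
# The analyst only ever sees "group_0" / "group_1"; the key stays
# with a third party until the analysis is frozen.
```

The point of the sketch is the separation of roles: whoever holds the unblinding key should not be the person making judgment calls on the data.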

FAMOUS CASES

In history.

  • Clever Hans (early 1900s): A horse appeared to perform arithmetic, but was actually responding to involuntary body language cues from questioners who knew the correct answers.
  • Rosenthal & Fode rat maze study (1963): Students told their lab rats were 'maze-bright' obtained significantly better maze performance than students told their rats were 'maze-dull,' despite all rats being genetically identical.
  • Rosenthal & Jacobson 'Pygmalion in the Classroom' (1968): Teachers told that randomly selected students were about to experience an intellectual growth spurt saw those students gain significantly more IQ points than control students over the school year.
  • Cyril Burt's twin studies (1950s–1960s): Research purporting to show intelligence was primarily hereditary was later found to contain fabricated data, including invented co-authors and suspiciously identical correlation coefficients across different sample sizes — a case of scientific fraud rather than mere expectancy bias.

WHERE IT COMES FROM

Academic origin

Robert Rosenthal formalized the concept through his foundational studies beginning in 1963 with Kermit Fode, and expanded it significantly in 1968 with Lenore Jacobson in 'Pygmalion in the Classroom.' The intellectual precursor was the investigation of Clever Hans by Oskar Pfungst in 1907.

Evolutionary origin

In ancestral social groups, the ability to rapidly read and conform to the expectations of dominant group members conferred survival advantages. Individuals who could detect subtle cues from leaders about desired behavior — and adjust accordingly — were more likely to maintain social standing and receive group protection. Conversely, those in positions of influence who could nonverbally coordinate group behavior without explicit commands could mobilize collective action more efficiently.

IN AI SYSTEMS

How the machines inherit it.

In machine learning, observer-expectancy effects enter through researchers' choices during data labeling, feature selection, and model evaluation. Annotators who expect certain patterns in training data may resolve ambiguous cases in ways that encode their assumptions. Researchers evaluating their own models may unconsciously choose metrics, thresholds, or test sets that favor their hypothesis. The effect also manifests when AI developers tune hyperparameters based on expectations rather than principled search, or when they selectively report results from runs that confirm their architecture's superiority.
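One common countermeasure for the evaluation case is to blind the reviewer to model identity when judging ambiguous or borderline outputs. A minimal Python sketch; the `blind_pairwise_review` helper and its input format are illustrative assumptions, not an established API:

```python
import random

def blind_pairwise_review(outputs_a, outputs_b, seed=0):
    """Present each pair of model outputs in a random left/right order
    so the reviewer cannot favor a model by recognizing its slot.
    Returns the shuffled trials plus an answer key for later scoring."""
    rng = random.Random(seed)
    trials, answer_key = [], []
    for a, b in zip(outputs_a, outputs_b):
        if rng.random() < 0.5:
            trials.append((a, b))
            answer_key.append(("A", "B"))
        else:
            trials.append((b, a))
            answer_key.append(("B", "A"))
    return trials, answer_key

# The reviewer sees only anonymous left/right pairs; the key is
# applied after all preference judgments are recorded.
trials, key = blind_pairwise_review(["answer from A", "another from A"],
                                    ["answer from B", "another from B"])
```

The same idea extends to label adjudication: strip any field that reveals which hypothesis an ambiguous example would support before a human resolves it.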

Read more on Wikipedia
FREE FIELD ZINE

10 glitches quietly running your life.

A free field-zine PDF — ten cognitive glitches named, illustrated, with a defense move for each. Plus the weekly Glitch Report on Fridays — one bias named, two spotted in the wild, one defense move. Unsubscribe any time.

LAUNCH PRICE

Train against your blindspots.

50 cards are free to preview. Buyers unlock the rest of the deck plus the interactive training — Spot-the-Bias Quiz unlimited, Swipe Deck with spaced repetition, My Blindspots, Decision Pre-Flight, the Printable Deck + Cheat Sheets, and the Field Guide e-book. $29.50 (launch price; regularly $59).

Unlock the full deck

Everything below — yours forever. Pay once, use across every device.

Half-off launch — limited to the first 100 readers. Auto-applied at checkout.
$29.50 (was $59)
one-time payment · lifetime access
  • All interactive digital cards — search, filter, flip, shuffle on any device
  • Five training modes — Spot-the-Bias Quiz, Swipe Deck, Pre-Flight, Blindspots, Journal
  • Curated Lenses + Decision Templates + Defense Playbook
  • Printable Deck PDFs + Field Guide e-book + Cheat Sheets + Anki Export
  • Every future improvement, included
Unlock  $29.50

30-day refund · no questions asked
