📖 Book Club
Beginner
~1–2h

You Look Like a Thing and I Love You

How AI Works, Why It's Weird, and Why It's Hilarious

Janelle Shane
For the technical foundations behind these AI failures, see our AI Fundamentals course. For a critical evaluation framework, try AI Critical Thinking.

TL;DR:

AI is hilariously limited: it learns patterns, not meaning, so it finds loopholes, optimizes metrics instead of goals, and fails spectacularly outside its training—keep humans in the loop and audit for bias.

About the Book

Author: Janelle Shane (AI researcher, aiweirdness.com) • Published: 2019

The 5 Principles of AI Weirdness


1. The Danger Is Scarcity, Not Excess: AI Is NOT Too Intelligent

Example: AI generates "Anus" as a cat name. It learned letter patterns, not meaning.

2. Worm Brain: Highly Specialized, Not Adaptive

Example: AI recognizes cats perfectly in photos, but can't recognize cats in trees (a different context).

3. Doesn't Understand the Problem: Optimizes the Metric, Not the Real Goal

Example: A tumor detector learns to recognize rulers (present in training photos) instead of tumors.

4. Follows Instructions Literally: Finds Loopholes You Didn't Expect

Example: A robot told to "run fast" learns to do backflips (technically moving quickly).

5. Path of Least Resistance: Chooses the Easiest Solution

Example: AI learns "green field = sheep" instead of recognizing actual sheep features.
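The "green field = sheep" shortcut can be reproduced with a toy classifier. In the invented dataset below, background color happens to predict the label better than the (sometimes occluded) animal itself, so a lazy one-feature learner latches onto the background and misses a sheep on a beach. This is a sketch for illustration, not a real training pipeline.

```python
# Toy demo of shortcut learning: background color accidentally predicts
# the label better than the animal feature, so a one-feature majority-vote
# "model" learns the background and ignores the sheep entirely.
from collections import Counter

# Each training example: (background, animal_visibility, label; 1 = sheep)
train = [
    ("green", "sheep",   1),
    ("green", "unclear", 1),
    ("green", "sheep",   1),
    ("road",  "none",    0),
    ("road",  "unclear", 0),
    ("beach", "none",    0),
]

def shortcut_score(data, idx):
    """Count how many examples a majority-vote rule on feature idx gets right."""
    votes = {}
    for ex in data:
        votes.setdefault(ex[idx], Counter())[ex[2]] += 1
    return sum(c.most_common(1)[0][1] for c in votes.values())

# The lazy learner picks the single most predictive feature...
feature = max((0, 1), key=lambda idx: shortcut_score(train, idx))

# ...and memorizes a value -> majority-label rule for it.
votes = {}
for ex in train:
    votes.setdefault(ex[feature], Counter())[ex[2]] += 1
rule = {value: counter.most_common(1)[0][0] for value, counter in votes.items()}

# A clearly visible sheep on a beach: the shortcut model predicts "no sheep"
# (0), because all it ever learned was "green background = sheep".
prediction = rule.get(("beach", "sheep", None)[feature], 0)
```

Here the background feature scores 6/6 in training while the animal feature scores only 5/6, so the model picks the background; the test sheep on a beach is then misclassified.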

The Funniest Failures

Cat Names

Examples:

  • Tuxedos Calamity McOrange
  • Anus
  • Poop
  • Retchion

Lesson: AI learns letter patterns, not meaning

Recipes

Examples:

  • Handfuls of Broken Glass
  • 1000 Liters of Olive Oil (for one cookie)

Lesson: AI mimics structure without understanding physics

Jokes

Examples:

  • Why did the chicken cross the road? To get to the other side of the equation.

Lesson: AI learns joke structure but not humor

Pickup Lines

Examples:

  • You look like a thing and I love you
  • Are you a candle? Because you're hot

Lesson: AI mimics romantic language without understanding romance

When AI Fails in the Real World

Tesla Autopilot: Couldn't recognize a truck from the side

Cause: Trained only on highway data with trucks seen from behind
Consequence: Fatal accident
Lesson: AI fails outside its training distribution

Amazon HR AI: Discriminated against women

Cause: Trained on male-dominated historical hiring data
Consequence: Perpetuated gender bias
Lesson: Historical bias in data → biased AI

Prison Prediction: Self-fulfilling prophecy

Cause: Predicted high-crime areas based on policing patterns
Consequence: More policing → more arrests → "confirms" the prediction
Lesson: Measurement bias creates feedback loops

Understanding AI Bias

AI Bias Feedback Loop

Historical Bias: Training data reflects real-world injustice
Example: Amazon's HR tool was trained on a male-dominated workforce

Representation Bias: Underrepresented groups perform worse
Example: Facial recognition fails on dark skin (80% light-skinned training data)

Measurement Bias: What you measure ≠ what you think you measure
Example: Prison prediction measures policing patterns, not crime

Aggregation Bias: Works well on average, fails for subgroups
Example: Medical AI trained on men fails for women
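Aggregation bias is easy to miss when you only report a single accuracy number. A minimal sketch, using invented predictions, that computes per-subgroup accuracy alongside the aggregate:

```python
# Per-subgroup accuracy report: the overall number can look acceptable
# while one subgroup is badly served. All data here is invented.
from collections import defaultdict

# Each record: (subgroup, true_label, predicted_label)
results = [
    ("men",   1, 1), ("men",   0, 0), ("men",   1, 1), ("men",   0, 0),
    ("women", 1, 0), ("women", 0, 0), ("women", 1, 0), ("women", 0, 1),
]

def accuracy(records):
    return sum(true == pred for _, true, pred in records) / len(records)

by_group = defaultdict(list)
for record in results:
    by_group[record[0]].append(record)

overall = accuracy(results)   # 0.625: looks "okay on average"
per_group = {group: accuracy(recs) for group, recs in by_group.items()}
# per_group: men 1.0, women 0.25 -- the average hides a failing subgroup
```

The point is procedural, not numerical: any evaluation that stops at the aggregate would never surface the 1.0-versus-0.25 gap.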

Human-AI Partnership


AI Needs Humans For...

Problem Formulation: AI doesn't understand what you actually want
Data Selection: AI can't judge whether data is representative or biased
Result Evaluation: AI doesn't know if outputs make sense
Bias Detection: AI can't recognize its own blind spots

Key Takeaways

What AI CAN Do

  • Find patterns in large datasets
  • Perform highly specialized tasks
  • Accelerate human work
  • Surprise us (bizarrely)

What AI CAN'T Do

  • Truly understand problems
  • Function outside training data
  • Develop general intelligence
  • Apply common sense

What WE Should Do

  • Stop treating AI as magical
  • Keep humans in the loop
  • Audit training data for bias
  • Expect failures and plan for them

Our Take on the "5 Principles of AI Weirdness"

Shane's principles are brilliant for understanding why AI fails. We take them one step further: how do you build workflows that catch those failures before they reach production?

From Funny Anecdotes to Robust Workflows — Our approach to AI weirdness

Why Shane's Principles Still Matter in 2026

Shane's core idea — that AI systems are relentless optimizers of their training objective, not intuitive thinkers — is even more visible with today's capable large language models. They are impressive pattern machines that can mimic reasoning, humor, and style, but they still have no underlying world model in the human sense. The weird edge cases are not exceptions; they are the default whenever your instructions fall outside the training distribution.

We particularly like Shane's insistence that "AI does not really understand what you mean, only what you say." In 2026, this shows up in the way models confidently hallucinate citations, fabricate API responses, or invent non-existent regulations if you prompt them carelessly. We translate this into concrete habits: always specify constraints, ask for step-by-step reasoning, and verify any output with legal, financial, or safety impact. For practical prompt design techniques, see our Prompt Engineering course.

Beyond Anecdotes: Building Robust Workflows

Where we go beyond the book is in how we recommend users instrument and test AI behavior. Shane illustrates weirdness with funny anecdotes; we treat those anecdotes as a starting point for robust workflows:

  • Route math tasks through dedicated tools, not the LLM
  • Ask the model to self-check its own logic
  • Red-team prompts: "What might be wrong?"
  • Run external fact-checking on critical outputs
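The first of these habits, routing math to a dedicated tool instead of trusting the LLM's arithmetic, can be sketched as a small dispatcher. `call_llm` is a hypothetical placeholder for whatever model API you actually use; the arithmetic evaluator is deliberately restricted to four operators.

```python
# Route arithmetic to a deterministic evaluator instead of the LLM.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr):
    """Evaluate +, -, *, / expressions without exec/eval."""
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

def call_llm(prompt):
    # Hypothetical placeholder, not a real API call.
    return "(model-generated answer to: %s)" % prompt

def answer(prompt):
    try:
        # Looks like pure arithmetic? Compute it with the tool.
        return str(safe_eval(prompt))
    except (ValueError, SyntaxError):
        # Everything else goes to the model.
        return call_llm(prompt)

# answer("17 * 23") -> "391" (computed, not generated)
```

The design choice: anything the deterministic path can handle never touches the model, so the model's Principle #3 weakness (optimizing for answer-shaped text) is simply taken out of the loop for math.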

2026 Examples of AI Weirdness (and What to Learn from Them)

Shane's book used toy neural networks for examples. Today's models are far more capable — but the same failure patterns persist at a higher level of sophistication. Here are three 2026-class weirdness patterns we see regularly, along with practical fixes.

2026 AI Weirdness: Three Failure Patterns with Practical Fixes

Pattern #1: Over-Confident Hallucination

Shane's Principle #3 (Doesn't Understand the Problem) at scale

Ask a model for "five recent AI safety regulations in Europe" in a single prompt, and it may mix real laws with invented ones — including plausible-sounding acronyms and dates. The system is not trying to deceive you; it is simply optimizing for "answer-shaped text" rather than factual accuracy. This is Shane's Principle #3 at industrial scale.
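One pragmatic fix is a verification gate: anything the model cites must match a trusted reference list before it reaches the user. A minimal sketch in which both the allowlist and the fabricated regulation name are invented for illustration; a real system would query an authoritative source instead of a hardcoded set.

```python
# Minimal "fact gate": accept only regulation names found in a trusted
# reference list; everything else is flagged for human review.
TRUSTED_REGULATIONS = {"EU AI Act", "GDPR"}  # hypothetical allowlist

def split_claims(cited_names):
    """Separate model-cited names into verified and flagged lists."""
    verified = [name for name in cited_names if name in TRUSTED_REGULATIONS]
    flagged = [name for name in cited_names if name not in TRUSTED_REGULATIONS]
    return verified, flagged

# The second item is an invented, plausible-sounding fabrication.
model_output = ["EU AI Act", "European Neural Safety Directive 2025"]
verified, flagged = split_claims(model_output)
# verified: ["EU AI Act"]; flagged catches the fabrication
```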

Pattern #2: Tool-Using Agent Failures

Shane's Principle #4 (Follows Instructions Literally) meets real APIs

Give an autonomous agent access to a search API, a calendar API, and an email API, and you may find it scheduling meetings with itself, emailing incomplete drafts, or getting stuck in loops calling the same failing endpoint. None of this looks like the sleek "AI assistant" demos in marketing videos — but it is exactly what you should expect when you give an optimizer tools without enough guardrails.
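A basic guardrail against the "stuck in a loop on a failing endpoint" behavior is to cap retries per distinct tool call. `call_tool` below is a hypothetical stand-in that always fails, just to show the cap kicking in.

```python
# Guardrail sketch: stop an agent from hammering the same failing tool call.
class ToolGuard:
    def __init__(self, max_retries=3):
        self.max_retries = max_retries
        self.failures = {}  # (tool, args) -> failure count

    def allow(self, tool, args):
        """Permit a call only while its failure count is under the cap."""
        return self.failures.get((tool, args), 0) < self.max_retries

    def record_failure(self, tool, args):
        key = (tool, args)
        self.failures[key] = self.failures.get(key, 0) + 1

def call_tool(tool, args):
    # Hypothetical stand-in for a real tool invocation; always fails here.
    raise TimeoutError("endpoint unavailable")

guard = ToolGuard(max_retries=3)
attempts = 0
while guard.allow("search_api", ("quarterly report",)):
    attempts += 1
    try:
        call_tool("search_api", ("quarterly report",))
        break
    except TimeoutError:
        guard.record_failure("search_api", ("quarterly report",))
# attempts == 3: the agent gives up instead of looping forever
```

Keying the failure count on the exact (tool, arguments) pair, rather than the tool alone, lets the agent try a different query while still blocking the identical failing call.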

Pattern #3: Social & Cultural Weirdness

Shane's Principle #5 (Path of Least Resistance) in content generation

AI systems generate content that is technically fine but culturally off: cheerful marketing copy for a serious medical topic, or imagery that accidentally encodes biases in race, gender, or age. These failures are often subtle and will not trigger obvious red flags in automated evaluation.

This is where human critical thinking — and exactly the kind of "look at what the AI actually did, not what you hoped it would do" mindset from Shane's work — becomes non-negotiable. In our Academy, we deliberately include such "almost right but wrong in important ways" examples in exercises. For a structured framework, see AI Critical Thinking.

Apply It: Test the 5 Principles

  1. Pick an AI tool you use regularly (e.g. ChatGPT, Midjourney, Copilot).
  2. Test Principle 1 (Narrow Intelligence): Ask it something completely outside its domain. How does it fail?
  3. Test Principle 2 (Loopholes): Give it a vague instruction and see if it finds an unexpected shortcut.
  4. Test Principle 3 (Wrong Optimization): Ask it to optimize something and check if it optimizes the metric instead of the goal.
  5. Test Principle 4 (Pattern ≠ Understanding): Ask it to explain WHY something is true. Does it pattern-match or truly reason?
Reflect: Which principle surprised you most? How does understanding these limitations change how you'll use AI tools going forward?

Key Insights: What You've Learned

1. AI is hilariously limited because it learns patterns, not meaning: it optimizes metrics instead of goals, finds loopholes in instructions, and fails spectacularly outside training data. Understanding these limitations helps you use AI tools more effectively.

2. Janelle Shane's five principles reveal AI's weirdness: AI has no common sense, finds unexpected shortcuts, optimizes the wrong thing, lacks understanding of context, and requires careful design to avoid bias. Keep humans in the loop and audit for these issues.

3. Use AI tools wisely by recognizing their limitations: test edge cases, verify outputs, understand training data biases, design prompts to avoid loopholes, and maintain human oversight. Treat AI as a powerful but flawed assistant that needs careful management.

Test Your Knowledge

Complete this quiz to test your understanding of Shane's 5 principles of AI weirdness and limitations.
