📖 Book Club
Intermediate
~2–3h

Co-Intelligence: Living and Working with AI

Master Practical AI Collaboration

Master practical AI collaboration with Ethan Mollick's research-backed framework. Learn the four rules, understand the jagged frontier, and choose the right collaboration model for your work. For hands-on AI workflows, continue with our AI in Practice course.

Based on the NYT Bestseller by Ethan Mollick • Wharton Professor • 2024

TL;DR:

Co-intelligence means humans and AI working together: follow the four rules (always invite AI to the table, be the human in the loop, treat AI like a person, assume this is the worst AI you'll use), map the jagged frontier, and choose a centaur or cyborg collaboration model.

Co-Intelligence by Ethan Mollick — 6 chapters: The Alien, Models, Rules, Human, Frontier, Future

Core Principles

AI is Alien: Not human-like, not mechanical—genuinely different cognition.

The Jagged Frontier: Capabilities are unevenly distributed and counterintuitive.

Human in the Loop: Active oversight prevents "falling asleep at the wheel."

Future-Proof: Assume AI will improve dramatically—build flexible workflows.

AI as Alien Intelligence

AI is neither human-like nor purely mechanical—it's genuinely alien. Understanding this alienness is key to effective collaboration.

  • Unexpected strengths: Beats humans on creativity tests despite failing basic math
  • Counterintuitive failures: Easy tasks for humans are hard for AI, and vice versa
  • Mimicry without understanding: Displays human-like behavior without consciousness
  • Pattern matching: Sophisticated statistical learning, not genuine comprehension

The Four Rules for Co-Intelligence

Ethan Mollick's practical framework for effective human-AI collaboration, distilled from research and real-world experience.

1. Always Invite AI to the Table

The only way to learn is through engagement. Skepticism without experimentation prevents adaptation.

Application: Pilot AI tools, document successes and failures, iterate based on results.

2. Be the Human in the Loop

AI confidently provides incorrect information. Humans must maintain active oversight and verification.

Application: Design workflows ensuring humans make key decisions, verify outputs, and provide feedback.

3. Treat AI Like a Person (With Caveats)

Give AI roles and context. Humans are good at directing people—use this natural ability.

Application: Specify AI's role ("You are an expert...") and provide context for better results.

4. Assume This Is the Worst AI You'll Use

AI capabilities expand rapidly. Build processes assuming significant future improvements.

Application: Create flexible workflows that adapt as AI capabilities evolve.
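Rule 3's advice to give AI a role, context, and constraints can be sketched as a small prompt builder. This is a minimal illustration in Python; the function name and the role/context/task/constraints layout are our own convention, not something prescribed in the book.

```python
def build_prompt(role: str, context: str, task: str, constraints: list[str]) -> str:
    """Assemble a role-based prompt, per the 'treat AI like a person' rule.

    Illustrative sketch: the field layout here is an assumption, not
    Mollick's exact template.
    """
    lines = [
        f"You are {role}.",      # give the AI a persona (Rule 3)
        f"Context: {context}",   # background it would otherwise lack
        f"Task: {task}",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]  # explicit guardrails
    return "\n".join(lines)

prompt = build_prompt(
    role="an expert marketing strategist",
    context="B2B SaaS startup, 10-person team, launching a new analytics feature",
    task="Draft three subject lines for the launch email.",
    constraints=["Under 60 characters each", "No jargon", "Flag any claims I must verify"],
)
print(prompt)
```

The point is simply that a structured prompt with a named role and explicit constraints tends to outperform a one-line generic instruction.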

Retrieval Check (30 seconds)

1) Without looking: what are the four rules?
Always invite AI to the table • Be the human in the loop • Treat AI like a person (with role/context/constraints) • Assume this is the worst AI you’ll ever use.
2) Which rule protects you most from hallucinations?
Be the human in the loop: verify outputs, keep judgment, and design checkpoints—especially in high-stakes work.

The Jagged Frontier

AI capabilities are unevenly distributed—some tasks are easily within AI's reach, while similar-seeming tasks remain impossible. This creates a "jagged frontier" of unpredictable capabilities.


Retrieval Check (real work)

For one task you do weekly: where is your jagged frontier?

Answer format: “AI is great at ___, unreliable at ___, and dangerous at ___ for this task.”

Example: Great at drafting 10 variants, unreliable at citing sources, dangerous at sending unreviewed emails.

Centaur vs Cyborg Models

Two distinct collaboration modes, each suited to different contexts and risk profiles.


Centaur Model

Clear division of labor with humans as conductor

  • Human defines problems and orchestrates workflow
  • AI functions as powerful subordinate
  • Clear delineation of responsibility
  • More conservative risk posture

Example: Recruiter uses AI to screen resumes, but humans make final hiring decisions based on cultural fit.

Cyborg Model

Seamless integration with tasks passed back and forth

  • Humans and AI co-create fluidly
  • AI takes more initiative
  • Forms "blended intelligence"
  • More aggressive risk posture

Example: Writer and AI collaborate iteratively—drafting, suggesting improvements, revising in continuous loop.

Human Skills Matter More

As AI handles routine cognitive tasks, uniquely human capabilities become more valuable, not less.

Strategic judgment: Deciding what problems matter
Emotional intelligence: Interpersonal connection and empathy
Ethical discernment: Value judgments and moral reasoning
Creative vision: Aesthetic choices and innovation
Teaching & mentoring: Developing others
Context understanding: Nuance and cultural awareness

Four Future Scenarios

Rather than predicting, prepare for multiple possible AI futures. Our choices shape which scenario unfolds.

As Good As It Gets

AI plateaus around GPT-4-level capability: incremental improvement continues, but there is no exponential acceleration.

Implication: Proactive adaptation needed, but substantial room for human-driven innovation remains.

Slow & Steady Progress

Linear advancement with notable breakthroughs every few years.

Implication: Time for institutions and workers to adapt while maintaining human agency.

Exponential Acceleration

Each AI generation dramatically surpasses the last.

Implication: Rapid institutional change required. High stakes for governance.

Superintelligent Singularity

AGI matches or exceeds human capabilities across all domains.

Implication: Massive potential and grave risks. Unprecedented challenges.

Practical Collaboration Framework

✅ Best Practices

  • Start small: Pilot AI on low-risk tasks, document results
  • Map the frontier: Test systematically to find capability boundaries
  • Stay engaged: Design workflows requiring active human judgment
  • Iterate rapidly: AI improves fast—revisit processes quarterly
  • Provide context: Give AI role, background, and constraints
  • Verify outputs: Never trust AI blindly, especially on important tasks

❌ Common Pitfalls

  • Falling asleep: Passive monitoring leads to judgment atrophy
  • Assuming capability: Don't guess—test empirically
  • Over-delegation: Tasks outside frontier degrade with AI
  • Generic prompts: Vague instructions produce poor results
  • Ignoring AI: Skepticism without engagement prevents learning
  • Lock-in: Building rigid processes around current AI limits

Retrieval Check (decision)

Centaur or Cyborg: which should you use for compliance-sensitive work—and why?
Usually Centaur: keep responsibility and verification explicit. Cyborg can still work, but only with strong guardrails, logging, and human checkpoints.

Apply It: Centaur vs Cyborg

  1. Pick a task you do at work regularly (e.g. writing emails, analyzing data, creating presentations).
  2. Try the Centaur approach: do the thinking/planning yourself, then hand off execution to an AI tool.
  3. Now try the Cyborg approach: co-create the same task with AI from the start, going back and forth iteratively.
  4. Compare: which approach gave you a better result? Which felt more natural for this task?
  5. Deliverable: write a 1-page "Co-Intelligence Playbook" with (a) 3 Centaur tasks, (b) 3 Cyborg tasks, and (c) 5 guardrails you will always apply (verification, data hygiene, sign-off).
Reflect: Pick your top 3 recurring tasks and classify each as Centaur or Cyborg. For one task, write 3 guardrails (e.g. 'no sensitive data', 'human approval before sending', 'verify claims with sources').

Frequently Asked Questions

What is the jagged frontier?
The jagged frontier describes AI's unevenly distributed capabilities. Some tasks are easily within AI's reach while similar-seeming tasks remain impossible. The frontier is counterintuitive and unpredictable, requiring empirical mapping rather than assumptions.
When should I use centaur vs cyborg collaboration?
Use centaur (clear division of labor) when you need conservative risk management and clear responsibility. Use cyborg (seamless integration) for creative work and complex problem-solving where fluid collaboration produces better results. In both models, the human-AI pair outperforms either working alone.
How can I avoid "falling asleep at the wheel" with AI?
Design workflows that keep humans actively engaged—making key decisions, verifying outputs, and providing feedback rather than passively monitoring. Research shows that when humans receive AI recommendations without effort, their own judgment atrophies.
What were the key findings from the BCG study?
BCG studied 758 consultants using AI. Results: 40% higher quality work, 25% faster completion, 12% more tasks completed. However, performance gains only occurred for tasks inside the jagged frontier. Tasks outside the frontier showed degraded performance when consultants relied on flawed AI output.
Why treat AI "like a person"?
Humans naturally anthropomorphize and are remarkably good at directing other people. Giving AI a role, context, or persona (e.g., "You are an expert marketing strategist") consistently produces better results than generic instructions. This uses our natural communication abilities.

Our Take: Where We Disagree

Mollick's framework is practical and research-backed, but his optimism overlooks some realities. Here's where we think the book gets it wrong:

1. Overly Optimistic on Education

Mollick's claim: AI will democratize learning through personalized tutoring (the "2 Sigma Problem" solution).

Reality check: Most educational institutions are moving in the opposite direction—blocking AI sites, using detection tools, and treating AI as a threat. Students are developing anti-AI attitudes. One reviewer noted: "My kids want almost nothing to do with AI" because teachers frame it as cheating.

Our stance: The education system's resistance is structural, not just pedagogical. Mollick underestimates how long institutional change takes. We need policy reform first, not just better AI tools.

2. Underestimates Job Displacement Speed

Mollick's claim: New jobs will emerge as AI automates existing roles.

Reality check: This ignores the speed mismatch. Retraining takes years; AI improves in months. The BCG study showed 25% faster task completion—that's immediate productivity gains that translate to fewer jobs needed.

Our stance: The "new jobs will emerge" argument is historically true but temporally naive. The transition period will be brutal for many workers. We need stronger safety nets NOW, not optimistic predictions.

3. Too Much Faith in "Human in the Loop"

Mollick's claim: Keeping humans in the loop prevents AI failures.

Reality check: The BCG study showed humans "fall asleep at the wheel" when AI is very good. Accuracy dropped from 84% to 60-70% because consultants trusted AI too much. Human oversight degrades over time.

Our stance: "Human in the loop" is necessary but insufficient. We need active engagement mechanisms—forcing humans to justify AI decisions, not just approve them. Passive monitoring doesn't work.

4. Missing: The Inequality Amplification Problem

What Mollick doesn't address: Who benefits from AI productivity gains?

Reality check: The BCG study participants were elite consultants at a top firm. They got 40% better quality and 25% faster completion. But what about workers without access to premium AI tools? What about companies that capture productivity gains as profit instead of raising wages?

Our stance: AI will amplify existing inequalities unless we actively design for equity. Mollick's framework is great for knowledge workers with resources. It says nothing about the majority.

Key Insights: What You've Learned

1. Co-intelligence means humans and AI working together effectively: follow Mollick's four rules (always invite AI to the table, be the human in the loop, treat AI like a person, and assume this is the worst AI you'll use) to build productive collaboration.

2. Understand the jagged frontier where AI excels unpredictably: map which tasks AI handles well versus poorly, choose between centaur (human-AI handoff) or cyborg (deep integration) collaboration models, and adapt your approach based on the specific task and context.

3. Master AI collaboration by applying these principles systematically: start with clear role definition, establish feedback loops, iterate based on results, and continuously refine your collaboration model. Effective co-intelligence transforms AI from a tool into a true partner.

Test Your Knowledge

Complete this quiz to test your understanding of practical AI collaboration and Mollick's framework.
