📖 Book Club
Intermediate–Advanced
~5–6h

The Coming Wave

Technology, Power, and the 21st Century's Greatest Dilemma

Mustafa Suleyman
For a complementary optimistic perspective, see Superagency. For practical AI governance frameworks, try New Laws of Robotics.

TL;DR:

The coming wave of AI and synthetic biology is uniquely dangerous because of four features: asymmetry, hyper-evolution, omni-use, and autonomy. Containment requires four pillars (infrastructure, knowledge, coordination, and institutions) and means steering between chaos and authoritarianism.

About the Book

Author: Mustafa Suleyman (co-founder of DeepMind and Inflection AI; CEO of Microsoft AI) • Published: 2023 • Length: ~320 pages


Four Features of the Coming Wave

1. Asymmetry

Small Actors, Powerful Technologies

Example: A lone individual could engineer a pandemic. Deep-learning training runs that once cost $100M now cost $100K; CRISPR kits cost thousands instead of millions.

2. Hyper-Evolution

Exponential Improvement

Example: ChatGPT reached 1M users in five days and 100M in two months. Capabilities expand monthly; regulation takes years.

3. Omni-Use

Dual-Use Problem

Example: Facial recognition saves lost children AND enables totalitarian surveillance. Same technology, opposite outcomes.

4. Autonomy

Loss of Human Control

Example: Autonomous weapons select targets in milliseconds; no human could decide in time. Who is responsible if one targets civilians?

The Central Dilemma

The Narrow Path

We must navigate between two unacceptable extremes.

Uncontrolled Proliferation

  • Chaos and catastrophe
  • Engineered pandemics
  • Loss of human autonomy
  • Catastrophic accidents

Authoritarian Control

  • Surveillance dystopia
  • Loss of freedom
  • Stifled innovation
  • Totalitarian control

Four-Pillar Containment Framework

1. Auditing & Red-Teaming

Independent testing before deployment

How It Works:

Mandatory audits for large models. "Red teams" attempt to break systems. Results disclosed to regulators.

Goal:

Make risks visible before deployment. Build "trust but verify" culture.

Challenge:

Keeping up with rapid innovation. Defining what constitutes "safe".

2. Licensing & Compliance

Regulate AI like aviation or finance

How It Works:

Licenses required for models above compute threshold. Mandatory documentation. Regular compliance audits.

Goal:

Prevent irresponsible actors. Create accountability. Ensure safety standards.

Challenge:

Could stifle innovation if too restrictive. International coordination needed.
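The compute-threshold rule above can be made concrete with a small sketch. The 1e26 FLOP figure mirrors the reporting threshold in the 2023 US Executive Order on AI; the licensing function itself is a hypothetical illustration, not a real regulatory mechanism.

```python
# Illustrative sketch of a compute-threshold licensing rule. The 1e26
# figure mirrors the reporting threshold in the 2023 US Executive Order
# on AI; the licensing logic itself is hypothetical.

LICENSE_THRESHOLD_FLOP = 1e26  # total training compute, in FLOP

def requires_license(training_flop: float) -> bool:
    """Return True if a training run crosses the licensing threshold."""
    return training_flop >= LICENSE_THRESHOLD_FLOP

# A GPT-3-scale run (~3e23 FLOP) falls well below the line:
print(requires_license(3e23))  # False
print(requires_license(2e26))  # True
```

A single scalar threshold is, of course, a crude proxy for risk; that is part of the "defining what constitutes safe" challenge noted above.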

3. Institutional Oversight

National and international bodies with authority

How It Works:

AI Safety Boards (like FDA). Authority to review, pause, or reject deployments. International coordination.

Goal:

Independent oversight beyond corporate interests. Quick response to risks.

Challenge:

Avoiding regulatory capture. Maintaining expertise and independence.

4. International Treaties

Binding agreements on highest-risk AI

How It Works:

Ban autonomous weapons. Restrict biological engineering AI. Verification mechanisms. Enforcement for violations.

Goal:

Break arms-race logic. Establish red lines. Create enforcement mechanisms.

Challenge:

Getting all nations to agree. Verification. Ensuring compliance.

Critical Risks

  • AI-Accelerated Bioengineering (Catastrophic): AI designs novel pathogens; the barrier to entry drops dramatically.
  • Surveillance AI (High): Enables totalitarian control through perfect tracking and prediction.
  • Autonomous Weapons (High): AI making kill decisions; loss of human oversight.
  • Systemic Instability (High): Cascading failures across interconnected AI systems.

Key Takeaways

The Problem

  • Powerful dual-use technologies proliferating
  • Proliferation cannot be stopped—only managed
  • Current governance inadequate
  • Without action, catastrophic outcomes likely

The Solution

  • Four-pillar containment framework
  • International cooperation breaking arms races
  • Preserve innovation while managing risk
  • Find narrow path between chaos and control

What This Means for AI Tool Users

Suleyman's framework isn't just for policymakers — it has direct implications for how you evaluate and use AI tools in your daily work.

Recognize Dual-Use

Every AI tool you use has dual-use potential. The same image generator that creates marketing materials can produce deepfakes. The same code assistant that speeds up development can generate malware.

Support Responsible Development

Suleyman's "narrow path" applies to individual choices too. You can support responsible AI development by choosing tools that prioritize safety, transparency, and accountability over pure speed or capability.

Understand the Speed Gap

AI capabilities evolve monthly; regulations take years. This "hyper-evolution" gap means you can't rely on regulations to protect you. You need to develop your own judgment about which AI capabilities to adopt and which to approach cautiously.

Maintain Human Oversight

Suleyman's "autonomy" feature warns about AI systems operating independently. In your workflow, always maintain a human-in-the-loop for important decisions. AI should inform your choices, not make them for you.
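A minimal sketch of that human-in-the-loop gate, assuming a made-up action and approver callback; the point is only that execution is separated from the AI's suggestion by an explicit human decision:

```python
# Minimal human-in-the-loop gate: an AI-proposed action is executed
# only after a human decision function approves it. All names here
# are illustrative, not a real API.

def execute_with_oversight(action, run, approver):
    """Run `action` only if the human `approver` returns True."""
    if approver(action):
        run(action)
        return True
    return False

# The reviewer rejects, so nothing is executed:
log = []
done = execute_with_oversight("refund customer #1234", run=log.append,
                              approver=lambda a: False)
print(done, log)  # False []
```

In a real workflow the `approver` would be a person reviewing the suggestion, not a lambda; the structure simply guarantees there is no code path from suggestion to execution that bypasses them.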

Apply It: Your Personal Containment Framework

  1. List the AI tools in your workflow. For each one, identify which of the four features applies: asymmetry, hyper-evolution, omni-use, or autonomy.
  2. Assess your "narrow path": where are you too cautious (missing opportunities), and where too permissive (ignoring risks)?
  3. Create a personal AI policy: define three rules for evaluating any new AI tool before adopting it (e.g. "I will check whether the company publishes safety reports").
  4. Identify one high-stakes area of your work where AI is making decisions. Add a human review checkpoint if one doesn't exist.

Reflect: If you could advise a policymaker on one AI regulation that would make the biggest difference, what would it be? How does your daily experience with AI tools inform that recommendation?
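The first step of the exercise can even be written down as data. A toy sketch in Python; the tool names and feature tags are illustrative examples, not an assessment of any real product:

```python
# Toy sketch of step 1: tag each tool in your workflow with the wave
# features that apply, then rank tools by how many they exhibit.
# The tools and their tags below are made-up examples.

FEATURES = {"asymmetry", "hyper-evolution", "omni-use", "autonomy"}

my_tools = {
    "image generator":  {"omni-use"},
    "code assistant":   {"omni-use", "hyper-evolution"},
    "auto-trading bot": {"autonomy", "asymmetry"},
}

def ranked_by_risk(tools):
    """Sort tool names by number of wave features, most first."""
    assert all(tags <= FEATURES for tags in tools.values())
    return sorted(tools, key=lambda t: len(tools[t]), reverse=True)

print(ranked_by_risk(my_tools))
```

Counting features is a deliberately blunt heuristic; the value is in forcing yourself to name, per tool, which risks actually apply before deciding how much oversight it needs.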

Key Insights: What You've Learned

1. The coming wave of AI and synthetic biology is uniquely dangerous due to four features: asymmetry (small groups can cause massive harm), hyper-evolution (rapid capability improvement), omni-use (dual-use technologies), and autonomy (systems operating independently). Understanding these features is crucial for responsible AI development and use.

2. Containment requires four pillars: infrastructure (technical controls and monitoring), knowledge (understanding capabilities and risks), coordination (international cooperation), and institutions (governance frameworks). Successful containment balances between chaos (uncontrolled proliferation) and authoritarianism (over-restrictive control).

3. Navigate the coming wave by applying Suleyman's frameworks: recognize the unique risks of AI and synthetic biology, support containment through responsible development and use, and actively shape positive outcomes rather than passively accepting whatever comes. The future depends on choices made today.
