📚 Course
Beginner–Intermediate
~2–3h

AI & The Law

EU AI Act · Copyright · GDPR & Automated Decisions

The EU AI Act entered into force in August 2024 and began prohibiting certain AI practices in February 2025. Courts have ruled on who owns AI-generated content. GDPR now applies to AI-based automated decisions about individuals. This course gives you a precise, primary-source grounding in what changed, what it means for you, and what you need to do.
9 Modules
Primary Sources

TL;DR:

The EU AI Act creates a four-tier risk pyramid. Eight AI practices have been completely banned since February 2025. High-risk AI systems face strict obligations before they can be deployed. Courts in the US and EU have established that AI alone cannot hold copyright — human authorship is required. GDPR's Article 22 gives individuals the right not to be subject to purely automated decisions with legal effects. Penalties for violating the AI Act reach €35 million or 7% of global annual turnover — whichever is higher.

Why this matters now — not eventually

AI law is no longer a future concern. Key dates have already passed and obligations are live:

Aug 2024

Done

EU AI Act entered into force

The regulation became part of EU law, starting the compliance clock for all organizations.

Feb 2025

Active

8 prohibited AI practices banned

Social scoring, real-time biometric surveillance, manipulative AI, and 5 other practices became illegal across the EU.

Mar 2025

Law

Thaler v. Perlmutter ruling (D.C. Circuit)

US court confirmed: AI cannot be a copyright author. Only works with human creative input are protectable.

Aug 2025

Active

GPAI model rules + penalty regime active

Providers of general-purpose AI models (GPT, Claude, Gemini, etc.) face new transparency and safety obligations. Fines are now enforceable.

Mar 2026

Final

SCOTUS denies cert in Thaler v. Perlmutter

The US Supreme Court declined to hear the case, leaving the D.C. Circuit ruling in place as the settled law: AI alone cannot be the author of a copyrighted work.

Aug 2026

Deadline

Full High-Risk AI compliance required

All high-risk AI systems must complete conformity assessments, CE marking, technical documentation, and EU database registration.

Aug 2027

Deadline

Extended deadline for embedded high-risk systems

AI embedded in regulated products (medical devices, machinery) gets an extra year to comply.

The four risk tiers

The EU AI Act uses a risk-proportionate approach. The higher the potential harm, the stricter the requirements. Source: European Commission, digital-strategy.ec.europa.eu

Unacceptable Risk

BANNED

Eight specific practices are completely prohibited, subject only to narrowly defined exceptions. In force since February 2025.

Social scoring by public authorities
Real-time biometric surveillance in public
Manipulative subliminal AI systems
Predictive policing based on profiling
Untargeted facial recognition database scraping

High Risk

STRICT RULES

Must complete conformity assessment, maintain technical documentation, register in EU database, and implement human oversight. Deadline: August 2026.

CV-sorting and hiring tools
Credit scoring systems
Educational access and exam scoring AI
Law enforcement evidence evaluation
Critical infrastructure safety components
Biometric identification systems
Automated visa processing

Limited Risk

TRANSPARENCY

Must inform users they are interacting with AI. Applies primarily to chatbots and deepfakes.

Chatbots and virtual assistants
AI-generated content (deepfakes, synthetic voices)
Emotion recognition systems outside the banned contexts (i.e., not in workplaces or education)

Minimal / No Risk

FREE TO USE

No specific obligations. This covers the vast majority of AI applications in use today.

AI-powered spam filters
Recommendation systems (Netflix, Spotify)
AI writing assistants for personal use
Image editing tools
AI in video games

The 8 banned practices (in force since Feb 2025)

These are hard prohibitions under EU AI Act Article 5. Violating them carries the highest penalty tier. They became enforceable on 2 February 2025.

1

Manipulative AI systems

AI that uses subliminal techniques or exploits psychological weaknesses to manipulate behavior against someone's interests or to cause harm.

2

Exploitation of vulnerabilities

AI targeting people based on age, disability, or social/economic situation to distort their behavior in ways that cause harm.

3

Social scoring by public authorities

Classifying individuals based on social behavior or personal characteristics and using that score to restrict their rights or access to services.

4

Criminal risk prediction via profiling

Predicting that a person will commit a crime based solely on their profile or personality traits — not on objective, verifiable facts.

5

Untargeted facial recognition scraping

Mass scraping of the internet or CCTV footage to create or expand facial recognition databases without targeted, lawful purpose.

6

Emotion recognition in workplaces & education

Inferring the emotional states of workers or students from biometric data in professional or educational settings.

7

Biometric categorisation for protected characteristics

Deducing sensitive attributes (race, political opinions, religion, sexual orientation) from biometric data.

8

Real-time remote biometric ID in public (law enforcement)

Using real-time biometric identification systems in publicly accessible spaces for law enforcement, with very narrow exceptions (imminent threat, missing persons).

High-risk AI: am I affected?

The Act distinguishes between providers (who develop AI systems and place them on the market or put them into service) and deployers (who use AI in their operations). Both have obligations. Here is how to know if you fall under high-risk rules:

You are a Provider if you…

• Develop an AI system and place it on the market or put it into service

• Release an AI model for others to use (including open-source)

• Substantially modify an existing AI system

Your obligations: Conformity assessment, technical documentation, CE marking, EU database registration, post-market monitoring

You are a Deployer if you…

• Use an AI system in your business operations

• Use AI to make decisions about employees, customers, or citizens

• Integrate third-party AI into your products or workflows

Your obligations: Use high-risk systems only if they carry a declaration of conformity, implement human oversight, conduct fundamental rights impact assessments

Try it: Classify your AI use cases

  1. List 3 AI tools or systems currently used in your organization (or that you personally use professionally).
  2. For each one: does it make decisions that affect employment, credit, education access, or law enforcement? If yes → likely High-Risk.
  3. Check the provider's EU AI Act compliance page (most major vendors now have one). Is there a Declaration of Conformity?
  4. For any high-risk systems you use as a Deployer: is there a human review step before the AI decision is acted on?
Reflect: Did any of your current AI uses fall into the high-risk category? What would meaningful human oversight look like in that context?
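The triage logic in steps 1–2 can be sketched as a simple lookup. This is an illustrative helper only — the domain keywords and tier labels below are this sketch's own assumptions, not an official taxonomy; authoritative classification requires checking your system against Annex III of the Act.

```python
# Illustrative EU AI Act risk triage -- NOT legal advice.
# Domain keywords and tier labels are assumptions made for this sketch;
# real classification requires Annex III of the Act.

HIGH_RISK_DOMAINS = {
    "employment", "credit", "education", "law_enforcement",
    "critical_infrastructure", "biometric_id", "migration",
}
BANNED_PRACTICES = {
    "social_scoring", "realtime_public_biometric_id",
    "emotion_recognition_workplace", "predictive_policing_profiling",
}
TRANSPARENCY_USES = {"chatbot", "deepfake_generation"}

def triage(use_case: str, affects_decisions_about_people: bool) -> str:
    """Rough first-pass risk tier for one AI use case."""
    if use_case in BANNED_PRACTICES:
        return "UNACCEPTABLE (prohibited since Feb 2025)"
    if use_case in HIGH_RISK_DOMAINS and affects_decisions_about_people:
        return "HIGH (conformity assessment required by Aug 2026)"
    if use_case in TRANSPARENCY_USES:
        return "LIMITED (transparency duty: disclose AI involvement)"
    return "MINIMAL (no specific obligations)"

print(triage("employment", affects_decisions_about_people=True))
# HIGH (conformity assessment required by Aug 2026)
```

Running each inventoried system through a table like this makes step 2 of the exercise repeatable and gives you a documented starting point for the compliance checklist later in this course.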

GDPR Article 22: automated decisions about people

GDPR Article 22 predates the AI Act but is now directly relevant to AI deployments. It gives individuals the right not to be subject to a decision based solely on automated processing when that decision has legal or similarly significant effects.

What counts as a “solely automated” decision with significant effect?

✓ AI automatically rejects a loan application with no human review

✓ An algorithm screens out a CV before a human ever sees it

✓ Insurance pricing set entirely by algorithm with significant financial impact

✓ Credit scores automatically determining credit access (CJEU SCHUFA ruling, Case C-634/21)

✗ Hiring manager uses AI scoring as one input, makes final decision themselves

✗ Spam filter automatically sorts email (minimal legal effect)

Individual rights under Art. 22

• Right to obtain human review of the automated decision

• Right to express your point of view

• Right to contest the decision

• Right to meaningful information about the logic used (Articles 13–15)

Organization obligations

• Conduct a Data Protection Impact Assessment (DPIA) before deploying

• Implement meaningful human oversight (not rubber-stamping)

• Provide transparent explanations of how the AI decides

• Create a clear process for individuals to challenge decisions

Try it: Article 22 audit of your AI decisions

  1. List AI systems in your organization that produce a score, ranking, or yes/no decision about individuals (employees, customers, applicants).
  2. For each: Is the decision made SOLELY by the AI, or does a human meaningfully review it before action?
  3. Is there a process for affected individuals to request human review? Is it documented and accessible?
  4. For any fully automated significant decisions: consult your DPO about whether a DPIA is required.
Reflect: Does your organization have a rubber-stamp human review process? How would you make it genuinely meaningful?
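The crux of step 2 is a two-part test: Article 22 applies only when a decision is both solely automated and carries legal or similarly significant effects. A minimal sketch of that test, assuming your organization supplies its own honest answers to the two fields (what counts as "meaningful" review and "significant" effect is a legal judgment, not a code question):

```python
from dataclasses import dataclass

@dataclass
class AutomatedDecision:
    """One AI-produced decision about a person (fields are this sketch's assumptions)."""
    human_reviews_before_action: bool      # genuine review, not rubber-stamping
    has_legal_or_significant_effect: bool  # e.g. loan denial, CV rejection

def article_22_applies(d: AutomatedDecision) -> bool:
    # GDPR Art. 22 bites only when the decision is SOLELY automated
    # AND has legal or similarly significant effects on the individual.
    return (not d.human_reviews_before_action) and d.has_legal_or_significant_effect

loan_reject = AutomatedDecision(human_reviews_before_action=False,
                                has_legal_or_significant_effect=True)
spam_filter = AutomatedDecision(human_reviews_before_action=False,
                                has_legal_or_significant_effect=False)

print(article_22_applies(loan_reject))  # True  -> Art. 22 rights attach
print(article_22_applies(spam_filter))  # False -> outside Art. 22
```

Note that adding a human reviewer flips the first condition only if the review is meaningful — a reviewer who approves every AI output without examining it leaves the decision "solely automated" in substance.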

Practical compliance checklist

Use this checklist as a starting point for your organization's AI governance posture. Not legal advice — for specific situations, consult a qualified legal professional.

For all AI users (individuals)

Disclose AI involvement in work products where your employer, client, or platform requires it
Don't enter personal data, health records, or confidential client information into public AI tools
Verify AI-generated facts before publishing or presenting them
Document your creative process when using AI for commercial content
Know your rights: if an AI decision significantly affects you, you can request human review (GDPR Art. 22)

For organizations deploying AI

Inventory all AI systems in use — classify each against the EU AI Act risk tiers
For high-risk systems: verify provider has EU Declaration of Conformity
For any automated decisions about people: implement genuine human oversight
Conduct DPIA for AI systems processing personal data or making significant automated decisions
Ensure employees using AI tools have adequate AI literacy (EU AI Act Art. 4 requirement)
Review vendor contracts: who is the data controller, who is the processor?
August 2026 deadline: complete conformity assessments for any in-house high-risk AI systems
AI use policy template (internal starting point):
AI Use Policy — [Organization Name]
Last updated: [Date]

1. APPROVED USES
   - AI writing assistance for non-confidential content
   - AI coding assistance (no proprietary code in public tools)
   - AI research and summarization of non-sensitive information

2. RESTRICTED USES (require DPO review)
   - Any AI system making or informing decisions about employees or customers
   - Processing personal, health, or financial data through AI tools
   - AI-generated content for regulatory filings or legal documents

3. PROHIBITED USES
   - Entering client personal data, passwords, or source code into public AI tools
   - Using AI for real-time biometric identification of individuals
   - AI-generated content presented as human-authored without disclosure

4. DISCLOSURE REQUIREMENTS
   - Disclose AI involvement in external content where required by client contract
   - Label AI-generated images and deepfakes as required by EU AI Act Art. 50
   - Document creative process for commercially published AI-assisted work

5. HUMAN OVERSIGHT
   - All AI-informed decisions about individuals require meaningful human review
   - Automated scores or rankings must be reviewed before being acted upon

Risks & Responsible Use

Know these before you go further.

"I didn't know" is not a defence

The EU AI Act applies to any organization offering AI systems or services in the EU market — regardless of where the organization is based. A US startup deploying an AI hiring tool used by EU employees is subject to the Act.

What this means for you

If your product or service uses AI and any users or affected individuals are in the EU, assess your obligations under the Act. The compliance clock is already running.

AI literacy is now a legal obligation

EU AI Act Article 4 requires providers and deployers to ensure that their staff have an adequate level of AI literacy. This is not aspirational — it is an enforceable requirement.

What this means for you

Document your team's AI literacy training. Completing courses like this one is a concrete, documentable step toward Article 4 compliance.

Copyright in training data is still being litigated

While AI output copyright is settled (humans must author), the legality of training AI on copyrighted works is still being decided in courts worldwide. Getty Images v. Stability AI (UK) and New York Times v. OpenAI (US) are ongoing.

What this means for you

For AI tools used commercially, check whether your vendor has a licensing program or indemnification against copyright claims from training data. Some (Adobe Firefly, Getty AI) only train on licensed content.

The AI Act does not replace GDPR — both apply

Many organizations assume EU AI Act compliance covers their data protection obligations. It does not. GDPR applies independently to any personal data processed during AI system operation.

What this means for you

Treat EU AI Act compliance and GDPR compliance as parallel tracks, not sequential ones. Involve your Data Protection Officer from the start of any AI project.


Key Insights: What You've Learned

1

The EU AI Act uses a four-tier risk pyramid (Unacceptable / High / Limited / Minimal). Eight practices have been banned since February 2025 — including social scoring, predictive policing by profiling, and real-time biometric surveillance in public. Penalties for the most serious violations reach €35 million or 7% of global annual turnover.

2

Courts in the US and EU have confirmed that AI alone cannot hold copyright: human creative contribution is required. The D.C. Circuit ruled this in Thaler v. Perlmutter (March 2025), and the US Supreme Court declined review (March 2026), leaving that ruling in place as the settled law.

3

GDPR Article 22 gives individuals the right to challenge purely automated decisions with legal or significant effects — confirmed in the CJEU's SCHUFA ruling (C-634/21). Organizations must classify their AI systems and complete conformity assessments for high-risk uses before August 2026.