AI & The Law
EU AI Act · Copyright · GDPR & Automated Decisions
TL;DR:
The EU AI Act creates a four-tier risk pyramid. Eight AI practices have been completely banned since February 2025. High-risk AI systems face strict obligations before they can be deployed. Courts in the US and EU have established that AI alone cannot hold copyright — human authorship is required. GDPR's Article 22 gives individuals the right not to be subject to purely automated decisions with legal effects. Penalties for violating the AI Act reach €35 million or 7% of global annual turnover — whichever is higher.
Why this matters now — not eventually
AI law is no longer a future concern. Key dates have already passed and obligations are live:
Aug 2024
EU AI Act entered into force
The regulation became part of EU law, starting the compliance clock for all organizations.
Feb 2025
8 prohibited AI practices banned
Social scoring, real-time biometric surveillance, manipulative AI, and 5 other practices became illegal across the EU.
Mar 2025
Thaler v. Perlmutter ruling (D.C. Circuit)
US court confirmed: AI cannot be a copyright author. Only works with human creative input are protectable.
Aug 2025
GPAI model rules + penalty regime active
Providers of general-purpose AI models (GPT, Claude, Gemini etc.) face new transparency and safety obligations. Fines now enforceable.
Mar 2026
SCOTUS denies cert in Thaler v. Perlmutter
The US Supreme Court declined to hear the case, leaving the D.C. Circuit ruling in place as settled law: AI alone cannot be a copyright author.
Aug 2026
Full High-Risk AI compliance required
All high-risk AI systems must complete conformity assessments, CE marking, technical documentation, and EU database registration.
Aug 2027
Extended deadline for embedded high-risk systems
AI embedded in regulated products (medical devices, machinery) gets an extra year to comply.
The four risk tiers
The EU AI Act uses a risk-proportionate approach. The higher the potential harm, the stricter the requirements. Source: European Commission, digital-strategy.ec.europa.eu
Unacceptable Risk
Eight specific practices are prohibited outright, subject only to narrow, explicitly defined exceptions. In force since February 2025.
High Risk
Must complete conformity assessment, maintain technical documentation, register in EU database, and implement human oversight. Deadline: August 2026.
Limited Risk
Must inform users they are interacting with AI. Applies primarily to chatbots and deepfakes.
Minimal / No Risk
No specific obligations. This covers the vast majority of AI applications in use today.
The 8 banned practices (in force since Feb 2025)
These are hard prohibitions under EU AI Act Article 5. Violating them carries the highest penalty tier. They became enforceable on 2 February 2025.
Manipulative AI systems
AI that uses subliminal techniques or exploits psychological weaknesses to manipulate behavior against someone's interests or to cause harm.
Exploitation of vulnerabilities
AI targeting people based on age, disability, or social/economic situation to distort their behavior in ways that cause harm.
Social scoring
Classifying individuals based on social behavior or personal characteristics and using that score to restrict their rights or access to services.
Criminal risk prediction via profiling
Predicting that a person will commit a crime based solely on their profile or personality traits — not on objective, verifiable facts.
Untargeted facial recognition scraping
Mass scraping of the internet or CCTV footage to create or expand facial recognition databases without targeted, lawful purpose.
Emotion recognition in workplaces & education
Inferring the emotional states of workers or students from biometric data in professional or educational settings.
Biometric categorisation for protected characteristics
Deducing sensitive attributes (race, political opinions, religion, sexual orientation) from biometric data.
Real-time remote biometric ID in public (law enforcement)
Using real-time biometric identification systems in publicly accessible spaces for law enforcement, with very narrow exceptions (imminent threat, missing persons).
Penalties for prohibited practices
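The penalty for a prohibited practice is the higher of a fixed floor (€35 million) and a turnover-based cap (7% of global annual turnover). A minimal sketch of that calculation — the function name and example figures are ours, for illustration only:

```python
# Sketch of the top EU AI Act penalty tier: the higher of EUR 35 million
# or 7% of worldwide annual turnover. Function name and example turnover
# figures are ours, for illustration only.
def max_fine_prohibited_practice(global_annual_turnover_eur: float) -> float:
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# A firm with EUR 2bn turnover: 7% = EUR 140M, above the EUR 35M floor.
print(max_fine_prohibited_practice(2_000_000_000))  # 140000000.0
# A firm with EUR 100M turnover: 7% = EUR 7M, so the EUR 35M floor applies.
print(max_fine_prohibited_practice(100_000_000))    # 35000000.0
```

Note that the turnover-based figure dominates for any company with more than €500 million in global annual turnover.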
High-risk AI: am I affected?
The Act distinguishes between providers (who develop AI systems or place them on the market) and deployers (who use AI in their operations). Both have obligations. Here is how to know if you fall under high-risk rules:
You are a Provider if you…
• Develop an AI system and place it on the market or put it into service under your own name
• Release an AI model for others to use (including open-source)
• Substantially modify an existing AI system
Your obligations: Conformity assessment, technical documentation, CE marking, EU database registration, post-market monitoring
You are a Deployer if you…
• Use an AI system in your business operations
• Use AI to make decisions about employees, customers, or citizens
• Integrate third-party AI into your products or workflows
Your obligations: Use only high-risk systems that carry a declaration of conformity, implement human oversight, conduct fundamental rights impact assessments
Try it: Classify your AI use cases
1. List 3 AI tools or systems currently used in your organization (or that you personally use professionally).
2. For each one: does it make decisions that affect employment, credit, education access, or law enforcement? If yes → likely High-Risk.
3. Check the provider's EU AI Act compliance page (most major vendors now have one). Is there a Declaration of Conformity?
4. For any high-risk systems you use as a Deployer: is there a human review step before the AI decision is acted on?
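The yes/no test in step 2 can be sketched as a tiny triage function. This is a heuristic sketch only — the domain strings and function name are ours, and real classification requires reading Annex III of the AI Act in full:

```python
# Heuristic sketch of step 2 of the exercise: decision domains that
# typically trigger the High-Risk tier under Annex III of the AI Act.
# Domain strings and function name are ours, for illustration only.
HIGH_RISK_DOMAINS = {"employment", "credit", "education access", "law enforcement"}

def likely_tier(decision_domains: set[str]) -> str:
    """Rough first-pass tier for a system, given the domains it decides in."""
    if decision_domains & HIGH_RISK_DOMAINS:
        return "High-Risk: conformity assessment likely required"
    return "Limited/Minimal Risk: check transparency duties"

# A CV-screening tool touches employment decisions -> likely High-Risk.
print(likely_tier({"employment", "marketing"}))
```

A set intersection keeps the check readable: one overlapping domain is enough to warrant the high-risk analysis.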
AI & copyright: who owns AI-generated content?
This is now settled law in the United States, with a clear direction in the EU. Here is the current legal position, based on primary court decisions:
Thaler v. Perlmutter — the definitive US ruling
D.C. Circuit Court of Appeals, March 2025 · SCOTUS cert denied March 2026
Dr. Stephen Thaler created an AI system called the “Creativity Machine” which autonomously generated an image titled “A Recent Entrance to Paradise.” He applied for copyright registration, listing the AI as the author.
The D.C. Circuit ruled: the Copyright Act of 1976 requires all copyrightable works to be authored in the first instance by a human being. An AI system cannot be a recognized author. The US Supreme Court declined to hear the case in March 2026, making this the settled law.
Key implication
Content generated entirely by AI — with no meaningful human creative input — cannot be copyrighted in the US. Anyone can copy, reproduce, or distribute it freely.
What this means practically:
Pure AI output (no creative human input)
→ No copyright protection
Low — it's in the public domain. But your competitor can also use it.
AI-assisted work (human edits, selects, arranges)
→ Human-authored portions may be protected
Medium — you must be able to identify and separate the human creative contribution.
AI as a tool (human provides substantial creative direction)
→ Likely protectable — courts look at human creative control
The more creative control you exercise over the output, the stronger your claim. Document your process.
Training AI on copyrighted works
→ Legally contested worldwide
Multiple ongoing cases (Getty Images v. Stability AI, New York Times v. OpenAI). No settled law yet in most jurisdictions.
Use this to document your creative process when using AI for commercial work:
Project: [Project name]
Date: [Date]
AI tool used: [Tool name and version]
Human creative contributions:
- Initial concept/direction: [Describe your original idea]
- Prompts and iterations: [How many, what decisions you made]
- Selection and curation: [How you chose from AI outputs]
- Human edits and additions: [What you changed or added]
- Final human judgment calls: [Key decisions that shaped the work]
Note: This document supports demonstrating human authorship in the event of a copyright dispute.
GDPR Article 22: automated decisions about people
GDPR Article 22 predates the AI Act but is now directly relevant to AI deployments. It gives individuals the right not to be subject to a decision based solely on automated processing when that decision has legal or similarly significant effects.
What counts as a “solely automated” decision with significant effect?
✓ AI automatically rejects a loan application with no human review
✓ An algorithm screens out a CV before a human ever sees it
✓ Insurance pricing set entirely by algorithm with significant financial impact
✓ Credit scores automatically determining credit access (CJEU SCHUFA ruling, Case C-634/21)
✗ Hiring manager uses AI scoring as one input, makes final decision themselves
✗ Spam filter automatically sorts email (minimal legal effect)
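The two-part test behind these examples — solely automated AND legal or similarly significant effect — can be sketched as a simple conjunction. Illustrative only; the function and parameter names are ours, and "significant effect" is a legal judgment in practice:

```python
# Sketch of the GDPR Article 22 two-part trigger. Function and parameter
# names are ours; assessing "significant effect" requires legal judgment.
def article_22_applies(solely_automated: bool, significant_effect: bool) -> bool:
    # Both conditions must hold: no meaningful human involvement AND
    # a legal or similarly significant effect on the individual.
    return solely_automated and significant_effect

print(article_22_applies(True, True))   # loan auto-rejection: True
print(article_22_applies(True, False))  # spam filter: False
print(article_22_applies(False, True))  # human makes final call: False
```

Either element alone is not enough: a fully automated spam filter escapes Article 22 for lack of significant effect, and a significant hiring decision escapes it when a human meaningfully reviews it.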
Individual rights under Art. 22
• Right to obtain human review of the automated decision
• Right to express your point of view
• Right to contest the decision
• Right to meaningful information about the logic used (Articles 13–15)
Organization obligations
• Conduct a Data Protection Impact Assessment (DPIA) before deploying
• Implement meaningful human oversight (not rubber-stamping)
• Provide transparent explanations of how the AI decides
• Create a clear process for individuals to challenge decisions
SCHUFA Ruling: credit scores are Article 22 decisions
In Case C-634/21 (December 2023), the CJEU held that the automated creation of a credit score is itself an Article 22 decision — not merely a preparatory step — when lenders draw strongly on it to grant or refuse credit.
Try it: Article 22 audit of your AI decisions
1. List AI systems in your organization that produce a score, ranking, or yes/no decision about individuals (employees, customers, applicants).
2. For each: Is the decision made SOLELY by the AI, or does a human meaningfully review it before action?
3. Is there a process for affected individuals to request human review? Is it documented and accessible?
4. For any fully automated significant decisions: consult your DPO about whether a DPIA is required.
Practical compliance checklist
Use this checklist as a starting point for your organization's AI governance posture. Not legal advice — for specific situations, consult a qualified legal professional.
For all AI users (individuals)
For organizations deploying AI
AI Use Policy — [Organization Name]
Last updated: [Date]
1. APPROVED USES
- AI writing assistance for non-confidential content
- AI coding assistance (no proprietary code in public tools)
- AI research and summarization of non-sensitive information
2. RESTRICTED USES (require DPO review)
- Any AI system making or informing decisions about employees or customers
- Processing personal, health, or financial data through AI tools
- AI-generated content for regulatory filings or legal documents
3. PROHIBITED USES
- Entering client personal data, passwords, or source code into public AI tools
- Using AI for real-time biometric identification of individuals
- AI-generated content presented as human-authored without disclosure
4. DISCLOSURE REQUIREMENTS
- Disclose AI involvement in external content where required by client contract
- Label AI-generated images and deepfakes as required by EU AI Act Art. 50
- Document creative process for commercially published AI-assisted work
5. HUMAN OVERSIGHT
- All AI-informed decisions about individuals require meaningful human review
- Automated scores or rankings must be reviewed before being acted upon
Risks & Responsible Use
Know these before you go further.
"I didn't know" is not a defence
The EU AI Act applies to any organization offering AI systems or services in the EU market — regardless of where the organization is based. A US startup deploying an AI hiring tool used by EU employees is subject to the Act.
What this means for you
If your product or service uses AI and any users or affected individuals are in the EU, assess your obligations under the Act. The compliance clock is already running.
AI literacy is now a legal obligation
EU AI Act Article 4 requires providers and deployers to ensure that their staff have an adequate level of AI literacy. This is not aspirational — it is an enforceable requirement.
What this means for you
Document your team's AI literacy training. Completing courses like this one is a concrete, documentable step toward Article 4 compliance.
Copyright in training data is still being litigated
While AI output copyright is settled (humans must author), the legality of training AI on copyrighted works is still being decided in courts worldwide. Getty Images v. Stability AI (UK) and New York Times v. OpenAI (US) are ongoing.
What this means for you
For AI tools used commercially, check whether your vendor has a licensing program or indemnification against copyright claims from training data. Some (Adobe Firefly, Getty AI) only train on licensed content.
The AI Act does not replace GDPR — both apply
Many organizations assume EU AI Act compliance covers their data protection obligations. It does not. GDPR applies independently to any personal data processed during AI system operation.
What this means for you
Treat EU AI Act compliance and GDPR compliance as parallel tracks, not sequential ones. Involve your Data Protection Officer from the start of any AI project.
Key Insights: What You've Learned
The EU AI Act uses a four-tier risk pyramid (Unacceptable / High / Limited / Minimal). Eight practices have been banned since February 2025 — including social scoring, predictive policing by profiling, and real-time biometric surveillance in public. Penalties for the most serious violations reach €35 million or 7% of global annual turnover.
Courts in the US and EU have confirmed that AI alone cannot hold copyright: human creative contribution is required. The D.C. Circuit ruled this in Thaler v. Perlmutter (March 2025) and the US Supreme Court declined review (March 2026), leaving the ruling in place as settled law.
GDPR Article 22 gives individuals the right to challenge purely automated decisions with legal or significant effects — confirmed in the CJEU's SCHUFA ruling (C-634/21). Organizations must classify their AI systems and complete conformity assessments for high-risk uses before August 2026.