TL;DR: If you build or deploy AI systems used in the EU, the EU AI Act applies to you. Full enforcement starts August 2, 2026. Fines reach €35M or 7% of global annual turnover, whichever is higher.

What is the EU AI Act?

The EU AI Act is the world’s first comprehensive AI regulation. It classifies AI systems by risk level and imposes obligations based on that classification. Think GDPR, but for AI.

Who does it affect?

| Role | Definition | You, if… |
|---|---|---|
| Provider | Develops or places an AI system on the market | You build the AI product |
| Deployer | Uses an AI system in a professional activity | You integrate AI into your product |
| Both | Common for startups using LLM APIs | You build AND deploy AI features |

If your users are in the EU, the Act applies to you — regardless of where your company is based.

Risk classification

The Act classifies AI systems into four risk levels:

1. Unacceptable risk — BANNED

Social scoring, real-time biometric surveillance, subliminal manipulation, exploitation of vulnerabilities. These practices are prohibited entirely.

2. High risk — HEAVY OBLIGATIONS

AI in hiring, credit scoring, education, healthcare, law enforcement, and critical infrastructure. Requires a FRIA (Fundamental Rights Impact Assessment), risk management, technical documentation, human oversight, accuracy testing, and EU Database registration.

3. Limited risk — TRANSPARENCY

Chatbots, deepfakes, emotion recognition. Must disclose that users are interacting with AI and must mark AI-generated content.

4. Minimal risk — NO OBLIGATIONS

Spam filters, AI-powered search, recommendation engines. No specific obligations (though general best practices still apply).
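As a rough illustration (not legal advice), the four tiers behave like a lookup from use case to obligation level. The mapping below uses example categories from the Act's annexes; `riskTier` is a hypothetical helper for this sketch, not part of any SDK:

```typescript
// Illustrative only: real classification requires legal analysis of Annex III.
type RiskTier = "unacceptable" | "high" | "limited" | "minimal";

const TIER_BY_USE_CASE: Record<string, RiskTier> = {
  "social-scoring": "unacceptable",
  "hiring": "high",
  "credit-scoring": "high",
  "chatbot": "limited",
  "deepfake": "limited",
  "spam-filter": "minimal",
};

// Unknown use cases default to "high", so you err on the side of
// more obligations rather than fewer.
function riskTier(useCase: string): RiskTier {
  return TIER_BY_USE_CASE[useCase] ?? "high";
}

console.log(riskTier("chatbot")); // "limited"
console.log(riskTier("hiring"));  // "high"
```

Defaulting unknown cases to "high" is a deliberately conservative choice: misclassifying downward is what triggers fines.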

The 6 things you probably need to do

Most AI products fall into the "high risk" or "limited risk" tier. Here's what that means in practice:

1. Disclose AI usage (everyone)

```typescript
// Before
const response = await client.chat(message);

// After — with @complior/sdk
import OpenAI from 'openai';
import { complior } from '@complior/sdk';

const client = complior(new OpenAI(), { disclosure: true });
```
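If you are not using the SDK, the same transparency duty can be met by hand. A minimal sketch of the idea, where `ChatFn` stands in for whatever client call you already make; the wrapper and its disclosure text are illustrative, not wording prescribed by the Act:

```typescript
// Hand-rolled disclosure wrapper (illustrative). The Act requires users
// to know they are interacting with AI; exact wording is up to you.
const DISCLOSURE = "You are chatting with an AI system.";

type ChatFn = (message: string) => Promise<string>;

function withDisclosure(chat: ChatFn): ChatFn {
  let disclosed = false;
  return async (message: string) => {
    const reply = await chat(message);
    if (!disclosed) {
      disclosed = true; // disclose once, at the start of the conversation
      return `${DISCLOSURE}\n\n${reply}`;
    }
    return reply;
  };
}
```

The first reply carries the disclosure; subsequent replies pass through unchanged.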

2. Create an Agent Passport

```shell
complior agent init
```

Auto-discovers your AI systems and generates a standardized identity card (36 fields, Ed25519-signed).

3. Run a compliance scan

```shell
complior scan
```

Analyzes your code against 108 obligations and produces a 0–100 score with actionable findings.
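How complior computes its score is internal to the tool, but any such score reduces to weighting findings by severity. A hypothetical scoring function with made-up penalty weights, purely to show the shape of the calculation:

```typescript
// Hypothetical scoring sketch: weights and formula are assumptions,
// not complior's actual algorithm.
type Severity = "critical" | "major" | "minor";

const PENALTY: Record<Severity, number> = {
  critical: 25,
  major: 10,
  minor: 2,
};

// Start from 100, subtract a penalty per finding, clamp at 0.
function complianceScore(findings: Severity[]): number {
  const penalty = findings.reduce((sum, s) => sum + PENALTY[s], 0);
  return Math.max(0, 100 - penalty);
}

console.log(complianceScore(["critical", "major", "minor"])); // 63
```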

4. Generate FRIA (high-risk only)

```shell
complior agent fria my-chatbot --organization "Acme Corp"
```

Generates a Fundamental Rights Impact Assessment (FRIA), required before deploying high-risk AI.

5. Generate compliance documents

```shell
complior fix --ai
```

Auto-generates an AI Policy, Worker Notification, and Technical Documentation from your passport data.

6. Build an evidence trail

Every complior action automatically creates cryptographic evidence (a SHA-256 hash signed with Ed25519), giving you a tamper-evident record to show regulators that you took compliance seriously.
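The general pattern behind such an evidence trail (hash the action record, then sign the hash) can be reproduced with Node's built-in `node:crypto` module. A sketch of the idea; the record shape and field names are illustrative, not complior's actual schema:

```typescript
import { createHash, generateKeyPairSync, sign, verify } from "node:crypto";

// Illustrative evidence record: the fields are assumptions for this sketch.
const record = JSON.stringify({
  action: "complior scan",
  timestamp: "2026-01-15T10:00:00Z",
  result: { score: 87 },
});

// SHA-256 digest of the record...
const digest = createHash("sha256").update(record).digest();

// ...signed with an Ed25519 key pair (in practice the key would be persisted,
// not regenerated per run).
const { publicKey, privateKey } = generateKeyPairSync("ed25519");
const signature = sign(null, digest, privateKey);

// Anyone holding the public key can check the record was not altered.
console.log(verify(null, digest, publicKey, signature)); // true
```

If even one byte of the record changes, the digest changes and verification fails, which is what makes the trail tamper-evident.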

Key deadlines

| Date | What happens |
|---|---|
| Feb 2, 2025 | Prohibited-practices ban (already in effect) |
| Aug 2, 2025 | GPAI obligations begin |
| Aug 2, 2026 | Full enforcement — high-risk, limited-risk, all obligations |
| Aug 2, 2027 | Existing AI systems must comply |

Key articles for developers

| Article | Topic | What it means |
|---|---|---|
| Art. 5 | Prohibited practices | Don't build social scoring or subliminal manipulation |
| Art. 6 | High-risk classification | Check whether your AI falls under Annex III |
| Art. 9 | Risk management | Implement systematic risk identification |
| Art. 13 | Transparency | Users must know they're talking to AI |
| Art. 14 | Human oversight | A human can intervene and override the AI |
| Art. 15 | Accuracy & security | Test for bias, robustness, and cybersecurity |
| Art. 26 | Deployer obligations | Even if you didn't build the AI, you have duties |
| Art. 50 | GPAI transparency | Disclose AI-generated content |

What Complior automates

| Obligation | Manual effort | With Complior |
|---|---|---|
| Risk classification | Hours of legal analysis | `complior scan` (auto-detected) |
| FRIA | Days of document writing | `complior agent fria` (pre-filled) |
| AI disclosure | Code changes across codebase | `complior fix` (auto-wraps SDK) |
| Technical documentation | Weeks of writing | `complior doc generate --all` |
| Evidence trail | Manual logging | Automatic (every action logged) |
| Ongoing monitoring | Manual review cycles | `complior daemon --watch` (real-time) |

Quick Start

Go from zero to compliant in 5 minutes.

EU AI Act deep dive

Full article-by-article mapping.