Quick answer
The EU AI Act is the world's first comprehensive law regulating artificial intelligence. Its rules have been phasing in since early 2025: the bans and the obligations for general-purpose models like ChatGPT and Claude already apply, and most remaining rules take effect in August 2026. It classifies AI systems by risk level, bans the most dangerous uses outright, and imposes transparency and documentation duties on general-purpose models. Fines can reach €35 million or 7% of global annual revenue, whichever is higher.
For years, "AI regulation" meant lawmakers talking about rules that never arrived. That changed. The EU AI Act was approved in 2024, its rules phased in through 2025, and by April 2026 full enforcement is live. It is the most significant AI law in the world, and — even if you do not live in Europe — it is shaping how AI companies build and ship products globally. Here is what it actually does.
What is the EU AI Act?
It is a single law covering nearly every AI system placed on the market or used in the EU. Rather than regulate AI as one thing, it sorts AI systems into four risk tiers and applies tougher rules as risk goes up. The idea is to let low-risk AI (spam filters, video-game AI) keep moving freely while putting guardrails on high-risk AI (hiring decisions, medical diagnosis, law enforcement) and banning certain uses entirely.
The four risk tiers
- Unacceptable risk — banned outright. Includes social scoring by governments, real-time remote biometric identification in public spaces (with narrow law-enforcement exceptions), and manipulative AI that exploits vulnerable groups
- High risk — heavily regulated. Includes AI used in hiring, credit scoring, medical devices, critical infrastructure, and law enforcement. Requires risk assessments, human oversight, and documentation
- Limited risk — transparency obligations. If you are talking to a chatbot or viewing AI-generated content, you must be told
- Minimal risk — no new rules. Most consumer AI tools fall here and can operate freely
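If it helps to see the tier logic as data, here is a deliberately simplified sketch in Python; the use cases and tier assignments are illustrative shorthand, not the Act's legal definitions:

```python
# A deliberately simplified model of the Act's four-tier structure.
# Use cases and tier assignments here are illustrative, not legal text.

RISK_TIERS = {
    "social scoring":   ("unacceptable", "banned outright"),
    "hiring decisions": ("high", "risk assessments, human oversight, documentation"),
    "customer chatbot": ("limited", "must disclose that it is AI"),
    "spam filter":      ("minimal", "no new obligations"),
}

for use_case, (tier, consequence) in RISK_TIERS.items():
    print(f"{use_case}: {tier} risk -> {consequence}")
```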
What changed in 2026?
The rules for general-purpose AI models like GPT-5 and Claude took effect in August 2025, and 2026 brings enforcement: the Commission's power to fine general-purpose model providers, along with most of the remaining high-risk obligations, arrives in August 2026. The general-purpose rules require model providers to publish technical documentation and a summary of their training data, respect EU copyright law during training, and ensure AI-generated output can be identified as such. The largest models, those trained with more than 10^25 floating-point operations of compute, are presumed to pose "systemic risk" and face additional obligations around safety testing and incident reporting.
Does it affect non-EU companies?
Yes. If your AI system is used by anyone in the EU, even if your company is based in the US, the UK, or India, you are subject to the Act. This is the "Brussels effect" in action: rather than build separate versions for Europe, most AI companies find it easier to apply EU rules globally. So the EU AI Act is effectively setting the world's AI standard, even for people outside Europe.
The fines are real. Deploying one of the banned AI practices can cost up to €35 million or 7% of global annual revenue, whichever is higher. For a company the size of Microsoft, 7% of global revenue runs into the tens of billions of dollars. That is why providers are taking compliance seriously.
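To make the "whichever is higher" rule concrete, here is a minimal Python sketch of the cap calculation; the revenue figures are hypothetical examples, not real company numbers:

```python
# Illustrative sketch of the Act's maximum-fine rule for banned practices:
# the cap is the higher of a fixed amount and a share of global revenue.

FIXED_CAP_EUR = 35_000_000   # EUR 35 million
REVENUE_SHARE = 0.07         # 7% of global annual turnover

def max_fine(global_annual_revenue_eur: float) -> float:
    """Return the maximum possible fine for a prohibited-AI violation."""
    return max(FIXED_CAP_EUR, REVENUE_SHARE * global_annual_revenue_eur)

print(f"{max_fine(100_000_000):,.0f}")      # small firm: fixed cap wins -> 35,000,000
print(f"{max_fine(250_000_000_000):,.0f}")  # large firm: 7% wins -> 17,500,000,000
```

For a small firm, the €35 million floor dominates; for a firm with hundreds of billions in revenue, the 7% share does, which is where the "tens of billions" figure comes from.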
What does this mean for everyday users?
- AI-generated images, videos, and articles must be labelled as such in the EU — so you will see more "AI-generated" tags on content
- If you are denied a loan, a job, or a service by an AI system, you have the right to a human review and to know why
- Deepfakes of real people must be clearly labelled as artificially generated or manipulated; passing one off as real is now unlawful in most cases
- Chatbots must disclose that they are AI — no more pretending to be human
- Training data is becoming more transparent: providers of general-purpose models must publish summaries of the content used to train them
Is the US doing anything similar?
Not at the federal level — not yet. The US has taken a lighter-touch approach: executive orders, voluntary industry commitments, and sector-specific rules (healthcare, finance) rather than one comprehensive law. Several US states, including California and Colorado, have passed their own AI laws that borrow elements from the EU approach. For now, if you want to know what AI regulation looks like globally, the EU Act is the template.
Bottom line
The EU AI Act is now the real-world rulebook for AI — not a future possibility. It is imperfect, but it sets clear expectations for the first time: what is allowed, what requires human oversight, and what is banned outright. If you use AI at work, expect more transparency, more labelling, and more human-in-the-loop requirements. If you build AI products, compliance is no longer optional. The era of unregulated AI is over.
