A Closer Look: Europe’s AI Act and what it means for companies in the EU

The EU AI Act is the world’s first comprehensive AI law, shaping debates on AI ethics and privacy. It creates a legal framework that connects with GDPR and the EU Data Act. With no precedent cases, its full impact is still unclear, but here’s what we know so far.

For several months now, the term "EU AI Act" has kept popping up, especially in discussions around ethical and privacy-conscious uses of artificial intelligence across various industries. What might sound like dry legalese is in fact the world's first comprehensive AI law: a framework meant to provide a tightly woven legal structure for handling artificial intelligence. This new regulation is far more than background bureaucratic noise, which makes it all the more important to understand. In classic EU fashion, it is a fairly inaccessible and, at times, quite complex document that overlaps with the GDPR and the EU Data Act. Since there are no precedent cases yet, it is difficult to assess the hurdles and risks precisely. Still, we'll do our best to break it down.

Until recently, the use of AI resembled a digital Wild West. Industry insiders won’t be too surprised—innovation often begins chaotically, just think of Bitcoin or blockchain. (Does anyone even talk about NFTs anymore? The world moves fast.)

Back on topic: start-ups and tech giants experimented freely with algorithms—from chatbots to facial recognition—while lawmakers scrambled to keep up. Those days are over. The EU AI Act—officially the "Regulation on Artificial Intelligence"—came into force in August 2024. The first rules began applying in early 2025, with more rolling out through 2026 and 2027. The sheriff is in town, and predictably, two camps have formed: one welcomes the legal clarity that allows for innovation without fear of penalties; the other complains that the regulation stifles innovation and prevents the rise of a "European Silicon Valley."

The question is: Why regulate AI so strictly? The answer is simple: AI offers enormous potential but also significant risks, from discriminatory decision-making algorithms to Big Brother-style surveillance. The EU aims to ensure AI remains trustworthy (yes, it's a black box) and human-centered. The result is a strict legal framework that is unique worldwide. Europe is taking a global leadership role, much like it did with the GDPR in data protection, and even non-European providers who want to offer AI systems in the EU must comply.

Not everyone is thrilled. A brief scandal involved Sam Altman, CEO of OpenAI, who threatened to withdraw from Europe in May 2023 over the strict regulation, only to backtrack shortly afterward. EU lawmakers stood their ground, making it clear: the EU means business.

New Obligations: From Prohibited to High-Risk

The AI Act follows a risk-based approach. Not all AI applications are treated the same. There are four categories: unacceptable risk (banned), high risk (strictly regulated), limited risk (transparency requirements), and minimal risk (business as usual). Companies must first assess and classify their AI systems accordingly.
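
To make that classification step a bit more tangible, here is a minimal sketch of how an internal AI inventory might encode the four risk tiers. The class and field names are purely illustrative assumptions; the Act prescribes no particular data model.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the AI Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # allowed, but strictly regulated
    LIMITED = "limited"            # allowed, subject to transparency rules
    MINIMAL = "minimal"            # business as usual

@dataclass
class AISystemRecord:
    """One entry in a company's AI inventory (illustrative fields only)."""
    name: str
    purpose: str
    role: str        # "provider" or "deployer" (user)
    tier: RiskTier

# Example: a CV-screening tool used in HR would typically land in the high-risk tier.
cv_screening = AISystemRecord(
    name="cv-screening-tool",
    purpose="Pre-filters incoming job applications",
    role="deployer",
    tier=RiskTier.HIGH,
)
```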

Prohibited Practices

At the top of the red list are AI practices that are now strictly forbidden. Eight types of applications are deemed so dangerous to safety and fundamental rights that they are outright banned. These include scenarios reminiscent of Black Mirror: manipulative AI that influences people subconsciously, exploitation of vulnerable groups (like children or the sick), or social scoring models—algorithms that evaluate people based on behavior or societal norms.

Also banned are predictive policing (AI forecasting crimes or behaviors), indiscriminate scraping of internet or surveillance footage to populate facial recognition databases, emotion recognition in workplaces or schools, and AI estimating sensitive traits like sexual orientation or ethnicity. Real-time facial recognition by police in public spaces is prohibited, except under narrowly defined circumstances.

Practically speaking: any offering remotely close to these no-gos must be pulled immediately. As of February 2, 2025, offering such systems is banned and subject to fines of up to €35 million or 7% of global annual turnover, whichever is higher.

High-Risk AI: Quality, Control, and Compliance

Not all risky AI is banned. The second category includes high-risk systems—still allowed but now highly regulated. These include AI used in HR (e.g., screening applications), education (e.g., grading students), financial decisions (e.g., credit scoring), or medical devices. Also included are AI systems used by authorities in asylum or court processes.

Companies developing or using high-risk AI must comply with a strict catalog of obligations before launching or using them. Requirements include risk management and impact assessments, thorough documentation, logging processes (to trace errors), high-quality training data (to avoid bias), clear user information, built-in monitoring capabilities, and robust cybersecurity. These complex requirements come into full force by August 2026. In the meantime, detailed guidelines are being developed. However, companies shouldn't wait—certain obligations already apply, like training all employees working with AI.
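
The logging requirement in particular translates directly into engineering practice: every automated decision should be traceable after the fact. The snippet below is a rough sketch of what such an audit trail could look like; the record format, field names, and the credit-scoring example are assumptions for illustration, not a template from the Act.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative audit logger for a high-risk AI system (sketch, not a legal template).
audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("ai_decisions.log"))

def log_decision(system: str, model_version: str, inputs: dict,
                 output: dict, reviewer: str | None = None) -> None:
    """Write one traceable record per automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "model_version": model_version,
        "inputs": inputs,            # in practice: pseudonymised, per GDPR
        "output": output,
        "human_reviewer": reviewer,  # supports the human-oversight requirement
    }
    audit_log.info(json.dumps(record))

# Example call for a hypothetical credit-scoring model
log_decision(
    system="credit-scoring",
    model_version="2025.03",
    inputs={"applicant_id": "a1b2c3", "income_band": "C"},
    output={"score": 0.42, "decision": "manual_review"},
    reviewer="analyst_17",
)
```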

Transparency Requirements

The third category covers AI systems with limited risk. These systems are generally allowed but subject to transparency rules. For example: a chatbot must identify itself as a bot. If companies use AI for customer service, they must inform users that they're not talking to a human. A hot topic: labeling AI-generated content. Imagine AI writing articles, creating images, or deepfakes. The AI Act draws clear lines: such content must be labeled as AI-generated. Deepfakes, in particular, require a visible disclaimer.
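
To illustrate what these transparency duties can mean in practice, a customer-service bot might disclose its nature up front and carry a machine-readable AI label on its replies. The wording, the BotMessage structure, and the ai_generated flag below are hypothetical; the Act does not prescribe specific field names or phrasing.

```python
from dataclasses import dataclass

AI_DISCLOSURE = "Note: You are chatting with an AI assistant, not a human."

@dataclass
class BotMessage:
    """A chatbot reply carrying an explicit AI label (illustrative structure)."""
    text: str
    ai_generated: bool = True  # machine-readable flag, e.g. for downstream labeling

def first_reply(answer: str) -> BotMessage:
    """Prepend the disclosure to the first message of a conversation."""
    return BotMessage(text=f"{AI_DISCLOSURE}\n\n{answer}")

print(first_reply("How can I help you today?").text)
```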

AI-generated texts for informational purposes (e.g., news articles) must be labeled. For marketing or internal documents reviewed by humans, no mandatory label exists—yet transparent communication is encouraged. Google, for example, has already introduced “SynthID,” a watermarking method for AI images.

So, do you need to tag every AI-generated sentence? No. Labeling is required where there's a real risk of deception. Still, companies should track where AI-generated content appears and introduce internal labeling guidelines.

AI Competency Becomes Mandatory – Humans at the Center

Another key change: since February 2, 2025, companies using or offering AI systems must ensure their staff has sufficient AI competence. In other words, employee training is mandatory. Article 4 of the Act requires that all staff involved with AI systems understand their technical foundations, risks, and legal frameworks.

Companies have flexibility: workshops, e-learning, or internal certifications all count, as long as employees aren't left in the dark. If they are, management can be held liable. Certified training programs are advisable to mitigate that liability.

It should now be clear: the AI Act is no niche topic for legal departments. It belongs at the top of the agenda in boardrooms and development teams. It determines how we build and use AI in Europe going forward. At its core: do we want AI at any cost—or AI with values and responsibility? The EU has chosen the latter, imposing strict requirements even on U.S. and Chinese vendors.

Critics argue the Act could hinder Europe's tech competitiveness. Indeed, compliance requires effort, bureaucracy, and time—especially burdensome for startups. Some fear Europe’s caution will become a competitive disadvantage while AI development races ahead in the U.S. and China. But there’s another view: trust as a competitive edge. Many customers already care that AI is used ethically and lawfully. Companies that can prove compliance—through transparency, certification, or clear communication—might gain a competitive advantage. The AI Act also unifies rules across 27 countries, avoiding a patchwork of national laws. EU regulations often become global benchmarks. Some experts believe the AI Act could become the global gold standard for AI regulation.

A notable development: soon after the law passed, the EU Commission launched the AI Pact—a voluntary alliance where companies start implementing key obligations ahead of schedule.

How Companies Should Position Themselves Now

The big question: how should businesses respond? Hoping to stay under the radar won’t work—just like with the GDPR. The Act’s definition of AI is broad—even basic "intelligent" software could fall under it. The better strategy: take initiative and show responsibility.

Take Inventory: What AI systems are we already using or developing? What risk category do they fall under? Are we a provider or a user?

Compliance by Design: Developers should incorporate legal requirements from the outset. IT and compliance teams must work together to assess and manage AI usage.

Onboard Teams: Use the transition period until 2026 to train teams as required by Article 4. A new role is emerging in some German firms: the AI Compliance Officer, similar to the Data Protection Officer.

Europe has the chance to shape technology with lasting values. Companies that act now won’t just avoid penalties—they may emerge stronger. European firms now have the opportunity to chart a unique path: quality, safety, and human respect as the hallmark of AI development and deployment.

In the end, it’s what we make of it. The EU AI Act gives us the "why" and the "how" of AI in Europe. It sets the framework—filling it out is up to those who work with AI every day. Whether we end up cursing its red tape or admiring the foresight of Brussels lawmakers depends on us. One thing’s certain: this law brings AI out of the legal grey zone and into the spotlight.

If you ask me personally: the game has just begun.

Sources: European Commission (EU AI Act details), digital-strategy.ec.europa.eu; Latham & Watkins (overview of AI Act obligations), lw.com; Ogletree Deakins (timeline for entry into force), ogletree.com; Reuters (Altman and the AI Act), reuters.com; Reuters (AI Act penalties), reuters.com; Mittelstand-Digital Center Berlin (obligations for companies), digitalzentrum-berlin.de.