The EU AI Act fundamentally changes how we handle artificial intelligence. Here you'll learn everything about risk categories, compliance requirements, and the German implementation – from prohibitions to best practices – so you can design your AI strategy to be legally compliant and future-proof.
The EU AI Act classifies AI systems into four risk categories, each carrying different requirements and prohibitions. This classification determines which compliance measures you must take for your AI applications.
**Unacceptable Risk (Prohibited Practices)**
Status: Banned since 2 February 2025
Penalty: Up to 35 million € or 7% of global annual turnover

**High Risk**
Status: Regulated from 2 August 2026
Requirements: Risk assessment, quality management, human oversight

**Limited Risk**
Status: Regulated from 2 August 2026
Requirements: Transparency obligations, user information

**Minimal Risk**
Status: No additional obligations
Requirements: Voluntary codes of conduct recommended
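The four-tier classification can be sketched as a simple lookup. The tier names follow the categories above; the example use cases and the `classify` helper are purely illustrative and not a legal classification – always verify against the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"  # banned since 2 February 2025
    HIGH = "high"                # regulated from 2 August 2026
    LIMITED = "limited"          # transparency obligations
    MINIMAL = "minimal"          # voluntary codes of conduct

# Illustrative mapping of example use cases to tiers (hypothetical,
# not a legal determination).
EXAMPLE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up the illustrative risk tier for a use case."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)

print(classify("credit_scoring").value)  # high
```

In practice the classification depends on the concrete deployment context, not just the use-case label, so a real inventory tool would capture far more attributes per system.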
Foundation models like ChatGPT, GPT-4, or Claude fall under a special category of the AI Act. These General Purpose AI (GPAI) systems have specific compliance requirements you should know about.
First GPAI obligations have been in effect since 2 August 2025. If you use or develop foundation models, you must implement these requirements now.
The EU AI Act is being implemented gradually. Here you can see all important milestones so you're prepared in time and don't miss any deadlines.
| Date | Milestone | Status | What You Should Consider |
|---|---|---|---|
| 1 August 2024 | AI Act comes into force | Active | Initial orientation and inventory of your AI systems |
| 2 February 2025 | Prohibitions become effective | Active | Check immediately: Are you using prohibited AI systems? |
| 2 August 2025 | GPAI obligations active | Active | Foundation model compliance must be implemented now |
| 2 February 2026 | Commission Guidelines | Future | Detailed implementation guidelines will become available |
| 2 August 2026 | Full Applicability | Future | All your AI systems must be compliant |
| 2 August 2027 | Legacy System Compliance | Future | Older GPAI models must also be compliant |
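The milestone table above can be turned into a simple deadline check. The dates come directly from the table; the function name and descriptions are illustrative.

```python
from datetime import date

# Milestones from the table above (date -> description).
MILESTONES = {
    date(2024, 8, 1): "AI Act comes into force",
    date(2025, 2, 2): "Prohibitions become effective",
    date(2025, 8, 2): "GPAI obligations active",
    date(2026, 2, 2): "Commission guidelines available",
    date(2026, 8, 2): "Full applicability",
    date(2027, 8, 2): "Legacy GPAI compliance",
}

def upcoming(today: date) -> list[str]:
    """Return the milestones that still lie ahead of `today`, in order."""
    return [desc for d, desc in sorted(MILESTONES.items()) if d > today]

print(upcoming(date(2025, 9, 1)))
# ['Commission guidelines available', 'Full applicability', 'Legacy GPAI compliance']
```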
1. Conduct an inventory of your AI systems and check whether any prohibited applications are in use. Stop using non-compliant AI systems immediately.
2. Implement compliance processes for GPAI models. Conduct risk assessments for high-risk systems and build up internal expertise.
3. Implement all required compliance measures for your AI systems. Establish continuous monitoring and reporting processes.
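The inventory step above can be modeled as a simple record per AI system, with open actions derived from its risk tier. The schema and field names are an illustrative sketch, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an AI system inventory (illustrative schema)."""
    name: str
    purpose: str
    risk_tier: str               # e.g. "prohibited", "high", "limited", "minimal"
    provider: str
    human_oversight: bool = False
    open_actions: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord("credit-model-v2", "credit scoring", "high", "in-house"),
]

# Derive open actions: prohibited systems must be stopped, high-risk
# systems without human oversight must be remediated.
for record in inventory:
    if record.risk_tier == "prohibited":
        record.open_actions.append("stop use immediately")
    elif record.risk_tier == "high" and not record.human_oversight:
        record.open_actions.append("establish human oversight")

print(inventory[0].open_actions)  # ['establish human oversight']
```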
Depending on whether you develop, operate, or supervise AI systems, you have different obligations. Here you'll find an overview of your specific compliance requirements.
**Providers (Developers of AI Systems)**
Your Core Obligations: Conformity assessment, risk assessment, technical documentation, data quality assurance. For high-risk systems, additionally a quality management system and post-market surveillance.

**Deployers (Operators of AI Systems)**
Your Core Obligations: Human oversight, system monitoring, record-keeping, AI competence of personnel. You must monitor data inputs and interpret outputs correctly.

**Supervisory Authorities**
Your Core Obligations: Market surveillance, compliance monitoring, enforcement measures, provision of guidance. Special responsibility in cross-border cooperation.

**GPAI Model Providers**
Your Core Obligations: Training data summaries, copyright compliance, content labeling, systemic risk assessment. Abuse prevention and user guidelines are essential.
As operators of critical infrastructures (KRITIS), energy providers are subject to the strictest rules of the AI Act. AI systems used for control, operation, and safety of energy networks are explicitly classified as high-risk applications.
The use of these systems requires compliance with a strict catalog of obligations.
The AI Act affects different economic sectors to varying degrees. Here you'll learn what special challenges and opportunities arise for your sector.
**Healthcare**
Risk: High Risk. Challenges: Overlap with the Medical Device Regulation, double obligations, patient safety. Particularly affected: diagnostic AI, robot-assisted surgery.

**Financial Services**
Risk: High Risk. Challenges: Credit-scoring bias, algorithmic transparency, BaFin supervision. Integration into MaRisk compliance and discrimination prevention required.

**Automotive**
Risk: High Risk. Challenges: Ethics of autonomous driving, safety-critical decisions, liability issues. German ethics guidelines: protection of human life takes priority.

**Law Enforcement**
Risk: High Risk/Prohibited. Challenges: Balancing fundamental rights, limited transparency. Real-time biometric identification is mostly prohibited; judicial approval is required.

**Energy**
Risk: High Risk. Challenges: As part of critical infrastructure (KRITIS), the highest requirements apply to reliability and cybersecurity. AI systems for grid control must be robust and transparent.
As critical infrastructure (KRITIS), the energy sector is a central application area of the EU AI Act. AI systems used to control, monitor, or optimize energy networks fall almost without exception into the high-risk category. This brings comprehensive compliance requirements to ensure security of supply and grid stability.

**Grid Control**: AI systems that control power flows in real time, distribute loads, or respond to fluctuations caused by renewable energy sources are high-risk applications. They require the highest levels of fail-safety and transparency in their decision-making processes.

**Demand Forecasting**: Systems that predict energy demand are crucial for grid stability and pricing. Faulty forecasts can have serious consequences, which is why high standards apply to data quality and model validation.

**Predictive Maintenance**: AI for predicting failures of critical components (e.g. transformers) is likewise classified as high-risk. The reliability of these systems must be demonstrated through robust testing and continuous monitoring.

**Security Monitoring**: The use of AI to monitor plants and defend against cyberattacks falls under the strict obligations. Here, the requirements for robustness and human oversight are particularly decisive.
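The human-oversight requirement for such systems can be sketched as a simple gate: automated grid-control actions proceed only within a validated operating envelope, and everything else is escalated to a human operator. All thresholds, names, and bounds here are illustrative assumptions, not values from the Act.

```python
def dispatch_action(predicted_load_mw: float, capacity_mw: float,
                    confidence: float, min_confidence: float = 0.95) -> str:
    """Decide whether an AI grid-control action may run automatically.

    Illustrative human-oversight gate: out-of-range or low-confidence
    predictions are escalated to an operator instead of being executed.
    """
    if not 0.0 <= predicted_load_mw <= capacity_mw:
        return "escalate: prediction outside physical bounds"
    if confidence < min_confidence:
        return "escalate: confidence below threshold"
    return "execute: within validated operating envelope"

print(dispatch_action(480.0, 500.0, 0.97))  # execute: within validated operating envelope
print(dispatch_action(480.0, 500.0, 0.80))  # escalate: confidence below threshold
```

The design point is that the automated path is the narrow exception and escalation is the default, which matches the Act's emphasis on effective human oversight for high-risk systems.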
Germany is taking a pioneering role in AI regulation, pursuing a dual strategy: implementing EU requirements while strengthening Germany as a hub for AI innovation. Here you'll learn how the German government is implementing the EU AI Act and what additional initiatives are relevant for you.
For companies in Germany, this means: In addition to pure compliance with the EU AI Act, there are numerous funding opportunities and support programs to advance AI innovations.
32 million € budget for AI quality standards and SME innovation. If you're an SME, you can benefit from consulting and funding.
Germany must provide sandboxes by August 2026. You can test innovative AI in controlled environments - even free of charge for SMEs.
"AI for the Common Good" - if your AI solves social problems, you can benefit from expert consulting and funding.
German AI quality standards can give you international competitive advantage - "Trusted AI Made in Germany".
Germany's federal structure can lead to different interpretations in the 16 federal states. The federal government is working to create uniform standards.
Germany has already established regulatory sandboxes in various sectors. This gives your company the opportunity to test innovative AI solutions in a safe legal framework before full regulation takes effect.
Requirement: Every EU member state must have at least one AI regulatory sandbox by 2 August 2026
AI compliance is not just a legal obligation but a strategic competitive advantage. Companies that become compliant early position themselves as trustworthy AI providers in the global market.
As a compliance-first company, you gain the trust of customers and partners. "EU AI Act compliant" becomes a quality seal for your AI products.
EU standards often become global benchmarks. Early compliance prepares you for international expansion and opens new markets.
Regulatory Sandboxes enable risk-free innovation. You can develop groundbreaking AI solutions without taking compliance risks.
Proactive compliance protects you from existential penalties of up to 35 million € and shields your reputation from the damage caused by violations.