
EU AI Act: Implementation Intelligence Assessment

Region Europe
Monitoring Status Implementation Phase
Last Updated March 2026
Intelligence Source ISAR Global

The EU AI Act entered its phased implementation schedule as the world's first comprehensive binding AI regulatory instrument. ISAR Global's systematic monitoring of member-state transposition, enforcement authority designation, and corporate compliance responses reveals material divergence between the Act's ambitions and its early implementation reality. The gap between legislative intent and regulatory capacity is the central story of European AI governance in 2025–26.

Full Analysis

The European Union’s Artificial Intelligence Act entered force on 1 August 2024 as the world’s first comprehensive binding AI regulatory framework. The Act’s phased implementation schedule spans 2024–2027, with different provisions activating at staged intervals: prohibited practices (2 February 2025), general-purpose AI model obligations (2 August 2025), high-risk system obligations (2 August 2026), and full enforcement across remaining provisions (2 August 2027).

This phased architecture was a deliberate design choice — allowing industry time to adapt and member states to build oversight infrastructure. Nineteen months into implementation, the structure is under strain. The enforcement authority designation deadline of 2 August 2025 has passed with the majority of member states still non-compliant. The general-purpose AI obligations are in force, but enforcement against non-EU providers remains untested. The high-risk system compliance deadline of August 2026 is approaching against a backdrop of inadequate conformity assessment infrastructure.

ISAR Global’s assessment documents five critical implementation gaps currently active in EU AI Act rollout, analyses the enforcement architectures emerging at member-state level, and evaluates the strategic question European policymakers have not yet adequately addressed: whether the Act creates competitive disadvantage, regulatory clarity, or simply exports compliance costs without delivering governance outcomes.

The Phased Implementation Schedule

The AI Act operates on a four-stage activation timeline. Understanding which obligations are already in force — and which are not yet active — is essential context for evaluating enforcement capacity.

Stage 1 — 2 February 2025: Prohibited practices banned. This covers AI systems using subliminal manipulation, exploiting vulnerable groups, social scoring by public authorities, real-time remote biometric identification in public spaces (with narrow exceptions), and certain predictive policing and emotion recognition applications.

Stage 2 — 2 August 2025: General-purpose AI (GPAI) model obligations entered force. All new GPAI models released from this date must comply. The GPAI Code of Practice, finalised on 10 July 2025 and endorsed by the European Commission and AI Board on 1 August 2025, provides the voluntary compliance framework. Member states were also required to have designated national competent authorities by this date.

Stage 3 — 2 August 2026: High-risk AI system obligations enter force. This is the Act’s operational core: conformity assessment requirements, quality management systems, human oversight obligations, post-market monitoring, and registration in the EU AI database. For GPAI model providers, the Commission’s formal enforcement powers also activate from this date.

Stage 4 — 2 August 2027: Remaining provisions enter force, including obligations for high-risk AI systems integrated into regulated products such as medical devices, machinery, and vehicles.
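
For quick reference, this schedule reduces to a simple date lookup. The following minimal Python sketch uses only the activation dates stated above and reports which obligation stages are active on a given date; the stage labels are shorthand for the provisions listed in each stage.

```python
from datetime import date

# Activation dates exactly as given in the Act's phased schedule above.
STAGES = {
    date(2025, 2, 2): "Prohibited practices banned",
    date(2025, 8, 2): "GPAI model obligations; authority designation deadline",
    date(2026, 8, 2): "High-risk system obligations; Commission GPAI enforcement",
    date(2027, 8, 2): "Remaining provisions (AI in regulated products)",
}

def obligations_in_force(as_of: date) -> list[str]:
    """Return every obligation stage already activated on the given date."""
    return [label for start, label in sorted(STAGES.items()) if as_of >= start]

# As of this assessment (March 2026), the first two stages are live:
for stage in obligations_in_force(date(2026, 3, 1)):
    print(stage)
```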

The Five Implementation Gaps

1. Member-State Authority Designation: A Systemic Failure

The AI Act required member states to designate national competent authorities — both a market surveillance authority and a notifying authority — by 2 August 2025. This deadline has been widely missed.

As of early January 2026, based on publicly available information compiled by the artificialintelligenceact.eu monitoring project, only three member states had formally designated both notifying and market surveillance authorities. Ten had pending legislative proposals or had appointed one authority only. Fourteen member states had yet to designate any competent authority whatsoever — more than five months after the legal deadline.

The implementation gap among the bloc’s largest economies is particularly notable. Germany missed the August 2025 deadline entirely. Its Federal Cabinet only adopted the draft implementation act (KI-Marktüberwachungsgesetz- und Innovationsförderungsgesetz, or KI-MIG) on 11 February 2026 — explicitly to avoid European Commission infringement proceedings. The KI-MIG must still pass through the Bundestag and Bundesrat before becoming law. The draft designates the Federal Network Agency (Bundesnetzagentur) as the main market surveillance authority, with BaFin covering financial sector AI systems — but as of March 2026, neither designation is legally enacted.

Spain represents the positive outlier. It designated the Spanish Agency for the Supervision of Artificial Intelligence (AESIA) as national competent authority via Royal Decree 729/2023 — ahead of any formal EU AI Act deadline — and adopted a draft national AI Law in March 2025. Ireland is among the more advanced, having designated fifteen competent authorities across financial, health, consumer, and telecommunications sectors, with legislation establishing a National AI Office working through the Oireachtas.

Authority Designation — State of Play (January 2026)

3 member states: fully designated both notifying and market surveillance authorities.

10 member states: pending legislation or partial designation only.

14 member states: no designated authority of any kind — deadline missed by more than five months.

Germany: Federal Cabinet draft adopted 11 February 2026; still requires full parliamentary passage before legal enactment.

The structural problem is one of institutional capacity. The Act’s 46 provisions requiring active member-state implementation — authority designations, sandbox establishments, penalty specification — demand administrative and technical infrastructure that many states do not yet possess. A company deploying AI systems across the EU faces oversight from authorities operating at fundamentally different levels of institutional maturity, or in many cases, no operational authority at all.

2. High-Risk Classification Uncertainty

The Act’s core regulatory mechanism is a risk-based taxonomy. High-risk AI systems — defined in Annex III across eight categories including biometric identification, employment decisions, critical infrastructure, and access to essential services — face the Act’s most demanding obligations: conformity assessment, quality management systems, human oversight requirements, and post-market monitoring. The distinction between high-risk and limited-risk classification determines whether a system faces full compliance obligations or only basic transparency requirements.

The classification boundary is interpretive. Annex III includes “AI systems intended to be used for recruitment or selection of natural persons” as high-risk. This clearly covers automated CV screening used by employers as part of final hiring decisions. It is less clear whether it captures algorithmic job recommendation platforms, internal candidate ranking tools used as advisory aids, or AI systems that assist recruitment consultants without making final decisions.

The European AI Office published draft classification guidelines in January 2025. Rather than resolving ambiguity, the guidance creates procedural mechanisms for managing it: companies uncertain about classification can seek guidance from member-state authorities; authorities uncertain about interpretation can consult the AI Office; the AI Office can convene the AI Board for consensus positions. This multi-layer consultation process operates on regulatory timescales — weeks to months per query. AI deployment operates on commercial timescales — hours to days.

The practical consequence is unilateral classification: companies are making high-risk/limited-risk determinations using internal legal counsel and deploying without formal regulatory confirmation. No enforcement mechanism activates until a violation is detected post-deployment — which requires an authority with both technical capacity and political will to investigate.
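
The shape of that unilateral process can be made concrete. The sketch below is purely illustrative: the domain labels, the SystemProfile fields, and the triage rules are assumptions invented for this example rather than the Act's legal test, but they show how an internal compliance team might encode a first pass that routes borderline advisory-tool cases to counsel instead of self-clearing them.

```python
from dataclasses import dataclass
from enum import Enum, auto

class RiskTier(Enum):
    HIGH = auto()          # full Annex III compliance obligations
    LIMITED = auto()       # transparency obligations only
    NEEDS_REVIEW = auto()  # borderline: escalate to legal counsel

# Hypothetical, non-exhaustive stand-ins for Annex III categories.
ANNEX_III_DOMAINS = {
    "biometric_identification",
    "employment_decisions",
    "critical_infrastructure",
    "essential_services_access",
}

@dataclass
class SystemProfile:
    domain: str                 # e.g. "employment_decisions"
    makes_final_decision: bool  # does the system decide, or merely advise?

def triage(profile: SystemProfile) -> RiskTier:
    """First-pass internal triage: clear Annex III matches are treated as
    high-risk; advisory-only uses in an Annex III domain fall into exactly
    the interpretive gap described above and are escalated, not self-cleared."""
    if profile.domain not in ANNEX_III_DOMAINS:
        return RiskTier.LIMITED
    if profile.makes_final_decision:
        return RiskTier.HIGH
    return RiskTier.NEEDS_REVIEW

# Automated CV screening feeding final hiring decisions: clearly high-risk.
print(triage(SystemProfile("employment_decisions", makes_final_decision=True)))
# Advisory candidate-ranking aid: the ambiguous zone the guidance leaves open.
print(triage(SystemProfile("employment_decisions", makes_final_decision=False)))
```

The NEEDS_REVIEW branch marks the point where, in current practice, internal legal judgment is substituting for the formal consultation channels described above.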

3. Conformity Assessment Infrastructure

High-risk AI systems require conformity assessment before deployment. For most Annex III categories, the Act permits internal self-assessment: the deploying company assesses compliance, documents it, declares conformity, and affixes CE marking. External third-party assessment by an accredited notified body is required only for a subset of applications, principally remote biometric identification systems.

The notified body ecosystem for AI Act conformity assessment remains in early development. The designation process commenced on 2 August 2025. As of March 2026, notified bodies are still being designated across member states, and the European Commission has acknowledged that harmonised technical standards have not yet been published.

The conformity assessment infrastructure the Act assumes into existence is still being constructed — and high-risk system obligations enter force in August 2026.

The internal self-assessment route is available to the majority of high-risk system providers — but it requires companies to have quality management systems, technical documentation, and conformity assessment competence in-house. For large enterprises with established compliance infrastructure, this is manageable. For European AI startups and mid-tier developers, building this capability is a significant resource-allocation challenge, and there is as yet no established ecosystem of assessors to which the work can be outsourced.

4. General-Purpose AI Model Governance

The AI Act’s final negotiated text introduced General-Purpose AI (GPAI) as a distinct regulatory category. Foundation models capable of multiple downstream applications — GPT-4, Claude, Gemini, Mistral, and their successors — are subject to a two-tier framework. Standard GPAI models face transparency obligations: technical documentation, copyright compliance policies, and training data summaries. GPAI models with systemic risk — those trained on computation exceeding 10²⁵ floating-point operations — face additional obligations including adversarial testing, incident reporting, and cybersecurity measures.

The GPAI Code of Practice, developed through a multi-stakeholder process and finalised in July 2025, operationalises these obligations across three chapters: transparency, copyright compliance, and safety and security. Major model providers participated in its development. Adherence creates a presumption of compliance with the Act’s GPAI provisions. Formal Commission enforcement activates from 2 August 2026.

A significant jurisdictional question remains open. The AI Office’s position is that providing API access to EU-based developers constitutes market placement, triggering GPAI obligations. This interpretation is legally defensible but creates an accountability gap at the downstream application layer. The GPAI provider bears obligations at the model level. The developer who builds a high-risk application on that model bears high-risk system compliance obligations. If that developer is a small European startup, it may lack the resources for full conformity assessment — but the GPAI provider has no formal obligation regarding what its API users build.

The 10²⁵ FLOPs compute threshold as a proxy for systemic risk is a pragmatic approximation, not a principled risk measure. The Act allows the Commission to adjust the threshold through delegated acts, but it provides no capability-based evaluation mechanism beyond the compute proxy, and industry analysts have noted that a static threshold may fail to capture the most capable models within a few years as hardware and algorithmic efficiency advance.
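
To see the threshold's mechanics, the sketch below applies the commonly cited 6 × parameters × tokens heuristic for dense-transformer training compute. That heuristic comes from the scaling-laws literature, not from the Act, and the model figures here are hypothetical; the Act itself counts cumulative training compute, however it is estimated.

```python
# Illustrative check against the Act's systemic-risk compute tier.
# The 6 * params * tokens estimate is a scaling-laws heuristic for dense
# transformer training FLOPs, not a methodology prescribed by the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # the Act's GPAI systemic-risk line

def estimated_training_flops(params: float, tokens: float) -> float:
    return 6.0 * params * tokens

# Hypothetical training run: 400B parameters on 10T tokens.
flops = estimated_training_flops(params=4e11, tokens=1e13)
print(f"Estimated training compute: {flops:.1e} FLOPs")
tier = "systemic risk" if flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS else "standard"
print(f"GPAI tier under the compute proxy: {tier}")
```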

5. Enforcement Resource Constraints

The Act provides member-state authorities with penalty powers of up to €35 million or 7% of global annual turnover for serious violations — figures calibrated to create genuine deterrence against the largest global AI developers. Effective use of these powers requires technical capacity to evaluate AI systems, legal capacity to construct enforcement cases, and investigative capacity to detect non-compliance.
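
The penalty arithmetic is worth making explicit: for the most serious violations the ceiling is the higher of the fixed amount and the turnover percentage, so the turnover arm dominates for any large provider. A minimal sketch:

```python
def max_fine_eur(global_annual_turnover_eur: float,
                 fixed_cap_eur: float = 35_000_000,
                 turnover_share: float = 0.07) -> float:
    """Upper bound for the Act's most serious violations: the higher of
    the fixed cap and the share of global annual turnover."""
    return max(fixed_cap_eur, turnover_share * global_annual_turnover_eur)

# For a provider with EUR 200bn global turnover, the turnover arm dominates:
print(f"EUR {max_fine_eur(200e9):,.0f}")  # EUR 14,000,000,000
```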

The enforcement precedent is GDPR, and it is instructive. Most member-state data protection authorities lack sufficient resources to investigate complex cases systematically. The pattern of GDPR enforcement has been reactive — investigations triggered by complaints, whistleblower reports, and high-profile incidents — rather than proactive surveillance of compliance. The first major GDPR penalties took nearly three years from the Regulation entering force to materialise.

AI Act enforcement is likely to replicate this pattern, with added technical complexity. Evaluating whether an AI system meets conformity assessment requirements demands machine learning expertise, statistical evaluation capability, and an understanding of model architecture and training processes. National authorities in most member states do not yet employ AI specialists at the scale necessary for systematic oversight.

The Enforcement Architecture Emerging

Member states are building AI oversight through three distinct structural models, each with different implications for enforcement effectiveness.

Centralised single authority (Spain, Netherlands): Spain’s AESIA and the Netherlands’ Authority for Consumers and Markets (ACM) serve as central competent authorities with cross-sectoral AI oversight. This model offers clear institutional accountability and a single point of contact for companies seeking guidance. Its risk is that building AI technical expertise within a single generalist regulator is slow and resource-intensive.

Distributed sectoral oversight (France, Ireland): France has assigned AI oversight to existing sectoral regulators within their respective domains, coordinated by the DGCCRF. Ireland has designated fifteen authorities across sectors. This model leverages existing domain expertise but creates coordination complexity for AI systems that operate across sectoral boundaries — which includes many of the most consequential systems.

Draft framework pending enactment (Germany, others): Germany’s situation — Federal Cabinet adoption of a draft law in February 2026, with parliamentary passage still required — represents a significant portion of the EU’s economic weight operating without formal AI Act enforcement infrastructure more than six months after the authority designation deadline.

The European AI Office holds direct enforcement authority over GPAI model providers with systemic risk — the only EU-level AI enforcement body. For all other AI Act provisions, enforcement depends entirely on member-state authorities that vary dramatically in current operational capacity.

Corporate Compliance Reality

ISAR Global’s monitoring of corporate compliance responses reveals three distinct patterns that illustrate the structural asymmetry the Act creates.

Established technology enterprises — Microsoft, Google, SAP, Siemens — have published AI Act compliance frameworks, designated internal compliance officers, and invested in conformity assessment processes. These companies treat the Act as an extension of existing GDPR compliance infrastructure. The compliance cost is substantial but manageable within enterprise compliance budgets.

European AI startups face different economics. A company of twenty engineers deploying a recruitment AI system cannot absorb conformity assessment costs that may represent a significant fraction of its annual operating budget. The observed responses follow two patterns: under-classification, treating borderline high-risk systems as limited-risk to avoid the compliance burden, or delayed EU deployment pending regulatory clarity. The latter produces a perverse outcome: an Act designed to establish European AI governance may create incentives for European AI companies to stage initial deployment outside the EU.

The Act was designed with global technology companies in mind. Its compliance architecture assumes legal departments, conformity assessment budgets, and regulatory engagement capabilities that most European AI startups do not possess.

Non-EU GPAI model providers — OpenAI, Anthropic, Google DeepMind, Meta — have established EU legal entities and produced transparency documentation required under the GPAI provisions. Their compliance positioning carefully distinguishes between the GPAI layer, where they accept obligations, and downstream applications, where liability rests with the deploying developer. This is legally defensible. Whether it produces the governance outcomes the Act intended is a different question.

The Strategic Question Europe Has Not Addressed

The AI Act was presented as a governance framework that balances innovation with protection. Nineteen months into implementation, a harder question has emerged: does the Act create a sustainable regulatory environment, or does it impose asymmetric compliance costs without corresponding enforcement outcomes?

The competitive asymmetry concern — that the Act imposes compliance costs on EU-based AI development while struggling to enforce equivalent obligations on non-EU providers accessing the EU market via APIs — is real but not yet measurable. What is clearer is that the current condition is regulatory uncertainty rather than regulatory stringency, and uncertainty creates compliance costs without the offsetting benefit of clear rules companies can plan around. Strict but clear requirements can be planned for; ambiguous requirements force hedging across interpretive scenarios, a more expensive form of compliance.

The governance test — whether the Act actually reduces AI-related harm — remains empirically unmeasurable at present. The Act’s substantive obligations for high-risk systems do not activate until August 2026. Until then, the Act operates primarily through compliance anticipation: companies documenting processes, regulators publishing guidance, lawyers constructing conformity assessments, all in advance of operational enforcement. Whether this preparation translates into meaningful governance outcomes will not be assessable until enforcement actions begin to test the framework against real deployment decisions.

Forward Assessment

Three developments over the next twelve months will reveal whether EU AI Act implementation recovers from its slow start or follows the GDPR pattern of protracted ambiguity.

Authority designation completion: Member states must have functional oversight infrastructure before high-risk system obligations enter force in August 2026. Germany’s parliamentary passage of the KI-MIG is the most consequential single designation process given its economic weight within the EU. The pace at which the fourteen currently non-designated states establish competent authorities will determine whether enforcement is operationally possible in August 2026 or must be further deferred.

Conformity assessment infrastructure: As August 2026 approaches, the Commission faces three options: expand notified body designation aggressively; extend internal conformity assessment eligibility further; or delay enforcement deadlines. None are attractive. The choice will signal whether implementation prioritises enforcement rigour or deployment flexibility.

First GPAI enforcement test case: The AI Office’s formal enforcement powers against GPAI model providers activate in August 2026. A significant incident involving a major foundation model will provide the first test of whether the Act’s extraterritorial enforcement mechanisms function in practice against non-EU providers. If enforcement succeeds and produces substantive compliance changes, the Act establishes genuine jurisdictional reach. If it produces only procedural compliance, the limits of EU regulatory power over global AI infrastructure will be exposed.

Conclusion

The EU AI Act is the most ambitious regulatory undertaking in AI governance history — a framework that seeks to govern systems still being developed, using enforcement mechanisms still being constructed, against risks still being characterised. Its ambitions are proportionate to the challenge. Its implementation, nineteen months in, is not yet proportionate to its ambitions.

The authority designation failure — fourteen member states without any designated authority more than five months past the deadline, as of January 2026 — is the most significant early indicator of implementation trajectory. The conformity assessment infrastructure deficit is the most consequential operational challenge for August 2026. The GPAI accountability gap is the most substantive governance question remaining unresolved.

The Act may close these gaps through effective implementation over the next eighteen months. It may instead widen them as AI deployment accelerates faster than regulatory infrastructure can be built. The answer will determine whether the EU AI Act becomes a global governance template or a case study in the limits of legislative ambition applied to technical systems that are already deployed.

Sources: EU AI Act Implementation Timeline · National Implementation Plans Overview · GPAI Code of Practice (Final) · European Commission GPAI FAQ · Germany KI-MIG Cabinet Adoption · State of the Act: Key Member States (Nov 2025) · IAPP EU AI Act Regulatory Directory
