ISAR Global Research Directorate — March 2026. Classification: Associate Intelligence.
Executive Assessment
The EU AI Act entered into force in August 2024, representing the world’s first comprehensive binding legislative framework for artificial intelligence. Its architects described it as a landmark moment in democratic governance of emerging technology — a rules-based system that would establish clear obligations, protect fundamental rights, and provide legal certainty for businesses operating across the European single market. The political ambition was considerable. The implementation reality has been considerably more modest.
ISAR Global’s enforcement monitoring reveals a pattern that students of EU regulatory history will recognise immediately: the framework is architecturally sophisticated, the political commitment is genuine, and the operational capacity to deliver on either is substantially below what the rhetoric requires. Enforcement activity in the Act’s first implementation phase ran at approximately one-third of official projections. Compliance costs for affected businesses came in at nearly double initial estimates. Twenty-three of twenty-seven member states had initiated zero enforcement actions as of mid-2025. The gap between what the EU AI Act promises and what it currently delivers is not a failure of intent — it is a structural consequence of attempting to govern a fast-moving technical domain through a legislative instrument that took four years to pass and another two years to implement.
The Legislative Architecture
The EU AI Act operates through a risk-tiered classification system. AI applications are categorised as unacceptable risk (prohibited outright), high risk (subject to conformity assessments, registration, and ongoing oversight obligations), limited risk (transparency requirements only), or minimal risk (largely unregulated). The prohibited category covers applications such as social scoring by public authorities, real-time remote biometric identification in publicly accessible spaces (subject to narrow exceptions), and systems that exploit psychological vulnerabilities. The high-risk category is considerably broader, encompassing AI used in critical infrastructure, educational assessment, employment decisions, access to essential services, law enforcement, border management, and the administration of justice.
The enforcement architecture distributes responsibility across multiple layers. At the European level, the newly established EU AI Office within the European Commission holds primary responsibility for general-purpose AI models and for coordinating national enforcement. At member state level, National Competent Authorities bear responsibility for most enforcement activity, with each member state required to designate at least one national supervisory authority. The European Artificial Intelligence Board provides coordination and consistency mechanisms across the network. The intent was to combine European coherence with national implementation flexibility — a model familiar from GDPR, which the Act consciously echoes in structure.
The Enforcement Reality
The gap between the framework’s ambition and its operational delivery is most clearly visible in the enforcement data. Official projections for the Act’s first implementation phase anticipated over 150 compliance investigations across member states, more than €45 million in preliminary fines, and a dozen or more prohibitions of non-compliant high-risk systems. The actual figures (49 investigations, €8.2 million in fines, three system prohibitions) represent roughly a third of projected investigation volume, under a fifth of projected fine totals, and at most a quarter of projected prohibitions.
More revealing than the aggregate shortfall is its geographic distribution. Seventy-eight percent of all enforcement activity was concentrated in four member states: Germany, France, the Netherlands, and Ireland. These are not coincidentally the EU member states with the most developed AI regulatory capacity — dedicated specialist units, meaningful enforcement budgets, and established cross-sector coordination protocols. The remaining twenty-three member states, many of which had committed politically to robust AI governance at the time of the Act’s passage, had by mid-2025 produced no enforcement actions whatsoever.
The correlation between national AI regulatory capacity and actual enforcement activity is not loose or approximate; the statistical relationship is strong and consistent across the member states. The EU AI Act did not create enforcement capacity where none existed. It created enforcement obligations that existing capacity could meet or fail to meet, depending on the member state. The result is a nominally uniform European framework that operates, in practice, as a patchwork of jurisdictions with radically different levels of actual regulatory pressure.
The Compliance Cost Divergence
The corporate experience of EU AI Act compliance has diverged sharply from official projections in the opposite direction. Where enforcement has underperformed projections, compliance costs have substantially overrun them. Initial estimates suggested Fortune 500 European operations would face approximately €2.4 million per company in initial compliance costs. Actual spending in the first six months averaged €4.1 million — seventy-one percent above projection. Annual ongoing compliance costs similarly ran at nearly double initial estimates.
The primary driver of this overrun is the AI system classification challenge. Sixty-seven percent of compliance spending in the first phase was consumed not by implementing the Act’s requirements but by the prior question of determining whether a given system was subject to them. The Act’s risk categorisation criteria, while logical in principle, prove highly ambiguous in practice for systems that sit across category boundaries, that incorporate general-purpose AI components within sector-specific applications, or that are deployed across multiple member state jurisdictions with varying interpretive guidance.
The practical consequence has been a significant corporate strategic response. Over a third of surveyed Fortune 500 European operations implemented regulatory arbitrage strategies — restructuring legal entities, relocating high-risk AI development activities to lower-enforcement jurisdictions, or outsourcing AI system development outside the EU entirely. The Act intended to establish a level playing field for AI governance across the single market. In its current form, it has instead created strong incentives for corporate forum-shopping that its fragmented enforcement architecture is poorly equipped to counter.
The Coordination Gap
The structural challenge confronting EU AI Act implementation extends beyond resource constraints. The Act was designed during a period when AI capabilities were advancing rapidly, but the legislative process moved at the pace that democratic institutions require. The four years between the Commission’s initial proposal and the Act’s entry into force saw the emergence of large language models, foundation models, and generative AI capabilities that the original framework was not designed to address. The Act’s general-purpose AI provisions — added relatively late in the legislative process — reflect an attempt to retrofit governance of these capabilities into a framework built around a different conception of AI risk.
The EU AI Office, established to manage general-purpose AI model governance, represents genuinely novel regulatory territory. No comparable body existed before. Its staff are building regulatory practice and institutional knowledge simultaneously, operating without enforcement precedent, and developing interpretive guidance for the most commercially and strategically sensitive category of AI systems in the world. That this will take time is inevitable. That the political communication around the Act suggested otherwise is a straightforward illustration of the governance reality versus governance rhetoric dynamic that ISAR Global exists to document.
The coordination challenge is compounded at the international level. The EU AI Act’s extraterritorial provisions — which apply obligations to providers placing AI systems on the EU market regardless of where those providers are established — represent a significant assertion of European regulatory jurisdiction. The practical mechanics of enforcing those provisions against non-EU providers, particularly those operating from jurisdictions with divergent approaches to AI governance, remain largely untested. The Act creates the legal framework for extraterritorial enforcement. It does not resolve the diplomatic and practical questions that enforcement against US or Chinese AI providers would inevitably raise.
The Regulatory Capacity Finding
The enforcement gap is structural, not transitional. Twenty-three member states produced no enforcement activity in the Act’s first implementation phase. The correlation between national regulatory capacity and enforcement activity is strong enough to suggest that low-enforcement member states will not close the gap through political will alone — they require sustained investment in specialist regulatory capability that most have not yet committed to making.
Compliance costs substantially exceed projections. The classification challenge — determining which systems are subject to which requirements — consumed the majority of corporate compliance spending. This reflects genuine ambiguity in the Act’s categorisation criteria rather than corporate reluctance to comply, and points to interpretive guidance as the near-term priority for the EU AI Office.
Regulatory arbitrage is accelerating. Corporate strategies explicitly designed to exploit enforcement capacity differences across member states are already established practice among large EU market participants. The Act’s level-playing-field objective is being undermined by the same fragmented enforcement architecture that was intended to deliver it.
The UK competitive position is real but contingent. The predictability advantage that the UK’s sector-led approach offers relative to the EU’s current enforcement fragmentation is measurable and is being reflected in investment patterns. It is also contingent on the EU’s enforcement harmonisation trajectory — if member state capacity builds as projected over the next two to three years, the competitive differential will narrow.
Forward Assessment
Three developments will determine whether the EU AI Act delivers on its governance ambitions in the period ahead.
The EU AI Office’s handling of its first major general-purpose AI enforcement case will establish the interpretive precedent that the entire framework requires. The Commission has been deliberate about sequencing its initial enforcement priorities. The first case it pursues — against which provider, under which provisions, with what remedy — will signal more about the Act’s practical reach than any volume of published guidance. ISAR Global is monitoring the pipeline of potential enforcement targets closely.
Member state regulatory capacity development will determine whether the Act functions as a genuinely uniform European framework or consolidates as a high-enforcement regime in four or five jurisdictions surrounded by regulatory voids. The Commission’s ability to compel meaningful capacity investment in lower-capacity member states through the existing coordination mechanisms is limited. Without either significant EU-level funding for national authority development or a fundamental restructuring of enforcement architecture, the two-tier pattern established in the first implementation phase may prove durable.
The Act’s adequacy for governing frontier AI systems will face increasing pressure as capability development continues. The general-purpose AI provisions were an addition to a framework designed around a different risk model. Whether that addition proves sufficient, or whether the Act requires material amendment within its first five years, depends partly on regulatory adaptability and partly on the pace of capability development. On current trajectories, the question of sufficiency is likely to arise sooner than the Act’s architects projected.
ISAR Global Associate Intelligence — March 2026. Not for redistribution; associate access only. Primary source: ISAR Global Enforcement Intelligence Monitoring, August 2025.