IMPLEMENTATION GAP ANALYSIS

UK AI Regulation: Eight Months of Deferred Legislation and Repeated Boilerplate

ISAR Global • 23 FEB 2026

Executive Summary

Between June 2025 and February 2026, Members of Parliament and Peers submitted no fewer than 53 written questions on the subject of artificial intelligence regulation, of which 25 have been analysed for this brief. The resulting dataset offers a rare longitudinal window into the UK Government’s evolving — or more precisely, static — posture on AI governance. The central finding is stark: across eight months of sustained parliamentary scrutiny, ministerial responses display a high degree of formulaic repetition, a consistent refusal to commit to legislative timetables, and a structural rhetorical strategy of reframing novel regulatory challenges as already addressed by existing law. Where legislative commitments were made, they were expressed in terms sufficiently vague to be unfalsifiable within any reasonable parliamentary cycle.

Volume and Pattern Analysis

The 25 questions under analysis were directed overwhelmingly to the Department for Science, Innovation and Technology (DSIT), and originated from both the House of Commons and the House of Lords. The temporal distribution is significant: the earliest questions in this dataset (June 2025) elicited responses that explicitly referenced forthcoming AI legislation and a planned public consultation to be launched “later this year.” By February 2026, ministerial answers had subtly shifted away from these forward commitments, reverting instead to descriptions of existing activity.

Thematically, the questions clustered around five distinct concerns: the adequacy of existing regulation for novel AI applications (including chatbots, agentic AI, and autonomous self-modifying systems); the case for a dedicated AI regulator or statutory framework; the timetable for AI-specific legislation; the governance of frontier and superintelligence-capable systems; and the coordination of AI oversight across sectoral regulators. This breadth of parliamentary concern reflects genuine cross-party anxiety about governance gaps, yet the ministerial response architecture treated the majority of these distinct questions as variations on a single theme warranting a single template answer.

Ministerial Response Quality

The most analytically significant feature of the dataset is the degree to which ministerial answers recycle a core set of formulations. The phrase “AI is a general-purpose technology with a wide range of applications, which is why the UK believes that the vast majority of AI systems should be regulated at the point of use” — or a close variant — appears in at least eight of the 25 responses examined. Similarly, the observation that “a range of existing rules already apply to AI systems, such as data protection, competition, equality legislation and other forms of sectoral regulation” recurs across responses dated from June 2025 through to February 2026.

This formulaic consistency is not, in itself, evidence of bad faith. Departments routinely maintain policy lines for coherence. However, the repetition becomes analytically problematic when the questions being asked are substantively different. A question asking specifically about regulatory oversight of AI systems capable of autonomously modifying their own programming (Item 11, November 2025) received a response that was substantively indistinguishable from one addressing general AI regulation (Item 12, also November 2025). A question about regulating artificial superintelligence (Item 8) was met with a response that, while acknowledging the seriousness of the issue, offered no regulatory mechanism beyond the existing remit of the AI Security Institute.

Two questions (Items 18 and 19, both dated 8 July 2025) were answered simply by reference to an answer given on 14 July to Question 66210 — a question not present in this dataset. This cross-referencing approach, while procedurally permissible, provides no analytical traction and represents a further signal of the limited substantive engagement with parliamentary scrutiny on this topic.

Key Commitments and Timelines

The dataset does contain several identifiable commitments that warrant tracking. In June 2025, responses to questions on frontier AI legislation (Items 21, 23, 24, 25) stated unambiguously that the Government “intends to bring forward AI legislation” and would “launch a consultation on legislation later this year.” One response (Item 24, June 2025) specified that proposals were being refined and that a “public consultation later this year” would follow.

By October 2025, attention had partially shifted to the “AI Growth Lab” initiative, described as a cross-economy sandbox to “drive responsible AI innovation and adoption.” A call for evidence on the AI Growth Lab was noted as live in October 2025 with a closing date of 2 January, implying a January 2026 deadline for responses. No subsequent parliamentary answer in this dataset confirms what action followed from those responses.

By January and February 2026, references to forthcoming AI legislation had become more guarded. The January 2026 response (Item 2) stated flatly that “the government does not speculate on legislation ahead of future parliamentary sessions” — a formulation that effectively extinguishes the earlier commitment to consult “later this year.” The February 2026 response (Item 1) made no reference to planned legislation whatsoever, instead citing new criminal offences in the Crime and Policing Bill relating to AI tools as evidence of regulatory progress.

Rhetoric Versus Reality

The gap between governance rhetoric and governance reality in this dataset is clearest across three dimensions.

The consultation commitment: From June to August 2025, multiple ministerial answers referenced a forthcoming public consultation on AI legislation. By January 2026, this commitment had been replaced by a standard non-speculation formula. No consultation had been published as of the analysis date. The commitment was made at least four times across this dataset; it appears to have been quietly retired without formal announcement.

The “existing regulation is sufficient” defence: Ministers consistently responded to questions about novel AI risks — agentic AI, self-modifying systems, AI chatbots, artificial superintelligence — by referencing existing data protection, equality, and competition legislation. This posture is rhetorically effective but analytically weak. The questions being posed by parliamentarians were premised precisely on the observation that existing frameworks were not designed for these applications. The ministerial answer did not engage with this premise.

The independent regulator question: Item 16 (July 2025) asked directly about the merits of creating an independent AI regulator. Item 22 (June 2025) asked whether the AI Security Institute would become the primary AI regulator. Both received identical “regulated at the point of use” responses, with no substantive engagement with the structural governance question. The Government’s position — that existing sectoral regulators are “best placed” to oversee AI — is a legitimate policy choice, but its repeated assertion without supporting evidence or acknowledgement of the coordination challenges involved does not constitute a serious analytical response to the questions posed.

Strategic Intelligence Assessment

This dataset is consistent with a Government that made a genuine early-term commitment to AI legislation — visible in the June 2025 responses — but subsequently encountered internal or political constraints that delayed or shelved that commitment. The rhetorical infrastructure of “existing regulation applies” and “regulated at the point of use” has been deployed with increasing frequency as a substitute for legislative progress, suggesting that the policy development process has stalled rather than advanced.

The AI Security Institute (AISI) functions as the Government’s primary evidence of substantive action on frontier AI risk. AISI is referenced in at least seven of the 25 responses analysed. However, its remit — described as building “an evidence base on risks” — is preparatory rather than regulatory. Citing AISI activity in response to questions about enforcement mechanisms, mandatory pre-deployment testing, or statutory oversight conflates research capacity with regulatory authority.

The October 2025 AI Growth Lab initiative represents the most concrete new policy development visible in this dataset. Its call for evidence closed in January 2026, and its translation into operational policy remains unconfirmed. The framing of the Growth Lab — emphasising innovation facilitation rather than risk mitigation — is consistent with a broader government preference for pro-growth regulatory positioning over precautionary governance architecture.

For practitioners monitoring UK AI governance, the parliamentary record examined here suggests that the Government’s legislative timeline has slipped by at least one parliamentary session from its own stated ambition. The gap between the June 2025 commitment to consult “later this year” and the February 2026 refusal to “speculate on legislation” is not a minor scheduling adjustment — it is a substantive policy retreat conducted through the quiet substitution of one boilerplate formula for another. Parliamentary scrutiny has been met consistently but not seriously engaged.

ISAR Global will continue to monitor parliamentary activity on this dossier. The 28 questions not included in this analysis may contain material confirming or complicating these findings, and a follow-on synthesis will be produced when that data is available.
