IMPLEMENTATION GAP ANALYSIS

UK AI Action Plan: Parliamentary Scrutiny Reveals Governance Gap

ISAR Global • 24 FEB 2026

Executive Summary

Between January 2025 and February 2026, UK parliamentarians submitted at least 62 written questions referencing the government’s AI Action Plan across both chambers. This brief analyses the 50 tabled questions that received full ministerial responses, drawn from the Department for Science, Innovation and Technology (DSIT), the Ministry of Justice (MoJ), and the Department for Transport (DfT). The analysis applies ISAR Global’s core “governance reality versus governance rhetoric” framework to assess the gap between the commitments ministers articulate and the delivery they can evidence.

The central finding is structural: the AI Action Plan has functioned predominantly as a rhetorical instrument in parliamentary exchanges, with a near-identical formulaic phrase — “in response to the AI Action Plan, the government committed to work with regulators to boost their capabilities” — deployed across at least fourteen separate responses to substantively different questions. This repetition signals a managed communications posture rather than a differentiated, evidence-based accountability practice. Where genuine delivery evidence is offered, it is concentrated in departmental implementation contexts, most notably the MoJ, and is absent in the areas of greatest political sensitivity: the timetable for AI legislation and the future of the regulatory architecture.

Volume and Pattern Analysis

The 50 questions analysed span a fourteen-month parliamentary window and involve three principal answering departments. DSIT accounts for the majority of questions, consistent with its lead responsibility for AI policy. The MoJ produced the most concentrated burst of scrutiny activity — seventeen questions tabled by a single Member of Parliament, Ben Obese-Jecty, on 10 October 2025, all referencing the MoJ’s own departmental AI Action Plan for Justice published on 31 July 2025. The DfT contributed a smaller set of responses, primarily referencing its Transport AI Action Plan published in June 2025.

The pattern of questioning reveals two distinct modes of parliamentary engagement. The first is thematic and diffuse — questions on AI regulation, safety, cybersecurity, and investment submitted by a range of members across both chambers, attracting largely generic ministerial responses. The second is systematic and document-anchored — the MoJ cluster represents a structured accountability exercise against a published plan, eliciting more operationally specific responses. This distinction is analytically significant: it demonstrates that the quality of ministerial accountability is a function of the specificity of the question as much as the quality of the ministerial brief.

Ministerial Response Quality

Across the DSIT responses, a pronounced templating effect is observable. The phrase “a range of existing rules already apply to AI systems, such as data protection, competition, equality legislation, and online safety” appears, in near-identical formulation, across responses to questions on AI safety, agentic AI, AI chatbots, cybersecurity, AI regulation, and content authentication. This is not inherently improper — consistent messaging on a cross-cutting policy question is a standard feature of ministerial communications — but it becomes a governance concern when the template displaces substantive engagement with the specific question asked.

Notably, when Dr Danny Chambers asked in January 2026 for the timetable for an AI Bill, the response stated that “the government does not speculate on legislation ahead of future parliamentary sessions” — a response that is constitutionally orthodox but analytically evasive. By contrast, a June 2025 response to Cat Eccles confirmed that “the Government intends to bring forward AI legislation” and that a “consultation on legislative proposals” would be launched “later this year.” By February 2026, no subsequent response had referenced that consultation as having been launched, representing a potential slippage against a publicly stated commitment.

MoJ responses, by contrast, demonstrate a materially higher level of operational specificity. Ministers confirmed that the Justice AI Unit was established in November 2024; that AI transcription and summarisation pilots are active across Kent, Surrey, Sussex and Wales; that all MoJ staff now have access to a secure AI assistant; that 300 Microsoft 365 Copilot licences were purchased for leadership judges in July 2025; and that elements of the Assessing Risks, Needs and Strengths (ARNS) tool will begin national rollout from March 2026. These are concrete, dateable, falsifiable claims — the hallmark of genuine accountability rather than rhetorical positioning.

Key Commitments and Timelines

The parliamentary record surfaces a set of specific commitments against which future performance can be measured. In the domain of AI legislation, the government committed in mid-2025 to launching a public consultation on legislative proposals “later this year.” In the domain of AI compute, over £1 billion has been committed to expand the AI Research Resource, with Scotland identified as the host of the UK’s most powerful supercomputer, backed by up to £750 million in additional investment. The AI Spärck Master’s programme will offer up to 100 fully funded places, and the Turing AI Global Fellowships provide up to £5 million per fellow. The overall Spending Review settlement for the AI Action Plan is confirmed at over £2 billion.

In the justice sector, the ARNS national rollout is scheduled for March 2026, providing a near-term accountability checkpoint. The Online Procedure Rule Committee’s consultation on its Inclusion Framework closed on 19 September 2025, with next steps anticipated in early 2026. The MoJ has confirmed it has not made a specific cost estimate for AI talent, training, or workforce planning — a transparency gap that will complicate future value-for-money assessments.

Across transport, the DfT references the Transport AI Action Plan (2025) and a dedicated digital twins programme, with the TransiT research hub backed by £46 million and a £5 million crisis resilience programme already active. Economic benefits from integrated transport digital twins are estimated at £1.85 billion over the next decade.

Rhetoric Versus Reality

The governance reality versus governance rhetoric gap manifests most acutely in three areas. First, on AI legislation: the government’s public posture has shifted perceptibly between mid-2025 and early 2026. June 2025 responses referenced imminent consultation; January 2026 responses retreated to procedural non-commitment. This is not necessarily evidence of policy reversal, but it is evidence of a communications discipline that prioritises flexibility over accountability.

Second, on regulatory architecture: across at least a dozen responses, ministers deflected questions about a dedicated AI regulator or a new regulatory body by restating the “point of use” doctrine and referencing regulator capability-building. The AI Growth Lab — described in a December 2025 response as the subject of a Call for Evidence — represents the most developed institutional proposal in the parliamentary record, but its status and timeline remain unspecified.

Third, on cross-government coordination: the MoJ clarified that its AI Steering Group does not include representatives from other government departments, despite the AI Action Plan being a whole-of-government initiative. Regular engagement is acknowledged, but the structural mechanisms for cross-departmental AI governance coherence are not evidenced in the parliamentary record.

Where the rhetoric-reality gap narrows, it is instructive. The MoJ’s departmental AI Action Plan for Justice has generated a measurably higher quality of parliamentary accountability, suggesting that sector-specific published plans with named actions and indicative timelines create conditions for more substantive scrutiny and more operationally grounded ministerial response.

Strategic Intelligence Assessment

The AI Action Plan — in its January 2025 iteration — has performed a dual function in UK governance: as a policy framework and as a parliamentary shield. Its acceptance of all 50 recommendations, confirmed in a May 2025 Lords response, provided the government with a principled basis for deferring legislative specificity while demonstrating strategic ambition. The £2 billion Spending Review settlement gives this posture financial credibility. However, the parliamentary record across 50 questions suggests that translation from plan to accountability is uneven, institutionally variable, and — in the critical domain of legislation — subject to progressive temporal deferral.

The most significant intelligence signal in this dataset is the divergence in accountability quality between DSIT’s whole-of-government AI governance responses and the MoJ’s departmental implementation responses. If the government’s AI governance ambition is to be operationally credible, the MoJ model — specific plans, named milestones, honest acknowledgement of gaps such as the absence of cost estimates for workforce planning — should be the standard, not the exception. At present, it remains the exception.

For analysts and stakeholders tracking UK AI governance, the near-term accountability checkpoints are the ARNS national rollout from March 2026, the anticipated legislative consultation (whose launch date remains publicly unconfirmed as of the analysis date), and the AI Growth Lab Call for Evidence outcome. These will provide the first substantive test of whether the AI Action Plan’s ambitious rhetoric is matched by institutional delivery at scale.