Executive Summary

An analysis of 52 written parliamentary questions tabled between 26 December 2025 and 26 March 2026 reveals a government that is rhetorically committed to AI leadership whilst systematically declining to legislate, define thresholds, or establish enforceable obligations. The dominant ministerial posture across the dataset is one of confident deferral: existing frameworks are repeatedly cited as adequate, new statutory interventions are consistently declined, and specific commitments — where they exist at all — are framed in terms of process rather than outcome. The parliamentary record does not support the proposition that the United Kingdom has a coherent, binding AI governance architecture commensurate with the risks its own ministers acknowledge.

The most significant governance gap identified in this dataset concerns frontier AI risk. Ministers acknowledge the possibility of transformative and destabilising AI progress, yet the record reveals no published AI Security Strategy, no defined dangerous-capability thresholds, no statutory powers to delay or prohibit model releases, and no confirmed inclusion of AI loss-of-control scenarios in the National Risk Register. Simultaneously, the Government has signed Memoranda of Understanding with five major US-headquartered AI companies — OpenAI, Google DeepMind, NVIDIA, Cohere, and Anthropic — all of which are explicitly described as non-binding. The combination of extensive commercial partnership and minimal enforceable governance represents the central tension running through this parliamentary record.

A secondary but equally revealing pattern concerns the concentration of ministerial responses in a single junior minister, Kanishka Narayan of the Department for Science, Innovation and Technology, who answered the substantial majority of questions in this dataset. The consistency — and in several cases the near-identical wording — of his responses across substantively different questions suggests a centralised communications strategy designed to project stability rather than to engage with the specific concerns raised by parliamentarians. This brief identifies that pattern as a governance intelligence signal in its own right.

Volume and Pattern Analysis

Of the 52 questions analysed, the overwhelming majority — 38 questions — were directed to the Department for Science, Innovation and Technology (DSIT), confirming that parliamentary scrutiny of AI governance is concentrated in a single departmental locus. The remaining questions were distributed across the Ministry of Defence (1), the Ministry of Housing, Communities and Local Government (1), the Cabinet Office (1), the Department for Transport (1), the Department for Energy Security and Net Zero (1), and the Department for Education (2), with Lords questions answered by Baroness Lloyd of Effra accounting for a further 13 responses. This distribution reflects the structural reality that AI governance in the United Kingdom remains primarily a DSIT responsibility, with cross-departmental coordination visible only at the margins of the parliamentary record.

The ministerial response burden within DSIT fell almost entirely upon Kanishka Narayan, who answered 35 of the 52 questions. Baroness Lloyd of Effra answered 13 questions in the Lords. Only seven other ministers — Georgia Gould (Education), Luke Pollard (Defence), Dan Jarvis (Cabinet Office), Simon Lightwood (Transport), Michael Shanks (Energy), Samantha Dixon (MHCLG), and Olivia Bailey (Education) — contributed single responses each. The concentration of answers in Narayan and Lloyd of Effra suggests that AI governance scrutiny has not yet penetrated the broader ministerial cadre, and that cross-departmental accountability mechanisms remain underdeveloped.

By party, the Liberal Democrats were the most active questioners, with Freddie van Mierlo, Dr Danny Chambers, Victoria Collins, Mr Joshua Reynolds, Helen Maguire, Steve Darling, and Sarah Olney collectively responsible for approximately 13 questions. The Conservatives contributed meaningfully through Julia Lopez, Kevin Hollinrake, Ben Obese-Jecty, Peter Fortune, and Mr Richard Holden. Labour backbenchers — including Chris Bloore, Dan Aldridge, Nadia Whittome, Dame Chi Onwurah, Sir Mark Hendrick, Neil Duncan-Jordan, and Dr Scott Arthur — were active across infrastructure, environmental, and sovereignty themes. Iqbal Mohamed (Independent, Dewsbury and Batley) was the single most prolific individual questioner, tabling at least ten questions in a concentrated cluster on 2 January and 20 February 2026, focused almost exclusively on frontier AI risk, national security, and emergency preparedness. Lord Taylor of Warwick (Non-affiliated) was the most active Lords questioner, accounting for nine questions spanning safety, international partnerships, infrastructure, and start-up support. The presence of Reform UK (Lee Anderson) and the Green Party (Dr Ellie Chowns) in the dataset, though each contributing a single question, indicates that AI governance concern now spans the full parliamentary spectrum.
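The tabulations above — questions by department, by party, and by individual member — can be reproduced with a few lines of standard-library Python. The records below are illustrative placeholders only, not the actual 52-question dataset; department assignments for individual members are assumed for the sake of the sketch.

```python
from collections import Counter

# Illustrative placeholder records, NOT the actual dataset:
# each tuple is (member, party, answering_department).
questions = [
    ("Iqbal Mohamed", "Independent", "DSIT"),
    ("Iqbal Mohamed", "Independent", "DSIT"),
    ("Lord Taylor of Warwick", "Non-affiliated", "DSIT"),
    ("Victoria Collins", "Liberal Democrat", "DSIT"),
    ("Julia Lopez", "Conservative", "DSIT"),
    ("Dr Ellie Chowns", "Green", "DESNZ"),
]

# One Counter per dimension of the volume analysis.
by_department = Counter(dept for _, _, dept in questions)
by_party = Counter(party for _, party, _ in questions)
by_member = Counter(member for member, _, _ in questions)

print(by_department.most_common())  # departmental concentration
print(by_member.most_common(1))     # most prolific individual questioner
```

Run against the full dataset, the same three Counters yield every figure quoted in this section, which makes the volume analysis trivially auditable.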

Ministerial Response Quality

The most striking feature of the ministerial response record is the deployment of near-identical boilerplate language across substantively distinct questions. At least seven questions tabled by Iqbal Mohamed on 20 February 2026 — covering topics as varied as emergency powers over private AI developers, the publication of an AI Security Strategy, transparency on departmental responsibility for AI risk, risk modelling for frontier systems, and the role of the AI Security Institute in national security — received responses that opened with an identical formulation:

This government is taking a long‑term, science‑led approach to understanding and preparing for emerging AI risks, including the possibility of very rapid progress with transformative impacts on society and national security.

This verbatim repetition across questions addressing materially different policy areas — including the specific question of whether emergency powers exist to direct private AI developers during a national security incident — is not consistent with substantive ministerial engagement. It represents a formulaic deflection that acknowledges the premise of concern whilst declining to address the substance of the question. The parliamentary record therefore reveals a deliberate communications posture rather than a considered policy response.

A similar pattern is observable on the question of digital watermarking and AI content labelling. Three separate questions — from Dan Aldridge (UIN 117546), Victoria Collins (UIN 118022), and Nadia Whittome (UIN 120482) — received responses that were functionally identical, each citing the Deepfake Detection Challenge and the general-purpose nature of AI as reasons for continued exploration rather than regulatory action. The repetition of the same response to questions from different parties, tabled on different dates, confirms that ministerial answers in this domain are drafted centrally and applied uniformly regardless of the specific framing of the question.
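The verbatim reuse described above lends itself to automated detection. A minimal sketch, assuming the answer texts are available as plain strings keyed by question identifier; the sample texts are illustrative stand-ins, not the actual parliamentary answers, and the 0.7 similarity threshold is a rough heuristic rather than a calibrated value:

```python
import difflib
from itertools import combinations

def near_identical_pairs(responses, threshold=0.7):
    """Return (id_a, id_b, ratio) for answer pairs with near-identical text.

    `responses` maps an identifier (e.g. a UIN) to the answer text;
    difflib's SequenceMatcher ratio is a cheap proxy for verbatim reuse.
    """
    pairs = []
    for (id_a, text_a), (id_b, text_b) in combinations(responses.items(), 2):
        ratio = difflib.SequenceMatcher(None, text_a, text_b).ratio()
        if ratio >= threshold:
            pairs.append((id_a, id_b, round(ratio, 3)))
    return pairs

# Illustrative stand-in texts only -- not the actual answers.
boilerplate = ("This government is taking a long-term, science-led approach "
               "to understanding and preparing for emerging AI risks.")
sample = {
    "Q1": boilerplate + " We keep all risks under review.",
    "Q2": boilerplate + " The AI Security Institute leads this work.",
    "Q3": "AI should be regulated at the point of use, in context.",
}
flagged = near_identical_pairs(sample)
```

Here only the two boilerplate-led answers are flagged as a pair, which is the pattern the seven Mohamed responses would exhibit if screened the same way.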

Where responses do contain substantive content, it tends to be descriptive of existing activity rather than forward-looking in terms of commitment. The response to Lord Taylor of Warwick on AI start-up support (UIN HL14635) is among the more specific in the dataset, citing a £500 million allocation to the Sovereign AI Unit from the Spending Review. Similarly, the response to Peter Fortune (UIN 109679) confirms that on 28 January 2026, the Secretary of State wrote to 19 regulators requesting innovation plans. These represent genuine policy data points, but they are exceptions within a dataset dominated by process description and rhetorical affirmation.

The Lords responses from Baroness Lloyd of Effra are marginally more expansive than those from Kanishka Narayan, and in several cases provide additional institutional context — for example, the confirmation that the UK AI Security Institute was the world’s first state-backed organisation of its kind (UIN HL15213). However, they share the same structural reluctance to make specific, verifiable commitments, and they similarly rely on the citation of existing frameworks and ongoing processes as evidence of adequate governance.

Key Commitments and Timelines

The parliamentary record yields a limited but identifiable set of specific commitments. The most concrete concerns the regulatory innovation plans: the response to Peter Fortune (UIN 109679) confirms that DSIT wrote to 19 regulators on 28 January 2026, requesting that each publish a plan setting out how they will enable safe AI-powered innovation. A deadline is implied but not specified in the available text. This represents a genuine accountability mechanism, though its enforceability depends entirely on the content of those plans when published.

The AI Growth Zones programme (UIN 101120, Victoria Collins) represents a second area of concrete commitment. The response confirms that a formal application process has been completed and zones have been designated, though the number of formally designated zones is not stated in the available text. The response frames AIGZs as a national mission to unlock private investment and drive economic growth, but provides no timeline for regulatory reforms within those zones.

The £500 million Sovereign AI Unit allocation (UIN HL14635) is cited as a Spending Review commitment to support high-potential AI start-ups, representing one of the larger financial figures in the dataset. The AI Research Resource (AIRR) is confirmed as live and free to use for UK scientists and public sector researchers (UIN HL14203). The AI Energy Council is cited in multiple responses as the mechanism for assessing system-wide energy impacts of AI infrastructure (UINs 115531, 112751).

On the question of the EU AI Act’s application to Northern Ireland (UIN 116449), the response confirms that the European Council has published a proposal for a decision to apply the EU AI Act to a limited extent in Northern Ireland under the Windsor Framework, but declines to state when the outcome of Withdrawal Agreement Joint Committee discussions will be published. This is a significant omission given the regulatory divergence implications for businesses operating across the Irish border.

The response to Iqbal Mohamed on model release powers (UIN 102277) is notable for what it does not commit to:

The Government believes that AI should be regulated at the point of use, and takes a context-based approach.

This formulation explicitly declines to establish pre-deployment powers, confirming that the Government has no current mechanism to delay or prohibit the public release of a frontier AI model assessed as dangerous by its own AI Security Institute. This is a governance commitment by omission — and a significant one.

Rhetoric Versus Reality

The central rhetorical claim running through this dataset is that the United Kingdom’s existing regulatory frameworks are adequate for AI governance, supplemented by the work of expert regulators and the AI Security Institute. Ministers repeatedly invoke data protection law, competition legislation, equality frameworks, and online safety regulation as evidence that AI is already governed. The reality revealed by the parliamentary record is more troubling: these frameworks were not designed for AI, are not consistently applied to AI, and in several critical areas — particularly frontier model risk — provide no meaningful constraint whatsoever.

The gap is most acute on frontier AI safety. The Government acknowledges, through multiple ministerial responses, that advanced AI could lead to serious security risks and that the possibility of very rapid, transformative progress must be taken seriously. Yet the same record confirms: no AI Security Strategy has been published; no dangerous-capability thresholds have been established for CBRN-related AI capabilities (with responsibility deflected to the Home Office and DESNZ in the response to UINs 102273 and 102274); no statutory powers exist to delay model releases; and the question of whether AI loss-of-control scenarios will be included in the National Risk Register received only the response that all risks are kept under review (UIN 114724, Dan Jarvis). The rhetoric of science-led preparedness is not matched by any published strategy, defined threshold, or enforceable power.

On commercial partnerships, the Government’s rhetoric emphasises mutual benefit, UK ecosystem support, and safeguards for public interest. The reality is that all five AI Memoranda of Understanding — with OpenAI, Google DeepMind, NVIDIA, Cohere, and Anthropic — are explicitly non-binding (UIN HL12903 confirms the Google DeepMind MoU is described as non-binding). The response to Julia Lopez (UIN 120350) asserts that partnerships benefit the UK AI ecosystem, but provides no mechanism for enforcing that outcome. The response to UIN 102275 confirms that the Government does not disclose which frontier models have received pre-deployment safety testing, citing commercial and security sensitivities — a position that makes independent verification of safety claims impossible.

On AI content authenticity, the rhetorical commitment to transparency is clear across multiple responses. The reality is that three separate questions on digital watermarking and content labelling received identical responses citing ongoing exploration through the Deepfake Detection Challenge, with no legislative commitment and no timeline. The response to UIN 117779 on AI-generated political advertising similarly acknowledges the risk to democratic processes whilst declining to commit to any specific regulatory intervention.

On environmental governance of AI infrastructure, the Government’s rhetoric of responsible development and net zero alignment is not matched by binding requirements. The response to Sir Mark Hendrick (UIN 118684) confirms that inward investment agreements for new data centres do not include binding requirements on energy efficiency or renewable power sourcing — the response notes that data centres are subject to existing environmental and planning frameworks, which is a materially different proposition. The response to UIN 105278 on Drax biomass power for data centres illustrates the tension between AI infrastructure ambition and environmental commitment in concrete terms.

Strategic Intelligence Conclusions

1. The Government has no published AI Security Strategy, no defined dangerous-capability thresholds, and no statutory powers to delay or prohibit frontier model releases, meaning that the UK AI Security Institute's safety assessments carry no enforceable consequence — a governance lacuna that senior decision-makers in both the public and private sectors should treat as a material risk factor.

2. All five AI Memoranda of Understanding signed with major US technology companies are explicitly non-binding, and the Government's inability to disclose which models have received pre-deployment safety testing renders independent verification of its safety assurance claims structurally impossible.

3. The near-identical ministerial responses to at least seven substantively distinct questions on frontier AI risk, tabled on the same date, indicate a centralised communications strategy designed to contain parliamentary scrutiny rather than to engage with it — a pattern that should inform assessments of the reliability of ministerial statements in this domain.

4. The Government's commitment to regulate AI at the point of use rather than at the point of development creates a structural dependency on sectoral regulators whose capabilities, resources, and AI-specific mandates remain uneven, as evidenced by the January 2026 letters to 19 regulators requesting innovation plans that have not yet been published.

5. The unresolved question of the EU AI Act's application to Northern Ireland under the Windsor Framework represents a live regulatory divergence risk for businesses operating across the Irish border, and the Government's refusal to provide a timeline for the outcome of Joint Committee discussions compounds that uncertainty.

6. The concentration of AI governance scrutiny in a single junior minister and the absence of cross-departmental accountability mechanisms visible in the parliamentary record suggest that AI risk — including national security, environmental, and democratic integrity dimensions — is not yet embedded in the broader machinery of government at a level commensurate with its acknowledged significance.

Dataset Note

This analysis is based on 52 written parliamentary questions retrieved from the UK Parliament record and submitted in full for analysis, covering the period from 26 December 2025 to 26 March 2026, with a retrieval date of 26 March 2026. The dataset encompasses questions from both the House of Commons and the House of Lords and spans multiple government departments, though DSIT questions predominate. Limitations include the inherent selectivity of written questions as a data source — they reflect the concerns of individual parliamentarians rather than a systematic audit of government policy — and the fact that ministerial responses in written form are frequently more guarded than oral exchanges. Oral questions, select committee evidence, and published departmental documents would be required to construct a fully comprehensive picture of UK AI governance commitments. Nonetheless, the written parliamentary record remains the most reliable source of on-the-record ministerial commitment, and the patterns identified in this dataset are analytically robust within those constraints.