Executive Summary

The parliamentary record for the period from 31 December 2025 to 31 March 2026 yields a modest but analytically significant dataset of three written questions directed at the government’s AI governance posture. Drawn from both the House of Commons and the House of Lords, and spanning two separate departments — the Department for Science, Innovation and Technology (DSIT) and the Department for Education — the questions collectively illuminate a government that is confident in its promotional narrative around AI investment but demonstrably reluctant to subject that narrative to independent scrutiny. The pattern is one of assertion over accountability.

Two of the three questions were tabled by Iqbal Mohamed, the Independent MP for Dewsbury and Batley, and both probe the substance behind headline investment figures associated with CoreWeave’s announced commitments to UK AI infrastructure. The third, from Lord Taylor of Warwick in the Lords, addresses the foundational question of whether schoolchildren across England are being equipped with the digital and AI literacy necessary to participate in the economy that these infrastructure investments are ostensibly building. Taken together, the three questions expose a structural tension at the heart of UK AI policy: the government is aggressively courting and publicising private capital commitments whilst simultaneously declining to audit whether those commitments materialise, and whilst the educational pipeline required to realise the long-term benefits of that infrastructure remains unevenly distributed and only partially reformed.

The ministerial responses, all answered on 30 March 2026, are notable less for what they confirm than for what they decline to address. Ministers pivot consistently to promotional statistics — global investment rankings, startup league tables, curriculum review acceptances — rather than engaging with the specific accountability and implementation questions posed. This brief assesses that pattern as a deliberate rhetorical strategy, and one that senior decision-makers should treat with appropriate scepticism when evaluating the government’s actual governance grip on the AI sector.

Volume and Pattern Analysis

The three questions retrieved represent a narrow but coherent sample. All three were tabled within a four-week window — two on 10 March 2026 and one on 12 February 2026 — and all three received answers on the same date, 30 March 2026, suggesting a degree of administrative coordination in the government’s response scheduling. The concentration of answers on a single date, particularly for questions tabled nearly four weeks apart, warrants note: it may reflect departmental convenience rather than any urgency of response.

The departmental distribution is instructive. Two questions fell to DSIT, the lead department for AI policy, answered by the Parliamentary Under-Secretary of State Kanishka Narayan. One question fell to the Department for Education, answered by Baroness Smith of Malvern. The absence of questions to the Cabinet Office, the Treasury, or the Home Office — all of which have significant AI governance interests — reflects the limitations of a small dataset rather than any definitive conclusion about parliamentary attention, but it does suggest that scrutiny of AI policy remains concentrated in the science and technology portfolio rather than distributed across the machinery of government as one might expect for a genuinely cross-cutting policy priority.

The questioners themselves are revealing. Iqbal Mohamed, who sits as an Independent, is not a member of the official opposition and carries no committee brief in this area. His questions are nonetheless among the sharper accountability interventions in this dataset, targeting the specific mechanics of investment verification. Lord Taylor of Warwick, a non-affiliated life peer, brings a similarly independent perspective to the education question. The absence of questions from Conservative frontbench spokespeople or from the Liberal Democrats in this sample — whilst not conclusive given the dataset size — may suggest that formal opposition scrutiny of AI governance is either occurring through other parliamentary mechanisms or has not yet crystallised around these specific accountability themes.

Ministerial Response Quality

The quality of ministerial responses in this dataset is, on the whole, disappointing when measured against the standard of genuine parliamentary accountability. Both DSIT responses exhibit a pattern that experienced parliamentary analysts will recognise immediately: the minister answers a question adjacent to the one asked, deploying favourable statistics to create the impression of substantive engagement whilst leaving the core inquiry unaddressed.

In response to UIN 119686, which asked specifically how many new datacentres have been constructed as a result of CoreWeave’s investment, Kanishka Narayan provides figures on announced investment commitments — £2.5 billion in total, with £1.5 billion directed towards the Lanarkshire AI Growth Zone — but the answer as recorded does not appear to state how many datacentres have actually been built. The distinction between announced investment and constructed infrastructure is precisely the accountability gap that the question was designed to probe, and the response does not close it.

The response to UIN 119683 is, if anything, more evasive. Asked directly whether DSIT audits private investment commitments included in its press announcements, the minister responds with a statement about the UK’s global ranking for AI investment and its position as the second highest producer of AI startups globally. This is a textbook non-answer: the question was procedural and specific, asking about an internal audit mechanism, and the response is promotional and general. No audit process is described, confirmed, or denied. The absence of a direct answer to a direct question is itself an answer of sorts, and analysts should treat it accordingly.

The Department for Education’s response to HL14634 is comparatively more substantive, in that Baroness Smith of Malvern does engage with the specific subject matter — curriculum reform, AI content, and digital literacy disparities. However, the response leans heavily on the acceptance of recommendations from the independent Curriculum and Assessment Review, which is a commitment to process rather than to outcome. The language of “refreshed computing curriculum” and “essential AI content” is aspirational rather than operational, and no timelines, funding figures, or measurable targets are cited in the portion of the answer available for analysis.

Key Commitments and Timelines

Extracting firm commitments from this dataset requires careful parsing, because the ministerial responses blend factual assertion with policy aspiration in ways that can obscure the distinction between what has been done and what is intended.

On AI infrastructure, the most concrete figure in the record is CoreWeave’s announced investment of £2.5 billion, of which £1.5 billion is committed to the Lanarkshire AI Growth Zone at DataVita’s campus. The minister describes this as deploying “cutting-edge semiconductors,” which implies a specific technical commitment, but the parliamentary record as provided does not confirm construction completion, operational status, or a delivery timeline. These are announced commitments, not verified outcomes, and the government’s response to the audit question suggests no formal mechanism exists to track the gap between the two.

On education, the government’s stated commitment is to implement the relevant recommendations of the Curriculum and Assessment Review, incorporating AI content into a refreshed computing curriculum. Baroness Smith’s response indicates that the government “has accepted the relevant recommendations,” which is a policy position of record. However, acceptance of recommendations is a preliminary step; the parliamentary record does not reveal when the refreshed curriculum will be implemented, what resources will accompany it, or how the government intends to address the geographic and socioeconomic disparities in digital literacy that Lord Taylor’s question specifically raised.

The absence of any stated timeline across all three responses is a significant finding in itself. A government genuinely confident in its AI governance programme would be expected to anchor its commitments to specific milestones. The parliamentary record for this period contains none.

Rhetoric Versus Reality

The central analytical finding of this brief is that the government’s AI governance posture, as revealed in the parliamentary record, is substantially more promotional than it is operational. The gap between rhetoric and reality manifests in three distinct dimensions.

First, on investment verification: the government has built a significant part of its AI narrative around headline investment figures from major technology companies, including CoreWeave. When asked directly whether those figures are audited, the minister does not confirm that any audit mechanism exists. This is a material governance gap. Investment announcements made at high-profile summits or in ministerial press releases carry political weight regardless of whether they are ever realised; without a verification process, the government cannot distinguish between genuine capital deployment and reputational positioning by investors. The parliamentary record suggests the government either does not have such a process or is unwilling to describe it on the record.

Second, on infrastructure delivery: the question of how many datacentres have actually been constructed — as opposed to announced or committed — goes unanswered. This matters because the government’s AI strategy depends in part on physical infrastructure being built and operational. If the answer were straightforwardly positive, one would expect a minister to provide it. The evasion implies either that the number is lower than the investment rhetoric suggests, or that the government does not track construction progress with sufficient granularity to answer the question with confidence.

Third, on the education pipeline: the government’s acceptance of curriculum review recommendations is welcome, but it sits in uncomfortable tension with the scale of the AI investment narrative being promoted elsewhere. A government that announces billions in AI infrastructure investment whilst being unable to specify timelines or resources for the curriculum reforms needed to produce the workforce to operate that infrastructure is, at minimum, sequencing its communications more effectively than its policy. The digital literacy disparities raised by Lord Taylor remain unaddressed in any concrete sense by the ministerial response.

Strategic Intelligence Conclusions

1. The government’s refusal to confirm or describe an audit mechanism for private AI investment commitments represents a verifiable accountability gap that opposition parties, select committees, and civil society organisations should exploit through targeted follow-up scrutiny.
2. CoreWeave’s £2.5 billion announced investment in UK AI infrastructure remains, on the parliamentary record, an unverified commitment rather than a confirmed delivery, and decision-makers should weight it accordingly in any assessment of UK AI capacity.
3. DSIT’s consistent deployment of global ranking statistics in response to specific procedural questions is a deliberate rhetorical strategy that signals ministerial discomfort with accountability questions rather than confidence in the underlying policy architecture.
4. The Department for Education’s acceptance of Curriculum and Assessment Review recommendations on AI and digital literacy constitutes a policy position of record but carries no enforceable timeline, funding commitment, or measurable target against which progress can be assessed.
5. The concentration of AI governance scrutiny in DSIT, with no questions in this dataset directed at the Treasury, Cabinet Office, or Home Office, suggests that parliamentary oversight of AI policy has not yet matched the cross-departmental reality of AI’s impact on government operations and public services.
6. The most substantive accountability pressure on UK AI governance in this period is coming from independent and non-affiliated parliamentarians rather than from official opposition structures, indicating a scrutiny gap that formal opposition parties should urgently address.

Dataset Note

This intelligence brief is based on a total of three written parliamentary questions retrieved from the UK Parliament record, all three of which were submitted for analysis. The dataset covers the period from 31 December 2025 to the retrieval date of 31 March 2026, encompassing questions tabled between 12 February 2026 and 10 March 2026. The small size of the dataset places material limitations on the analytical conclusions that can be drawn: patterns identified here — including departmental distribution, ministerial response style, and questioner profile — may not be representative of the broader parliamentary treatment of UK AI governance during this period, and should be read as indicative findings rather than statistically robust conclusions. A comprehensive assessment would require retrieval and analysis of the full universe of AI-related parliamentary questions across all question types — written, oral, urgent, and topical — for the same period.