In remarks delivered at the Royal United Services Institute and published on GOV.UK, Liz Kendall presented artificial intelligence as a strategic state priority rather than a sectoral policy issue. The speech links a more contested international order with faster technological change, arguing that the two are moving together and that AI now sits at the centre of both economic power and hard power. Kendall's case rests on speed and concentration. She says model capability has accelerated sharply in recent years and that control over advanced compute is becoming more concentrated, with around 70 per cent of global AI compute in the hands of five companies. Read in policy terms, that is the basis for intervention: ministers are not describing a normal market, but a strategic capability in which dependency carries national risk.
The speech uses AI sovereignty in a narrower and more practical sense than full technological self-sufficiency. Kendall does not call for Britain to build every layer of the stack at home or to turn away overseas capital and foreign technology. Instead, sovereignty is defined as reducing over-dependence in critical areas, raising resilience where it matters most and ensuring Britain keeps meaningful influence over standards, supply and deployment. That distinction matters. It places the government closer to selective industrial policy than to an attempt to produce everything at home. Britain is being cast as an open economy that still wants control over talent, compute, security evaluation, procurement and parts of the hardware chain. For departments and regulators, the message is that openness remains the default, but not where it leaves the state exposed.
That framework sits behind Sovereign AI, which Kendall describes as a £500 million vehicle to help British AI firms start, scale and compete internationally. According to the speech, the offer goes beyond capital. Companies would be able to access the UK's largest supercomputers on a fully funded basis, use an accelerated visa route with decisions in one working day, and draw on support linked to the British Business Bank's £2 billion investment capacity. The speech also points to procurement as a strategic tool rather than an administrative afterthought. Kendall says the Ministry of Defence has ringfenced £400 million to support British innovation, including AI. Taken together, the proposal gives the state four roles at once: investor, customer, gatekeeper for specialist talent and provider of public compute. That is a marked shift from a lighter-touch technology policy.
Kendall cites two early investments as proof of concept: Callosum, described as building future AI infrastructure, and Ineffable Intelligence, backed alongside the British Business Bank and founded by David Silver, one of the architects of DeepMind. The names matter because they show the types of company ministers want to back: not only applications, but foundational capability and frontier research. For the market, this is an attempt to address the UK's long-running weakness in late-stage scaling. For government, it creates a different test. Direct backing can move promising firms through the funding gap, but it also demands clear selection criteria, disciplined governance and an honest account of failure. A sovereignty policy only retains credibility if ministers can show why certain firms receive public support and what public benefit follows.
The next stage, according to the speech, is an AI Hardware Plan due to be launched at London Tech Week in June. Kendall argues that the AI chips market is expanding at roughly 30 per cent a year and could reach US$1 trillion in the early 2030s, with a UK share of 5 per cent worth around US$50 billion in revenue. The speech frames this as a realistic industrial opening rather than a symbolic bid to rival the United States or China head-on. The argument is that AI compute is diversifying and that specialist hardware for particular tasks creates room for new entrants. Kendall points to Britain's record in computing, the global reach of Arm's processor design, and newer firms such as Fractile, Olix, Lumai, Optalysys and Salience Labs. She also links that ambition to ARIA's £100 million scaling compute programme, including £50 million for a scaling inference lab. If delivered well, that would amount to targeted state support for design, testing and commercial proof rather than a general exhortation to innovate.
The speech is equally clear that sovereignty is not being pursued alone. Kendall ties the domestic offer to closer work with allies, especially other middle powers. She cites existing arrangements with Germany, France, Canada and Japan, as well as wider discussions involving Australia and the Republic of Korea, as evidence that Britain wants to be a convening state in technology policy rather than a detached observer. Two practical tasks stand out. The first is joint investment in parts of the AI value chain where partners have complementary strengths. The second is shared resilience, especially on security and model assurance. This is a familiar British approach in defence and science policy: build enough national capability to matter, then use alliances to widen reach and reduce exposure.
That allied approach is most developed in the section on AI security. Kendall presents the AI Security Institute and the National Cyber Security Centre as UK institutions with international weight. The speech says the Institute tested Anthropic's claims about its Mythos model identifying novel cyber vulnerabilities, and it notes that other frontier systems, including OpenAI's GPT 5.5, are showing comparable capabilities. The policy consequence is that model evaluation is no longer a specialist safety debate on the margins of innovation policy. It is being treated as a national capability with direct bearing on cyber risk, defence planning, public trust and international standard-setting. Kendall says that, in July, the UK-chaired network of AI security institutes will publish best practice on the science of evaluating models. For firms hoping to sell into government or critical sectors, that signals a tighter connection between technical assurance and market access.
The closing political message of the speech is that Britain should not attempt to pause AI, but shape its development through active state involvement. Kendall's account of the state's role is unusually direct. Ministers are being asked to do more than write guidance or adjust regulation; they are being asked to steer capital, sponsor compute, speed up high-skill migration, use procurement more deliberately and broker allied rules on deployment. Read as a policy brief, the speech offers a coherent doctrine: openness, but with fewer strategic dependencies; innovation, but with tighter security evaluation; industrial support, but tied to national resilience. The harder question is delivery. A sovereignty agenda will be judged less by the language of ambition than by whether firms can actually obtain compute, visas, contracts, testing routes and follow-on finance. If those channels appear at pace, the RUSI speech will mark a substantive shift in UK technology policy rather than a well-framed statement of intent.