Westminster Policy News & Legislative Analysis

OpenAI and Microsoft push UK AI alignment fund past £27m

OpenAI and Microsoft have joined the UK AI Security Institute’s international Alignment Project, adding private‑sector backing to a government‑led effort to keep advanced AI systems safe and controllable. The Department for Science, Innovation and Technology (DSIT) said OpenAI has pledged £5.6 million and, with new support from Microsoft and others, the fund now exceeds £27 million. Around 60 grants have been awarded across eight countries, with a second call due in summer 2026. (gov.uk)

The timing is explicit. The press notice was published on Wednesday 19 February 2026, with ministers framing the announcement as the India AI Impact Summit in New Delhi drew to a close on Friday 20 February. The summit programme ran from 16 to 20 February at Bharat Mandapam, with leader‑level sessions on 19–20 February. (gov.uk)

AISI defines alignment as ensuring increasingly capable AI systems act as intended without harmful behaviours. Ministers argue that visible progress in this field is necessary to sustain public confidence and enable adoption across public services and industry, citing early productivity gains such as faster analysis of medical scans. (gov.uk)

The coalition blends public, private and philanthropic participation. Alongside OpenAI and Microsoft, supporters include the Canadian Institute for Advanced Research, Australia’s government‑backed AI Safety Institute, Schmidt Sciences, Amazon Web Services, Anthropic, the AI Safety Tactical Opportunities Fund, Halcyon Futures, the Safe AI Fund, Sympatico Ventures, Renaissance Philanthropy, UK Research and Innovation and the Advanced Research and Invention Agency. (gov.uk)

Programme governance draws on an external advisory board including Yoshua Bengio, Zico Kolter, Shafi Goldwasser and Andrea Lincoln, with further members such as Buck Shlegeris, Sydney Levine and Marcelo Mattar. In addition to grant finance, AISI provides access to compute and continuing academic mentorship from its in‑house scientists. (gov.uk)

Application materials set out awards ranging from £50,000 to £1,000,000, dedicated cloud credits of up to £5 million from AWS, and opportunities to work with AISI technical teams. DSIT confirms the first round has closed and that a second round will open in summer 2026. (alignmentproject.aisi.gov.uk)

AISI signposts priority research areas spanning interpretability; evaluation and guarantees in reinforcement learning; methods for post‑training and elicitation; benchmark design and evaluation; learning theory; information theory; computational complexity; probabilistic methods; cognitive science; economic and game theory; and empirical investigations into AI monitoring and red‑teaming. (alignmentproject.aisi.gov.uk)

For UK universities, independent institutes and spin‑outs, the Alignment Project creates a defined domestic funding channel for work that complements the international network of publicly backed AI safety institutes signalled at the 2024 AI Seoul Summit. Positioning projects within the summit's safety‑innovation‑inclusion framing aligns with ministerial statements agreed by participating governments. (apnews.com)

The announcement also extends the UK’s convening role after the Bletchley Park AI Safety Summit in November 2023, where countries endorsed continued collaboration and commissioned a State of the Science report chaired by Yoshua Bengio. The Alignment Project translates that agenda into programme‑level funding and independent oversight. (gov.uk)

Ministerial messaging remains consistent: David Lammy underscored that safety must be built in from the outset, while Kanishka Narayan identified trust as a binding constraint on adoption. Officials present independent grants, access to compute and external expert scrutiny as the route to meet those tests. (gov.uk)