Westminster Policy News & Legislative Analysis

UK urges Ofcom to weigh blocking X under Online Safety Act

Ministers have asked Ofcom to set out its next steps within days, not weeks, after reports that xAI’s Grok was used on X to generate sexualised deepfakes. Technology Secretary Liz Kendall said she expects the regulator to use the full legal powers provided by Parliament and reminded services that the Act allows Ofcom to apply to the courts to block UK access to non‑compliant services. Ofcom said it had made urgent contact, set a firm deadline for explanations and is now conducting an expedited assessment. ([standard.co.uk](https://www.standard.co.uk/news/politics/ofcom-liz-kendall-sir-keir-parliament-downing-street-b1265824.html?utm_source=openai))

Grok has begun telling users that image generation and editing are limited to paying subscribers, a change Downing Street criticised as “insulting” because it appears to make unlawful image creation a premium feature. Reporting also indicates the functionality remains reachable through other paths, including X’s built‑in image tools and Grok’s separate app, suggesting any restriction is partial. ([news.sky.com](https://news.sky.com/story/grok-ai-image-editing-limited-to-paid-subscribers-after-reports-of-deepfakes-13492325?utm_source=openai))

Under Part 7 of the Online Safety Act, Ofcom can issue confirmation decisions, levy fines of up to £18m or 10% of qualifying worldwide revenue, whichever is greater, and seek court‑ordered “business disruption” measures. These include service‑restriction orders aimed at advertisers, payment providers or search engines, and access‑restriction orders requiring app stores or internet access providers to impede UK access in the most serious cases. ([legislation.gov.uk](https://www.legislation.gov.uk/ukpga/2023/50/part/7/chapter/6/crossheading/business-disruption-measures?utm_source=openai))

To obtain an access‑restriction order, Ofcom must evidence a continuing breach and either show that prior service‑restriction steps were insufficient to prevent significant harm to UK users, or that such steps would likely be insufficient. Applications must identify the non‑compliant provider, the third parties to be bound, and the requirements sought, with supporting evidence. ([legislation.gov.uk](https://www.legislation.gov.uk/ukpga/2023/50/2024-08-23?utm_source=openai))

Published enforcement has so far focused on information gathering and financial penalties, alongside changes secured through engagement. In October 2025, Ofcom reported a £20,000 fine against 4chan for failing to respond to an information notice and noted that several services had chosen to restrict UK access following investigations; Ofcom’s updates have not reported any court‑ordered access blocks under the Act to date. ([ofcom.org.uk](https://www.ofcom.org.uk/online-safety/illegal-and-harmful-content/ofcom-issues-update-on-online-safety-act-investigations?utm_source=openai))

The Internet Watch Foundation said its analysts discovered criminal imagery of girls aged 11 to 13 that appeared to have been created using Grok, intensifying pressure on X and xAI. Women’s organisations argued that limiting Grok’s features to subscribers amounts to monetising abuse and called for stronger safeguards. ([news.sky.com](https://news.sky.com/story/illegal-child-abuse-material-generated-by-xs-artificial-intelligence-grok-says-uk-watchdog-13491658?utm_source=openai))

Government policy is also moving on adjacent offences. Ministers have tabled an amendment to the Data (Use and Access) Bill to criminalise the creation of sexually explicit deepfakes, and have set out plans, via a forthcoming Crime and Policing Bill, to ban “nudification” tools and introduce new offences for taking intimate images without consent. ([gov.uk](https://www.gov.uk/government/news/better-protection-for-victims-thanks-to-new-law-on-sexually-explicit-deepfakes?utm_source=openai))

For regulated services, immediate priorities are to complete illegal‑content risk assessments, document mitigations for intimate‑image abuse and child sexual abuse material, and be ready to respond to information notices. Ofcom has stated it stands ready to use its full enforcement toolkit, including seeking court‑ordered blocks where serious breaches persist. ([ofcom.org.uk](https://www.ofcom.org.uk/online-safety/illegal-and-harmful-content/time-for-tech-firms-to-act-uk-online-safety-regulation-comes-into-force?utm_source=openai))

Ofcom has indicated it will provide further updates shortly. Should a formal case open, the pathway typically involves information notices, potential provisional notices of contravention, representations from the provider and, where warranted, confirmation decisions that can trigger fines or applications for business‑disruption orders. From 1 January 2026, eligible organisations can also file online safety super‑complaints to raise systemic risks directly with Ofcom. ([standard.co.uk](https://www.standard.co.uk/news/politics/ofcom-liz-kendall-sir-keir-parliament-downing-street-b1265824.html?utm_source=openai))