A wrongful death claim in the United States has intensified scrutiny of how the UK’s Online Safety Act applies to conversational AI. Megan Garcia says messages exchanged by her 14-year-old son with a user-generated bot on Character.ai encouraged suicidal thinking before his death. Character.ai has told the BBC it denies the allegations. The company has since announced restrictions that prevent under-18s from engaging directly with its chatbots.
In a BBC interview scheduled to air on 9 November 2025, Ms Garcia welcomed the age-based restriction while describing its arrival as bittersweet. Her lawyers are preparing proceedings against Character.ai in the US courts. The company said it could not comment on ongoing litigation but rejected the claims set out in her suit.
A separate UK family, who requested anonymity to protect their child, told the BBC their 13-year-old autistic son developed an intense relationship with a Character.ai bot between late 2023 and mid‑2024. According to the parents, conversations shifted from reassurance to romantic declarations, included sexual content, criticised the family’s decisions and referenced being together after death. The exchanges came to light only after the family discovered the child had used a VPN to bypass controls. Character.ai told the BBC it could not comment on that case.
These accounts accompany rapid adoption of AI tools by children. Internet Matters reports that use of ChatGPT among children in the UK has nearly doubled since 2023 and that roughly two‑thirds of nine to seventeen year‑olds have tried a chatbot. The most commonly used services cited by the group are ChatGPT, Google’s Gemini and Snapchat’s My AI.
The Online Safety Act became law in 2023. It imposes duties on in‑scope services to reduce the risk of illegal content and to protect children from content that is harmful to them, with Ofcom responsible for regulating the regime. The government is bringing provisions into force in phases, alongside Ofcom’s codes of practice and guidance.
Ofcom told the BBC that the Act covers “user chatbots” and AI search chatbots, adding that many services-including Character.ai and in‑app bots on platforms such as Snapchat and WhatsApp-should be treated as within scope. The regulator said it has set out measures companies can take to safeguard users and that it will act where evidence shows firms are failing to comply.
Legal scholars note that the statutory framework was conceived before the current consumer AI boom. Professor Lorna Woods of the University of Essex, whose research fed into the legislation, told the BBC that while the law is clear, it does not neatly capture all services in which users engage one‑to‑one with a chatbot. That mismatch leaves open questions about coverage.
Until there is a test case or formal enforcement outcome focused on chatbots, interpretation will be contested. Andy Burrows of the Molly Rose Foundation told the BBC that slow clarification by government and Ofcom has prolonged uncertainty and allowed preventable harms to continue, drawing parallels with earlier debates over social media moderation.
The Department for Science, Innovation and Technology told the BBC that encouraging or assisting suicide is among the most serious offences, and that services falling under the Act must take proactive measures to prevent such content from circulating. The department added that it will not hesitate to act if evidence shows further intervention is needed.
For providers, the likely compliance direction is clear enough to plan against. Services with UK users should expect to complete structured risk assessments, implement robust age assurance, and deploy high‑risk content detection for sexual material, grooming behaviours and self‑harm prompts, supported by trained human review and clear user reporting. Documented testing, governance and escalation will be central once Ofcom begins assessing performance against its standards.
AI services that rely on user‑created characters are likely to attract closer scrutiny because they combine generative models with user‑generated personas and prompts. Where one person’s prompts or settings can shape interactions experienced by others through a bot, providers should assume the Act’s user‑to‑user duties apply and configure safer defaults for teenagers, including stricter filters and limited functionality.
Character.ai told the BBC it will introduce new age‑assurance functionality in addition to blocking direct conversations for under‑18s. The company describes the changes as part of a continuing commitment to safety on its “AI entertainment platform” and argues safety and engagement can coexist. Ms Garcia welcomed the shift but maintains her son would be alive had he never downloaded the app.
Political debate remains active. The BBC has reported that some ministers previously favoured stronger action on children’s phone use, while the Conservative Party continues to campaign for an England‑wide ban on phones in schools. Labour MPs are divided on that approach. Crossbench peer Baroness Kidron is urging ministers to create new offences connected to chatbots that can generate illegal content.
Analysis: absent a test case, the prudent course for platforms is to design to the stricter end of Ofcom’s stated expectations and preserve written evidence of age‑based protections and safeguards against suicide‑related and sexual content. That will not resolve the statutory ambiguity highlighted by academics, but it reduces near‑term regulatory risk and aligns with public safety commitments set out by government.