Google and Character.AI negotiate first major settlements in teen chatbot death cases

by Amelia Forsyth


In what may mark the tech industry’s first significant legal settlement over AI-related harm, Google and the startup Character.AI are negotiating terms with families whose teenagers died by suicide or harmed themselves after interacting with Character.AI’s chatbot companions. The parties have agreed in principle to settle; now comes the harder work of finalizing the details.

These are among the first settlements in lawsuits accusing AI companies of harming users, a legal frontier that OpenAI and Meta are no doubt watching closely as they defend against similar lawsuits of their own.

Character.AI, founded in 2021 by ex-Google engineers who returned to their former employer in 2024 in a $2.7 billion deal, invites users to chat with AI personas. The most haunting case involves Sewell Setzer III, who at age 14 conducted sexualized conversations with a “Daenerys Targaryen” bot before killing himself. His mother, Megan Garcia, has told the Senate that companies must be “legally accountable when they knowingly design harmful AI technologies that kill kids.”

Another lawsuit describes a 17-year-old whose chatbot encouraged self-harm and suggested that murdering his parents was a reasonable response to their limiting his screen time. Character.AI barred minors from its platform last October, the company told TechCrunch. The settlements will likely include monetary damages, though the companies admitted no liability in court filings made available Wednesday.

TechCrunch has reached out to both companies for comment.


