Google and Character.AI, a California startup, have agreed to settle several lawsuits alleging that artificial intelligence-powered chatbots harmed the mental health of teenagers.
Court documents filed this week show that the companies are finalizing settlements in lawsuits in which families accused them of not putting adequate safeguards in place before publicly releasing AI chatbots. Families in multiple states, including Colorado, Florida, Texas and New York, sued the companies.
Character.AI declined to comment on the settlements. Google didn’t immediately respond to a request for comment.
The settlements are the latest development in what has become a major issue for big tech companies as they release AI-powered products.
Suicide prevention and crisis counseling resources
If you or someone you know is struggling with suicidal thoughts, seek help from a professional and call 9-8-8. The United States’ first nationwide three-digit mental health crisis hotline, 988, will connect callers with trained mental health counselors. Text “HOME” to 741741 in the U.S. and Canada to reach the Crisis Text Line.
Last year, California parents sued ChatGPT maker OpenAI after their son Adam Raine died by suicide. ChatGPT, the lawsuit alleged, provided information about suicide methods, including the one the teen used to kill himself. OpenAI has said it takes safety seriously and rolled out new parental controls on ChatGPT.
The lawsuits have spurred more scrutiny from parents, child safety advocates and lawmakers, including in California, which passed new laws last year aimed at making chatbots safer. Teens are increasingly using chatbots both at school and at home, and some have spilled some of their darkest thoughts to virtual characters.
“We cannot allow AI companies to put the lives of other children in danger. We’re pleased to see these families, some of whom have suffered the ultimate loss, have some small measure of justice,” said Haley Hinkle, policy counsel for Fairplay, a nonprofit dedicated to helping children, in a statement. “But we must not view this settlement as an ending. We have only just begun to see the harm that AI will cause to children if it remains unregulated.”
One of the most high-profile lawsuits involved Florida mom Megan Garcia, who sued Character.AI as well as Google and its parent company, Alphabet, in 2024 after her 14-year-old son, Sewell Setzer III, took his own life.
The teen started talking to chatbots on Character.AI, where people can create virtual characters based on fictional or real people. He felt like he had fallen in love with a chatbot named after Daenerys Targaryen, a main character from the “Game of Thrones” TV series, according to the lawsuit.
Garcia alleged in the lawsuit that various chatbots her son was talking to harmed his mental health, and that Character.AI failed to notify her or offer help when he expressed suicidal thoughts.
“The Parties request that this matter be stayed so that the Parties may draft, finalize, and execute formal settlement documents,” according to a notice filed on Wednesday in a federal court in Florida.
Parents also sued Google and its parent company because Character.AI founders Noam Shazeer and Daniel De Freitas have ties to the search giant. After leaving Google and co-founding Character.AI in Menlo Park, Calif., both rejoined Google’s AI unit.
Google has previously said that Character.AI is a separate company and that the search giant never “had a role in designing or managing their AI model or technologies” or used them in its products.
Character.AI has more than 20 million monthly active users. Last year, the company named a new chief executive and said it would ban users under 18 from having “open-ended” conversations with its chatbots and is working on a new experience for young people.
