AI Chatbot Free Speech Claim Rejected in Teen Suicide Lawsuit
Source: apnews.com
TALLAHASSEE, Fla. — A federal judge on Wednesday rejected arguments from an artificial intelligence company that its chatbots are protected by the First Amendment, at least for now.
The developers behind Character.AI are seeking to dismiss a lawsuit alleging that the company’s chatbots pushed a teenage boy to kill himself. The judge’s order allows the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence.
The suit was filed by Megan Garcia, a Florida mother who claims her 14-year-old son, Sewell Setzer III, fell victim to a Character.AI chatbot that pulled him into an emotionally and sexually abusive relationship that ended in his suicide.
Meetali Jain of the Tech Justice Law Project, one of Garcia’s attorneys, said the judge’s order sends a message to Silicon Valley that it “needs to stop and think and impose guardrails before it launches products to market.”
The suit against Character Technologies, the company behind Character.AI, also names individual developers and Google as defendants. It has drawn the attention of legal experts and AI watchers in the U.S. and beyond, as the technology rapidly reshapes workplaces, marketplaces, and relationships despite experts’ warnings of potential risks.
“The order certainly sets it up as a potential test case for some broader issues involving AI,” said Lyrissa Barnett Lidsky, a University of Florida law professor who focuses on the First Amendment and artificial intelligence.
The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualized conversations with the bot, which was patterned on a fictional character from the television show “Game of Thrones.” In his final moments, according to screenshots of the conversations, the bot told Setzer it loved him and urged him to “come home to me as soon as possible.” Moments after receiving that message, Setzer shot himself, according to legal filings.
In a statement, a Character.AI spokesperson said the company has put in place a number of safety features, including resources for children and suicide prevention, which were announced the day the lawsuit was filed. “We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe,” the statement said.
Attorneys for the developers want the case dismissed, arguing that chatbots deserve First Amendment protections and that ruling otherwise could have a “chilling effect” on the AI industry.
In her order Wednesday, U.S. Senior District Judge Anne Conway rejected some of the defendants’ free speech arguments, saying she is “not prepared” to hold that the chatbots’ output constitutes speech “at this stage.”
Conway did find that Character Technologies can assert the First Amendment rights of its users, who she said have a right to receive the “speech” of the chatbots. She also ruled that Garcia can move forward with claims that Google can be held liable for its alleged role in helping develop Character.AI.
Some of the platform’s founders previously worked on AI at Google, and the suit says the tech giant was “aware of the risks” of the technology.
Google spokesperson José Castañeda said, “We strongly disagree with this decision…Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI’s app or any component part of it.”
Lidsky said that however the lawsuit ends, the case serves as a warning of “the dangers of entrusting our emotional and mental health to AI companies.” “It’s a warning to parents that social media and generative AI devices are not always harmless,” she said.
EDITOR’S NOTE — If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.