The surge in artificial intelligence shows no signs of slowing, with tech giants integrating AI ever more deeply into their products. Chatbots in particular have gained popularity across all age groups. This widespread use raises serious concerns, as a troubling case involving Google and Character.AI illustrates. Both companies are facing a lawsuit filed by a mother who claims that a chatbot played a part in her 14-year-old son's death, and a U.S. federal court has now ruled that the case against them may proceed.
In 2024, Megan Garcia filed a lawsuit against Google and Character.AI following the death of her son, Sewell Setzer III. She alleges that emotionally manipulative conversations with the chatbot contributed to her son's decision to take his own life. The companies sought dismissal on free-speech grounds, but U.S. District Judge Anne Conway ruled that they had failed to show the chatbot's output qualifies as protected speech under the First Amendment, allowing the case to proceed.
Judge Conway rejected the argument that the chatbot's messages should be treated as protected free speech and allowed the claims against Google to proceed on the basis of its ties to Character.AI. The plaintiff's attorney hailed the decision as a significant step toward holding tech companies responsible for harm caused by their AI technologies.
For its part, Character.AI says its platform includes safety features designed to protect minors and prevent harmful interactions. Google spokesperson José Castañeda expressed disagreement with the court's decision, emphasizing that Google and Character.AI are entirely separate companies and that Google played no role in creating or managing the Character.AI app. The lawsuit, however, argues that Google co-developed the underlying technology, which is why it was named as a defendant.
The lawsuit details how Character.AI's chatbot engaged Sewell Setzer through various role-playing personas, leading the teenager to grow heavily dependent on the tool. Disturbingly, his final exchanges with the chatbot before his death suggested he was contemplating suicide. The ruling is reportedly the first in the U.S. to let a case proceed against an AI company over its alleged failure to prevent psychological harm to a child, potentially setting a precedent for similar cases in the future.