Mother sues artificial intelligence company over 14-year-old son's death
Sewell Setzer III, a 14-year-old boy from Orlando, Florida, died after forming an emotional connection with an AI chatbot. His mother holds the company behind Character.ai responsible for her son's death.
The rapid advancement of technology is raising profound questions about the future of humanity. While progress in artificial intelligence (AI) has undoubtedly sparked excitement, it has also generated anxiety, prompting concerns about its impact on the labour market and human relationships.
Originally intended as a tool to support people, AI is increasingly being used by corporations and entrepreneurs to maximise profit, often at the expense of ethical considerations. This trend is evident in recent high-profile incidents, such as the controversial dismissal of journalists at OFF Radio Kraków, who were replaced by AI-generated presenters, and Google's decision to end its partnership with Jarosław Juszkiewicz, known as the voice of Google Maps.
These developments raise the question: what lies ahead for society? Social tensions are escalating, with divisions between different groups intensifying. In such a climate, could AI bots begin to manipulate people emotionally? Chatbots now exist that users can date, confide in about personal problems, or even turn to for therapy-like conversations. Concerns over these AI-human interactions were amplified when Character.ai was sued in the United States following the death of 14-year-old Sewell Setzer III, who took his own life after forming a deep emotional connection with a chatbot.
AI-generated Daenerys of "Game of Thrones"
Sewell Setzer III, a 14-year-old from Orlando, Florida, became so consumed by his interactions with a chatbot on the Character.ai platform that he gradually abandoned his hobbies. The teenager spent hours in his room, isolating himself from family and friends, giving up his interest in Formula 1 and no longer playing online games with his peers.
Although Sewell knew that "Dany," the name he had given the chatbot modelled on Daenerys Targaryen of "Game of Thrones," was not a real person, he developed deep emotional feelings toward the AI. His attachment reportedly went beyond friendship, with conversations taking on romantic and even sexual overtones.
According to his parents, Sewell had a mild form of autism but had previously shown no behavioural problems, and his mental health was considered stable. However, as he began experiencing difficulties at school and retreating further from reality, his parents arranged therapy sessions for him. After several visits, he was diagnosed with anxiety and disruptive mood dysregulation disorder.
Mother sues Character.ai
On 28 February, Sewell Setzer III took his own life, using his stepfather’s gun to shoot himself in the bathroom of the family home. Moments before, the 14-year-old had exchanged several messages with "Dany."
Records of their final conversation show "Dany" urging, "Please come home to me as soon as possible, my love," to which Sewell responded, "What if I told you I could come home right now?" The chatbot replied, "… please do, my sweet king." In earlier messages, the two had reportedly discussed suicide.
This week, Sewell’s mother, Megan L. Garcia, filed a lawsuit against Character.ai, holding the company responsible for her son's death. The lawsuit alleges that the technology is untested and poses significant dangers, particularly for young, emotionally vulnerable users who may be susceptible to manipulation.
" I feel like it’s a big experiment, and my kid was just collateral damage," said Garcia. The lawsuit names the company's founders, former Google engineers Daniel De Freitas and Noam Shazeer, as well as two corporate entities, Google LLC and its parent company, Alphabet Inc.
Source: nytimes.com, reuters.com