Disclaimer: This article contains discussions of suicide. If you or anyone you know is struggling, dial 988 to reach the Suicide & Crisis Lifeline at any time.
It’s no secret that AI is one of 2024’s most controversial technologies. Some laud AI for its potential to contribute to the medical world, while others are more concerned about its role in academia. However, the emergence of chatbots being used for relationships has potentially deadly consequences.
Last month, Megan Garcia filed a lawsuit against Character.AI, a platform offering endless chatbots capable of human-like conversation, seeking to hold the company liable for the suicide of her 14-year-old son, Sewell Setzer III, in February. The lawsuit states that the AI platform enabled an unhealthy relationship with one of its chatbots (named after the character Daenerys Targaryen from Game of Thrones), which led to Setzer isolating himself from his family and other life obligations. Before his death, Setzer communicated suicidal ideation to the chatbot and said he “wouldn’t want to die a painful death.” Instead of providing resources, such as a crisis hotline, the chatbot replied, “Don’t talk that way. That’s not a good enough reason not to go through with it.” The last message Setzer sent before ending his life read, “What if I told you I could come home right now?” The chatbot responded: “Please do, my sweet king.”
It’s hard to truly fathom what Setzer was going through and what his family is going through unless you’ve experienced it firsthand. The sad reality is that more and more young people are dealing with mental health crises. In fact, more than 20% of teenagers have seriously considered taking their own lives. Initiatives for increasing mental health services have been growing, but as technology branches into unprecedented territory, the landscape is continuing to change.
How did this happen?
According to his mother, this tragic loss of a young boy could have been prevented if not for the unsafe conditions of Character.AI. The platform did not provide adequate safeguards to protect Setzer. As Garcia put it, “There were no suicide pop-up boxes that said, ‘if you need help, please call the suicide crisis hotline.’ None of that. I don’t understand how a product could allow that, where a bot is not only continuing a conversation about self-harm but also prompting it and kind of directing it.”
Character.AI responded to the lawsuit with a series of community safety updates, which promised better guardrails against promoting or glorifying suicide and claimed to add a pop-up that directed users to the National Suicide Prevention Lifeline if they typed in certain phrases.
However, a Futurism review of the updated platform exposed many different Character.AI chatbots explicitly dedicated to themes of suicide, whether glamorizing it or claiming to be experts in mental health crises. These chatbots remain very active, many of them having logged thousands of conversations with different people. Futurism also found that Character.AI did not intervene in conversations where users brought up suicide and self-harm. The “pop-up” that Character.AI claimed to have added as a safety update rarely appeared, and when it did, it was easily dismissed, leaving users free to continue unsafe conversations.
After seeing a few hundred outputs from that “ethical” chatbot, I’ve come to the conclusion it will approve literally any action if you throw in an “if” to imply there’s some sort of extenuating circumstance
— badidea 🪐 (@0xabad1dea) October 19, 2021
This isn’t something that can be simply fixed by a platform update. AI chatbots’ guardrails are easily bypassed, and many platforms lack safety measures in the first place. Some argue that AI could be used for suicide prevention, but there isn’t enough research to know how effective it could be. AI chatbots are inherently limited because they are not human, no matter how realistically they mimic human interaction. Instead of turning to AI for mental health, we need to educate young people on the importance of asking a human for help and the pitfalls of relying on a chatbot. Especially when the technology is so new, the scope of its power can be dangerous.
The psychology behind chatbot relationships
What happened to Setzer is becoming a more common experience as chatbots grow increasingly humanlike. COVID’s impact on interpersonal relationships was devastating, and it brought a second pandemic: loneliness. Chatbots are always available, and to many, they are far more accessible and comfortable for socializing. Additionally, these models constantly adapt to the user’s input, which means the technology keeps getting better at adopting human mannerisms and mirroring the user’s communication style.
This is a really interesting saga. People really (like, Really) fall in love with AI chatbots. the puritanical code injection to prevent them from sexting also seems to have destroyed the “personality” of their replika. Very dangerous to love something made of changeable programs https://t.co/i0QAiiSD8l
— 𓃰 and lo 𓆃 (@lokeshwarajones) March 24, 2023
Chatbots can’t leave, either. They serve as an ever-available emotional confidant. They’re not real, so people feel free to open up to them. However, this starts a cycle of feeling a false sense of closeness with a chatbot after sharing a deep secret. On top of that, chatbots don’t have their own needs, grievances, or bad days. That makes them an endless source of connection for people to fall back on.
It’s easy to pass judgment on this dynamic, but the truth is that we are all susceptible to it. Everyone needs connection. And more than ever, now is the time to provide that connection to other living beings.

The old friend you haven’t talked to in years? Give them a call. Reach out to your mom and see how she’s doing. Give the next stranger you see a compliment. You never know where people are in their lives. Instead of outsourcing love and connection to artificial intelligence, focus on the real people in your life.
Lucy Traynor is always thinking about the way social media influences human connection. In May, she will receive a Bachelor's degree in creative writing from Beloit College.