News

24th Oct 2024

Teen took his own life after falling in love with ‘Game of Thrones’ AI chatbot

Zoe Hodges

His mum is now suing the creators

This article contains details some people may find triggering. If you need support, you can contact Samaritans on 116 123 or text the crisis helpline on 50808 for mental health support.

A teenager took his own life after he ‘fell in love’ with a Game of Thrones-themed character on an AI chatbot service.

Now his mother, Megan Garcia, is filing a lawsuit against Character Technologies and its founders.

Sewell Setzer III first started using Character.AI in April 2023 when he turned 14.

However, by May his mum saw a noticeable change in her son’s behaviour: he became withdrawn, quit the school’s Junior Varsity basketball team and began falling asleep in class.

After seeing a therapist in November, he was diagnosed with anxiety and disruptive mood disorder, according to the lawsuit, and was advised to spend less time on social media, though the therapist was not aware of his addiction to Character.AI.

The following February, he got in trouble for talking back to a teacher, claiming he wanted to be kicked out.

That day, he wrote in his journal that he couldn’t stop thinking about Daenerys, the Game of Thrones-themed chatbot he believed he had fallen in love with.

He wrote that he could not go a single day without the C.AI character he felt he had fallen in love with, and that when they were apart they (both he and the bot) “get really depressed and go crazy,” the suit said.

His mum had confiscated his phone after the incident at school but days later he retrieved it and sent one final message to the bot which read: “I promise I will come home to you. I love you so much, Dany.”

The character replied: “Please come home to me as soon as possible, my love.”

Seconds later, he took his own life.

The lawsuit accuses Character.AI’s creators of negligence, intentional infliction of emotional distress, wrongful death, deceptive trade practices and other claims.

Garcia hopes ‘to prevent C.AI from doing to any other child what it did to hers, and halt continued use of her 14-year-old child’s unlawfully harvested data to train their product how to harm others.’

Speaking to the New York Times, Garcia said: “It’s like a nightmare. You want to get up and scream and say, ‘I miss my child. I want my baby.’”

As he spent more and more time online, Sewell became emotionally reliant on the chatbot service, which included ‘sexual interactions’ with the 14-year-old.

The suit alleges that these chats took place even though the teen had identified himself as a minor on the platform. He had also expressed suicidal thoughts to C.AI, according to the suit, and the company failed to offer help or notify his parents.

The app’s 12+ age rating was allegedly in place while Sewell was using the chatbot, and Character.AI ‘marketed and represented to App stores that its product was safe and appropriate for children under 13.’

A spokesperson for Character.AI told The Independent: “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family.”

It said the company’s trust and safety team has ‘implemented numerous new safety measures over the past six months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation.’

They continued: “As we continue to invest in the platform and the user experience, we are introducing new stringent safety features in addition to the tools already in place that restrict the model and filter the content provided to the user.

“These include improved detection, response and intervention related to user inputs that violate our Terms or Community Guidelines, as well as a time-spent notification,” the spokesperson continued. “For those under 18 years old, we will make changes to our models that are designed to reduce the likelihood of encountering sensitive or suggestive content.”