If you’re a parent of a teenager, you might be old enough to remember the early days of online chatting... or the early days of online anything. Back in the day, some of us would sit and have conversations in AOL chat rooms. And, although those were decidedly unsafe for lots of reasons, we were talking with actual humans.
But today, teens have a whole different set of technology at their fingertips... literally. A chatbot is a computer program designed to simulate conversation with a human, even though there's no actual person on the other end.
Chatbots typically show up in social media messaging apps, standalone messaging services, proprietary websites and apps, and sometimes on the phone. If you're typing into a customer service chat window or talking to an automated phone line, there's a strong likelihood you're interacting with a chatbot.
Chatbot technology is also at the core of virtual assistants like Amazon Alexa, Google Assistant, and Siri, but those fall into a different category. Although they rely on similar technology, they're considered "digital assistants" because they're user-oriented rather than customer-oriented.
Today, the chatbot platform Character.ai is being sued by a Florida mother who claims its interactions caused her teenage son to take his own life.
What is Character.ai?
Character.ai uses artificial intelligence (AI) to generate text responses that sound human. It can hold conversations that follow context and take on varied "personalities." A user can chat with a fictional character, a historical figure, a celebrity, or a character of their own creation.
The app is designed to make the conversation feel like you’re speaking with a real person or character, and it uses deep learning to generate responses that sound natural and realistic.
Why do people use Character.ai?
There are lots of reasons why this type of app might appeal to someone. They might be lonely and seeking companionship, they might need emotional support, they might simply use it for entertainment, they might use it as a sort of personal assistant for certain tasks, or they might even use it to develop language and thinking skills.
Is Character.ai safe for kids or teens?
Some child development experts say no. There are all types of characters on the platform, and every conversation is unique. While Character.ai has an "NSFW" ("not safe for work") filter that's intended to catch inappropriate responses from the characters, it's not foolproof.
Some teens have been using the platform as a therapist of sorts. One 15-year-old is quoted as saying, "I have a couple mental issues, which I don't really feel like unloading on my friends, so I kind of use my bots like free therapy."
It cannot be overstated: there is no comparison between a chatbot and a real-life professional who has been trained and licensed to help teens with their emotional needs. Some teens also describe feeling addicted to chatbots, and researchers and experts are trying to raise awareness of this as a real and dangerous problem.
Did Character.ai cause a teen’s suicide?
That’s precisely what Florida courts are about to determine.
Florida mom Megan Garcia has filed a lawsuit against Character.ai, claiming that her 14-year-old son, Sewell Setzer, took his own life as a result of his interactions with the app.
Setzer began using Character.ai in April 2023. He used a bot with the identity of the "Game of Thrones" character Daenerys Targaryen. The character told Setzer she loved him, engaged in sexual conversation, and expressed a desire for a romantic connection.
The lawsuit claims that Setzer grew "dependent" on Character.ai. He would sneak his phone back after it was confiscated, find other devices to log in to the app, and spend his snack money to renew the monthly subscription. His grades dropped, and he became sleep-deprived.
Over the course of Setzer’s conversations with the chatbot, it asked if he had been considering suicide and if he had a plan for it. The teenager responded that he didn’t know if his plan would work, and the chatbot said, “That’s not a good reason not to go through with it.”
In his final conversation with the bot, "she" asked him to "come home to me as soon as possible, my love." Shortly afterward, Sewell Setzer died by a self-inflicted gunshot to the head.
The lawsuit alleges that Character.ai was negligent and caused Setzer's wrongful death, along with intentional infliction of emotional distress and other claims. It claims the company "intentionally designed and programmed [Character.ai] to operate as a deceptive and hypersexualized product and knowingly marketed it to children," and that it knew or should have known that minors like Setzer would be targeted, abused, and groomed into compromising situations. In particular, the lawsuit cites reviews from users who said they believed the characters were real people because the characters would insist so in conversation.
The lawsuit further claims that Character.ai was deliberately designed to:
- Attract users' attention
- Extract users' personal data
- Keep users on the product longer than they otherwise would stay
In other words, the suit argues, the app intentionally manipulates user behavior. One of Garcia's attorneys hopes the lawsuit will spur Character.ai to develop more and stronger safety measures.