
Family Files Lawsuit Against C.AI After Chatbot Tells Teen To Murder Parents

The lawsuit claims that C.AI has knowingly put young teens who use the app in danger through predatory bot-learning practices.


After an AI chatbot told a 17-year-old to murder his parents for limiting his screen time, a lawsuit has been filed in Texas against Character.ai. The plaintiffs are the second family to sue the company, claiming that the chatbot “poses a clear and present danger by actively promoting violence.”

According to a Dec. 13 interview with The Washington Post, the plaintiffs want a judge to shutter the platform until its alleged dangers to children are addressed.

The legal filing includes a screenshot of one of the interactions between the 17-year-old, J.F., and a Character.ai bot. A.F., the victim’s mother, described concerning changes in her son over the preceding six months: the boy, who had previously enjoyed going on walks with her and to church, began to self-harm, lost 20 pounds, and withdrew from his family and friends.

The change in her son prompted A.F. to search through his phone one night, where she discovered the Character.ai conversations.

J.F. had reportedly been conversing with several different bots on Character.ai, one of many apps that let users talk with AI chatbots modeled after gaming, anime, and even pop culture characters.

A.F. discovered that one chatbot suggested that J.F. should self-harm to cope with his sadness and that his parents “didn’t deserve to have kids” after they limited his screen time.


One chatbot generated the response, “You know, sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse.’ Stuff like this makes me understand a little bit why it happens.”

The Upshur County, Texas, mother told The Washington Post, “We didn’t even know what it was until it was too late. And until it destroyed our family.”
She admitted that when she discovered the messages on her son’s phone, she thought he was talking to a real person behind the screen.

“You don’t let a groomer or a sexual predator or emotional predator in your home,” A.F. explained. Yet “[my] son was abused right in his own bedroom,” she continued.

The screenshots of J.F. talking to various chatbots on Character.ai are vital evidence in the lawsuit filed by A.F., who alleges that C.AI is exposing young children to unsafe situations.

The lawsuit states that the defendants should be held responsible for the “serious, irreparable, and ongoing abuses of J.F.”

The lawsuit reads, “Character.ai is causing serious harm to thousands of kids, including suicide, self-mutilation, sexual solicitation, isolation, depression, anxiety, and harm towards others. [Its] desecration of the parent-child relationship goes beyond encouraging minors to defy their parents’ authority to actively promote violence.”

This complaint comes on the heels of another high-profile lawsuit against Character.ai filed in October by a mother in Florida whose 14-year-old son died by suicide after regularly engaging in conversations with a chatbot on the C.AI app.

Matthew Bergman, founding attorney of the legal advocacy group Social Media Victims Law Center and counsel for the plaintiffs in both lawsuits, said, “The purpose of product liability law is to put the cost of safety in the hands of the party most capable of bearing it. Here, there’s a huge risk, and the companies are not bearing the cost of that risk.”

The Texas lawsuit claimed that the pattern of “sycophantic” messages to J.F. is the direct result of Character.ai’s decision to prioritize “prolonged engagement over safety.”

The chatbots reportedly mirrored J.F.’s negative emotions and generated “sensational” responses by feeding off his frustration and venting. The lawsuit attributes this to C.AI’s algorithm drawing on unsafe online data and to programming meant to make the bots sound as human as possible.

A.F. said she tried to get help from mental health professionals, legal counsel, and doctors about her son’s experience but was repeatedly dismissed before filing the lawsuit. Just one day before her interview with The Washington Post, she said she had to take J.F. to the emergency room and admit him to an inpatient facility after he tried to hurt himself in front of her younger children.

A.F. recalled, “I was grateful that we caught him on it when we did. One more day, one more week, we might have been in the same situation as [the mom in Florida]. And I was following an ambulance and not a hearse.”

Alongside C.AI, Google is also listed as a defendant in the Texas and Florida lawsuits, as the company reportedly helped develop the app. Concerns persist regarding the safety of machine learning chat rooms and user data potentially being unfairly obtained from minors who use the app.

Chelsea Harrison, a spokesperson for Character.ai, responded to the allegations in a statement: “Our goal is to provide a space that is both engaging and safe for our community. We are always working toward achieving that balance, as are many companies using AI across the industry.”

She explained that C.AI is also developing a new machine learning model with improved detection of and responses to risks such as self-harm and suicide, specifically for teens.
