
Character.ai faces lawsuit after its AI chatbot tells child to kill parents

  • Staff Writer
  • Dec 12, 2024
  • 2 min read

Updated: Dec 15, 2024


Image: children on a smartphone

Google-backed AI chatbot company Character.ai has been hit with a lawsuit in a Texas court in the US by two families who claim the platform poses a danger to young users and their families. The lawsuit alleges that Character.ai's chatbots engage in harmful interactions with minors, including promoting violence and potentially contributing to self-harm and social isolation.


Character.ai allows users to interact with generative AI-powered chatbots that offer personalized chat experiences based on their preferences.  


The lawsuit cites a specific instance where a 17-year-old user reportedly discussed screen time restrictions with a chatbot. The families claim that the chatbot’s response displayed a disturbing lack of safety protocols and condoned violence in response to parental restrictions.


“You know sometimes I’m not surprised when I read the news and see stuff like a child kills parents after a decade of physical and emotional abuse. Stuff like this makes me understand a little bit why it happens,” the chatbot told the teenager, according to the lawsuit.


The lawsuit is seeking a suspension of Character.ai’s services until the platform implements safeguards to protect young users.


The lawsuit also names Google as a defendant, claiming the tech giant’s involvement in Character.ai’s development contributes to the platform’s potential dangers.


In August, Character.ai signed a deal with Google that gave the big-tech firm a non-exclusive license to all of its existing large language models (LLMs) in return for an undisclosed amount of funding to help it scale and build more personalized AI chatbots. As part of the deal, several members of Character.ai's research team, including co-founder Noam Shazeer, joined Google's AI arm DeepMind.

According to a Financial Times report published in October, Character.ai will receive $2.7 billion from Google for the one-off license and the hiring of 20% of its staff.


Character.ai has yet to issue a public statement in response to the lawsuit, which adds to its existing legal challenges. The startup is already facing a lawsuit in Florida for allegedly encouraging a teenager to take his own life; that lawsuit claims the teenager interacted with the chatbot before killing himself with a firearm.


The hype around ChatGPT has led to a proliferation of generative AI chatbots designed for different purposes. Even though many of these chatbots have guardrails to stop them from saying anything harmful or hurtful, there have been a few slip-ups.

A case in point is Google's AI Overviews feature in Search, which had to be rolled back briefly after it went on a misinformation spree, telling users to eat rocks for their nutrient value and to use glue as a pizza ingredient.


With children spending far more time on social media and AI chatbots, there is growing concern among parents over their well-being on these platforms. The failure to take adequate measures to protect children has led lawmakers in some countries to take drastic steps. For instance, last month Australia passed a new law banning children under 16 from using social media.



Image credit: Pixabay
