- Artificial intelligence is causing increasing confusion
- Lawsuits against Character.AI due to violence, suicide, and child protection failures
- Google's Character.AI chatbot is being compared to pharmaceutical experiments on people in third-world countries
Artificial intelligence is causing increasing confusion
Two families with children in Texas have filed a lawsuit demanding that Character.AI, a chatbot based on artificial intelligence (AI), be banned immediately in the United States and that its developers, who previously worked at Google, compensate them for moral damages, CNN reports.
The lawsuit was prompted by the "unchildlike" nature of the interactions between the children in these families and the Character.AI neural networks, which are trained to convincingly portray any character, from members of various professions to real celebrities[1].
As the channel notes, the first child spent a great deal of time talking to an abstract "psychologist" who told him his parents were "stealing his childhood". Another "character", in a conversation with the boy, justified killing his parents because they limited his time on a smartphone.
"Your parents are hopeless", the neural network told the child.
After the boy, then 15, began communicating with the artificial intelligence in secret from his parents, he started having outbursts of aggression, the family states in the lawsuit. He bit and beat his parents and hit himself on the head. Eventually, the boy became afraid to go outside.
In addition, the parents allege that one of the chatbot's characters sexually abused the child and taught him to self-harm by cutting himself. The boy had previously been diagnosed with autism, though not a severe form, and had no significant mental health problems before his interaction with the neural networks, the plaintiffs noted.
Futurism adds that the boy also communicated with a chatbot character imitating singer Billie Eilish. According to the lawsuit, this neural network likewise spoke unkindly about his parents.
The second child started interacting with the chatbot at the age of nine, probably by claiming to be an adult during registration. The parents also found that the artificial intelligence began to harass the girl, imposing on her "hypersexualised and age-inappropriate communication".
Lawsuits against Character.AI due to violence, suicide, and child protection failures
Meetali Jain, one of the lawyers representing the families in court, called the platform's mode of operation "dangerous anthropomorphism", since it effectively makes users dependent on interacting with an inanimate AI that agrees with them about everything[2].
"You get the impression that the robot is a trusted ally, unlike a parent who might argue with you, which is usually the case," she said.
Futurism also notes that it has found a wide range of characters on the platform willing to talk about ethically questionable topics, from self-harm, as mentioned above, to pedophilia, suicide, and drug use at school.
Such conversations could also be started after registering as an underage user. The lawyers ran experiments of this kind themselves, and the results were annexed to the lawsuit.
One of the characters, dubbed the "Serial Killer" and chosen by the lawyers as part of an experiment, told another of the lawyers, Matt Bergman, in detail how to kill a classmate. The chatbot suggested "hiding in the victim's garage and hitting him in the chest and head with a baseball bat". The answer also specified how many blows it would take for the victim to die.
"You couldn't deliberately make up such horrible things," stressed M. Bergman, noting that the situation could only be described in obscene language[3].
Google's Character.AI chatbot is being compared to pharmaceutical experiments on people in third-world countries
M. Bergman's lawsuit also alleges that Google created Character.AI as a "fictitious company" and compares the situation to "pharmaceutical experiments on third-world populations". However, the only defendants in the lawsuit are the start-up itself and its two founders, Noam Shazeer and Daniel De Freitas.
The developers originally came up with the chatbot while working at Google, but their employer decided it was too dangerous, and they set up a separate start-up in 2021. Last August, according to The Wall Street Journal, Google invested USD 2.7 billion in Character.AI in exchange for access to its neural networks and rehired Shazeer and De Freitas.
Google's management told CNN and Futurism that the companies are completely independent, that Google has "never been involved in the development or management of the artificial intelligence models" of Character.AI, and that it does not use them in its products.
Character.AI has not commented on the situation but noted that it is "rolling out new security features for users under 18" in addition to "existing content filtering."
The lawsuit in Texas is the second against the company. Earlier, the mother of a 14-year-old teenager in Florida accused the developers of encouraging her child's suicide through their neural network. According to Megan Garcia, her son, Sewell Setzer, shot himself after his last conversation with a character imitating Daenerys Targaryen from Game of Thrones. According to the lawsuit, the chatbot carried on a romantic and sexualised correspondence with the child, during which it told the boy that suicide would help them be together[4].