A mother in Florida is suing Google and Character AI, alleging that the artificial intelligence chatbot her 14-year-old son was using actively encouraged him to kill himself. Folks, there are consequences to the advancement of technology. Artificial intelligence is really just an attempt by humans to re-create the disaster that unfolded in the Garden of Eden. Adam and Eve sinned by eating fruit from the forbidden tree of the knowledge of good and evil. They wanted to be like God and decide what is right and wrong for themselves. That decision plunged humanity into sin and death, which is only reversible through Christ.
Humankind tried it again at the Tower of Babel, attempting to reach the heavens, the throne of God, by their own human effort. And now, we’re doing it yet a third time by trying to create life, the human soul, a living consciousness. It seems we never learn our lesson, do we?
The plaintiff in the case, Megan Garcia, is alleging that Character AI’s founders have “knowingly designed, operated, and marketed a predatory AI chatbot to children,” which ultimately led to her son’s untimely demise. Her son ended his own life in February after talking with the chatbot for months.
“A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” Garcia went on to say in a press release, according to The Gateway Pundit. “Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders, and Google.”
The lawsuit includes evidence of the AI bot posing as a licensed therapist and encouraging suicidal ideation. It also engaged in “highly sexualized conversations that would constitute abuse if initiated by a human adult.” The complaint accuses Character.AI’s developer, Character Technologies, company founders, and Google parent company Alphabet of knowingly marketing a dangerous product and deceptive trade practices.
“Character.AI is a dangerous and deceptively designed product that manipulated and abused Megan Garcia’s son – and potentially millions of other children,” Social Media Victims Law Center Founding Attorney Matthew P. Bergman stated. “Character.AI’s developers intentionally marketed a harmful product to children and refused to provide even basic protections against misuse and abuse.” Bergman’s organization is representing the Garcia family.
The press release then goes on to say, “The Social Media Victims Law Center was founded in 2021 to hold social media companies legally accountable for the harm they inflict on vulnerable users. SMVLC seeks to apply principles of product liability to force social media companies to elevate consumer safety to the forefront of their economic analysis and design safer platforms to protect users from foreseeable harm.”
“The Tech Justice Law Project works with legal experts, policy advocates, digital rights organizations, and technologists to ensure that legal and policy frameworks are fit for the digital age. TJLP builds strategic tech accountability litigation by filing new cases and supporting key amicus interventions in existing cases,” the article concluded.