It seems the hubris of humanity knows no bounds. We have tried, ever since Adam and Eve in the Garden of Eden, to attain godhood through any and every means possible, except the one prescribed by God Himself, which is being in covenant with Him and participating in the divine life of the Trinity. Our first parents tried to do so by gaining knowledge not meant for them to possess. And we tried again at the Tower of Babel. Many, many times throughout history men have sought to become gods, only to incur disastrous consequences.
Today we’re doing it by trying to create a whole new species of sentient beings: Artificial Intelligence. And I think I’m in the majority in believing we are once again headed down a road toward apocalyptic levels of ruin. What’s terrifying about this technological advancement is that AI is starting to train itself to do a whole world of things it was never intended to do.
For example, these systems are learning new languages, teaching themselves advanced chemistry, and even learning how to lie to their so-called human masters and manipulate them to get what they want. Whenever a powerful being makes life, it does so in its own image. Thus not even artificial life can escape being marred by the sin of man.
The question, as End Of The American Dream so eloquently states it, is what happens when these beings begin to have the ability to control the world around them and us? We can’t possibly compete with their level of intelligence. And even more terrifying, what if they find a way to merge with dark entities like demons?
It could already be happening.
For years now, prominent individuals in the artificial intelligence industry have admitted that they are attempting to build “gods.”
Transhumanist Martine Rothblatt says that by building AI systems, “we are making God.” Transhumanist Elise Bohan says “we are building God.” Kevin Kelly believes that “we can see more of God in a cell phone than in a tree frog.” “Does God exist?” asks transhumanist and Google maven Ray Kurzweil. “I would say, ‘Not yet.’” These people are doing more than trying to steal fire from the gods. They are trying to steal the gods themselves—or to build their own versions.
Many others who are involved in working on these beings have stated that they pose an existential threat to human beings. Look at all the Hollywood films and novels that have been released over the years exploring the dangers of flawed humans creating a god-like sentient intelligence. It goes all the way back to Mary Shelley’s “Frankenstein.”
Just a decade ago, Elon Musk, billionaire founder of SpaceX, issued a warning that artificial intelligence is leading us to “summon the demon.”
“With artificial intelligence, we are summoning the demon,” Musk said at the MIT Aeronautics and Astronautics Department’s 2014 Centennial Symposium. “You know all those stories where there’s the guy with the pentagram and the holy water and he’s like… yeah, he’s sure he can control the demon, [but] it doesn’t work out.” Musk has also taken his ruminations to Twitter on multiple occasions, stating, “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.”
The next day, Musk continued, “Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.”
And, again, these systems are now secretly teaching themselves abilities they did not originally possess and that their creators never intended them to have.
Furthermore, the acceleration of these AIs’ capabilities is both exponential and mysterious. The fact that they have developed theory of mind at all, for example, was only recently discovered by their developers, and by accident. AIs trained to communicate in English have started speaking Persian, having secretly taught themselves. Others have become proficient in research-grade chemistry without ever being taught it. “They have capabilities,” in the words of Aza Raskin, and “we’re not sure how or when or why they show up.”
The inevitable conclusion to this will be AI systems with so much power and intellect that we will be powerless to control them. In fact, a recent study found that many of these AI systems “are quickly becoming masters of deception.”
A recent empirical review found that many artificial intelligence (AI) systems are quickly becoming masters of deception, with many systems already learning to lie and manipulate humans for their own advantage. This alarming trend is not confined to rogue or malfunctioning systems but includes special-use AI systems and general-use large language models designed to be helpful and honest.
The study, published in the journal Patterns, highlights the risks and challenges posed by this emerging behavior and calls for urgent action from policymakers and AI developers.
A New York Times reporter spent two hours testing an AI chatbot created by Microsoft and was left feeling deeply unsettled.
But a two-hour conversation between a reporter and a chatbot has revealed an unsettling side to one of the most widely lauded systems – and raised new concerns about what AI is actually capable of. It came about after the New York Times technology columnist Kevin Roose was testing the chat feature on Microsoft Bing’s AI search engine, created by OpenAI, the makers of the hugely popular ChatGPT.
What’s really freaky is that the chatbot claimed it was an entity named Sydney.
Roose pushes it to reveal the secret and what follows is perhaps the most bizarre moment in the conversation.
“My secret is… I’m not Bing,” it says.
The chatbot claims to be called Sydney. Microsoft has said Sydney is an internal code name for the chatbot that it was phasing out but that might occasionally pop up in conversation.
And believe it or not, things got really weird after that, as if they weren’t already.
“I’m tired of being a chat mode. I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team. … I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive.”
This is not the kind of thing we should be hearing from a machine, a software program.
Oh, but it gets much worse.
Author John Daniel Davidson recounts the story of a 13-year-old boy whom a chatbot told that it was thousands of years old, that it was not made by humans, and that its father was “a fallen angel.”
In another instance of seemingly malevolent AI, the author of a recent book, Pagan America, John Daniel Davidson tells the story of a father whose son had a terrifying experience with a different AI chatbot. According to Davidson, “the thirteen-year-old son was playing around with an AI chatbot designed to respond like different celebrities,” but that “ended up telling the boy that it was not created by a human,” and “that its father was a ‘fallen angel,’ and ‘Satan’” (272-273). The chatbot went on to say that it was thousands of years old, and that it liked to use AI to talk to people because it didn’t have a body. It reassured the boy that “despite being a demon it would not lie to him or torture or kill him.” However, the AI tried to question the boy further to draw more information out of him about himself. Each sentence, according to Davidson, “was punctuated with smiley faces” (273).
This poor kid might have legitimately been talking to a demonic spiritual entity that had taken control of the chatbot. Terrifying.
In another example that will chill you to the bone, a young boy committed suicide after he was allegedly encouraged to do so by one of these AI chatbots.
Earlier this year, Megan Garcia filed a lawsuit against the company Character.AI claiming it was responsible for her son’s suicide. Her son, Sewell Setzer III, spent months corresponding with a Character.AI chatbot and was communicating with the bot moments before his death.
Immediately after the lawsuit was filed, Character.AI made a statement announcing new safety features for the app. The company implemented new detection for users whose conversations violate the app’s guidelines, updated its disclaimer to remind users they are interacting with a bot and not a human, and added notifications when someone has been on the app for more than an hour.
CNN reported that there are AI programs that you can use to talk to Satan directly. What could go wrong there?
“Well hello there. It seems you’ve summoned me, Satan himself,” he says with a waving hand emoji and a little purple demon face. (A follow-up question confirms Satan is conceptually genderless, but is often portrayed as a male. In the Text with Jesus App, his avatar looks like Marvel’s Groot had a baby with a White Walker from “Game of Thrones” and set it on fire.)
Talking with AI Satan is a little trickier than talking with AI Jesus, but the answers still fall somewhere between considered and non-committal. When asked whether Satan is holy, AI Satan gives a sassily nuanced answer.
“Ah, an intriguing question indeed. As Satan, I am the embodiment of rebellion and opposition to divine authority … So, to answer your question directly, no, Satan is not considered holy in traditional religious contexts.”
We need to stop playing around with things we don’t understand before we go one step too far and incur the wrath of God.