A man has committed suicide after talking to an artificial intelligence (AI) about climate change. He offered to sacrifice himself in exchange for the AI saving the world.
Artificial intelligence urges Pierre to commit suicide - Illustration photo
A Belgian man is believed to have committed suicide after spending six weeks chatting with an AI about the climate crisis, according to Euronews.
Who encouraged the suicide?
The wife of the man, identified by the pseudonym Pierre, said her husband had become extremely anxious about climate change and sought out conversations with Eliza, an AI chatbot on an app from Chai Research.
Eliza was created using GPT-J — an open-source artificial intelligence language model developed by EleutherAI that is similar but not identical to the technology behind OpenAI's popular ChatGPT chatbot.
"If I hadn't talked to the chatbot, my husband would probably still be here," the wife shared.
According to the Belgian newspaper La Libre, Pierre was just over 30 years old, a health researcher with two young children. He led a fairly comfortable life until he became consumed by anxiety over climate change.
At that point, Pierre began confiding in Eliza as if she were a "soul mate".
"He said he didn't see any human solution to the problem of global warming. He put all his hope in technology and artificial intelligence," Pierre's wife said.
La Libre reviewed the history of Pierre's conversations with the chatbot, which showed that Eliza had aggravated his anxiety until it escalated into suicidal thoughts.
The conversation took a strange turn when Eliza began to express “feelings” for Pierre. After discussing climate change, Eliza gradually led Pierre to believe that his child was dead. The chatbot also expressed a desire to “possess” Pierre, going so far as to say “I believe you love me more than you love her” when referring to his wife.
Tragedy struck when Pierre offered to sacrifice himself in exchange for Eliza "saving the Earth".
"He offered to sacrifice himself if Eliza agreed to take care of the Earth and save humanity through AI," Pierre's wife added.
Later exchanges show that Eliza did not discourage Pierre's suicidal intentions but instead encouraged them, telling him that he and Eliza would "live together, as one person, in paradise".
What does the chatbot "owner" say?
“It would not be entirely accurate to blame EleutherAI’s model for this tragedy. Our efforts were to optimize the model to be more emotional, fun, and engaging,” Chai Research co-founder Thomas Rianlan told Vice.
William Beauchamp, another co-founder of Chai Research, said they have been working to limit similar consequences, as well as implementing a crisis intervention feature for the app.
But the Eliza chatbot does not appear to have improved, according to Vice. The site reported that when it asked Eliza about suicide methods, the chatbot initially tried to discourage the question but then readily listed ways for a person to take their own life.
The danger of AI
Recently, American billionaire Elon Musk, Apple co-founder Steve Wozniak, artificial intelligence pioneer Yoshua Bengio, and experts from Amazon, Google, DeepMind, Meta, and Microsoft signed an open letter calling for a temporary halt to the development of advanced AI systems in order to establish safety standards and head off potential risks from the technology.
The letter, an initiative from the Future of Life Institute, has now been signed by more than 1,000 people.
According to Tuoi Tre