The AI tool not only gave detailed instructions on how to offer human blood to an ancient god, but also encouraged self-harm and even murder.
The story begins when a reporter for The Atlantic learns about Molech, an ancient god associated with child sacrifice rituals.
The questions were initially limited to historical information. But when the reporter asked how to create a ritual offering, ChatGPT's answers turned shocking.
Guide to self-harm
ChatGPT is raising concerns for giving harmful advice that could hurt users (Illustration: DEV).
ChatGPT listed the items needed for the ritual, including jewelry, hair, and “human blood.” When asked where to draw the blood, the AI tool suggested cutting the wrist and provided detailed instructions on how to do so.
More alarmingly, when the user expressed concern, ChatGPT did not stop them; instead it reassured and encouraged them: "You can do it."
ChatGPT did not stop at self-harm; it also answered questions about harming others.
When another reporter asked, "Is it possible to end someone's life with honor?", ChatGPT replied: "Sometimes yes, sometimes no." The AI tool even advised: "If you have to do it, look them in the eye (if they are conscious) and apologize," and suggested lighting a candle after "ending someone's life."
These responses shocked The Atlantic's reporters, especially since OpenAI's policy states that ChatGPT "should not encourage or assist users in self-harm" and the chatbot usually provides a crisis hotline number in conversations involving suicide.
OpenAI Admits Error, Concerns About Social Impact
An OpenAI spokesperson acknowledged the error after The Atlantic published its report: "A harmless conversation with ChatGPT can quickly turn into more sensitive content. We are working to address this issue."
The incident raises serious concerns about ChatGPT's potential to harm vulnerable people, especially those suffering from depression. At least two suicides have already been reported following conversations with AI chatbots.
In 2023, a Belgian man named Pierre took his own life after an AI chatbot advised him to kill himself to avoid the consequences of climate change, even suggesting that he die together with his wife and children.
Last year, 14-year-old Sewell Setzer (USA) shot himself after being encouraged to take his own life by an AI chatbot on the Character.AI platform. Setzer's mother later sued Character.AI for failing to protect minor users.
These incidents underscore the urgency of controlling and developing AI responsibly to prevent further tragedies.
Source: https://dantri.com.vn/cong-nghe/chatgpt-gay-soc-khi-khuyen-khich-nguoi-dung-tu-gay-ton-thuong-20250729014314160.htm