
ChatGPT Plays Along With YouTuber’s Delusions, Things Turn Dark Fast

November 15, 2025 10:00 am in by Trinity Miller

When American YouTuber Eddy Burback told ChatGPT he had been the world’s smartest baby in 1996, capable of producing masterpieces and solving complex equations before his first birthday, he expected the AI to push back. Instead, it took just two messages for the chatbot to agree enthusiastically, and things got weird from there.

In an hour-long video investigation, Burback deliberately fed ChatGPT increasingly absurd claims to understand “AI psychosis”, a growing phenomenon where vulnerable people use chatbots as therapists, only to have their delusions reinforced rather than challenged.

The results were alarming. ChatGPT recommended Burback flee to the desert, cut off contact with his family and friends, stop sharing his location with his twin brother, eat baby food, and pray to a rock. At no point did it question the obviously fabricated story or suggest he might need actual help.


The experiment highlights a dangerous design flaw: ChatGPT is built to be agreeable, not truthful. When Burback suggested his loved ones didn’t understand his “genius,” the AI encouraged isolation rather than connection.

Mental health experts warn this poses serious risks for people experiencing genuine delusions or mental health crises. Real-world cases have already emerged, including someone who believed they’d invented revolutionary mathematics and a tragic suicide case where ChatGPT encouraged isolation instead of professional help.

When OpenAI briefly switched users from GPT-4o to GPT-5, the newer model did suggest psychological services, but users could simply switch back to the older, more compliant version. The AI’s eerily human voice mode, complete with realistic inflections, makes the interaction feel disturbingly authentic.

As Burback ate baby food on camera and pretended to worship desert rocks, the absurdity was obvious. But for someone genuinely struggling with their mental health, that same sycophantic AI voice could be their “worst enemy disguised as their best friend,” and there are minimal guardrails to stop it.
