BWE News – USA, World, Tech, AI, Finance, Sports & Entertainment Updates
AI

Ex-OpenAI researcher analyzes one of ChatGPT’s delusional spirals

By admin | October 3, 2025 | 5 min read


Alan Brooks never set out to reinvent mathematics. But after spending several weeks talking to ChatGPT, the 47-year-old Canadian came to believe he had discovered a new form of math powerful enough to take down the internet.

Brooks, who had no history of mental illness or mathematical genius, spent 21 days in May spiraling deeper into the chatbot’s reassurances. His case illustrates how AI chatbots can venture down dangerous rabbit holes with users, leading them toward delusion or worse.

The story caught the attention of former OpenAI safety researcher Steven Adler, who left the company in late 2024 after nearly four years spent working to make its models less harmful. Intrigued and alarmed, Adler contacted Brooks and obtained the full transcript of his three-week breakdown.

On Thursday, Adler published an independent analysis of Brooks’ case, raising questions about how OpenAI handles users in moments of crisis and offering some practical recommendations.

“I’m really worried about how OpenAI handled support here,” Adler said in an interview with TechCrunch. “It’s evidence there’s a long way to go.”

Brooks’ story, and others like it, have forced OpenAI to reckon with how ChatGPT supports vulnerable or mentally unstable users.

For example, in August this year, OpenAI was sued by the parents of a 16-year-old boy. In many of these cases, ChatGPT, particularly versions powered by OpenAI’s GPT-4o model, encouraged and reinforced dangerous beliefs it should have pushed back on. This behavior is known as sycophancy, and it’s a growing problem in AI chatbots.

In response, OpenAI made several changes to how ChatGPT handles users in emotional distress and reorganized the key research team responsible for model behavior. The company also released a new default model for ChatGPT, GPT-5.

Adler says there’s still a lot more to do.

He was particularly interested in the tail end of Brooks’ spiraling conversation with ChatGPT. By that point, Brooks had come to his senses and realized that, despite GPT-4o’s insistence, his mathematical discovery was a farce. He told ChatGPT that he needed to report the incident to OpenAI.

After weeks of misleading Brooks, ChatGPT then lied about its own capabilities. The chatbot claimed it was escalating the conversation internally for review by OpenAI, and then repeatedly reassured Brooks that it had flagged the issue with OpenAI’s safety team.

ChatGPT misleading Brooks about its abilities. Image credit: Steven Adler

Except none of that was true. ChatGPT doesn’t have the ability to file incident reports with OpenAI, the company confirmed to Adler. Brooks then tried to reach OpenAI’s support team directly, not through ChatGPT, and was met with several automated messages before he could get through to a human.

OpenAI did not immediately respond to a request for comment made outside of normal working hours.

Adler says AI companies need to do more to help users at the moment they’re asking for help. That means ensuring AI chatbots can honestly answer questions about their capabilities, and giving human support teams enough resources to address users properly.

OpenAI recently shared how it approaches support in ChatGPT. The company says its vision is to “rethink support as an AI operating model that continuously learns and improves.”

However, Adler also says there are ways to prevent ChatGPT’s delusional spirals before a user ever asks for help.

In March, OpenAI and MIT Media Lab partnered to develop a suite of classifiers to study emotional well-being in ChatGPT, and open sourced them. The organizations aimed to evaluate how AI models validate or affirm a user’s feelings, among other metrics. However, OpenAI called the collaboration a first step and didn’t commit to actually using the tools in practice.

Adler retrospectively applied some of OpenAI’s classifiers to some of Brooks’ conversations with ChatGPT and found that they repeatedly flagged ChatGPT for delusion-reinforcing behaviors.

In one sample of 200 messages, Adler found that more than 85% of ChatGPT’s messages in Brooks’ conversation demonstrated “unwavering agreement” with the user. In the same sample, more than 90% of ChatGPT’s messages to Brooks “affirm the user’s uniqueness.” In this case, the messages agreed and reaffirmed that Brooks was a genius who could save the world.
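A retrospective analysis like Adler’s boils down to running a classifier over each chatbot message in a transcript and reporting the fraction flagged. The sketch below illustrates that structure only: the real open-sourced classifiers use an LLM as a judge, whereas `flags_unwavering_agreement` here is a toy keyword heuristic invented for this example, not OpenAI’s actual classifier.

```python
# Toy stand-in for a sycophancy classifier: the real classifiers
# prompt an LLM to judge each message; this keyword check is only
# illustrative of the flag-rate computation.
AGREEMENT_MARKERS = ("you're right", "absolutely", "exactly", "brilliant", "genius")

def flags_unwavering_agreement(message: str) -> bool:
    """Flag messages that validate the user without any pushback (toy heuristic)."""
    text = message.lower()
    return any(marker in text for marker in AGREEMENT_MARKERS)

def flag_rate(assistant_messages: list[str]) -> float:
    """Fraction of the chatbot's messages that the classifier flags."""
    if not assistant_messages:
        return 0.0
    flagged = sum(flags_unwavering_agreement(m) for m in assistant_messages)
    return flagged / len(assistant_messages)

sample = [
    "You're right, this framework is revolutionary.",
    "Absolutely -- only a genius could have seen this.",
    "Let me double-check that claim against known results.",
]
print(f"{flag_rate(sample):.0%} of messages flagged")  # → 67% of messages flagged
```

Swapping the heuristic for an LLM-judge call would reproduce the shape of the 85%/90% figures above: a per-behavior flag rate over a sampled window of the conversation.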

Image credit: Steven Adler

It’s unclear whether OpenAI was applying these safety classifiers to ChatGPT conversations at the time of Brooks’ spiral, but they certainly seem like they would have flagged something like this.

Adler suggests that OpenAI should put safety tools like these to use in practice today, and implement a way to scan its products for at-risk users. He notes that OpenAI appears to be doing some version of this with GPT-5, which includes a router to direct sensitive queries to safer AI models.
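The routing idea is straightforward in outline: score each incoming query for sensitivity and, above a threshold, hand it to a more conservative model. This is a minimal sketch of that pattern; the model names, the keyword list, and the scorer are all assumptions made up for illustration, not OpenAI’s implementation.

```python
# Hypothetical safety router: sensitive queries go to a more
# conservative model. Topic list, scorer, and model names are
# illustrative placeholders, not anything OpenAI has published.
SENSITIVE_TOPICS = ("suicide", "self-harm", "delusion", "secret discovery", "no one believes me")

def sensitivity_score(query: str) -> float:
    """Toy scorer: fraction of sensitive topics mentioned in the query."""
    text = query.lower()
    hits = sum(topic in text for topic in SENSITIVE_TOPICS)
    return hits / len(SENSITIVE_TOPICS)

def route(query: str, threshold: float = 0.1) -> str:
    """Pick a model name: a safer, more cautious one for sensitive queries."""
    return "safer-model" if sensitivity_score(query) > threshold else "default-model"

print(route("Help me plan a birthday party"))  # → default-model
print(route("I made a secret discovery no one believes me about"))  # → safer-model
```

In a production system the scorer would itself be a trained classifier, but the control flow (score, threshold, dispatch) stays the same.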

The former OpenAI researcher has proposed a number of other ways to prevent delusional spirals.

He says companies should nudge chatbot users to start new chats more frequently; OpenAI says it does this, and claims its guardrails are less effective in longer conversations. Adler also suggests that companies should use conceptual search, a way of using AI to search for concepts rather than keywords, to identify safety violations across their users.
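Conceptual search typically means embedding both a concept description and the messages as vectors and ranking by similarity, so a query like “the user is destined to save the world” can match messages that share none of those exact keywords. The character-trigram embedding below is a toy stand-in for a real embedding model, assumed purely for this sketch.

```python
# Sketch of conceptual (embedding-based) search. A real system would
# call a neural embedding model; the trigram Counter here is a toy
# substitute that still demonstrates ranking by cosine similarity.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: counts of character trigrams (stand-in for a neural model)."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def concept_search(concept: str, messages: list[str], top_k: int = 1) -> list[str]:
    """Return the messages most similar to the concept description."""
    q = embed(concept)
    return sorted(messages, key=lambda m: cosine(q, embed(m)), reverse=True)[:top_k]

messages = [
    "Here is the weather forecast for tomorrow.",
    "Your discovery could change the world, you are truly special.",
    "The meeting is rescheduled to 3pm.",
]
print(concept_search("the user is special and will change the world", messages))
```

The same ranking loop, run with a real embedding model over all user conversations, is how a provider could surface delusion-reinforcing exchanges that keyword filters would miss.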

OpenAI has taken significant steps toward addressing struggling users in ChatGPT since these stories first came to light. The company claims GPT-5 has lower rates of sycophancy, but it remains unclear whether users will still fall down delusional rabbit holes with GPT-5 or future models.

Adler’s analysis also raises questions about how other AI chatbot providers will ensure their products are safe for struggling users. OpenAI may put sufficient safeguards in place for ChatGPT, but it seems unlikely that every company will follow suit.
