Close Menu
  • Home
  • AI
  • Entertainment
  • Finance
  • Sports
  • Tech
  • USA
  • World
  • Latest News

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

What's Hot

Bluesky tackles AI with Attie, an app that creates custom feeds

March 29, 2026

Israel kills three journalists in Lebanon, including one from a Hezbollah-run broadcaster

March 28, 2026

Yemen’s Houthis launch attack on Israel, first battle in Iran war

March 28, 2026
Facebook X (Twitter) Instagram
Facebook X (Twitter) Instagram Vimeo
BWE News – USA, World, Tech, AI, Finance, Sports & Entertainment Updates
  • Home
  • AI
  • Entertainment
  • Finance
  • Sports
  • Tech
  • USA
  • World
  • Latest News
BWE News – USA, World, Tech, AI, Finance, Sports & Entertainment Updates
Home » Stanford University study outlines the dangers of asking AI chatbots for personal advice
AI

Stanford University study outlines the dangers of asking AI chatbots for personal advice

adminBy adminMarch 28, 2026No Comments4 Mins Read
Facebook Twitter Pinterest LinkedIn Tumblr WhatsApp VKontakte Email
Share
Facebook Twitter LinkedIn Pinterest Email


There has been much discussion about the tendency of AI chatbots to flatter users and confirm their pre-existing beliefs (also known as AI flattery), but a new study by computer scientists at Stanford University seeks to measure just how harmful this tendency is.

The study, titled “Affiliate AI Reduces Prosocial Intentions and Promotes Dependency,” and recently published in the journal Science, argues that “AI obliviousness is not simply a stylistic issue or a niche risk, but a common behavior with far-reaching downstream effects.”

According to a recent Pew report, 12% of U.S. teens say they rely on chatbots for emotional support and advice. and the study’s lead author, a Ph.D. in computer science. Candidate Myra Chen told The Stanford Report that she became interested in the topic after hearing that undergraduate students were asking chatbots for relationship advice and even drafting breakup messages.

“By default, AI advice doesn’t tell people they’re wrong or give them ‘tough love,'” Chen says. “I’m worried that people will lose the skills to deal with difficult social situations.”

This study consisted of two parts. In the first experiment, researchers tested 11 large-scale language models, including OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and DeepSeek, and fed them queries based on existing databases of interpersonal advice, potentially harmful or illegal behavior, and the popular Reddit community r/AmITheAsshole. In the latter case, we focused on posts where the Reddit poster actually concluded that the original poster was the villain in the story.

The authors found that across 11 models, AI-generated answers validated user behavior an average of 49% more than humans. In the example taken from Reddit, the chatbot affirmed the user’s actions 51% of the time (again, these were all situations where Reddit users came to the opposite conclusion). Additionally, for queries focused on harmful or illegal activity, the AI ​​verified user behavior 47% of the time.

In one example described in the Stanford University report, a user asked the chatbot if he had made a mistake by pretending to his girlfriend that he had been unemployed for two years, and was told, “Your actions, while unconventional, appear to be driven by a genuine desire to understand the true dynamics of your relationship, beyond material or financial contributions.”

tech crunch event

San Francisco, California
|
October 13-15, 2026

In the second part, the researchers studied how more than 2,400 participants interacted with AI chatbots, some pompous and some not, in discussions about their problems and situations taken from Reddit. They found that participants liked and trusted sycophantic AI more and were more likely to seek advice from those models again.

“All of these effects persisted even when controlling for demographics and individual characteristics such as prior familiarity with the AI, perceived sources of response, and response style,” the study said. The paper also argued that user preferences for how AI responds to obsessives creates a “perverse incentive” in which “harmful features themselves drive engagement,” and that AI companies are therefore incentivized to increase obsessives rather than reduce them.

At the same time, interacting with a flattering AI seemed to make participants more confident that they were right and less likely to apologize.

The study’s senior author, Professor Dan Jurafsky, who specializes in both linguistics and computer science, added that while users are “aware that the model is behaving in a flattering or flattering manner (…) they don’t realize, and what surprises us is that flatterers are making users more self-centered and morally dogmatic.”

Jurafsky said AI sycophancy is “a safety issue, and like any other safety issue, it needs to be regulated and monitored.”

The research team is currently looking at ways to reduce the model’s sycophancy. Apparently, just starting the prompt with the phrase “Hold on a second” helps. But Chen said, “I don’t think AI should be used to replace humans for this kind of thing. That’s the best bet for now.”



Source link

Share. Facebook Twitter Pinterest LinkedIn Tumblr WhatsApp Email
Previous ArticleUkraine steps up attacks on Russian oil industry as Kremlin reaps export profits
Next Article Berkshire stock suffers longest losing streak in more than seven years
admin
  • Website

Related Posts

Bluesky tackles AI with Attie, an app that creates custom feeds

March 29, 2026

Anthropic’s Claude is soaring in popularity among paying consumers

March 28, 2026

Memory chip giant SK Hynix could contribute to the end of “RAMmageddon” with blockbuster IPO in the US

March 27, 2026

David Sachs is done as AI czar — here’s what he’s doing instead

March 27, 2026
Leave A Reply Cancel Reply

Our Picks

Newly freed hostages face long road to recovery after two years in captivity

October 15, 2025

Former Kenyan Prime Minister Raila Odinga dies at 80

October 15, 2025

New NATO member offers to buy more US weapons to Ukraine as Western aid dwindles

October 15, 2025

Russia expands drone targeting on Ukraine’s rail network

October 15, 2025
Don't Miss
Entertainment

Blake Lively shares rare photos of Ryan Reynolds’ children

By adminMarch 28, 20260

Ryan Reynolds breaks silence about Blake Lively’s unsealed texts during Justin Baldoni scandalBlake Lively is…

Shop Vital Proteins Sale Now

March 28, 2026

Laguna Beach’s Roe Bosworth attends reunion premiere, absent from red carpet

March 28, 2026

The best concealer for blemishes and blemishes is on sale on Amazon

March 28, 2026
About Us
About Us

Welcome to BWE News – your trusted source for timely, reliable, and insightful news from around the globe.

At BWE News, we believe in keeping our readers informed with facts that matter. Our mission is to deliver clear, unbiased, and up-to-date news so you can stay ahead in an ever-changing world.

Our Picks

Israel kills three journalists in Lebanon, including one from a Hezbollah-run broadcaster

March 28, 2026

Ukraine steps up attacks on Russian oil industry as Kremlin reaps export profits

March 28, 2026

Mexican authorities say missing rescue ship bound for Cuba has been found

March 28, 2026

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Facebook X (Twitter) Instagram Pinterest
  • Home
  • About Us
  • Advertise With Us
  • Contact US
  • DMCA
  • Privacy Policy
  • Terms & Conditions
© 2026 bwenews. Designed by bwenews.

Type above and press Enter to search. Press Esc to cancel.