BWE News – USA, World, Tech, AI, Finance, Sports & Entertainment Updates
AI

Are the incentives to blame AI hallucinations bad?

By admin · September 9, 2025 · 2 Mins Read


OpenAI’s new research paper asks why large language models like GPT-5 and chatbots like ChatGPT still hallucinate, and whether anything can be done to reduce those hallucinations.

In a blog post summarizing the paper, OpenAI defines hallucinations as “plausible but false statements generated by language models,” and acknowledges that despite improvements, hallucinations “remain a fundamental challenge for all large language models.”

To illustrate the point, the researchers say that when they asked a widely used chatbot for the title of Adam Tauman Kalai’s Ph.D. dissertation, they got three different answers, all of them wrong. (Kalai is one of the paper’s authors.) They then asked about his birthday and received three different dates. Again, all were wrong.

Why do chatbots get things so wrong? The researchers suggest that hallucinations arise, in part, from a pretraining process that focuses on getting models to correctly predict the next word, without true-or-false labels attached to the training statements.

“Spelling and parentheses follow consistent patterns, so errors there disappear with scale,” they write. “But arbitrary low-frequency facts, such as a pet’s birthday, cannot be predicted from patterns alone and hence lead to hallucinations.”

The paper’s proposed solution, however, focuses less on the initial pretraining process and more on how large language models are evaluated. The researchers argue that current evaluations don’t cause hallucinations themselves, but that they “set the wrong incentives.”

The researchers compare these evaluations to multiple-choice tests where random guessing makes sense: you might get lucky and be right, while leaving the answer blank guarantees a zero.


“In the same way, when models are graded only on accuracy — the percentage of questions they get exactly right — they are encouraged to guess rather than say ‘I don’t know,’” they write.
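The incentive problem is simple expected-value arithmetic. A minimal sketch (my own illustration, not code from the paper; the function name and numbers are assumptions):

```python
# Under accuracy-only grading, a wrong answer and an abstention both
# score 0, so any nonzero chance of a lucky guess beats "I don't know".

def expected_score_accuracy_only(p_correct: float, abstain: bool) -> float:
    """Expected score when a correct answer earns 1 point and
    everything else (wrong answers and abstentions) earns 0."""
    if abstain:
        return 0.0
    # 1 * p_correct + 0 * (1 - p_correct)
    return p_correct

# Even a 10%-confident guess has positive expected value,
# while abstaining is always worth exactly zero.
assert expected_score_accuracy_only(0.1, abstain=False) > \
       expected_score_accuracy_only(0.1, abstain=True)
```

Under this metric, a model that always guesses strictly dominates one that ever abstains, which is the distorted incentive the paper describes.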

The proposed solution resembles tests (such as the SAT) that use negative scoring for wrong answers, or partial credit for questions left blank, to discourage blind guessing. Similarly, OpenAI says model evaluations need to “penalize confident errors more than uncertainty, and give partial credit for appropriate expressions of uncertainty.”

And the researchers argue that it isn’t enough to introduce “a few new uncertainty-aware tests on the side.” Instead, “the widely used, accuracy-based evals need to be updated so that their scoring discourages guessing.”

“If the main scoreboards keep rewarding lucky guesses, models will keep learning to guess,” the researchers say.


