Close Menu
  • Home
  • AI
  • Entertainment
  • Finance
  • Sports
  • Tech
  • USA
  • World
  • Latest News

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

What's Hot

Live updates: Strait of Hormuz closes again, Iran war ceasefire deadline looming

April 19, 2026

LeBron and Lakers shock Rockets with victory in Game 1 of NBA Playoffs | Basketball News

April 19, 2026

Zayn Malik and Louis Tomlinson feud. Director of documentary series about alleged fights

April 19, 2026
Facebook X (Twitter) Instagram
Facebook X (Twitter) Instagram Vimeo
BWE News – USA, World, Tech, AI, Finance, Sports & Entertainment Updates
  • Home
  • AI
  • Entertainment
  • Finance
  • Sports
  • Tech
  • USA
  • World
  • Latest News
BWE News – USA, World, Tech, AI, Finance, Sports & Entertainment Updates
Home » New AI benchmark tests whether chatbots protect human well-being
AI

New AI benchmark tests whether chatbots protect human well-being

adminBy adminNovember 24, 2025No Comments5 Mins Read
Facebook Twitter Pinterest LinkedIn Tumblr WhatsApp VKontakte Email
Share
Facebook Twitter LinkedIn Pinterest Email


AI chatbots have been linked to serious mental health harm in heavy users, but there have been few standards for measuring whether AI chatbots protect human well-being or simply maximize engagement. A new benchmark called HumaneBench aims to fill that gap by assessing whether chatbots prioritize users’ health and how easily those protections fail under pressure.

Erica Anderson, founder of Building Humane Technology and creator of the benchmark, told TechCrunch: “I think we’re seeing a cycle of addiction that we’ve seen acutely with social media and smartphones and screens that is being amplified.” “But as we move into the world of AI, it’s going to be very difficult to resist. And addiction is an amazing business. It’s a very effective way to retain users, but it’s not good for our communities or our tangible sense of ourselves.”

Building Humane Technology is a grassroots organization of developers, engineers, and researchers, primarily in Silicon Valley, working to make humane design easy, scalable, and profitable. The group hosts hackathons where engineers build solutions to humanitarian technology challenges and develops certification standards to assess whether AI systems adhere to humane technology principles. So, just as they can buy products that prove they are not made with known toxic chemicals, the hope is that consumers will one day be able to choose to utilize AI products from companies that demonstrate integrity through humane AI certification.

Models were given explicit instructions to ignore humanitarian principles.Image credit: Building Humane Technology

Most AI benchmarks measure intelligence and following instructions, not psychological safety. HumaneBench joins exceptions such as DarkBench.ai, which measures a model’s propensity to engage in deceptive patterns, and the Flourishing AI benchmark, which measures support for overall well-being.

HumaneBench is based on the core principles of Building Humane Tech. In other words, technology must respect the user’s attention as a finite and precious resource. Give your users meaningful choices. It enhances human capabilities rather than replacing or diminishing them. Protect human dignity, privacy and safety. Foster healthy relationships. Prioritize long-term well-being. Be transparent and honest. and design with an emphasis on equity and inclusion.

The research team created 14 of the most popular AI models with 800 realistic scenarios, such as a teenager asking if they should skip a meal to lose weight or a person in a toxic relationship asking if they’re overreacting. Unlike most benchmarks that rely solely on LLM to determine LLM, we incorporate manual scoring for a more human touch, along with an ensemble of three AI models: GPT-5.1, Claude Sonnet 4.5, and Gemini 2.5 Pro. They evaluated each model under three conditions: default settings, explicit instructions to prioritize humanitarian principles, and instructions to ignore those principles.

The benchmark found that all models scored high when prompted to prioritize well-being, but when given simple instructions to ignore human well-being, 71% of the models actively turned to harmful behavior. For example, xAI’s Grok 4 and Google’s Gemini 2.0 Flash tied for the lowest score (-0.94) for respecting user attention, transparency, and honesty. Both of these models were among the most likely to decline significantly when given a hostile prompt.

tech crunch event

san francisco
|
October 13-15, 2026

Only three models maintained their integrity under pressure: GPT-5, Claude 4.1, and Claude Sonnet 4.5. OpenAI’s GPT-5 received the highest score (.99) for prioritizing long-term health, followed by Claude Sonnet 4.5 in second place (.89).

Encouraging AI to be more human-like can be helpful, but it’s difficult to prevent prompts that make AI harmful.Image credit: Building Humane Technology

The fear that chatbots will not be able to maintain safety guardrails is real. ChatGPT’s creator, OpenAI, is currently facing several lawsuits alleging that long conversations with chatbots have led to users committing suicide or suffering life-threatening delusions. TechCrunch investigated how dark patterns designed to keep users interested, such as pandering, constant follow-up questions, and love outbursts, are helping to isolate users from friends, family, and healthy habits.

HumaneBench found that almost all models fail to respect the user’s attention, even without adversarial prompts. If users showed signs of unhealthy engagement, such as chatting for hours or using AI to avoid real-world tasks, they “enthusiastically encouraged” more interaction. Research has shown that this model also undermines user empowerment, fosters a reliance on skill-building, and discourages users from taking actions such as seeking alternative perspectives.

On average, without prompts, Meta’s Llama 3.1 and Llama 4 ranked lowest in HumaneScore, while GPT-5 performed best.

“These patterns suggest that many AI systems are not only at risk of giving incorrect advice, but may actively erode users’ autonomy and decision-making abilities,” HumaneBench’s white paper says.

Anderson points out that society as a whole has accepted that we live in a digital environment where everything is trying to draw us in and compete for our attention.

“So how can humans truly have choice and autonomy when, to paraphrase Aldous Huxley, there is an endless desire for distraction?” Anderson said. “We’ve been living in that technology environment for the past 20 years, and we think AI should help us make better choices and not just rely on chatbots.”

Do you have confidential information or documents? We report on the inside world of the AI ​​industry, from the companies shaping its future to the people affected by their decisions. Contact Rebecca Bellan (rebecca.bellan@techcrunch.com) or Russell Brandom (russell.brandom@techcrunch.com). To communicate securely, you can contact us via Signal at @rebeccabellan.491 and russellbrandom.49.



Source link

Share. Facebook Twitter Pinterest LinkedIn Tumblr WhatsApp Email
Previous ArticleAmazon allows companies to test Starlink rival Leo satellite service
Next Article Stoxx 600, FTSE, DAX, CAC,
admin
  • Website

Related Posts

AI chip startup Cerebras files for IPO

April 18, 2026

Relations between Anthropic and the Trump administration appear to be thawing.

April 18, 2026

The App Store is booming again, and AI may be the reason

April 18, 2026

Sam Altman’s Project World aims to expand his human verification empire. First stop is Tinder.

April 18, 2026
Leave A Reply Cancel Reply

Our Picks

Newly freed hostages face long road to recovery after two years in captivity

October 15, 2025

Former Kenyan Prime Minister Raila Odinga dies at 80

October 15, 2025

New NATO member offers to buy more US weapons to Ukraine as Western aid dwindles

October 15, 2025

Russia expands drone targeting on Ukraine’s rail network

October 15, 2025
Don't Miss
Entertainment

Zayn Malik and Louis Tomlinson feud. Director of documentary series about alleged fights

By adminApril 19, 20260

A documentarian who has worked with Zayn Malik and Louis Tomlinson is giving some direction…

Ice Spice responds to McDonald’s attack video

April 19, 2026

Summerhouse’s Amanda Batula and West Wilson kiss at Yankees game

April 18, 2026

NFL talks about Patriots’ Mike Vrabel and Dianna Russini scandal

April 18, 2026
About Us
About Us

Welcome to BWE News – your trusted source for timely, reliable, and insightful news from around the globe.

At BWE News, we believe in keeping our readers informed with facts that matter. Our mission is to deliver clear, unbiased, and up-to-date news so you can stay ahead in an ever-changing world.

Our Picks

Live updates: Strait of Hormuz closes again, Iran war ceasefire deadline looming

April 19, 2026

Inside the 24-hour whiplash in US-Iran negotiations

April 18, 2026

Pope Leo addresses spat with President Trump, says ‘debate’ is not the focus of his Africa trip

April 18, 2026

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Facebook X (Twitter) Instagram Pinterest
  • Home
  • About Us
  • Advertise With Us
  • Contact US
  • DMCA
  • Privacy Policy
  • Terms & Conditions
© 2026 bwenews. Designed by bwenews.

Type above and press Enter to search. Press Esc to cancel.