BWE News – USA, World, Tech, AI, Finance, Sports & Entertainment Updates
AI

Silicon Valley surprises AI safety advocates

By admin · October 18, 2025 · 6 Mins Read


Silicon Valley leaders, including White House AI and cryptocurrency czar David Sacks and OpenAI chief strategy officer Jason Kwon, sparked controversy online this week with comments about groups promoting AI safety. In separate incidents, each argued that some AI safety advocates are not as noble as they appear, and are instead acting in their own interests or those of billionaire puppet masters behind the scenes.

AI safety groups who spoke to TechCrunch said the allegations from Sacks and OpenAI are only Silicon Valley's latest attempt to intimidate its critics, and far from the first. In 2024, some venture capital firms spread rumors that California's AI safety bill, SB 1047, would send startup founders to prison. The Brookings Institution classified this rumor as one of many pieces of misinformation about the bill, but Gov. Gavin Newsom ultimately vetoed it anyway.

Regardless of whether Sacks and OpenAI intended to intimidate critics, their actions were enough to scare some AI safety advocates. Many nonprofit leaders contacted by TechCrunch last week spoke on condition of anonymity to protect their groups from retaliation.

The controversy highlights Silicon Valley's growing tension between building AI responsibly and building it as a mass-market consumer product. This is a topic my colleagues Kirsten Korosec, Anthony Ha, and I explore on this week's Equity podcast. We also dive into a new AI safety law passed in California to regulate chatbots, and OpenAI's approach to erotica in ChatGPT.

On Tuesday, Sacks wrote a post on X alleging that Anthropic, which has raised concerns about AI's ability to cause job losses, cyberattacks, and catastrophic harm to society, is simply stirring up fear in order to pass laws that benefit itself and drown smaller startups in red tape. Anthropic is the only major AI lab to support California Senate Bill 53 (SB 53), which established safety reporting requirements for large AI companies and was signed into law last month.

Sacks was responding to a viral essay by Anthropic co-founder Jack Clark about his fears around AI. Clark delivered the essay as a talk at the Curve AI safety conference in Berkeley a few weeks ago. From the audience, it certainly felt like an engineer's sincere account of his reservations about his own products, but Sacks didn't see it that way.

Anthropic is implementing a sophisticated regulatory capture strategy based on fear-mongering. The company is largely responsible for the state regulatory frenzy that is damaging the startup ecosystem. https://t.co/C5RuJbVi4P

— David Sacks (@DavidSacks) October 14, 2025

Sacks said Anthropic is pursuing a "sophisticated regulatory capture strategy," but it's worth noting that a truly sophisticated strategy probably wouldn't involve antagonizing the federal government. In a follow-up post on X, Sacks noted that Anthropic has "consistently positioned itself as an enemy of the Trump administration."


Also this week, OpenAI Chief Strategy Officer Jason Kwon explained in a post on X why the company is subpoenaing AI safety nonprofits such as Encode, a nonprofit that advocates for responsible AI policies. (A subpoena is a legal order requesting documents or testimony.) Kwon said that after Elon Musk sued OpenAI over concerns that ChatGPT’s developer had strayed from its nonprofit mission, OpenAI became suspicious of multiple groups speaking out against the reorganization. Encode filed a court brief in support of Musk’s lawsuit, and other nonprofits also publicly spoke out against OpenAI’s reorganization.

There’s more to this story than this.

As everyone knows, we are actively defending a lawsuit in which Elon seeks to harm OpenAI for his own financial gain.

Encode, the organization where @_NathanCalvin serves as general counsel, was one of them… https://t.co/DiBJmEwtE4

— Jason Kwon (@jasonkwon) October 10, 2025

“This raises questions about transparency, including who is funding it and whether there was any coordination,” Kwon said.

NBC News reported this week that OpenAI sent wide-ranging subpoenas to Encode and six other nonprofit groups that have criticized the company, seeking communications related to two of OpenAI's biggest opponents, Musk and Meta CEO Mark Zuckerberg. OpenAI also asked Encode for communications related to its support of SB 53.

One prominent AI safety leader told TechCrunch that there is a growing rift between OpenAI's government affairs team and its research organization. While OpenAI's safety researchers frequently publish reports highlighting the risks of AI systems, OpenAI's policy arm lobbied against SB 53, arguing that it would prefer uniform rules at the federal level.

In a post on X this week, Joshua Achiam, OpenAI's head of mission alignment, weighed in on the company's subpoenas to nonprofits.

“Given the potential risks to my entire career, I would say: This doesn’t look very good,” Achiam said.

Brendan Steinhauser, CEO of the AI safety nonprofit Alliance for Secure AI (which has not been subpoenaed by OpenAI), told TechCrunch that OpenAI seems convinced that its critics are part of a conspiracy led by Musk. However, he argues that this is not the case and that many in the AI safety community are highly critical of xAI's safety practices, or lack thereof.

“On OpenAI’s side, this is intended to silence and intimidate critics and deter other nonprofits from doing the same,” Steinhauser said. “I think Sacks is concerned that the [AI safety] movement is growing and people want to hold these companies accountable.”

Sriram Krishnan, White House senior policy adviser for AI and a former a16z general partner, joined the conversation with his own social media posts this week, criticizing AI safety advocates as out of touch. He urged AI safety organizations to talk to “real-world people who are using, selling, and deploying AI in their homes and organizations.”

A recent Pew survey found that about half of Americans are more concerned than excited about AI, but it’s unclear what exactly they’re worried about. Another recent study looked more closely and found that U.S. voters care more about job losses and deepfakes than about the catastrophic risks posed by AI (which is the primary focus of the AI safety movement).

Addressing these safety concerns could come at the expense of the AI industry’s rapid growth, a trade-off that worries many in Silicon Valley. Fears of overregulation are understandable, as investment in AI underpins much of the U.S. economy.

But after years of unregulated AI advancement, the AI safety movement appears to be gaining real momentum heading into 2026. Silicon Valley’s attempts to fight back against safety-focused groups may be a sign that those groups are having an effect.


