The Pentagon formally designated Anthropic as a supply chain risk after the two sides failed to agree on how much control the military should have over its AI models, including for use in autonomous weapons and domestic mass surveillance. With Anthropic’s $200 million contract falling apart, the Department of Defense turned to OpenAI instead, which accepted the deal and saw ChatGPT’s uninstalls jump by 295%. As the risks continue to mount, the question remains: How much unfettered access to AI models should the military be given?
In this episode of TechCrunch’s Equity podcast, hosts Kirsten Korosec, Anthony Ha, and Sean O’Kane dig into what startups should be thinking about when pursuing federal contracts — especially at a moment when no one in Washington seems to know what to do with AI — along with this week’s headlines and more.
Listen to the full episode to hear more about what’s next.
- Paramount’s big deal with Warner Bros., and the Equity staff’s ideas on what to call the new HBO Max and Paramount+ hybrid
- Whether companies should prepare for the SaaSpocalypse, or whether it’s just another chapter in the AI hype cycle
Subscribe to Equity on YouTube, Apple Podcasts, Overcast, Spotify, and all the casts. You can also follow Equity on X and Threads at @EquityPod.
