The Trump administration on Friday laid out a legal framework intended to serve as the sole AI policy in the United States. The framework would preempt state AI laws and centralize power in Washington, potentially undermining recent efforts by states to regulate the use and development of the technology.
A White House statement on the framework reads: “This framework can only be successful if applied uniformly across the United States. A patchwork of conflicting state laws will undermine America’s innovation and ability to lead in the global AI race.”
The framework outlines seven key goals that prioritize innovation and AI expansion, and proposes a centralized federal approach that overrides stricter state-level regulations. It places significant responsibility on parents for issues such as child safety, and sets relatively soft and non-binding expectations for platform accountability.
For example, the framework says AI companies should be required to implement features that “reduce the risk of sexual exploitation and harm to minors,” but it does not set clear, enforceable requirements.
President Trump’s framework comes three months after he signed an executive order directing federal agencies to challenge state AI laws. The order gave the Commerce Department 90 days to create a list of “onerous” state AI laws that could jeopardize states’ eligibility for federal funds such as broadband subsidies. The agency has not yet published the list.
The order also directed the administration to work with Congress on a uniform AI law. That vision is gaining traction and mirrors President Trump’s earlier AI strategy, which focused more on accelerating corporate growth than on guardrails.
The new framework proposes “national standards with the least burden” and reflects the administration’s broader push to “remove outdated or unnecessary barriers to innovation” and accelerate AI adoption across industries. This is the pro-growth, light-touch regulatory approach favored by “accelerationists” such as White House AI czar and venture capitalist David Sacks.
Although the framework nods to federalism, it leaves states with relatively narrow powers, covering only general-purpose laws such as fraud, child protection, zoning, and state government use of AI. It takes a strong stance against state regulation of AI development itself, calling AI development an “essentially inter-state” issue tied to national security and foreign policy.
The framework also aims to prevent states from “punishing AI developers for the illegal conduct of third parties involved in their models,” a significant liability shield for developers.
The framework lacks any commitment to accountability frameworks, independent oversight, or enforcement mechanisms for new harms that may be caused by AI. In effect, this framework centralizes AI policymaking in Washington while reducing the scope for states to act as early regulators of emerging risks.
Critics argue that states function as democratic laboratories, able to enact laws addressing new risks faster than Congress. New York’s RAISE Act and California’s SB 53, for example, aim to ensure that large AI companies adhere to publicly documented safety protocols.
“White House AI czar David Sacks continues to bend to Big Tech at the expense of ordinary, hard-working Americans,” said Brendan Steinhauser, CEO of The Alliance for Secure AI. “This federal AI framework is intended to prevent states from enacting AI legislation and provides no pathway to hold AI developers accountable for harm caused by their AI products.”
Many in the AI industry welcome this direction, as it gives them greater freedom to “innovate” without the threat of regulation.
“This framework is exactly what startups have been looking for: a clear national standard to help companies build and scale quickly,” Teresa Carlson, president of the General Catalyst Institute, told TechCrunch. “Founders shouldn’t have to navigate a patchwork of contradictory state AI laws that hinder innovation.”
Child safety, copyright, and freedom of speech
The framework comes at a time when child safety has emerged as a central flashpoint in the debate around AI. Some states are moving aggressively to pass laws aimed at protecting minors and holding tech companies more accountable. The administration’s proposal takes a different direction, focusing more on parental control than platform accountability.
“Parents are best placed to manage their children’s digital environment and upbringing,” the framework says. “The Administration is calling on Congress to protect children’s privacy and give parents the tools to do so effectively, including account controls to manage device usage.”
The framework also states that the administration “believes” AI platforms should “implement features that reduce the potential sexual exploitation of children and encouragement of self-harm.” The proposal calls on Congress to require such safeguards and affirms that existing laws, including those banning child sexual abuse material, should apply to AI systems, but it uses qualifiers such as “commercially reasonable” and stops short of setting clear requirements.
On the topic of copyright, the framework seeks to find a compromise between protecting creators and allowing AI systems to be trained on existing works, citing the need for “fair use.” This type of language reflects the arguments of AI companies, which are facing a growing number of copyright lawsuits over training data.
One of the main guardrails in President Trump’s AI framework appears to be ensuring that “AI can pursue truth and accuracy without restriction.” Notably, it focuses on preventing government-led censorship rather than platform moderation itself.
“Congress should prevent the U.S. government from forcing technology providers, including AI providers, to prohibit, coerce, or change content based on partisan or ideological objectives,” the framework reads. It also directs Congress to provide an avenue for Americans to seek legal redress against government agencies that censor speech on AI platforms or attempt to dictate the information they provide.
The framework comes as Anthropic is suing the government for violating its First Amendment rights after the Department of Defense (DOD) identified it as a supply chain risk. Anthropic claims the designation is retaliation for the company’s refusal to allow the military to use its AI products for mass surveillance of Americans or for targeting and firing decisions by lethal autonomous weapons. President Trump has called Anthropic and its CEO Dario Amodei “woke” and “radical leftists.”
The framework’s language, which emphasizes the protection of “legitimate political expression and dissent,” appears to build on President Trump’s previous executive order targeting “woke AI,” which encouraged federal agencies to implement systems deemed ideologically neutral.
Because the line between censorship and standard content moderation is unclear, such language could make it difficult for regulators to coordinate with platforms on issues such as misinformation, election interference, and public safety risks.
“While [the framework] correctly states that the government should not force AI companies to ban or change content based on ‘partisan or ideological objectives,’ the administration’s ‘woke AI’ executive order this summer does just that,” said Samir Jain, vice president of policy at the Center for Democracy & Technology.
