Federal POWER GRAB over states: Congress moves to block AI regulation nationwide for a decade
- President Donald Trump's "One Big Beautiful Bill Act" includes a contentious provision, Section 43201, that imposes a 10-year ban on state-level AI regulation. This move prioritizes federal control over AI, preempting state laws on AI training, data collection and decision documentation, and has sparked bipartisan opposition.
- The bill, coupled with developments like Regeneron's acquisition of 23andMe and the rollout of the $500 billion Stargate AI surveillance project, raises concerns about the concentration of power in the hands of tech giants and the government. Critics warn of potential risks to personal privacy and digital innovation.
- States, which have been proactive in passing AI regulations, face the prospect of their laws being invalidated. Legal challenges are expected, with groups like the Electronic Privacy Information Center arguing that the bill shields tech companies from accountability for harmful AI systems.
- Historically, states have led the way in tech regulation, with examples like California's Consumer Privacy Act and Colorado and Connecticut's biometric data safeguards. The federal preemption threatens to undo these advancements, leaving regulation largely in the hands of a federal government that has been slow to act on privacy issues.
- The bill's passage could result in a decade of unchecked AI power, with significant implications for healthcare, security and personal freedoms. There is a strong call for lawmakers to work with states to ensure accountability and protect citizens from AI-driven harms.
A provision buried in President Donald Trump’s $5 trillion “One Big Beautiful Bill Act” has ignited a firestorm, effectively
stripping U.S. states of authority to regulate artificial intelligence (AI) for the next decade. Section 43201 of the bill, which cleared a House committee vote this weekend, prioritizes federal control over state-level safeguards, even as bipartisan opposition mounts. The move—coupled with Regeneron’s $256 million acquisition of 23andMe and the rollout of Trump’s $500 billion Stargate AI surveillance project—has raised alarms about unchecked corporate and government power over digital innovation and personal privacy.
The federal power play: States lose AI regulatory authority
The bill’s most contentious clause, Section 43201,
institutes a 10-year ban on state-level AI regulation, preempting laws governing how AI models are trained, data is collected and decisions are documented. The provision bars states from imposing “any substantive design, performance, data-handling, [or] civil liability requirement” on AI systems unless federal rules already address them.
Republicans in the House, while typically champions of states’ rights, narrowly advanced the measure in a 20-19 vote Sunday. Critics argue this overreach bypasses local accountability just as AI integrates with critical infrastructure like healthcare, law enforcement and education. “States need to be able to address AI-driven harms, like non-consensual deepfakes or algorithmic discrimination,” warned Indiana State Rep. Matt Pierce, a Democratic member of the state’s AI task force.
The bill’s proponents frame it as a “modernization” push. Opponents, however—including 40 state attorneys general (AGs) and over 140 advocacy groups—see it as a gift to tech giants. The AGs’ coalition asserts the moratorium “deprives consumers of reasonable protections” and hobbles their ability to safeguard citizens.
Stargate and biotech: The rise of AI-fueled surveillance
In the same week that states are warning of losing regulatory power, the White House is advancing Stargate, a $500 billion AI infrastructure initiative partnering federal agencies with companies like OpenAI and Oracle. Launched in January 2025, Stargate aims to harmonize AI tools across defense, healthcare and biomedical research. Its implications are stark: Oracle’s Larry Ellison recently announced plans to use AI to
mine electronic health records for mRNA drug development targeting individual genomes.
In tandem, Regeneron’s purchase of DNA-testing pioneer 23andMe—a firm whose 2023 data breach exposed genetic data to unnamed third parties—now hands pharmaceutical firms direct access to genomic datasets. This comes as the FDA erodes privacy further by allowing Institutional Review Boards to waive informed consent for “minimal risk” studies, potentially letting AI systems like Stargate process biometric data without user consent.
The
Department of Homeland Security has also flagged AI-driven DNA hacking as a national security risk, warning that adversarial actors could customize bioweapons to target genetic vulnerabilities.
State pushback and legal threats
States have responded swiftly to AI’s societal risks, passing 31 laws in 2024 alone to regulate AI use. But the One Big Beautiful Bill threatens to void these efforts. Indiana’s 2023 law requiring disclaimers on AI-generated campaign ads and limiting police use of AI in suspect lineups could face invalidation, according to Indianapolis lawmakers.
“We just made progress with the Take It Down Act,” said AI reform advocate Angela Tipton, referencing a new federal law addressing AI revenge porn. “But
this moratorium would roll us backward.”
Legal challenges promise to follow. The Electronic Privacy Information Center (EPIC) argues the bill immunizes tech companies from the consequences of harmful AI: “If a company designs an algorithm that causes foreseeable harm, this law would make them unaccountable,” states an EPIC brief.
States as pioneers in tech safeguards
States historically have embraced regulatory innovation when federal action lagged. California’s 2018 Consumer Privacy Act, for instance, inspired privacy standards nationwide and prompted Congress to take up federal privacy legislation in 2022. Similarly, Colorado and Connecticut led the way on biometric data safeguards, requiring explicit user consent for facial recognition.
The One Big Beautiful Bill’s broad preemption would nullify such local advancements, leaving most decisions to Washington—a body that, as Indiana Sen. Liz Brown notes, has “failed for years to pass federal privacy protections.”
A decade of dystopia?
As the bill edges toward a House vote by Memorial Day, its repercussions loom large. A state-level AI freeze for ten years risks cementing corporate-federal control over technologies shaping healthcare, security and freedom.
“This amendment could cripple states’ ability to protect Hoosiers from AI-driven harms,” Indiana AG Todd Rokita wrote in a coalition letter. “Congress should work with states, not silence them.”
With Trump demanding urgency, the next week will determine whether an era of unchecked AI power and lost privacy moves forward, or whether lawmakers collectively choose accountability.
Sources for this article include:
Substack.com
TheRegister.com
Fox59.com