The National AI Policy Framework released by the Trump administration in March aims to replace a tangle of state-level rules with a single federal standard. For compliance managers already wrestling with the EU AI Act and a growing stack of international obligations, the document raises a pointed question: does one more regulatory regime simplify things, or does it add another layer to an already fragmented global patchwork?

What the Framework Demands

The framework sets out six objectives spanning child protection, community infrastructure, intellectual property, free speech, innovation, and workforce development. It calls on Congress to give parents account controls, require AI platforms to reduce sexual exploitation of minors, protect residential ratepayers from data centre electricity costs, and streamline federal permitting for on-site power generation at AI facilities.

The framework also proposes regulatory sandboxes, wider access to federal datasets in AI-ready formats, and continued reliance on existing sector-specific regulators rather than a new federal AI rulemaking body.

But the centrepiece of it all is preemption. The White House argues that “Congress should preempt state AI laws that impose undue burdens to ensure a minimally burdensome national standard consistent with these recommendations, not fifty discordant ones.” The framework says that states “should not be permitted to regulate AI development, because it is an inherently interstate phenomenon with key foreign policy and national security implications.”

Why States Are Not Standing Down

The White House’s latest missive on AI builds on a December 2025 executive order that directed White House officials to prepare legislative recommendations for a uniform federal AI policy that would preempt conflicting state laws. The order also established a litigation task force to challenge state AI laws and signalled that broadband deployment funding could be withheld from states with “onerous” AI regulations.

The administration had already tried to embed preemption into the fiscal year 2026 National Defense Authorization Act, but House Armed Services Committee Chair Mike Rogers (R-AL) and Senate Armed Services Committee Ranking Member Jack Reed (D-RI) opposed the proposal, and it didn’t make it into the final package.

Meanwhile, states have been building their own regulatory infrastructure at speed. AI legislation surged across all 50 states through 2025, with new compliance regimes taking effect in California, New York, and Colorado in 2026. A bipartisan group of 40 state attorneys general has formally opposed a proposed ten-year moratorium on state AI laws, arguing that “Congress has failed to establish necessary guardrails for AI.”

State AI laws remain enforceable unless and until Congress acts. For compliance teams, that means the patchwork is not going anywhere soon.

How the Framework Compares to the EU AI Act and Global Standards

The EU has taken the opposite approach. Rather than stripping away state-level regulation, Brussels has built a comprehensive risk-based classification system that bans certain AI applications outright and imposes stringent obligations on high-risk systems, with fines of up to 7% of annual global turnover for the most serious non-compliance. Full enforcement of the EU’s general-purpose AI obligations begins in August this year.

Organisations face a growing compliance burden as this regulatory divergence widens. For example, a study by European healthcare experts estimated annual compliance costs of €29,277 per AI unit, with certification costs of €16,800–23,000, a substantial burden for resource-constrained organisations.

The Trump framework, by contrast, explicitly avoids creating a new federal rulemaking body and relies instead on existing sector-specific regulators, prioritising innovation over burdensome form-filling.

Businesses Still Face Fragmentation Issues

The current AI governance landscape encompasses over 600 soft law programmes and more than 1,400 AI-related standards across ISO, IEEE, ETSI, and ITU. That level of fragmentation hits startups and smaller companies hardest when they target multiple markets: they lack the resources to navigate overlapping compliance regimes, leaving them at a structural disadvantage.

Even if Congress passes legislation aligned with the framework’s recommendations, companies selling AI into the EU will still need to comply with the AI Act. The practical effect is that multinational organisations will likely treat the EU’s higher standard as their baseline, because maintaining separate compliance regimes for each jurisdiction costs more than meeting the most demanding one.

Where ISO 42001 Fits In

Amid this regulatory sprawl, ISO/IEC 42001 offers compliance managers something increasingly valuable: a framework-agnostic foundation. Published in 2023 as the first global standard for AI management systems, it sets out end-to-end requirements for establishing, implementing, maintaining, and continually improving an AI management system.

Organisations that build an AI management system aligned with ISO 42001 can offset some of this regulatory overhead. The standard gives them a single governance structure, with risk assessments, documentation, and human oversight, that maps onto multiple regulatory regimes simultaneously, and it helps governments and businesses alike align their AI rules with an international framework while still allowing for customisation.

Whatever Congress does with the Trump framework, a management system grounded in international standards is the surest hedge against a regulatory landscape that shows no signs of converging.

Expand Your Knowledge

Blog: How to Navigate the US AI Compliance Maze

Blog: Why Regulatory Uncertainty is the Best Reason to Adopt ISO 42001 Now

Download: The No-Stress Guide to ISO 42001 Certification