When it comes to regulation, the term ‘United States’ is a misnomer. State laws are anything but unified, with each jurisdiction traditionally tailoring legislation to its constituents’ needs. Nowhere is this patchwork approach more evident than in the nascent world of AI.
Legislatures in all 50 states considered AI-related bills in 2025, 1,080 in total. Just 118 of them (11%) were signed into law this year, joining those already in force.
Adopted laws are also becoming broader. Colorado’s SB 24-205 (effective June 30 next year) is the first comprehensive AI consumer-rights framework to reach private companies developing or deploying “high-risk AI systems” in consequential areas such as employment, lending, healthcare, and housing decisions.
Different laws take different approaches to the problem, exacerbating the patchwork effect. For example, the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), another comprehensive AI consumer-protection framework signed this year, ties liability to intent, in contrast to Colorado’s outcomes-based focus. Meanwhile, California has enacted several laws of its own, including one focused on the transparency of training data.
The compliance alarm bells are already ringing for U.S. companies. More than seven in ten IT leaders now rank regulatory compliance among their top three challenges for generative AI deployment, according to Gartner, which predicts that AI regulatory violations will drive a 30% increase in legal disputes for tech companies. Businesses may understand the risk, but that doesn’t mean they’re ready for it: fewer than a quarter are confident they can actually manage AI governance, the analyst firm says.
Grappling with regulation is nothing new for US companies, which had to digest GDPR (a little like a snake swallowing a goat) in 2018. But AI regulation is shaping up to be worse.
Trump’s EO Gambit
Beleaguered executives might look to President Trump’s latest missive for solace. Last week, the White House issued an Executive Order seeking to rein in state regulation of AI. In doing so, it attempts to deliver on many of the promises laid out in its AI Action Plan, released in July.
The latest EO appoints Attorney General Pam Bondi as head of an ‘AI Litigation Task Force’ that will sniff out state laws she considers unlawful. The Commerce Department will follow up by tying federal grant funding to the “regulatory landscape” in each state.
The EO aims to replace state-level regulations the AG doesn’t like with a single legal framework, to be developed by a Special Advisor for AI and Crypto (a position currently held by former PayPal COO David Sacks). The Federal Communications Commission will also investigate creating a single federal standard for AI model reporting and disclosure.
This might bring cheer to some of those companies worried about the complexity of state-level law, but in practice, legal experts don’t believe it has legs.
“Federal agencies like the DOJ and FTC cannot encroach on lawful state regulations without a clear delegation from Congress,” wrote Olivier Sylvain, professor of law at Fordham University and a senior policy fellow at Columbia University’s Knight First Amendment Institute.
Congress isn’t playing ball. In July, the Senate voted overwhelmingly to strip a 10-year moratorium on the enforcement of state AI laws from pending legislation. More recently, lawmakers declined to add such a moratorium to the National Defense Authorization Act.
Executive orders bind the federal government, not the private sector, which is why the latest document tries to use executive-branch litigation to bludgeon state laws. However, “without any statute that comes close to addressing state regulation of AI, let alone one that preempts it, a DOJ or FTC attack on states would likely be dead on arrival,” Sylvain added.
The Risks of Non-Compliant AI Deployment
With this in mind, companies would do well to plan for long-term compliance with state-level AI regulation. Three questions should now be mandatory in board discussions of AI: Who is accountable for AI decisions? How are the risks assessed? And what happens when models fail? Yet this isn’t happening everywhere: just 49% of boards have assessed AI risk, according to the National Association of Corporate Directors.
The dangers of not assessing risk are multiple. The biggest by far, according to McKinsey, is inaccuracy, which 30% of companies have experienced at least once in their AI projects. In 2024, a tribunal ordered Air Canada to pay damages after its chatbot invented a bereavement discount, and the modest payout paled next to the reputational damage the episode caused.
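Guardrails at the application layer can blunt this kind of inaccuracy. Below is a minimal sketch (not Air Canada’s actual system, and with illustrative names throughout) in which a support bot’s claims about discount policies are replaced with vetted, canonical policy text before they ever reach the customer.

```python
# A minimal guardrail sketch: the model's own claim about a policy is
# never surfaced directly; only approved, canonical wording goes out.
# APPROVED_POLICIES and vet_answer are illustrative, hypothetical names.
APPROVED_POLICIES = {
    "bereavement": "Bereavement fares must be requested before travel.",
    "student": "Student discounts require a valid enrollment ID.",
}

def vet_answer(topic: str, model_answer: str) -> str:
    """Replace the model's claim with canonical policy text, or escalate."""
    policy = APPROVED_POLICIES.get(topic)
    if policy is None:
        return "I can't confirm that policy; let me connect you to an agent."
    return policy

# The chatbot's invented claim is discarded in favor of approved wording.
print(vet_answer("bereavement", "You can claim the fare retroactively!"))
```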
Second on McKinsey’s list was explainability, which has tripped up 14% of companies in real-world AI deployments. Failing to explain why your model denied someone a loan could land you in regulatory hot water. Other risks include privacy violations and cybersecurity vulnerabilities, any of which could invite regulatory blowback.
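For simple models, at least, producing an explanation is straightforward. Here is a minimal sketch assuming a hypothetical logistic-regression credit model trained on synthetic data; the feature names and the reason_codes helper are illustrative, not drawn from any real lending system.

```python
# A sketch of per-applicant reason codes for a linear credit model.
# All data is synthetic; feature names are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "late_payments", "credit_age_years"]
X = rng.normal(size=(500, 4))
# Synthetic labels: high debt ratio and late payments drive denial (y = 1).
y = (X[:, 1] + X[:, 2] - X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)
model = LogisticRegression().fit(X, y)

def reason_codes(applicant: np.ndarray, top_n: int = 2) -> list[str]:
    """Rank the features that pushed this applicant toward denial."""
    contributions = model.coef_[0] * applicant  # per-feature pull toward y = 1
    order = np.argsort(contributions)[::-1]
    return [features[i] for i in order[:top_n] if contributions[i] > 0]

applicant = np.array([-1.0, 2.0, 1.5, 0.3])  # low income, high debt, late payer
print(reason_codes(applicant))  # e.g. ['debt_ratio', 'late_payments']
```

For linear models these per-feature contributions are exact; more complex models need dedicated attribution tooling, but the regulatory expectation, a concrete reason for the decision, is the same.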
Companies needn’t look far for examples of AI deployments gone wrong. One AI-powered software development service deleted a customer’s production database, for example. Perhaps most troubling of all, though, was the scandal over the Dutch tax authority’s misuse of AI to make life-affecting benefits decisions.
ISO/IEC 42001 Offers a Baseline for AI Compliance
In volatile times like these, when the regulatory AI landscape is far from uniform, well-established standards such as ISO/IEC 42001 help create consistently good practices. Published in December 2023, it is an international standard for AI management systems. Organizations considering AI deployments can use it to structure risk and impact assessments, and it works as a governance tool across multiple jurisdictions.
Companies can develop and maintain robust best practices in AI by building compliance measures into their organisational structure. For example, assign a member of your compliance team to oversee AI-specific controls driven by ISO 42001, and build regular risk assessments into your AI development, deployment, and usage processes, along the lines of the sketch below.
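As one way to make that concrete, here is a minimal sketch of an AI risk-assessment record with a built-in review cadence. The field names are illustrative assumptions loosely inspired by ISO/IEC 42001’s impact-assessment requirements, not terminology taken from the standard itself.

```python
# A minimal sketch of an AI risk-assessment record; all field names are
# illustrative assumptions, not terms defined by ISO/IEC 42001 itself.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIRiskAssessment:
    system_name: str             # e.g. "support-chatbot-v2"
    owner: str                   # accountable compliance contact
    jurisdictions: list[str]     # states whose AI laws apply
    risk_category: str           # e.g. "high-risk" in Colorado's terms
    harms_considered: list[str]  # inaccuracy, explainability, privacy, ...
    mitigations: list[str]
    last_review: date = field(default_factory=date.today)

    def is_due_for_review(self, interval_days: int = 90) -> bool:
        """Flag the record for re-assessment on a fixed cadence."""
        return (date.today() - self.last_review).days >= interval_days

record = AIRiskAssessment(
    system_name="support-chatbot-v2",
    owner="compliance@example.com",
    jurisdictions=["CO", "TX", "CA"],
    risk_category="high-risk",
    harms_considered=["inaccuracy", "privacy"],
    mitigations=["approved-answer guardrail", "quarterly output audits"],
)
print(record.is_due_for_review())  # False until 90 days after last_review
```

Wiring a check like is_due_for_review into a ticketing system or CI job turns the standard’s review cadence from a policy document into something enforceable.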
Awash in a turbulent ocean of state laws, and with shifting federal enforcement priorities testing the legal limits of regulation, ISO 42001 is a lifebuoy to which corporate compliance departments can cling.