The White House unveiled a sweeping AI action plan in July that reshapes America’s approach to governing AI. It’s a huge pivot from the previous administration’s stance, taking the regulatory brakes off and promoting a full-steam-ahead approach to AI development and implementation.
This creates both opportunities and risks for organisations. Faster deployment comes at the cost of reduced federal oversight, leaving businesses to fill critical governance gaps themselves. Let’s look at what’s in store.
Three pillars of the plan
Trump’s AI Action Plan describes a coordinated federal push to win an AI arms race against global competitors, particularly China. Its recommendations span every major federal agency, from the Commerce Department’s revised standards to the Pentagon’s AI warfare initiatives. Here’s how each pillar reshapes the landscape.
Pillar I: Accelerate AI Innovation
This pillar targets regulatory barriers to private-sector AI development. The Office of Science and Technology Policy and the Office of Management and Budget (OMB) will identify and repeal restrictive rules. States face particular scrutiny as OMB evaluates their AI regulatory climates when making funding decisions, potentially steering federal dollars away from states with too many guardrails.
On the upside, the plan also promotes open-source and open-weight models, recognising their value for startups and researchers who can’t afford closed commercial systems.
Pillar II: Build American AI Infrastructure
Described by the President as the “build, baby, build” part of the plan, this pillar focuses heavily on the physical requirements underpinning AI: data centres, semiconductor manufacturing, and energy. Data centre builders get permitting exemptions that curtail or eliminate environmental reviews and strip away regulations under the Clean Air Act and Clean Water Act.
The plan opens up federal lands to data centre development and the power generation facilities that serve it, while also promising a modernised electrical grid to satisfy AI’s hunger for electrons (there’s a strong nod to nuclear and geothermal energy). All these data centres will need a lot of electricians, HVAC experts, and other technical infrastructure workers, so the plan carves out training programmes to bolster that workforce.
Pillar III: International AI Diplomacy
How does an isolationist government do diplomacy? In a “with us or against us” kind of way. The plan frames AI development as a zero-sum game between members of an “AI Alliance” and everyone else. Those allies get access to full-stack AI export packages, bundling hardware, models, software, and standards. Non-members, including China, face tougher export controls targeting semiconductor manufacturing subsystems. Location verification features on advanced chips are intended to prevent diversion to adversaries.
Deregulation creates a vacuum
The hasty deregulation shifts more compliance responsibility onto businesses themselves. The FTC must review investigations that might “unduly burden AI innovation,” while NIST strips references to misinformation, DEI, and climate change from its AI Risk Management Framework. Yet rather than eliminating compliance obligations, this federal pullback just fragments them. States retain their own AI regulations, creating a patchwork of requirements that companies must now navigate per jurisdiction. Meanwhile, a promised plan to weed out “ideological bias” introduces vague procurement standards without clear definitions.
Companies must now find their own governance frameworks to manage AI risks. This means establishing clear policies covering data governance, model transparency, and bias mitigation, especially for customer-facing systems where accountability matters most.
What next?
Organisations now need regular AI risk assessments, documented audit trails, and robust vendor management for third-party AI tools. Critical areas include algorithmic explainability, privacy safeguards beyond minimal federal requirements, and AI-specific incident response procedures.
All this will be especially tricky for companies operating across both state lines and national borders.
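Of those areas, the audit trail is the most mechanical place to start. Below is a minimal sketch of an append-only decision log in Python; the `log_ai_decision` helper, the file location, and the field names are illustrative assumptions, not a prescribed format.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical log location; in practice this would be append-only
# storage with access controls and retention rules.
AUDIT_LOG = Path("ai_audit_trail.jsonl")

def log_ai_decision(model_id: str, model_version: str,
                    input_payload: dict, output_payload: dict,
                    jurisdiction: str) -> None:
    """Append one auditable record per AI-assisted decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        # Hash inputs rather than storing them raw, so the trail
        # doesn't become a second store of personal data.
        "input_sha256": hashlib.sha256(
            json.dumps(input_payload, sort_keys=True).encode()
        ).hexdigest(),
        "output": output_payload,
        "jurisdiction": jurisdiction,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a credit decision made with a third-party model.
log_ai_decision(
    model_id="vendor-credit-scorer",  # hypothetical vendor model
    model_version="2.3.1",
    input_payload={"applicant_id": "A-1042"},
    output_payload={"decision": "refer", "score": 0.62},
    jurisdiction="US-CO",
)
```

Capturing the model version and jurisdiction with every decision is what makes the trail useful later, when a regulator or customer asks which system produced a given outcome.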
The safe bet is to map requirements across all operating jurisdictions and treat the strictest set as your compliance baseline, so that one programme satisfies every regulator. Companies should audit their AI deployments and update risk registers to reflect this new landscape.
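To make that mapping exercise concrete, here’s a toy sketch. The jurisdiction codes and requirement labels are invented for illustration, not drawn from any statute.

```python
# Per-jurisdiction AI requirements (illustrative labels, not statute text).
requirements = {
    "US-CO": {"impact_assessment", "consumer_notice", "appeal_process"},
    "US-CA": {"training_data_disclosure", "consumer_notice"},
    "EU":    {"impact_assessment", "human_oversight", "consumer_notice",
              "technical_documentation"},
}

# The strictest baseline is the union: satisfy everything, everywhere.
baseline = set().union(*requirements.values())
print("baseline:", sorted(baseline))

# Gap check: what your current programme covers versus the baseline.
implemented = {"consumer_notice", "impact_assessment"}
print("gaps:", sorted(baseline - implemented))
```

The union of all requirement sets is the strictest common baseline; the gaps against it become your compliance to-do list.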
Start by creating an inventory of all current and planned AI implementations, assessing each against your risk tolerance and evolving regulations. Priority areas include customer-facing systems requiring transparency, AI-driven decisions affecting employment or credit, and third-party model dependencies.
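A spreadsheet works for this, but even a lightweight script keeps the inventory queryable. The sketch below assumes a made-up `AISystem` record and a deliberately crude `risk_tier` triage; a real register would follow your organisation’s own risk methodology.

```python
from dataclasses import dataclass, field

@dataclass
class AISystem:
    """One row in an AI inventory / risk register (illustrative fields)."""
    name: str
    owner: str
    purpose: str
    customer_facing: bool
    affects_employment_or_credit: bool
    third_party_models: list[str] = field(default_factory=list)
    jurisdictions: list[str] = field(default_factory=list)

    def risk_tier(self) -> str:
        """Crude triage for illustration, not a formal risk methodology."""
        if self.affects_employment_or_credit:
            return "high"
        if self.customer_facing or self.third_party_models:
            return "medium"
        return "low"

inventory = [
    AISystem("support-chatbot", "CX team", "customer support triage",
             customer_facing=True, affects_employment_or_credit=False,
             third_party_models=["hosted-llm-api"],
             jurisdictions=["US-CA", "EU"]),
    AISystem("resume-screener", "HR", "candidate shortlisting",
             customer_facing=False, affects_employment_or_credit=True,
             jurisdictions=["US-NY", "US-IL"]),
]

# Surface the priority areas named above: high-risk systems first.
for system in sorted(inventory, key=lambda s: s.risk_tier() != "high"):
    print(f"{system.name}: tier={system.risk_tier()}, "
          f"jurisdictions={system.jurisdictions}")
```

Note how the fields mirror the priority areas: customer-facing flags, employment and credit impact, and third-party model dependencies all live on the record itself, so filtering the register answers the obvious audit questions directly.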
As the world’s first AI management system standard, ISO 42001 provides 38 controls across nine objectives, following the familiar Plan-Do-Check-Act methodology. Companies achieving certification demonstrate that their AI systems identify and mitigate risks, and that their governance is resilient, scalable, and consistently overseen. The standard integrates with existing ISO 27001 systems and adapts to different industries through sector-specific guidance.
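If you track your implementation programmatically, even a simple register of controls by objective and Plan-Do-Check-Act phase gives you a readiness view. The control IDs and objective names below are placeholders, not quotations from the standard’s Annex A.

```python
from enum import Enum

class PDCA(Enum):
    PLAN = "plan"
    DO = "do"
    CHECK = "check"
    ACT = "act"

# Illustrative register: control IDs and objective names are placeholders.
controls = {
    "A.x.1": {"objective": "AI policy", "phase": PDCA.DO},
    "A.y.2": {"objective": "AI impact assessment", "phase": PDCA.PLAN},
    "A.z.3": {"objective": "Third-party suppliers", "phase": PDCA.CHECK},
}

def phase_summary(register: dict) -> dict:
    """Count controls at each PDCA phase, for a quick readiness dashboard."""
    counts = {phase: 0 for phase in PDCA}
    for entry in register.values():
        counts[entry["phase"]] += 1
    return counts

print({phase.value: n for phase, n in phase_summary(controls).items()})
```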
A structured controls framework is valuable in times of uncertainty, when the regulatory winds are shifting and guidance on a rapidly evolving technology varies. ISO 42001 can be your north star, keeping you oriented towards best practice in the control and management of AI systems, however stormy the compliance seas get.