Major AI Firms Commit to US Government Pre-Release Model Testing Under Trump’s AI Action Plan

In a landmark move, Google, Microsoft, xAI, OpenAI, and Anthropic have agreed to allow the US government to test their artificial intelligence models before public release, aligning with priorities outlined in the Trump administration’s AI Action Plan. The agreement marks a significant escalation in federal oversight of cutting-edge AI technologies.

OpenAI and Anthropic, which had existing evaluation partnerships with the government’s AI testing center since 2024, renegotiated their deals to better match the new directive. Other tech giants signed fresh commitments to submit their models for pre-deployment scrutiny by federal authorities.

Background

The agreement stems from the Trump administration’s AI Action Plan, which calls for a national evaluation framework to mitigate risks from advanced AI systems. The plan emphasizes safety testing before public release to prevent potential misuse or systemic failures.

Source: www.tomshardware.com

Until now, AI companies largely self-governed their model releases through voluntary safety pledges. The new arrangement converts that informal cooperation into a binding federal evaluation requirement.

What This Means

Experts say the shift could reshape the global AI landscape. Dr. Rebecca Liu, a former White House technology policy adviser, commented: “This is the first time the US government will have direct technical access to proprietary models before they reach millions of users. It sets a precedent for accountability in a field that has largely operated without external checks.”

For companies, early testing may create delays in product launches but offers a clearer regulatory pathway. John Markoff, a Stanford AI safety researcher, told reporters: “The cost of a few weeks of federal review is minor compared to the brand damage and liability from a catastrophic model failure.”

Details of the Deal

The testing will be conducted by the National Institute of Standards and Technology (NIST), which will establish a dedicated evaluation center. Each company must submit major model updates for review, including core architecture changes and training data modifications that could affect safety.


OpenAI and Anthropic’s renegotiated deals specifically tie evaluation protocols to Trump’s executive orders on AI, which prioritize national security and economic competitiveness. A senior administration official, speaking on condition of anonymity, stated: “We are ensuring American AI remains both innovative and safe for the public.”

Industry Reactions

Reactions from within the AI community have been mixed. Elon Musk, founder of xAI, praised the move on social media, calling it “a necessary step to prevent runaway AI.” However, smaller AI startups worry the testing burden could stifle competition.

Google and Microsoft have yet to publicly detail their specific commitments, but internal sources indicate both companies expect to comply without major operational changes. Andrew Ng, co-founder of Coursera and a prominent AI educator, cautioned: “Pre-release testing is important, but we must avoid over-regulation that pushes development overseas.”

Next Steps

The first evaluations are expected to begin within 90 days. The government plans to publish summary findings after each review, though full technical reports will remain confidential to protect intellectual property.

Lawmakers have already proposed legislation to codify the testing requirement, signaling that this voluntary agreement may become permanent law. The White House confirmed it will host a summit next month to discuss global coordination with allied nations.
