
White House weighs vetting frontier AI models in sharp policy reversal

The Trump administration is considering government vetting of frontier artificial intelligence models before their release, a sharp reversal from its hands-off approach, after national security alarms over a model that can find and exploit software vulnerabilities.

By Kai Mendel

The Trump administration is weighing pre-release government vetting of frontier artificial intelligence models, a sharp reversal from a hands-off approach that until recently treated AI regulation as an obstacle to American leadership.

The shift, first reported by The New York Times on 4 May and confirmed by multiple outlets in the days since, was triggered by national security alarms over Anthropic’s Mythos model, which can identify and exploit cybersecurity vulnerabilities in software systems. The about-face has exposed a divide inside the administration between officials who want the United States to dominate AI without restraint and those who argue that unfettered deployment of models capable of finding zero-day flaws is a threat the government cannot ignore.

“It’s a tug of war between innovation and security,” a Bloomberg newsletter wrote on 8 May in one of the first accounts to name the bind, describing a White House caught between its deregulatory instincts and a technology that does not fit into any existing regulatory box.

The centrepiece of the proposed approach is an executive order that would create a government-industry working group to assess frontier AI systems before they are released. Kevin Hassett, director of the White House National Economic Council, described the concept in terms that would have been unthinkable from this administration a year ago. Models should “go through a process so that they’re released to the wild after they’ve been proven safe,” Hassett told Fortune, comparing the framework to Food and Drug Administration drug approvals. “Mythos is the first, but it’s incumbent on us to build a system so U.S. AI can be the leader.”

The vehicle for this shift is the Center for AI Standards and Innovation, a rebranded version of the Biden-era U.S. AI Safety Institute housed at the National Institute of Standards and Technology. The word “safety” was stripped from the name under Trump, but CAISI has moved quickly to sign pre-deployment evaluation agreements with Google DeepMind, Microsoft, and xAI. Notably absent from the list is Anthropic, the company whose technology set the policy rethink in motion.

The arc of the reversal

The administration has travelled far in a matter of months. In July 2025, President Donald Trump described his AI philosophy in characteristically blunt terms: “We have to grow that baby.” On his first day back in office, he revoked President Joe Biden’s Executive Order 14110, which mandated safety testing for powerful AI systems, replacing it with an order titled “Removing Barriers to American Leadership in Artificial Intelligence.”

The administration then moved against Anthropic directly. On 27 February, Trump ordered federal agencies to stop using Anthropic’s technology after the company refused to grant the Pentagon unrestricted access to its systems. The White House labelled Anthropic a “national security threat.”

Six weeks later, the same technology that triggered the crackdown forced a reassessment. On 8 April, Anthropic’s Mythos model demonstrated the ability to find and exploit previously unknown software vulnerabilities, making national headlines. By 21 April, Trump was telling CNBC the company was “shaping up” and that “I think we will get along with them just fine.”

A person familiar with the deliberations told Politico on 5 May that the administration was mulling a ban on companies “interfering” with government use of AI models, alongside the vetting framework. Two days later, a senior White House official walked elements of that back, telling Politico the administration was seeking “partnership” with companies rather than “government regulation.”

What the critics say

The speed and selectivity of the reversal have drawn scepticism from AI safety researchers and policy experts.

“This is a 180 for the Trump administration, that has very explicitly been anti-any sort of regulation,” said Rumman Chowdhury, chief executive of Humane Intelligence and a former U.S. Science Envoy for AI. Chowdhury warned that the evaluations being proposed “are not actually data-driven. My concern is that this is another political tool.”

Gary McGraw, chief executive of the Berryville Institute of Machine Learning, an AI security nonprofit, questioned the structure of the proposed oversight. CAISI’s partnership model, in which the companies being evaluated co-design the evaluation framework, meant “the foxes might be asked to guard the chicken house even though they already designed and constructed it in secret,” McGraw said.

Rob van der Veer, founder of the OWASP AI Exchange, said in an interview: “Test the models. Vet them. Improve them. But design the system as if the model can still fail.”

The funding debate complicates the picture further. Congress approved $55 million for NIST AI research in January, with up to $10 million earmarked for CAISI expansion. The America First Policy Institute, a think-tank aligned with the administration, has said CAISI remains underfunded “compared with peer institutes internationally” and lacks “appropriate funding” for its expanded mandate. NIST, a 123-year-old institution, is understaffed in key technology areas with facilities below modern standards.

A selective framework

The composition of CAISI’s initial industry partnerships has drawn scrutiny. Google, Microsoft, and xAI all signed on. Anthropic, whose Mythos model catalysed the policy shift and whose technology underpins the national security concern, was excluded. The company remains in a legal dispute with the administration over the February directive barring federal agencies from using its products.

The administration has also drawn personnel from the industry it seeks to regulate. Chris Fall, tapped to lead CAISI, is a former Energy Department official from the first Trump administration who later served as a vice president at MITRE, a defence contractor. Collin Burns, a former Anthropic technical staffer, was briefly brought into CAISI but dismissed after what Fortune described as “just days on the job.”

Elizabeth Kelly, the first director of the Biden-era AI Safety Institute, left government after Trump’s inauguration and joined Anthropic as head of “beneficial deployments.” The move shows how thin the line between regulator and regulated remains in a field where expertise is concentrated in a handful of companies.

What happens next

No executive order has been issued. Hassett told Fortune that building the vetting system was “pretty much what we’re working on almost full-time right now,” but the White House’s conflicting signals in the first week of May, first tightening then loosening, suggest the internal debate is not settled.

The commercial stakes are substantial. Qualcomm shares surged 8 per cent on 7 May after the company beat earnings expectations and disclosed a new chip partnership with OpenAI, the latest signal that AI infrastructure spending is accelerating even as the regulatory environment shifts. The Semiconductor Industry Association warned that export controls and domestic licensing uncertainty could push chip investment out of the United States.

Congress has split along party lines in response. Senate Majority Leader John Thune told reporters on 6 May that “some guardrails make sense” for frontier models but cautioned against “heavy-handed mandates that send innovation offshore.” House Democrats want the administration to go further. California Representative Ted Lieu and colleagues reintroduced legislation that would write pre-deployment testing requirements into law. No markup has been scheduled in either chamber.

Darrell West, a senior fellow at the Brookings Institution’s Center for Technology Innovation, told The Register that the episode exposed a deeper strain in the administration’s technology policy. TSMC reported a 17.5 per cent jump in April sales on AI chip demand, a reminder of the commercial momentum behind AI; the Mythos episode is a reminder of what can go wrong.

The administration is trying to hold two positions at once: more oversight than it ever promised, yet less than its critics demand, applied selectively to companies that cooperate with its framework.

Kai Mendel

Technology editor covering fintech, AI and the platform economy. Reports from San Francisco.
