The Administration That Killed AI Safety Reviews Is Now Considering Bringing Them Back
One model changed everything
The White House is discussing a formal government review process for AI models before they are released to the public. This is the same administration that, on day one, rolled back the Biden-era process requiring exactly that. What changed? A single model powerful enough to cause a cybersecurity reckoning. The governance lesson is hiding in plain sight.
The New York Times reported yesterday that the Trump administration - which built its AI policy on the premise that oversight kills innovation - is now considering an executive order to create a formal government review process for new AI models before public release. The administration held meetings last week with executives from Anthropic, Google, and OpenAI to discuss the plans. Among the options on the table: a review structure similar to the UK model, in which government bodies assess whether AI models meet certain safety standards before they reach the public.
So, the administration that rolled back a Biden-era regulatory process requiring AI developers to conduct safety evaluations and report on models with potential military applications is now discussing bringing back a version of that exact process. JD Vance, at the Paris AI gathering last year, told the world that “the AI future is not going to be won by hand-wringing about safety” … and the White House is now hand-wringing about safety.
The reversal is real and significant. Understanding what caused it is more interesting than the reversal itself.
What Actually Changed - The Mythos Problem
The policy shift traces directly to a single model. Last month, Anthropic announced a new AI system called Mythos, so capable at identifying security vulnerabilities in software that Anthropic itself described it as a potential cybersecurity “reckoning” and declined to release it publicly.
The implications were immediate and concrete. The National Security Agency has already used Mythos to assess vulnerabilities in US government software. The White House, according to people briefed on the discussions, is concerned about the political consequences if a devastating AI-enabled cyberattack were carried out using a model that had been released publicly without government review. The administration is also evaluating whether models like Mythos could offer cyber capabilities useful to the Pentagon and intelligence agencies.
The review process under consideration would give the government first access to AI models but would not block their release, which is meaningful. The proposed mechanism is not a gating veto; it is an advance intelligence function. The government wants to know what is coming before it arrives publicly, both to protect against it and potentially to use it.
This is not, at its core, a safety governance story. It is a national security and competitive intelligence story wearing the clothes of safety governance. The administration is not reconsidering its view that safety oversight burdens innovation. It is recognizing that some AI capabilities are sufficiently powerful that the government needs to understand them before the public (and bad actors) encounter them.
That distinction matters for how CMOs should read this development.
The Governance Reversal and What It Reveals
The structural irony is considerable. The administration sidelined the Center for AI Standards and Innovation (the Biden-era agency established specifically to vet AI models voluntarily shared with the government) throughout 2025. The new working group under discussion would apparently consider whether CAISI should now play a role in model assessment. The organization the administration benched is being reconsidered as a starter.
David Sacks, the White House AI czar who spearheaded the deregulation agenda, left the role in March. Susie Wiles and Scott Bessent have stepped in and have signaled a more interventionist approach to AI policy. The leadership change is not incidental to the policy shift.
The Pentagon-Anthropic dispute adds another layer. The $200 million contract fight and the Pentagon's subsequent cutoff of Anthropic technology created a practical problem: government agencies that had come to rely on Anthropic's models lost access. Anthropic's AI continues to run in the Maven system, which analyzes intelligence and suggests airstrike targets in Iran. The White House meeting between Wiles, Bessent, and Dario Amodei was described by both sides as "productive," which, in Washington, typically means a negotiation is underway rather than concluded.
The policy shift and the contract dispute are related. An administration that wants access to the most powerful AI models, and whose military is already using one of them for active combat operations, has an obvious interest in a framework that brings AI developers into a formal relationship built around government review rather than an adversarial one with a $200 million lawsuit pending.
What the administration is building toward may be less a safety governance system and more a structured intelligence-sharing arrangement between the government and frontier AI developers, with the pre-deployment review as the mechanism that formalizes the relationship.
The Principle This Validates
Strip away the political reversals, the leadership changes, and the Pentagon dispute, and the underlying governance principle the White House is now acting on is the one this Substack has been arguing for since its first post: you cannot govern AI deployment without pre-deployment assessment.
The administration’s previous position that safety oversight burdens innovation and that the AI race will be won by building, not by hand-wringing, ran directly into a model that demonstrated why pre-deployment assessment is not hand-wringing. Mythos became a governance concern before it was released, precisely because Anthropic conducted its own assessment and concluded it should not be released. That internal assessment is what triggered the government’s interest.
This is the governance sequence that works: assess before release, constrain or disclose based on what the assessment reveals, and ensure that the people with the most to lose from dangerous deployment understand what they are deploying before it is public. Anthropic followed this sequence with Mythos. The government is now trying to insert itself into the sequence for future models.
The irony is that the approach the administration is now considering for frontier AI developers is structurally identical to what the EU AI Act, the New York RAISE Act, and the Colorado AI Act have been building for deploying organizations: pre-deployment assessment, safety protocols documented before release, and government visibility into capability before public access. These are the requirements that the White House framework characterized in March as "onerous" state laws imposing "undue burdens," and that are now under consideration as federal policy in May.
What CMOs Should Take From This
Two practical implications for marketing leaders, set apart from the political drama.
Your AI vendors’ models may be subject to pre-release government vetting before you see them.
If the working group establishes a formal review mechanism, the models you plan to incorporate into your marketing infrastructure (the next generation of ChatGPT, Claude, and Gemini) may be reviewed by the NSA, the ONCD, and the DNI before they reach the enterprise market. That review will not be public. Its findings will not be shared with you. But the models that emerge from it will have been evaluated against national security criteria before you build on them.
This does not change your vendor diligence obligations. It adds a layer of complexity to them. The question “What did your vendor’s safety evaluation find?” now has a potential government review counterpart that your vendor may or may not be able to discuss.
The governance principle the administration is rediscovering is the one your organization should already be practicing.
Pre-deployment assessment is not a regulatory burden. It is the mechanism that prevents “deploy and discover” - the failure mode where organizations find out what their AI does after it has done it publicly. Anthropic’s decision not to release Mythos is the most prominent recent example of pre-deployment assessment working correctly. The government’s response is to formalize that process.
Your organization’s AI governance should not require a cybersecurity reckoning to trigger a review. Every significant AI deployment your marketing team operates should have undergone a pre-deployment assessment - covering authorized scope, human oversight mechanisms, vendor diligence, and an incident response protocol - before it reaches customers. If it hasn’t, the White House’s change of direction on AI oversight is a reasonable prompt to close that gap before it becomes expensive.
The administration built its AI policy on the premise that governance slows innovation. Mythos demonstrated that some AI capabilities are powerful enough that even the people who built them concluded the risk of release outweighed the benefit, and the government is drawing the obvious lesson. Every CMO deploying AI in their organization’s name should be drawing the same one.
This post was written on May 4, 2026, and details are still developing. Verify before drawing final conclusions.
Ethicore Advisors Author’s Note
Craig McDonogh is the founder of Ethicore Advisors and the author of the forthcoming book “Guardrails: How to Embrace AI Without Damaging Your Brand.” He advises CMOs and senior marketing leaders on AI governance, reputational risk, and responsible deployment.
If your organization is reviewing its AI vendor governance and pre-deployment assessment practices, Ethicore Advisors is where that conversation starts.


