Microsoft, Google, xAI Agree to US Government Security Testing of New AI Models
Al Jazeera English
Tech giants Microsoft, Google, and xAI have agreed to allow the US federal government to access new artificial intelligence models for national security testing. The agreements follow a Pentagon deal with seven technology companies to use AI in classified systems. The moves aim to identify threats from cyber attacks to military misuse before deployment.
Tech giants Microsoft, Google, and xAI have said they will allow the US federal government to access new artificial intelligence (AI) models for national security testing purposes.
The Center for AI Standards and Innovation (CAISI), under the Department of Commerce, announced the agreements earlier this month, amid growing concerns about the potential of Anthropic's new Mythos model to empower hackers.
Under the deal, the US government will be allowed to evaluate AI models before deployment and conduct research to determine their capabilities and security risks. The agreement fulfills a pledge made by President Donald Trump's administration in July last year to partner with technology companies to test their AI models for “national security risks.”
Microsoft said in a statement that it would work with US government scientists to test AI systems “in ways that detect unexpected behaviors.” Together, they will develop shared data sets and workflows to test the company’s models. Microsoft has also signed a similar agreement with the UK’s AI Security Institute.
Concerns are growing in Washington about the national security risks posed by powerful AI systems. By securing early access to advanced models, US officials aim to identify threats—from cyber attacks to military misuse—before these tools are widely deployed.
The development of advanced AI systems, including Anthropic's Mythos, has recently alarmed US officials and businesses over the potential of such tools to empower hackers.
CAISI Director Chris Fall stated: “Rigorous, independent measurement science is essential to understanding advanced AI and its national security implications.”
This move builds on 2024 agreements with OpenAI and Anthropic under President Joe Biden’s administration, when CAISI was known as the US AI Safety Institute. Under Biden, the institute focused on developing AI tests, definitions, and voluntary safety standards.
CAISI, the government’s lead agency for AI model testing, said it has completed over 40 evaluations, including advanced models not yet publicly released. Developers often submit versions of their models with safety barriers removed so the center can probe for national security risks.
On Wall Street, Microsoft shares fell 0.6% in midday trading immediately after the announcement, while shares of Google parent Alphabet rose 1.3%. xAI is not publicly traded.
The announcement follows a deal between the Pentagon and seven major technology companies—Google, Microsoft, Amazon Web Services, Nvidia, OpenAI, Reflection, and SpaceX—to use AI systems on classified computer networks. The Defense Department said the agreement would provide resources to help “enhance warrior decision-making in complex operational environments.” Notably absent was AI company Anthropic, following a public dispute and litigation with the Trump administration over the ethics and safety of using AI in warfare.
