Pentagon Signs AI Deals with Seven Companies for Secret Systems
Joseph Stepansky
The Pentagon has signed AI deals with seven companies, including SpaceX, OpenAI, and Google, for high-security military systems. Anthropic is notably absent amid a legal dispute over unrestricted access to its Claude AI. The agreements aim to make the U.S. military an AI-first force, while concerns over AI targeting in Iran and immigrant surveillance grow.
The Pentagon's announcement on Friday (September 27) marks the latest step in a roughly decade-long effort to expand AI use in the military. The deals were signed with seven leading companies: SpaceX, OpenAI, Google, NVIDIA, Reflection, Microsoft, and Amazon Web Services.
The Pentagon said these agreements “drive the transition toward building the U.S. military into an AI-first fighting force,” while enhancing its ability to dominate decision-making across all battlefields. The companies' technologies will be used in the highest-security information systems to “streamline data synthesis, improve situational awareness, and support warfighter decision-making in complex operational environments.”
Notably absent from the list was Anthropic, the AI company now locked in a serious dispute with the Pentagon. Anthropic refused to grant the Pentagon unrestricted access to its AI model Claude for "any lawful purpose," fearing the software could be misused for mass surveillance and autonomous weapons. In response, the Pentagon deemed Anthropic a "supply chain risk." The two sides have now been embroiled in litigation for nearly a year.
However, the U.S. administration still wants access to Anthropic's new Mythos AI model, seen as a potentially transformative tool in both cyber offense and defense. The Pentagon also confirmed earlier deals with OpenAI and Google, along with a deal with Elon Musk's xAI. All three companies agreed to the Pentagon's "any lawful purpose" clause.
According to the Pentagon, more than 1.3 million of its personnel are now using the official AI platform GenAI.mil. The department stated: “Warfighters, civilians, and contractors are putting these capabilities to use right now, reducing many tasks from months to days,” while vowing to continue building an AI architecture to avoid over-reliance on any single provider.
The U.S. government's use of AI is under increasing scrutiny, particularly in the mass deportation of immigrants. Human rights groups say tech company Palantir has been collecting real-time data on potential targets for Immigration and Customs Enforcement (ICE), including pro-Palestinian activists.
Questions are also being raised about how AI systems identify targets in the U.S.-Israel war in Iran specifically. The Pentagon announced it had struck 13,000 targets since the campaign began in late February. At least 3,375 people have been killed in Iran, including 170, most of them children, in a U.S. Tomahawk cruise missile strike on a girls' school in Minab. The Pentagon said it is still investigating the incident.
During a Senate committee hearing Thursday, Senator Kirsten Gillibrand questioned Defense Secretary Pete Hegseth about civilian casualty oversight and AI use. Hegseth replied: “No military, no nation, does more at every level than the U.S. military to ensure the protection of civilian lives, and that is our firm commitment, regardless of what systems are used.”