OpenAI Revises Military AI Agreement Following Criticism
OpenAI has announced modifications to its agreement with the US government concerning the deployment of its technology in classified military operations, with CEO Sam Altman describing the original announcement as "opportunistic and sloppy."
On Monday, OpenAI CEO Sam Altman stated that the company would incorporate explicit language into the contract, including a prohibition on using its systems for spying on American citizens.
The agreement came to light on Friday after tensions arose between OpenAI's competitor Anthropic and the Department of Defense, sparked by concerns over the use of Anthropic's AI model Claude for mass surveillance and fully autonomous weapons.
This development has prompted broader questions about the role of AI in warfare and the balance of power between government entities and private companies.
OpenAI issued a statement on Saturday asserting that its agreement with the Pentagon contained "more guardrails than any previous agreement for classified AI deployments, including Anthropic's."
However, on Monday, Altman posted on X that additional changes were underway to ensure the system would not be "intentionally used for domestic surveillance of U.S. persons and nationals."
Under the new amendments, intelligence agencies such as the National Security Agency would require a "follow-on modification" to the contract before using OpenAI's system.
Altman acknowledged that the company erred by rushing the announcement on Friday.
"The issues are super complex, and demand clear communication," he said.
"We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy."
Following the announcement of OpenAI's collaboration with the Pentagon, the company faced significant backlash from users.
Data from Sensor Tower revealed a surge in ChatGPT uninstalls, with the daily average uninstall rate since the announcement running 200% above typical levels.
Meanwhile, Anthropic's Claude climbed to the top of Apple's App Store rankings, maintaining its position as of Tuesday.
The Trump administration had previously blacklisted Claude after Anthropic refused to abandon a corporate "red-line" policy opposing the use of its technology to develop fully autonomous weapons.
Despite the blacklisting, reports have emerged that Claude has been used in the US-Israel conflict with Iran, with CBS News, a BBC US partner, confirming its continued use as of Tuesday.
The Pentagon has declined to comment on its interactions with Anthropic.
How AI is Used by the Military
Artificial intelligence is employed in various military applications, including optimizing logistics and rapidly processing extensive data sets.
The US, Ukraine, and NATO utilize technology from Palantir, an American company providing data analytics tools to government clients for intelligence gathering, surveillance, counterterrorism, and military operations.
The UK Ministry of Defence recently signed a £240 million contract with Palantir.
At the end of last year, the BBC interviewed personnel involved in integrating Palantir's AI-powered defense platform, Maven, into NATO operations.
The software consolidates a wide range of military data, from satellite imagery to intelligence reports, which can then be analyzed by commercial AI systems such as Claude to facilitate "faster, more efficient, and ultimately more lethal decisions where that's appropriate," said Louis Mosley, head of Palantir's UK operations.

However, large language models can produce errors or fabricate information, a phenomenon known as "hallucination."
Lieutenant Colonel Amanda Gustave, chief data officer for NATO's Task Force Maven, emphasized the presence of human oversight, stating that they "are always introducing a human in the loop" and that it "would never be the case" that AI would "make a decision for us."
Unlike Anthropic, Palantir does not endorse a complete ban on autonomous weapons but advocates for maintaining a "human in the loop."
Professor Mariarosaria Taddeo of Oxford University told the BBC that with Anthropic excluded from Pentagon contracts, "the most safety-conscious actor" is now "out from the room."
"That is a real problem," she added.
