Sam Altman: OpenAI Has No Say in Pentagon's Military AI Decisions Amid Ethics Backlash, a Rushed Deal, and the Fallout Between Anthropic and the Department of War

In a candid all-hands meeting with OpenAI employees on Tuesday, CEO Sam Altman addressed growing internal and external concerns over the company's recent partnership with the Pentagon. He bluntly stated that OpenAI cannot dictate or influence how the U.S. military deploys its AI technologies in operational contexts.

"You do not get to make operational decisions," Altman reportedly told staff, according to accounts from Bloomberg and CNBC. He elaborated: "So maybe you think the Iran strike was good and the Venezuela invasion was bad. You don’t get to weigh in on that."

The remarks come at a tense moment for the AI sector, following OpenAI's swift agreement with the Department of Defense (referred to in some announcements as the Department of War) to deploy its models on classified military networks. The deal was announced late last week—hours after the Trump administration blacklisted rival Anthropic, labeling it a "supply-chain risk to national security" for refusing to drop ethical restrictions on uses like mass domestic surveillance or fully autonomous weapons.

Anthropic makes the Claude model previously integrated into Pentagon systems, which reportedly aided the January operation to capture Venezuelan leader Nicolás Maduro and supported targeting in the ongoing conflict with Iran. The company rejected updated contract terms demanded by Defense Secretary Pete Hegseth, and President Trump responded by directing all federal agencies to immediately cease using Anthropic's technology, escalating the standoff.

OpenAI stepped in quickly, with Altman initially describing the agreement as consistent with shared "red lines," such as prohibiting domestic mass surveillance and ensuring human responsibility for lethal force. The timing nonetheless drew sharp criticism, and Altman later admitted the rollout appeared "opportunistic and sloppy." In response to the backlash, OpenAI amended the contract earlier this week to explicitly bar intentional use for domestic surveillance of U.S. persons and nationals, and to prevent deployment by intelligence agencies such as the NSA without further modifications.

The controversy has fueled heated exchanges between industry leaders. Anthropic CEO Dario Amodei reportedly sent a scathing internal memo calling OpenAI's approach "mendacious" and accusing Altman of engaging in "safety theater" to placate employees while allegedly colluding with the Pentagon. Amodei contrasted Anthropic's firm stance on ethical boundaries with claims that OpenAI's concessions were superficial, and suggested political donations and "dictator-style praise" to Trump played a role in the dynamics.

Altman's comments to staff underscore a key reality: once technology is provided under contract, ultimate control over its military applications rests with the government, not the developer. This has intensified debates among AI workers and ethicists about the responsibilities of tech companies supplying tools for warfare, especially as U.S. forces increasingly rely on AI for targeting and operations in active conflicts.

The episode highlights deepening rifts in the AI industry over national security partnerships, ethical guardrails, and the balance between innovation, profit, and moral accountability in an era of heightened geopolitical tensions. OpenAI maintains its tools will be used lawfully and responsibly, but employee unrest and public scrutiny show the Pentagon deal remains far from settled in the court of public opinion.