How AI Was Used in the Invasion of Iran

As the conflict between the U.S., Israel, and Iran has escalated rapidly since February 28, 2026, artificial intelligence, and specifically the use of Claude, has become a central and controversial part of the military strategy known as Operation Epic Fury.

Here is an overview of how AI has been integrated into these operations and the resulting impact on the ground.


The Role of AI in "Operation Epic Fury"

Military analysts are calling this the first conflict in which AI has visibly compressed the "kill chain": the time it takes to find, identify, and strike a target.

Decision Compression: Systems like Palantir’s Maven Smart System (which reportedly integrated Anthropic’s Claude model) have allowed U.S. and Israeli forces to process massive amounts of satellite imagery and surveillance data in real time.

Targeting Speed: In the first 24 hours of the conflict, the U.S. military reportedly struck over 1,000 targets. Experts suggest that without AI-assisted prioritization and legal-review simulations, identifying and approving this many targets would have taken weeks.

Intelligence Synthesis: Claude was used to summarize complex battlefield reports and translate intercepted communications, giving commanders a strategic overview far faster than human analysts could provide.


The Human and Ethical Cost

While the U.S. Department of War (recently redesignated from the Department of Defense) highlights the "precision" of these tools, the conflict has produced significant civilian casualties:

The Minab School Strike: On February 28, a strike on the Shajareh Tayyebeh girls' elementary school in southern Iran killed over 165 schoolgirls. Independent investigations suggest AI may have relied on outdated data that incorrectly identified the school as part of an adjacent military base.

"Double-Tap" Accusations: Reports from first responders indicate the use of "double-tap" strikes, hitting a location and then striking again when rescuers arrive, leading to further scrutiny of automated strike planning.


The Political Fallout

The use of Claude in active warfare has led to a major rift between the tech industry and the U.S. government:

The Federal Ban: President Trump recently banned the federal use of Anthropic's tools, labeling the company a "Radical Left AI company" after it refused to remove safeguards against autonomous weapons.

The Phase-Out Grace Period: Despite the ban, the Pentagon has been granted a six-month window to phase out Claude, which remains deeply embedded in current classified systems like Maven.

Anthropic’s Stance: CEO Dario Amodei has maintained that today’s AI is not reliable enough to power fully autonomous weapons and has fought against the technology being used for mass surveillance.