Two days before the US started the Iran War, in which Anthropic’s Claude AI model played a central role, the company’s CEO, Dario Amodei, issued a dire warning about the unrestricted use of AI in military operations.
On February 26, Amodei released a lengthy, philosophical statement about the inherent dangers in using AI for military purposes. Amodei warned, “Frontier AI systems are simply not reliable enough to power fully autonomous weapons.”
His warnings went unheeded. On February 28, the US launched airstrikes on Iran.
The US military used AI tools to pinpoint targets for its airstrikes in Iran.
Thanks in part to artificial intelligence, it was able to strike a blistering 1,000 targets in the first 24 hours of the attack, according to The Washington Post.
The US military has used Claude, the AI tool from Anthropic, in combination with Palantir’s Maven system for real-time targeting and target prioritization in support of combat operations in Iran and Venezuela.
Those 1,000 targets, tragically, included an Iranian elementary school building, where over 150 children were killed.
On March 11, the New York Times reported that a preliminary Pentagon investigation into the strike found that the United States was at fault and that the incident may have resulted from the use of outdated targeting data.
Whether the AI model the US military used played a role in selecting this particular target is not yet known. However, Amodei is not alone in sounding the alarm and urging caution and oversight in the use of AI for military and war purposes.
US lawmakers are demanding greater focus on the protections that should govern the military’s use of AI and increased transparency about how much control is ceded to the technology.
“We need a full, impartial review to determine if AI has already harmed or jeopardized lives in the war with Iran,” Rep. Jill Tokuda, D-Hawaii, a member of the House Armed Services Committee, told NBC News in response to questions about the use and reliability of AI in military contexts. “Human judgment must remain at the center of life-or-death decisions.”
“AI tools aren’t 100% reliable — they can fail in subtle ways and yet operators continue to over-trust them,” said Rep. Sara Jacobs, D-Calif., a member of the House Armed Services Committee.
“We have a responsibility to enforce strict guardrails on the military’s use of AI and guarantee a human is in the loop in every decision to use lethal force, because the cost of getting it wrong could be devastating for civilians and the service members carrying out these missions,” she said.
Meanwhile, China warned the United States that the excessive use of artificial intelligence in the military could plunge the world into a Terminator-like dystopia.
“Such choices as the unrestricted application of AI by the military, using AI as a tool to violate the sovereignty of other nations, allowing AI to excessively affect war decisions, and giving algorithms the power to determine life and death, not only erode ethical restraints and accountability in wars, but also risk technological runaway,” a spokesman for China’s defence ministry, Jiang Bin, said on March 11.
“A dystopia depicted in the American film The Terminator could one day come true,” he said.
The Terminator, released in 1984 and starring Arnold Schwarzenegger, depicts an apocalyptic future in which AI-controlled robots fight humans.
Incidentally, in the movie, the AI robot (the T-800) and the resistance soldier (Kyle Reese) are both sent back in time from the year 2029.
So, what are the real dangers of AI’s unrestricted use in military and war?

Anthropic CEO’s Dire Warning About AI
In his February 26 statement, Amodei highlighted that Anthropic was the first AI company to proactively deploy its models to the Department of War and the intelligence community.
“We were the first frontier AI company to deploy our models in the US government’s classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers.
“Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more,” he wrote.
However, he underlined that Claude, or any other AI model for that matter, was simply not ready for two critical tasks: mass domestic surveillance and fully autonomous weapons.
“Today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk.”
“Without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day. They [AI models] need to be deployed with proper guardrails, which don’t exist today.”
“We cannot, in good conscience, accede to the Department of War’s (DoW) request to allow unrestricted use of our AI models,” Amodei warned.
Furthermore, in a March interview with CBS News, Amodei underlined the limitations of current AI models.
“It [AI] doesn’t show the judgment that a human soldier would show – friendly fire or shooting a civilian, or just the wrong kind of thing. We don’t want to sell something that we don’t think is reliable, and we don’t want to sell something that could get our own people killed, or that could get innocent people killed.”
However, instead of heeding these warnings, the DoW termed Anthropic a “supply chain risk”.
Notably, this was the first time a US company had ever been designated a “supply chain risk,” a label usually reserved for US adversaries.
The DoW has replaced Claude with OpenAI’s models and Elon Musk’s Grok; however, that does not mean those AI models are completely safe to operate in war settings.
Neither Amodei nor the US lawmakers are opposed to the use of AI for military and war purposes; what they are highlighting is the need for guardrails and proper oversight.
Notwithstanding China’s statement about a Terminator-like dystopian future in which AI-powered autonomous robots will fight humans, most AI use in military settings today is limited to decision-support systems.
Claude is an example of a decision-support system, not a weapon. The Israeli Lavender and Gospel systems used in the Gaza war and elsewhere are also decision-support systems.
These AI applications provide analytical and planning support, but human beings ultimately make the decisions.
However, there is a real danger that these processes will become increasingly automated and gradually reduce human oversight to a formality.
Amodei and other critics are not against the use of AI per se, but they’re demanding a proper legal framework and guardrails that require human oversight before an AI tool can make critical life-and-death decisions.

The US military is increasingly integrating AI models into its war planning, intelligence analysis, and automated target selection. Autonomous drones are already the norm in the Russia-Ukraine War.
The genie is out of the bottle; this process cannot be reversed. Going forward, these processes will only become more integral to military planning.
Therefore, it is important to have these debates now.
The issue is not whether it is ethical to use AI models in decision-support or autonomous weapon systems, but rather to what degree, and under what safeguards, they should be used.
Analytical and intelligence errors, as well as institutional and cultural biases, have always been part of wars.
Intelligence mistakes led US stealth bombers to accidentally strike the Chinese embassy in Belgrade in 1999.
The US bombed hospitals during the Afghanistan War. Hundreds of civilians lost their lives in Afghanistan and Iraq due to what could be termed cultural biases.
These biases could be internalized and institutionalized by AI models, and the process could be replicated on an industrial scale, as highlighted by the US airstrikes in Iran, where the campaign struck 1,000 targets in the first 24 hours.
Without proper safeguards, the scope for error could be multiplied manifold.
- Sumit Ahlawat has over a decade of experience in news media. He has worked with Press Trust of India, Times Now, Zee News, Economic Times, and Microsoft News. He holds a Master’s Degree in International Media and Modern History from the University of Sheffield, UK.
- Views expressed are personal to the author.
- He can be reached at ahlawat.sumit85 (at) gmail.com