Tuesday, February 17, 2026

Oppenheimer Moment! U.S. Deployed Claude AI To “Extract” Venezuela’s Nicolás Maduro During Caracas Raid?

There is growing unease at Anthropic as it emerges that the Pentagon used its AI model Claude during the Venezuela operation, in which 83 people were killed and the country's then-serving President, Nicolás Maduro, was “extracted” by the US military.

While there is no clarity on the extent to which the operation was planned or executed around the AI model, the report nonetheless establishes that the US Department of Defense is actively using artificial intelligence in its high-profile military operations.

The US raid on Venezuela on January 3 involved bombing across the capital, Caracas, and the killing of 83 people, according to Venezuela’s defense ministry.

On February 15, the Wall Street Journal reported that Anthropic’s AI model, Claude, was actively used in a military operation. Interestingly, Anthropic’s terms of use explicitly prohibit the use of Claude for violent ends, for the development of weapons, or for conducting surveillance.

A spokesperson for Anthropic declined to comment on whether Claude was used in the operation, but said any use of the AI tool was required to comply with its usage policies. The US Department of Defense did not comment on the claims.

“We cannot comment on whether Claude, or any other AI model, was used for any specific operation. Any use of Claude – whether in the private sector or across government – is required to comply with our Usage Policies, which govern how Claude can be deployed. We work closely with our partners to ensure compliance,” Anthropic’s spokesperson said.


According to sources quoted by the paper, Claude was used through Anthropic’s partnership with Palantir Technologies, a contractor to the US Department of Defense and federal law enforcement agencies.

Notably, the tension between the US Defense Department and Anthropic over the use of its AI models in war, weapons manufacturing, surveillance, or towards violent ends is not new.

Anthropic’s concerns about how Claude could be used by the Pentagon have prompted Trump administration officials to consider cancelling its contract with the AI company, worth up to US$ 200 million.

Claude is the first AI model known to have been used by the US Department of Defense in a classified operation. It is possible that other AI tools were also used in the Venezuela operation for unclassified tasks.

AI tools can be used for a wide variety of tasks, ranging from summarizing documents and reading PDFs to controlling and targeting autonomous drones.

The debate about the responsible use of AI for defense and war purposes is far from settled.

On the one hand, many AI companies are uncomfortable with the use of their AI models for violent wars. At the same time, they need lucrative defense department contracts to gain legitimacy and justify their sky-high valuations.

This concern is not just theoretical. As Austrian Foreign Minister Alexander Schallenberg warned, “This is the Oppenheimer moment of our generation.”

Speaking at a Vienna conference on autonomous weapons, Schallenberg warned: “AI-driven warfare could spiral into an uncontrollable arms race. Autonomous drones and algorithm-driven targeting systems threaten to make mass killing a mechanized, near-effortless process.”

Earlier in January, Defense Secretary Pete Hegseth warned that the agency would not “employ AI models that won’t allow you to fight wars.”

Hegseth’s comments highlighted tensions between some AI companies and the Department of War over how and to what extent AI models should be used for military purposes.

Reports that the US Defense Department might have used Anthropic’s AI model without the company’s explicit approval are set to further complicate the debate.

However, AI itself has a long history of use in wars and weapons development.

AI’s Long History With Wars

Like many groundbreaking scientific advances, AI has its roots in war and military innovation.

A landmark of military history was also a precursor to AI as we know it today: mathematician Alan Turing’s breaking of the German Enigma code during World War II.

In 1950, Turing proposed that computer programs could be taught to think like humans. He devised the “Turing Test” to judge whether a computer’s behavior is indistinguishable from that of a human.

The Turing Test is still widely cited as a benchmark for whether a machine can match human intelligence.

In 1956, John McCarthy, an American computer and cognitive scientist, coined the term “artificial intelligence” during a workshop at Dartmouth College.

In 1958, the US Department of Defense established the Advanced Research Projects Agency (later renamed DARPA) to facilitate research and development for military and industrial strategies.

In the 1960s, the US Department of Defense began training computers to mimic basic human reasoning.

By 1979, the first autonomous vehicle had been built by Stanford AI Lab.

Image: Stanford Cart with cable, 1961.

In the 1980s, scientists trained computer models to analyze vast amounts of data, identify patterns, and draw conclusions or “learn” from the results.

However, the first operational use of AI in a war came during the 1991 Gulf War.

In 1991, an AI program called the Dynamic Analysis and Replanning Tool (DART) was used to schedule the transportation of supplies and personnel and to solve other logistical problems, saving millions of dollars.

In 1997, IBM’s Deep Blue supercomputer defeated world chess champion Garry Kasparov in a six-game match. The win was widely viewed as a major advance for AI.

In 2009, Google began developing its first self-driving car.

By the 2010s, the Pentagon had understood that AI would define dominance in future wars.

In 2014, the US Defense Department unveiled ‘The Third Offset’ Strategy designed to maintain military superiority over peer competitors—specifically China and Russia—by investing in cutting-edge technologies like artificial intelligence, robotics, and autonomous systems.


In 2017, the Pentagon launched Project Maven, a high-priority Algorithmic Warfare Cross-Functional Team (AWCFT) designed to integrate artificial intelligence (AI) and machine learning into defense systems to accelerate data analysis.

It focused on automating target identification from drone footage and other intelligence sources.

In the Russia-Ukraine War, both sides make heavy use of AI-enhanced drones for target identification, autonomous flight, and strike coordination. Drones reportedly account for 70–80% of casualties in many areas, and AI is improving their accuracy dramatically.

In 2023, Israel used AI tools such as Lavender and Gospel during the Gaza War. While Lavender analysed hundreds of thousands of call records to identify potential Hamas operatives, Gospel dramatically accelerated a lethal production line of targets that officials have compared to a “factory”.

“During the period in which I served in the target room [between 2010 and 2015], you needed a team of around 20 intelligence officers to work for around 250 days to gather something between 200 to 250 targets,” Tal Mimran, a lecturer at Hebrew University in Jerusalem and a former legal adviser in the IDF, told TIME.

“Today, the AI will do that in a week.”

AI is already used in a host of military systems, from identifying and neutralizing threats and guiding manned and unmanned aircraft and vehicles to gathering intelligence, handling logistics, analyzing battlefield imagery, and building integrated missile defense systems.

In the field of AI, technology is evolving much faster than regulation. AI’s use in war is not a distant, dystopian prospect; it is already here, developing in real time.

While AI has been used in military tools for decades, in the future it could be central to target identification and military planning. Swarms of coordinated, autonomous drones could be roaming the skies and oceans on a permanent basis, conducting intelligence, surveillance, and reconnaissance (ISR) missions and autonomously striking targets.

The world might be staring at an Oppenheimer moment, a moment when we cross a line never crossed before, granting machines the power to decide who lives and who dies.

AI’s use in the Venezuela operation might just be a snapshot of things to come.

  • Nitin is the Editor of the EurAsian Times and holds a double Master’s degree in Journalism and Business Management. He has nearly 20 years of global experience in the ‘Digital World’.
  • Connect with the Author at: Nytten (at) gmail.com