Leash The “Killer Robots” Led By U.S. & China! UN Pushes For Global Rules On Unethical AI-Driven Weapons

At a time when Artificial Intelligence (AI) is being integrated into militaries around the world at an increasing pace, promising faster and more coordinated decisions across all domains, the United Nations is debating how AI could make wars more unpredictable, dangerous, and unethical, and why it is time to establish shared military operational guidelines on the use of AI.

On September 24, U.N. Secretary-General António Guterres told the Security Council: “Humankind can’t allow killer robots and other AI-driven weapons to seize control of warfare. Innovation must serve humanity – not undermine it.”

He then added that “there is an urgent need for consensus over international regulation”.

On September 26, the delegate of the International Committee of the Red Cross (ICRC), which has 160 years of experience witnessing the introduction of new weapons on the battlefield, reiterated a joint appeal by the Secretary-General and the President of the ICRC that States must conclude, as soon as possible, a legally binding instrument setting clear prohibitions and restrictions on autonomous weapon systems.

The ICRC also urged States to adopt a human-centered approach to military AI in order “to ensure that human control and judgement are preserved in all decisions that pose risks to the life and dignity of people affected by armed conflict”.

It may be noted that on June 5, the U.N. Secretary-General presented a report to the General Assembly on “Artificial intelligence in the military domain and its implications for international peace and security”.

Whether such appeals will be heeded by the major powers remains to be seen, but it is a fact that an intense global arms race in artificial intelligence is now underway. Although the leading powers recognize the importance of limiting AI-powered autonomous weapons, none of them trusts the others to do so first.

AI has now emerged as a central axis of both geopolitical and strategic competition, on the one hand, and military innovation, on the other. This is particularly true of the U.S. and China, although Russia is not far behind in utilizing AI as a force multiplier for future warfare.

Apparently, Washington is advancing AI-reliant operational doctrines, such as Joint All-Domain Command and Control (JADC2) and Mosaic Warfare, aimed at achieving decision superiority through speed, data integration, and distributed command across all operational domains. On its part, Beijing is believed to be pursuing an ambitious Military-Civil Fusion strategy, designed to integrate civilian AI innovation into its military-industrial complex systematically.

Experts say that at the core of this race is the rise of AI-enabled Decision Support Systems (DSS) – tools designed to help commanders process and analyse vast volumes of data from multiple sources (radar, intelligence, open-source data) to make faster and more informed battlefield decisions.

Other benefits of AI in the military include its enhancement of autonomous capabilities, such as managing surveillance, disarming improvised explosive devices (IEDs), operating unmanned military aircraft, and deploying robot sentries, thereby reducing risks to human soldiers and improving operational efficiency.

AI is also used to detect and respond to cyber threats in real-time, identify vulnerabilities in enemy networks, and launch targeted cyberattacks.

Importantly, AI now assists in the “kill chain” process by identifying and selecting potential threats, assessing collateral damage, and informing the most appropriate weapon selection for precise targeting.

In other words, strategists, warfighters, and technologists have begun to view AI as a tactical and strategic tool for outpacing adversary decision-making processes through the framework of what is known as the OODA (Observe, Orient, Decide, and Act) loop.

However, at the same time, AI is also said to be a double-edged sword. While supporters argue that AI is indispensable for fighting a war, critics caution against overreliance on opaque algorithms, warning that this could erode human judgment, obscure accountability, and increase the risk of tactical failures.

Critics point out that the potential over-reliance on technology could leave militaries vulnerable when these systems fail or are disrupted. A jammed signal, a depleted battery, or an enemy cyberattack could render sophisticated technology systems useless, leaving soldiers unprepared to operate without them, so their argument goes.

Conceptualising AI use through the OODA loop’s emphasis on speed is now argued to have significant limitations, five of which are particularly noteworthy, according to Myriam Dunn Cavelty and Sarah Wiedemar, researchers at the Center for Security Studies (CSS) at ETH Zurich, Switzerland.

(Photo caption: U.S. President Donald Trump displays a signed executive order during the “Winning the AI Race” summit, hosted by the All-In Podcast and Hill & Valley Forum at the Andrew W. Mellon Auditorium in Washington, DC, on July 23, 2025. Trump signed executive orders related to his Artificial Intelligence Action Plan during the event. Chip Somodevilla/Getty Images via AFP)

5 Big Challenges

First, from a technical standpoint, battlefield realities such as degraded, incomplete, or low-resolution data can undermine the performance of AI DSS.

Building and maintaining operational trust in these systems is therefore difficult, especially in high-risk and adversarial environments, and all the more so when military personnel lack sufficient technical literacy, which is often the case. It is not easy to ensure that an operator can interpret, interrogate, and confidently act on a system’s recommendations, rather than simply accepting outputs at face value or ignoring them altogether.

Secondly, from an organisational perspective, the growing reliance on commercial providers to feed data into AI DSS could pose interoperability and data-control risks over time. Cloud infrastructure is vulnerable to cyberattacks, outages, and physical disruptions, any of which could undermine the proper functioning of AI DSS. Other issues include data laws and maintaining sovereign control over sensitive data.

Even operationally, militaries need to scale computing capacity at short notice to accommodate spikes in intelligence processing or real-time battlefield analysis. Yet the fact remains that most militaries face significant talent gaps in this area.

Thirdly, the use of AI DSS also poses challenges at the doctrinal level. The integration of increasingly sophisticated and potentially autonomous AI DSS confronts traditional command structures and decision-making processes, which may blur the lines of responsibility and accountability.

Doctrinal concepts of use are developing slowly, leaving uncertainty about how such systems should be utilized in dynamic operational settings and how human operators should interact with AI-generated outputs.

Without adequate training and guidelines, there is a risk that operators may either defer too readily to AI systems, resulting in overreliance, or underutilize them due to a lack of understanding or confidence.

Fourthly, AI-enabled decision-making raises fundamental questions at the political level. AI DSS are supposed to operate in accordance with international humanitarian law, including the core principles of distinction, proportionality, and military necessity. Otherwise, political leaders will face difficult decisions regarding interoperability – particularly when AI systems differ in engagement logic, data sourcing, doctrinal assumptions, or levels of compliance with that law.

The key point is that the responsible use of AI DSS must be grounded in legal, normative, and strategic principles.

Finally, growing reliance on commercial systems introduces strategic dependencies that may undermine national autonomy. While adapting commercial platforms can be faster and cheaper, it risks reduced control over core capabilities and vulnerabilities from proprietary systems and private-sector decisions.

In sum, as Cavelty and Wiedemar rightly argue, “technological capability alone does not guarantee strategic advantage. As innovation accelerates, there is a growing risk that AI DSS will be developed and deployed faster than they can be fully understood, governed, or meaningfully integrated into military operations. The challenge is not simply to accelerate decision-making, but to ensure that decisions remain informed, accountable, and operationally sound – even under conditions of uncertainty, complexity, and time pressure. Striking this balance will determine not only the effectiveness of future warfare, but also the broader legitimacy of AI in armed conflict”.

Thus, the challenge now is not merely whether militaries employ AI support systems, but how to employ them in a manner that delivers a genuine operational advantage while preserving accountability, adhering to the laws of armed conflict, and maintaining meaningful human judgment in critical moments.

And that is precisely the point that the U.N. Secretary-General was making the other day.

Prakash Nanda
Author and veteran journalist Prakash Nanda has been commenting on Indian politics, foreign policy on strategic affairs for nearly three decades. A former National Fellow of the Indian Council for Historical Research and recipient of the Seoul Peace Prize Scholarship, he is also a Distinguished Fellow at the Institute of Peace and Conflict Studies. He has been a Visiting Professor at Yonsei University (Seoul) and FMSH (Paris). He has also been the Chairman of the Governing Body of leading colleges of the Delhi University. Educated at the Jawaharlal Nehru University, New Delhi, he has undergone professional courses at Fletcher School of Law and Diplomacy (Boston) and Seoul National University (Seoul). Apart from writing many monographs and chapters for various books, he has authored books: Prime Minister Modi: Challenges Ahead; Rediscovering Asia: Evolution of India’s Look-East Policy; Rising India: Friends and Foes; Nuclearization of Divided Nations: Pakistan, Koreas and India; Vajpayee’s Foreign Policy: Daring the Irreversible. He has written over 3000 articles and columns in India’s national media and several international dailies and magazines. CONTACT: prakash.nanda@hotmail.com