
AI-WAR: Is Unsafe, Untested, Unreliable Artificial Intelligence Giving China A Technological Advantage Over The U.S.?

That Artificial Intelligence (AI), as an enabling technology, holds extraordinary potential to transform every aspect of military affairs has been amply evident in the ongoing war in Ukraine and in Israel’s counterattacks in Gaza and Lebanon.

It now pervades military operations, powering autonomous weapons, command and control, intelligence, surveillance, and reconnaissance (ISR), training, information management, and logistical support.

As AI reshapes warfare, competition among the world’s major military powers to drive further AI innovation has intensified. If the recurring concerns of American strategic elites are any indication, China seems to be leading this race.

Until recently, the United States was said to be at the forefront of AI innovation, benefiting from leading research universities, a robust technology sector, and a supportive regulatory environment. Now, however, China is said to have surpassed the U.S. on these fronts and is feared to have emerged as a formidable competitor, with strong academic institutions and innovative research of its own.

Militarily speaking, Chinese advances in autonomy and AI-enabled weapons systems could impact the military balance while potentially exacerbating threats to global security and strategic stability.

The United States and its allies seem to be worried that the Chinese military, in striving to achieve a technological advantage, could rush to deploy weapons systems that are “unsafe, untested, or unreliable under actual operational conditions.”

A greater worry is that China could sell AI-powered arms to potential adversaries of the United States “with little regard for the law of war.”

Andrew Hill and Stephen Gerras, both professors at the U.S. Army War College, have just written a three-part essay arguing that the United States’ potential adversaries are likely to be highly motivated to push the boundaries of empowered military AI for three reasons: demographic transitions, control of the military, and fear of the United States.

They point out that regimes such as Russia and China are grappling with significant demographic pressures, including shrinking working-age populations and declining birth rates, which will threaten their military force structures over time. AI-driven systems offer a compelling solution to this problem by offsetting the diminishing human resources available for recruitment. In the face of increasingly automated warfare, these regimes can augment their military capabilities with AI systems.

Moreover, for Hill and Gerras, totalitarian regimes face a deeper internal challenge that encourages the development of AI – “the inherent threat posed by their own militaries.” Autonomous systems offer the dual advantage of reducing dependence on human soldiers, who may one day challenge the regime’s authority, while increasing central control over military operations. In authoritarian settings, minimizing the risk of military-led dissent or coups is a strategic priority.

From a geopolitical perspective, Hill and Gerras point out that Russia and China will feel compelled to develop empowered military AI, fearing a strategic disadvantage if the United States gains a technological lead in this domain. That is why they will always work towards “maintaining a competitive edge by aggressively pursuing these capabilities.”

The two U.S. Army War College professors argue vociferously that “we underestimate AI at our own peril” and would like unrestrained, unconditional support for military AI.

However, other analysts and policymakers, perhaps the majority, recognize that the augmentation of military capabilities through AI could be a double-edged sword, as the same AI can cause unimaginable damage when misused.

They seem to favor devising rules to ensure that AI complies with international law and establishing mechanisms that prevent autonomous weapons from making life-and-death decisions without appropriate human oversight. Legal and ethical considerations of AI applications are the need of the hour, so their argument goes. And they seem to have growing global support.

In fact, the United States government is initiating global efforts to build strong norms that will promote the responsible military use of artificial intelligence and autonomous systems.


Author and veteran journalist Prakash Nanda has been commenting on Indian politics, foreign policy, and strategic affairs for nearly three decades. A former National Fellow of the Indian Council for Historical Research and recipient of the Seoul Peace Prize Scholarship, he is also a Distinguished Fellow at the Institute of Peace and Conflict Studies. He has been a Visiting Professor at Yonsei University (Seoul) and FMSH (Paris). He has also been the Chairman of the Governing Body of leading colleges of Delhi University. Educated at Jawaharlal Nehru University, New Delhi, he has undergone professional courses at the Fletcher School of Law and Diplomacy (Boston) and Seoul National University (Seoul). Apart from writing many monographs and chapters for various books, he has authored the books Prime Minister Modi: Challenges Ahead; Rediscovering Asia: Evolution of India’s Look-East Policy; Rising India: Friends and Foes; Nuclearization of Divided Nations: Pakistan, Koreas and India; and Vajpayee’s Foreign Policy: Daring the Irreversible. He has written over 3,000 articles and columns for India’s national media and several international dailies and magazines. CONTACT: prakash.nanda@hotmail.com