Men Vs Machine: Slaughterbots Set To Change The ‘LAWS’ Of Battlefield & Make Human Soldiers Redundant?

Lethal autonomous weapons systems (LAWS), also called “killer robots” or “slaughterbots,” are being developed by a clutch of countries and have long been a topic of debate, with concerns raised in international military, ethics, and human rights circles. Recent talks about a ban on these killer robots have brought them into the spotlight yet again.

What Are Killer Robots?

The exact definition of a killer robot is fluid. However, most agree that they may be broadly described as weapons systems that use artificial intelligence (AI) to identify, select, and kill human targets without any meaningful human control.

LAWS are first pre-programmed to eliminate a particular “target profile” and then deployed in an environment where the system’s AI scans data from onboard sensors to find the target. If the killer robot comes across someone whom the algorithm decides matches the target profile, it fires to kill.
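The scan-match-engage loop described above can be sketched in a few lines. This is a purely illustrative toy: the sensor fields, threshold values, and function names are all invented for the example and do not reflect the workings of any real weapon system.

```python
from dataclasses import dataclass

@dataclass
class SensorContact:
    heat_signature: float   # hypothetical normalized sensor reading, 0..1
    shape_score: float      # hypothetical similarity to a target silhouette, 0..1

# Made-up "target profile" thresholds, pre-programmed before deployment
HEAT_THRESHOLD = 0.8
SHAPE_THRESHOLD = 0.9

def matches_target_profile(contact: SensorContact) -> bool:
    """Decide, with no human in the loop, whether a sensor contact
    matches the pre-programmed target profile."""
    return (contact.heat_signature >= HEAT_THRESHOLD
            and contact.shape_score >= SHAPE_THRESHOLD)

def scan(contacts: list[SensorContact]) -> list[SensorContact]:
    """Return the contacts the algorithm would autonomously engage."""
    return [c for c in contacts if matches_target_profile(c)]

# Only the second contact crosses both thresholds
contacts = [SensorContact(0.5, 0.95), SensorContact(0.9, 0.92)]
engaged = scan(contacts)
```

The point of the sketch is the ethical crux the article raises: the entire kill decision reduces to a threshold comparison, with no human judgment anywhere in the loop.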

Slaughterbots differ from conventional unmanned military drones: in the latter, the decision to launch an attack is made remotely by a human operator, whereas in autonomous weapons it is made by the machine’s algorithms.

Such algorithm-dependent decision-making is likely to let these weapons operate faster and at a greater scale than human-controlled systems, while also reducing costs.

However, there are grave concerns about the ability of fully autonomous weapons to meet international humanitarian law standards. These include the rules of distinguishing between combatants and civilians, responding to aggression with proportionality, and attacking only when it is militarily necessary. There are also concerns that killer robots would threaten the fundamental right to life and the principle of human dignity.

The Case For Banning Slaughterbots

The dispute over fully autonomous weapons reached the international arena about a decade ago and has only intensified since. Critics have highlighted how difficult it would be to make these weapons comply with international humanitarian and human rights law.

The fear stems from the inherent difficulty of programming human traits such as reason and judgment into machines. The use of killer robots would also create an accountability gap: who is responsible for a killing, and who is to be punished?

Moreover, the possibility of having weapons that can make life-and-death decisions has created moral outrage.

Last week, a UN conference convened in Geneva to take concrete action on LAWS. For the first time, a majority of the 125 nations that are parties to the Convention on Certain Conventional Weapons (CCW) announced that they wanted constraints on killer robots.

The Russian ‘Skynet’ to lead military robots on the battlefield. (via RT)

Despite this majority support, a proposal to ban the machines could not be passed, as it was opposed by the US, Russia, Japan, Australia, South Korea, India, and Israel.

The conference, which concluded on Friday, managed only a vague commitment to consider measures acceptable to all. This impasse is seen as problematic in light of two important developments: reports of the first potential killing by a killer robot, and the swift pace at which some countries are developing these weapon systems.

Potential First Kill By Autonomous Weapon

Autonomous drones that can fly to a specific location, pick their own targets, and kill based on onboard algorithms are known to be in development. However, only recently did a possible case of an autonomous drone killing fighters on the battlefield come to light.

A United Nations report about a skirmish that broke out as part of the military conflict in Libya in March 2020 says that an autonomous drone made its wartime debut in the region.

According to the report, the assault came during fighting between the Government of National Accord and forces aligned with General Khalifa Haftar.

The UN Panel of Experts on Libya wrote that the “Logistics convoys and retreating [Haftar-affiliated forces] were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 … and other loitering munitions.”

A US military robot. (USAF photo)

The attack drone referred to here, the Kargu-2, is made by the Turkish company Savunma Teknolojileri Mühendislik (STM). It can be operated manually as well as autonomously. The UN report did not clearly state whether the Kargu-2 was operating autonomously or manually at the time of the attack.

Another instance of the use of autonomous drones was the 2020 war in Nagorno-Karabakh, where Azerbaijan fought Armenia with attack drones and missiles that could loiter in the air until detecting the signal of an assigned target. Some believe such a demonstration of LAWS in action may have enticed nations to spend further on this technology.

Growing Investment

The UK, US, Russia, China, and Israel are at the forefront of research and development of these killer robots. The UK’s military has poured “tens of millions of pounds” into AI-powered weapons, recently announcing a £2.5-million project for “drone swarms” controlled with the assistance of next-generation AI, autonomy, and machine learning.

As EurAsian Times previously reported, every branch of the US military is looking for more robots. While the US Navy is experimenting with a 135-ton ship named ‘Sea Hunter’ that could patrol the oceans without a crew, the country’s Army is developing a novel system for its tanks that will allow machines to smartly pick targets.

Australian Army training with a ‘Vision 60’ prototype as a multi-purpose sensor and recon bot. (via Twitter)

It is also developing a missile system, the Joint Air-to-Ground Missile (JAGM), that can pick out vehicles to attack without human say-so. The US Air Force is not far behind, either.

It is working on an autonomous aircraft teaming architecture under its “SkyBorg” program. It is speculated that the aircraft will allow the USAF “to posture, produce and sustain mission sorties at sufficient tempo to produce and sustain combat mass in contested environments”.

Russia, meanwhile, is believed to have a “drone submarine” equipped with “nuclear ordnance”. China, on the other hand, has announced its intention to become the global leader by 2030 in artificial intelligence (AI), the foundation on which such robots are built.

Meanwhile, Israel has established itself in the field of autonomous weapons, especially with its Harop ‘Suicide Drone’, Robattle wheeled battlefield robot, and Sentry-Tech automated border control machine gun.