Author: Abhijit Singh

Armed drones in Indian military: Can machines understand the rules of war?
Despite their growing usage in armed conflict, artificially intelligent unmanned combat systems raise questions of law, ethics and accountability
Published on Jan 11, 2023

India is on a drive to induct unmanned combat systems into the military. Months after the Indian Army announced the induction of “swarm drones” into its mechanised forces, the Navy chief, Admiral R Hari Kumar, reiterated the importance of autonomous systems in creating a “futureproof” Indian Navy (IN). Speaking at the Navy Day press conference last month, Admiral Kumar listed initiatives to bolster the Navy’s operational prowess, including a move to procure a fleet of armed “Predator” drones from the United States. It is incumbent on the IN, he said, to keep a close eye on the movements of Chinese vessels in the Indian Ocean Region. Military drones are important assets in “navigating the turbulent security situation” in the littorals.

The IN, indeed, has been on a mission to expand surveillance in India’s near-seas. Two years after it leased MQ-9B Sea Guardian drones from the US, the navy, in July 2022, released an unclassified version of its “unmanned roadmap” for the induction of remote autonomous platforms, including undersea vehicles. A key driver for the enterprise is underwater domain awareness, deemed an increasingly vital component of maritime deterrence in the Eastern Indian Ocean.

In the aftermath of the conflict in Ladakh in June 2020, there is a growing sense among Indian experts and military planners that China’s undersea presence in the Indian Ocean is on the cusp of crossing a critical threshold. Recent reports of the sighting of Chinese drones in the waters off Indonesian islands suggest the People’s Liberation Army Navy has been studying the operating environment of the Indian Ocean. Already, there has been a rise in the deployment of Chinese research and survey vessels in the waters around India’s Andaman and Nicobar Islands. Ever more alive to the dangers posed by foreign undersea presence in Indian waters, the IN has sought to acquire its own autonomous underwater vehicles (AUVs) with twin surveillance and strike capabilities.


The navy’s interest in armed undersea drones, however, has piqued the curiosity of maritime observers. Despite being widely used in underwater search and exploration, underwater vehicles have never quite been viewed as warfighting assets by India’s military establishment. Notwithstanding the AUVs’ utility in tasks such as mine detection and ship survey, India’s naval planners have traditionally desisted from deploying undersea drones in a combat role.

Not anymore, evidently. Indian analysts and decision-makers seem to be belatedly acknowledging the warfighting abilities of underwater autonomous platforms powered by artificial intelligence (AI). With the fourth industrial revolution (4IR) shaping a new era in warfare, Indian observers are beginning to recognise the likely impact of disruptive technologies on the maritime domain. AI powered by deep learning, data analytics, and cloud computing, many say, is poised to alter the maritime battlefront, potentially triggering a revolution in naval affairs in India.

Even so, a sense of foreboding surrounds the use of intelligent machines in maritime combat. Regardless of the compelling narrative that surrounds AI in warfare, the technology is more complicated than many imagine. To start, there is an ethical paradox that typifies artificially intelligent combat systems. Despite rendering warfare more deadly, AI compromises the control, safety, and accountability of weapon systems; it also raises the risk of shared liability between networked systems, particularly when weapon algorithms are sourced from abroad, and when the satellite and link systems that enable combat solutions are not under the user’s control. If that weren’t complicated enough, AI is predisposed to certain kinds of data. Biases in the collection of data, in the instructions for data analysis, and in the selection of probabilistic outcomes muddle rational decision-making, undermining confidence in automated combat solutions. What is more, AI seemingly automates weapon systems in ways that are inconsistent with the laws of war.


Such harms are far from conjectural. Critics of AI in warfare argue that fielding nascent technologies without comprehensive testing puts both military personnel and civilians at risk. A system that targets human beings on the basis of probabilistic assessments by computers acting merely on machine-learned experience (measuring differences between outcomes and expectations at every stage of computation), they contend, is problematic: the computer neither has access to all the data relevant to an informed decision nor recognises that it needs more information to arrive at an optimal solution. And if such a system erroneously uses force in a theatre of conflict, there is no one to hold accountable, for blame cannot be pinned on a machine.

The doctrinal paradox is equally troubling. There is no easy way of incorporating AI-fuelled warfighting approaches into doctrine, particularly when many of the technologies are at a nascent stage of development and there is little clarity about how effective AI could be in combat. Following the successful deployment of armed drones in the Ukraine war and the Azerbaijan-Armenia conflict, some make a case for military doctrine to be informed by the expectation of regular use of unmanned assets in war. But shaping policy to account for AI is challenging, because military doctrine is premised on a traditional understanding of conflict. If war is a normative construct, then there are rules and codes to be followed and ethical standards to be met. Military leaders know that the “necessity” of using force in war ought to be established and that “proportionality” in force deployment is critical. Templating doctrine on the conflict in Ukraine or the Azerbaijan-Armenia war would be a mistake.

The legal questions that underwater combat drones pose are no less complex. It is not yet clear whether unmanned maritime systems enjoy the status of “ships” under the UN Convention on the Law of the Sea; even if they do, it is unlikely that they can be classified as warships. Nevertheless, their lawful use is not necessarily precluded in either peacetime or armed conflict. Consider a situation in which an Indian unmanned drone outside the territorial waters of a neighbouring state felt compelled to engage a Chinese warship or survey vessel inside those waters. It would not necessarily be illegal, but it would be unprincipled. It would also create a precedent for China to respond in kind.


For the IN, there is also a capacity limitation that constrains the development of AI. While technology absorption in the navy has matured in certain areas over time, a large gap remains in the development of critical technologies such as systems engineering, airborne and underwater sensors, weapon systems, and high-tech components. Notwithstanding the announcement of multiple AI projects, the navy remains focused on using AI in non-combat activities such as training, logistics, inventory management, maritime domain awareness, and predictive maintenance. India’s maritime managers recognise that the IN is still at a point on its evolutionary curve where incorporating AI into combat systems could prove risky. An incremental approach, many believe, is the best way forward.

It is worth acknowledging that AI in warfare is not just a matter of combat effectiveness but also of warfighting ethics. AI-infused unmanned systems on the maritime battlefront pose a degree of danger, making it incumbent upon the military to deploy its assets in ways that are consistent with national and international law. India’s naval leadership would do well to avoid the conundrum of developing AI-powered underwater systems whose use might be justified in an operational context but would violate the fundamental principles of “humanity”, “military necessity”, and “proportionality” that underpin the laws of war.
This commentary originally appeared in The Indian Express.
The views expressed above belong to the author(s).

Author

Abhijit Singh


A former naval officer, Abhijit Singh is a Senior Fellow and heads the Maritime Policy Initiative at ORF. A maritime professional with specialist and command experience in front-line ...
