
By George Allison,
Published by UK Defence Journal, 6 May 2023
In a recent evidence session of the AI in Weapons Systems Committee, experts debated the ethical, legal, and technical concerns raised by AI in weapons such as the Royal Navy's Phalanx system, and discussed potential bans on certain autonomous systems.
The Artificial Intelligence in Weapons Systems Committee recently held a public evidence session, inviting experts to discuss ethical and legal concerns surrounding the use of AI in weaponry.
The session included testimony from Professor Mariarosaria Taddeo, Dr. Alexander Blanchard, and Verity Coyle, who examined the implications of AI in defence and security.
Professor Taddeo highlighted three main issues with the implementation of AI in weapons systems, stating, “We need to take a step back here, because it is important to understand that, when we talk about artificial intelligence, we are not just talking about a new tool like any other digital technology. It is a form of agency.”
She emphasised concerns about the limited predictability of outcomes, the difficulty of attributing responsibility, and the risk that AI systems could perpetrate mistakes more effectively than humans. Taddeo argued that this unpredictability is intrinsic to the technology itself and unlikely to be resolved.
Verity Coyle, a Senior Campaigner/Adviser at Amnesty International, emphasised the human rights concerns raised by autonomous weapons systems (AWS), saying, “The use of AWS, whether in armed conflict or in peacetime, implicates and threatens to undermine fundamental elements of international human rights law, including the right to life, the right to remedy, and the principle of human dignity.”
She argued that without meaningful human control over the use of force, AWS cannot be used in compliance with international humanitarian law (IHL) and international human rights law (IHRL).
During the session, Coyle cited the Kargu-2 drones deployed by Turkey as an example of an existing AWS, noting that their autonomous functions can be switched on and off. She warned: “We are on a razor’s edge in terms of how close we are to these systems being operational and deadly.”
Asked whether existing AI-driven defence systems, such as the Phalanx used by the Royal Navy, should fall under such a ban, Coyle replied, “If it is targeting humans, yes,” indicating that any system that targets humans should be prohibited.
The experts recommended the establishment of a legally binding instrument that mandates meaningful human control over the use of force and prohibits certain systems, particularly those that target human beings.
