Autonomous weapons systems pose a tremendous physical and ethical threat to humanity. While the majority of countries want them prohibited, a few powerful states are blocking progress. Richard Moyes and Uldduz Sohrabi argue that the time is right for debate and action that moves beyond the deadlocked UN process.
‘Artificial intelligence’ is rapidly being integrated into all sectors of society – and warfighting is no exception. Militarised states have also increased their spending on AI-related technologies for military use. According to a report by the Center for Security and Emerging Technology (CSET), the US Department of Defense is estimated to have spent between $800 million and $1.3 billion (USD) on AI in 2020, with an additional $1.7 billion to $3.5 billion for unmanned and autonomous systems. The same report finds that China has matched this spending.
With the world going through what some describe as a fourth industrial revolution, increased spending on AI and robotics is high on the agenda of many militaries. Yet dreams of military advantage from new technologies sit alongside deep moral concerns: that human moral engagement in the use of lethal force will be lost, that existing legal expectations will be eroded, and that machine decision-making will lead to still further dehumanisation.
How new technologies will be configured and used remains an open question – but the potential to apply lethal force faster and without putting your own troops in harm’s way creates a set of incentives that need to be actively counteracted. In this context, Article 36 is part of a community of states, international organisations and civil society actors proposing a legal structure to retain meaningful human control in the use of lethal force. We believe that new rules are needed to address autonomy in weapons systems – to make sure human moral agency is retained and to avoid a society where we mandate machines to make decisions to target and kill people.
After nearly a decade of discussions at the United Nations it is clear that the most militarised states are determined to prevent the development and adoption of such rules. This sets up a challenge to the wider international community – regardless of opposition from Russia, the United States, India, the UK and other leaders in military technologies – to shape the landscape of rules and expectations that can protect our wider societal interests.
Regulating autonomous weapon systems
International discussions on ‘autonomous weapons’ started in 2013 at the UN Human Rights Council, and since 2014 the issue has been on the agenda of the UN Convention on Certain Conventional Weapons (CCW).
The progress of these discussions has been slow. This is partly because of a genuine lack of shared understanding of the subject matter or agreement around key terms. But it is also a product of deliberate efforts by militarised states – refusing to find agreement and using the CCW’s rules of procedure to exert an effective veto over anything approaching real progress. Whilst the majority of countries are calling for negotiation of a legal instrument, states such as Russia, the United States, Israel, India, China and the UK have kept the conversation bogged down in previously agreed language and endless reiterations of existing international law, whilst refusing to back proposals to negotiate.
While these states choose to remain in a cold war mentality, others are now approaching the issue with a clear awareness that regulating autonomous weapons has much wider implications – that it is not just about the rules of warfare, but about how we, as individuals and as a society, will relate to technology in the future.
Article 36 and the Stop Killer Robots campaign have proposed a structure of regulation that is based on preserving the vital elements of human control and avoiding the dehumanisation of ‘machines targeting people’. It is an approach to regulation that would prohibit unacceptable systems and system configurations, but would also apply ‘positive obligations’ to require human control to be exerted in practice.
The last two years have seen a dramatic convergence of thinking amongst states and other organisations towards such a ‘two-track’ approach of prohibitions and regulations – to the extent that such an approach could now be described as the mainstream position. It was adopted by the Chair of the CCW Expert Group in his effort to draft a paper that reflected the position of the majority. This convergence is a fundamental shift – and vital to the establishment of an effective legal instrument.
So, despite certain militarised states working to prevent progress, agreement has been forming regardless. There are still significant differences on details amongst the majority who adopt the ‘two-track’ approach – but at least these states can now clearly see where those differences of opinion lie, and can discuss them in the knowledge that there is also much that they agree on. This is a platform for negotiation.
This shift has been possible because of a stepping away from futuristic sci-fi notions and a recognition that this is a ‘now’ issue about machines applying lethal force on the basis of sensor information. All weapons that would function ‘autonomously’ share at least one common feature: they all use a sensor (or sensors) to automatically detect a target and trigger a response against it. Article 36’s proposal starts by identifying this as the foundation for our technologies of concern – and in doing so we situate the issue clearly in the present.
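To make that common feature concrete, the minimal sketch below contrasts the two control models at stake. It is purely illustrative pseudocode: every name in it (read, matches, authorises, fire) is a hypothetical stand-in, not a reference to any real weapon system or software.

```python
# Illustrative sketch only. All objects and methods here (sensor.read,
# profile.matches, operator.authorises, weapon.fire) are hypothetical
# stand-ins – they do not describe any real weapon system or API.

def autonomous_engagement(sensor, profile, weapon):
    """Fully autonomous: the sensor reading alone triggers force."""
    reading = sensor.read()
    if profile.matches(reading):
        weapon.fire(reading)  # no human judgement between detection and force

def human_controlled_engagement(sensor, profile, weapon, operator):
    """Meaningful human control: a person approves each engagement."""
    reading = sensor.read()
    if profile.matches(reading):
        # A human sees the context and makes the moral and legal judgement.
        if operator.authorises(reading):
            weapon.fire(reading)
```

The policy debate is, in essence, about that inner ‘if’: whether, where and how a human judgement sits between sensor detection and the application of force.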
Weapon systems that use sensors are not new. Landmines provide an example of such a system in perhaps its most primitive form. More advanced sensor-based weapons currently in use include the DoDAAM Super aEgis II sentry gun, active protection systems on armoured vehicles, and larger missile defence systems such as the US Patriot system or the Israeli Iron Dome. According to its developer, the Super aEgis II, a South Korean sentry gun, can operate in a fully autonomous mode but has only been deployed in a semi-autonomous mode, out of concern to avoid mistakes.
Within the scope of sensor-based weapons, Article 36 is calling for a prohibition on systems that target people directly. Having machines targeting people is an affront to human dignity – and with the exception of anti-personnel landmines (which are now outlawed) such technologies have not been widely adopted. A ban on systems that target humans would also be important for preventing an erosion of the protections (for both civilians and combatants) that are currently enshrined in the law.
A second line of prohibition would be on systems that do not allow meaningful human control, and so are not sufficiently predictable for those activating them to make informed judgements about the likely consequences of their decisions. All other autonomous weapons systems – sensor-based systems that do not target humans and can be used with meaningful human control – should be subject to certain ‘positive obligations’ in their design and use to require that they are controlled sufficiently in practice.
The CCW: A forum that can’t make progress?
Whilst the development of a broadly shared policy orientation is a basis for optimism, discussions in the UN CCW have also illustrated the challenges of making progress in the face of political opposition from militarised states. In 2021, the efforts of the Chair of the Group of Governmental Experts (GGE), Belgium’s ambassador to the UN in Geneva, were systematically rejected by certain states. Russia was the most vocal, but India, Israel, the USA and others, all spoke against an ambitious way forward. The CCW’s way of working effectively allows any state to block agreement on formal decisions and so – amidst much geo-political posturing that had nothing to do with the subject matter at hand – these states blocked the forum from moving towards full negotiations.
Such blocking is a consistent feature of the CCW, and one of the reasons it hasn’t been able to agree any new legal rules for 20 years. None of the meeting’s participants were actually expecting that to change. The CCW is a forum that cannot make formal progress, but where groups of states can develop the shared understandings and partnerships that are needed to make progress elsewhere. After all, both the (Ottawa) Mine Ban Treaty on anti-personnel mines and the Convention on Cluster Munitions were developed independently after the CCW was unable to make progress.
The CCW will meet again in 2022 – the next GGE meeting is due to be held from 7 to 11 March. Expectations for those meetings are now very low, and they are expected to be dominated by US efforts to direct discussions towards compiling a list of rules of existing law that might be relevant to the issue. Such an exercise will do nothing to determine the necessary characteristics of human control, nor to challenge the dehumanising prospect of machines identifying people as targets.
Of course, states that want to see action will continue to participate in those discussions in good faith – but after the experience of 2021, they will not be doing so with any expectation that such work can come to a meaningful conclusion. Rather, attention will be turning elsewhere – to identifying a process of discussion through which the majority of states can refine and then give legal effect to the broad policy orientation that has now come to the fore.
Limiting the extent to which machines determine where and when lethal force will be applied is a fundamental legal challenge, and it requires a legal response. Whilst militarised states will do all that they can to prevent such legal developments, it does not serve our wider societal interests to wait for their permission to act. This is an issue that bears upon how we relate to technology in society – the extent to which our lives are at the mercy of new technologies, and the extent to which we have the social and political tools to control that relationship. Progressive states need to start setting the standards for that debate.
Richard Moyes is Managing Director at Article 36, a specialist non-profit organisation focused on reducing harm from weapons.
The views and opinions expressed in posts on the Rethinking Security blog are those of the authors and do not necessarily reflect the position of the network and its broader membership.
Image Credit: sibsky2016 via shutterstock.com