Visualising themes in debates about autonomous weapons
When we embarked on this project, one of our first tasks was to distil the main themes of the debate surrounding lethal autonomous weapon systems (LAWS). As an early piece in our attempt to map the LAWS debate, we wanted to understand what the key points of interest and contention are and where they intersect. This first required extensive reading of texts on LAWS from international law, ethics, politics, the tech industry, and popular culture. We then identified the key issues relevant to each domain. We took a manual approach to this first pass, developing the themes and their linkages over a number of weeks in discussions within our research team. As the project progresses, we will look to consolidate and improve this taxonomy, both by deepening our own understanding of the issues and the connections between them and through automated text analysis.
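To give a flavour of the automated text analysis we have in mind for later stages, here is a minimal sketch of one common approach: unsupervised topic modelling over a corpus of LAWS texts. Everything in it is an illustrative assumption rather than a settled pipeline: the corpus directory, the choice of TF-IDF weighting with non-negative matrix factorisation, and the number of topics.

```python
# A minimal sketch of automated theme extraction, assuming a hypothetical
# directory of plain-text LAWS documents ("corpus/").
from pathlib import Path

from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

# Load the corpus of legal, ethical, political, and media texts on LAWS.
documents = [p.read_text(encoding="utf-8") for p in Path("corpus").glob("*.txt")]

# Weight terms by TF-IDF so that common filler words do not dominate.
vectoriser = TfidfVectorizer(stop_words="english", max_features=5000)
tfidf = vectoriser.fit_transform(documents)

# Factorise the document-term matrix into latent themes. Six components
# simply mirrors our six manual groupings; the right number would need to
# be chosen empirically.
model = NMF(n_components=6, random_state=0)
model.fit(tfidf)

# Print the top terms for each discovered theme, for comparison against
# the hand-built taxonomy.
terms = vectoriser.get_feature_names_out()
for i, component in enumerate(model.components_):
    top_terms = [terms[j] for j in component.argsort()[-8:][::-1]]
    print(f"Theme {i}: {', '.join(top_terms)}")
```

Comparing machine-derived topics like these against our hand-built groupings is one way the taxonomy could be validated and extended.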
Below we provide an interactive overview of the themes. Scroll down to read our explanations of the different themes and groupings. To skip straight to the visualisation, click here.
Our taxonomy is grouped into six major themes: International Law, Ethics, Politics, Military and Defence, Artificial Intelligence (AI) and Technology, and Popular Culture. We will focus on each of these in turn.
The debate over the regulation of LAWS under international law is being driven by state and NGO representatives and has played out primarily since 2013 in meetings held under the auspices of the Convention on Certain Conventional Weapons (CCW). The vast bulk of the CCW proceedings have focused on definitions, which have proven very difficult to pin down. The parties to these meetings have mixed aims, and questions are increasingly being asked as to whether the CCW is the best forum for developing law on LAWS. A range of different legal principles have been brought to bear on these proceedings.
Finding an agreed definition of lethal autonomous weapons has proven challenging. The range of ways in which AI and autonomy are, or could be, incorporated into weapons systems, and the question of whether those systems are lethal, make drawing clear lines difficult. Definitional debate has consequently shifted heavily to the concept of “meaningful human control” over lethal weapons systems as a measure of whether particular systems should be banned or regulated. There is still no consensus among states on these definitional issues, which are a necessary first step to generating legal texts on the matter.
The aims of the legal process are fairly straightforward for all parties: to come to agreement on what types of systems should be banned and what types could be allowed under certain conditions. Activist groups are developing principles on these questions, but states resisting regulation continue to insist that regulations or bans aren’t possible in the absence of agreed definitions.
While there has been widespread agreement that the CCW is the best forum in which to negotiate the regulation and/or banning of LAWS, this consensus is beginning to fracture, with some calling for a new treaty outside the CCW. This is largely because CCW rules must be agreed unanimously, which looks extremely difficult to achieve after nearly ten years of debate.
The key legal principles debated thus far surround questions of responsibility, accountability, discrimination, and proportionality under International Humanitarian Law (IHL); that is, who is responsible if autonomous weapons breach the laws of war, and how (if at all) can they operate within the confines of these rules? This has been supplemented by broader reference to international human rights law, and particularly to arguments about the potential impact of LAWS on human dignity.
Much of the academic, legal, military, and media literature on LAWS focuses on ethical issues: what are the potential harms posed by autonomous weapons and, conversely, what good might they do? These questions take us into ethical debates that cut across the various stakeholder groups. We have separated them out here to emphasise their importance, but their intersection with the other domains should be noted.
According to advocates of the development and deployment of LAWS, there are a number of potential goods that these new technologies could achieve. Some ethicists and roboticists have argued that these weapons systems could offer better compliance with the Law of Armed Conflict (LOAC or IHL). This is associated with claims that the technologies will bring unprecedented precision to the battlefield, ensuring fewer civilian casualties and less damage to public infrastructure. The fact that such weapons would not bring the unpredictability of emotional human decision-making to bear in high-pressure battle situations is also seen by some (rather counterintuitively) as a way to make war more “humane”. The broader theme of autonomous military robots “saving lives”, whether of soldiers, civilians, or people caught up in humanitarian disasters and accidents, is also extremely prevalent as a proposed positive outcome of LAWS.
Those arguing for a ban on or regulation of LAWS have pointed to a wide range of potential harms that such technologies could bring. Breaches of human rights and basic human dignity lie at the foundation of many of these arguments. Contrary to the advocates’ case, critics argue that the inability to bring human empathy and emotion to battlefield decisions will in fact cause greater harm than good. Others worry that autonomy necessarily means loss of control and unpredictability in the behaviour of AI-driven weapons systems, that the low-cost and low-risk nature of the weapons could generate more war, and that these wars would be worse as a consequence of their dehumanisation. Many of these themes relate to the image of “killer robots”, which is central to the campaign against LAWS and intersects with public views of the technology shaped by pop-culture depictions of rampaging military and police robots.
Politics is an obvious theme to choose. Whether, how, and where LAWS are developed and how they are used is inescapably political. As sub-themes we focus on power and policy.
Our power sub-theme focuses, at this stage, on power in the international sense. Strategic competition emerges as a dominant theme, particularly as it relates to the US-China relationship. The development of, and discourse about, new and emerging technologies such as LAWS is heavily shaped by perceptions of US-China strategic competition. The concept of an arms race is viewed through a similar lens, linked to perceptions of a broader AI arms race. Outside of strategic competition, the potential for use by terrorists or other non-state actors emerges as a concern.
Policy focuses more precisely on some of the mechanisms shaping development. Funding determines the amount of effort and resources available for the research and development of LAWS. A range of actors and motives influence funding decisions, including corporate interests, levels of government enthusiasm, military requests and perceptions of capability needs, and think tank lobbying. Funding is further shaped by dual-use technology developed in civilian industries, the nature of the defence industrial base in a given country, strategic outlook and defence postures, and interoperability requirements.
Meanwhile, policy prescribes whether and how LAWS are regulated and used domestically, including regulation of testing and certification for both civilian and defence purposes. The extent to which LAWS can be used domestically is also influenced by levels of public acceptance. As seen with other weapons systems, such as nuclear weapons or cluster munitions, public opinion can render the use of particular weapons systems unfeasible.
It is in the military/defence area that LAWS will be used, and consequently where the vast majority of the research on and funding of such systems takes place. The military brings a particular set of perspectives to the debates on LAWS, grounded in their view of their role, their understanding of the strategic and operating environments, and the capabilities they perceive they need to fulfil that role. While there are numerous possible sub-themes that could be considered, we settled on two: military advantage and trusted autonomy.
Researching and developing new and emerging technologies like LAWS is tied up in the desire for military advantage, to stay one step ahead of adversaries. In developing LAWS, military and defence discourse focuses on improving precision, winning wars, battlefield management, force protection, speed, mass, expendability, and endurance, all falling under the umbrella of military advantage. Given that advantage for states like the US and Australia requires working together, interoperability also emerges as a strong theme.
Trusted autonomy has also emerged strongly in the commentary around LAWS in the defence space. The focus here is on whether personnel can trust LAWS to perform predictably and as expected. The systems must also be incorporated into defence activities in ways that are consistent with how personnel and the public expect defence to behave, which means compliance with IHL and public acceptance of LAWS.
Autonomous weapons are emerging technologies and this makes debates about them distinct from debates about other controversial weapons technologies (e.g. nuclear weapons, land mines and cluster munitions). Although there are already weapons that could be defined as lethal autonomous weapon systems, the technological dreams of advocates and nightmares of critics (e.g. swarms of lethal autonomous drones) are yet to be realised. Roboticists, computer scientists and other technologists play a crucial role in bringing these technologies into existence, but the warnings of technologists have also sparked activism and inter-governmental debates about regulating autonomous weapons.
We have catalogued a number of issues related to the limits and problems of the technology and the role of those in industry developing it.
There is discussion of a number of problems related to the technologies underpinning lethal autonomous weapons. The machine learning algorithms within autonomous weapons systems (e.g. those used to classify targets) learn to generalise from datasets that encode human judgments and biases. Moreover, despite efforts to produce explainable AI, it remains difficult to understand what determines the predictions of “black box” autonomous systems. Adding to these problems, if deployed, autonomous weapons would be required to apply force based on sensor data in a chaotic, unstructured, unique, and uncertain operating environment: a war zone.
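As a toy illustration of these two problems, the sketch below trains a black-box classifier on synthetic labels that lean heavily on a single feature, much as biased human judgments might, and then applies permutation importance, one common post-hoc explainability technique. All of the data and names here are invented for illustration; this is nothing like a real targeting system.

```python
# A toy illustration of learned bias and partial explainability, using
# synthetic data only; in no way a model of a real weapons system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # four invented "sensor" features

# Suppose the human-supplied labels lean almost entirely on feature 0;
# a model trained on them will inherit that bias from the data.
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Permutation importance tells us *which* inputs the black box relies on,
# but not *how* it combines them: the explanation remains partial.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print(result.importances_mean)  # feature 0 dominates, mirroring the label bias
```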
Stepping away from operational problems with autonomous weapons, high-profile figures like Stephen Hawking and Elon Musk have warned that a possible end-point of artificial intelligence development is superintelligent AI that could escape human control and harm humans.
Technological limits also shape the development of autonomous weapons. While energy usage and computing resources constrain the size, mobility, and operating time of autonomous weapon systems, issues of bandwidth and networking in communication-deprived environments (e.g. buildings, underground tunnels) are a key rationale for developing autonomous systems that can continue their mission without direct communication and control.
When it comes to the technology industry itself and its role in the development of autonomous weapons, a significant theme is funding for robotics, autonomy, and other relevant technologies. Funding links universities, the tech sector, and governments, and central to these networks are research and funding agencies like the Defense Advanced Research Projects Agency (DARPA). DARPA's challenges are an example of how academic and private-sector researchers have been socialised into the long-term development of autonomous weapons, with teams often competing to build technologies for “search and rescue”.
Funding for AWS is also justified on the grounds that these “dual-use” technologies provide economic benefits for the private sector and non-military consumers, as well as contributing to economic growth. These dual-use arguments also come up in debates about regulating AWS, where measures to regulate or ban AWS are viewed as having the potential to constrain the activities of the technology sector.
The ownership and commercialisation theme reflects increasing concerns about who is developing, proliferating and profiting from these technologies.
Talk about autonomous weapons is not restricted to elites: popular culture references are noticeable in texts written with the general public in mind. Entertainment, traditional media and social media provoke and respond to public fascination and concern. We have categorised popular culture by positive and negative themes.
DARPA's challenges represent an effort to build public acceptance of military robot development. One of the most recent of these, the Subterranean Challenge, is broadcast live with sports-style commentary and highlights. Also relevant is another public face of robotics: the institutionalised “war” of television shows like Robot Wars and BattleBots, with some team sponsors and competitors active in military robotics and drones.
From Terminator to Black Mirror, the killer robot trope looms large in popular culture representations of robots and AI, invoked as both entertainment and cautionary tale. We have released a web interface, Robot Dreams, to explore the resonance of these negative themes in movie representations of robots. Our forthcoming article in Digital War documents how killer robots are a touchstone in media reporting and online discussions about robot quadrupeds, but this is part of a wider trend in discourse about autonomous weapons and military robots. Activists, militaries, and others are mobilising killer robot imagery to shape public understanding and support.
This is a work in progress: we intend to consolidate, extend, and verify these themes and their connections as the project continues. We welcome any feedback or suggestions on where the themes and the links between them could be improved or added to, so please do get in touch if anything you see (or don’t see) here is bugging you, or if you have comments about our work!
Click here to explore the visualisation
The visualisation was developed using this Observable Notebook.
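For readers curious about the mechanics: network visualisations of this kind typically consume a simple node/link structure. The notebook itself runs JavaScript, but the Python sketch below shows one plausible encoding of a small fragment of our taxonomy; the particular nodes, links, and file name are illustrative, not our actual dataset.

```python
# A minimal sketch of a node/link encoding for a theme network; the nodes,
# links, and output file name are illustrative, not our full dataset.
import json

taxonomy = {
    "nodes": [
        # Major themes...
        {"id": "International Law", "group": "theme"},
        {"id": "Ethics", "group": "theme"},
        # ...and sub-themes attached to them.
        {"id": "meaningful human control", "group": "sub-theme"},
        {"id": "human dignity", "group": "sub-theme"},
    ],
    "links": [
        {"source": "International Law", "target": "meaningful human control"},
        {"source": "Ethics", "target": "human dignity"},
        # Cross-domain intersections are simply links between sub-themes.
        {"source": "meaningful human control", "target": "human dignity"},
    ],
}

# Serialise in the form a force-directed graph layout (e.g. d3-force)
# typically consumes.
with open("taxonomy.json", "w", encoding="utf-8") as f:
    json.dump(taxonomy, f, indent=2)
```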