Mapping LAWS

Blog 05: New Zealand’s autonomous weapons challenge

As 2021 draws to a close, another meeting of the Convention on Conventional Weapons (CCW) ends without making any meaningful progress on the issue of lethal autonomous weapons systems (or LAWS). Much has already been said about the general sense of inertia at the CCW meetings, so I want to focus specifically on the challenges faced by Aotearoa New Zealand in its attempt to play a more assertive role in these debates.

Last week I was asked to make some comments on a story being produced by TVNZ journalist Logan Church on New Zealand’s stance on ‘killer robots’. Earlier in the week Church had run an initial story on the active stance being taken by New Zealand’s Minister for Arms Control and Disarmament, Phil Twyford, in favour of a regulation/ban framework for autonomous weapons under international law.

For his follow-up piece, Church interviewed prominent computer scientist and AI expert Stuart Russell, who has spoken out against the development of ‘killer robots’ for many years. Russell opined that New Zealand’s position within the Five Eyes intelligence network (alongside the US, UK, Canada and Australia) could make it an influential voice in challenging the AI/autonomy-focused weapons programmes in the partner countries. For my part, I argued the opposite: that if anything, New Zealand’s place within Five Eyes will make it difficult for this country to make and sustain a commitment to not using LAWS. There are a number of reasons for this.

From a political perspective, it looks painfully naïve to think that the United States, or even Australia, would take cues from New Zealand on the regulation of their military industries and the ethics of their military deployments. New Zealand has been a target of some resentment and scorn from its Five Eyes partners over recent years for not being tough enough on China, and fragments of antipathy towards New Zealand's strong history of anti-nuclear activism, which ended New Zealand's formal alliance with the US, still exist in the halls of power in Washington. Whether there is appetite to create another defence rift between the countries over these new technologies remains to be seen.

The possibilities of such a rift are real. The US, UK, and Australia are all investing very heavily in AI-powered weapons technologies, many of which have autonomous capabilities, in the belief that this is the best way to find an edge (or ‘offset’) if the developing tensions with China spill over into military conflict. There is a very strong expectation that New Zealand will stand shoulder-to-shoulder with its ‘traditional’ defence partners against Chinese aggression in the Indo-Pacific, an expectation that New Zealand’s most recent Defence Assessment seems to clearly affirm. It is for this reason that Twyford’s Cabinet paper on LAWS allows for ‘maintaining interoperability with our key defence partners’ and acknowledges that ‘our policy settings will need to take into account the prospects of operating alongside, and being able to communicate meaningfully with, AWS deployed militaries.’ This looks like a very significant carve-out designed to placate those key defence partners, and one which potentially undermines the whole commitment to bans or regulation of LAWS.

But the potential pitfalls may run even deeper than this. As a Five Eyes partner, New Zealand is responsible for gathering and sharing digital communications and intercepts from the South Pacific region and feeding them into US-based analysis systems. There is no question that this mass of data, along with that collected by other partners, could be used to train machine learning systems deployed by the US or Australia and could potentially be used as inputs into algorithms for targeting or attack decisions in war. We have already seen how metadata has been deployed by the US to identify targets and conduct ‘signature strikes’ with drones in the war on terror and this is only likely to increase with the development of autonomous technologies.

In a future scenario in which New Zealand has signed and ratified a ban on LAWS and then conducts military operations alongside LAWS-equipped militaries, or continues to supply intelligence data that the weapons systems need to function, accusations of a breach of its legal commitments, or at the very least of hypocrisy, would be hard to deny. It might be argued that so long as the New Zealand military doesn't kill people with its own autonomous technologies, the spirit of the commitment would be maintained. But the lines here are blurry, as they are in all matters related to these weapons technologies and algorithmic war. It is a relatively cost-free exercise to recognise and call out the moral and ethical problems of LAWS, but for New Zealand to consistently adhere to those commitments will require a more radical revision of its 'traditional' security settings.

When taking its anti-nuclear stance in the 1980s, the New Zealand Government was prepared to hold a strong and consistent line, right down to the denial of the US ship visit that led to the breakdown of ANZUS. There was a heavy price to be paid for that stance, and the Government was willing to pay it. Would today's New Zealand Government stand behind its commitments to ban LAWS to the extent that it meant withdrawing from the Five Eyes intelligence arrangement, or risking another intense souring of relations with the US and, this time, Australia? It's very hard to see such a situation eventuating in the current political context, which will make the playing out of Twyford's anti-LAWS agenda very interesting to watch over 2022 and the years beyond.

— Jeremy


Moses, Jeremy. (2021, December 24). New Zealand’s autonomous weapons challenge. Mapping LAWS: Issue mapping and analysing the lethal autonomous weapons debate.