Source: https://diginomica.com/experts-warn-absence-human-error-autonomous-war-weapons-myth

Experts warn that absence of human error in autonomous war weapons “is a myth”

By Derek du Preez

July 4, 2023




(Image of a drone carrying weapons, by Ashfad Hossain from Pixabay)

The use of algorithms, AI and autonomous systems is penetrating almost every sector of society - and the benefits and risks of their use, depending on context, are being debated far and wide. However, the risks of using autonomous systems in, for example, social media scheduling are likely to be far less significant than the risks of using autonomous weapons at war. 

The latter point was the topic of discussion during an evidence session held by the AI in Weapon Systems Parliamentary Committee, where a number of experts shared their views on the critical need for human oversight when deploying autonomous weapons during conflict. 

In fact, the experts suggested that removing human intervention - or human responsibility for decisions in the field - would be nigh on impossible, given that conscious human decisions will have been made throughout the process of creating an autonomous weapons system. 

For instance, Taniel Yusef, Visiting Researcher at the Centre for the Study of Existential Risk, told the Committee: 

We must get away from the idea that there is a machine with an absence of human error, because a human being isn't in that process on the field. That's not the case at all. Human decisions are involved. 

Human decision making is littered all the way through the lifecycle of the weapon system, from the capture of the data, what data is chosen, to decision boundaries, to parameters, to labeling, to coding, to all of that toggling that happens. 

These decisions are made by human beings all the way through the process - so to have this myth presented, that there is no human error in this, I think would be a misrepresentation of the situation. I think human error exists all the way through this process - and so does computer error.

The debate between members of the Committee and the experts providing evidence was wide-ranging and often heated. The three experts on the panel argued that no one in the field, from policy makers to manufacturers, is arguing against the use of autonomous systems, but that people are calling for legally binding instruments to establish where responsibility lies and to provide clear guidance. 

In particular, the experts were clear in their view on whether autonomous systems should be used against humans. Laura Nolan, SRE and Principal Engineer at Stanza Systems, said: 

The main question around regulation is: should such systems be able to use humans as target profiles, which we think is very dangerous and problematic.

Yusef agreed and said: 

Selecting, targeting and the application of force on humans - that should be completely off the table for various reasons. One, they are harder to profile, they're just much more technologically different. It's unethical. It's difficult for legal reasons - to legally ascertain who would be a viable target as a matter of law, and it's technologically more difficult to target a human for all sorts of reasons. 

We’re more agile, we move differently. Airport scanners find it very difficult targeting us, let alone something in the noise of war. There are all sorts of reasons why humans should be off the table.

Caution is an advantage

One of the key points of tension during the Committee’s hearing came during a line of questioning from one of its members, around whether the UK should pursue autonomous systems with caution and whether or not weapons could actually be autonomous (in the purest sense of the word). 

Professor Christian Enemark, Professor of International Relations at the University of Southampton, said that the idea of an ‘AI arms race’ - whereby, if the UK’s enemies race ahead with autonomous systems, the UK should seek to outpace them - is a false principle in this scenario. He explained: 

The counterpoint to that is you don't have to put operational advantage and caution in opposition to each other, either as a matter of military strategy and operation, or as a matter of military ethics. And arguably, this particular space is a good example of that caution. 

On the technology, it is operationally advantageous to the extent that, for example, caution is caution against mistakes - such as those that would harm UK personnel, or would catastrophically harm civilians. 

For UK personnel charged with protection, caution could be a way of avoiding hastily moving towards systems where hostilities are initiated at high speed, or escalated at AI speed, to the detriment of UK interests - where things cannot be reined in because of technological factors.

Equally, Enemark was keen to highlight that weapons that are ‘autonomous’ are not capable of making judgments about discrimination - and that the proportionality of an outcome will always be a human-based decision. He said: 

This concept of discrimination and proportionality being discharged autonomously - that is an impossibility. Only humans can do discrimination. Only humans can do proportionality. The autonomous discharging by a nonhuman entity is, if you like, philosophical nonsense. 

Nolan added her support to this point and said: 

What I will say is this: if you're asking for a proportionality judgment, you need to know the anticipated strategic military value of the action. And there's no way that a weapon can know that. A weapon is in the field, looking at perhaps some images, doing some sort of machine learning and perception stuff. It doesn't know anything. 

It's just doing some calculations, which don't really bear any relation to the military value. Only the commander can know the military value, because the military value of a particular attack is not purely based on that contact, that local context on the ground - it's based on the broader strategic context. 

So I think it is absolutely impossible to ask a weapon on the ground to make that determination.

Equally, Nolan was keen to highlight that, in order to prevent things spiraling out of control with autonomous weapons, keeping humans actively engaged with autonomous systems is likely to be the key to better outcomes. She said: 

Humans are extremely poor supervisors of machine activity, particularly when that machine activity is fairly reliably correct. It's among the most boring things you can ask a human being to do. It's very, very difficult for people to sort of maintain that engagement. 

So I would state, instead, it is better to keep humans actively engaged in the process, wherever possible, and you will have better results. 

It is important to use [autonomous weapons] only when there is really an advantage. You don't want to sprinkle autonomy fairy dust on everything military because, in many cases…want to choose your strategic timing. So use it in places where there's no alternative, where it is worth it, where the risks are manageable.

An example of risk

Yusef helpfully provided an example of why legally binding principles around the use of autonomous weapons are so critical, particularly as it relates to civilian casualties. Her point cuts through to the core problems in this area, particularly with regard to how humans may use these systems as evidence of no wrongdoing, even if that wrongdoing did occur. 

Yusef explained that the mathematics applied in the algorithms used in these weapons is pretty “basic”: the system classifies data points by finding, for example, a hyperplane - essentially a boundary line separating two classes of data points. The system effectively creates a scatter graph of the image it is receiving. But Yusef said it’s important to understand that the system doesn’t really understand the image; it is defining the image based on a mathematical equation that determines on which side of the hyperplane (or other decision boundary) each pixel’s data falls. 

Weights and parameters can be included, but essentially the mathematical formula will predict the most likely class for the image - say, a human, a cat, a dog, or a piece of machinery. She said: 

So it's deciphering these pixels. It's a bit more complicated than this, but it's just maths, and it's quite rudimentary maths. But because of the computational methods and the power, it does it very, very quickly. 
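To make Yusef's description concrete, here is a minimal, purely illustrative sketch in Python of a hyperplane-based classifier of the kind she describes. The weights, feature values and class labels are hypothetical stand-ins, not taken from any real system; the point is simply that the 'decision' amounts to checking which side of a learned boundary a feature vector falls on.

```python
# Illustrative sketch only - hypothetical weights, features and labels.
import numpy as np

# A learned weight vector and bias define the hyperplane w.x + b = 0
# that separates two classes of data points.
w = np.array([0.8, -1.2, 0.5])   # weights fixed during training
b = -0.1                         # bias (offset of the hyperplane)

def classify(features: np.ndarray) -> str:
    """Label a feature vector purely by which side of the hyperplane it
    falls on - there is no 'understanding' of the underlying image."""
    score = float(np.dot(w, features) + b)
    return "class A" if score > 0 else "class B"

# A feature vector derived from pixels (here just made-up numbers).
x = np.array([0.3, 0.9, 0.4])
print(classify(x))   # the output is just the sign of a dot product
```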

However, these classifications aren’t foolproof - as Yusef said, it’s just a mathematical assessment of an image that’s been received during the noise of war, rather than a human understanding of the image. This is dangerous territory if we are going to trust weapons to act autonomously. She said: 

What concerns me is when this happens in the field, you will have people on the ground saying ‘these civilians were killed’. And you'll have a report by the weapon that feeds back: ‘but look at the maths’. 

This will be a recording from the field that says ‘the math says it was a target’. It will say ‘but it was a military base because the math says so’, and we defer to maths a lot, because maths is very specific. It won't be wrong, it will be right. But there is a difference between correct and accurate. There's a difference between precise and accurate. 

The maths will be right because it was coded [that way]. But it won't be right on the ground. And that terrifies me, because without a legally binding instrument, that kind of meaningful human control, control with oversight at the end, that's what we'll be missing. 

So when you ask the question, about proportionality and if it’s technically possible - no, it's not technically possible, because you can't know the outcome of a system, how it will achieve the goal that you've coded, until it's done. 
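Yusef's distinction between 'precise' and 'accurate' can also be illustrated with a small hypothetical sketch: the model below faithfully computes exactly what it was coded to compute and reports a confident answer, yet that answer need not match what is actually on the ground. The scores, labels and confidence values here are invented for illustration only.

```python
# Illustrative sketch only - invented scores, labels and ground truth.
import numpy as np

labels = ["civilian building", "military base"]

def softmax(scores: np.ndarray) -> np.ndarray:
    """Turn raw scores into a probability-like 'confidence' distribution."""
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

raw_scores = np.array([0.4, 3.1])   # hypothetical model output for one image
confidence = softmax(raw_scores)

predicted = labels[int(np.argmax(confidence))]
ground_truth = "civilian building"  # what is actually on the ground

print(f"Model reports: {predicted} ({confidence.max():.0%} confident)")
print(f"Reality:       {ground_truth}")
# The maths is 'right' in that it computes exactly what it was coded to
# compute - but the report it feeds back can still be wrong on the ground.
```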

My take

The use of AI in life or death situations probably holds lessons for how we should be using AI more broadly. Yes, the context is more extreme, but the principles can probably be carried through. For instance, in Yusef’s example, it’s clear that while an autonomous system may have made a decision, without human oversight that decision can’t entirely be trusted. Someone has coded that system, someone has defined the parameters - and often people are wrong. That’s not to say that autonomous systems shouldn’t be used, but we need to recognize that we can’t implicitly trust them just because the mathematics behind these systems is technically ‘accurate’. 

