
Safety isn’t the only issue for self-driving vehicles - data privacy and algorithmic oversight need attention too

By Derek du Preez

August 22, 2022

(Image: two cars parked in a parking space, by Peggy und Marco Lachmann-Anke from Pixabay)

Self-driving vehicles have been tested on public roads and promoted by technology companies for a number of years now. The argument goes that through the use of visual data and advanced algorithms, automated vehicles will result in much safer driving experiences. 

The discussions around safety have been explored at length - and rightly so. Automating machines that carry people across large, complex terrains needs to be examined carefully. And whilst attitudes to the deployment of self-driving vehicles vary, it’s not too hard to imagine a near future where removing human error from the roads does indeed result in fewer accidents and/or fatalities. 

But given the understandably high profile of user safety in discussions about deploying self-driving vehicles, other concerns are sometimes overlooked. This is particularly true of data privacy and algorithmic bias and oversight. 

Vehicles that rely on the collection of information to operate autonomously are inevitably going to collect and store huge swathes of data. This data is often personal and is collected without explicit consent - for example, visual data of someone walking near the vehicle. In theory we could be looking at a world with thousands of mobile surveillance systems roaming the roads all over the country, with private companies collecting visual data on unaware passers-by.

Not only this, but because companies want to maintain a competitive advantage in the quickly developing market for autonomous vehicles, how this data is processed and used isn’t always clear. The problem of ‘black box’ algorithms has been highlighted in other fields - but isn’t often in the public consciousness when thinking about driverless cars. 

The UK’s Centre for Data Ethics and Innovation (CDEI) is looking to bring more attention to this area, with the release of its ‘Responsible Innovation in Self-Driving Vehicles’ policy paper. The paper aims to create a framework for the safe development of autonomous vehicles, which covers everything from road safety to governance. 

The paper comes off the back of the British Government's plans to invest £100 million in research and safety development of self-driving vehicles, with the aim of getting them on the road by 2025. It is estimated that this could bring 38,000 new jobs to the UK and create a £42 billion industry. 

Data privacy

This article will put road safety to one side, as it has been covered comprehensively elsewhere. However, the CDEI does a good job of highlighting the data concerns that result from the introduction of autonomous vehicles (AVs). 

As the report notes, whilst AVs collect data in a similar way to other devices that are readily available (smart speakers, video doorbells), their use creates some unique problems. The CDEI explains: 

There are two key characteristics of AVs that suggest particular attention should be paid to the privacy implications of these systems. 

Firstly, AVs may lead to widespread collection and processing of personal data in order to achieve core functionality such as detecting other road users in situations where explicit consent is not feasible. 

Secondly, they require regulatory authorisation for deployment (as discussed in the safety section above) that may be perceived as regulatory endorsement (implicitly or explicitly) about this personal data processing, including how they strike the right balance between what is necessary for safe driving, and sufficient protection of personal data. These challenges merit careful consideration given the potential future scale of AV use in public spaces.

AVs are likely to process several categories of personal data, such as time-stamped location data of the vehicle (which carries a high degree of identifiability), as well as health and wellbeing data on the driver. Not only this, as noted above, AV sensors may also collect personal data from individuals outside the vehicle (pedestrians and other road users), including facial images collected from video feeds. 
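
To make the identifiability point concrete, here is a minimal Python sketch - the record schema and field names are hypothetical, not drawn from any real AV system - of how a time-stamped location trace ties position to a persistent vehicle identifier, and how one simple data-minimisation step might coarsen coordinates before they leave the vehicle:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class TelemetryRecord:
    """Hypothetical AV telemetry record; all field names are illustrative."""
    vehicle_id: str      # persistent identifier - makes the whole trace linkable
    timestamp_s: float   # unix time of the position fix
    lat: float
    lon: float

def coarsen(record: TelemetryRecord, places: int = 3) -> TelemetryRecord:
    """Round coordinates (3 decimal places is roughly 100 m) before the
    record leaves the vehicle - one simple data-minimisation step."""
    return replace(record,
                   lat=round(record.lat, places),
                   lon=round(record.lon, places))

raw = TelemetryRecord("AV-0001", 1661155200.0, 51.50742, -0.12780)
print(coarsen(raw))  # lat/lon now 51.507, -0.128
```

Rounding to three decimal places keeps roughly 100-metre precision - a small step that reduces, though by no means eliminates, how identifiable a location trace is.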

The report also highlights how some companies are exploring the use of biometric data of road users outside of the vehicle. Biometric data is essentially personal data that relates to the physical, physiological or behavioural characteristics of a person. This may be useful in instances where other road users engage with the vehicle - for example, by making eye contact with it. 

The CDEI says that there may be legitimate reasons for collecting data in this way under GDPR legislation, but it is “something of a grey area and would be subject to undertaking a legitimate interests assessment”. 

And as highlighted previously, the use of video feeds on AVs creates a potential new ‘surveillance environment’ that’s operated by a select few private companies. The report notes: 

Some AVs use video cameras that, while their primary purpose is safe operation, can also function as surveillance cameras by collecting, storing and transmitting video of their environments (in a non-targeted way). 

This video data could potentially be reused for other purposes such as evidence of crimes unrelated to road safety, and there is some evidence that this is already happening in both public and private places. Unlike dash cams, these are now potentially core capabilities of the safe operation of an AV, which would be regulated in the future by DfT agencies. 

In effect, this is potentially approving a surveillance capability, and DfT should draw on the existing governance frameworks for surveillance cameras.
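
Redaction at the point of capture is one commonly discussed mitigation. As a rough illustration, the sketch below uses OpenCV’s bundled Haar-cascade face detector to blur faces in a frame before it is stored; the detector and calls are standard OpenCV, but the file names and thresholds are illustrative, and a production system would need far more robust detection:

```python
import cv2

# OpenCV ships a pre-trained frontal-face Haar cascade with the package.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def blur_faces(frame):
    """Detect faces in a BGR frame and Gaussian-blur each region,
    so stored footage keeps road context but not identifiable faces."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame

# Redact before writing to disk, rather than storing raw footage.
frame = cv2.imread("camera_frame.jpg")           # illustrative input file
cv2.imwrite("camera_frame_redacted.jpg", blur_faces(frame))
```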

Black box oversight

Closely related to the issues of data privacy are those of explainability. Although self-driving vehicles operate autonomously, the CDEI rightly notes that they lack “moral autonomy”. Simply put, if something goes wrong you can’t blame the vehicle itself. 

The report states: 

Since a self-driving vehicle lacks agency, any action it performs must be traced back to its designers and operators. The Law Commissions have concluded that it is not reasonable to hold an individual programmer responsible for the actions of the vehicle. Instead, the ASDE (authorised self driving entity) as an organisation bears responsibility. 

This raises a fundamental need for an appropriate degree of explainability for the vehicle’s ‘decisions’.

However, explainability in this area isn’t always easy. The CDEI notes how investigations into high-profile self-driving vehicle crashes have found poor perception and classification of objects, as well as unsatisfactory post-hoc explanations. 

Explainability allows for improvements to safety and accountability, and provides evidence with which to evaluate the fairness of systems. But this isn’t always easy with AVs, given that machine learning based systems are challenging to explain. It matters all the more given the personal data being collected and the personal safety risks at play - where accidents will prompt a search for someone to blame. The report adds: 

The potential hazards of AVs as robots operating in open-ended, uncertain environments, raise the stakes for the interpretability of AI. With other technologies that make use of machine learning systems, performance has been prioritised over interpretability. Growing interest in explainable AI is starting to redress this balance, but there may be some uses of machine learning in AVs, such as computer vision, that remain incompletely interpretable. It may be impossible to know with certainty why an AV image recognition system classified an object or a person according to a particular category. Other parts of AV systems, such as those that determine the speed and direction of the vehicle, are in many cases rules-based and therefore more easily explainable.

Techniques for ensuring explainability will differ across AV systems. An ASDE may need to review logs from a particular event or replay logs through a simulator. Generating explanations for ML-based systems remains an active research area and it is likely that capabilities will advance significantly in the coming years.
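
The point about reviewing and replaying logs suggests structured, append-only decision records. Here is a minimal sketch - the schema is entirely hypothetical - of the kind of time-stamped record an ASDE might persist so the chain from perception to control decision can be replayed after an incident:

```python
import json
import time

def log_decision(log_file, perception, plan):
    """Append one time-stamped record linking what the vehicle perceived
    to the control decision it took, so the sequence can later be
    replayed through a simulator during an incident investigation."""
    record = {
        "timestamp_s": time.time(),
        "perception": perception,   # e.g. detected objects with confidences
        "plan": plan,               # e.g. chosen speed/steering command
    }
    log_file.write(json.dumps(record) + "\n")  # JSON Lines: one event per line

with open("decision_log.jsonl", "a") as f:
    log_decision(
        f,
        perception=[{"class": "pedestrian", "confidence": 0.92,
                     "bearing_deg": -12.0, "range_m": 18.5}],
        plan={"target_speed_mps": 2.0, "steering_deg": 0.0,
              "reason": "yield to pedestrian"},
    )
```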

My take

Improving safety on the roads through AVs is a worthy pursuit and one that will likely become a reality in the near future. But what’s needed is effective regulation to ensure that this network of surveillance systems, which rests in the hands of a few privately owned companies, treats privacy and explainability as equally important as safety. This is one of those areas where the likely outcomes aren’t yet predictable, so regulation needs to be thoughtful from the start. 

