Regulating for AI - why the UK’s political narratives make it much harder to succeed

By Chris Middleton

November 23, 2021


An image of the word ‘AI' (Image by Gerd Altmann from Pixabay)

In a world in which data volumes will double by 2022 and again by 2025, emerging, data-centric technologies present a massive opportunity for Brexit UK. Chief among these is artificial intelligence (AI), an area in which Downing Street sees itself as governing a superpower. Britain has backed its ambition for world leadership in the field with a Sector Deal, a new National Strategy, a new Office for AI, new research institutions, and more - all excellent initiatives kicked off since the Brexit referendum in 2016.

However, as previously explored by diginomica, the reality is that while the UK has world-class AI expertise, academic excellence, and a thriving start-up scene, it is falling behind China - whose government can throw billions of dollars and overwhelming labor power at its own ambitions - and the US, which has hot-housed the Big Tech behemoths that dominate our lives.

Meanwhile, the UK has walked away from one of the largest datasets on the planet: the EU, with its population of nearly 450 million people. It has also torn up some international research and funding alliances, while others, such as Horizon Europe, are mired in the Brexit war of words. All this militates against leadership and essential R&D collaboration. Paradoxically, the damage done by Britain's desire to go it alone has made it much harder to do so.

So, the real question for Britain now is how it can maximise the economic, skills, and knowledge paybacks from its strengths and, if not dominate the AI world, at least flourish within it.

At a recent Westminster eForum on the policies and regulations that might enable Britain to capitalize on emerging technologies, Matt Hervey, Partner and Head of AI Law at legal practice Gowling WLG, set out his own vision of the frameworks needed. But first he felt obliged to tick the box marked ‘apocalypse':

[AI is a] clear, if extreme, example with which to illuminate legal, policy, security, and regulatory considerations for emerging technology. AI offers potentially massive benefits - and predictions are routinely measured in trillions of dollars - but equally it presents very extensive risks, the most pronounced of which is the theoretical future risk of an artificial general intelligence that could end human existence.

Now, the consensus is that this is at least 30 years away, if that's a comfort to you. But I was lucky enough to attend [computer scientist Professor] Stuart Russell's Reith lecture earlier this month. And his view is that a spark of human ingenuity could come at any time. And the threat is so great as to justify regulatory work now.

The media-driven urge to talk up the apocalypse is a longstanding problem in the technology sector. While no one denies there are risks and deep ethical considerations around AI, automation, robotics, and other Industry 4.0 technologies, the Terminator cliché of technology turning on its human masters rather ignores the AI systems that speed the development of vaccines, help in the early detection of diseases and pandemics, and minimize heat, waste, and carbon emissions in power grids and data centres - all things that seek to avert apocalypse. By speeding up analysis and spotting hidden patterns in data, AI might be the thing that saves the planet. Hervey continued:

But the important thing about AI is there are current harms, which regulation, policy, and law need to address. And that's things like access to finance, access to healthcare, fake news, and encouragement to political extremism. And I think we can also see an increasing concentration of power and wealth [among tech companies].

Many people also see this as a race - a race for investment, to preserve our national security, and even a race to defend core values and culture.

Fair points, though they don't just apply to AI; some are equally relevant to all forms of algorithm, and to cloud or social platforms. That said, the risk that AI might be used to deny services or opportunities to individuals or groups, because of historic biases in data or because organizations believe it benefits them, is real.

Keeping pace with technology 

One of the problems of seeing AI as a race for global dominance is that races can be won by cutting corners, especially when the rewards might be vast. This presents tough challenges to policy, law, and regulation, he said:

The first is it's an extremely fast-moving technology. We've really seen a sea change in the last 10 years with data and processing power enabling machine learning to an extent never seen before. The issue here is that law and regulation tend to lag badly behind technology.

Then it's important to realize that the status quo is not neutral. There's uncertainty in this area as to whether your investment will be wasted, because regulation may yet outlaw what you're doing, and that uncertainty puts you at risk.

What's more, current tax and liability laws distort investment. For example, national insurance encourages automation [in processing], whereas liability rules encourage companies to use humans for the same task. Now, tax and liability laws are valid levers to influence investment, but they were not designed for AI, so their impact on innovation may be arbitrary.

Another problem for lawmakers and regulators is the dearth of skills and expertise in the sector: thousands of AI and data science jobs are going unfilled in the wider market. This makes it harder for regulators and policy wonks to recruit subject matter experts, especially when they can't match the remuneration opportunities of a tech titan. AI also has applications across every industry, so regulators and lawmakers are all competing for the same few experts.

And if that wasn't enough, there are further challenges, said Hervey:

There are no established technical solutions to some of the key challenges. These include explicability, bias, robustness, and proof of safety, so, at the moment, technical standards beyond merely defining terms are not realistic.

And finally, with the broader impacts of AI on society, such as reducing privacy and encouraging political extremism, the effects are hard to predict and therefore hard to regulate. But even when these issues are known, it is not clear how regulation should resolve them. They involve broad issues of freedom of speech, freedom of choice, culture, and values, and so really require resolution by political consensus.

Again, the caveat here is that these challenges don't just apply to AI; they also reflect, I would argue, governments' failure to regard the Big Tech platforms as publishers, which would solve many problems at a stroke. Big Tech companies are publishers when it suits them - which is most of the time - but not when it means being liable for content. That's the point at which they claim they are merely facilitating debate, so what's it to do with them if they enable the spread of false narratives, encourage insurrection, or platform deliberate interference by hostile states?

But on AI there are positive signs of industry taking the lead by recognising that a multidisciplinary approach is needed, said Hervey:

[In terms of] collaboration between academics and industry in the UK, we have many positive examples - for example, the Turing Institute's work with the ICO, and Rolls-Royce's promotion of an ethical framework for AI [tech companies such as Microsoft, Apple, and Google have produced similar initiatives].

Regulation needs to focus on impact assessments for now, similar to GDPR. But in particular, we need to encourage the evaluation of impacts not merely on a single data owner, but also on a range of stakeholders who will be affected by the use of AI.

We also need to focus on the appropriateness of AI, given its shortcomings in any given application. And we must insist on mitigations to the risks of bias, lack of robustness, and the like. I think technical standards and accreditation must follow only as technical solutions are identified.

Updating IP rights for AI 

So, what is Hervey's personal take on all this, beyond these broad, but helpful, statements of policy? He said:

If I wanted to pick one priority for work, it would be a proof of safety to unlock the potential benefits of AI in healthcare, or in transportation.

And given my expertise in IP, our IP rights were largely designed in the 1960s and they were not designed for AI. For example, there is no protection for data per se. Plus it's difficult to patent inventions that improve AI, and even more difficult - impossible really - to patent inventions made by AI. So, there is a high reliance on trade secrets when dealing with AI innovations, and this creates conflict with regulatory oversight.

Meanwhile, a lack of rights in data makes data-sharing much harder. This requires contractual solutions and practical measures to keep data secret from a third party. So, there's a risk of market failure if inventions by AI are not protected, assuming inventions by AI become common.

These problems for regulators and law enforcers have other knock-on effects: the difficulty in protecting data and innovation when it comes to AI creates a reliance on industrial secrecy. In turn, this makes organizations, and the country as a whole, vulnerable to cyberattack.

Hervey concluded:

I would say that the absolute priority is to improve our exceptions to copyright protection to allow text and data mining, particularly for commercial purposes, albeit with the right of copyright owners to opt out. Because this would allow the training of AI using third-party copyrighted works.

But the big-picture challenge for the UK is that it has isolated itself on the world stage, suggested Hervey:

I think we are currently at a competitive disadvantage for R&D in this country. And I think the same goes for regulation. Generally, we all face the Brussels effect, whereby products and services sold in the UK need to comply with EU regulations in any case. And more generally, do investors really want a proliferation of different national standards?

We should adhere to EU requirements but streamline compliance and potentially allow more flexible requirements during the R&D phase of products before they are sold to the public.

My take

Once Hervey had side-lined the apocalypse in his presentation, his common-sense proposals were worth waiting for. However, the problem - as ever in Brexit Britain - is that they may not suit the UK's dominant political narrative of ‘to hell with Europe'. The irony is that this narrative is doing untold damage to the UK's ability to succeed on its own terms.

