source link: https://diginomica.com/wrap-warm-ai-summer-or-you-might-catch-cold

Wrap up warm for AI Summer, or you might catch cold

By Chris Middleton

November 23, 2022


Artificial Intelligence (AI) is now in bloom, with its decades-long Spring finally giving way to “AI Summer”. So said speakers at a Westminster eForum this week on next steps for the technology. But despite a burgeoning market worldwide, some heavy weather remains, with a chill in the air for anyone rushing outside without protection.

For example, a new report from IT services and consulting firm Infosys reveals that most companies are still failing to convert data science into what they really need: measurable business value. This echoes other reports in recent years, which have found that many users expect quick cost savings from AI, rather than deploying it to create smarter, more insight-driven operations. 

Companies looking for swift and easy paybacks are likely to be disappointed, it seems. According to the survey of 2,500 senior IT decision-makers by the Infosys Knowledge Institute (conducted across 13 business sectors in the US, UK, France, Germany, Australia, and New Zealand), only one-quarter of users are very satisfied with their results to date. 

Among the leaders in AI adoption, the most satisfied are those that have focused on improved data sharing, trust and ethical usage, and integrating data science with their line-of-business functions. This suggests that strategic initiatives in support of clear, top-down aims are more likely to succeed than tactical, hype-led programs.

That message was certainly rammed home at this week’s Westminster conference. Speaking at the eForum on Monday, Stephen Almond, Director of Technology and Innovation at UK data protection watchdog the Information Commissioner’s Office (ICO), said:

At the ICO, we see a bright future for AI in the UK, underpinned by terrific trust and confidence in how AI is developed and used. But the actions of an irresponsible few risk undermining this.

Who could Almond be referring to? Last month, the ICO took the unorthodox step of issuing a warning about an entire, much-hyped subset of AI: emotion recognition and analysis systems. The move was even more unusual given the ICO’s new remit of supporting UK innovation.

He explained:

For those of you not familiar with them, these technologies process data such as gaze tracking, facial movements, heartbeats, and skin moisture to draw inferences about people's emotions. 

Some organizations are looking into these technologies as possible ways of making important decisions; for example, identifying whether people might be fraudsters or how people might respond to stress at work. 

But there's just one problem: the science doesn't stack up. The inability of algorithms that aren't yet sufficiently developed to detect emotional cues can create a risk of systemic bias, inaccuracy, and even discrimination.

Almond added:

Frankly, organizations shouldn't be using meaningless information to make what can be very meaningful decisions.

Ouch. Among the countless, obvious problems with the technology is the fact that some people have more expressive faces than others, but this doesn’t make their expressions any more trustworthy. Any AI that penalizes shy or nervous people simply because they are sweating or frowning doesn’t seem like a desirable innovation. 

A joker might point out that people with long faces are tired of being asked why they are sad, while Botox users may look permanently surprised. But the serious point is this: we can’t all be deliriously happy in these perilous times, and we don’t all express our feelings in the same way if we are. Despite our fondness for using them, we humans are far more complex than emojis.

A stern warning

Yet there are even deeper problems with any facile application of emotion-tracking technology. For example, facial and other cues may vary from culture to culture, from ethnicity to ethnicity, and even by gender or age. 

And that’s without even considering the problems that some facial recognition systems have had when identifying ethnic minorities or women (due to computer vision failures and a preponderance of training data being gathered by, and from, white males).

Factor in the risks of trawling for personal data in public spaces without the subjects’ knowledge or consent and you have a technology that is ripe for abuse. There’s also a high risk of error, automated bias, and exclusion. 

And that’s not all. Emotion recognition and analysis technology may oblige people to use cameras and sensors against their will – which may have cultural, religious, mental health, and security implications – while having their homes, workplaces, or private lives monitored. Why should any company have a library of customers’ facial expressions? Some users may not be aware they are being surveilled.

In short, the list of problems gets longer the more you think about it, which raises the question: why are some organizations apparently giving these issues so little thought? In a stark warning to both the AI industry and its users, Almond said:

We have yet to see any emotion analysis technologies that would fully meet the requirements of data protection law, although the door is always open for people who want to come to us.

Organizations that are not accepting responsibility and are causing harm to vulnerable people can expect to be investigated, and our regulatory expectations are clear. 

When you're processing people's personal data, you must do so lawfully, fairly, and transparently. You must avoid excessive collection of personal data. You must keep it secure when you're processing sensitive personal data – such as ethnicity or health – or when you're using AI to take automated decisions that could have legal or similarly significant effects. And you have stringent obligations in place.

He concluded:

At the ICO, we expect a high degree of accountability for any AI that uses people's data. We expect you to be responsible for the compliance of your system, to actively and proactively assess and mitigate its risks, and to justify the choices that you make.

Strong words. And a vital warning about the need for explicable, auditable algorithms and transparent data – one that applies equally to any AI initiative, not just to emotion recognition systems.

A Galactic error

However, the ICO’s concern about one poorly designed, unproven technology is not the only cloud over the AI garden at present. At least one Big Tech company has been bringing the market into disrepute all on its own. 

Facebook and Instagram parent Meta – one of several giants to have been haemorrhaging staff lately – released its open-source large-language model Galactica this month, in a much-hyped move that rapidly went wrong. 

The technology Zucked, to put it mildly.

The system had supposedly been trained on 48 million different academic and scientific papers, articles, websites, and even lecture notes, with the aim of providing rapid insights to time-poor researchers. But instead, Galactica began ‘hallucinating’ non-existent research, crediting authors with things they hadn’t said, and spewing out other incorrect findings, references, and attributions. 

Some of these errors were only apparent to experts in different fields of learning, who tested the system and found it woefully inadequate and, in some cases, dangerously wrong. 

Granted, some scholarly papers seem designed to be impenetrable even to an audience of peers, but the implications of such a high-profile AI failure for both human knowledge and academic reputations are too grim to contemplate: a world of false research and inaccurate findings, apparently authored by trusted sources. Is that what the world needs?

Thankfully, the system was taken offline after less than three days. But imagine if it hadn’t been and you lacked the expertise to realize that Galactica was giving you bogus information. How far might that data have spread before the errors were spotted? 

My take

The lesson is that we may be expecting too much, too quickly, from some AI ventures, while losing sight of obvious risks and challenges. As decision-makers, we need to do better than that, especially when handling the personal data of millions of citizens and customers, many of whom may be vulnerable.

Meta is just one example of hubris on an epic scale. However, it’s fair to say that the company sometimes gives the impression of knowing almost nothing about real human behaviour, despite gathering unprecedented amounts of data about it. 

After all, this is the vendor that sees our futures taking place in the Metaverse, a realm that for some reason resembles a 1990s video game or some collective boomer hallucination rather than a place anyone would want to live and work. 

Caveat emptor, as someone once said (I’d ask Galactica who, but it wouldn’t be wise). So, just remember: using artificial intelligence should never mean abandoning your own common sense. And that applies to vendors as much as it does to users.

