Source: https://www.washingtonpost.com/technology/2022/01/17/artificial-intelligence-ai-empathy-emotions/
A former Google scientist has a new approach to make AI more ethical
Alan Cowen, founder of Hume AI. The Google veteran says psychology research is key to more ethical artificial intelligence. (Courtesy of Hume AI)
January 17, 2022 at 6:00 a.m. EST

As artificial intelligence lays claim to growing parts of our social and consumer lives, it’s supposed to eliminate all the creeping flaws humans introduce to the world.

The reality, of course, is quite different. From Facebook algorithms that learn how to stoke anger to facial recognition apps that don’t recognize people of color, AI frequently offers less of an improvement on the status quo than an insidious reinforcement of it.

Now a Silicon Valley upstart says he has a fresh approach to the problem. Alan Cowen, a former Google data scientist with a background in psychology, has created a research company, Hume AI, and a companion not-for-profit that he says can help make the whole messy business of AI more empathetic and human.

Trained on hundreds of thousands of facial and vocal expressions from around the world, artificial intelligence on the Hume platform can react to how users are truly feeling and cater more closely to their emotional needs, Cowen said.


He said he hopes the platform will eventually be incorporated into consumer applications like viral videos and digital assistants, even though its efficacy is still uncertain and large tech companies are unlikely to implement it quickly.

On Monday, Cowen is expected to announce a set of ethical guidelines he hopes will be agreed to by the companies that use the platform. Its beta launch, he will announce, will happen in March, with a more formal unveiling to follow. It will be free for many researchers and developers.

“We know this is going to be a long fight,” Cowen said in an interview. “But we need to start optimizing for well-being and not for engagement.”

Translation: The goal for algorithms should not be to keep us constantly clicking and buying but to make us better people and our society more just. And Cowen said his psychologically based approach can help.


The entrepreneur, 31, is far from the first tech player to try to inject humanity into digital spaces. The “ethical AI” movement has as its broad goal incorporating fairness into algorithms and counts among its members many pedigreed organizations, such as Marc Rotenberg’s policy-oriented Center for AI and Digital Policy and the new bias-fighting Distributed Artificial Intelligence Research Institute from former Google star Timnit Gebru, to name just two.

There are also a slew of academic experts, like Duke University’s Cynthia Rudin, who have taken strong public positions on simplifying AI to eliminate social harm.

What Cowen brings to the endeavor is a deep grounding in psychological research to accompany those ethical goals. His past work includes studying emotional responses across cultures (like a study on similar reactions to sad songs in the United States and China) and work on the many nuances of vocal inflections.


He comes with an army of names. Hume Initiative, the nonprofit, has convened an ethics committee with many heavy-hitters in the field of emotional and ethical AI, including the founder of Google’s human-centered Empathy Lab, Danielle Krettek Cobb; the “algorithmic fairness” expert Karthik Dinakar; and Dacher Keltner, the University of California at Berkeley professor who was Cowen’s graduate school mentor and advised Pixar on the emotions in “Inside Out.”

Cowen said he has raised $5 million from the start-up studio Aegis Ventures with another round to follow. Money will be channeled to investigating how AI can be crafted not just to process information with great speed and see unseen patterns, but also to inject human understanding, an approach Cowen has dubbed “empathic AI.” (At Google, Cowen’s research involved “affective computing,” which aims to increase machines’ ability to read and simulate emotion.)

The idea of more emotions might seem to contradict much of the thinking about AI, whose principal strength is often seen as its ability to make decisions precisely without human sentiment. But many in the affective-computing community say AI’s inability to read people is what makes it dangerous — and makes it critical that AI sees the humanity in the humans it is serving.


Cowen is particularly drawing on his research on emotions, going beyond the six basic emotions scientists have worked with since a 1990s study to categorize more than 20 that manifest in the face, voice and body. Many of them are positive. (Four of the original six were negative.)

The digital applications for that work are, at least in theory, vast. Cowen foresees digital-assistant algorithms recognizing emotions on our faces and in our voices and then making recommendations accordingly, like suggesting soothing sounds for the stressed.

He also imagines crisis call-center operators employing Hume-based AI to help diagnose the seriousness of a person’s depression from their vocalizations.

Keltner noted further rich possibilities, including customizing educational plans based on children’s particular emotional responses and judging the real-world risks of incendiary political content.


Cowen said he even envisions social media companies using Hume’s platform to gauge a user’s mood — then algorithmically adjusting served posts to improve it.

“Is it increasing people’s sadness when they see a post? Is it making them sad a day or two later even when the post is gone?” Cowen said. “Companies have been lacking objective measures of emotions, multifaceted and nuanced measures of people’s negative and positive experiences. And now they can’t say that anymore.”

Of course, there’s no guarantee that if a company’s AI could gauge an emotion it wouldn’t exploit it; when standing at the intersection of data and well-being, Big Tech does not necessarily take the high road.

Another challenge for AI that factors in emotion is avoiding building in the emotions of its human programmers. Cowen’s partners, at least, say they believe it avoids that pitfall. “Hume’s models are rich in information but still non-judgmental,” said Arjun Nagendran, co-founder of the virtual-reality employee-training company Mursion.


Krettek Cobb, the Google Empathy Lab founder, said she sees Cowen’s work as essential to our interactions with the digital world.

“For technology to serve us in the most caring and compassionate way, it needs to understand the full spectrum of the human story,” she wrote in an email, citing the importance of feelings in that story. “This is why empathic AI holds such great promise — it could allow technology to attune to these critical emotional layers and reflect the most enlivening parts of ourselves.”

(In the guidelines, Cowen says Hume’s goal is that AI be used to “strengthen humanity’s greatest qualities of belonging, compassion, and well-being” while also reducing “the risks of misuse.” The guidelines ask signees to take such pledges as “Empathic AI should never be used to develop more cruel forms of psychological warfare by optimizing for negative emotional responses.”)


Ben Shneiderman, a University of Maryland professor and AI expert who next month will publish the book “Human-Centered AI,” about achieving more equitable AI outcomes, said that initiatives like Cowen’s can play a role in creating ethical AI, but that they’re not all that’s needed.

“The right technology can help a lot. But if you’re just looking at technology to create a safety culture, it’s not going to work,” he said. He cited government regulation and hard standards set by the likes of insurance companies and auditors as essential.

Hume’s adoption headwinds could be fierce. A Pew Research Center study published in June found that more than two-thirds of AI experts did not believe that artificial intelligence would be used mostly for social good by 2030.

And whether empathic AI should even be embraced by tech companies is unclear. There’s an argument that just as adoption of mercilessly high-powered AI should proceed slowly, so should emotional inputs that could worsen the problem. Would Siri knowing how we feel make for a healthier human-machine relationship? Or creepily give it more power over us?

Cowen acknowledged the dangers of empowering fast-growing AI with more emotional data. But he also said that the alternative is far scarier.

“If we continue to improve these algorithms to optimize for engagement without projects like empathic AI, then soon kids will be spending 10 hours a day on social media,” he said. “And I don’t think that’s good for anyone.”

