AI Researcher Anca Dragan on Helping Robots Understand Humans

 4 years ago
source link: https://www.wired.com/story/anca-dragan-artificial-intelligence-berkeley-wired25/

When humans and robots cross paths, the results aren’t just frustrating—the autonomous car, say, that’s too shy to turn left—they can also be fatal. Consider last year’s Uber crash, in which the self-driving algorithms weren’t coded to yield to an unexpected human jaywalker.

At the WIRED25 conference Friday, Anca Dragan, a professor who studies human-robot interaction at UC Berkeley, spoke about what it takes to avoid those kinds of problems. Her interest is in what happens when robots graduate beyond virtual worlds and wide-open test tracks, and start dealing with unpredictable humans.

“It turns out that really complicates matters,” she says.

The issues go beyond simply teaching robots to treat humans as obstacles to be avoided. Instead, robots need to be given a predictive model of how humans behave. That isn’t easy; even to each other, humans are basically black boxes. But the work done in Dragan’s lab revolves around a fundamental insight: “Humans are not arbitrary because we’re actually intentional beings,” she says. Her group designs algorithms that help robots figure out our goals: that we’re trying to reach that door or pass on the freeway or take that turn. From there, a robot can begin to infer what actions you’ll take to get there, and how best to avoid cutting you off.
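The idea Dragan describes, treating people as intentional agents and inferring their goal from the actions they take, can be illustrated with a toy Bayesian sketch. Everything here is an assumption made up for the example (the goal names, positions, and the "rationality" constant `BETA`); it is not code from Dragan's lab, just a minimal picture of how observing steps can sharpen a robot's belief about where a person is headed.

```python
import math

# Toy goal inference: a "noisily rational" human on a 1-D line is more
# likely to step toward their goal than away from it. Observing steps
# lets us update a belief over which goal they are pursuing.

GOALS = {"door": 10, "window": -10}   # hypothetical goal positions
BETA = 1.0                            # assumed rationality: higher = more goal-directed

def step_likelihood(pos, step, goal_pos, beta=BETA):
    """P(step | goal): softmax over how much each candidate step
    (+1 or -1) reduces the distance to the goal."""
    def progress(s):
        return abs(goal_pos - pos) - abs(goal_pos - (pos + s))
    num = math.exp(beta * progress(step))
    den = sum(math.exp(beta * progress(s)) for s in (+1, -1))
    return num / den

def infer_goal(start_pos, steps):
    """Start from a uniform prior over goals and apply Bayes' rule
    after each observed step, renormalizing as we go."""
    beliefs = {g: 1.0 / len(GOALS) for g in GOALS}
    pos = start_pos
    for step in steps:
        for g, gpos in GOALS.items():
            beliefs[g] *= step_likelihood(pos, step, gpos)
        total = sum(beliefs.values())
        beliefs = {g: b / total for g, b in beliefs.items()}
        pos += step
    return beliefs

# A few steps toward +10 make "door" the far more probable goal.
beliefs = infer_goal(0, [+1, +1, +1])
```

With the goal distribution in hand, a planner could then predict the person's likely next moves under each goal and weight them by these beliefs, which is the step that lets a robot avoid cutting someone off.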

It’s like that song, Dragan says: “Every step you take; every move you make” reveals your desires and intentions, and also the next moves you might take or make to get there.

Still, sometimes it’s impossible for robots and humans to figure out what the other will do next. Dragan gives the example of a robot driver and a human one pulling up to an intersection at the same exact moment. How do you avoid a stalemate or crash? One potential fix is to teach robots social cues. Dragan might have the robo-car inch back a bit—a signal to the human driver that it’s OK for them to go first. It’s one step towards getting us all to play a bit nicer.
