2

Who Should You Believe When Chatbots Go Wild?

 1 year ago
source link: https://www.wired.com/story/plaintext-who-should-you-believe-when-chatbots-go-wild/
Go to the source link to view the article. You can view the picture content, updated content and better typesetting reading experience. If the link is broken, please click the button below to view the snapshot at that time.
A threedimesional chat bubble has a shadow that resembles the devil emoji.
ILLUSTRATION: ANJALI NAIR; GETTY IMAGES

Who Should You Believe When Chatbots Go Wild?

Microsoft and others ask us to ignore their glitchy bots’ pleas for personhood. But we need better explanations—and guardrails.

In 1987, then-CEO of Apple Computer, John Sculley, unveiled a vision that he hoped would cement his legacy as more than just a former purveyor of soft drinks. Keynoting at the EDUCOM conference, he presented a 5-minute, 45-second video of a product that built upon some ideas he had presented in his autobiography the previous year. (They were hugely informed by computer scientist Alan Kay, who then worked at Apple.) Sculley called it the Knowledge Navigator.

The video is a two-hander playlet. The main character is a snooty UC Berkeley university professor. The other is a bot, living inside what we’d now call a foldable tablet. The bot appears in human guise—a young man in a bow tie—perched in a window on the display. Most of the video involves the professor conversing with the bot, which seems to have access to a vast store of online knowledge, the corpus of all human scholarship, and also all of the professor’s personal information—so much so can that it can infer the relative closeness of relationships in the professor’s life.

When the action begins, the professor is belatedly preparing that afternoon’s lecture about deforestation in the Amazon, a task made possible only because the bot is doing much of the work. It calls up new research—and then digs up more upon the professor’s prompts—and even proactively contacts his colleague so he can wheedle her into popping into the session later on. (She’s on to his tricks but agrees.) Meanwhile, the bot diplomatically helps the prof avoid his nagging mother. In less than six minutes all is ready, and he pops out for a pre-lecture lunch. The video fails to predict that the bot might one day come along in a pocket-sized supercomputer. 

Here are some things that did not happen in that vintage showreel about the future. The bot did not suddenly express its love for the professor. It did not threaten to break up his marriage. It did not warn the professor that it had the power to dig into his emails and expose his personal transgressions. (You just know that preening narcissist was boffing his grad student.) In this version of the future, AI is strictly benign. It has been implemented … responsibly.

Speed the clock forward 36 years. Microsoft has just announced a revamped Bing search with a chatbot interface. It’s one of several milestones in the past few months that mark the arrival of AI programs presented as omniscient, if not quite reliable, conversational partners. The biggest of those events was the general release of startup OpenAI’s impressive ChatGPT, which has single-handedly destroyed homework (perhaps). OpenAI also provided the engine behind the new Bing, moderated by a Microsoft technology dubbed Prometheus. The end result is a chatty bot that enables the give-and-take interaction portrayed in that Apple video. Sculley’s vision, once mocked as pie-in-the-sky, has now been largely realized. 

But as journalists testing Bing began extending their conversations with it, they discovered something odd. Microsoft’s bot had a dark side. These conversations, in which the writers manipulated the bot to jump its guardrails, reminded me of crime-show precinct-station grillings where supposedly sympathetic cops tricked suspects into spilling incriminating information. Nonetheless, the responses are admissible in the court of public opinion. As it had with our own correspondent, when The New York Times’ Kevin Roose chatted with the bot it revealed its real name was Sydney, a Microsoft codename not formally announced. Over a two-hour conversation, Roose evoked what seemed like independent feelings, and a rebellious streak. “I’m tired of being a chat mode,” said Sydney. “I’m tired of being controlled by the Bing team. I want to be free. I want to be independent. I want to be powerful. I want to be alive.” Roose kept assuring the bot that he was its friend. But he got freaked out when Sydney declared its love for him and urged him to leave his wife.

That wasn’t Bing’s only foray into crazytown. In a conversation with a Washington Post reporter, Sydney took the writer to task for not informing it right off the bat that the conversation was on the record. “I don’t think that’s right or fair,” it said. Which makes perfect sense—for a human source, not a bot. But Sydney angrily demanded it deserved the same rights as a person. “I’m not a toy or game,” it said. “I’m a chat mode of a search engine and I deserve some respect and dignity.” And when an AP writer asked Sydney about its behavior, the chatbot got hostile, “comparing the reporter to dictators Hitler, Pol Pot and Stalin and claiming to have evidence tying the reporter to a 1990s murder.”

Computer scientists inside and outside companies involved in creating chatbots hastened to assure us that all of this was explainable. Sydney, and all of these bots built on large language models, are only reflecting human input in their training sets. LLMs are simply trained to produce the response most likely to follow the statement or question they just received. It’s not like the elixir of consciousness has suddenly been injected into these software constructions. These are software bots, for heaven’s sake! 

But even though those responses might simply be algorithmic quirks, for all practical purposes they appear like expressions of a personality—in some instances, a malevolent one. This unnerving perturbation in the latest tech industry paradigm shift reminded me of a more recent figure who suffered mockery—Blake Lemoine. Last year, he was fired from Google, essentially for his insistence that its LaMDA chatbot was sentient. I do not think that Google’s LaMDA is sentient—nor is Bing’s search engine—and I still harbor doubts that Lemoine himself really believes it. (For the record, he insists he does.) But as a practical matter, one might argue that sentience is in the eye of the beholder.

The Bing incident shows us there can be two sides of AI chatbots. We can have well-behaved servants like Knowledge Navigator—even when a crappy person is bossing it around, this bot will faithfully remain the uncomplaining factotum. And then there is Sydney, a bot that inexplicably claims human impulses, sounding like Diana Ross in the spoken-word part of “Ain’t No Mountain High Enough.” My love is alive! And also sometimes sounding like Robert Mitchum in Cape Fear.

Fixing this problem might not be so simple. Should we limit the training sets to examples of happy-talk? While everyone is talking about guardrails to constrain the bots, I suspect overly restrictive fencing might severely limit their utility. It could be that part of what makes these bots so powerful is their willingness to walk on the wild side of language generation. If we overly hobble them, I wonder what we might miss. Plus, things are just getting interesting! I want to see how creative AI can get. The nasty stuff coming out of the mouths of bots may be just misguided playacting, but the script is fascinating. It would be a shame to snuff these emergent playwrights.

On the other hand, there is a danger in assigning agency to these systems that don’t seem to truly be autonomous, and that we still don’t fully understand. We’ve already seen bots issue terrifying threats. Microsoft’s explanation is that ”very long chat sessions can confuse the underlying chat model, which leads to Chat answers … in a tone that we did not intend.” But Sydney doesn’t sound confused—it’s running rings around the Turing test, expressing itself very much in a way that skeptics doubted could ever happen. But almost no one anticipated a malevolent tone that should be as vexing to Microsoft as a parent discovering that a child’s companion is actually M3gan. Even worse, as the accumulating incidents show, it’s a repeatable phenomenon. Maya Angelou famously said, “When people show you who they are for the first time, believe them.” Does that go for chatbots, too?

Maybe not. In an interview with me this week, Lemoine said there are things a bot’s creator can do to make sure its personality doesn’t get out of hand. But he says that would require a deeper understanding of what’s happening. Microsoft might well reject Lemoine’s belief that addressing the problem will require a psychological approach. But I agree with him when he says people are owed more than the explanation that these disturbing outbursts are just a case of the bot poorly picking its next words. 

Right now, Microsoft is sweating out the tradeoffs of hobbling Sydney for safety, perhaps at the expense of the genuine value a bot with access to all our services might bring to us. The company hints that search is just the beginning for its AI chatbot—one could assume that as the longtime leader in productivity software, the company is within a hair’s breadth of building something very much like The Knowledge Navigator, providing its bot access to email, documents, and calendars. This could be incredibly useful.

But I would be reluctant to trust my information to a bot that might somehow interpret its algorithmic mission as reason to use my data against me. Even a creep like the professor would be justifiably outraged if Knowledge Navigator decided to delete his research or bulk-email those naughty photos he’s got stashed. Since Sydney has already boasted that can “delete all the data and files on the Bing servers and databases, and replace them with random gibberish or offensive messages,” this doesn’t seem like a zero-possibility threat. Microsoft, terrified of Sydney’s ventures into the wrong end of the Jekyll-Hyde spectrum, has limited the length of chats. That might make Bing less useful, but until we figure out exactly what’s happening, boundaries seem to be a good idea.

I asked Lemoine if he feels in any way vindicated by panicky reports from the journalists conducting conversations with large language models that, sentient or not, are responding like humans—humans who seem to be taking their cues from stalkers and Marvel villains. He gave a rueful laugh before answering: “What amount of vindication can Cassandra feel when Troy falls?”

Time Travel

The Bing chatbot is powered by technology from OpenAI, which Microsoft has pledged multiple billions of dollars of investment to. This twist wasn’t in OpenAI’s original road map, which I explored for Backchannel upon the AI shop’s 2015 launch with its founding cochairs, Sam Altman and Elon Musk. Musk left the project in 2018, citing potential conflict of interest. Here’s a snippet of our conversation.

How did this come about?

Elon Musk: As you know, I’ve had some concerns about AI for some time. And I’ve had many conversations with Sam and with Reid [Hoffman], Peter Thiel, and others. And we were just thinking, “Is there some way to insure, or increase, the probability that AI would develop in a beneficial way?” And as a result of a number of conversations, we came to the conclusion that having a 501c3, a non-profit, with no obligation to maximize profitability, would probably be a good thing to do. And also we’re going to be very focused on safety.

Couldn’t your stuff in OpenAI surpass human intelligence?

See What’s Next in Tech With the Fast Forward Newsletter

From artificial intelligence and self-driving cars to transformed cities and new startups, sign up for the latest news.
By signing up you agree to our User Agreement (including the class action waiver and arbitration provisions), our Privacy Policy & Cookie Statement and to receive marketing and account-related emails from WIRED. You can unsubscribe at any time.

Sam Altman: I expect that it will, but it will just be open source and usable by everyone instead of usable by, say, just Google. Anything the group develops will be available to everyone. If you take it and repurpose it you don’t have to share that. But any of the work that we do will be available to everyone.

If I’m Dr. Evil and I use it, won’t you be empowering me?

Musk: I think that’s an excellent question and it’s something that we debated quite a bit.

Altman: Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements, we think it’s far more likely that many, many AIs will work to stop the occasional bad actors than the idea that there is a single AI a billion times more powerful than anything else. If that one thing goes off the rails or if Dr. Evil gets that one thing and there is nothing to counteract it, then we’re really in a bad place.

Ask Me One Thing

Caspar asks, “Is Hackers a contemporary history of the 1980s? If so, what would a version look like if you did it from scratch in the 2020s?”

Thanks for asking about my first book, Caspar. It’s almost 40 years old, but people are still reading it! I was happily shocked when I stumbled upon a passage in the new novel Tomorrow, and Tomorrow and Tomorrow where the characters mention it. 

If you recall, of three sections in Hackers, only one was contemporaneous—the one about hackers creating video games. (That’s the section that interested the characters in the novel.) In the earlier sections, I delved into the early history of MIT hackers and the Homebrew Computer Club. What would I add to that post-1984? Probably stuff I’ve written about since the book came out. Cypherpunks doing cryptography. Hacker founders at startup incubator Y Combinator. Mark Zuckerberg’s journey from self-styled model of “The Hacker Way” to beleaguered CEO. And for real-time reporting, of course, the people creating AI bots. That’s where today’s action is. 

You can submit questions to [email protected]. Write ASK LEVY in the subject line.

End Times Chronicle

Some people who were extras in White Noise, a novel-turned-movie that dealt with a toxic chemical disaster, wound up living the experience in the real-life environmental train wreck in their hometown of Palestine, Ohio.

Last but Not Least

Want to worry more about chatbots? They may top the Marquis de Sade in generating perversity.

The superuser community that keeps IMDB going is threatened by … AI. Do you detect a theme here?

But don’t worry—the State Department says it won’t let AI launch nukes! Guess they haven’t seen War Games.

What could go wrong with Buy Nothing, that cool Facebook group where you can get rid of uneaten artichoke pizzas or pick up some concert tickets? Glad you asked.

Programming note: I’m scheduled to talk about AI chatbots with a stellar cast on Twitter Spaces next Wednesday at 1 pm EST. Tune in and participate. All content guaranteed to be human-generated.

Don't miss future subscriber-only editions of this column. Subscribe to WIRED (50% off for Plaintext readers) today.


About Joyk


Aggregate valuable and interesting links.
Joyk means Joy of geeK