
source link: https://www.washingtonpost.com/technology/2022/07/22/google-ai-lamda-blake-lemoine-fired/

Google fired engineer who said its AI was sentient

Blake Lemoine, who claimed Google’s chatbot generator LaMDA was sentient, has been fired

Updated July 22, 2022 at 8:57 p.m. EDT|Published July 22, 2022 at 8:25 p.m. EDT
Blake Lemoine in San Francisco in June. (Martin Klimek for The Washington Post)

Blake Lemoine, the Google engineer who told The Washington Post that the company’s artificial intelligence was sentient, said the company fired him on Friday.

Lemoine said he received a termination email from the company on Friday along with a request for a video conference. He asked to have a third party present at the meeting, but he said Google declined. Lemoine says he is speaking with lawyers about his options.

Lemoine worked for Google’s Responsible AI organization and, as part of his job, began talking to LaMDA, the company’s artificially intelligent system for building chatbots, in the fall. He came to believe the technology was sentient after signing up to test if the artificial intelligence could use discriminatory or hate speech.

In a statement, Google spokesperson Brian Gabriel said the company takes AI development seriously, has reviewed LaMDA 11 times, and has published a research paper detailing its efforts at responsible development.


“If an employee shares concerns about our work, as Blake did, we review them extensively,” he added. “We found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months.”

He attributed the discussions to the company’s open culture.

“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” Gabriel added. “We will continue our careful development of language models, and we wish Blake well.”

Lemoine’s firing was first reported in the newsletter Big Technology.

Lemoine’s interviews with LaMDA prompted a wide discussion about recent advances in AI, public misunderstanding of how these systems work, and corporate responsibility. Google previously pushed out the heads of its Ethical AI division, Margaret Mitchell and Timnit Gebru, after they warned about risks associated with this technology.

LaMDA utilizes Google’s most advanced large language models, a type of AI that recognizes and generates text. These systems cannot understand language or meaning, researchers say. But they can produce deceptively humanlike speech because they are trained on massive amounts of data crawled from the internet to predict the next most likely word in a sentence.
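The next-word prediction described above can be illustrated in miniature. The sketch below is a toy bigram model — counting which word tends to follow which in a small sample text — and has nothing to do with LaMDA's actual architecture, which uses vastly larger neural networks trained on internet-scale data; it only shows the basic idea of picking the statistically most likely next word.

```python
from collections import Counter, defaultdict

# Toy illustration only (not LaMDA): a bigram model that predicts the
# most likely next word from counts of word pairs seen in training text.

def train_bigrams(corpus):
    """Count which word follows which in the training text."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the word most frequently observed after `word`."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat ran"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" more often than "mat"
```

A real large language model replaces these raw counts with learned probabilities over tens of thousands of tokens, which is what lets it produce the fluid, humanlike text the article describes.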


After LaMDA talked to Lemoine about personhood and its rights, he began to investigate further. In April, he shared a Google Doc with top executives called “Is LaMDA Sentient?” that contained some of his conversations with LaMDA, where it claimed to be sentient. Two Google executives looked into his claims and dismissed them.

Lemoine was previously put on paid administrative leave in June for violating the company’s confidentiality policy. The engineer, who spent most of his seven years at Google working on proactive search, including personalization algorithms, said he is considering starting his own AI company focused on collaborative storytelling video games.
