source link: https://finance.yahoo.com/news/ai-researcher-warning-technology-over-114317785.html

An AI researcher who has been warning about the technology for over 20 years says we should 'shut it all down,' and issue an 'indefinite and worldwide' ban

Sawdah Bhaimiya
Fri, March 31, 2023, 8:43 PM GMT+9
An AI researcher warned that "literally everyone on Earth will die" if AI development isn't shut down. iLexx/Getty Images
  • One AI researcher who has been warning about the tech for over 20 years said to "shut it all down."

  • Eliezer Yudkowsky said the open letter calling for a pause on AI development doesn't go far enough.

  • Yudkowsky, who has been described as an "AI doomer," suggested an "indefinite and worldwide" ban.

An AI researcher who has warned about the dangers of the technology since the early 2000s said we should "shut it all down" in an alarming op-ed published by Time on Wednesday.

Eliezer Yudkowsky, a researcher and author who has been working on artificial general intelligence since 2001, wrote the article in response to an open letter from many big names in the tech world calling for a six-month moratorium on AI development.

The letter, signed by 1,125 people including Elon Musk and Apple's co-founder Steve Wozniak, requested a pause on training AI tech more powerful than OpenAI's recently launched GPT-4.

Yudkowsky's article, titled "Pausing AI Developments Isn't Enough. We Need to Shut it All Down," said he refrained from signing the letter because it understated the "seriousness of the situation" and asked for "too little to solve it."

He wrote: "Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die."

He explained that AI "does not care for us nor for sentient life in general," and we're far from instilling those kinds of principles in the tech at present.

Yudkowsky instead suggested a ban that is "indefinite and worldwide" with no exceptions for governments or militaries.

"If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue data center by airstrike," Yudkowsky said.

Yudkowsky has for many years been issuing bombastic warnings about the possibly catastrophic consequences of AI. Earlier in March he was described by Bloomberg as an "AI Doomer," with author Ellen Huet noting that he has been warning about the possibility of an "AI apocalypse" for a long time.
