
Citing Risks to Humanity, AI & Tech Leaders Demand Pause on AI Research

source link: https://www.informationweek.com/big-data/citing-risks-to-humanity-ai-tech-leaders-demand-pause-on-ai-research

While applauding the ethical intentions of the open letter -- signed by Steve Wozniak, Turing Award winner Yoshua Bengio, Elon Musk, and over 1,000 other leaders -- some experts wonder if the approach is too little, too late.
[Image: a red circle and crossbar surround the word 'AI' in gold lettering, representing a ban on artificial intelligence. Credit: Dragon Claws / Alamy Stock Photo]

An open letter urging a pause on artificial intelligence -- signed by more than 1,000 top tech leaders and researchers in data science, artificial intelligence, and information technology -- calls for regulation on the emerging technology’s “profound risks to society and humanity.”

The nonprofit Future of Life Institute on Wednesday released the letter calling for a halt to the breakneck pace at which AI-powered chatbots like GPT-4, ChatGPT, and Google’s Bard are being developed and deployed. The letter is signed by the likes of Apple co-founder Steve Wozniak, Tesla firebrand Elon Musk, 2020 presidential candidate Andrew Yang, Turing Award winner and founder-scientific director of the Montreal Institute for Learning Algorithms Yoshua Bengio, Berkeley professor of computer science and co-author of the textbook "Artificial Intelligence: A Modern Approach" Stuart Russell, and a list of CEOs and researchers within the AI field.

According to the letter, titled “Pause Giant AI Experiments: An Open Letter,” the huge leaps in AI development experienced in the past several months “have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one -- not even their creators -- can understand, predict, or reliably control.”

The letter urges a 6-month, publicly verifiable pause in the development of AI systems “more powerful than GPT-4” and says governments should step in and enforce a moratorium if an agreement cannot be reached quickly. “AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” the letter says.

Putting the AI Genie Back in the Bottle

Natalia Modjeska, research director and leader of Omdia’s AI research team, says the letter may be well-intentioned, but wonders how effective the initiative will be. “While I applaud and fully support this initiative, I wonder whether it has any teeth,” she told InformationWeek. “You can’t put the genie back in the bottle. And really, of what we need to do, such as develop safety protocols, independent oversight, auditing, certification, watermarking … Realistically, how much of this can be done in six months?”

For Roger Kay, founder of market intelligence firm Endpoint Technologies Associates, regulating the quickly emerging technology is a near-impossible task. “You’re talking about multiple stakeholders and multiple jurisdictions and countries,” he said. “Our own government is so dysfunctional, it’s hard to see them really getting something done quickly. There’s no stopping this technology. But I also think there’s a lot of panic involved. People have been working on AI for decades. It’s just now entered the popular imagination in a big way.”

What will really guide the future of AI technology is monetization and companies jockeying for position to best use the technology, Kay says. “Right now, a whole bunch of leaders are saying ‘Let’s not let AI get out of hand,’ because they don’t have control of it. And we do have to take it all very seriously because there is so much at stake.”

The Future of Life Institute was created in 2014 with a mission to “steer transformative technology towards benefitting life and away from extreme large-scale risks.” Much of the concern surrounding AI centers on the sudden emergence of predictive text generation, which critics warn can be used maliciously and can lead to the spread of misinformation and other problems.

In an interview with the New York Times, AI critic and entrepreneur Gary Marcus said, “These things are shaping our world. We have a perfect storm of corporate irresponsibility, widespread adoption, lack of regulation and a huge number of unknowns.”

What to Read Next:

Should There Be Enforceable Ethics Regulations on Generative AI?

ChatGPT: An Author Without Ethics

AI Will Profoundly Impact Tech Products and Services

