

How OpenAI’s Sam Altman is keeping up the AI safety balancing act - Vox
source link: https://www.vox.com/future-perfect/2023/5/24/23735698/openai-sam-altman-ai-safety-legislation-risks-development-regulation

This story is part of a group of stories called Finding the best ways to do good.
“AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies,” OpenAI CEO Sam Altman once said. He was joking. Probably. Mostly. It’s a little hard to tell.
Altman’s company, OpenAI, is raising unfathomable amounts of money in order to build powerful, groundbreaking AI systems. “The risks could be extraordinary,” he wrote in a February blog post. “A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too.” His overall conclusion, nonetheless: OpenAI should press forward.
There’s a fundamental oddity on display whenever Altman talks about existential risks from AI, and it was particularly notable in his most recent blog post, “Governance of superintelligence”, which also lists OpenAI president Greg Brockman and chief scientist Ilya Sutskever as co-authors.
It’s kind of weird to think that what you do might kill everyone, but still do it
The oddity is this: Altman isn’t wholly persuaded of the case that AI may destroy life on Earth, but he does take it very seriously. Much of his writing and thinking is in conversation with AI safety concerns. His blog posts link to respected AI safety thinkers like Holden Karnofsky, and often dive into fairly in-depth disagreements with safety researchers over questions like how the cost of hardware at the point where powerful systems are first developed will affect “takeoff speed” — the rate at which improvements to powerful AI systems drive development of more powerful AI systems.
At the very least, it is hard to accuse him of ignorance.
But many people, if they thought their work had significant potential to destroy the world, would probably stop doing it. Geoffrey Hinton left his role at Google when he became convinced that dangers from AI were real and potentially imminent. Leading figures in AI have called for a slowdown while we figure out how to evaluate systems for safety and govern their development.
Altman has said OpenAI will slow down or change course if it comes to realize that it’s driving toward catastrophe. But right now he thinks that, even though everyone might die of advanced AI, the best course is full steam ahead, because developing AI sooner makes it safer and because other, worse actors might develop it otherwise.
Altman appears to me to be walking an odd tightrope. Some of the people around him think that AI safety is fundamentally unserious and won’t be a problem. Others think that safety is the highest-stakes problem humanity has ever faced. OpenAI would like to alienate neither of them. (It would also like to make unfathomable sums of money and not destroy the world.) It’s not an easy balancing act.
“Some people in the AI field think the risks of AGI (and successor systems) are fictitious,” the February blog post says. “We would be delighted if they turn out to be right, but we are going to operate as if these risks are existential.”
And as momentum has grown toward some kind of regulation of AI, fears have grown — especially in techno-optimist, futurist Silicon Valley — that a vague threat of doom will lead to valuable, important technologies that could vastly improve the human condition being nipped in the bud.
There are some genuine trade-offs between ensuring AI is developed safely and building it as fast as possible. Regulatory policy adequate to notice if AI systems are extremely dangerous will probably add to the costs of building powerful AI systems, and will mean we move slower as our systems get more dangerous. I don’t think there’s a way out of this trade-off entirely. But it’s also obviously possible for regulation to be wildly more inefficient than necessary, to crush lots of value with minimal effects on safety.
Trying to keep everyone happy when it comes to regulation
The latest OpenAI blog post reads to me as an effort by Altman and the rest of OpenAI’s leadership to once again dance a tightrope: to call for regulation that they think will be adequate to prevent the literal end of life on Earth (and other catastrophes), and to ward off regulation that they think will be blunt, costly, and bad for the world.
That’s why the so-called governance road map for superintelligence contains paragraphs warning: “Today’s systems will create tremendous value in the world and, while they do have risks, the level of those risks feel commensurate with other Internet technologies and society’s likely approaches seem appropriate.
“By contrast, the systems we are concerned about will have power beyond any technology yet created, and we should be careful not to water down the focus on them by applying similar standards to technology far below this bar.”
Cynically, this just reads as “regulate us at some unspecified future point, not today!” Slightly less cynically, I think both of the sentiments Altman is trying to convey here are deeply felt in Silicon Valley right now. People are scared both that AI is something powerful, dangerous, and world-changing, worth approaching differently than your typical consumer software startup — and that many possible regulatory proposals would strangle human prosperity in its cradle.
But the problem with “regulate the dangerous, powerful future AI systems, not the present-day safe ones” is that, because AI systems built with current training techniques are poorly understood, it’s not actually clear it will be obvious when the “dangerous, powerful” ones show up — and there will always be a commercial incentive to say that a system is safe when it’s not.
I’m excited about specific proposals to tie regulation to specific capabilities: to have higher standards for systems that can do large-scale independent actions, systems that are highly manipulative and persuasive, systems that can give instructions for acts of terror, and so on. But to get anywhere, the conversation does have to get specific. What makes a system powerful enough to be important to regulate? How do we know the risks of today’s systems, and how do we know when those risks get too high to tolerate? That’s what a “governance of superintelligence” plan has to answer.