Source: https://arstechnica.com/tech-policy/2023/05/ai-technology-can-go-quite-wrong-openai-ceo-tells-senate/

Please regulate us —

AI technology “can go quite wrong,” OpenAI CEO tells Senate

Advanced AI systems should require government licenses, GPT-4 maker's CEO says.

Jon Brodkin - 5/16/2023, 9:02 PM

[Image: OpenAI CEO Sam Altman testifies about AI rules before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law on May 16, 2023, in Washington, DC. Credit: Getty Images | Win McNamee]

OpenAI CEO Sam Altman testified before the US Senate today about the potential dangers of artificial intelligence technology made by his company and others, urging lawmakers to impose licensing requirements and other regulations on organizations that make advanced AI systems such as OpenAI's GPT-4.

"We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models," Altman said. "For example, the US government might consider a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities."

While Altman touted AI's benefits, he said that OpenAI is "quite concerned" about elections being affected by content generated by AI. "Given that we're going to face an election next year and these models are getting better, I think this is a significant area of concern... I do think some regulation would be quite wise on this topic," Altman said.

Altman was speaking at a hearing held by the Senate Judiciary Committee's Subcommittee on Privacy, Technology, and the Law. Also testifying was IBM's chief privacy and trust officer, Christina Montgomery.

"IBM urges Congress to adopt a precision regulation approach to AI," Montgomery said. "This means establishing rules to govern the deployment of AI in specific use cases, not regulating the technology itself." Montgomery said that Congress should clearly define the risks of AI and impose "different rules for different risks," with the strongest rules "applied to use cases with the greatest risks to people and society."


AI tech “can go quite wrong”

Several lawmakers commented on OpenAI and IBM's willingness to face new rules, with Sen. Dick Durbin (D-Ill.) saying it's remarkable that big companies came to the Senate to "plead with us to regulate them."

Altman suggested that Congress form a new agency that licenses AI tech "above a certain scale of capabilities and could take that license away to ensure compliance with safety standards." Before an AI system is released to the public, there should be independent audits by "experts who can say the model is or isn't in compliance with these stated safety thresholds and these percentages on questions X or Y," he said.

Altman said he is worried that the AI industry could "cause significant harm to the world."

"I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that," Altman said. "We want to work with the government to prevent that from happening."

Altman said he doesn't think burdensome requirements should apply to companies and researchers whose models are much less advanced than OpenAI's. He suggested that Congress "define capability thresholds" and place AI models that can perform certain functions into the strict licensing regime.

As examples, Altman said that licenses could be required for AI models "that can persuade, manipulate, influence a person's behavior, a person's beliefs," or "help create novel biological agents." Altman said it would be simpler to require licensing for any system that is above a certain threshold of computing power, but that he would prefer to draw the regulatory line based on specific capabilities.

OpenAI consists of both nonprofit and for-profit entities. Altman said that OpenAI's GPT-4 model is "more likely to respond helpfully and truthfully and refuse harmful requests than any other model of similar capability," partly due to extensive pre-release testing and auditing:


Before releasing any new system, OpenAI conducts extensive testing, engages external experts for detailed reviews and independent audits, improves the model's behavior, and implements robust safety and monitoring systems. Before we released GPT-4, our latest model, we spent over six months conducting extensive evaluations, external red teaming, and dangerous capability testing.

Altman also said that people should be able to opt out of having their personal data used for training AI models. OpenAI last month announced that ChatGPT users can now turn off chat history to prevent conversations from being used to train AI models.

People shouldn’t be “tricked” into interacting with AI

Montgomery pitched transparency requirements, saying that consumers should know when they're interacting with AI. "No person anywhere should be tricked into interacting with an AI system... the era of AI cannot be another era of move fast and break things," she said.

She also said the US should quickly hold companies accountable for deploying AI "that disseminates misinformation on things like elections."

Senators heard from Gary Marcus, an author who founded two AI and machine-learning companies and is a professor emeritus of psychology and neural science at New York University. He said at today's hearing that AI can create persuasive lies and provide harmful medical advice. Marcus also criticized Microsoft for not immediately pulling the Sydney chatbot after it exhibited alarming behavior.

"Sydney clearly had problems... I would have temporarily withdrawn it from the market and they didn't," Marcus said. "That was a wake-up call to me and a reminder that even if you have companies like OpenAI that is a nonprofit... other people can buy those companies and do what they like with them. Maybe we have a stable set of actors now, but the amount of power that these systems have to shape our views and lives is really significant, and that doesn't even get into the risks that someone might repurpose them deliberately for all kinds of bad purposes."
