
source link: https://www.washingtonpost.com/technology/2024/03/05/ai-research-letter-openai-meta-midjourney/

Top AI researchers say OpenAI, Meta and more hinder independent evaluations

Tech company policies have put a chill on independent AI research, says open letter

March 5, 2024 at 9:00 a.m. EST
A frame of a video generated by OpenAI's new artificial intelligence tool Sora, unveiled in Paris on Feb. 16. (Stefano Rellandini/AFP/Getty Images)

More than 100 top artificial intelligence researchers have signed an open letter calling on generative AI companies to allow investigators access to their systems, arguing that opaque company rules are preventing them from safety-testing tools being used by millions of consumers.

The researchers say strict protocols designed to keep bad actors from abusing AI systems are instead having a chilling effect on independent research. Such auditors fear having their accounts banned or being sued if they try to safety-test AI models without a company’s blessing.

The letter was signed by experts in AI research, policy, and law, including Stanford University's Percy Liang; Pulitzer Prize-winning journalist Julia Angwin; Renée DiResta of the Stanford Internet Observatory; Mozilla fellow Deb Raji, who has pioneered research into auditing AI models; Marietje Schaake, a former member of the European Parliament; and Brown University professor Suresh Venkatasubramanian, a former adviser to the White House Office of Science and Technology Policy.


The letter, sent to companies including OpenAI, Meta, Anthropic, Google and Midjourney, implores tech firms to provide a legal and technical safe harbor for researchers to interrogate their products.

“Generative AI companies should avoid repeating the mistakes of social media platforms, many of which have effectively banned types of research aimed at holding them accountable,” the letter says.

The effort lands as AI companies grow more aggressive about shutting outside auditors out of their systems.

OpenAI claimed in recent court documents that the New York Times's efforts to find potential copyright violations were "hacking" its ChatGPT chatbot. Meta's new terms say it will revoke the license to LLaMA 2, its latest large language model, if a user alleges the system infringes on intellectual property rights. Movie studio artist Reid Southen, another signatory, had multiple accounts banned while testing whether the image generator Midjourney could be used to create copyrighted images of movie characters. After he highlighted his findings, the company amended threatening language in its terms of service.


“If You knowingly infringe someone else’s intellectual property, and that costs us money, we’re going to come find You and collect that money from You,” the terms say. “We might also do other stuff, like try to get a court to make You pay our legal fees. Don’t do it.”

An accompanying policy proposal, co-authored by some signatories, says that OpenAI updated its terms to protect academic safety research after reading an early draft of the proposal, “though some ambiguity remains.”

AI companies’ policies typically prohibit consumers from using a service to generate misleading content, commit fraud, violate copyright, influence elections, or harass others. Users who violate the terms may have their accounts suspended or banned without a chance for appeal.

But to conduct independent investigations, researchers often purposefully break these rules. Because the testing happens under their own log-in, some fear AI companies, which are still developing methods for monitoring potential rule breakers, may disproportionately crack down on users who bring negative attention to their business.


Although companies like OpenAI offer special programs to give researchers access, the letter argues this setup fosters favoritism, with companies hand-selecting their evaluators.

External research has uncovered vulnerabilities in widely used models like GPT-4, such as the ability to break safeguards by translating English inputs to less commonly used languages like Hmong.

In addition to safe harbor, companies should provide direct channels so outside researchers can tell them about problems with their tools, said researcher Borhane Blili-Hamelin, who works with the nonprofit AI Risk and Vulnerability Alliance.

Otherwise the best way to get visibility for potential harms may be shaming a company on social media, he said, which hurts the public by narrowing the type of vulnerabilities that get investigated and leaves the companies in an adversarial position.

“We have a broken oversight ecosystem,” Blili-Hamelin said. “Sure, people find problems. But the only channel to have an impact is these ‘gotcha’ moments where you have caught the company with its pants down.”

