OpenAI Eases Stance on Use of ChatGPT for Military Purposes

source link: https://www.businessinsider.com/openai-usage-policy-chatgpt-military-purposes-2024-1


Jan 13, 2024, 7:04 PM UTC
[Image: ChatGPT and OpenAI logo] OpenAI quietly rolled out changes to its usage policy this week and lifted a broad ban on "military and warfare" uses of its technology. Jonathan Raa/Getty Images

  • OpenAI updated its usage policies on January 10.
  • As part of the update, it eased restrictions on military use of its technology.
  • The changes accompany OpenAI's rollout of the GPT Store — its custom ChatGPT marketplace.
OpenAI quietly eased restrictions on military applications of its technology earlier this week.

In an unannounced update to its usage policies on January 10, OpenAI lifted a broad ban on using its technology for "military and warfare." The new language still prohibits OpenAI's services from being used for more specific purposes like developing weapons, injuring others, or destroying property, a spokesperson for OpenAI told Business Insider.

The spokesperson added that the company "aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs." On January 10, OpenAI rolled out its GPT Store, a marketplace for users to share and browse customized versions of ChatGPT known as "GPTs." 

OpenAI's new usage policy now includes principles like "Don't harm others," which are "broad yet easily grasped and relevant in numerous contexts," as well as bans on specific use cases like developing or using weapons, OpenAI's spokesperson said. 

Some AI experts worry that OpenAI's policy rewrite is too generalized, especially when AI technology is already being used in the conflict in Gaza. The Israeli military said it used AI to pinpoint targets to bomb inside the Palestinian territory.

"The language that is in the policy remains vague and raises questions about how OpenAI intends to approach enforcement," Sarah Myers West, managing director of the AI Now Institute and a former AI policy analyst at the Federal Trade Commission, told The Intercept.

Though OpenAI did not offer many specifics about its plans, the language changes could open the door to future contracts with the military. A spokesperson for OpenAI told BI that there are national security use cases that align with the company's mission, which is in part what led to the changes. OpenAI is already working with the Defense Advanced Research Projects Agency, for instance, "to spur the creation of new cybersecurity tools to secure open source software that critical infrastructure and industry depend on."
