Commerce Department Looks To Craft AI Safety Rules - Slashdot

source link: https://news.slashdot.org/story/23/04/11/1428209/commerce-department-looks-to-craft-ai-safety-rules




Commerce Department Looks To Craft AI Safety Rules (axios.com) 18

Posted by msmash

on Tuesday April 11, 2023 @10:40AM from the moving-forward dept.
The federal government is taking what could be the first steps toward requiring safer, more transparent AI systems as a Commerce Department agency invited public comment to help shape specific policy recommendations. From a report: The move is far short of the comprehensive AI legislation critics have advocated. But with the frenzy over generative AI continuing to grow, the Biden administration is trying to get a head start on a government response to the fast-moving industry. The Commerce Department's National Telecommunications and Information Administration (NTIA) is asking the public to weigh in on what role the federal government can play to ensure AI algorithms are acting as claimed and not causing harm.

"We really believe in the promise of AI," Assistant Commerce Secretary Alan Davidson, who runs NTIA, tells Axios. "We do believe it needs to be implemented safely and we're concerned that's not happening right now." Davidson said that the government could take a range of actions to shape AI that don't require new legislation -- including mandating audits as part of its procurement standards or offering prizes or bounties to those who find bias within algorithms. "We need to start the hard work of actually putting in place processes that are going to make people feel like the (AI) tools are doing what they say they are going to do, that models are behaving," Davidson said.


  • How to do the same to AI as Commerce did to the internet in general: allow business interests to hijack AI for their own purposes.

    The safety they are looking for is their own, they'll continue giving away customer information to the bad guys a million or so people at a time.

    • Re:

      Objective number zero should be to fight like hell to make sure nobody slips in a CDA-230-style free pass for corporate AI interests.

      We need to make damn sure there will be accountability this time!

      • Re:

        Whoever gets there first wins the world, that's who everyone will be "accountable" to.
      • Even if it were mandated to release the code as open source, and to release the models with their weights, who would be capable of tracing a particular "thought" or "creative generation" of the AI?

        Even the PhDs who make and test these things often cannot figure out exactly why it learned a particular capability, or it may take them significant time and effort to figure out roughly why a single pattern of inference / generation emerges.

        So what is meant by transparency that is meaningful? An AI that can introspect?
  • The opinions are smelly and intertwined.

  • Uh (Score:4, Interesting)

    by backslashdot ( 95548 ) on Tuesday April 11, 2023 @11:40AM (#63441334)

    It's way too soon to be regulating this stuff. As usual, it will end up that working around the legislation requires massive capital, so the well-resourced corporations won't be inhibited but the little guy will be. It's like how big pharma colludes with the FDA to make sure you need about $1 billion to get a new drug through the approval process. Innovative ideas won't get funded because no investor will gamble $1 billion only to have the drug fail at the last FDA step. If we stifle AI with burdensome regulations, only the elite companies will be able to do AI research.

    • Re:

      Or foreign countries. Doesn't this fuck up our protectionism? Slow down AI development, why? So China can catch up?

    • Re:

      Yes, that's their goal. "Open"AI has repeatedly stated they intend to be the arbiters of right and wrong, to control the tech fully, and are seeking a monopoly in the field. Similarly, Google, Amazon, and Musk want OpenAI slowed down because there's a six-month waiting list for NVIDIA H100s right now and they don't want to fall behind.

      • Re:

        Elon Musk is the biggest hypocrite on AI. He claims AI is very dangerous, yet his cars drive you around in traffic using "the most powerful AI in the world"... how does that work? If AI is so dangerous, why is he putting it in a position where it can make a decision that kills me or others? ChatGPT is not in charge of any life-or-death systems, but Elon wants THAT stifled.

    • Re:

      Which is exactly why we don't want to regulate this AT ALL.

      Ultimately it's just software; LLMs are just a software transform over massive amounts of input. AI as it exists today is not special or distinct. What is going to happen if Commerce -- especially Biden's Commerce -- gets its hands around it is that we will get a bunch of regulation that simultaneously makes it impossible for you, me, or anyone outside big tech to play, because compliance will be too complex, while at the same time absolving big tech of any civil complaints.

      • Re:

        That's a valid point, how do they even enforce this without audits and scouring your code repos? As usual the big companies will be able to skirt around it, as they will have the training data sets and utilize various legal loopholes to develop their own AI systems that are "regulatory compliant."

        • Re:

          Right now, based on similar guidance from the FDA/NIST/FTC, the regulations will involve documenting the hell out of it and being held to the standards you document. Characterize the model, the training set, the validation set, the held-out test set. If the data is from human subjects, what are the demographics? Is there sufficient representation of every kind of person in all of your data splits? Is the data stored securely? How do you grow the data over time, and manage it to prevent overtuning?
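The documentation-and-audit regime the comment describes could be sketched as a simple representation check across data splits. This is a hypothetical illustration; the split names, group names, counts, and the 10% threshold are all invented, not taken from any actual FDA/NIST/FTC guidance.

```python
# Hypothetical sketch: document each data split's demographic counts and
# flag any group falling below an assumed minimum-representation threshold.
# All names and numbers below are invented for the example.

MIN_FRACTION = 0.10  # assumed policy threshold, not from any real regulation

splits = {
    "train":      {"group_x": 700, "group_y": 250, "group_z": 50},
    "validation": {"group_x": 90,  "group_y": 30,  "group_z": 5},
    "test":       {"group_x": 85,  "group_y": 40,  "group_z": 10},
}

def underrepresented(split_counts, min_fraction=MIN_FRACTION):
    """Return the groups whose share of this split is below the threshold."""
    total = sum(split_counts.values())
    return [g for g, n in split_counts.items() if n / total < min_fraction]

for name, counts in splits.items():
    print(f"{name}: flagged groups = {underrepresented(counts)}")
```

A real audit trail would also record where each record came from and how splits change over time; the point here is only that "document and check" is mechanically simple once the categories are defined.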
  • What role to play? Don't play a role. The lobbyist cycle takes two years to complete, and revolutionary elements of AI are coming out every year now. Even by the corrupt standards and goals of the federal government, the new regulations will be short-sighted and ineffective. As for actually preventing any problems, let me laugh even harder. [youtube.com]
  • are doomed to repeat it.

    This is increasingly like when the internet became widely available to the general public, and the pearl clutchers nattered on (and on and on) about how we need rules, and licenses to access the internet, and so on.

    And a year or two later, everybody adapted, and life moved on, because even idiots have some survival instinct.

  • Please help me understand how today's generative AI software is dangerous.

    Obviously you have academic fraud and plagiarism to counter, and perhaps a bit of misinformation, but what is the danger of an AI that helps you generate words and graphics? If you still trust anything you read on the internet without researching it, you are the problem, not AI. Also, schools have always had plagiarism and issues with people hiring others to write their papers and take their tests. AI just does it better and cheaper.

    • Re:

      We're probably not near AGI, but the usual trope is that if you can achieve it purely in multi-node software, you can't reasonably destroy it. The idea is that only if it absolutely requires special hardware do you have a reasonable chance to contain an even moderately higher-than-human AGI once it figures out how to defeat security, especially if it gets smarter as it gains access to more computing resources.

      I suppose we could still turn everything off manually, though, at least until

  • One word that, in my opinion, regulators throw around too carelessly is "bias." The reason is that there are multiple statistical definitions of bias or fairness, and it is literally mathematically impossible to satisfy them all simultaneously. (See https://arxiv.org/pdf/1609.058... [arxiv.org] ; I am not one of the coauthors.) So which definition of bias? If someone were offering bounties for finding bias in algorithms, the easiest way to win would be to find which definition of bias the model was tuned to minimize, then measure the model against a different definition.
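The conflict the comment points to can be shown with a toy example: one classifier can satisfy one statistical fairness definition (equal selection rates, i.e. demographic parity) while violating another (equal true-positive rates, i.e. equal opportunity). This is a minimal sketch, not the cited paper's construction; the groups, labels, and predictions are invented.

```python
# Toy illustration: the same classifier passes one fairness definition
# and fails another. All data below is invented for the example.

def rates(labels, preds):
    """Return (selection_rate, true_positive_rate) for one group."""
    selected = sum(preds)
    positives = sum(labels)
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    return selected / len(preds), tp / positives

# Group A: 10 applicants, 5 truly qualified; classifier selects exactly those 5.
a_labels = [1] * 5 + [0] * 5
a_preds  = [1] * 5 + [0] * 5

# Group B: 10 applicants, 2 truly qualified; classifier selects 5, of whom 1 is qualified.
b_labels = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
b_preds  = [1, 0, 1, 1, 1, 1, 0, 0, 0, 0]

a_sel, a_tpr = rates(a_labels, a_preds)
b_sel, b_tpr = rates(b_labels, b_preds)

print(f"selection rates:     A={a_sel:.2f}, B={b_sel:.2f}")  # equal -> demographic parity holds
print(f"true-positive rates: A={a_tpr:.2f}, B={b_tpr:.2f}")  # unequal -> equal opportunity violated
```

Whichever metric a bounty program names, a model tuned to zero it out can still look badly biased under the other one, which is the commenter's point.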
