source link: https://slashdot.org/story/23/05/31/1136212/chatgpt-risks-divide-biden-administration-over-ai-rules-in-eu

ChatGPT Risks Divide Biden Administration Over AI Rules in EU (bloomberg.com) 27

Posted by msmash on Wednesday May 31, 2023 @10:00AM from the looming-uncertainty dept.
Biden administration officials are divided over how aggressively new artificial intelligence tools should be regulated -- and their differences are playing out this week in Sweden. From a report: Some White House and Commerce Department officials support the strong measures proposed by the European Union for AI products such as ChatGPT and Dall-E, people involved in the discussions said. Meanwhile, US national security officials and some in the State Department say aggressively regulating this nascent technology will put the nation at a competitive disadvantage, according to the people, who asked not to be identified because the information isn't public. This dissonance has left the US without a coherent response during this week's US-EU Trade and Technology Council gathering in Sweden to the EU's plan to subject generative AI to additional rules. The proposal would force developers of artificial intelligence tools to comply with a host of strong regulations, such as requiring them to document any copyrighted material used to train their products and more closely track how that information is used. National Security Council spokesman Adam Hodge said the Biden administration is working across the government to "advance a cohesive and comprehensive approach to AI-related risks and opportunities."
  • It’s funny how AI has rapidly developed since 2012. It went from being busy work for doctoral students to a commercially viable technology with its own subculture of tech proponents. Thousands of engineers and scientists are coming out of school with some experience in AI, and many engineers who are not formally trained in AI are exploring it for their projects. And it’s easy for them to do that, because many of the landmark papers are open access, and there is a ton of open source code and even open source trained models. There was even some anticipation that America would have self-driving trucks driving across the nation.

    Then large-scale generative models came out, and people saw that they could output B-minus-quality code, legal documents, news articles, throwaway illustrations, business correspondence, and student essays. Suddenly AI is a big risk, because *white collar jobs* are at risk. Suddenly AI Existential Risk frauds like Eliezer Yudkowsky, whom nobody took seriously, get their 15 minutes of fame as useful idiots for AI Regulation to protect all those white collar jobs.

    Of course, if AI is regulated, the owners of ChatGPT are not going to be impacted. They’ll just hire a dozen compliance experts. It will be all those tinkerers benefiting from the open source code and open access papers who will decide that their neat project is not worth several years of legal hoops to bring to market.
    • Re:

      It's sad to hear the language they are using to justify this. "Competitive disadvantage" ignores the huge gains from properly regulating AI. The problem is those gains are for you, not for big corporations with lobbying power.

      • Re:

        Without intending to start a fight: honestly, what are the “huge gains” to be had from regulating AI? From my perspective, regulation in the US, whether well-intended or once effective, generally seems to be a method for big businesses to consolidate their lead by raising the baseline cost of their industry. Maybe this makes (or made) sense for things like food, medicine, and utilities, but it seems to make less sense for “software in general,” and AI is just a particular method of making software.
        • Re:

          Consumer and employee protection are two areas where regulation has been very beneficial, and those are directly related to how AI regulation will benefit regular people. All (or at least most) of your concerns about regulation are valid, and regulatory capture is especially dangerous, but claiming regulation doesn't help average people is simply ridiculous. Of course you can find plenty of times where regulation has caused problems, but a modern world without regulation would be dystopian.

  • The thing is: US and EU politicians are profoundly unqualified to even discuss these issues. The people they have advising them are not much better. If they try to tailor laws and regulations to this specific technology, they will screw it up.

    What they can do - what they should do - is write laws and regulations that address actual important issues. The EU did this with their privacy laws, for example: you cannot deal in people's private data. They didn't say "you can't use JavaScript to deal in private data" - the laws are not tied to a specific technology.

    Ignore the "AI" part, ignore ChatGPT and Bard. What *effects* are important? What rights need to be protected? Because those effects and those rights need to be protected *anyway*, regardless of what technology you are protecting them from.

    • Re:

      AI has specific issues that are not covered by existing laws. For example: who is responsible for libel generated by an LLM, whether there are limitations on using publicly accessible data to train LLMs, and who holds authorship/copyright of LLM outputs. It is unwise to leave this to the courts to decide.
      • I disagree: this is not AI specific. If any piece of software does something illegal, who is responsible? You buy tax software, and it miscalculates your taxes. Your car's navigation software sends you somewhere very, very wrong. You use a web scraper and a bug in the software DoSes the target site. Windows installs an update and reboots your laptop in the middle of an important presentation.

        Software is a tool. Granted, a complex tool, but still a tool. Laws should address the effects, not the specifics of the technology.

      • Re:

        The *publisher* is responsible for the libel. If I generate an essay via LLM and then put it on my website, I'm liable for its content.

        • Re:

          It is not that simple. What if AI generates libel in response to a query? [washingtonpost.com]
          • Re:

            What if I type libel into Microsoft Word? It's a tool; it should be treated as such.

    • 1. AI-generated content must be labeled as such.
      2. Liability for any damage caused by AI falls on the entity deploying the AI, not the developer (e.g., a medical help bot says to take *way* too much ibuprofen).

      • Re:

        1. No, that's the point, what about deceptive content created by other means? "Deceptive or misleading content must be labeled as such", who cares how it was created? You can lie with Photoshop, or hell, multiple exposures on film.
        2. Liability for damage caused by anything should be based on fitness for claimed purpose. If whoever provides it makes claims that whatever it is cannot support, then they should be liable to whatever extent the claims were false.

        There, two reasonable and reasonably simple rules.

        • Re:

          The point is that AI-generated content is not provably reliable. Humans aren't reliable either, but people should be told whom they are interacting with so they can make an educated judgement.

          Again, because AI is not reliable, anyone deploying it should bear the brunt of liability. If a 3rd-party developer is making claims about its reliability, then the entity deploying the AI can sue them for breach of contract, but the deploying entity should not be held blameless. Once AI can be scientifically proven to be reliable, that calculus could change.

          • Re:

            Does the same hold true for something manipulated with Photoshop? It really isn't hard to see how requiring the labeling of AI content is ridiculous.

          • Re:

            Yes, these are simple rules that should generally apply in all spaces.

            You can make a bot, and you can give it whatever abilities you can design, but you should not be able to disclaim responsibility for what said bot does.

            You can make spoof photos/sounds/etc., but if your intention is to cause someone to act in a way that is harmful or against their interest, that is _fraud_.

            From a regulatory perspective, AI should not be the interesting part. Which is why we should all be very, very skeptical of the large industry players lobbying for AI-specific rules.

        • Re:

          1. No, that's the point, what about deceptive content created by other means? "Deceptive or misleading content must be labeled as such", who cares how it was created? You can lie with Photoshop, or hell, multiple exposures on film.

          In general, it is not against the law to lie...protected speech.

          A few exceptions, like when talking to the Feds (always a bad idea...lawyer up if they come calling on you)...and if you're committing fraud or slander (or is it libel? I get confused).

          But just telling lies alone isn't illegal.

          • Re:

            He was probably referring to fraud, where it is illegal to lie to claim a payment or procure property or services (among other things). An AI lying should be fine, as long as it wouldn't be considered fraud.

      • Re:

        That sounds simple and reasonable, but AI has been used to generate digital images for a long time. If I am looking at an advertisement in a magazine where a moon was digitally generated using AI, does that advertisement need some text in the lower right corner of the image saying some of this image was generated with AI? If they remove a tattoo from a model using AI, do we need to be told? I sure hope some banner doesn't need to be put on screen when I'm watching the next MCU blockbuster.

    • Re:

      This should be the litmus test for any regulation targeting AI: if the law has to specifically target AI, it is a bad law. At best it may be important to clarify how existing laws apply to AI, but new, AI-specific legislation should never be created.

      If you are worried about misleading content created by AI, you should be just as worried about misleading content created manually by an artist / photographer / writer / etc.

      If you are worried about copyrights being infringed by AI, you should be just as worried about copyrights being infringed by a human.

    • Re:

      US law is littered with that. Very often it has been a way to score points on being "tough on crime", making things that are already illegal even more illegal. Other times it is because of prosecutors who fear the courts won't enforce the laws the way they want, so they won't bother bringing a case without a very specific law on the books. In the end we get tons of ultra-specific laws dealing with a specific technology or a specific method of doing things, and then we have the general form that doesn't get enforced.

      • Re:

        US law is littered with that. Very often it has been a way to score points on being "tough on crime", making things that are already illegal even more illegal.

        Yep, like the so-called "hate crimes" they try to push....

        If someone is murdered...bad. If they, apparently, are killed because they are (insert race or sexuality here)...somehow that is worse?

        Dead is dead...doesn't matter why you killed someone, the result is the crime.

      • Re:

        That is a good example, and while I'm sure I would disagree with the creation of some of those laws, it is probably important to make some kinds of fraud more severe than others, especially those where financial damages are difficult to discern, like citizenship fraud. And while IANAL, it seems like some of these laws primarily exist to ensure a violation is considered a federal crime. That could potentially be an area where I could see new regulations, if it was necessary to make existing laws enforceable.

    • Re:

      THIS.

      AI regulation implies that AI systems must behave in a morally acceptable manner.

      Do we really trust politicians to properly define morality today?
    • Re:

      No single person or profession has the qualifications to draft laws and regulations for this stuff yet. It's an interdisciplinary problem.

      We went through something very much like this when computer technology supplanted paper record keeping. In 1972 the Nixon Administration assembled an interdisciplinary panel of technical experts, business leaders, social scientists, and legal scholars to study the dangers computers posed to privacy. That multi-disciplinary approach clearly worked: their report, "Records, Computers and the Rights of Citizens," laid the groundwork for the Privacy Act of 1974.

  • "The whole aim of practical politics is to keep the populace alarmed (and hence clamorous to be led to safety) by an endless series of hobgoblins, most of them imaginary."

  • Whatever rules you put in place will be ignored. Those directly in the line of fire will play nice with the regulators, but behind closed doors they'll quietly experiment so that when the guardrails are dropped they aren't behind.

    Besides, you're only regulating the friendlies. The bad actors are not in the path.

