
Bipartisan Bill Denies Section 230 Protection for AI - Slashdot

source link: https://news.slashdot.org/story/23/06/14/203255/bipartisan-bill-denies-section-230-protection-for-ai


Bipartisan Bill Denies Section 230 Protection for AI (axios.com) 23

Posted by msmash on Wednesday June 14, 2023 @05:20PM from the shape-of-things-to-come dept.
Sens. Josh Hawley and Richard Blumenthal want to clarify that the internet's bedrock liability law does not apply to generative AI, per a new bill introduced Wednesday. From a report: Legal experts and lawmakers have questioned whether AI-created works would qualify for legal immunity under Section 230 of the Communications Decency Act, the law that largely shields platforms from lawsuits over third-party content. It's a newly urgent issue thanks to the explosive growth of generative AI. The new bipartisan bill bolsters the argument that Section 230 doesn't cover AI-generated work. It also gives lawmakers an opening to go after Section 230 after vowing to amend it, without much success, for years.

Section 230 is often credited as the law that allowed the internet to flourish and social media to take off, along with websites hosting travel listings and restaurant reviews. To its detractors, it goes too far and is not fit for today's web, allowing social media companies to leave too much harmful content online. Hawley and Blumenthal's "No Section 230 Immunity for AI Act" would amend Section 230 "by adding a clause that strips immunity from AI companies in civil claims or criminal prosecutions involving the use or provision of generative AI," per a description of the bill from Hawley's office.


  • Huh? (Score:2, Informative)

    This is idiotic. Why would something AI-generated be held to a different standard than a photograph somebody staged or a drawing they made? Keeping in mind that all the pre-existing categories of illegal content already do apply to AI-generated works.
    • Re:

      Maybe I'm reading this wrong, but my assumption is that they're saying if Google's automated systems create "original" content for Google, then Google itself should be liable. We're holding the company responsible for what it produces; Section 230 is about not holding the company responsible for content that it did not produce but merely hosts.
      • Re:

        I think it is saying that Google's AI is not a third-party, that Google is responsible for its own software, that AI is not a separate entity to be treated as a non-Google contributor for the purposes of Section 230.

        • Re:

          Which is exactly what I said :)
    • Re:

      I am one of the first to say that AI shouldn't be treated differently, but I'm not convinced this represents AI being treated differently. Section 230 protects websites from liability for third-party content on their platforms. It all depends on how this is worded, but it makes sense to clarify when AI-generated content is and is not protected by Section 230 based on whether the content is third-party generated.

      If the platform is generating first party content using AI, it should not be protected by Section 230. An AI sh

      • The article and the proposed law both make it sound like the protections of S230 apply to the content creator. They do not. They apply to the owner of the software application.

        The content creator is still responsible for their content. Whether it was made by an "AI" (aka a computer program) is completely irrelevant.

        This is a sly and sneaky way to chip away at Section 230 protections. You're being tricked; your gut is telling you that, and rightly so. Don't be fooled.
        • Re:

          Wrong on both counts - Section 230 isn't really about software at all; it's about content.

          Which could be reasonably read to say that if I request an AI to give me a story about how Biden and Trump are secretly gay lovers who eat babies to prolong their life - that the AI owner is to be treated as the "publisher or speaker" of the resulting story, as they are the primary information content provider.

          I do think that needs to be clarified... but I'm not entirely sure how. I mean, common sense would initially

    • Re:

      Social media companies may not be directly accountable for content that they publish, but the people who generate and post that content are.

      If you photoshop a picture of someone to make it look like they are committing a crime, you are accountable for that if you then post or publish it and it is believable enough to cause reputational damage.

      If the operators of AI systems could claim immunity for responsibility for the output of the AI system, it would be possible to cre

    • Re:

      Well, the summary says this is an opening to relitigate Section 230, so maybe the whole thing is an excuse.

  • by rsilvergun ( 571051 ) on Wednesday June 14, 2023 @05:31PM (#63603240)

    Section 230 doesn't protect content; it protects people.

    If AI content posted to your site makes you vulnerable to lawsuits, all I have to do to silence you is pay a bot farm to post now-illegal content to your site and wait for the lawsuits to shut you down.

    This is a way for the top to take back control of the internet from us plebs. To turn it into cable TV.
    • Re:

      "If AI content posted to your site makes you vulnerable to lawsuits..."

      Excluding generative AI from Section 230 protection does not make it illegal, nor would an exclusion extend to sites that have Section 230 protection. Talk about gaslighting.

    • Re:

      TFS makes it sound reasonable, but the devil is always hidden somewhere in the details. A platform hosting third-party AI-generated content (for example, some ChatGPT output posted on Reddit) should have S230 protection. The entity running the AI software itself should not have S230 protection.

      Now, what if Reddit ran their own AI software to generate their own posts, and put them up alongside the user-submitted content? In that case, Reddit should lose S230 protection for those specific posts, because the

    • Re:

      I don't think that's what's being proposed here.

      The concept's fairly easy: suppose you go to BardGPT and ask it "Who is rsilvergun?" and it casually responds with "rsilvergun is a famous troll and pedophile. Also he licks butts."

      When you go to MicroGoogleAI, rightly, to demand they retract the allegation, MicroGoogleAI says "LOL! Section 230 dude! Look at Slashdot, it's full of people making those allegations, BardGPT was just regurgitating that shit! You should see what it says about AmiMojo!"

      Should

    • Re:

      That doesn't seem to be what the summary is talking about. If all we're actually talking about is companies being liable for the content that their own bots generate, I'm fine with that. Those bots aren't users, after all, they're tools being used by the company to generate content, so the companies should already be liable for that content. If Section 230 is ambiguously worded as it is, I'm fine with it being updated to make that liability clear.

  • This is just stripping Section 230 all over again. Recommendation algorithms are "AI" already; they're the biggest ML workloads companies run right now. That recent Supreme Court ruling, or rather its confirmation of a lower court ruling upholding that Google can't be sued for its recommendations, is exactly what this bill would kill, making it impossible for these algorithms to run and thus destroying the internet as we know it and half the US economy.

    Lobbyists might be evil, but they can be an evil that
    • Re:

      If recommendations are "content", then they are not protected by 230 already. If they are not content and are protected, ML-based recommendations are not "generative AI". What you said is nonsense. Also, you could literally eliminate the entire internet and not destroy half the US economy.

      Education is your enemy apparently.

    • Re:

      AI-generated content is what would normally be defined as something copyrightable, had it been produced by a human. That's fairly distinct from recommendation algorithms, which are essentially just computer-curated links to existing human-created content.

  • Anything from Hawley has to be looked at with much skepticism. He wouldn't support anything unless it served the interests of furthering his ridiculous ideology.
    • Re:

      Hawley has no ideology, only a lust for power. This bill is pandering, just what Hawley does.

      • Re:

        Not for that reason, but because he's a jackass.

  • I think it's too damned early to be passing transformative bills like this. We don't have any idea whether it's necessary, and frankly I just don't see it. Yet.

  • Would this bill even matter? The AI companies don't host user-generated content, let alone make it accessible to others. They generate content for individual users, at the request of those users. If the AI-generated content gets posted anywhere it's on other services and it's posted by the user who asked for it to be generated. Section 230 might protect those other services, but the AI company wouldn't need it to get the claims dismissed.

