

source link: https://news.slashdot.org/story/23/04/24/1024231/youtube-case-at-us-supreme-court-could-shape-protections-for-chatgpt-and-ai

YouTube Case at US Supreme Court Could Shape Protections for ChatGPT and AI
What the court decides about those issues is relevant beyond social media platforms. Its ruling could influence the emerging debate over whether companies that develop generative AI chatbots like ChatGPT from OpenAI, a company in which Microsoft is a major investor, or Bard from Alphabet's Google should be protected from legal claims like defamation or privacy violations, according to technology and legal experts. That is because the algorithms that power generative AI tools like ChatGPT and its successor GPT-4 operate in a somewhat similar way to those that suggest videos to YouTube users, the experts added.
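The resemblance the experts claim is easier to see as code. Below is a toy sketch in Python; every name and model here is made up for illustration (this is not any real YouTube or OpenAI API): both a recommender and a text generator score candidates with a learned model and surface the top-scoring one.

    # Toy sketch of the structural similarity the experts describe.
    # All names are hypothetical; this is not any real system's code.

    def recommend_videos(candidates, engagement_model, k=3):
        """Curation: rank candidate videos by a model's predicted engagement."""
        return sorted(candidates, key=engagement_model, reverse=True)[:k]

    def generate_text(prompt_tokens, next_token_scores, length=5):
        """Generation: repeatedly pick the model's highest-scoring next token."""
        tokens = list(prompt_tokens)
        for _ in range(length):
            scores = next_token_scores(tokens)          # {token: score}
            tokens.append(max(scores, key=scores.get))  # greedy selection
        return tokens

    # Both pipelines reduce to "score candidates with a model, emit the top one";
    # the recommender emits a video, the generator emits a token, over and over.
    videos = ["cat video", "news clip", "conspiracy rant"]
    fake_engagement = {"cat video": 0.4, "news clip": 0.2, "conspiracy rant": 0.9}
    print(recommend_videos(videos, fake_engagement.get))
    # ['conspiracy rant', 'cat video', 'news clip']

That shared shape, a model rather than a human editor choosing what the user sees next, is the analogy being drawn.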
-
TFA only mentions what the lawsuit is actually over right at the end:
The case being decided by the Supreme Court is an appeal by the family of Nohemi Gonzalez, a 23-year-old college student from California who was fatally shot in a 2015 rampage by Islamist militants in Paris, against a lower court's dismissal of the family's lawsuit against YouTube.
The lawsuit accused Google of providing "material support" for terrorism and claimed that YouTube, through the video-sharing platform's algorithms, unlawfully recommended videos by the Islamic State militant group to certain users.
-
This seems like overreach by the article's authors in tying broader implications for AI and ChatGPT to this case. The Supreme Court usually tries to keep its rulings as narrow as it can within the confines of existing law, so as not to step on the toes of the legislature, which should be the one writing policy.
-
I think it is reasonable to connect "YouTube is liable for the output of its algorithms" to "OpenAI is liable for the output of its algorithms". Of course, that is only one of the steps that would have to happen for a ruling against YouTube.
If the court rules against YouTube, then OpenAI should be worried.
If the court rules in favor of YouTube, that doesn't necessarily mean anything at all for OpenAI, depending on the reasoning.
-
AI is honest but algorithms are misleading? Really? I don't know where you got your education, but I think you should ask for a refund.
-
This sounds like an argument that AI is not ready for production use. Relegating it back to being a research toy wouldn't be so bad.
-
If it takes a Supreme Court case to determine whether B equates to A, why would the author automatically assume C equates to A?
-
For failing to teach critical thinking, logic and the scientific method.
I've seen a college math major drawn down the YouTube rabbit hole of conspiracy theories, so I think the key one is the scientific method, which explains how scientists form hypotheses, conduct careful experiments, and rigorously study the results in order to draw conclusions about the truth of those hypotheses.
-
You can lead a horse to water, but you cannot make him drink.
It is not the lack of ability to apply critical thinking that caused issues for those math majors, but the desire to feel smart & special by understanding the secrets others do not. The only notable thing here is that math majors have the chops to earn the right to feel smart & special by real merit.
-
You can lead a horse to water but you can't make him drink
You can send a man to college but you can't make him think
-
If I watch a video and take a violent and illegal action afterwards, is that the video's doing? Or should I be held accountable for my own actions?
Should we regulate businesses so that they don't spread disinformation or violent extremist content? Sure. But that's not the same as holding them accountable for the acts of a third party. A reasonable best effort is the standard we should hold businesses to in filtering this kind of stuff. Holding them accountable for individual videos doesn't make sense.
-
A reasonable best effort is nothing like what YouTube did, which was to recommend the most engaging (and in this case, illegal and banned) videos.
If they had done their best and something slipped through, no problem. But that's not why they are in court.
And if you think videos and news don't influence people, there are literally dozens of counterexamples, ranging from 1938 to today, of people going out and murdering others or themselves based on what hatemongers told them.
Sure, it's their responsibility.
-
Maybe everyone should be held accountable for their own actions: both the person who commits the morally wrong act and those who encourage them.
For example, under the Texas Law of Parties, YouTube could be viewed as a participant in the crime. At the federal level, conspiracy is already illegal. And I suspect that if you and your friends had posted a video encouraging people to riot, you could be held accountable for that as well. If you convinced others to commit atrocities by lying to them, you should be held accountable for that too.
-
The discussion is not about holding them accountable for the content of individual videos. The discussion is about holding them accountable for their specific promotion of individual videos.
-
From a web description, "Section 230 is a provision of federal law that protects website hosts, including platforms like Google and Facebook, and their users from legal liability for online information provided by third parties."
But AI is "generating" information, not just relaying links or directly presenting information provided by third parties.
So is it protected? Or could this lawsuit succeed? https://yro.slashdot.org/story... [slashdot.org]
-
Hosting and curating are two different things. If you promote (up-rank) bad content that's on your hosting/messaging platform, you are acting as a curator, not just a content host. Section 230 only protects hosts, not curators.
-
I agree that hosting and curating are different and should have different guidelines and protections. Where it gets difficult, IMO, is what level of "curation" versus "hosting" the platforms should be liable for.
On one end, YouTube: the algorithm is nearly opaque, and it ranks content based on both what you and what others are doing. That probably warrants a pretty high level of liability.
The opposite end: this Slashdot forum. Except for ratings above a threshold (user-driven), things are laid out in exactly the order the replies were posted.
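To make that contrast concrete, here is a toy sketch in Python with invented data: engagement-driven reordering (curation) versus a user-chosen score threshold over chronological order (hosting).

    # Hypothetical illustration only; the posts, scores, and numbers are made up.
    posts = [
        {"id": 1, "time": 1, "score": 5, "predicted_engagement": 0.2},
        {"id": 2, "time": 2, "score": 0, "predicted_engagement": 0.9},
        {"id": 3, "time": 3, "score": 3, "predicted_engagement": 0.5},
    ]

    # YouTube-style curation: an opaque model reorders everything by
    # predicted engagement, so the platform actively chooses what you see.
    curated = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

    # Slashdot-style hosting: posts stay in posting order; the only filter
    # is a score threshold the reader picks, driven by community moderation.
    threshold = 1
    hosted = [p for p in sorted(posts, key=lambda p: p["time"])
              if p["score"] >= threshold]

    print([p["id"] for p in curated])  # [2, 3, 1] -- order chosen by the platform
    print([p["id"] for p in hosted])   # [1, 3]    -- chronological, community-filtered

In the first list the platform's model decides the order; in the second, the order is chronological and the filter is community-driven, which is why the liability argument treats the two differently.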
-
Does that mean that if a comment is moderated up to 5, Slashdot should be liable for promoting the post? According to your definition, Slashdot is curating the post by displaying it more prominently than other posts.
Section 230 protects just about anyone hosting a website with user-contributed content (including Slashdot, StackOverflow, GitHub, ...). If it didn't, you would dramatically raise the costs for anyone hosting third-party content, probably leading to the closing of many sites. We'll see what the Supreme Court decides.
-
That case tests whether a U.S. law that protects technology platforms from legal responsibility for content posted online by their users also applies when companies use algorithms to target users with recommendations.
These are two completely different legal issues, and the text of S230 provides no obvious protection for editorializing and content curation. That's by design, because the greater CDA was about cleaning up the Internet. S230 was intended to immunize companies for taking actions to remove offensive content.
-
If the ruling is in favor of YouTube, you are right: the resemblance is too low to conclude much.
If the ruling is against YouTube, then at least one facet that must be established is that companies are liable for their algorithmic output, and that is a big consequence for generative AI. If YouTube is liable because it recommended certain content through an algorithm, it would be reasonable to think that OpenAI could be found liable if, say, ChatGPT cyber-bullied someone into suicide.
-
Google has been using its secret "citizen dossiers" (which the end user never gets to see) to feed a recommendation algorithm (which the end user never gets to understand), and that algorithm has performed malicious psychological experiments on end users.
By now, Google has had ample opportunity to cover up any such abuses targeted at specific end users, as well as to cover up any coordinated intent. cc: The Justice Department