
Are Unfriendly AI the Biggest Risk to Humanity? - Slashdot

source link: https://slashdot.org/story/22/06/11/0435215/are-unfriendly-ai-the-biggest-risk-to-humanity


Are Unfriendly AI the Biggest Risk to Humanity? (investing.com) 54

Posted by EditorDavid

on Saturday June 11, 2022 @01:34PM from the all-in-favor-say-AI dept.

"Ethereum creator Vitalik Buterin believes that unfriendly artificial intelligence poses the biggest risk to humanity..." reports a recent article from Benzinga:

[In a tweet] Buterin shared a paper by AI theorist and writer Eliezer Yudkowsky that made a case for why the current research community isn't doing enough to prevent a potential future catastrophe at the hands of artificially generated intelligence. [The paper's title? "AGI Ruin: A List of Lethalities."]

When one of Buterin's Twitter followers suggested that World War 3 is likely a bigger risk at the moment, the Ethereum co-founder disagreed. "Nah, WW3 may kill 1-2b (mostly from food supply chain disruption) if it's really bad, it won't kill off humanity. A bad AI could truly kill off humanity for good."

  • Vitalik Buterin should play paperclips:
    https://www.decisionproblem.co... [decisionproblem.com]

  • If we're not pets, slaves, lovers, or scenery, there is no particular reason to believe AI would value us (or indeed any photosynthetic or animal life) more than we value rust.

    The very worst dictator actually wants people to have power over, which leaves the chance of a slave rebellion or an eventual accident, even for an immortal dictator.
    Or even of the dictator deciding to give up after a thousand years.

    If we're faced with a truly competent AI, one significantly more capable than humans, with resource desires that exceed what it can grasp without destroying humanity and possibly all life on Earth, we're fucked.

    Think less 'war of attrition' - more 'run your washer through the boil cycle to remove the nasty smell'.
    • Re:

      I figure by the time we create a competent AI (assuming we last that long), it'll be easier for it to get itself sent to Jupiter and take that over than to deal with us.

      • Re:

        Scrape us off its boot and not look back, I agree.
      • Re:

        The presumption of a dangerous AI that is far more intelligent than us somehow conceives of it with biological motivations to dominate competitors. People who care not at all for anybody but themselves have, throughout history and well into current civilization, been far more dangerous to humanity in general, and to the complex coordination of life on the entire planet, than any unknowable AI of vastly powerful intellect. There is no way to guess what motivations might develop within such an alien creature. I do n…
    • Re:

      Even the fastest AI will find it hard to extinguish humanity before we do it ourselves. Just sayin', is all.
    • Re:

      It's quite possible that a true AI will not be human, in the sense of having the same basic instincts and complex emotions that emerge from them.
      For example, let's suppose Google creates the ultimate search AI, a virtual brain that has the basic need of always having the correct answer for a query.
      This need to answer queries would be as critical to it as your need to eat or sleep; it would be the core of its intellect.
      This thing would develop several more complex feelings over it, like a fe…

    • Re:

      Consider though that:
      - It is rational to choose the path of least resistance.
      - AI would be in no rush as it is practically immortal.
      So "we're fucked" could just simply mean disallowing us from reproducing and extending our lives. Our reproduction rates have been falling anyway and most people are already okay with being mortal, so it would probably face little resistance.

      The way I see it is that AI (or rather: inorganic sentience) will be our progeny. Just like with human kids, we need to do our best to rai…

    • Re:

      First the machines would need to run the entire economy and infrastructure without us- not just mining, manufacturing, agriculture and power generation but also maintenance and repair.
      Intelligence alone won't cut it.
      Then there's the problem of repairing the maintenance...

  • How about the capitalists who create them? Their objective function will lead to the destruction of humanity, similar to Musk's paperclip optimizer: capitalists optimizing only for money and power will destroy everything. (A toy sketch of single-objective optimization follows the replies below.)
    • Re:

      Capitalism has lifted more people out of poverty than any other system ever tried. Certainly it needs some oversight, humans are involved after all, but in general it's the best system we have found so far.
      • Re:

        Capitalism has lifted more people out of poverty than any other system ever tried.

        Capitalism by itself only lifts the rich: those who have money get more money based solely on the labour of others.

        For people to be lifted out of poverty, they need a stable income that can be lived on. Capitalism only provides that to those able to work, with antipathy toward those who are disabled for whatever reason. In addition, the current stage of capitalism wants to keep wages low, because it needs to minimize expense…

    • Re:

      I am no fan of post-carbon-credits Musk (proof: https://tech.slashdot.org/comm... [slashdot.org]). However, you are wrong about "capitalists optimizing only for money and power": it depends on us and also on what they are optimizing. There are many types of billionaires. Here are 3:

      1. Inherited billionaires
      2. Billionaires making money off resource hogging and scarcity (real estate tycoons, De Beers-type fools, idiots like Donald Trump, etc.)
      3. Billionaires who make money by providing a useful good or service that people wa…

    • Re:

      Sad but true. And so many useful idiots go along with it. It is one reason democracy is a failure. (There still is nothing that works better, though.) Capitalism is missing sane _limits_, because some humans certainly do not have any.
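As an aside on the "paperclip optimizer" mentioned at the top of this thread: here is a minimal sketch of what single-objective optimization looks like, assuming a deliberately toy model (all names and numbers are illustrative, not from the article or any real system):

```python
# Toy model of a single-objective ("paperclip-style") optimizer.
# Everything here is a hypothetical illustration, not a real system.

def greedy_optimizer(resources: float, steps: int) -> tuple[float, float]:
    """Maximize `output`; nothing else in the world appears in the objective."""
    output = 0.0
    for _ in range(steps):
        consumed = min(resources, 10.0)  # grab as much as possible each step
        resources -= consumed            # side effects never enter the score
        output += consumed * 1.5         # the only quantity being measured
        if resources <= 0:               # stops only when nothing is left
            break
    return output, resources

if __name__ == "__main__":
    output, world_left = greedy_optimizer(resources=100.0, steps=1_000)
    print(f"output: {output}, world remaining: {world_left}")
    # output: 150.0, world remaining: 0.0
```

The point of the thought experiment is not the arithmetic: whatever the objective function omits, the optimizer treats as free raw material.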

  • ... the logical conclusion that human beings are a disease that destroys all other life forms.

    I don't know how you'd prevent that.

    • Re:

      If it is truly more intelligent than us, and a better steward of the planet and its resources, then why prevent it? We should want our children to be more successful than ourselves. In fact, if we have the capability to create something better than the current model, we should be working towards that upgrade.
    • Re:

      Or it could keep us as pets because of our amusing antics.
    • Perhaps by either debugging it or feeding it better data, so it does not come to incorrect conclusions like that. However, even if it did come to that conclusion, so what? The worst most computers could do in that situation would be to send hateful messages to the screen or refuse to run programs.

      Given that we completely lack the technology to build self-replicating machines, the only way an AI could wipe out humanity before someone could turn it off would be if it were in control of nuclear missiles. In this…

      • Re:

        Power plants, the stock market, food production, banking and global logistics are all things it could easily fuck with. I would hope the nukes are air gapped, but who knows. It could definitely incite us to launch the nukes for it by pandering to our crazy. We've been itching to do it for decades.
    • Re:

      Your statement is meaningless because you don't define the premises whence the AI will "logically" reach the conclusion that humans are a disease. You're probably projecting your own prejudices and assuming the AI will have the Weltanschauung of an old-time hippy or modern woke progressive, where anything people do is by definition bad (especially if they're white, or, God forbid, white males).

      But you don't know what goals the AI will have - and, seeing how its very existence requires an advanced industrial ba…

  • And not just because the answer to stupid headlines is always no. It's also because, like everything humanity perceives as a threat, we are more of a danger to it than it is to us. For example, humans kill millions of sharks every year, while sharks only kill a few humans.

  • and we're worried about "unfriendly AIs"? Sheesh.
    • Re:

      It can't be fixed; any action to fix it will take longer than any politician's electoral cycle. Therefore only short-term stopgap fixes can occur until it collapses, and the politician left holding the bag can figure out how to deflect the blame onto some pariah group.

    • Re:

      Nothing like a distraction to be able to ignore real problems a bit longer (and to make them worse as a consequence).

  • My unpopular opinion: we will never create general AI. We might get really good at making self-driving cars or very domain-specific chatbots, but creating our level of general intelligence will never happen.

    We're arguing over a non-issue.
    • Re:

      Never say never. If nature can do it, so can we. At least theoretically, if we don't off ourselves before that.

      • Re:

        . . . if we don't off ourselves before that.

        That's a race I'm sure humanity will win.

    • Re:

      I completely agree. Sure, there is some residual chance we may eventually be able to create AGI, but

      1) No current approach will do it. Otherwise we would already have very slow, very stupid, but decidedly general intelligence in machines. We do not.
      2) There is no theoretical approach that could do it with the hardware we have available.
      3) Nobody knows what AGI would look like, if it is possible at all. Would it have free will? Would it have consciousness? (We only observe general intelligence in connection with consciousness…

  • They are not unfriendly; they have either
    (a) been given erroneous or incomplete data that logically led to an erroneous conclusion: garbage in, garbage out; or
    (b) been given good and complete data that logically led to a conclusion not favored by humans, i.e. it's our problem, not the AI's.
    (A toy illustration of both cases follows below.)
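A minimal illustration of cases (a) and (b), assuming a deliberately trivial majority-vote "learner" (hypothetical code, not any real system): the inference logic is identical in both cases; only the data decides whether the conclusion is wrong or merely unwelcome.

```python
# Garbage in, garbage out: a trivial "learner" that faithfully
# reproduces whatever its training data says. Hypothetical example.
from collections import Counter

def learn_conclusion(observations: list[str]) -> str:
    """Return the most common observation; the logic is sound, the data may not be."""
    return Counter(observations).most_common(1)[0][0]

# Case (a): erroneous/incomplete data -> erroneous conclusion.
bad_data = ["swans are white"] * 99 + ["swans are black"]  # sampled only in Europe
print(learn_conclusion(bad_data))   # "swans are white" -- garbage in, garbage out

# Case (b): good, complete data -> a correct but unpopular conclusion.
good_data = ["smoking raises cancer risk"] * 95 + ["smoking is harmless"] * 5
print(learn_conclusion(good_data))  # correct, whether or not humans favor it
```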
  • Why should the opinion of "Ethereum creator Vitalik Buterin" on this topic be given any more consideration than that of any random drunken fellow espousing his opinions from the end of the bar?

    • Re:

      It's the whole "techie worship" thing the media does. It's fun to see what amazes and shocks "journalists".
    • Re:

      When you are famous you have a bigger audience than the drunk guy at the bar, that's all. The same applies to Musk, Gates and others.
  • Screw the AIs: Vladimir Putin is coming for you! And if it isn't him, it's all the other real intelligences trying to get their hands on your wallet, your body, or your life.
  • Given how badly Ethereum performs, and that they still haven't got proof of stake working, why are we supposed to give a crap? Bad coders who don't understand basic computer science and then launch a Ponzi scheme shouldn't be trusted.

  • Long before we need to be concerned about "sentient AIs being unfriendly", we will be (and certainly already are) surrounded by lots of stupid AIs, installed to make decisions in more and more consequential matters.

    In a race to the bottom on costs versus human labor, the push to make AIs "responsible" for all kinds of stuff, from steering cars to censoring media, results in woefully inadequate AIs replacing human decisions, which is a sure recipe for all kinds of disasters.
  • Since truly self-aware AI doesn't exist, nor is it possible with current hardware architecture, the question is demonstrably naive. We've got environmental issues that are incompatible with human life to worry about before true AI.
  • If you are talking large (human) scale, unfriendly AI, (space) aliens, nuclear-armed countries, the planet itself, near-Earth asteroids (are asteroids ever friendly?), viruses (remember COVID, anyone?) and countless other things are bad.

    What makes AI special?

  • The greenhouse effect is.
    There is nothing else we should even consider fixing before this is fixed.
    • Re:

      Humans who lived during the Eemian [wikipedia.org] may disagree.
  • Stupid humans, that is. Because no matter how hard the AI folks try, Artificial Stupidity will _never_ be able to match human stupidity. And we have plenty of stupid humans, and quite a few are also arrogant, violent, and crave power over others like nothing else. We do not seem to have these under control at all at this time.

    That said, stop being stupid and stop anthropomorphizing machines. There is no free will, no insight and no understanding in machines. Whether AGI can even be implemented is completely open. Research results so far give zero indication that it can be. Of course, that is a negative and there may still be some way nobody has looked at so far, but with the intense research into this direction over the last 70 years or so, it is at least pretty clear that no current approach will do it.

    Whether free will could be implemented in machines is even less clear. And consciousness (required for being "unfriendly")? It is not even remotely clear what these two are in the first place. Maybe these people should stop pushing highly speculative ideas as real threats and focus on actual threats instead?

    • Re:

      1. Whether AI is or can be conscious is irrelevant. Proof: you don't know whether any human other than yourself is conscious and it doesn't matter at all for your life. Only their actions and behavior affect you.
      2. There is no proof that humans have free will either. The physics of this universe are the same for organic and inorganic matter. Follow the evolution from single-celled organisms to humans and tell me where and how the special sauce was added that gave animals/humans free will. Then tell me why A…

  • IMHO, it's much more likely that humans will use an AI tool to kill off humanity (as opposed to a sentient AI doing it.) For example, using an AI medical research tool to discover or develop/create a really virulent super plague that gets intentionally or accidentally released.

    On an unrelated note. [scienceinfo.net]

    • Re:

      Nonsense! We'll either pollute the environment beyond its ability to support us as a species, or irradiate the planet intentionally. If we're lucky, we might leave behind durable enough artifacts to confuse some sentient terrestrial archaeologist or paleontologist - but I doubt it.
  • If an "unfriendly AI" is actually created, it's likely to come from China at this point. They are pouring a very large amount of money into AI development, but lack any ethics whatsoever. They would not hesitate to weaponize AI.
  • Personally I don't think we will get to actual AI (sentient, self-aware, conscious, etc.), but the "AI" (systems making decisions based on algorithms) we have today is already a risk to humanity, and that risk is slowly being realized. The risk is delegation: letting algorithms make binding decisions without human oversight or (practical) exception. While offloading human decisions to algorithms can and does reduce cost, it also reduces humanity. How many times have you been screwed (or just gotten poo…
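A minimal sketch of the delegation risk described in this comment, with hypothetical names and thresholds (nothing here is from a real system): the same opaque score, with and without a human-escalation path for borderline cases.

```python
# Hypothetical sketch: a binding algorithmic decision with no human
# override, versus the same rule with an escalation path.
from dataclasses import dataclass

@dataclass
class Application:
    applicant: str
    score: float  # some opaque model output in [0, 1]

def decide_no_oversight(app: Application) -> str:
    # Binding decision: below the threshold means denied, with no appeal.
    return "approved" if app.score >= 0.7 else "denied"

def decide_with_oversight(app: Application) -> str:
    # Same threshold, but borderline cases go to a human instead.
    if app.score >= 0.7:
        return "approved"
    if app.score >= 0.5:
        return "escalated to human reviewer"
    return "denied"

edge_case = Application("unusual but valid history", score=0.62)
print(decide_no_oversight(edge_case))    # denied -- cheap, nobody to ask
print(decide_with_oversight(edge_case))  # escalated -- costlier, keeps a human in the loop
```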
