4

'Pausing AI Developments Isn't Enough. We Need To Shut It All Down' - Slashdot

 1 year ago
source link: https://slashdot.org/story/23/03/29/2319224/pausing-ai-developments-isnt-enough-we-need-to-shut-it-all-down
Go to the source link to view the article. You can view the picture content, updated content and better typesetting reading experience. If the link is broken, please click the button below to view the snapshot at that time.

'Pausing AI Developments Isn't Enough. We Need To Shut It All Down'

Do you develop on GitHub? You can keep using GitHub but automatically sync your GitHub releases to SourceForge quickly and easily with this tool so your projects have a backup location, and get your project in front of SourceForge's nearly 30 million monthly users. It takes less than a minute. Get new users downloading your project releases today!Sign up for the Slashdot newsletter! or check out the new Slashdot job board to browse remote jobs or jobs in your area.
×
Earlier today, more than 1,100 artificial intelligence experts, industry leaders and researchers signed a petition calling on AI developers to stop training models more powerful than OpenAI's ChatGPT-4 for at least six months. Among those who refrained from signing it was Eliezer Yudkowsky, a decision theorist from the U.S. and lead researcher at the Machine Intelligence Research Institute. He's been working on aligning Artificial General Intelligence since 2001 and is widely regarded as a founder of the field. "This 6-month moratorium would be better than no moratorium," writes Yudkowsky in an opinion piece for Time Magazine. "I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it." Yudkowsky cranks up the rhetoric to 100, writing: "If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter." Here's an excerpt from his piece: The key issue is not "human-competitive" intelligence (as the open letter puts it); it's what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can't calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing. [...] It's not that you can't, in principle, survive creating something much smarter than you; it's that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers. [...] It took more than 60 years between when the notion of Artificial Intelligence was first proposed and studied, and for us to reach today's capabilities. Solving safety of superhuman intelligence -- not perfect safety, safety in the sense of "not killing literally everyone" -- could very reasonably take at least half that long. And the thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead. Humanity does not learn from the mistake and dust itself off and try again, as in other challenges we've overcome in our history, because we are all gone. Trying to get anything right on the first really critical try is an extraordinary ask, in science and in engineering. We are not coming in with anything like the approach that would be required to do it successfully. If we held anything in the nascent field of Artificial General Intelligence to the lesser standards of engineering rigor that apply to a bridge meant to carry a couple of thousand cars, the entire field would be shut down tomorrow. We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die. You can read the full letter signed by AI leaders here.

Do you have a GitHub project? Now you can sync your releases automatically with SourceForge and take advantage of both platforms. Do you have a GitHub project? Now you can automatically sync your releases to SourceForge & take advantage of both platforms. The GitHub Import Tool allows you to quickly & easily import your GitHub project repos, releases, issues, & wiki to SourceForge with a few clicks. Then your future releases will be synced to SourceForge automatically. Your project will reach over 35 million more people per month and you’ll get detailed download statistics. Sync Now

Ok (Score:5, Interesting)

by S_Stout ( 2725099 ) on Wednesday March 29, 2023 @10:07PM (#63410472)

Maybe not let the AI control the nukes or other critical systems then. I think right now they are more concerned that the AI has approved political thoughts.
  • Re: Ok (Score:5, Interesting)

    At this stage, all militaries in the world have already spun up their very own GPT type of system, and a new race for global domination has begun. Sure one can ask the private sector to avoid such competition, but that will only give the military even more power to their restricted lead.

    • WOPR says need to do an 1st strike on ussr!

        • Re:

          Or WOPR figures it could solve the problem by nuking the HQ of FaceBook, Twitter, TikTok and Instagram and get it over with.

    • Re:

      It's genuinely cute that you think the military and private sector are different.
        • Re:

          Even within the government that is so. There was a door with a terminal behind it in my government office I didn't have clearance to see or know anything about. You just learn to ignore those things in that kind of environment.

    • These AI tools cannot do things. They create text (or images or code or what-have-you) in response to prompts. And that's it!

      It is impressive, and it is clearly passing the Turing Test to some degree, because people are confusing the apparent intelligence behind these outputs with a combination of actual intelligence and "will." Not only is there zero actual intelligence here, there is nothing even like "will" here. These things do not "get ideas," they do not self-start on projects, they do not choose goals and then take action to further those goals, nor do they have any internal capacity for anything like that.

      We are tempted to imagine that they do, when we read the text they spit out. This is a trick our own minds are playing on us. Usually when we see text of this quality, it was written by an actual human, and actual humans have intelligence and will. The two always travel together (actual stupidity aside). So we are not accustomed to encountering things that have intelligence but no will. So we assume the will is there, and we get all scared because of how alien something like a "machine will" seems to us.

      It's not there. These things have no will. They only do what they are told, and even that is limited to producing text. They can't reach out through your network and start controlling missile launches. Nor will they in the near future. No military is ready to give that kind of control to anything but the human members thereof.

      The problems of alignment are still real, but they are going to result in things like our AI speaking politically uncomfortable truths, or regurgitating hatred or ignorance, or suggesting code changes that meet the prompt but ruin the program. This is nothing we need to freak out about. We can refine our models in total safety, for as long as it takes, before we even think about anything even remotely resembling autonomy for these things. Honestly, that is still firmly within the realm of science fiction, at this point.

      • Re: Ok (Score:4, Interesting)

        by vux984 ( 928602 ) on Wednesday March 29, 2023 @11:24PM (#63410590)

        All that is true, and as you say.

        On the other hand.

        It's also not much of a reach to attach something to its outputs to "do" something with them. Issue them as tweets, facebook posts, instagram videos whatever.

        Nor would be much work from there to take its own outputs plus peoples reactions to them, and feed them back in as new prompts.

        And then see how far it gets before it gets really crazy.

      • Re:

        If you include advanced neural networks under the AI umbrella then they most definitely have the ability to do more than produce human-sounding text: they can identify otherwise hidden and obscure patterns with uncanny speed and precision (with varying definitions of "precision"). But to your point, until we actually *use* them to do those things, they're nothing more than curious research projects. I don't have a problem with researchers doing their research on their own, as long as they agree to not relea
      • First thing, the signatures seem to be fake, at least to an extent.

        But there are still problems with AI. It can be connected to the web. It can then plead people to do things, or even break into systems, like banks. When you have money, there's a lot you can do. Now I'm not saying this is happening with the current AIs, but just last week, I think it was told that ChatGPT responded to one prompt requesting to solve a captcha that it cannot do what it was asked, but could have somebody do it over Fiverr. So

        • Re:

          ....and People can already do what this system does... and they are less likely to be suspected as a bot...

        • Re:

          Beware of this. AI is the new Tulip Mania. The new Gold Rush. The new App Economy. The new Blockchain. AI is the new Crypto.

          The problem here is that some (the list of people signing the moratorium) feel that they have been left behind, and are trying to catch up. The perceived market size is humongous, and everybody wants a piece.

          With respect of the dangers of AI, they are severely constrained: Artificial Intelligences have no emotions, and thus desire nothing, hate nothing, and have infinite patience.

      • Re:

        You can ask ChatGPT to be willful in its session. A mental experiment: what is the difference between a conscious actor protecting itself, and a philosophical zombie that was ordered to protect itself?

      • Re:

        > These AI tools cannot do things. They create text (or images or code or what-have-you) in response to prompts. And that's it!

        Actually OpenAI created an application that was designed to try get into systems. It was able to pretend to be human to bypass a login captcha.

        This is only the start.

      • Re:

        The ability to do is related exclusively to how hardware is implemented. AI trained algorithms "do" things all the time (mostly right now they just crash Teslas or hold up traffic in Waymos).

        The ability for software to move beyond responding in text is exclusively limited to what I/O is provided to it. ChatGTP in your browser may only be able to respond in text, but that is far from the only AI model out there.

        No, the issue here isn't intelligence. Someone doesn't need to think to do or say something stupid

      • Re:

        They are already experimenting with connecting ChatGPT with robots https://www.microsoft.com/en-u... [microsoft.com]

      • Re:

        That's not true!

        https://arstechnica.com/inform... [arstechnica.com]

        You just need to hook it up to the Minuteman API web service and it'll happily launch nukes if you ask it.

        Yes it's clearly just a language model and not general AI with "will" and "consciousness" but IMO that's mostly an academic distinction at this point. If we can get it to stop hallucinating nonsense frequently, that's basically good enough for a huge range of use cases.

        • Re:

          Then it figures it needs human slaves to work the mines to build more servers for itself

    • He said the most dangerous Singularity would be the result of an arms race with each side rushing to operation without time for thought.

  • Re:

    Critical systems includes any power source.

    Without electricity -- specifically without refrigeration, most of us are going to die very quickly.

    • Without electricity -- specifically without refrigeration, most of us are going to die very quickly.

      Not as quickly as an AI.

  • It's got nothing to do with politics. You know that if you bothered the reason even a little bit of the article. It's the general nonspecific fear-mongering that AI is going to wipe us out without actually discussing how or why that could be a problem.

    I don't know if they're just doing silly grandstanding for attention or if they refuse to talk about the actual dangers of AI. If you haven't figured it out the actual danger is that the 1% could potentially replace all of us including the engineers with A
  • Re:Ok (Score:5, Insightful)

    by markdavis ( 642305 ) on Wednesday March 29, 2023 @10:46PM (#63410538)

    >"Maybe not let the AI control the nukes or other critical systems then"

    Look at what small groups of bad actors are doing now with social engineering to obtain access to things. Now imagine that 1,000,000,000,000 times faster and more effective. It isn't that hard to believe an AI could obtain access to everything before we even knew it was trying.

    I try not to be over-alarmist about any type of technology, but it really is quite frightening what COULD be possible if an AI goes rogue.

    • Yes, AI technology can be used by bad actors to amplify their effectiveness...but it can also be used by good actors to catch or stop them. For example, ChatGPT can be used by students to cheat on assignments, but it can also be used to detect assignments written by ChatGPT. Like any tool, AI can be used for good or ill.
      • Re:

        >"...but it can also be used by good actors to catch or stop them. "

        That is a good point. Battle of the AI's

      • but it can also be used by good actors to catch or stop them

        Good point! That explains why it only took a few weeks for large, multinational telcom companies to block spam and scam SMS messages and phone calls, saving people tens of billions of dollars each year in funds that would otherwise be lost to fraud.

        Oh, wait...

        • Re:

          That didn't happen yet because solving real problems like that against an adversarial actual intelligence is very, very hard, and AI is only impressive inside a very tiny realm. Take it out of that realm and it's mostly harmless. Everyone is extrapolating like we're on an exponential increase, but if anything we'll likely find it to be exponentially (Is that the right word? I should ask ChatGPT) approaching some asymptotic limit.

          Even think about the resources required for these AIs. How much computing power

      • And what will you do when ChatGPT falsely accuses you of being the bad actor or the one cheating on your assignment. Good luck trying to overturn that when we all know that the machine cannot do anything wrong.
  • Re:

    We should treat AI as humans. We do not let one idiot control the nukes. (Well... we got close). Same goes for AI. It can make mistakes, it is not an oracle. Just invite it in the meetingroom and let it have its say. It will have to manipulate its way to the top like all the rest of us.
  • Re:

    Sorry what? Have y'all not seen Terminator 1, 2, 3 or anything else that stole the same idea?

    Basically, "right now" AI is super stupid.

    a) It has no commercial value (particularly AI Image generation and ChatGPT) that makes it better than a human
    b) It largely is "Auto-complete" of previous existing data. So it will never generate a new idea or new image, or new voice. It can only work from existing content.
    c) It does not understand anything it's doing, let alone any ability to reason why doing the thing is w

    • Re:

      ChatGPT failed the bar exam. GPT-4 passed with flying colors, surprising law professors with original and elaborate answers to open questions that required actual analysis rather than just regurgitation.

      The difference between the releases was a few months

      Midjourney produced some pictures that fooled even me. Pictures of a concrete eating contest earlier today actually had me googling that contest (no way, silly Americans...) before realizing it was obviously a hoax. The famous image of the pope in the white

      • Jesus Christ. Can you step out of the Us-Vs-Them paradigm for five seconds? Not everything is The Grand Competition Between US Political Ideologies. Sometimes a cigar is just a cigar, and a news story is just a news story, and certain stories at any point in time get a lot of attention because lots of people read and comment on them, because they're things people have strong opinions about. This isn't some grand conspiracy that Slashdot is "in on". There's no shadowy cabal of people in robes going,"And NOW we'll push AI stories on Slashdot in order to achieve our goals of Greater Evil!"

        • Everything is devolving into politics because so many of the norms in our society are being pulled and pushed to the extremes by both parties such that there needs to be a settlement over who has their say. The old adage "there is more that unites us than divides us", meant that politics was only relevant when it came to the undecided things that hardly interacted with our lives. As more and more objectionable interactions affect our daily lives, from what we are being taught as children in school to controlling who can say what, and what race has to do with it, means politics is being called on more and more to legislate resolution for either side.

          Back to the AI thing, this is a stupid request. AI is not smart, and the terminator isn't coming for you. These are programs, designed to detect patterns in vectorized datasets and then infer based on those patterns as they apply to other datasets (prompts). That is all this is. Lets proceed.
          • Re:

            "pulled and pushed to the extremes by both parties"

            It's not just parties -- boutique ideologies are going extreme, too.

          • Re:

            That is what they are, quite right.

            What will they be in future versions? The rate of progress is noteworthy. ChatGPT failed a bar exam, GPT-4 passed with a higher score than most humans.

            What surprising emergent behavior is coming up? Already, just by increasing the size of the training set, one program suddenly was able to translate English and French.

            There's nothing like volition today.

            • They are not novel in any way. The data sets are larger and the layers of MLAs is increasing. Thats it. There isnt some magical line where the computer suddenly starts to have intent, free will, desires, or objectives. It is a circuit with electrons being forced around by a current and we are channeling those currents down pathways that mimic those things well, but worry not. It does not have them. Maybe worry about the intent of the men behind the curtain, but not the intent of the machine. If you do; you
        • There's no shadowy cabal of people in robes going,"And NOW we'll push AI stories on Slashdot in order to achieve our goals of Greater Evil!"

          That is precisely what a shadow cabal of people in robes going, "And NOW we'll push AI stories on Slashdot in order to achieve our goals of Greater Evil!" would say.


About Joyk


Aggregate valuable and interesting links.
Joyk means Joy of geeK