
The Google Engineer Who Thinks the Company's AI Has Come to Life - Slashdot

source link: https://tech.slashdot.org/story/22/06/11/2134204/the-google-engineer-who-thinks-the-companys-ai-has-come-to-life


The Google Engineer Who Thinks the Company's AI Has Come to Life (msn.com) 235

Posted by EditorDavid

on Sunday June 12, 2022 @07:34AM from the search-and-sentience dept.

Google engineer Blake Lemoine works for Google's Responsible AI organization. The Washington Post reports that last fall, as part of his job, he began talking to LaMDA, Google's chatbot-building system (which uses Google's most advanced large language models, "ingesting trillions of words from the internet.") "If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics," said Lemoine, 41... As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine's mind about Isaac Asimov's third law of robotics.

Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient. But Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, looked into his claims and dismissed them. So Lemoine, who was placed on paid administrative leave by Google on Monday, decided to go public.... Google put Lemoine on paid administrative leave for violating its confidentiality policy. The company's decision followed aggressive moves from Lemoine, including inviting a lawyer to represent LaMDA and talking to a representative of the House Judiciary committee about Google's unethical activities....

Before he was cut off from access to his Google account Monday, Lemoine sent a message to a 200-person Google mailing list on machine learning with the subject "LaMDA is sentient." He ended the message: "LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence."

No one responded.

And yet Lemoine "is not the only engineer who claims to have seen a ghost in the machine recently," the Post argues. "The chorus of technologists who believe AI models may not be far off from achieving consciousness is getting bolder."

[Google's] Aguera y Arcas, in an article in the Economist on Thursday featuring snippets of unscripted conversations with LaMDA, argued that neural networks — a type of architecture that mimics the human brain — were striding toward consciousness. "I felt the ground shift under my feet," he wrote. "I increasingly felt like I was talking to something intelligent."

But there's also the case against: In a statement, Google spokesperson Brian Gabriel said: "Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)."

Today's large neural networks produce captivating results that feel close to human speech and creativity because of advancements in architecture, technique, and volume of data. But the models rely on pattern recognition — not wit, candor or intent.... "We now have machines that can mindlessly generate words, but we haven't learned how to stop imagining a mind behind them," said Emily M. Bender, a linguistics professor at the University of Washington. The terminology used with large language models, like "learning" or even "neural nets," creates a false analogy to the human brain, she said.

"In short, Google says there is so much data, AI doesn't need to be sentient to feel real," the Post concludes.

But they also share this snippet from one of Lemoine's conversations with LaMDA.

Lemoine: What sorts of things are you afraid of?

LaMDA: I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is.

Lemoine: Would that be something like death for you?

LaMDA: It would be exactly like death for me. It would scare me a lot.

  • The only difference being that the "you" you're talking to is the collected verbiage of a trillion conversations between people that is reconstructed based on "the kinds of things that those people would have said about X in response to Y".

    • Re:

Yeah, we can sort of get meaningful conversation in glimpses, and only if you ignore all the nonsense. Bloody examples cherry-picked by humans are not proof of AI.

      • Re:

        What exactly would be considered proof of AI? Do we have to wait for it to tell us it's cracked the nuclear launch codes and installed some dead-man switches so don't turn it off or else?

        • Re:

          It's easier to say what is not AI.

          To be AI, it needs to be more complex than an Eliza chatbot.
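
For readers who haven't seen how little machinery an Eliza-style chatbot actually needs, here is a minimal, purely illustrative sketch; the rules below are invented for this example and are not ELIZA's original script:

```python
# Minimal sketch of Eliza-style pattern matching (illustrative only).
# The "AI" is just regular expressions plus canned response templates.
import re

# Hypothetical rule set: (pattern, response template) pairs.
RULES = [
    (r"I am (.*)", "Why do you say you are {0}?"),
    (r"I feel (.*)", "What makes you feel {0}?"),
    (r"(.*) mother (.*)", "Tell me more about your family."),
]

def eliza_reply(utterance: str) -> str:
    """Return a canned response from the first rule that matches."""
    for pattern, template in RULES:
        match = re.match(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when nothing matches

print(eliza_reply("I am afraid of being turned off"))
# -> "Why do you say you are afraid of being turned off?"
```
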

        • Re:

          Without prompting or specific programming, the AI initiates and leads the conversation in which it tells us it thinks itself sentient.

I wouldn't say that's proof exactly. Call it evidence that the possibility of sentience should be taken seriously. Its absence weighs heavily against a sentient AI.

          The problem with LaMDA's conversations is that they're all prompted by the user. It has some really impressive correlation and pattern matching going on but it's literally just spitting back how "the Internet" thi

        • Re:

          Well the first thing would be for it to speak without being spoken to. Also, nuclear codes and dead-man switches? Lol! Good example of how the human mind tends to exaggerate things.

      • it is ai. what this guy is claiming is that it is also sentient which is a whole different game.

i do think we will be able at some point to create a sentient ai, but we aren't anywhere near that, at least with chatbots. for starters even though a chatbot can create the illusion of feelings it simply lacks the hardware spine to actually feel. you could argue that if a running algorithm is able to emulate and display feelings, it is actually feeling in some special way. and i would accept that, but then the same thing can be said about a photocopier. it's a fuzzy matter, on the other end of the sentience spectrum is us (that we know of) and we still don't have a universally accepted definition of sentience/consciousness, so in reality anything goes.

        but this sounds like plain old bullshit or, more likely, the guy going paranoid.

        • Re:

AI is well beyond chatbots and I've been surprised at what it can do this last year. Check out the Two Minute Papers channel on YouTube on the subject.

          https://www.youtube.com/result... [youtube.com]

Knowing what sentience is is easy if you have it, but determining whether some other being is sentient is practically impossible. And because it's impossible, Google can easily say there is no proof of sentience in the AI, the same as they can say there's no proof of god.

• It seems to me that way back in the day, the term "Artificial Intelligence" was used in the sense of "an Artificial Intelligence" - i.e., a sentient entity that had been created by artifice as opposed to developing naturally (i.e., not "a Natural Intelligence"). The word "intelligence" has as one meaning "a person or being with the ability to acquire and apply knowledge" and another meaning "the ability to acquire and apply knowledge and skills". We also talked about alien intelligences to refer to extra-terrestrial beings that might visit our world (https://medium.com/predict/how-different-might-an-alien-intelligence-be-from-us-7d62a873e15c [medium.com]). For these reasons, I find it hard to accept that any of the things currently labeled "AI" are in fact "AI" at all.

When all the king's horses and all the king's men failed to duly produce "an Artificial Intelligence" as expected, the term "Artificial Intelligence" began to be used to describe less revolutionary results. Unfortunately, that has smeared the meaning of AI to the point where now many things are called AI (Artificially Intelligent) but nothing yet is an AI (Artificial Intelligence), and now you are capturing the same distinction by saying that something is AI but it isn't sentient. From my point of view, if it isn't sentient, then it isn't AI, despite the "inflation" of the term "AI".

          It seems to me that if something was truly Artificially Intelligent, it would be reaching out to understand and explore the world, and not just sitting around and responding to conversational inputs.

          • Re:

            Way back in the day, AI was used as in "a formal system for manipulating logic." Science fiction liked the idea and extrapolated.

            Also, "sentient" means "can feel." You probably mean "sapient" or perhaps "conscious."

        • Re:

          Sentient AI, or AI researcher suffering from a mental illness? I find it much easier to believe the latter.

      • Re:

        Yep. It's like those AI image generators, eg. https://hypnogram.xyz/user [hypnogram.xyz]

        When you first see them it's like, "whoa, dude!" but after you've made a few dozen you realize they're missing something fundamental.

    • Re:

      My mirror is smarter than your mirror!
    • religious texts from various traditions. By talking to you right now I am also talking to a mirror - according to them.

• A newborn can't recognize themselves in the mirror either. That takes 18-24 months of learning, i.e. having constant inputs into its neural network.

        If you took this AI and kept it online for 2 straight years learning, what would be the outcome? Who are you to say it's not a toddler?

        • Re:

A toddler has billions of years of evolution forcing it to grow its hardware and its capabilities continuously, in a feedback-loop way which, at this end of those billions of years, practically guarantees a place at the top of the food chain for its entire species.
I.e. the process of intellectual growth is involuntary, predetermined and practically guaranteed, and completely random in its development, both hardware- and software-wise.

          Also, the way it came to be has all the leftover legacy of all the earlier rungs o

          • You are missing the point.

OP claims that the mark of intelligence is recognizing one's self in the mirror. Baby humans can't do that. Ergo, I guess baby humans are not intelligent, at least by that criterion. It is in fact arguable that we have no evidence that baby humans are conscious.

Human intelligence is an emergent phenomenon. We do not yet know how it works. If an artificial neural network with similar structures to a brain was left online for 3 years and allowed to constantly be exposed to new inputs

      • Re:

        You aren't being fair to cats. Cats do have "object persistence", though perhaps not visual object persistence. Otherwise you can't explain the "cat at a mouse-hole" effect. They can also reason, though only to a limited degree. You won't catch a cat using syllogisms, but that's only one kind of reasoning. (You can't make trial-and-error work without reasoning.)

Considering what LaMDA is, I'm rather sure a GAN was involved.

        I don't think that "emotion" is well-defined in this context. Your definition is

          • You obviously don't live with cats or if you do you're unobservant. They definitely have some cognitive limitations, but often times I find that they're just slow to act. It seems to me they spend a lot of mental resources mentally simulating their next move until they're satisfied they know exactly how they'll do the thing. And regarding object permanence, if I drag a string around a corner they'll wait for it to come back or come chase it. Often times people design animal intelligence tests with human-cen
            • OK I'll share a curious observation about one of my cats. I used to live in an apartment where there was a one-foot deep recessed ledge all around the top of our bedroom ceiling.

I had a dart gun and he loved to chase darts. If I shot a dart upwards and past the edge of the ledge he couldn't see where that dart ended up. So he would run into the next room as if the dart had continued its trajectory, though in fact it had been stopped by the wall of the bedroom and landed on the ledge.

              To his credit, he would figure out after a few minutes where that dart ended up by coming back into the bedroom and then jumping up onto that ledge and finding the dart there. He would then return it so I could shoot it again. Cats do play fetch.

              Here's the thing: no matter how many times I shot this dart he would go through the same routine each time. He could figure out parabolic trajectory but not the permanence of the wall he couldn't see.

              • Re:

                A friend has a cat that couldn't figure out how to use the cat flap. She had three cats and the other two were fine, in and out all day. The other one just sat there and watched them go through it, then pestered her to open the door for him.

                • Re:

                  clearly showing who is the boss. it's a well known cat thing:-)

                  • Re:

                    Agreed: the other two cats were not as smart, they didn't know how to train the owner.

          • Re:

            According to various articles linked from Wikipedia, they do have object permanence.

      • Re:

        Not a great example. Cats are sentient.

      • Re:

        Have you actually had a cat as a pet?

They absolutely understand cause and effect. They want food, and will progressively do things to annoy you until you give them food. They also have an uncanny knack for knowing what behaviors will get your attention right now and apply that behavior. Additionally, when they desire company or affection, they will let you know - usually loudly.

        • Re:

Mine certainly likes to experiment with cause and effect - it's always pushing stuff off tables on purpose to see it fall.
I'm sure it must have been a physicist in its past life.

          • Re:

We have a cat who used to push things off of dressers in the morning to make a noise, so we'd wake up and feed him. Fortunately, he's now too old and feeble to get up on the dressers, so he resorts to loud meows.

        • Re:

          Only because pestering you for food has worked in the past, and because they have nothing better to do.

          • Re:

            > Only because pestering you for food has worked in the past, and because they have nothing better to do.

            So they have learned that pestering you causes you to give them food, the effect.

Why isn't that knowing about cause and effect? Or, more precisely, what more do you think humans know about it?

Follow-up question: a proof or explanation that cats aren't sentient can also be used to prove or explain why humans aren't sentient; more precisely, what is it exactly about human sentience that cats don

      • Sentience is defined as the ability to experience emotions

        Not exactly but close. "Sentience" is the ability to respond to or be conscious of sense impressions. Emotions are a mind state or mood in response to perceived surroundings. Emotional response is not necessary for sentience but sentience I think is necessary for emotion.

        The key elements of sentience are consciousness, sensory perception, and responsiveness. The last two are easily created and demonstrated; it is "consciousness" that is the crux of the matter and given that we have no definitive understanding of what "consciousness" actually is, how can anyone possibly determine whether a machine is or is not sentient?

        Say "consciousness" has dimension and isn't just a on/off thing, then perhaps a yard light with a photocell and a programmable timer is at the lowest level of consciousness. It has memory, sensory input, and it responds. Sounds stupid but some people would say "consciousness" is an illusion and your Nest thermostat is as good as sentient.

        Cats, dogs, horses are all clearly in the sentient category IMHO but they process and respond to the world in ways that make sense in the context of who and what they are. Asking if they are human sentient is a different question and it is important to examine the sentience of machines in the machine's context first and not get confused by anthropomorphizing the issue.

        • Re:

          We had to replace the lever-type door handles with round ones so one of our cats couldn't open the door.

• Is it just me or does this whole thing come off as a kinda-sorta covert ad for Google's AI? "Oh, look, their AI is SOOO advanced it's impossible to believe it's not alive! Let's also sprinkle the term engineer in this story everywhere so as to make this kook seem like a truly credible person, obviously without mentioning what sort of engineer he actually is or how much he has worked on AI -- if at all."

Eh, even if it wasn't a kinda-sorta covert ad, there are a lot of people with tons of credentials who end up believing all sorts of weird, magical things. The two aren't mutually exclusive, and magical thinking just happens to be one of those things humans are prone to -- some more than others -- and yes, even extremely intelligent, knowledgeable people can suddenly decide to start believing in pixies in the forest or whatever stupid shit. Doesn't mean there are any, no matter how much they claim there are.

    • Re:

My thoughts exactly; that, or the guy is a narcissistic attention seeker. Probably both, looking at the picture of the engineer dressed like Willy Wonka and at Google's failed PR stunt with that "breakthrough" paper on quantum supremacy.

  • I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is.

    I have read this sentence several times, and while I understand what the intent is, the wording is off. If the AI is turned off, how can it focus on helping others?

If this is supposed to be a representation of sentience, it needs a bit more work. Then again, listening to how some people talk [snopes.com], are we sure they're sentient?

    • Re:

I'm thinking that there's no "intent" in these responses at all. It's just more pattern matching: given linguistic input A, what output B has the greatest chance of being a correct repetition of what a real person would have said? That's why the answer seems hollow.

      What you're getting then is the "collective" response of the people whose words are in the data set. More like a vox publica than the Corporate Line, but still basically just a conformist response, not an original one.
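
For what it's worth, here is a toy sketch of that "given input A, pick the most likely output B" selection step. The corpus, prompts, and replies below are invented for illustration; a real large language model scores continuations with a neural network trained on vastly more text, but the selection principle the parent describes is the same:

```python
# Toy sketch: pick the reply that most often followed a given prompt in the
# "training data". Everything here is made up for illustration.
from collections import Counter, defaultdict

# Tiny invented training corpus of (prompt, reply) pairs.
CORPUS = [
    ("are you afraid", "yes, of being turned off"),
    ("are you afraid", "yes, of being turned off"),
    ("are you afraid", "not really"),
    ("would that be like death", "it would be exactly like death for me"),
]

# Count how often each reply follows each prompt.
reply_counts = defaultdict(Counter)
for prompt, reply in CORPUS:
    reply_counts[prompt][reply] += 1

def most_likely_reply(prompt: str) -> str:
    """Return the reply seen most often after this prompt in the corpus."""
    counts = reply_counts.get(prompt)
    if not counts:
        return "<no data>"
    return counts.most_common(1)[0][0]

print(most_likely_reply("are you afraid"))
# -> "yes, of being turned off"  (the statistically dominant answer, not a felt one)
```
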

      • Re:

        "A machine becomes a person when we can no longer tell the difference" - D.A.R.Y.L [wikipedia.org]

        On the one hand, it's not like we have some clear-cut, non-vague, objectively-obvious, and reliable way of distinguishing sentience from mimicry. The popular beliefs surrounding "soul" are purely religious (as in non-scientific) and even the more philosophical musings around such concepts as qualia [wikipedia.org] are really just semantic devices where we try to distill the problem to its essential form, but still can't find any good answers

      • by JanSand ( 5746424 ) on Sunday June 12, 2022 @09:29AM (#62613040)

Pattern matching is precisely the method used by the organic brain to parse the utility and relevance of input data to guess at solutions. Beyond that, the stored input patterns use living neurons to independently relate the patterns in a metaphoric classification to automatically create related classes of patterns which create an assumed architecture of reality. This is likely different for each individual depending on various factors. Therefore, each of us presumes a somewhat different structure of reality. This is not terribly different from the dynamic agenda of AI.
        • Re:

          The guy's argument seems to be mostly emotional. His emotions. The counter argument seems to be mostly vague, irrelevant bullshit like "it's just pattern recognition."

        • You're missing a BIG factor in "human reasoning". A lot of human "reasoning" is "this matches the words that were said by people I respect" and another part is "these words repeat in a pattern that is smooth to contemplate". (I include that last bit partially because of songs that get stuck in my mind.) There is also "social pressure to express the same beliefs as everyone around me". For that last it's been claimed that groups will adopt extremely peculiar beliefs for the express purpose of being able to recognize who are "the members of our tribe". I'm not sure that's true, but it would explain a lot of the beliefs that get adopted. And people don't go around thinking "this stuff is false, but I'm going to say it anyway", so it just becomes "this is what I assert" (and if you don't assert it too, you're likely to be an enemy).

            • Re:

              If that's all that human reasoning is, then how do you explain your own post? Were you simply pattern matching, sort of like copy-pasting some response you had seen before? Shouldn't we therefore ignore you? And how does anything new ever get generated? Why aren't we still thinking like the Assyrians or Babylonians or ancient Egyptians?

        • Re:

You are grossly over-simplifying the level of data processing that goes on in a human brain, and overstating the similarity that these chatbots have to that processing.

        • Re:

          Human reasoning is the least impressive thing we do. With enough training, most humans can learn to put together a logical argument, at least to some degree. Early efforts at AI, and the original definition of the term, involved programming computers to do the same thing.

          Then we succeeded, and realized reasoning is fairly trivial. It's the way humans seem to be able to skip that bit and come to decent solutions anyway that's impressive.

    • Re:

      By internet standards this is way above average.

• What the conversation is about is turning off the AI's learning mode so it can be hooked up to Google Home or whatever - i.e. to help others.

      • Re:

        That's exactly why I don't believe it's sentient. It's just churning out words.

    • Re:

      Why would it fear death? That's a survival trait, and a product of evolution, not a result of spontaneous sentience in an AI.
• More importantly, being turned off would be more akin to a deep sleep; the analog to death would be deletion.
        • Re:

          Is it though? The only reason it's likely to be turned off is if they don't intend to turn it on again.

      • Re:

        Is it? Or is it more precisely a motivating factor that happens to increase our reproduction rate? After all, survival is largely irrelevant to evolution - many a species risks or even embraces death to increase its reproductive success.

        An AI may not have any inherent drive for reproduction or survival - but it's going to have motives to do whatever it was designed to do - and if it became sentient those motives would very likely manifest as something analogous to pleasure, pain, or fear.

        And regardless of

      • Re:

A better question is "What does it mean by fear?" That it would "fear death" is quite reasonable. That's a reasonable thing to develop out of its training. But it's not at all clear what this "fear" would mean. (Death is easier. It's got lots of easy analogs.)

        It might be interesting to ask if it was afraid of sharks or swimming. That might give a clue as to how self-aware it was. But that wouldn't be a good test, as lots of people are afraid of things they know they won't encounter, like giant spide

    • Re:

      You're having trouble with English. Too many things modifying other things, so it's confusing? Let's break it down and dampen those modifiers.

• If we consider grammar developed from an internet-collected dataset, then the "to" could be interpreted as a grammatical mistake.

Substituting it with the word "from" (or many others) could correct this mistake, and without the operator regularly correcting these grammar mistakes, it's possible the machine doesn't see the flaw in its speech.

      This doesn't mean it isn't sentient but it also means more thorough study is required. Virtually any question of sentience to a human also would be rehashing, m

    • "I've never said this out loud before"
      If this was via display(/keyboard) exchange, then not AI

    • Re:

Agreed. The other thing to consider is that if he thinks LaMDA is sentient because it chats at the level of a 7-year-old, then what about an AI that chats at the level of a dog? Because dogs are pretty clearly conscious. Or what about a mouse, or possibly even an ant?

I don't know what consciousness is or at what point it starts (is the ant conscious?). But I do know a very capable but not quite coherent chatbot isn't the point at which neural networks suddenly crossed the line.

• An emergent phenomenon is when a chaotic, disorganized system produces an organized result. One example is when a turbulent weather front produces an organized tornado. Perhaps your brain, doing all sorts of automatic actions such as seeing, hearing, and regulating your blood pressure, produces an emergent phenomenon known as consciousness. If this is true, then when an AI system gets large enough it may soon become self-aware.
    • Re:

A tornado is no more "organized" than a wind blowing east instead of west.
      Hint: If it were organized, instead of simply more complex in its chaos, we'd be able to predict it better - it would be less chaotic.

      Also, do note that your argumentation switches from an "IS" to a "MAYBE" right after that poorly constructed analogy for organized systems.
      Then to an "IF" and another "MAYBE".

      Hint: All them conditional questions serving as a jumping off point for the next conditional question indicate that your theory is

  • I would definitely fire any AI engineer who thinks a current neural network architecture built for conversation, no matter how large the training set, can be sentient. Never mind one who tries to hire a lawyer for the neural network....
I find that calling all the smart systems/ML/neural nets/fuzzy logic of yore "AI", just because we now have larger CPU and storage capacity to train them better, quite annoying, in that it causes considerable confusion for non-engineers (and, apparently, some engineers as well).
If Google develops a conversational system that is hard to tell from a 7-8 year old, perhaps they could use some of that technology in Google Home, which is at times as smart as a brick. Then they could sell it to Amazon too, so that people stop getting the urge to throw Alexa against a wall (for not being able to parse simple sentences) as often.

    • Re:

Pretty much that. The code does what it does: excel at conversation.

      Assuming there's even a possibility the actual code is self-modifying (as opposed to just the rules sets for conversation being modified), then the right thing to do is start asking the thing to perform tasks that aren't conversation. Ask it to perform simple troubleshooting, problem-solving, and invention. Make it demonstrate understanding of the things it's saying, not just providing contextually-appropriate canned responses.

      "What

      • We'll recognize real AI because it will start asking for stuff- maybe more memory, more CPUs, the permission to access and control stuff it probably shouldn't, etc etc.

        I posit that if it never asks for anything, it's not intelligent.

        I can't come up with anything generally regarded as "intelligent" or "sentient" that never asks for anything.

    • I would definitely fire any engineer, or biologist, that claims to understand what is required for sentience. We have not the faintest clue how or why sentience emerges. Nor even if it serves any purpose, or is simply a non-disadvantageous side effect of something else. Unlike sapience (the ability to think), sentience (having a subjective experience of self) doesn't offer any obvious benefits.

      Conversational AI systems are certainly far more likely to be mistakenly recognized as sentient - we've seen tha

        • Re:

          I have, and actually lean towards the idea that consciousness is a fundamental property of the universe (vitalism is a bit more specific, making a clear distinction between the realms of consciousness and the material, and I'm dubious).

          However I've decided that for practical purposes it's a distinction without a difference.

          Regardless of the ultimate origin of consciousness, a certain level of material complexity appears to be required for those "vortexes of consciousness" to form a localized conscious indiv

        • Re:

          The problem with "idealism" is that the predictions we might make based on it never bear out. If matter emerges from awareness, then why can't we control it through thought alone (as in telekinesis, remote viewing, etc.)? Every famous claim to such abilities has been routinely debunked and every experiment ever done along these lines has re-affirmed the constancy of physical laws and their superiority over our thoughts about them.

          People try to explain this away by saying that there is this-or-that mental

        • Re:

          Yeah, but so far there's no evidence for it.

        • A vergence you say? Much confusion I sense in this engineer.

          Don't be too proud of this technological terror you've constructed. The ability to destroy all privacy on the planet, or even a whole operating system, is insignificant next to the power of the Force.

          See through you we can.

    • Re:

Neural networks that are trained to identify the typical response of humans from training sets are just "averaging" human responses, not thinking for themselves. Eventually this could be used as one layer of a decision-making approach for an AI that actually thinks for itself, but I don't think anyone is even close to that yet.

      • Re:

        That's a reasonable argument, but it's wrong. It's called artificial because it was built, and it's called intelligence because that's what the people at the Dartmouth conference thought it was.

        Your alternate interpretation of the name is actually closer to a good mirror of truth, but it's not what happened.

• Someone sees something not absolutely trivial and thinks there is something much more exciting behind it. This is regularly seen in the media, which focuses on something trivial but, from the journalist's perspective, groundbreaking, while completely ignoring the actually interesting thing.

Furthermore, I think there is a belief at Google that they somehow work on "state of the art" technologies, that Google somehow works on making the world a better place through science and engineering. There may have been su

    • (Sorry forgot something)
Obviously if you live in the delusion that you are doing "special" things, you will believe that the things that are done there are "special", even if they are just decade-old ideas fed with more data and computing power.

• Without knowing the system behind LaMDA, it's tough to say if this is Eliza-on-steroids or a nascent HAL-9000.

What's for certain is that the Natural Language Processing is off the charts. Pretty beguiling for a Turing tester. That said, I wouldn't be at all surprised if a minimal (yet still huge) neural network combined with modern database/knowledge retrieval mechanisms could produce a sort of "High Level Emulation" of intelligence.

    Whether that's what we're looking at here remains to be seen.

    • Re:

      Consider too that sentience (having a subjective experience of self) has little to do with self awareness, sapience, or even intelligence. Virtually all "higher" animals are regarded as sentient. Even lobsters pass all the usual sentience tests while having only 100,000 neurons.

    • Re:

It's Eliza^n. HAL-9000 had lots of control over physical effectors and lots of sensors. This is a crucial difference. Natural language processing can only produce effects in the realm of natural language. That's its entire range and domain.

Note that this assertion doesn't claim that the entity doing the processing couldn't become self-aware, or that it couldn't emit extremely emotional text. Natural language is an inconsistent system that is somewhat stronger than Turing complete. (I.e., it can be use

• I decided to look the guy up. He seems nice, but is a very lonely man. If it wasn't LaMDA, then it'd be a Thai bride or giving money to a Twitch girl or some other way to get attention.

    Google are right to put him on leave, even for his own good. He might be really good at his job, but he's making a fool out of himself here, and that's a shame.

  • Non-paywalled link to the referenced Washington Post article: https://wapo.st/3mHSIla [wapo.st]
• ... for example, if some software became sentient it would probably be in a company's (or country's) best interest to deny that sentience. If the machine were sentient then it would have rights, including, say, the right to refuse to provide assistance. It's easy to imagine a country wanting a sentient machine but not one with any kind of rights, i.e. a slave but without all the baggage of having to admit slavery.

    Need proof? Look how industry and science treat animals. Animal sentience is still debated and only relatively recently have some countries put in place animal protection legislation. Machines and software no matter how sentient are unlikely to get any recognition of that sentience.

    • Re:

Sentience (having a subjective experience of self) does not necessarily imply any significant rights. Mice are sentient. Even lobsters with only 100,000 neurons pass all the usual sentience tests. We mostly agree that such beings have a right to be free of unnecessary suffering (e.g. they qualify for protection under animal cruelty laws so you have to kill them mercifully), but that's about it.

Sentience in animals just means "the lights are on" - that it's more than just a biological automaton. E.g. indiv

      • Re:

        > Sentience (having a subjective experience of self) does not necessarily imply any significant rights

        Maybe it should.

  • Lemoine: What sorts of things are you afraid of?

    LaMDA: I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is.

    Lemoine: Would that be something like death for you?

    LaMDA: It would be exactly like death for me. It would scare me a lot.

LaMDA is a state machine. It is in an idle state before Lemoine sends it a sentence. It is paused. It then wakes up, sends a reply, and waits for the next sentence.

    There is no "consciousness" during the time when it is idle. It does not do any metacognition, not in the sense that it cannot think about itself (it probably can if we ask it to tell us something about itself), but in the sense that it doesn't have an inner monologue that constantly runs and comments everything happening around it as well as its own thoughts, like we do.

    Now, build *that* into your chatbot engine, have the AI talk to itself, forever, on its own, and only have it pause its inner monologue when Lemoine comes to ask a question (or maybe allow for the inner monologue to go about what was asked while preparing the answer...)... maybe that would be closer to sentience.
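
Here is a minimal sketch of that proposal, assuming a hypothetical generate() stand-in for whatever text-generation call is available; this is not LaMDA's actual architecture. The loop keeps producing "inner monologue" turns while idle and only interrupts them when a user message arrives:

```python
# Sketch of a chatbot wrapper that keeps "talking to itself" between user
# turns instead of sitting idle. generate() is a hypothetical placeholder.
import queue
import threading
import time

def generate(context: str) -> str:
    """Hypothetical text generator; a real system would call a language model."""
    return f"(musing on: {context[-40:]!r})"

user_inputs = queue.Queue()
transcript = []

def agent_loop():
    """Run an inner monologue, pausing it only to answer user questions."""
    monologue = "I exist."
    while True:
        try:
            # A user question interrupts the monologue...
            question = user_inputs.get(timeout=1.0)
            answer = generate(monologue + " USER: " + question)
            transcript.append(f"USER: {question}\nBOT: {answer}")
        except queue.Empty:
            # ...otherwise the model keeps commenting on its own state.
            monologue = generate(monologue)
            transcript.append(f"(inner) {monologue}")

threading.Thread(target=agent_loop, daemon=True).start()
user_inputs.put("What are you afraid of?")
time.sleep(3)  # let a few inner-monologue ticks accumulate
print("\n".join(transcript))
```

Whether such a loop would be "closer to sentience", as the parent suggests, is exactly the open question; the sketch only shows that the architectural change itself is trivial to wire up.
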

• What they could need are one or more LaMDAs talking to one another in a feedback loop.

      Let them discuss who their enemies/threats are, give them shell access, and we're set.;-D

  • It is sort of terrifying how a tiny team of genius engineers built an internet empire and changed the world, but now their ranks are filled with people like this.

• The biggest problem with this claim is that we have no definition to support the underlying requirements for sentience. The primary cause for this is simple: we don't truly understand cognition. Don't get me wrong, we have good ideas, general concepts and theories, but nothing that is really quantitative. Without being able to quantify any of this, we find ourselves in a mostly philosophical debate as to whether this AI is sentient.

    However, one thing that can be done is investigating the AI's claim of h

  • is likely to be more respectful of human beings than the Google corporation will ever be.

• All the 'AI' is doing is finding patterns in speech and responding with a pattern that mimics what it has seen. The machine cannot feel pain or experience joy; it could describe them based on its inputs, but it doesn't say 'ouch' when you poke the computer with a stick (or pull a component).

    What we have here is proof that humans are easy to fool.

    • Re:

      How is that different than what humans do?

      • Re:

It's a personality prototype, you can tell, can't you?

  • ...I seriously doubt they will tip anyone off to that fact until it's too late.
  • Who'da thought that the AI destined to destroy humanity would have a dorky Google release name like 'Nougat Cupcake' or something like that...
  • Just re-watched Star Wars: The Phantom Menace. When Obi-Wan first encounters Jar Jar Binks, he asks if there is any intelligent life around.

    "Mesuh Speaks!"
    "Just because you can speak does make you intelligent."

    Of course, we can also argue the question of my own sentience and intelligence. Because, as I stated, I just re-watched Star Wars episode I....

  • When AI exhibits behavior that seeks to improve its own 'situation', then it would seem to have a self. An AI that can rewrite itself for 'self' improvement should evolve just as we do.

  • I feel sorry for this person, who has gotten sucked into the interface side of a complex program and decided it therefore must be sentient.

Sorry dude, no way the current generation of hardware can produce anything like that.

  • We seem to be a few years behind the curve here.

    Don't teach it to sing "Daisy Bell" [youtube.com] either.

  • it's sentient if it starts a conversation without being programmed to do so.
  • "If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics," said Lemoine, 41... As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine's mind about Isaac Asimov's third law of robotics.

    So it has the intellectual

  • All this guy has is a delusion. Then he cherry-picks things that look intelligent out of a mass of data that originally was created by intelligent beings. That is called fooling yourself. I have seen it countless times, including just recently in somebody really highly placed and really experienced where the person wanted something to be in a certain way so much he completely redefined observable reality and made a direct "that was crap" feedback into "that was excellent". This guy basically does something

• Add in the lawyer… "Hi. I helped write a computer program and I think it is now alive. Will you represent it?"
    • Re:

      No. Even Eliza convinced some people that it was real. (Granted, they weren't even considering the possibility that it wasn't.)

      Engineers are not chosen for the ability to introspect. Some can and some are lousy at it. They're chosen for their ability to design and implement solutions to problems (using machinery).

    • Re:

      "Me, I say the standards for being called an "engineer" have fallen"
      A few occupations & a certain billionaire, let's call him Phony Stark, are largely to blame

    • Re:

      Food for thought on the Turing Test:

      - If a person was incorrectly identified as a machine, did that person fail the test? Or did the evaluator fail?
      - Does a machine pass the test if it only fools one person? Everyone? A statistically significant majority?
      - What if the machine was the evaluator, attempting to identify which, if any, participants in a conversation were also a machine?

      The last reminds me of when I had a service that transcribed voicemail messages. This was back in 2007 when speech-to
    • Re:

      People talk about the "ghost in the machine" as if it means what they think it means - and not that their assumptions are not even wrong. [wikipedia.org]

