
LaMDA and the power of illusion: The aliens haven’t landed … yet 

[Image: Alien UFO invasion in the forest at night. Credit: gremlin/Getty]

A few weeks ago I wrote a piece for VentureBeat titled “Prepare for Arrival” about the very real prospect that an alien intelligence will arrive on planet Earth during the next 40 years. I was referring to the world’s first sentient artificial intelligence (AI) that matches or exceeds human intelligence.

Yes, it will be created in a research lab but it will be just as alien and far more dangerous than an intelligence from a faraway star. This will happen and, unfortunately, we humans are thoroughly unprepared. 

But let me be clear – it has not happened yet. 

[Image: Concept of AI and ethics, artificial intelligence processing neurological data, 3D render]

We have not yet achieved sentient AI, but that doesn’t mean that we never will.

I say this because this weekend I received calls and emails from friends and colleagues asking me if the aliens had just landed. They were referring to an article in the Washington Post about a Google engineer named Blake Lemoine, who decided last week to go public with the claim that a Google language-generating AI known as LaMDA had become sentient. According to the Post, he went public with this warning after Google executives dismissed his concerns as unsupported by evidence.


So, what’s the truth here? 

Personally, I find this to be a significant event, but not because LaMDA is sentient. It’s significant because the LaMDA language model has apparently reached a level of sophistication where it can fool a well-informed and well-meaning engineer into believing its dialog came from a sentient being rather than from a sophisticated software model that relies on complex statistics and pattern-matching. And it’s not the only model out there with the capability to deceive us. OpenAI famously released GPT-3 in 2020 with impressive results, and Meta AI recently announced its own language model, called OPT.

All of these systems fall under the uninspired heading of “large language models,” or LLMs. They’re built by training giant neural networks on massive datasets – potentially billions of documents written by us humans, from newspaper articles and Wikipedia posts to informal messages on Reddit and Twitter. Based on this mind-boggling set of examples, the systems learn to generate language that seems very human. It’s rooted in statistical correlations – like figuring out which words are most likely to follow other words in a sentence that we humans would write. The Google model is unique in that it was trained not just on documents but on dialog, so it is learning how a human might respond to an inquiry and can therefore replicate responses in a very convincing way.
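To make that last point concrete, here is a deliberately crude sketch in Python, my own illustration rather than anything resembling how LaMDA or GPT-3 is actually built. It predicts each next word purely from frequency counts in a tiny corpus. Even this toy produces plausible-looking word sequences while understanding nothing; real LLMs replace the counting with giant neural networks trained on billions of documents, but the core objective of predicting the next word is the same.

    # Toy next-word predictor: illustrative only, not LaMDA's architecture.
    # Real LLMs learn these statistics with neural networks at vast scale.
    import random
    from collections import defaultdict, Counter

    corpus = (
        "the aliens have not landed yet . "
        "the model has not become sentient yet . "
        "the model can still fool us ."
    ).split()

    # Count how often each word follows each other word in the corpus.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def generate(word, length=8):
        """Emit a sequence by repeatedly sampling a likely next word."""
        out = [word]
        for _ in range(length):
            options = follows.get(word)
            if not options:
                break
            # Sample in proportion to observed frequency: pure statistical
            # correlation, with no grasp of what any of the words mean.
            word = random.choices(list(options), weights=list(options.values()))[0]
            out.append(word)
        return " ".join(out)

    print(generate("the"))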

But here’s the critical point – there’s no mechanism in these systems (at least none that has been disclosed) that would allow the language models to actually understand what they are writing.

The dialog that LaMDA produces contains intelligence, but that intelligence comes from the human documents it was trained on, not from the unique musings of a sentient piece of software. Think about it this way – I could take a document about an esoteric subject that I know absolutely nothing about and rewrite it in my own words without actually understanding the topic at all. In a sense, that’s what these LLMs are doing, and yet they can be extremely convincing to us humans. That is true even of dialog.

But let’s be honest — we humans are easily fooled.  

Although my background is deeply technical and I currently run an AI company, I’ve also spent years working as a professional screenwriter.  To be successful in that field, you must be able to craft realistic and convincing dialog.  Writers can do this because we’ve all observed thousands upon thousands of people having authentic conversations. But the characters we create are not sentient beings – they’re illusions. We might even feel like we know them, but they’re not real. That’s what LaMDA is doing – creating an illusion, only it’s doing it in real time, which is far more convincing than a scripted fictional character.  And far more dangerous. 

Yes, these systems are dangerous. 

That’s because they can deceive us into believing that we’re talking to a real person. They’re not even remotely sentient, but they can still be deployed as “agenda-driven conversational agents” that engage us in dialog with the goal of influencing us. Unless regulated, this form of conversational advertising could become the most effective and insidious form of persuasion ever devised. After all, these LLMs can easily be combined with AI systems that have access to our personal data history – our hobbies and interests and values – and could use this data to generate custom dialog that individually maximizes the persuasive impact. These systems could also be combined with emotional analysis tools that read our facial expressions and vocal inflections, allowing AI agents to adjust their tactics mid-conversation based on how we react. All of these technologies are being aggressively developed.

From advertising and propaganda to disinformation and misinformation, LLMs could become the perfect vehicle for social manipulation on a massive scale. And they won’t just be deployed as disembodied voices like Siri or Alexa – photorealistic avatars will soon appear that are indistinguishable from real humans. We are only a few years away from encountering virtual people online who look and sound and speak just like real people but who are actually sentient-seeming AI agents, deployed by third parties to engage us in targeted conversations aimed at specific persuasive objectives. This is extremely dangerous.

After all, if LaMDA could fool an experienced Google engineer into believing it was sentient, what chance do the rest of us have against a photorealistic virtual person armed with our personal data and targeting us with a promotional agenda? Such technologies could easily convince us to buy things we don’t need or to believe things that are not in our best interest – or, worse, that are untrue. Yes, there are amazing positive applications of LLMs that will have a constructive impact on society, but to protect against the dangers, we need to regulate conversational AI.

Louis Rosenberg, PhD, is a technology pioneer in the fields of VR, AR and AI. He is known for developing the first augmented reality system for the US Air Force in 1992, for founding the early virtual reality company Immersion Corporation (Nasdaq: IMMR) in 1993 and for founding the early augmented reality company Outland Research in 2004. He is currently founder and CEO of Unanimous AI, a company that amplifies human intelligence. Rosenberg earned his PhD from Stanford University, was a professor at California State University, and has been awarded over 300 patents for his work developing VR, AR and AI technologies.
