The Year AI Conquered Our Imaginations—Before It Eventually Conquers the World

Generative AI wasn’t invented in 2023, but 2023 was the year it became a hit. Now we’re left to wonder where tech’s first exciting innovation in a while will lead humanity—and the robots themselves.

By Brian Phillips Dec 22, 2023, 6:20am EST

Big Tech needed a win. If you’re going to understand artificial intelligence in 2023—and you should know up front that you aren’t going to understand it (no one understands it, including the people whose job it is to build and explain it; but let’s say you want to try)—here is your crucial piece of background information. Tech was on a losing streak. It is absurd to say that an industry is on a losing streak when the leaders of that industry are constantly in the news for building atmospherically regulated undersea bunkers where they can stroll through their kelp gardens sipping immortality serums after civilization collapses. Still. At the end of 2022, the metaverse was a bust. Cryptocurrency had been exposed as a kind of weird dating app designed to match the worst people in the world with the most gullible. (Often, these were the same people.) Tech needed its next new thing, and it needed it quick.

Enter ChatGPT, which launched in November 2022. I should say here that I am, by habit and inclination, an early adopter—nerd-adjacent, if not a card-carrying nerd. I was the grade school kid who learned to code in green text during the floppy disk era; I was the wraithlike lurker waiting outside the Apple Store the day the first iPad dropped. I remember the sense of giddy possibility that used to imbue the gadget space, the feeling that you would swipe your finger across a touchscreen—a touchscreen!—and whatever happened next would be cool and fun and useful and like nothing you’d ever seen. I am wired (ha) to love and appreciate this stuff. But by November 2022, the idea that I could be wowed and delighted by a tech product was beginning to seem quaintly historical, like Vaudeville and bowler hats.

By November 2022, social media was making me ill. My light bulbs were spying on me. My Crock-Pot wouldn’t cook soup until I gave it my Wi-Fi password, and then it would cook it wrong, and when I Googled “how to make my Crock-Pot stop fucking up the soup,” I’d get zilch about Crock-Pots but 900 fake-review websites stuffed with affiliate marketing links for a hummingbird feeder I briefly thought about buying two years earlier. Truth no longer existed, yet I somehow needed 37 apps to watch Friends. Outside, I was sure I would be killed by a self-driving car; the unscathed occupant of that car would then turn the video of my death into a far-right meme, which Elon Musk (ugh) would repost on X (ugh) under the caption “on fleek,” a phrase which he had just heard for the very first time and would soon be discussing at length in a live audio chat with Brett Favre and former CIA director Mike Pompeo.

And then, within this fallen world, ChatGPT appeared. Here, at last, was something genuinely new and exciting: a chatbot that could respond to complex inputs in ways that called into question the very nature of intelligence and that could also sort of write fart jokes.

ChatGPT, a product of the San Francisco company OpenAI, isn’t an actual consciousness, in the Data from Star Trek sense. It’s a generative language bot that has been fed millions of human sentences and programmed to recognize patterns in them; all it is doing is making probabilistic assessments of which words belong with which other words in the context of specific prompts. But interacting with it is unlike interacting with any app that came before it. Other recent tech breakthroughs had looked like solutions in search of problems—it was never remotely clear what anyone was supposed to do in Mark Zuckerberg’s metaverse, and honestly, regular money works fine?—but ChatGPT and the wave of AI-powered chatbots were addressing a real human emergency, and it was instantly clear what this emergency was. The emergency was that we—humanity—had no easy way to rewrite old Simpsons episodes in the voice of Virginia Woolf. Now, suddenly, miraculously, we had one. “Did you say life is a luminous halo, Skinner—a semitransparent envelope surrounding us from the beginning of consciousness to the end?” “No, sir, I said ‘steamed hams.’”
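
(If you’re curious, that “probabilistic assessment” can be sketched in a few lines of Python. What follows is a toy bigram model, a drastic simplification for illustration only; the real thing runs an enormous transformer network over subword tokens, but the underlying idea of predicting the next word probabilistically, with no notion of truth anywhere in the machinery, is the same.)

```python
import random
from collections import Counter, defaultdict

# Toy illustration only: count which words follow which other words in
# some training text, then sample the next word in proportion to those
# counts. ChatGPT uses a transformer over subword tokens instead of a
# word-pair table, but it is likewise predicting the next token
# probabilistically rather than consulting any notion of truth.
training_text = "life is a luminous halo life is a semitransparent envelope"

follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def next_word(word: str) -> str:
    candidates = follows[word]
    # Weight each candidate by how often it followed `word` in training.
    return random.choices(list(candidates), weights=list(candidates.values()))[0]

print(next_word("is"))  # prints "a" -- the only word ever seen after "is"
```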

And, OK, sure, the chatbots did (to sound like The New York Times for a second) raise ethical questions. To function at all, generative AI had to be trained on massive datasets, and the AI companies appeared to have assembled those datasets mostly by vacuuming up tens of thousands of copyrighted books on the legal basis of, “It’s fine. Don’t worry about it. Whatever.” Image-generating bots like Midjourney, Stable Diffusion, and OpenAI’s DALL-E did the same with copyrighted artwork, meaning that every doodle Midjourney drew and every sentence ChatGPT wrote were arguably acts of plagiarism. Then there’s the environmental cost: The computing power required to keep these machines running is astronomical, and is it worth vaporizing Earth’s fresh water just to have a robot answer your work emails for you? (My head says, “No.” My heart says, “When was the last time I swam in a lake?”) Finally, there’s the uncomfortable fact that people both inside and outside AI developer communities keep flagging the same minor risk, namely that the software might metastasize into a malevolent, unstoppable entity that would eradicate the human species and destroy the earth.

Concerning? Yes. Totally. We are all extremely concerned. That is why ChatGPT had only (checks notes) 100 million users nine weeks after its launch.

Generative AI wasn’t invented in 2023, but 2023 was the year it became a hit. From the standpoint of Silicon Valley’s idea of itself, it probably matters less whether the technology is profitable yet (it almost certainly isn’t) and more that it’s everywhere. It’s the big, hot thing. Tech companies might have been going through a painful contraction, they might have spent the year laying off tens of thousands of workers, but they are—in their view—once again writing the script for the future. If that script happened to have been stolen from thousands of human artists and writers, well, check your dataset for a sentence about what you can’t avoid when making an omelet.

AI spread through 2023 with surreal speed, the speed of a dream unfolding. The scene changes were instant and illogical. In January, we were all tapping away at ChatGPT, making our little jokes. By December, the pope was calling for an international treaty to regulate AI, warning that the world risked falling into “the spiral of a technological dictatorship.” By December, we were all reading news stories with ledes like: “Swarms of killer drones are likely to soon be a standard feature of battlefields all over the world. This has … spurred concerns about the prospect of eventually turning life-or-death decisions over to artificial intelligence programs.” What happened in between the jokes and the drone swarms? Will the drones be programmed with the voice of Raymond Carver?

This was the year the Times ran an article called “Using AI to Talk to the Dead.” Did you know that people are now using chatbots in an attempt to pierce the veil of mortality? Did you know that some dying people are now having their voices and likenesses transformed into AI-powered video avatars so that their bereaved loved ones can continue speaking to them after they’re gone? Did you know that one of these people was the actor Ed Asner? It’s all true. Given access to the right screen, you too could be having a conversation with the digital revenant of Lou Grant from The Mary Tyler Moore Show.

This was the year Hollywood decided it could slash labor costs by using computers to write scripts and replicate actors with, essentially, deepfakes. (These plans were scuttled, at least for now, by the actors’ and writers’ strikes.)

This was the year I bought an audiobook of a Blandings Castle novel by P. G. Wodehouse and discovered it had been narrated by a robot. The robot’s comic timing was dismal; that said, a failed robot audiobook narrator would make a great Wodehouse character if the late humorist could somehow be resurrected in 2024, as … I guess he can be?

This was the year Microsoft, which has invested $13 billion in OpenAI, tried to replace its boring old Bing search engine with an AI chatbot, one that immediately started insulting and gaslighting its users, and one that also did not know what year it was.

This was the year Elon Musk (ugh) unleashed an AI-powered chatbot called Grok (ugh) on X (ugh), only to find that Grok spoke up for trans rights when questioned. This did not endear Grok—poor, brave Grok!—to a significant portion of Musk’s user base, men who think gender is in the Constitution and that the Iron Man movies are a metaphor for fighting the woke mind virus.

This was the year OpenAI announced it was investigating reports that ChatGPT was acting “lazy” and refusing to complete basic tasks, leading to serious speculation that the bot might have accidentally contracted seasonal affective disorder.

“Model behavior can be unpredictable,” OpenAI tweeted. OK. Understood. How are those drone swarms coming?

Above all, this was the year a large and public rift opened within the AI community over the question of whether AI’s possible costs to humanity outweighed the benefits. This debate came to a head in November, when the CEO of OpenAI, Sam Altman, an AI optimist, was unexpectedly forced out by the company’s board; Microsoft almost immediately announced it was hiring him to lead an advanced AI research team, but before he could sign the paperwork, OpenAI brought him back. OpenAI had originally been conceived as a nonprofit that would work to ensure that artificial intelligence technology was developed in responsible ways, and this whirlwind breakup-reunion arc was widely interpreted as a sign that AI’s rapid expansion in 2023 had put that mission under extreme stress. The board was torn, and the fracture reflected the weird mix of idealism, corporate greed, curiosity, and terror that swirled through AI discourse in general.

On the one side were techno-optimists who foresaw a utopian future. AI would eliminate workplace drudgery, diagnose diseases more effectively than doctors could, and save humanity from one of its most loathed burdens: paying writers for anything, ever. On the other side were techno-doomsters who felt less cheery about the prospect of an exponential increase in online misinformation (now easy to generate with one click), widespread job losses, and of course the fleets of airborne murder bots with unpredictable personalities. (Unpredictable enough to pay a writer? Let’s not find out!)

If you were an optimist in a position of power, you wanted to forge ahead with AI development at top speed, which would probably make you obscenely rich. If you were a doomster, you wanted to slow the rate of development until the risks could be addressed, which would make you less obscenely rich but would also not make you the inhabitant of a postapocalyptic planet where all the news sites said male-pattern baldness could be cured by ivermectin. You could say people were selfish on both sides.

One of the strangest things about AI in 2023, and one of the best things from the perspective of Big Tech’s self-image, was the way people talked about the strangeness itself. Normally, when a tech product is flawed, the flaws are criticized, and the criticism makes the people who produced the product look bad. That’s clear enough, right? A few months ago, I bought some fancy noise-canceling headphones. Almost as soon as I took them out of the box, they developed a persistent loud hiss in the left ear cup. This flaw did not make me think the headphone designers were geniuses; it made me think they were clowns who should have their headphone-building tools expropriated by the state. And this was reflected in the angry and contemptuous way I spoke about them, mostly to my dogs, who lost a lot of respect for the manufacturer, believe me.

But because of the unique nature of generative AI, even its most glaring shortcomings have a bizarre way of making it seem more glamorous. Take ChatGPT. ChatGPT lies constantly. Have I not mentioned this? Maybe “lies” is the wrong word. ChatGPT has no concept of the truth. It knows only which words seem like they should go together. If you ask it a question, it might give you the right answer, or it might string together a series of false statements so brazen they’d make a shameless 6-year-old cringe in embarrassment. Earlier this year, I was working on a project about small-time grifters. I asked ChatGPT to help me find sources on racetrack hustlers. My eyes lit up as it told me yarn after yarn about legendary con men like Miles Cuthbert, “the Lavender Kid,” who worked the Saratoga Race Course in the 1950s, and Julius Engelmann, a German immigrant who fleeced suckers by pretending to be an exiled Russian count. The stories were perfect for my project. They were also completely untrue. ChatGPT had conned me in my research about con artists. This is the true meaning of the word “metaverse.”

Under normal circumstances, this behavior would be seen as a problem, right? After all, this is a technology that some of the world’s biggest companies are injecting into their search engines. In June, a lawyer named Steven Schwartz was fined $5,000 for submitting a legal brief in federal court that he’d used ChatGPT to write. It was full of completely fictitious case law. Just outrageous, made-up nonsense that he handed to a federal judge, assuming it was all real. Which, admittedly, is kinda funny for us onlookers, but it’s not an ideal situation for a tech product to land one of its users in, surely.

However, because the major selling point of ChatGPT is its near-human facility with speech, no one who uses, works on, or reports on the bot can resist the temptation to anthropomorphize it. We see it as having a human personality, and we describe it, inanely, in those terms. (Within the AI industry, the word for what a bot does when it responds to a query with false statements is “hallucination.”) And what’s one of the signal attributes of human consciousness? Its unreliability. We forget stuff. We misunderstand. We make mistakes. So when ChatGPT gets its facts wrong, our brains don’t interpret this as the sign of a nonfunctional piece of computer code; to our brains, already busily anthropomorphizing our fun robot chatting friend, the errors make ChatGPT seem more like an actual person. It lies. It hallucinates. It gets “lazy” or “sassy.” What could be more relatable? It’s the perfect product for a stumbling tech sector: one that actually seems more futuristic when it doesn’t work.

And all that anxiety about an AI-fueled apocalypse? What does that do except make the AI industry seem more supremely important? It has the fate of the whole world in its hands! If the question is, “Are we building a viable product, or is this just another shitty bubble we can cash in on till it pops?” everyone leaves the meeting feeling deflated, even the people who think the product is good. If the question is, “Will our device bring utopia to the galaxy or supersede the human species altogether?” I think everyone leaves the meeting feeling pretty puffed up and pleased with themselves, even the people on Team Panic. The moment that question is asked, everyone in the room becomes a historic figure, which is why there’s a subtle vibe of grandiosity around even the movement to treat AI with caution. This is the side Elon Musk professes to be on, by the way (ugh). It is not necessarily the more modest position.

So this is what 2023 gave us. A fun tech innovation—the first in a while—that might inadvertently trigger the apocalypse. A barely functional tech application—the thousandth one since lunchtime—whose downsides somehow made it even more alluring. It was an arc that started with the silliest jokes and ended in the very darkest places. (Israel is currently using an AI targeting system to direct its bombardment of Gaza; the system is called “the Gospel.”) Very little about the story was predictable, and very little about it is likely to be predictable in 2024. But the Sam Altman saga at least demonstrated that one thing in tech never changes. In a clash between two competing sets of ideals, the one that wins is the one that says it’s OK to take the money.
