7 speed bumps on the road to AI

source link: https://www.infoworld.com/article/3692290/7-speed-bumps-on-the-road-to-ai.html

Artificial intelligence is rife with practical and ethical dilemmas, and now they're coming home to roost. Here are seven unavoidable questions about AI.

By Peter Wayner, Contributing writer, InfoWorld | May 2, 2023 2:00 am PDT

[Image: Android head under construction. Credit: Gwoeii/Shutterstock]

Everyone is amazed as the AI butterflies emerge from their cocoons. Large language models are poised, learned, and adept at communicating in a wide range of languages and styles. The art models can produce pretty much anything that we can dream of and explain in words—the wackier the better. Scientists are teaching AIs to compose music, explore space, and (much to our worry) perform repetitive tasks traditionally done by paid workers.

For all these amazing results, serious problems are lurking just under the surface. Perhaps we haven’t noticed yet because we’re still in a state of awe. Down the road, we will have no choice but to grapple with the material and ethical dilemmas raised by AI. Here are seven issues that we won't be able to ignore when the wonder starts to fade.

Resource scarcity

Most large AI models rely on massively parallel computations that are accelerated by GPUs or specialized chips such as TPUs. The same hardware is also coveted by cryptocurrency miners, video meeting participants, and gamers, and in the last few years its cost has soared as demand has outstripped supply. To make matters worse, many teams rent capacity in the cloud, and sometimes the cloud just can't expand fast enough. Building out AI models to their full potential will take plenty of hardware, and it won't be cheap.

Hardware isn't the only scarce resource required for a successful AI rollout. Big AI jobs consume lots of electricity, which not every region can supply. Between geopolitical conflicts and fickle renewable sources, simply securing enough electricity at a predictable price is a challenge. Some cloud providers have raised prices in certain regions just to cover the bigger power bills brought on by geopolitical turmoil.

AI ethics

There are many issues and debates that humans know to avoid in certain contexts, such as holiday dinners or the workplace. AIs, though, need to be taught how to handle such issues in every context. Some large language models are programmed to deflect loaded questions or simply refuse to answer them, but some users won't let a sleeping dog lie. When such a user notices the AI dodging a tricky question, say one that invokes racial or gender bias, they'll immediately look for ways around those guardrails.

Bias in data and insufficient data are issues that can be corrected over time, but in the meantime the potential for mischief and misuse is huge. And while getting an AI to churn out hate speech is bad enough, the plot thickens considerably when we start using AI to explore the moral implications of real-life decisions.

Global labor inequality

Many AI projects depend on human feedback to guide their learning. A project at scale often needs a large number of people to build the training set and adjust the model's behavior as it grows. For many projects, that volume of labor is only economically feasible if the trainers are paid low wages in poor countries. There's a deep debate over what is fair and just, but no one has found an economically viable alternative for larger projects. Just as the gemstone industry doesn't like to think about the gritty and dangerous work of mining, the AI industry doesn't have a simple answer to the cost of labor.

Bad feedback loops

Disinformation in the form of fake news and fake reviews has been with us for some time now, created for reasons ranging from politics to profit. In the case of reviews, vendors plant glowing reviews of their own products and scathing ones for competitors' products. Algorithms for blocking these bad actors are surprisingly complex and require constant maintenance. There's no free lunch, it seems.

But now imagine what happens when AIs start being used to produce disinformation. For one thing, the volume of fake information will increase exponentially. For another, there's a good chance another AI will pick it up and feed it back into the training corpus. Viral misinformation already pollutes our social networks. How much will these feedback loops amplify and corrupt our shared knowledge? Will we end up carefully curating training sets from texts written before the singularity?

The law

AIs learn everything they know by copying a huge corpus of text and images. In most cases, the humans who created this data were never told that their work would be folded into an AI model that might someday be worth billions of dollars. What happens when those humans start losing their jobs to artificial intelligence? Suddenly, they've all got plenty of time to lawyer up about licenses, copyright, and plagiarism. We can joke about just training an AI on the relevant case law, but an AI judge that can render a verdict in milliseconds is scarier than a human judge who takes years to weigh the issues. If we are counting on humanity and the legal system, it could be decades before we have a verdict.

Here's another scenario in the same vein. When an AI makes a mistake about historical events or present-day pop culture, it might be offensive, but no one is directly injured. When that same AI says something disparaging about a living person, it might actually be libelous. It's easy to imagine an AI gluing together scraps of sentences to say something very wrong about a living person. Now suppose that person has the means to hire a team of lawyers to exact revenge. Is the AI itself culpable? Or is the company that owns it? If there's a pile of money somewhere, the lawyers will find it.

Death and destruction

As far as I know, there are no obvious examples of evil AIs behaving like the villains in science fiction movies. Self-driving cars and factory machinery are known to make mistakes, but so far no artificial malice seems to be involved. Still, a foolish human eating breakfast or texting behind the wheel is somehow more acceptable (though no less potentially tragic) than a self-driving car that crashes. It seems we just don't know, yet, how to account for the possibility of serious injury or death caused by artificial intelligence.

So what should we do about it? Clever users have already begun to figure out where AIs tend to make mistakes. An AI may excel at retrieving obscure information, yet a simple task like counting to five can trip it up. But what if the cracks run deeper than mere counting errors? At the least, we might conclude that some jobs are not meant for AI, no matter how much we want the profit.

Great expectations

Humans tend to imagine that animals and AIs think just like we do. This may frighten us because other humans are often disappointing or downright dangerous. The real problem is that AIs have a distinctly different form of intelligence, which we have yet to understand. As a species, we have much to learn about AI's unique strengths and weaknesses.

In the meantime, the field of artificial intelligence is so hyped and buoyed by human optimism that it could never live up to our dreams. It’s not the scientists' fault that they can’t corral our expectations; even the corporations aren't to blame for capitalizing on them. It's our fault for letting our hearts and hopes outrun reality. The field is doomed to disappoint, if only because we’re expecting so much.

