
Artificial Intelligence: Do stupid things faster with more energy!


Why we need a new breed of leader in the data-fueled era

Multiple choice time! What’s the best kind of worker?

A) Reliable workers who carry out orders precisely, quickly, and efficiently.

B) Unreliable workers who may or may not feel like doing what they’re told.


If you think this is a no-brainer and reliable option (A) is the obvious answer, think again. It really depends on the skills of whoever’s giving the workers their instructions.

Reliable workers will efficiently scale up the intelligent decision-making of a good leader, but they will unfortunately also amplify a foolish decision-maker. Remember those classic café posters? “Coffee: Do stupid things faster with more energy!” When a leader is incompetent (or depraved), unreliable workers are a blessing. Can’t drag single-minded determination out of them? How wonderful! Things can get scary when zealots wholeheartedly pursue objectives set by a bad decision-maker.

The quality of the decision-maker matters! When a leader is incompetent or depraved, unreliable workers are a blessing.

If you have a fool in charge, unreliable option (B) is your best hope: nothing much gets done and the leader’s folly fizzles.

Computers are the ultimate reliable workers and they can grant your wishes at scale with breathtaking speed.

And now for the bad news! Computers are the ultimate reliable workers. They do only what they are told. No more and no less. They don’t think for themselves. They don’t think at all! They don’t want anything except what you told them to want. (In fact, neuroscientists talk about concepts like motivation — wanting — as a biological privilege. It makes no sense to say a machine really wants anything.) If you tell your computer to want to say “hello” to the world a million times, you’ll get no complaints. Expect me to do the same task for you? No way, I’m bored already.
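To make that concrete, here's a trivial sketch (in Python, purely as an illustration): the machine carries out the instruction exactly as written, a million times, and never gets bored or asks why.

```python
# The machine does exactly what it is told: no more, no less.
# It will happily repeat this a million times without complaint.
for _ in range(1_000_000):
    print("hello, world")
```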


Compared with machines, humans are unreliable. If you pick a task that humans and machines are capable of performing equally well at the individual level and assign it to 1000 machines and 1000 humans, my money’s not on Team Human for the best total score. Here’s why.

If you’re most obsessed with performance at the individual level or model complexity, you might be missing an important concept: incentive complexity at the system level.

We’re all precious snowflakes that come with the added bonus of being unreliable in our own individual ways. Each of us takes a lifetime of effort to cultivate and no two of us get the same series of inputs from our environments. The result is a dizzying mix of incentives across workers — from playing with their kids to foraging for junk food to perhaps looking busy on occasion. As their unreliability pulls in different directions, it puts the brakes on bad decision-makers.

All our human unreliability points in different directions, putting the brakes on bad decision-makers.

That’s why humans usually don’t scale up a leader’s intentions as efficiently as machines do. (That said, if people are aligned like iron filings with a magnet, history shows that we become capable of some scary things.)


Computer systems tend to follow much more simplistic incentive sets because these are dictated to them by the humans who build them. As simple-minded creatures, we tend to pick goals like “maximize revenue” or “identify cats accurately”. Maybe we’ll combine two or three of these at once if we’re feeling fancy.

Behind today’s apps are legions of machines all following the same simple orders.

Compare that with the tapestry of incentives experienced by all the different minds on this planet, all tugging in different directions. Even a single brain is awash with a universe of competing objectives on the edge of mutiny. How does a single developer handcraft such a system? You don’t… or maybe machine-assisted hypercomplex incentive design is where the next big human-like intelligence (HLI) breakthrough will come from. Speculation aside, behind today’s apps are legions of machines all following the same simple orders.

Technology is a lever that scales the wishes of human decision-makers.

Who gives those orders? Whoever’s in charge of the project. I’d love for us to stop using language that implies there is any technology that is autonomous or independent of people. Didn’t humans build it? (Or is there some extraterrestrial event I’m not aware of?)

For those who prefer watching to reading, here’s a keynote I gave on this topic

Because it always comes from and is designed by people, all technology is an echo of the wishes of whoever built it. Sometimes that echo lingers for such a long time after the button press that it’s easy to forget the role people played in making it. It always does what people built it to do… which is tragicomically often not the same thing as what they hoped they were building it to do. (Devs, you know that bug is your fault, not the computer’s, right?)

The pointy-haired boss is one of the four horsemen of the apocalypse.

Technology scales the wishes of human decision-makers. It’s a lever, and the more it scales the more of a lever it is. When levers become long enough to move the world, why aren’t we demanding to know whether the people wielding them have the skills to do so responsibly?


In past centuries, bad decision-makers were relatively self-limiting, so training people in decision skills wasn’t taken all that seriously. Sure, you’d start tutoring a royal from infancy, but why bother with the rest of the populace?

Today, as computer systems scale to touch more lives, we’re reminded all too often that your garden-variety tech product manager enjoyed a much rosier childhood, free of that royal tutoring, and hasn’t done much by way of a speedy catch-up to prepare for their growing responsibilities in a technology-fueled world.

Too much status, not enough craft.

Worst of all are those leaders who see decision-making only as another way to thump their chests and exert status. Make the lever long enough and the pointy-haired boss becomes one of the four horsemen of the apocalypse. Perhaps it’s time to rethink decision-making as a science and a skill to cultivate.

What does any of this have to do with AI?

So far, our discussion has been about scalable technology in general. None of it was AI-specific.

The trouble with ML/AI is that building it effectively and responsibly takes all the decision-making skill of scalable tech and then some. It is an even greater amplifier of both decision intelligence and decision stupidity (I hope you’ll agree that one of these sounds more palatable than the other).

Think of it as a proliferation of magic lamps.

When I think of machine learning and AI, I think of a proliferation of magic lamps. Forget humanoid robots — these technologies are far more powerful. Unlike traditional programming, they allow you to solve a problem even if you can’t think up the solution’s steps yourself.

AI allows you to automate what you can’t express.

That’s because they let you express your wishes with examples and objectives instead of explicit instructions, which means that you can automate beyond human expression.


AI represents a fundamental leap in human progress… but do you remember the moral of every be-careful-what-you-wish-for story?

It’s not the genie that’s dangerous, it’s the unskilled wisher.

In those stories, it’s not the genie that is dangerous. It’s the unskilled wisher. The genie’s magic could be a huge boon to humankind, but that depends on which direction you point it in. If our species were granted enormous power, I’d be much less afraid of Hollywood-caricature villains (aggravated psychopathy is blessedly rare) than teeming multitudes of well-meaning bumblers.

Compared with traditional programming, ML/AI involves expressing what you want in a style that is closer to those magic lamp stories.

At its heart, the ML/AI paradigm is a new way to communicate with machines. Compared with traditional programming (step-by-step instructions), ML/AI involves expressing what you want from the genie in a style that is closer to those magic lamp stories (objectives and examples).
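To make the contrast concrete, here is a minimal sketch using toy spam data and scikit-learn (the messages, labels, and feature are made up for illustration). The traditional route spells out the decision steps by hand; the ML route hands the machine labeled examples plus an objective and lets it find the steps itself.

```python
from sklearn.linear_model import LogisticRegression

# Traditional programming: you write the decision steps explicitly.
def is_spam_by_rule(message: str) -> bool:
    return "free money" in message.lower()

# ML/AI: you supply examples (inputs with labels) and an objective
# (minimize classification error); the algorithm finds the rule for you.
messages = ["free money now", "lunch at noon?", "claim your free money", "meeting moved"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

# A toy feature: does the message mention "free"? (Real systems use richer features.)
features = [[1 if "free" in m.lower() else 0] for m in messages]

model = LogisticRegression().fit(features, labels)
print(model.predict([[1], [0]]))  # behavior learned from examples, not hand-coded steps
```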

What happens when you put the power to communicate with data at scale in the hands of thoughtless communicators?

The scary part of AI is not the robots. It’s the people.

An ML/AI system will either fail testing and deliver nothing (because your genie was so impotent that you sent the lamp back to the attic), or pass testing and deliver exactly what the decision-maker wished for. Not what the decision-maker wanted or hoped for, but precisely what the decision-maker asked for. What if they made a foolish wish?

Testing an ML/AI system protects you from an ineffective genie, not a foolish wisher.
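Here's a contrived illustration of that point (the numbers are made up): suppose the wish handed to the genie was simply "maximize accuracy at spotting fraud." On data where only 1% of transactions are fraudulent, a model that never flags anything passes that test with 99% accuracy. The test verified the wish exactly as stated; it said nothing about whether the wish was a wise one.

```python
import numpy as np

# The wish, as stated: "maximize accuracy at spotting fraud."
y_true = np.array([1] * 10 + [0] * 990)  # 1% of transactions are fraud
y_pred = np.zeros_like(y_true)           # the genie's answer: never flag anything

accuracy = (y_true == y_pred).mean()
print(f"Accuracy: {accuracy:.1%}")       # 99.0%: the wish, as asked, is granted

# What was actually wanted: catching fraud. The test never checked for that.
fraud_caught = int(((y_true == 1) & (y_pred == 1)).sum())
print(f"Fraud cases caught: {fraud_caught} of {int(y_true.sum())}")
```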

If you’re harboring the following sentiment, be very afraid: “I never do what the boss asked me to do, I do what the boss actually wants.”

Less hoping and more training, please.

What you’re saying is that your boss does not know how to wish responsibly. Putting them in charge of an ML/AI project is a disaster waiting to happen. Instead, they need training or to be put somewhere safe (and padded?) where they can’t do any damage. Take your eye off them for a moment and they’ll have asked an AI system to make as many paperclips as possible.

We need to work harder than ever to ensure that whoever is involved in the decision role has the skills to wish responsibly.

The wisher who is perhaps most dangerous of all is the one whose wishes have unintended consequences that escape flimsy safety nets. Even if a wisher has the best of intentions, they’re a hazard if they aren’t able to fully think through what they are asking for, so that the spirit of the wish matches its letter. With great power comes great responsibility… to use that power intelligently. That takes skill, not just good intentions. But how do you build responsible wishing skills for the AI era? By investing the time and seeking them out.

Technologies that scale are like ever-lengthening levers, and we mustn’t forget that there’s a human decision-maker on the end of those levers.

Powerful technologies that scale are getting easier and easier to use, so it’s more vital than ever to recognize just how much of the human element they have baked into them.

If you’re in the habit of blaming lifeless objects, you’re not helping here. Open your eyes to the people behind the machines.

If you want to worry about something when it comes to AI, don’t worry about personhood or robots. Worry about scale, speed, reach, and longevity. The more a tool’s effect scales, the more careful you need to be with it. The more people your decisions can affect, the larger your duty.

If you ask me whether I’m scared of AI, what I hear you asking me is whether I am scared of human negligence.

If you ask me whether I’m scared of AI, what I hear you asking me is whether I am scared of human negligence. That’s the only way the question makes sense to me. My answer? No. I’m optimistic about our AI future. I’m convinced that these skills can be taught and that people can learn to build safe and effective systems responsibly, driving progress and making life better for those around them. That’s why I (and others like me) choose to step up and contribute to training a new breed of leader skilled in decision intelligence.

If we teach people how to build magic lamps, we must also teach the skills for wishing responsibly. Otherwise, huge scale will bring huge problems.

Decision intelligence is a new academic discipline concerned with all aspects of selecting between options. As a movement, it is built on the realization that if we teach people how to build magic lamps, we must also teach the skills for wishing responsibly. Otherwise, huge scale will bring huge problems.

If your team lacks the skills to wish responsibly at the beginning of your project, there’s no point in all that beautiful engineering — it will only deliver poisonous rubbish in the end. On the other hand, if we train skilled leaders, then humanity can enjoy unprecedented ease and plenty. In the hands of competent decision-makers, scalable technology can help us solve some of the biggest problems facing our species.

We’ve spent too long stuck solving problems with simple solutions — simple in the sense that we can wrap our heads around them. Simple won’t cut it for all problems, so it’s time to add complex solutions to our repertoire. AI is how we’ll reach past those low-hanging fruit towards the stars.

If you’re curious, you can start learning about decision intelligence here:

