
In Defense Of Anthropomorphizing Technology

source link: https://hackaday.com/2024/02/07/in-defense-of-anthropomorphizing-technology/



Last week I was sitting in a waiting room when the news came across my phone that Ingenuity, the helicopter that NASA put on Mars three years ago, would fly no more. The news hit me hard, and I moaned when I saw the headline; my wife, sitting next to me, thought for sure that my utterance meant someone had died. While she wasn’t quite right, she wasn’t wrong either, at least in my mind.

As soon as I got back to my desk I wrote up a short article on the end of Ingenuity‘s tenure as the only off-Earth flying machine — we like to have our readers hear news like this from Hackaday first if at all possible. To my surprise, a fair number of the comments that the article generated seemed to decry the anthropomorphization of technology in general and Ingenuity in particular, with undue harshness directed at what some deemed the overly emotional response by some of the NASA/JPL team members.

Granted, some of the goodbyes in that video are a little cringe, but still, as someone who seems to easily and eagerly form attachments to technology, the disdain for an emotional response to the loss of Ingenuity perplexed me. That got me thinking about what role anthropomorphization might play in our relationship with technology, and whether there's maybe a reason — or at least a plausible excuse — for my emotional response to the demise of a machine.

Part of the Crew

To be clear, when I use the term “anthropomorphism” here I’m not referring to making machines look like humans, but rather to our tendency to develop emotional attachments to machines, as well as to act as if they have some level of awareness of their users and their creators. There’s a name for this: “Tool anthropomorphism,” or the assignment of human-like characteristics to tools and machines, is an area of scholarly research. In commonplace terms, when you sweet-talk a dodgy lawnmower so that it’ll start on the next pull, or say goodnight to the project on your workbench before giving up on it for the evening, you’re engaging in tool anthropomorphism.


Tool anthropomorphism is nothing new; we've been assigning human characteristics to our machines for a long time, long enough that it makes me think there has to be some purpose to it. On the user side, I think anthropomorphism helps people relate to technology. An example of this might be when humans first started naming boats. Logically, there's no reason to give an inanimate object like a boat a name. But for members of a species as social and as strongly tribal as ours, it must have been much easier to get into a primitive boat and sail off into a dangerous ocean knowing that the vessel had a name. It probably would have made the boat seem less of a stranger and more like a member of the village, imbuing it with a personality that they could relate to.

Beyond dispelling the "otherness" of a ship, naming it probably served another, more practical purpose. With a name — and possibly a face; many cultures did (and still do) adorn the prows of boats with facial features and eyes, to help the boat "see" where it's taking them — it's a lot more likely that the crew will take proper care of it. Even the simplest sailing vessels are technically complex systems, and getting to know their quirks and idiosyncrasies is crucial to survival. A name also gives the crew someone to beseech when things are going wrong, to lavish praise upon when returning safely to shore, or to blame when the voyage ends badly.

Of course, none of this makes any difference to the boat, since it has no consciousness to perceive its own status or to consider the sailors’ entreaties one way or the other. So in purely rational terms, how the sailors think about their boat won’t make the slightest difference to whether it sinks or floats. But that’s not the point; it’s the sailors who are influenced by the anthropomorphization, not the vessel. It’s a brain hack, really; act like the ship is a person worthy of love and slavish devotion, and you’re more likely to do what it takes to keep her together and get you home. Break that faith, and things probably won’t go the way you want them to.

There's always been a lot of superstition surrounding the ancient mariners and their ships, understandably so given the risky nature of their trade, but the purpose that anthropomorphism served back then applies to the "user experience" of technology all through the ages. The classic example of this, particularly for Americans, is with our cars. We spend so much time in our cars, often while having intense experiences, that it's hard not to anthropomorphize them. Some of us give them names, and some even claim to know their vehicle's personality quirks and what they'll do in certain situations. We'll talk to them, ply them with loving words of encouragement when they act up, and threaten them with the junkyard when they let us down. I can't count the number of times I've arrived safely at home after a long, dangerous drive in a blizzard or hurricane and taken the time to tenderly caress the dashboard of my truck and whisper a quiet word of thanks for deliverance.

Is any of that rational? Of course not. The truck isn't listening. On the other hand, feeling connected to that inanimate machine, especially after going through a harrowing experience with it, is a powerful motivator to get to know everything about it, to see to its care and maintenance, and to make sure it's in top shape for the next trip out. Anthropomorphizing a car — or a computer, a spacecraft, a house, or even a helicopter on another planet — serves the same purpose as naming a ship did all those ages ago. The technology may change, but it's still the human brain that's getting hacked by seeing human characteristics where none exist, and the result is the same: a better, more productive relationship with machines.

Back to the Drawing Board

The other place where I think our tendency to anthropomorphize technology pays dividends, and the one that probably concerns most Hackaday readers more directly, is in the creation of new technologies. As we all know, real innovation is generally a long, drawn-out process that starts with ideation and (hopefully) ends with something useful that never existed before. Whether a project is mechanical, electrical, software, or a combination of all three, most are long, often painful slogs with too many dead ends and failures to count. Seeing that process through to the end is a hard thing to do, but personalizing the project somehow seems to make it easier.


If we’re thinking in strictly rational terms on difficult projects, the tenth or eleventh “back to the drawing board” moment would probably compel us to cut our losses and abandon the project. Sometimes we do just that, but other times we’ll say something like, “I can’t do that, this project is my baby!” Is it really? Nope, it’s just a collection of parts sitting on your bench. But somewhere along the line, probably without even realizing it, you started thinking of it as your offspring, with hopes and aspirations for what it’ll be when it “grows up.” Giving your project the characteristics of a child and seeing it as utterly dependent on you for survival is often enough to get you over the creative hump and see the project through to the end. If you have any doubt about the power of anthropomorphizing machines, a quick look at The Soul of a New Machine will probably be enough to convince you otherwise; would a team of otherwise rational engineers work 90-hour weeks to bring a minicomputer to life if they didn’t at least partially think of it as a person?

I’m no psychologist, so I have no idea whether my ideas about the role of anthropomorphism of machines are even approximately correct. Then again, I’m not a credentialed engineer either, yet I still do a pretty decent job figuring things out by the seat of my pants. And something tells me that thinking of machines in more human, more personal terms serves a purpose both in how we manage the often painful process of creation and in how we relate to the technology that others create. And if that means being saddened by the demise of a machine on Mars, I’m OK with that.

