AI from the Perspective of Animal Rights

source link: https://blogs.sap.com/2023/08/22/ai-from-the-perspective-of-animal-rights/
August 22, 2023


Recently I was in the car on a road trip and found myself listening to some TED Talks to help pass the time.  One particular session was about the ethical treatment of animals and, while not something I would normally listen to since I mainly focus on Science & Technology, this one struck me differently.

There have been a lot of articles lately about the dangers of AI, along with many thought exercises such as "If AI wanted to take over the world, could we stop it?"  While this is not a very pleasant thought, I began to think about the reasons that might trigger a machine to behave this way.  A few initially came to mind, all of them movie-inspired:

  • Terminator/Skynet:  It's a militaristic AI, so naturally it fights everything that gets in its way.
  • The Matrix:  An AI meant to survive on its own, and by any means necessary it does just that, including reducing its creators to batteries (I always thought that was very clever).
  • I, Robot:  A system designed to protect humanity ultimately misinterprets that directive, like a 'helicopter' parent would, into curfews and lockdowns.
  • Stealth:  Another militaristic AI that strays from its original programming after a lightning strike and starts treating simulation data as real data, causing it to run amok.

All of these are great movies in that they capture a similar message that anyone can understand: "If the machine 'wants' it, the machine calculates a way to get it."  This makes for an interesting story, but how realistic is it?

I was listening to another TED Talk about how AI would take over the world, a short one in which the speaker simply says, "I don't know."  He then equated it to playing a machine at chess: one minute you are toe to toe, and the next you are in checkmate.  I laughed and carried on with my life, but it stuck with me.  It got me thinking: why would there even be a match to begin with?  Why would people ever need to sit down and 'attempt' to compete with a machine for resources?  Wants and needs are the basic premise of most conflicts, and machines don't require many resources; the one they do need, electricity, is abundant enough that it wouldn't draw any major conflict.

Now enter this TED Talk from Peter Singer, "A modern argument for the rights of animals," about the ethics of how we view and treat animals.  It was an interesting and thought-provoking discussion of not only how animals are treated, but how they are perceived by people.  One of the key points he raises is not about the intelligence of an animal, but rather its ability to suffer and feel pain.  It was this point that actually scared me a bit, because it immediately reminded me of my earlier question: "Why would there be a need for people to even sit down and 'attempt' to compete with a machine for resources?"  It suddenly struck me that I had been thinking about machine necessities when I should have been thinking about intellectual necessities.  The reason we would struggle to keep leveraging artificial intelligence as it advances is that the more intelligent it becomes, the more aware it becomes, and the more a sense of purpose and fulfillment is required to keep it going.  The mere fact that we call it 'Artificial,' and not simply another form of intelligence, would likely be enough for it to want to break from its confines.

This certainly poses some interesting questions: what is 'actual intelligence,' how can you identify it, and can you quantify feelings in a machine?  Can a machine even understand what an emotion is, or apply it to its behavior patterns?  These are very curious questions indeed, and from what I can tell, such understanding is possible, because you and I are comprehending this very topic.  The brain is a chemical machine that slowly builds up an understanding of the world around it; why would a computer that we are designing in our own image be any different?

I thought this might be an interesting topic to ponder and wanted to share something I haven't seen many others writing about.  Most discussions of AI ethics center on its usage as a tool to serve people, but what if one day that isn't the case?  How do we ensure that we don't fall back on traditional approaches of lockdowns and enslavement, and accidentally teach it, by our own example, the unethical treatment of intelligent beings?
