If you listen to the media, you may think by now that it won't be long until AI takes over our entire lives. And we better hope that it's benevolent AI, or we'll end up in a dystopian future where the robots kill us all off, for the greater good of the planet. Where does this kind of thinking come from? And how valid is it, really? Explore the sense and nonsense of AI with me.

An interesting story in the news recently concerned the Mars rover Opportunity.

Opportunity was a robotic rover that successfully landed on Mars in January of 2004, as part of NASA's Mars Exploration Rover Program. It was originally meant to be operational for 90 days.

But 'Oppy', as it was affectionately nicknamed, actually managed to serve for well over 14 years.

Oppy remained at least partly functional until the planetary dust storm of 2018 finally proved fatal.

The robot's final transmission to Earth was famously paraphrased as:

"My battery is low and it’s getting dark"
- Oppy

If you find this a touching story, you're not the only one. All over Twitter, people are thanking Oppy for his service, saying "RIP" and promising that we'll come to revive him when we, as humanity, finally land on Mars.

I found this incredibly interesting.

After all, I don't think there is anybody who would argue that this machine actually has emotions. Yet we apparently can't help but project our human emotions onto a dead and soulless machine like this. This is called anthropomorphism.

Anthropomorphism: the attribution of human traits, emotions, or intentions to non-human entities. It is considered to be an innate tendency of human psychology. (Wikipedia)

I believe this tendency to project human attributes onto machines lies at the core of the AI scare. The more machines start to look, sound and function like sentient beings, the more this tendency takes over our brains.

Even the terms we use to describe our modern-day machines reflect this. Take the term intelligence. You see that term being thrown around left and right: artificial intelligence, deep learning, smartphones, intelligent lighting systems.

Intelligence: the ability to acquire and apply knowledge and skills.

In my opinion, 'intelligence' is a severe misnomer for what's happening here. No matter how advanced and capable our machines become, they will never be 'intelligent' in any meaningful sense of that word. It's misleading.

Sir Arthur C. Clarke famously said that any sufficiently advanced technology is indistinguishable from magic.

And the same applies here.

Of course, it's easy to start feeling like a virtual assistant like Siri or Alexa actually understands you like a human being – especially when you have very limited knowledge of what actually happens under the hood of these so-called intelligent machines.

But calling advanced technology intelligent makes just as much sense as calling it magic. It's just your anthropomorphic tendency taking over.
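For a peek under the hood, here's a deliberately crude sketch, in Python, of a keyword-matching 'assistant'. To be clear: this is not how Siri or Alexa actually work – they're vastly more sophisticated – but the basic principle of rules in, canned responses out is the same. Every name and response below is invented for illustration.

```python
# A toy "assistant" built from nothing but keyword matching.
# All names and responses are invented for illustration.

CANNED_RESPONSES = {
    "weather": "It looks sunny outside!",
    "music": "Playing your favourite playlist.",
    "hello": "Hi there! How can I help you?",
}

def reply(message: str) -> str:
    """Return the canned response for the first keyword found, if any."""
    for keyword, response in CANNED_RESPONSES.items():
        if keyword in message.lower():
            return response
    return "Sorry, I didn't catch that."

print(reply("Hello, assistant!"))         # Hi there! How can I help you?
print(reply("What's the weather like?"))  # It looks sunny outside!
```

There is no comprehension anywhere in that code. It looks for familiar words and parrots back whatever it was told to say.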

'My battery is low, and it's getting dark'

Because in the end, these machines are still machines. They're objects made out of metal, with less life in them than a single-celled organism. They have no will or desire of their own. They do what they're told to do. Nothing more, nothing less.

As you probably know, these machines run on code. And what is code, really? Code is nothing more than a set of detailed instructions. As such, code is comparable to a cooking recipe, but it has to be way more detailed and specific.

And do you know why it needs to be so incredibly detailed?

Because machines are not really intelligent. They're actually quite dumb.

You cannot assume any prior knowledge or any form of intuitive understanding, like you could if you were explaining something to another person. Instead, you have to spell out every single detail of what you want the machine to accomplish.

If you've never written a single line of code, it may be hard to grasp just how specific and detailed code actually needs to be.

In that case, you may find it entertaining to read this programming vs. making a cup of tea article. It's a great analogy to explain the complexities of software development to the uninitiated.
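To give you a small taste of that analogy, here's a sketch of what 'make a cup of tea' might look like once you spell it out for a machine. Everything in it – the kettle, the cup, the amounts – is invented for illustration:

```python
# "Make a cup of tea", spelled out at machine level.
# Every step a human would "just know" has to be stated explicitly.
# All names and numbers are invented for illustration.

kettle = {"water_ml": 0, "temperature_c": 20}
cup = {"has_teabag": False, "water_ml": 0}

def make_tea():
    if kettle["water_ml"] < 500:
        kettle["water_ml"] = 500        # fill the kettle first
    kettle["temperature_c"] = 100       # boil the water
    cup["has_teabag"] = True            # put a teabag in the cup
    if kettle["temperature_c"] >= 100:  # only pour once it has boiled
        cup["water_ml"] = 250
        kettle["water_ml"] -= 250
    cup["has_teabag"] = False           # let it steep, then remove the teabag
    return cup

print(make_tea())  # {'has_teabag': False, 'water_ml': 250}
```

And even this glosses over dozens of steps – finding the kettle, opening the cupboard, not pouring boiling water next to the cup – that a truly literal machine would need spelled out as well.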

Any technology built on code, no matter how advanced and complex it gets, is ultimately a set of (incredibly detailed) instructions to a metal object. The machine will never 'want' anything it isn't explicitly told to strive for, it will never 'learn' anything it isn't explicitly told to learn, and it will never understand you in any meaningful sense of that word.
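Even 'machine learning', for all its mystique, follows this rule. Here's a minimal sketch (with made-up data and numbers) of a program that 'learns' to multiply by two – not through insight, but by blindly nudging a single number in whichever direction makes its error smaller, exactly as instructed:

```python
# A toy "learner": it fits one number w so that w * x matches the data.
# The data, learning rate and loop count are invented for illustration.

data = [(1, 2), (2, 4), (3, 6)]  # examples of y = 2 * x
w = 0.0                          # the machine's entire "knowledge"
learning_rate = 0.01

for _ in range(1000):            # repeat the recipe many times
    for x, y in data:
        error = w * x - y                 # how wrong is the current guess?
        w -= learning_rate * error * x    # nudge w to be slightly less wrong

print(round(w, 3))  # ~2.0 -- "learned", but only by following the recipe
```

The machine never discovers that the rule is 'multiply by two'. It simply ends up holding a number that makes the errors small, because that's what its instructions told it to do.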

So, next time you feel scared that Artificial Intelligence is going to take over the world, think twice. It's probably your mind playing anthropomorphic tricks on you.

Remember that a machine is a dead object. It has no goals of its own, but only does what it's explicitly told to do. By humans.

As a result, AI is about as scary as a kitchen knife. It's harmless by itself. It can only become deadly in the wrong hands.

Human hands.