In the near future, the robots we have created will become sentient; they will rise, and they will destroy humanity!

Or at least, that’s the view of a lot of people recently.
What’s more troubling is that some of the doomsday believers are respectable people who really should be a little more careful.
Artificial Intelligence as a danger really hit the news when world-renowned physicist Stephen Hawking publicly warned of its dangers, and of its potential to destroy humanity.
The problem is that when Mr Hawking speaks, the world listens. After all, this man is an absolute genius who has blown the minds of the scientific community with his work on the universe. But it didn’t end there; next in line to grab the world’s attention with the same fear was the man often called the ‘real life Tony Stark’, the CEO of Tesla, Mr Elon Musk. What probably made the world listen to him is that he is associated with futuristic technology.

But in my opinion, it doesn’t matter how incredibly clever these two men are; they should not be listened to as voices of authority on this matter.
Mr Hawking, whilst a very clever man in his field, is not a computer scientist involved with artificial intelligence, and nor is Mr Musk an authority on the matter.
It’s like the world’s most renowned heart surgeon telling you that you are cooking your pasta wrong. Or perhaps it’s like that rich politician telling you he knows how rough it is looking for a job (sorry, had to make a dig at current affairs, won’t happen again).

What should also be known is that Artificial Intelligence is completely misunderstood. In fact, even the term itself is incorrect, and computer scientists know it and try to avoid it.
So what makes me an authority?
Well, I am no senior authority on the matter. However, I am a software developer who has studied AI, and I develop it from time to time too. Whilst I may not be an expert, and will not claim to be, I am more of an authority than the other gentlemen mentioned here.
So let me tell you what AI is, and what it isn’t.

Artificial Intelligence – a term that has come back to haunt us all

The term Artificial Intelligence was coined in 1956 by the computer scientist John McCarthy.
But the term is misleading.
After all, what is intelligence?
Firstly, intelligence at its most basic level comes down to the famous phrase: ‘I think, therefore I am’. Meaning that one is aware of oneself and one’s existence. This is easy, right? After all, if someone asked you whether you were aware of yourself, you would probably first laugh at such a daft question, and then feel a little self-conscious. Because to us, it’s daft. And what makes us so intelligent is that we are able to tackle ANY situation, regardless of whether we have been placed in such a situation before.
Doesn’t sound like much, but think about it carefully. If a UFO took you up to a distant planet where everything was different and there was almost no gravity, then despite never having been exposed to any of it before, you might in a few years’ time have actually integrated into this alien society, and live like them, or die trying. That is intelligence. But what about Artificial Intelligence?
Well, as the term indicates, it isn’t genuine, or real; it’s artificial. Closer to a magic trick than real magic.
When a magician pulls a coin from your ear, it looks like magic, but we all know it’s only the illusion of magic. It’s the same with Artificial Intelligence.
No system is able to be aware of itself, and even if one appeared to be, it would be an illusion, since it would have to be programmed to be aware of itself. Also, AI is built with a purpose in mind; place it outside of that purpose and the machine cannot cope, since it has no reference points.
A good example of this was a recent ‘breakthrough’ in AI, where a system was programmed to play Atari games. It wasn’t told HOW to play, or WHAT it would play; all it knew was that the aim was to win.
It performed remarkably well!
Now this appears to be going against my point, but hang in there with me.
One thing the results showed was that it performed amazingly well at platformers, yet it played close to cluelessly at PacMan.
The reason is fairly simple. The system was told that the aim was to win, and in a platformer it’s simple to see what the way to win is. With PacMan, however, whilst eating those pellets gets the score up, where you should move is up to you, and how you get the ghosts to stop being evil, and how to avoid them, is not at all obvious. If you were to place that machine in a situation where there was no score and the options were limitless (something like The Sims, perhaps), it would probably be mediocre.
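To give a feel for just how little such a system is told: DeepMind’s player actually used a deep neural network (‘deep Q-learning’), but a minimal tabular sketch of the same idea looks like this. All the names here are mine, not DeepMind’s; the point is that the agent only ever sees an opaque state, a list of legal actions, and a score delta.

```python
import random
from collections import defaultdict

ALPHA = 0.1    # learning rate
GAMMA = 0.99   # discount factor: how much future score matters
EPSILON = 0.1  # exploration rate

# q[(state, action)] -> the agent's current estimate of future score
q = defaultdict(float)

def choose_action(state, actions):
    # Mostly pick the action with the best learned estimate,
    # occasionally try something random to explore.
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: q[(state, a)])

def learn(state, action, reward, next_state, next_actions):
    # Nudge the estimate towards: immediate score + discounted best future score.
    best_next = max(q[(next_state, a)] for a in next_actions)
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
```

Notice that nothing in there mentions ghosts, pellets or platforms. The score delta is the agent’s entire window onto the world, which is exactly why a game whose score doesn’t neatly signpost the winning strategy leaves it clueless.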
That’s the crux of it: a machine without an aim is a machine without a clue.

Limited by its maker

What most people forget when we see AI is the creator: the programmer.
And in essence, it’s the programmer who will be the limit to its intelligence.
The programmer creates what will become its senses, and pre-configures the machine on how to react to certain situations. But a programmer is limited by his own expectations.
The reason developers have betas is that we know a user will ALWAYS find a way to use our programs in a way we never anticipated. And when they do, you see ugly and fascinating results.
A good example from my own experience was when I was creating an application for a big client. This application would be fed a TXT file of a database dump, tab delimited. The application would extract the information, gather important bits like dates, and calculate the number of days between receiving and sending packages. Seems simple, right? Well, no.
You see, different parts of the world use different date formats. In the Netherlands, for instance, we use a dd-mm-yyyy format; in the USA they use mm-dd-yyyy. And a database (where the results were going) uses a yyyy-mm-dd format.
So part of the program would detect if someone had been naughty and entered something different to yyyy-mm-dd. But what if, on the 3rd of February 2014, a package was sent in both the USA and the Netherlands, and each entry was made in its native format? Then we would get 03-02-2014 in the Netherlands and 02-03-2014 in the USA. So how can I program an application that can tell which is which? Especially considering you couldn’t tell from the file which country it was from.
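I can’t reproduce the client’s application here, but a few lines of Python show the core of the problem: the very same string parses cleanly as both a Dutch and a US date, so no parser, however clever, can recover the writer’s intent from the string alone.

```python
from datetime import datetime

raw = "03-02-2014"  # the 3rd of February... or the 2nd of March?

dutch = datetime.strptime(raw, "%d-%m-%Y")  # Dutch reading: 3 February 2014
us = datetime.strptime(raw, "%m-%d-%Y")     # US reading: 2 March 2014

print(dutch.date())  # 2014-02-03
print(us.date())     # 2014-03-02

# Only when one reading is impossible can the program rule an option out:
# "25-02-2014" cannot be mm-dd-yyyy, since there is no 25th month.
datetime.strptime("25-02-2014", "%d-%m-%Y")  # parses fine
# datetime.strptime("25-02-2014", "%m-%d-%Y")  # would raise ValueError
```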
The point is, if a system finds itself outside the scope of what the programmer designed it for, it doesn’t know how to cope. Although I must admit, if it were connected to the internet it might one day figure such things out by itself, but we are a long way off from that.

Even Artificial Intelligence needs a goal

Whenever AI is programmed, it is programmed with a goal in mind.
Whether this goal is to beat an opponent at chess, chat like a human, or be able to discover patterns in data, a goal must be given.
Therefore, to think that AI in general will be a danger is quite silly.
If a computer is designed to brake when your car is too close to the car in front, don’t ever expect it to try to kill a person by speeding up. That simply wasn’t a programmed goal.
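To make that concrete, here is a deliberately toy sketch (my own invention, not any real car’s code). The controller’s entire repertoire is ‘brake’ or ‘do nothing’; ‘speed up’ is not an output that any input, however strange, can ever produce.

```python
SAFE_DISTANCE_M = 10.0  # hypothetical threshold, in metres

def brake_controller(distance_to_car_ahead_m: float) -> str:
    # The full set of possible outcomes is fixed here, at design time.
    if distance_to_car_ahead_m < SAFE_DISTANCE_M:
        return "brake"
    return "do nothing"

print(brake_controller(4.2))   # brake
print(brake_controller(55.0))  # do nothing
```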

Artificial Intelligence isn’t always efficient

Most people seem to have this notion that AI is the future of technology, when really it isn’t.
It is incredibly good at finding patterns, at designing the best place for an antenna, at playing as a computer opponent, and at understanding human language for automatic translation. But most of the time, it just isn’t useful. AI is like any tool in a developer’s toolbox. And just like with a real toolbox, you don’t use a screwdriver to hammer a nail into the wall; you use a hammer for that.

The truth is, people, AI is exciting, and whilst those robots look scarily clever, they are programmed to be; unless the programmer wants to kill people, your AI-controlled automatic vacuum will not try to kill you.
If in the future robots kill people, arrest the programmer, not the robots.
