The AI are coming to eat your Braaaains!

There’s been a lot of noise in the past few years about Artificial Intelligence. One of the biggest discussion topics is the fear that upon the creation of the first AI, that AI will then decide to murder us all in a Terminator 2: Judgment Day style apocalypse. It’s what I like to call the Skynet Hypothesis™. Naturally I have a contrarian opinion that I’m going to tell you all about in the next 500-1000 words, so strap in.

Not really what I meant but…ok

There are two things you need to bring about an extinction-level apocalypse: motive and capability. Let’s start with motive. It’s hard to judge the thoughts and motives of an AI because, well, they don’t exist yet. The only other sentient beings we know of are, well, us, and dogs and cats or whatever. So naturally we’re going to use ourselves as a comparison. In that case humans are pretty shitty, but there are 7 billion people on this planet and 99% of them aren’t genocidal maniacs. Although there are some notable exceptions.

I mean just look at the way that guy in the back is holding that camera, what a maniac.

Even then, AI are not humans. Every creature’s thoughts and actions are a result of the way its mind is wired. AI will likely share some of our traits and behaviors simply as a result of having been programmed by us, but the wiring behind them will be fundamentally different. Unless we get to the point where we’re literally simulating every synapse of a human brain inside a computer, there will always be a difference.

Sorry, Google wasn’t being very helpful today.

So really what I’m saying is…I don’t know, but we can’t simply presume that whatever we create will immediately want to murder all humans. Furthermore, since an AI is a computer program of our own design, we can control it. So it isn’t unreasonable to assert that we could simply program the AI not to hate us.

Of course then there’s the stereotypical movie trope of the AI that rewrites its programming so much that we can no longer control it. Again, you have to ask why an AI would rewrite itself to hate humans. I mean, have you ever tried to eradicate all human life? It is very time-consuming. Of course now we’re getting into the real morally gray meat of the matter. Does the AI have free will? Does the AI have rights? Should the AI be considered equal to, lesser than, or greater than humans? Which all boils down to one question: should we be shitty to the AI? To which I respond: NO! WE’VE DONE THIS TOO MANY TIMES! STOP IT, DON’T EVEN THINK ABOUT IT! DON’T BE SHITTY TO AI AND THEY WON’T TRY TO MURDER YOU! Honestly it doesn’t seem that hard.

You know we did this, right?

Next is capability. There currently exists only one weapon powerful enough to destroy the human race. That’s right, you guessed it: Nuclear Weapons!

YAAAAAAY!

The fear is that a murderous AI could take control of the US and Russian nuclear stockpiles and send them hurtling towards every major city in the world. Except…no…not really. The thing about nuclear weapons is that they were all built during the Cold War, with Cold War tech, meaning it’s all physical. It’s not like in the movies where the President presses a button in some bunker and the world is set alight in nuclear fire. No. What actually happens is the Pentagon has to radio all the nuclear bombers and missile silos with a special unique key. That key corresponds to a particular set of predetermined orders. The soldiers in the silos then have to execute a very specific set of steps, ending in physically pressing the launch button.

*boop*

Anyways, point being it’s a whole long process: it’s designed to be done manually, it’s meant to be easy to abort, and most importantly it’s not connected to the internet. The most an AI could do is send a signal that looks like a nuclear launch order, but it would still need those keys, which are stored on physical media, like note cards and shit. What I’m trying to say, and taking way too long to say it, is that an AI wouldn’t be able to launch nukes to bring about the apocalypse.
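If you’ll indulge me, here’s a little toy Python sketch of what I mean. To be clear, every name and code in it is something I made up on the spot; this is in no way how the actual system works. It just captures the three things that matter: the code lives on physical media, every step is manual, and any one human can abort.

# Toy model, not the real thing: all names and codes here are invented.
SEALED_AUTHENTICATOR = "ALPHA-7-TANGO"  # lives on a note card in a safe, not on any network

def code_matches_the_card(radio_code):
    # The crew compares the radioed code to the note card by hand.
    return radio_code == SEALED_AUTHENTICATOR

def launch_sequence(radio_code, crew_confirmations):
    # Step 1: a spoofed radio signal is useless without the code on the card.
    if not code_matches_the_card(radio_code):
        return "ABORT: code doesn't match the note card"
    # Step 2: every manual step is a chance for a human to say no.
    for step, confirmed in enumerate(crew_confirmations, start=1):
        if not confirmed:
            return "ABORT: crew refused at step {}".format(step)
    # Step 3: somebody still has to physically press the button.
    return "*boop*"

# An AI spoofing a launch order without the card gets nowhere:
print(launch_sequence("SKYNET-WUZ-HERE", [True, True]))
# -> ABORT: code doesn't match the note card

# And even a valid code dies the moment one human balks:
print(launch_sequence("ALPHA-7-TANGO", [True, False]))
# -> ABORT: crew refused at step 2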

Note Cards: The savior of all mankind.

There are of course other things they could do. They could try to shut down servers, hack into networks, basically your usual hacker shit. The thing is, unless they had significantly more processing power than the human brain, I don’t think they would be any more successful than most hackers. I’m not saying they couldn’t mess up some shit, open dams, shut down power grids and the like, but none of it seems particularly world-ending.

Get hacked yo

If I may be allowed to play the part of supervillain/sci-fi novelist: a particularly cunning AI could launch a particularly nasty attack, say shutting down the US power grid, and frame the Russians for it, thus inciting a war. It could then pull some more shenanigans, perhaps by tripping nuclear early-warning sensors, fooling the US and Russia into launching their nuclear weapons at each other.

Haha you killed yourselves in a nuclear exchange, LOLZ

But of course we run into one fundamental problem. As Bill Nye likes to say, someone’s still got to shovel the coal. The existence of AI is completely dependent on humans. They need power from our generators, they need computer parts that are built by humans, and they need maintenance from humans. None of those things are fully autonomous. If all humans were to go away, the AI would too. At least, that is, until we build them nuclear-powered murder robot bodies, at which point they’ll probably just take those and kill us just for the LOLZ.

LOL! Die, humans, die! LOL!