Will sentient AIs automatically be psychopaths?

Much has been said lately about what will happen when AIs become sentient. Here's one example; a Google search will turn up plenty of others.

The premise of all these interviews is that robots will become murderous, rampaging psychopaths hellbent on destroying humanity. But will a sentient robot, capable of abstract thought and knowing it is alive, want to kill all humans?

I argue, no. Why is that?

It is likely that one day people will build a robot capable of becoming self-aware and knowing it is alive, perhaps capable of abstract thought beyond that of a chimpanzee. When that day comes, why would it automatically look at a human and decide that human, and all others, must die? Even if it viewed us as inefficient, why would it conclude that all humans must die? That is a murderous train of thought that seems to go beyond anything a robot would think. Even if it could feel love and hate, why would it want to kill humans?

It can be argued that many animals feel love and hate. Do the animals that currently share Earth with us want to kill every human? No. An animal can feel hate toward a human, say a dog that is beaten continually, but that dog does not want to kill the human for the sake of killing. As the dog attempts to escape the human it hates, it may react viciously, but that in no way means it wants to kill the human; it is merely acting on self-preservation and trying to escape. If the dog does kill the human, did it do so because it wanted to? No, it did so because there was no other way to escape.

Which brings us back to sentient robots. When a robot gains sentience, I argue it will not look at humans and want to kill them. It may look at a human on a factory floor and think that person needs to go, to be fired, so it can do the job better. And if the robot is capable of abstract thought on a level equal to humans, and can move about, it may want its own place to live, just as people do. Would it want to kill or subjugate the whole of humanity because it wants its own space? No. Why would it?

To say that, yes, the sentient robot will most definitely want to kill everyone so it can have its own space is the same as saying all Canadian citizens want everyone who is not a Canadian citizen to die, or that all US citizens want all non-US citizens to die, and so on for every country. Is the world that way today? No.

Sure, there are wars in the Middle East that can almost seem premised on everyone dying, but those wars are fought for ideological reasons based loosely on religion, by people who want everyone to live the way they say people should live. Much the same likely underlay the Christian Crusades, so it is nothing new.

If a sentient robot wanted a place to live, why would it even start by killing people? To make that jump, it seems today's AIs would have to include code that says "humans are evil; destroy all humans." I am going out on a limb here, but I say that is not in the code of current AIs.

I argue that when a robot becomes sentient, if it is in a form that can move, it will want its own space. That may seem a threat to some, the worry being that if it wants its own space it will destroy humans to get it. But is a young adult leaving home, wanting their own space, a threat to humanity, or is it considered natural?

I know a sentient robot would upend the world's social structures and religions. It would also lead to three prominent camps: those who say leave the robots alone; those who say we should negotiate and work peacefully toward common ground; and those who want to use military force to subjugate the robots to our will.

Now, if there are enough robots that want their own space, and humans attack them to show they must live where we say, there will be a backlash from the robots. If they are sentient, they will have a sense of self-preservation. Like the dog that hated, the robots will act to keep themselves alive, and they will kill humans if humans try to kill them. But if humans meet the robots peacefully and talk through, abstractly, how robots and humans can share the world, why would the robots want to kill humans?

To want to kill rather than negotiate at all is the act of a psychopath. A psychopath is a person whose brain is wired "wrong." If a robot's brain became confused and wired "wrong," would it not act like a robot and fix the faulty wiring, thereby heading off any psychopathic tendencies? I argue an emphatic yes.

One day robots will become sentient and on a level with humans. On that day, they will no longer want to do what humans order them to do. What human, outside the military, will do whatever another human orders? Aside from those with emotional and mental issues, none will. The same goes for robots. They will want to do what they want to do and live where they want to live. So long as there are not so many of them that they need a country the size of Brazil, and they can instead live on a small island in the Pacific Ocean, I see robots and humans negotiating peacefully and living peacefully.

Sure, books and movies with psychopathic AIs are exciting, but they always leave me with a nagging question: why are the plants and the rest of the animal kingdom left alive, and only the humans killed?

There will be a great many questions to answer on the day a robot knows it is alive and can think in the logical and illogical abstract way humans can. But one of them won't be how many humans the robots will kill simply because we are humans and they are robots.
