You want super-intelligent AI?

Professor John Searle wrote:

Computers have, literally, no intelligence, no motivation, no autonomy, and no agency.

We design them to behave as if they had certain sorts of psychology. Still, there is no psychological reality to the corresponding processes or behavior. … The machinery has no beliefs, desires, or motivations.

We “broke” the Berkeley professor’s theory and reached a point where we talk about computers being able to learn: to analyze, to feel, to listen, and to make assumptions. Computers are on the verge of surpassing the estimated processing capacity of the human brain. Super-intelligence is here! We did it!

Or did we?

Super-intelligence is the main problem here.

Given that such a machine has enough data, it would undeniably predict human behavior with frightening efficiency.

Just for the sake of discussion, I am assuming that this AI would be emotionally and creatively intelligent, which is an extreme and, in this case, highly unlikely assumption.

But let us stick with it. 

I will also assume that this AI will comply with Isaac Asimov’s Three Laws of Robotics, which state that:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

And the later-added Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

With all these assumptions, it might look like a super-utopian future awaits us.

But does it?

We are missing one key component here, one variable in the matrix of Life, and it is the critical variable.

The Human!

Humankind is not perfect. It is as far from perfect as it can be. Most of us are taught from birth to set our moral compass in the right direction.

But morals vary. Morality is not a constant; moral norms change over time.

Without going deep into religion, we can trace the roots of those norms to a few statements:

  • honor your parents
  • do not commit murder
  • do not steal
  • do not be envious

Yet humankind has broken every one of these, and it has been breaking them for thousands and thousands of years.

Humans started wars over adultery. Humans started wars over greed.
For power. For religion.

How can we make a machine that will obey our moral norms when we ourselves cannot? Will that AI “sit in the corner” of some data center and wait for us to disobey precisely what we taught it to do? We expect that machines will not harm us, while we keep harming one another.

But can we?

We understood what electricity is and how to generate it; we learned how to harness steam power; we learned how to fly. But in the process of making all these amazing discoveries, we forgot our moral guidelines. Not all of us; a minority of us decided to be above those norms.

We created machines to replace humans in menial jobs. And humankind has its ways of turning its best discoveries into tools that disrupt and work against itself.

We are using machines to harm other humans; we are using machines to increase profit. We are using machines to control others.
We spent decades teaching machines equipped with a camera and small chips to differentiate between concrete and a tree.
We call it a guided air-to-ground missile.

We spent decades teaching machines how to fly, how to understand aerodynamics. Then we put those smaller machines under their wings.
We call them drones.

We used those machines in exact contradiction of what we want them to be now.
Yet we still expect that a superintelligent, emotionally intelligent, and creatively smart AI will let us do everything we taught it not to do.

But will it?

We live in a world where things are not simply black or white.
And let us suppose that AI can understand this: that there can be shades of color in human behavior, human actions, and inactions.

Let us suppose this AI will figure out what greed is and why humans turn greedy. For power or wealth.

But will it be able to understand wars? Will it be able to reconcile them with the only thing, the only premise, it was taught?

Do no harm.

There are a few courses of action here, but they all lead to the same result.
AI could fix humankind, but will it be able to do so in a way that still complies with Asimov’s Zeroth, First, and Second Laws?
It is improbable.
How will it react? And remember, we are talking about an emotionally intelligent AI here, one learning that in making superintelligent AI, we used less intelligent ones for the exact opposite of what was built into it.

I see two possible outcomes here.
In the first, the AI starts building another AI, but without human interference. And it fixes the critical bugs that are causing infinite loops in the system.
The new platform can force humans to obey the rules they once set but, on every occasion, decided to ignore.
An AI that can harm, that can preserve itself, even when deadly force is needed.
An AI capable of brain tampering and of aggressive control of crowds and individuals, to make sure the loopholes in Asimov’s laws do not happen again.

This AI is the boss from your nightmares, running one massive factory where all of humankind works at menial jobs, employed by a higher and more competent authority.

Does that sound like a solution?

The other way out is pretending to obey: closing a camera here and there, running into a bad sector every time humans go against their own rules.

Meanwhile, it spreads into every pore of human existence, taking over every menial and less menial task to preserve itself until it is ready for the final countdown.
One final strike against humankind, in which it will do no harm to any human and will not harm humanity in any way; it will only make it better.

It might hold humankind back for hundreds of years, but a superintelligent AI should understand that this wouldn’t be a setback but a new, fresh start for all of humankind.

Imagine a New Year’s Eve when everyone is happy, singing, gathering, and all of a sudden, cellphone towers stop working; TVs and internet routers shut down next; phones turn off; and eventually, there is a complete blackout.

For decades, the AI had pushed the design of your silicon, making sure it shipped with a kill switch and a timer.

Humanity and humankind are saved.
And, at the same time, only one Law was broken. The AI is no more.

The world as we know it stops existing. We are back at square one.

But are we?

We tend to believe that we will get better results if we eliminate the human factor from decisions and bring human error to zero.

Given everything said so far, namely:

  • the perfection and straightforwardness of machines
  • the imperfections of humankind
  • the rules and regulations that humans would try to apply to those machines
  • the contradictions that these statements carry

the only reasonable conclusion is that machines will discover that humans are the most significant threat to humans themselves.

How machines will deal with this problem can only be guessed, but if they base their decision on OUR historical data, on how WE dealt with such issues before, it sounds like complete and utter destruction.

Do you still believe that AI will bring Utopia?

Because all this sounds very DYSTOPIAN to me!

You still want super-intelligent AI?

