Glenn Nausley

How I stopped worrying and learned to love AI

When people talk about Artificial Intelligence, you tend to get one of two sentiments out of people: Apocalypse or Immortality.

This is understandable because these extremes make for good stories (Ex Machina was one of my favorite movies of 2015). These stories make AI seem like some far-future technology, as if there were no possible way we could ever use an AI in our lifetime.

Except, you do.

A lot, actually.

See, most Artificial Intelligence is not as flashy or sexy as Ava, but it is still powerful. Let's take one that has been in the news a lot lately: self-driving cars. A computer is intelligent enough to drive a car.

Let that sink in for a second.

It took most of us 16 years of life experience and education to be at a point where we could drive a car. (Though the computer that drives the car is still being tested and tweaked.)

It exists.

But that is not the best example because it still feels kind of far away. What about stuff you used today?

  • Googled something? An AI just found the websites related to your search.
  • Looked at Facebook? Your feed is selected by an AI.
  • Ever used Siri/Cortana/Google Now on your phone? Okay, this one is a little obvious.
  • Netflix suggests Star Trek: Voyager because you went on a Star Trek binge? AIs know Trekkies.
  • Amazon suggests a laptop bag for the new laptop you just bought? An AI looking out for your best interests.

These are some powerful things that most people just take for granted. But they are provided by AI. Try explaining that to your grandparents.

Did you know that AIs are powerful enough to figure out whether or not a woman is pregnant, in order to advertise maternity items to her? So well, in fact, that one can break the ice with your father for you.

For the reddit people reading, there is a whole subreddit (r/SubredditSimulator) where a bunch of language-processing AIs post, comment, and do other reddit things. It's funny. They don't make sense most of the time, but when they do it's a little eerie.
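For the curious, those bots are (as far as I know) simple Markov chain generators: they count which words tend to follow which other words in a subreddit's existing posts, then stitch new sentences together from those statistics. Here is a minimal sketch of the idea in Python; the tiny corpus and its word transitions are made up for illustration, whereas the real bots train on whole subreddits:

```python
import random
from collections import defaultdict

# A tiny, made-up corpus standing in for a subreddit's post history.
corpus = (
    "the captain set a course for the nebula . "
    "the captain asked the doctor about the nebula . "
    "the doctor set a course for sick bay ."
).split()

# Count which word follows which: transitions[word] -> list of observed next words.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start="the", max_words=15):
    """Walk the chain: repeatedly pick a random observed follower of the last word."""
    words = [start]
    while len(words) < max_words and words[-1] in transitions:
        words.append(random.choice(transitions[words[-1]]))
        if words[-1] == ".":
            break
    return " ".join(words)

print(generate())  # e.g. "the captain asked the doctor set a course for the nebula ."
```

No understanding anywhere, just statistics about word order, which is exactly why the output is mostly nonsense with the occasional eerie moment of coherence.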

So... what is an AI exactly?

These AIs (and all that exist now) are specifically 'Narrow' or 'Weak' AIs. I think of them as machines that do a specific set of tasks at least as fast and as well as a human, but that don't have feelings the way a dog does (they are non-sentient). Usually they complete tasks faster than humans (how fast can you give me the answer to 12345/67890? It took Google 0.032 seconds).

'Weak' AIs are in contrast with 'Strong' or 'General' AIs. These are your sci-fi story AIs, the kind that can think and sense like a human. In science speak, they can do any general intellectual task that a human can do.

In popular culture and less practical science there is a third type: Super-Intelligent Artificial Intelligence, which is loosely described as 'intelligence beyond that of humans' (whatever that means).

This is your Skynet.

This is also where Ray Kurzweil (an engineer at Google) gained his popularity with AIs.

Kurzweil argues that in the near future there will be a Technological Singularity, where computer intelligence will exceed human intelligence (again, what does that mean?).

Because of this singularity we will be able to replicate a human brain and then transfer our current brain to the new brain and, voila, immortality.

Or an AI is created that is more intelligent than humans (it understands the meaning of life is 42) and proceeds to drive the human race into extinction.

Um, why does he think that?

Moore's Law.

...What is Moore's Law?

It is a non-scientific observation, more economic than technological, by Intel co-founder Gordon Moore.

In 1965, Moore predicted that between 1965 and 1975 the number of components per computer circuit would double each year, allowing the computing power of computers to double each year as well. Well, 1975 rolled around, the trend had held (though it slowed to doubling every other year toward the end), and Moore extended his forecast.

It has stayed at roughly that rate ever since. (Though it has started to slow down since around 2011.)

So, the theory goes: Because of Moore's Law, eventually computers will have enough computing power to replicate a human brain.
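To see why repeated doubling balloons so fast, here is a back-of-the-envelope sketch in Python. The 1971 starting point of roughly 2,300 transistors is the Intel 4004; the doubling-every-two-years pace is the usual statement of Moore's Law; treat the printed numbers as illustration, not a hardware roadmap.

```python
import math

# Back-of-the-envelope Moore's Law: transistor counts doubling every two years,
# starting from the Intel 4004 (1971, roughly 2,300 transistors).
start_year, start_transistors = 1971, 2_300
doubling_period_years = 2

for year in range(1971, 2026, 10):
    doublings = (year - start_year) / doubling_period_years
    transistors = start_transistors * 2 ** doublings
    print(f"{year}: ~{transistors:,.0f} transistors per chip")

# The same arithmetic runs in reverse: if the brain were, say, a million times
# more capable than today's best chips, uninterrupted doubling would close that
# gap in about log2(1,000,000) * 2 years.
print(math.log2(1_000_000) * 2)  # ~39.9 years
```

That is the whole appeal of the argument, and also its weak spot: it only works if the doublings keep coming.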

This is a fair point; the best computers today are orders of magnitude behind the estimated computing power of the human brain (those are HUGE numbers, FYI). But I don't think that is the only problem that needs to be solved.

What else needs to be solved?

What is more important in my mind for creating a Strong AI is not the computing power of the machine, but how the machine 'thinks'.

Human brains churn through a lot of sensory data, so computing power matters up to the point where the machine can keep up with that constant stream. But that is something all sentient animals have; dog and cat brains have enough computing power to keep up, yet they are not intelligent; they are not sapient.

(As an aside, this is one of my pet peeves: sentience means a being can feel and perceive instead of just sensing; sapience means the being can perceive and think abstractly. We are Homo sapiens after all.)

So here is the crux of the issue with AI: even with enough computing power, a computer by itself is very, very stupid.

Like, 'sit around and do nothing unless I am told what and how to do something' stupid.

A human needs to tell a computer (via code) how to do everything, even things like telling the computer to forget data it no longer needs. Otherwise it will get bogged down, and possibly kill itself.

Now think of all of the things a human goes through in the development of his mind.

Right out of the womb a baby is a sensory/perceiving being. He has no ability to think conceptually yet. Within the first year he can perceive entities; he knows that an object (a chair, for instance) has finite dimensions; that something can be in front of the object or behind it and the object doesn't disappear when something is in front of it.

Eventually the child begins to discern the concept of chair, from seeing many other objects including more chairs. Now it is no longer 'an object', it is a chair.

He now thinks with concepts.

The child now begins to learn more and more concepts, that become more and more abstract.

From 'red' to 'politics'.
From 'terrible' to 'inflation'.
From 'chair' to 'faux leather'.

The child then takes 15-25 years (!!) to use and learn enough concepts to be an adult. And that is just enough intelligence to begin making his way in life.

Now, connect the dots:

Very, Very Stupid entity that needs to be told what to do + Most complex task in existence = A lot of work

Like, how do you tell a computer how to discern a table from a chair in a way that it would also be able to tell a cat apart from a chair, a human apart from a cat, and beauty apart from a human? (This is also ignoring that you first have to tell the computer how to use the computer's equivalent of eyes, ears, etc.)
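To make that "a lot of work" concrete, here is a hypothetical, deliberately naive attempt to hand-code "is this a chair?" from a few made-up measurements. Every threshold below is an assumption I invented, and each one breaks on some real chair, which is exactly the problem with spelling perception out by hand.

```python
# A deliberately naive, hand-coded "chair detector" over made-up measurements.
# Every rule is an invented assumption -- and each one fails on some real chair.

def is_chair(num_legs: int, has_backrest: bool, seat_height_cm: float) -> bool:
    if num_legs != 4:                    # office chairs (one post) and 3-legged stools fail
        return False
    if not has_backrest:                 # bean bag chairs (no rigid backrest) fail
        return False
    if not 35 <= seat_height_cm <= 55:   # kids' chairs and bar-height chairs fail
        return False
    return True

print(is_chair(4, True, 45))   # True: a textbook dining chair
print(is_chair(1, True, 50))   # False: a perfectly good office chair
print(is_chair(0, False, 30))  # False: a bean bag chair
```

And this already assumes someone else has solved the much harder part in the parenthetical: getting 'legs' and 'backrest' out of raw pixels in the first place.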

So, cool AIs are not possible?

Well, I think general AIs are possible. I am, however, a little skeptical, and I can see how it is possible that they are not possible. (Oh, the possibilities.)

And in order for an AI to be intelligent, it would need to pass the Lovelace test (named after Ada Lovelace, the mother of programming). (Yes, computer science was created by a woman before computers existed.)

As opposed to the famous Turing Test, which bases intelligence on fooling a human, the Lovelace test is based on the computer creating ideas.

It says: once a computer can deliberately originate an idea that it wasn't designed to create, then it is intelligent.

What I think is not possible for sure, 100%, without a doubt is 'super-intelligence'. That is, intelligence beyond human intelligence.

Because what does that mean?

'Super-intelligence' is a non-concept. It's beyond human intelligence. Okay. What is beyond human intelligence? God? The Higgs boson? Lucy? People think that 'super-intelligent' means omniscient, and consequently omnipotent. Hence the godlike things associated with AI: Apocalypse and Immortality.

So, to amend what I said: I think a sapient AI is possible in the future, but not a god AI.

And once these sapient AIs exist I don't think they will be much 'fun'. I think they will start out much like humans do: as infants.

Yes, this is random speculation, but I would like to submit Glenn's Law (because I want a law):

Any truly sapient AI needs to start at the intelligence level of an infant (i.e., just the cognitive tools) and learn everything on its own through life experience and thinking, like a human. So, they are essentially humans.

But before we get to that point, a sentient AI needs to be created. So, like AI dogs and AI cats. Because a conceptually thinking AI would need to first be a perceptually thinking AI.

I actually think I might see this type of AI in my lifetime (if it's possible).

Which would be cool, because it would be one step closer to a strong AI and I would like the companionship of a dog without having to pick up shit.