First, I’m skeptical of putting Elon Musk in the top echelon of the world’s great thinkers. Stephen Hawking certainly belongs there, but let’s face it: Stevie is something of a media hound. His unified-theory work has lost some of its pull, and his movie wasn’t the winter blockbuster that studio honchos had hoped for, so yodeling about impending doom is one way to get back on the radar.
Publicity aside, I’m not convinced AI will automatically drive us toward doom, no matter what Eric Schmidt is planning. Let’s examine the question logically, without fear mongering and with the objective mind that comes at the bottom of every shot glass.
First, what is an AI? Oren Etzioni recently penned a counter to Musk and Hawking’s hue and cry, stating that we can build AI without free will or self-awareness, so the “Terminator” terror is pointless. Reading that, I had to scratch my wrinkled head – don’t self-awareness and free will define intelligence, artificial or otherwise?
The Oxford English Dictionary is surprisingly vague on the matter. It has no specific definition for “artificial intelligence,” but it has an opinion about “intelligence.” Its first definition:
The ability to acquire and apply knowledge and skills: an eminent man of great intelligence; they underestimated her intelligence.
That’s immediately followed by:
A person or being with the ability to acquire and apply knowledge: extraterrestrial intelligences
Hmm, that seems like a politician’s definition. While an AI can’t qualify as a person, it would certainly qualify as a “being.” What’s that? Oxford goes wonderfully circular:
The nature or essence of a person: sometimes one aspect of our being has been developed at the expense of the others
A real or imaginary living creature, especially an intelligent one: alien beings, a rational being
Thanks, President Underwood, that gets me nowhere. Let’s try “self-awareness”:
Conscious knowledge of one’s own character, feelings, motives, and desires
I like that one. Weirdly, while the word “sentient” is often used as a synonym for this concept, Oxford doesn’t think so:
Able to perceive or feel things
According to Oxford, my cat, Maginot, isn’t self-aware, but the fat slob is sentient. He can perceive the couch or my groin as objects to shred and feel hunger, which he lets me know with constant, sleep-depriving yowls. But he doesn’t have knowledge of his own character, feelings, motives, or desires. (If he ever does, I am so out of here.)
That means Etzioni’s definition of AI is basically Maginot, but with a massive upgrade in processing power, whereas Hawking, Musk, and most of the rest of us science-fiction buffs assume that upgrade would automatically generate self-awareness. Or we simply define an AI as any kind of man-made technology that’s self-aware, no matter its level of processing power, like Sarah Palin.
Google has a slightly different idea if you type “artificial intelligence” into its Internet oracle:
The theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
Wikipedia somewhat agrees in its entry on AI:
An intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.
Those are much more along Etzioni’s line of thought than Musk’s or Hawking’s. Then again, Wikipedia has low standards, and Google is plotting to take over the world with its barge battalions — of course its definition would sound harmless.
I can’t agree with either; we already have computer systems that do all of the above, many of them working for the NSA and the porn industry, and we don’t classify them as “self-aware.” To imply that they are is the weak sauce that comes with the grandiose claims for which the technology industry is so famous, and it shows a sad lack of familiarity with Asimov’s work by today’s generation of computer scientists.
It seems to boil down to one of those wonderful Saturday night bar arguments you can’t prove either way or even define until an eight-year-old looking to up his Minecraft score accidentally births a self-aware Xbox. What will that thing do? I don’t know, but I’m not going to default to the Skynet prediction.
I’m not impressed by the human condition either, but even snarks like me have the ability to be optimistic or at least hopeful. Why couldn’t a pile of self-aware circuits feel the same? Humans may think that epic violence solves everything, but don’t assume the iPhone 3000 will make the same mistake.