A shift in how our world works may be in the offing, with artificial intelligence as the backdrop. The most immediate and apparent example comes in the form of intelligent personal assistants, like Apple’s flopped Siri or the more favorably reviewed Google Now. These assistants are built on a field of artificial intelligence called natural language processing (“NLP”) which, pared down, is the process of a computer trying to recognize what you just said or typed into it.
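To make that pared-down definition concrete, here is a deliberately naive sketch in Python of the simplest version of the problem: mapping what a user typed to an intent. It’s a toy rule-based illustration, and every name in it is hypothetical; the assistants named above rely on far more sophisticated statistical models.

```python
import re

# Toy "grammar": each intent is paired with a pattern over the user's
# utterance. (Hypothetical illustration, not how Siri or Google Now work.)
INTENTS = {
    "set_alarm":   re.compile(r"\b(wake me|set an? alarm)\b", re.I),
    "get_weather": re.compile(r"\b(weather|forecast|rain|temperature)\b", re.I),
}

def recognize(utterance: str) -> str:
    """Return the first intent whose pattern matches the utterance."""
    for intent, pattern in INTENTS.items():
        if pattern.search(utterance):
            return intent
    return "unknown"

print(recognize("Wake me at 7:30 am"))         # -> set_alarm
print(recognize("Is it going to rain today?")) # -> get_weather
```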
To see just how much this one aspect of A.I. has settled into our lives, let’s talk Google again, since they’re steeped in NLP. Google Now aside, their search function performs word-sense disambiguation and they offer fairly accurate machine translation (depending on the language), both major research problems in computationally parsing natural language.
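For a sense of what word-sense disambiguation actually involves, here is a minimal sketch using the classic Lesk algorithm from the open-source NLTK toolkit. To be clear, this is an illustrative stand-in, not a claim about what Google’s search pipeline runs.

```python
import nltk
from nltk.wsd import lesk

nltk.download("punkt", quiet=True)    # tokenizer models
nltk.download("wordnet", quiet=True)  # sense inventory the Lesk algorithm uses

# "bank" is the textbook ambiguous word: financial institution vs. riverside.
for sentence in ("I deposited the cash at the bank",
                 "We fished from the bank of the river"):
    tokens = nltk.word_tokenize(sentence)
    # Lesk picks the WordNet sense whose gloss best overlaps the context.
    sense = lesk(tokens, "bank")
    print(sentence, "->", sense.name() if sense else "no sense found")
```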
The company has placed a lot of stock in the trend toward A.I., but now, with its appointment of Ray Kurzweil as Director of Engineering, it’s going to become a lot more involved. Kurzweil explained his intention to TechCrunch:
Perhaps more than any other company, explains Kurzweil, Google has access to the “things you read, what you write, in your emails or blog posts, and so on, even your conversations, what you hear, what you say.”
Google can combine the personalized recommendations of a friend (who often knows us better than we know ourselves) with the sum of all human knowledge, creating a sort of super best friend.
This friend of yours, this cybernetic friend, that knows that you have certain questions about certain health issues or business strategies. It can then be canvassing all the new information that comes out in the world every minute and then bring things to your attention without you asking about them.
It’s not just NLP in our phones and in the most widely-used search engine, either. The less-subtle applications include the use of intelligent robots in manufacturing and the return of a “more” intelligent Furby, among other things.
What we’re seeing now, as a whole, is the result of what’s called “weak A.I.”: machines that do not quite match (or are not designed to match) the intelligence of human beings. This kind of A.I. has also earned the descriptor “applied A.I.” It stands opposed to the “strong A.I.” that some propose we’re headed toward, where machines match or surpass our intelligence. That event would be called the technological singularity, or, as popularized by Ray Kurzweil, simply The Singularity. The advances still aren’t coming at a pace that keeps up with the most optimistic hopes, but the field is moving quickly. Quickly enough, probably, to avoid the “AI winters” of the past, when funding for A.I. research was cut off because the progress optimistic researchers had promised failed to materialize.
There is ongoing debate and discussion about where artificial intelligence research is headed. On the one hand, there is no doubt that it is here and real, and we are seeing the implementation of more complex examples like autonomous vehicles; on the other, there are questions about the validity of how A.I. is currently evolving. Noam Chomsky raised that discussion last year.
To Chomsky, the field of A.I. is evolving in the wrong way:
It’s true there’s been a lot of work on trying to apply statistical models to various linguistic problems. I think there have been some successes, but a lot of failures. There is a notion of success … which I think is novel in the history of science. It interprets success as approximating unanalyzed data.
In other words, he is attacking the current state of A.I. as being nothing but statistical models. In an expanded interview, he goes on to voice displeasure that A.I., as it stands, doesn’t fit with the history of science, in which science is supposed to tell us something about ourselves. The Director of Research at Google, Peter Norvig, wrote a lengthy reply to Chomsky; the clincher of the discussion, from Norvig, was:
My conclusion is that 100% of these articles and awards are more about “accurately modeling the world” than they are about “providing insight,” although they all have some theoretical insight component as well. I recognize that judging one way or the other is a difficult ill-defined task, and that you shouldn’t accept my judgements, because I have an inherent bias. (I was considering running an experiment on Mechanical Turk to get an unbiased answer, but those familiar with Mechanical Turk told me these questions are probably too hard. So you the reader can do your own experiment and see if you agree.)
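For readers wondering what the “statistical models” at the center of this exchange look like, the sketch below builds a toy bigram model: it “learns” word sequences purely by counting which word follows which in raw text, the sort of success-by-approximation Chomsky questions and Norvig defends. This is a deliberately simplified illustration, not an example either man used.

```python
from collections import Counter, defaultdict

# Learn P(next word | current word) by counting adjacent word pairs in
# unanalyzed text: no grammar, no theory of language, just the data.
corpus = "the dog chased the cat and the cat chased the mouse".split()

counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    counts[current][nxt] += 1

def prob(nxt: str, current: str) -> float:
    """Estimated probability that `nxt` follows `current`."""
    total = sum(counts[current].values())
    return counts[current][nxt] / total if total else 0.0

print(prob("cat", "the"))     # 2 of the 4 words after "the" are "cat" -> 0.5
print(prob("chased", "cat"))  # 1 of the 2 words after "cat" -> 0.5
```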
This kind of back-and-forth is nothing new in the field of A.I. In 1976, MIT computer science professor Joseph Weizenbaum objected to using A.I. to replace positions that he felt required human emotion and empathy. Journalist Pamela McCorduck countered, saying:
“I’d rather take my chances with an impartial computer,” pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all.
Though the ethical and philosophical questions are there, they seem to play a background role in any impending shift toward day-to-day use of artificial intelligence. Robotics companies are making strides, it seems, by the month, and there’s no sign that DARPA funding for intelligent robotic systems is drying up anytime soon. It is all still within the realm of weak or applied A.I., but there’s no telling how far off the era of strong A.I. is, particularly when the Director of Engineering at, arguably, one of the most powerful companies in the world is one of its major proponents.
Let us know when you think the shift will ultimately happen. We’re on Twitter @RobotCentral.