Thanks to the advent of Big Data, new algorithms and massive, affordable computing power, artificial intelligence is now, finally, on a roll again.
Try this: Go online to translate.google.com.
In the left-hand input box, type, "The spirit is willing, but the flesh is weak." In the right-hand box, choose the language you want it translated into. Once it's translated, copy the translated text and paste it into the left-hand box for conversion back into English.
If you don't get back exactly the original text, the back-translation will in all likelihood still reflect at least part of the original thought: that the actions of the subject fell short of his or her intentions, and not that the wine was good but the meat was tasteless, which is what a literal, word-for-word translation could produce.
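A toy round-trip "translator" makes the failure mode concrete. The word tables below are invented for illustration; real systems such as Google Translate model whole sentences statistically rather than substituting single words, which is exactly why they avoid this trap.

```python
# A word-for-word "translator" that shows how literal translation loses
# an idiom's meaning. Each polysemous English word collapses to one
# (wrong) sense in the invented target language.
EN_TO_XX = {
    "spirit": "vin",    # picks the "spirits/liquor" sense, not "will"
    "willing": "bon",
    "flesh": "viande",  # picks the "meat" sense, not "the body"
    "weak": "fade",
}
XX_TO_EN = {"vin": "wine", "bon": "good", "viande": "meat", "fade": "tasteless"}

def literal_translate(words, table):
    """Substitute each word independently -- no context, no grammar."""
    return [table.get(w, w) for w in words]

original = ["spirit", "willing", "flesh", "weak"]
round_trip = literal_translate(literal_translate(original, EN_TO_XX), XX_TO_EN)
print(round_trip)  # -> ['wine', 'good', 'meat', 'tasteless']
```

A sentence-level system preserves the intent because it scores whole phrases, not isolated words.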
AI is becoming real. Jackie Fenn, Gartner Analyst
In other words, a machine figured out what you meant, not merely what you said.
"In the 1960s, this was considered impossible," explains Michael Covington, a consultant and retired associate director of the Institute for Artificial Intelligence at the University of Georgia.
For decades the field of artificial intelligence (AI) experienced two seasons: recurring springs, in which hype-fueled expectations ran high, and subsequent winters, after the promises of spring could not be met and disappointed investors turned away. But now real progress is being made, and it's being made in the absence of hype. In fact, some of the chief practitioners won't even talk about what they are doing.
Seasons old and new
"AI is becoming real," says Jackie Fenn, a Gartner analyst. "AI has been in winter for a decade or more but there have been many breakthroughs [during] the last several years," she adds, pointing to face recognition algorithms and self-driving cars.
Researcher Daniel Goehring, a member of the Artificial Intelligence Group at the Freie Universitaet (Free University), demonstrates hands-free driving during a 2011 test in Berlin. The car, a modified Volkswagen Passat, is controlled by 'BrainDriver' software with a neuroheadset device that interprets electroencephalography signals with additional support from radar-sensing technology and cameras. REUTERS/Fabrizio Bensch
"There was a burst of enthusiasm in the late 1950s and early 1960s that fizzled due to a lack of computing power," recalls Covington. "Then there was a great burst around 1985 and 1986 because computing power had gotten cheaper and people were able to do things they had been thinking about for a long time. The winter came in the late 1980s when the enthusiasm was followed by disappointment," and small successes did not turn into big successes. "And since then, as soon as we get anything to work reliably, the industry stops calling it AI."
In the "early days" -- the 1980s -- "we built systems that were well-constrained and confined, and you could type in all the information that the system would make use of," recalls Kris Hammond, co-founder of Narrative Science, which sells natural-language AI systems. "The notion was to build on a substrate of well-formed rules, and chain through the rules and come up with an answer. That was the version of AI that I cut my teeth on. There are some nice success stories but they did not scale, and they did not map nicely onto what human beings do. There was a very strong dead end."
There was a burst of enthusiasm in the late 1950s and early 1960s that fizzled due to a lack of computing power. Michael Covington, consultant
Today, thanks to the availability of vast amounts of online data and inexpensive computational power, especially in the cloud, "we are not hitting the wall anymore," Hammond says. "AI has reached an inflection point. We now see it emerging from a substrate of research, data analytics and machine learning, all enabled by our ability to deal with large masses of data."
Going forward, "The idea that AI is going to stall again is probably dead," says Luke Muehlhauser, executive director of the Machine Intelligence Research Institute (MIRI) in Berkeley, Calif. "AI is now ubiquitous, a tool we use every time we ask Siri a question or use a GPS device for driving directions."
Beyond today's big data and massive computational resources, sources cite a third factor pushing AI past an inflection point: improved algorithms, especially the widespread adoption of a decade-old algorithm called "deep learning." Yann LeCun, director of Facebook's AI Group, describes it as a way to more fully automate machine learning by using multiple layers of analysis that can compare their results with other layers.
He explains that previously, anyone designing a machine-learning system had to hand-craft software to identify the sought-after features in the data, and then hand-craft more software to classify those features, before any data could be submitted. With deep learning, both of these manual steps are replaced with trainable machine-learning systems.
"The entire system from end to end is now multiple layers that are all trainable," LeCun says.
(LeCun attributes the development of deep learning to a team led by Geoff Hinton, a professor at the University of Toronto who now works part-time for Google; LeCun was, in fact, part of Hinton's deep learning development team. Hinton did not respond to interview requests.)
Even so, "deep learning can only take us so far," counters Gary Marcus, a professor at New York University. "Despite its name it's rather superficial -- it can pick up statistical tendencies and is particularly good for categorization problems, but it's not good at natural language understanding. There needs to be other advances as well so that machines can really understand what we are talking about."
There needs to be other advances ... so that machines can really understand what we are talking about. Gary Marcus, Professor, New York University
He hopes the field will revisit ideas abandoned in the 1960s, since with modern computing power they might now produce results, such as a machine that learns language as well as a four-year-old child does.
In the final analysis, "About half of the progress in the performance of AI has been from improved computing power, and half has been from improvements by programmers. Sometimes, progress is from brute force applied to get a one percent improvement. But the ingenuity of people like Hinton should not be downplayed," says MIRI's Muehlhauser.
Next: The AI rush