Wednesday 11 December 2013

60 Years Later, Facebook Heralds New Dawn for Artificial Intelligence


Yann LeCun, the new artificial intelligence guru at Facebook. Photo: Josh Valcarcel/WIRED

Yann LeCun — the NYU professor who was just hired to run Facebook’s new artificial intelligence lab — says his interest in AI began the day he first saw 2001: A Space Odyssey. He was nine years old.

The idea of artificial intelligence — machines that can process information the way people do — wasn’t that much older. In the late 1950s, a group of East Coast academics had introduced the idea during a conference at Dartmouth College, and when maverick film director Stanley Kubrick released 2001 a decade later, portraying a thinking machine in such a fascinating — if frightening — way, it captured the imagination of so many people, across academia and beyond. Well beyond.

By the early ’80s, as an engineering student in his native France, LeCun was at work on real-life AI techniques, including machine learning that involved brain-mimicking systems called “neural networks.” The only trouble was that, after years of relatively little practical progress in the field, most of the academic world had turned its back on AI. “‘Machine learning’ and ‘neural nets’ were dirty words,” LeCun said earlier this year.

But this was what he wanted to do, and by the middle of the decade, he had developed a new algorithm for use with rather complex neural networks. As it turns out, this work was a lot like the research being done across the Atlantic by another academic named Geoffrey Hinton, and after LeCun finished his PhD in France, he joined Hinton’s stubbornly defiant artificial intelligence group at the University of Toronto. For years, they and a handful of other researchers toiled on a project that few people truly believed in — it was a “very difficult idea to defend,” LeCun says — but nowadays, things are different.

As LeCun begins work on the new AI lab at Facebook, Hinton is months into a similar operation at Google, and the ideas at the heart of their neural network research — typically referred to as “deep learning” — have also found their way into projects at Microsoft and IBM. Driven by Hinton and LeCun and others, such as Yoshua Bengio at the University of Montreal, artificial intelligence is on the verge of a major renaissance, poised to overhaul the way data is analyzed across so many of the online services we use every day.

Google is already using deep learning in the voice recognition service offered on its Android mobile operating system, and these same techniques can be used to analyze everything from images and videos to, yes, the way you interact with people on a massive social network such as Facebook.

If Facebook can use deep learning to recognize faces in your photos, it can automatically share those pics with others who may enjoy them. If it can use AI to reliably predict your behavior on its social network, it can serve you ads you’re more likely to click on. “I could even imagine Facebook identifying the brand of a product in the background of an image and then using that information to target advertisements related to that brand to the user who uploaded the image,” says George Dahl, a PhD student who works with Geoff Hinton in the deep learning group at the University of Toronto.

For Abdel-rahman Mohamed, who also studied with Hinton, the possibilities are almost endless. “They can do amazing things — amazing things,” says Mohamed, who will soon join IBM Research as part of its voice recognition team. “What Facebook can do is almost unlimited.” His point is that deep learning is merely a way of improving how computing systems operate.

Facebook has not said where, specifically, it intends to take its deep learning research. But the company clearly sees this work as a big part of its future. On Monday, Facebook founder and CEO Mark Zuckerberg and chief technical officer Michael Schroepfer were at the Neural Information Processing Systems Conference in Lake Tahoe — the annual gathering of the AI community — to announce LeCun’s hire, and the company has said that its new lab will span operations in California, London, and New York, where LeCun is based.

In the mid-80s, LeCun and Hinton developed what are called “back-propagation” algorithms. Basically, these are ways of training multi-layered neural networks — brain-like networks that can analyze information on multiple levels. Mohamed says you should think about these neural nets in much the same way you think about how your own body operates.

“If I am talking to you, you’re processing it with multiple layers,” he explains. “There are your ears that hear, but then there is another layer that interprets. There are layers that grasp words, and then the concepts, and then the overall understanding of what’s going on.”
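The layered processing Mohamed describes can be sketched in a few lines of code: a tiny two-layer network trained with back-propagation, where errors at the output are passed backward to adjust each layer in turn. Everything here — the layer sizes, the learning rate, the toy XOR task — is an illustrative assumption for the sketch, not anything drawn from LeCun’s or Hinton’s actual systems:

```python
import numpy as np

# Toy task: XOR, a classic problem a single-layer network cannot solve
# but a multi-layered one can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8))   # input -> hidden layer weights (sizes are arbitrary)
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output layer weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate, chosen by hand for this toy example
for step in range(5000):
    # Forward pass: each layer transforms the previous layer's output,
    # like the "layers" of hearing, then words, then concepts.
    h = sigmoid(X @ W1 + b1)       # hidden-layer activations
    out = sigmoid(h @ W2 + b2)     # network's prediction

    # Backward pass (back-propagation): the output error is propagated
    # back through the layers, and each weight moves to reduce it.
    d_out = (out - y) * out * (1 - out)    # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)     # error signal at the hidden layer

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

loss = float(np.mean((out - y) ** 2))
print(loss)  # mean-squared error after training; should be near zero
```

The point of the sketch is the shape of the idea, not the specifics: stacking layers lets the network build up intermediate representations, and back-propagation is what makes training all of those layers at once tractable.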

The basic idea is now almost thirty years old, but we’re just now reaching the point where it’s practical, thanks to improvements in computer hardware — not to mention an enormous internet-driven increase in the amount of real-world data we can feed into these deep learning algorithms. “We are now at the intersection of many things that we didn’t have in the past,” Mohamed says.

As it turns out, these algorithms are suited to running on the sort of massive computing farms that drive our modern web services, farms that run myriad tasks in parallel. They’re particularly well suited to systems built with thousands of graphics processing units, or GPUs, chips that were originally designed to render graphics but are now being applied to countless other tasks that require scads of processing power. Google says that it’s using GPUs to run these types of deep-learning algorithms.

You might think that an operation like Google has been doing AI since the late ’90s. But that was a very different kind of AI, an AI that took a shortcut to intelligent behavior without actually trying to mimic the way the brain works. Deep learning doesn’t take that shortcut. “It’s not exactly like a brain, but it is the closest model we have to the brain — that can process massive amounts of data,” says Mohamed.

As Mohamed points out, we don’t completely know how the brain works. Deep learning is a long way from actually cloning the way we think. But the bottom line is that it works quite well with certain modern applications, including voice and image recognition. That’s why Google is using it. That’s why Microsoft and IBM are on board. And it’s why Facebook just hired Yann LeCun.

That said, the movement is only just getting started. “Facebook, Microsoft, Google, and IBM understand how much more research needs to be done to realize the full potential of deep learning methods, which is why they are all investing so heavily in core machine learning technology today,” says Dahl. “Even with all the recent successes, it is important to remember that the exciting applications we are seeing now are built on decades of research by many different people — and the problems we are trying to solve are very very hard.”
