Why Robot Brains Need Symbols

Nowadays, the words “artificial intelligence” seem to be on practically everyone’s lips, from Elon Musk to Henry Kissinger. At least a dozen countries have mounted major AI initiatives, and companies like Google and Facebook are locked in a massive battle for talent. Since 2012, virtually all the attention has been on one technique in particular, known as deep learning, a statistical technique that uses sets of simplified “neurons” to approximate the dynamics inherent in large, complex collections of data. Deep learning has powered advances in everything from speech recognition and computer chess to automatically tagging your photos. To some people, it probably seems like “superintelligence” (machines vastly more intelligent than people) is just around the corner.

The truth is, it is not. Getting a machine to recognize the syllables in your sentences is not the same as getting it to understand the meaning of those sentences. A system like Alexa can understand a simple request like “turn on the lights,” but it’s a long way from holding a meaningful conversation. Similarly, robots can vacuum your floor, but the AI that powers them remains weak, and they are a long way from being clever enough (and reliable enough) to watch your kids. There are lots of things that people can do that machines still can’t.

There is also plenty of controversy about what we should do next. I should know: For the last three decades, since I started graduate school at the Massachusetts Institute of Technology, studying with the inspiring cognitive scientist Steven Pinker, I have been embroiled in an on-again, off-again debate about the nature of the human mind and the best way to build AI. I have taken the sometimes unpopular position that techniques like deep learning (and the predecessors that were around back then) aren’t enough to capture the richness of the human mind.

That on-again, off-again debate flared up in an unexpectedly big way last week, leading to a huge tweetstorm that brought in a host of luminaries, ranging from Yann LeCun, a founder of deep learning and current Chief AI Scientist at Facebook, to (briefly) Jeff Dean, who runs AI at Google, and Judea Pearl, a Turing Award winner at the University of California, Los Angeles.

When 140 characters no longer seemed like enough, I tried to take a step back, to explain why deep learning might not be enough, and where we perhaps ought to look for another idea that might combine with deep learning to take AI to the next level. The following is a slight adaptation of my personal perspective on what the debate is all about.
