AI pioneer Geoff Hinton: “Deep learning is going to be able to do everything”
▪ On the AI field’s gaps: "There’s going to have to be quite a few conceptual breakthroughs...we also need a massive increase in scale."
▪ On neural networks’ weaknesses: "Neural nets are surprisingly good at dealing with a rather
small amount of data, with a huge number of parameters, but people are even better."
▪ On how our brains work: "What’s inside the brain is these big vectors of neural activity."
The modern AI revolution began during an obscure research contest. It was 2012, the third year of the
annual ImageNet competition, which challenged teams to build computer vision systems that
would recognize 1,000 objects, from animals to landscapes to people.
In the first two years, the best teams had failed to reach even 75% accuracy. But in the third, a band
of three researchers—a professor and his students—suddenly blew past this ceiling. They won the
competition by a staggering 10.8 percentage points. That professor was Geoffrey Hinton, and the
technique they used was called deep learning.
Hinton had actually been working with deep learning since the 1980s, but its effectiveness had
been limited by a lack of data and computational power. His steadfast belief in the technique
ultimately paid massive dividends. By the fourth year of the ImageNet competition, nearly every
team was using deep learning and achieving miraculous accuracy gains. Soon enough deep
learning was being applied to tasks beyond image recognition, and within a broad range of
industries as well.
Last year, for his foundational contributions to the field, Hinton was awarded the Turing Award,
alongside other AI pioneers Yann LeCun and Yoshua Bengio. On October 20, I spoke with him at
MIT Technology Review’s annual EmTech MIT conference about the state of the field and where
he thinks it should be headed next.
You think deep learning will be enough to replicate all of human intelligence. What makes you so sure?
I do believe deep learning is going to be able to do everything, but I do think there’s going to have
to be quite a few conceptual breakthroughs. For example, in 2017 Ashish Vaswani et al. introduced
transformers, which derive really good vectors representing word meanings. It was a conceptual
breakthrough. It’s now used in almost all the very best natural-language processing. We’re going to
need a bunch more breakthroughs like that.
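To make the idea of "really good vectors representing word meanings" concrete, here is a minimal sketch, assuming the open-source Hugging Face transformers library and the publicly available bert-base-uncased checkpoint (any pretrained transformer that exposes hidden states would serve equally well):

```python
# Minimal sketch: context-dependent word vectors from a pretrained transformer.
# Assumes PyTorch and the Hugging Face "transformers" library are installed;
# the bert-base-uncased checkpoint is used purely as an example.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentence = "Deep learning is going to be able to do everything."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# One vector per token; each vector depends on the surrounding words,
# which is what makes these representations of word meaning useful.
token_vectors = outputs.last_hidden_state[0]   # shape: (num_tokens, 768)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, vector in zip(tokens, token_vectors):
    print(token, vector[:4].tolist())           # first few dimensions only
```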
And if we have those breakthroughs, will we be able to approximate all human intelligence through deep learning?
Yes. Particularly breakthroughs to do with how you get big vectors of neural activity to implement
things like reason. But we also need a massive increase in scale. The human brain has about 100
trillion parameters, or synapses. What we now call a really big model, like GPT-3, has 175 billion. It’s a thousand times smaller than the brain. GPT-3 can now generate pretty plausible-looking text, and it’s still tiny compared to the brain.
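As a rough check on that ratio, 100 trillion synapses against 175 billion parameters works out to a factor of a few hundred, on the order of the "thousand times" Hinton cites. A minimal sketch of the arithmetic, using only the figures quoted above:

```python
# Back-of-the-envelope comparison of brain synapses vs. GPT-3 parameters.
human_synapses = 100e12    # ~100 trillion synapses
gpt3_parameters = 175e9    # 175 billion parameters

ratio = human_synapses / gpt3_parameters
print(f"The brain has roughly {ratio:.0f}x more synapses than GPT-3 has parameters.")
# ~571x, i.e. on the order of a thousand times, as the interview says.
```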
When you say scale, do you mean bigger neural networks, more data, or both?
Both. There’s a sort of discrepancy between what happens in computer science and what happens
with people. People have a huge amount of parameters compared with the amount of data they’re
getting. Neural nets are surprisingly good at dealing with a rather small amount of data, with a
huge number of parameters, but people are even better.
A lot of the people in the field believe that common sense is the next big capability to tackle. Do you
agree?
I agree that that’s one of the very important things. I also think motor control is very important,
and deep neural nets are now getting good at that. In particular, some recent work at Google has
shown that you can do fine motor control and combine that with language, so that you can open a
drawer and take out a block, and the system can tell you in natural language what it’s doing.
For things like GPT-3, which generates this wonderful text, it’s clear it must understand a lot to
generate that text, but it’s not quite clear how much it understands. But if something opens the
drawer and takes out a block and says, “I just opened a drawer and took out a block,” it’s hard to
say it doesn’t understand what it’s doing.
The AI field has always looked to the human brain as its biggest source of inspiration, and different
approaches to AI have stemmed from different theories in cognitive science. Do you believe the brain
actually builds representations of the external world to understand it, or is that just a useful way of
thinking about it?
A long time ago in cognitive science, there was a debate between two schools of thought. One was
led by Stephen Kosslyn, and he believed that when you manipulate visual images in your mind,
what you have is an array of pixels and you’re moving them around. The other school of thought
was more in line with conventional AI. It said, “No, no, that’s nonsense. It’s hierarchical, structural
descriptions. You have a symbolic structure in your mind, and that’s what you’re manipulating.”
I think they were both making the same mistake. Kosslyn thought we manipulated pixels because
external images are made of pixels, and that’s a representation we understand. The symbol people
thought we manipulated symbols because we also represent things in symbols, and that’s a
representation we understand. I think that’s equally wrong. What’s inside the brain is these big vectors of neural activity.
There are some people who still believe that symbolic representation is one of the approaches for AI.
Absolutely. I have good friends like Hector Levesque, who really believes in the symbolic approach
and has done great work in that. I disagree with him, but the symbolic approach is a perfectly
reasonable thing to try. But my guess is in the end, we’ll realize that symbols just exist out there in
the external world, and we do internal operations on big vectors.
What do you believe to be your most contrarian view on the future of AI?
Well, my problem is I have these contrarian views and then five years later, they’re mainstream.
Most of my contrarian views from the 1980s are now kind of broadly accepted. It’s quite hard now
to find people who disagree with them. So yeah, I’ve been sort of undermined in my contrarian
views.