Will machines outsmart man?

Scientists believe the point of 'Singularity' – where artificial intelligence surpasses that of humans – is closer than we thought

Wendy M Grossman
guardian.co.uk, Tuesday 4 November 2008 19.01 EST

Ray Kurzweil at a conference — as a hologram. Photograph: Ed Murray/Corbis
They are looking for the hockey stick. Hockey sticks are the shape technology startups hope their sales graphs will assume: a modestly ascending blade, followed by a sudden turn to a near-vertical long handle. Those who assembled in San Jose in late October for the Singularity Summit are awaiting the point where machine intelligence surpasses that of humans and takes off near-vertically into recursive self-improvement.

The key, said Ray Kurzweil, inventor of the first reading machine and author of 2005's The Singularity Is Near, is exponential growth in computational power - "the law of accelerating returns". In his favourite example, at the human genome project's initial speed, sequencing the genome should have taken thousands of years, not the 15 scheduled. Seven years in, the genome was 1% sequenced. Exponential acceleration had the project finished on schedule. By analogy, enough doublings in processing power will close today's vast gap between machine and human intelligence.

This may be true. Or it may be an unfalsifiable matter of faith, which is why the singularity is sometimes satirically called "the Rapture for nerds". It makes assessing progress difficult. Justin Rattner, chief technology officer of Intel, addressed a key issue at the summit: can Moore's law, which has the number of transistors packed on to a chip doubling every 18 months, stay in line with Kurzweil's graphs? The end has been predicted many times but, said Rattner, although particular chip technologies have reached their limits, a new paradigm has always continued the pace.

"In some sense - silicon gate CMOS - Moore's law ended last year," Rattner said. "One of the founding laws of accelerating returns ended. But there are a lot of smart people at Intel and they were able to reinvent the CMOS transistor using new materials." Intel is now looking beyond 2020 at photonics and quantum effects such as spin. "The arc of Moore's law brings the singularity ever closer."
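Kurzweil's genome arithmetic can be checked directly. A minimal sketch of the reasoning, assuming an illustrative annual doubling of progress (the article gives only the 1%-at-year-seven data point, not the rate):

```python
import math

# Kurzweil's example: after 7 years, the genome was 1% sequenced.
# Assume progress doubles every year (an illustrative rate, not a
# figure from the article).
fraction_done = 0.01
years_elapsed = 7

# Doublings needed to go from 1% to at least 100%.
doublings_needed = math.ceil(math.log2(1.0 / fraction_done))
finish_year = years_elapsed + doublings_needed

print(doublings_needed, finish_year)  # 7 more doublings, done by year 14
```

Seven more annual doublings take 1% past 100% by year 14, inside the 15-year schedule. That is the rhetorical force of the example: linear extrapolation from the early pace would have suggested centuries.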
Judgment day 
Belief in an approaching singularity is not solely American. Peter Cochrane, the former head of BT's research labs, says for machines to outsmart humans it "depends on almost one factor alone - the number of networked sensors. Intelligence is more to do with sensory ability than memory and computing power." The internet, he adds, overtook the capacity of a single human brain in 2006. "I reckon we're looking at the 2020 timeframe for a significant machine intelligence to emerge." And, he said: "By 2030 it really should be game over."

Predictions like this flew at the summit. Imagine when a human-scale brain costs $1 - you could have a pocket full of them. The web will wake up, like Gaia. Nova Spivack, founder of EarthWeb and, more recently, Radar Networks (creator of Twine.com), quoted Freeman Dyson: "God is what mind becomes when it has passed beyond the scale of our comprehension."

Listening, you'd never guess that artificial intelligence has been about 20 years away for a long time now. John McCarthy, one of AI's fathers, thought when he convened the first conference on the subject in 1956 that they'd be able to wrap the whole thing up in six months. McCarthy calls the singularity, bluntly, "nonsense".

Even so, there are many current technologies, such as speech recognition, machine translation, and IBM's human-beating chess grandmaster Deep Blue, that would have seemed like AI at the beginning. "It's incredible how intelligent a human being in front of a connected computer is," observed the CNBC reporter Bob Pisani, marvelling at how clever Google makes him sound to viewers phoning in. Such advances are reminders that there may be valuable discoveries that make attempts at even the wildest ideas worthwhile.

Dharmendra Modha, head of the cognitive computing group at IBM's Almaden research lab, is leading a "quest" to "understand and build a brain as cheaply and quickly as possible". Last year, his group succeeded in simulating a rat-scale cortical model - 55m neurons, 442bn synapses - in the 8TB memory of a 32,768-processor IBM Blue Gene supercomputer. The key, he says, is not the neurons but the synapses, the electrical-chemical-electrical connections between those neurons. Biological microcircuits are roughly the same in all mammals. "An individual human being is stored in the strength of the synapses."
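The scale of Modha's simulation can be put in perspective with some back-of-the-envelope arithmetic. A sketch using the article's figures (the per-synapse byte count assumes binary terabytes and that all 8TB held the model; both are assumptions, not numbers from IBM):

```python
# Figures from the article: rat-scale cortical model on Blue Gene.
neurons = 55e6            # 55 million neurons
synapses = 442e9          # 442 billion synapses
memory_bytes = 8 * 2**40  # 8TB of supercomputer memory (binary TB assumed)

synapses_per_neuron = synapses / neurons     # roughly 8,000
bytes_per_synapse = memory_bytes / synapses  # roughly 20 bytes

print(f"{synapses_per_neuron:.0f} synapses/neuron, "
      f"{bytes_per_synapse:.1f} bytes/synapse")
```

The synapses outnumber the neurons by about 8,000 to one, so they dominate the memory budget, which is consistent with Modha's point that the synapses, not the neurons, are the key.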
Smarter than smart
Modha doesn't suggest that the team has made a rat brain. "Philosophically," he writes on the subject, "any simulation is always an approximation (a kind of 'cartoon') based on certain assumptions. A biophysically realistic simulation is not the focus of our work." His team is using the simulation to try to understand the brain's high-level computational principles.

But computational power is nothing without software. "Would the neural code that powers human reasoning run on a different substrate?" the sceptical science writer John Horgan asked Kurzweil, who replied: "The key to the singularity is amplifying intelligence. The prediction is that an entity that passes the Turing test and has emotional intelligence ... will convince us that it's conscious. But that's not a