Chapters in History Philosophy Answers
“May God have mercy on our souls”
1. What does Walter J. Ong think are the principal differences between oral and textual cultures?
Ong claims that the principal differences between oral and textual cultures are these: in an oral culture, one person can directly, in open discourse, refute and challenge the sayings of another person, while in textual cultures, even if you refute a text, it will still continue to say the same thing. In addition, writing influences our thought process: our thoughts are not just what naturally occurred to us, but also a result of the texts we read.
2. What is Plato's critique of writing in the Phaedrus?
Plato's critique of writing in the Phaedrus is that writing pretends to establish outside ourselves what can only exist inside our minds; that writing is inhumane; that it is manufactured; that it destroys memory; and that it weakens the mind.
3. What does Ong think the history of writing can teach us about the place of computing technology in current society?
The history of writing, in Ong's opinion, can teach us that computers will become as basic and fundamental a part of our lives as writing is, because the critique levelled against computers is the same one that was once levelled against writing, and yet today writing is an inseparable part of our lives.
4. What does the case of Jules Allix and his “snail telegraph” teach us about the history of telecommunications technology?
At the beginning of telecommunications (and not only at the beginning), scientists were trying to find the solution in the only things in the world known to be capable of communication: biological creatures.
5. Why are Allix, Digby, and others interested in learning from biological systems and phenomena as a path towards innovation in communication technology?
They are interested in learning about biology in order to innovate in communications because communication sets biological creatures apart from everything else, and they want to harness those unique biological features for progress. Before biology became a branch of science, all science was biology-oriented.
6. How did the author of the original 1632 article on Captain Vosterloch have the idea of the possibility of recording technologies (notwithstanding the fact that the recording sponge itself is a complete fabrication)?
The author of the 1632 pamphlet got the idea from the technology around him: it no longer seemed far-fetched for a sound-recording instrument to exist when books, which were recordings of words and even pictures, could be printed. It seemed like the logical next technology to be invented.
7. What lessons does Ada Lovelace think information scientists can learn from the study of silk manufacturing?
In addition to satisfying historical curiosity, they can learn the basics and principles of silk-manufacturing technology and how the old silk-manufacturing machines work; in general, the principles and fundamentals of such early machines.
8. Ada Lovelace says that the Analytical Engine she has invented with Charles Babbage is capable of “algebraic weaving”. What does she mean by this?
9. Why does Norbert Wiener think that in the 19th century the idea of the automaton was of a “glorified heat engine”?
Automata have been studied from different aspects in different eras. Because the conservation and the degradation of energy were the ruling principles of those days, the 19th-century idea of the automaton was of a “glorified heat engine”.
10. What is the difference between the “Greek” and the “magical” automaton, in Wiener's view?
It was unclear to me what was meant by “Greek” automata. Do “the clockwork music box” and the “glorified heat engine” fall under the category of the “Greek” automata? The “magic” automata were clear, like the Golem.
If what I suggested above is correct, I would say that Wiener does not see them as being different at all; rather, he suggests that both of them have been attempts “to produce a working simulacrum of a living organism.” Basically, automata try to imitate living things.
11. Wiener thinks that cybernetic automata are not part of some distant science-fiction future, but are already realized in, for example, thermostats and automatic gyrocompass ship-steering systems. What do these have in common with the AI systems of today? How do they differ?
“In such a theory, we deal with automata effectively coupled to the external world, not merely by their energy flow, their metabolism, but also by a flow of impressions, of incoming messages, and of the actions of outgoing messages. … The organs by which impressions are received are the equivalents of the human and animal sense organs. … The effectors may be electrical motors or solenoids or heating coils or other instruments of very diverse works. … The machines of which we are now speaking are not the dream of the sensationalist nor the hope of future time. They already exist as thermostats, automatic gyrocompass ship-steering systems …” (page 43 as it appears in the PDF)
Essentially, Wiener sees the technologies around him as ways of mimicking human functions, particularly divided into sensing and performing actions. He sees thermostats as “feeling” temperature, and ship-steering systems as able to find their way in space, thus “feeling” and “directing” themselves like a human being. Wiener does not address modern AI in the article.
12. Why does Wiener think it's easier to build learning machines than to build self-reproducing machines?
Unclear, but he mentions only at the end the idea that machines can/will be able to self-replicate, whereas machinery that can learn is far easier to accomplish. He gives the example of a machine that could play chess in a rigid fashion, always responding in the same way if prompted with the same stimuli, but that then takes some games off, re-analyses its moves, and learns what would have been ideal in those situations (hindsight). In the next game it will have learnt to be better. The self-replication is a statistical probability based on these “transducer” things that I don't understand.
13. What is the theory in the philosophy of mind that must be presupposed in order for Nick Bostrom's simulation argument to succeed?
Bostrom presupposes functionalism, or "substrate-independence": the thesis that a machine, in theory, can be conscious given a suitable set of programs. Bostrom assumes only a weak version of it: just that a machine is, in fact, capable of having subjective experiences.
14. Why does Bostrom think that the fraction of human-level civilizations that reach a post-human stage is very small?
This is proposition 1 out of 3 that Bostrom proposes; the second one is that the fraction of post-human civilizations that are interested in running simulations is very small, and the third is that most creatures of our kind are already living in a simulation. Bostrom's argument is that at least one of the propositions is true. He reached those 3 propositions by simple probability calculations and by assuming that the number of simulations that a post-human civilization will be able to run is extremely large.
15. Why does Susan Schneider think that extraterrestrials might be intelligent without being conscious?
Schneider sees the progression from biological intelligence to synthetic intelligence as inevitable. Therefore, intelligent extraterrestrials most probably evolved from biological life, but are now synthetic. Given that consciousness is something that needs to be deliberately engineered and does not develop independently, it is unlikely that any biological extraterrestrial species will engineer consciousness into its artificial intelligence.
16. What is “the Singularity”?
The Singularity is defined differently by different academics, but Chalmers takes the approach of a moderate intelligence explosion, in which machines become better at designing machines than humans are, leading to an endless improvement in which each machine designs a machine better than itself, whether or not it is accompanied by a speed explosion, which describes the doubling of processing speed at regular intervals. In a sentence, it is the point at which AI overtakes human intelligence.
17. Does Dave Chalmers think the Singularity is likely? Why or why not?
Yes. Although Chalmers is more conservative as to when the Singularity will occur, he believes it is not a question of if, but when. He argues that there will be true AI soon enough (evolution developed intelligence, so surely humans can build it too). Since the methods for producing AI are extensible, AI will extend itself to become AI+, which would, in turn, be better than we are at designing machines, leading to the Singularity. In addition, he refutes the possibility of any structural, correlational or manifestational obstacles hindering the development of AI to the extent that true AI is never generated. These obstacles (particularly situational ones, like disasters and limited resources) may delay the Singularity, but will not prevent it.
18. Does Chalmers think “self-uploading” is likely? Why or why not?
The answer is yes. Chalmers thinks that “self-uploading” is likely (under numerous premises). He believes that in the case of gradual uploading there is a chance that the original system (a human and its consciousness) survives (paragraph 2, page 45). At the same time, he argues that there is no difference between instant uploading and gradual uploading, since as the technology improves, gradual uploading can be accelerated to such a level that it will be indistinguishable from instant uploading (the last paragraph of page 45 and the first paragraph of page 46). He himself says: “Still, I am confident that the safest form of uploading is gradual uploading, and I am reasonably confident that gradual uploading is a form of survival. So if at some point in the future I am faced with the choice between uploading and continuing in an increasingly slow biological embodiment, then as long as I have the option of gradual uploading, I will be happy to do so.” (page 47, first paragraph)
19. What is “the Uncanny Valley”?
The uncanny valley ("uncanny" pointing to strange familiarity or strangeness; the "valley" is the dip a minimum point makes between two maximums on a function graph) is the relationship between a robot's human-like appearance or behaviour (the degree of similarity) and how we, as humans, feel about it, i.e. the emotions it provokes. In this specific theory ("uncanny valley"), robots become more appealing the more human they are, but only up to a certain degree. If they are highly realistic, yet not real enough, they end up evoking a sense of unease, and we would possibly find them creepy and repulsive. This feeling is the "valley"; but as the robot becomes less distinguishable from humans, the feeling of repulsion starts to ebb away, and the positive feeling returns. "This area of repulsive response aroused by a robot with appearance and motion between a 'barely human' and 'fully human' entity is the uncanny valley."
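Mori's curve is qualitative, so the shape described above can only be illustrated, not computed from real data; here is a toy sketch in which every constant is my own invented assumption, chosen only to reproduce the rise–dip–rise shape:

```python
import math

def affinity(human_likeness):
    """Toy affinity score for human_likeness in [0, 1]: rises overall,
    but dips sharply when a robot is almost, but not quite, human."""
    rising = human_likeness  # general upward trend with human-likeness
    # Gaussian dip centred near "almost human" (centre and width are made up)
    valley = 0.9 * math.exp(-((human_likeness - 0.85) / 0.07) ** 2)
    return rising - valley

# The "valley": an almost-human robot is rated below a clearly robotic one,
# while a fully human-like one recovers the positive feeling.
assert affinity(0.85) < affinity(0.5) < affinity(1.0)
```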
20. Why does Daniel Dennett think that AI designers are engaging in “false advertising”?
When Dennett mentions “false advertising”, it is in relation to the human-like qualities and quirks that AI designers add to their machines. These “false advertisements” might make us believe that an advice-giving AI, for example, is an actual person, which would lead us to actually take its advice as the right solution to problems (possibly in life-or-death situations). Making machines more humanoid makes us trust them more; however, that doesn't mean that the system actually has the right judgement, is morally correct, or thinks or answers like a human would, as the inner operations of these machines are unfathomable. On page 3: “No matter how scrupulously the AI designers launder the phony “human” touches out of their wares, we can expect a flourishing of shortcuts, workarounds and tolerated distortions of the actual “comprehension” of both the systems and their operators.” On page 4: “artificial conscious agents is that, however autonomous they might become (and in principle they can be as autonomous, as self-enhancing or self-creating, as any person), they would not—without special provision, which might be waived—share with us natural conscious agents our vulnerability or our mortality.”
21. What is the difference between “celestial” and “organic” ethics for Regina Rini?
Celestial ethics are ethics taken from the point of view of "objectivity" or "how the universe sees it", and are not inherent to those wishing to act ethically; basically, if animals were capable of resisting impulses and acting rationally, they'd be expected to act as ethically as humans. Organic ethics are built into the actor that is performing them, and we must strive to develop abilities already in our nature.
22. Why does Rini think that a machine's ability to beat a human being at Go could have troubling ethical implications?
The ability of AlphaGo to beat a human despite making moves that no human who was watching could understand highlights an important difference between the way humans and AI can and do see things and explain or rationalise them. This is important because if AIs were left to develop and machine-learn ethics and morals, we wouldn't understand the conclusions they reached and wouldn't be able to comprehend them. So we'd either treat them as G-ds and do as they say, or, more likely, ignore their ethical advice because it is too different from our current positions; in which case, why bother letting them develop positions?
23. Is Rini's comparison of AI systems to human teenagers a good one? Why or why not?
This is hard to answer in summary format; it's just an opinion. Her whole article leads to the conclusion that we should treat them as teenagers. I'll just summarise why: we can't create robots with morality that we can understand, and/or we can't justify forcing robots to follow our morals, so we should educate them as we see fit, but be willing to accept them growing up and becoming their own thing, with opinions we might not like.
24. Why do Basl and Schwitzgebel think AI systems are deserving of ethical protection?
25. What is the name Norbert Wiener uses for the study of feedback loops in living and artificial systems?
A) Metaphysics
B) Cybernetics
C) Epistemology
D) Artificial Intelligence
26. What theory in the philosophy of mind does Nick Bostrom presuppose in the course of making his argument for the simulation hypothesis?
A) Biologism
B) Dualism
C) Functionalism
D) Eliminative materialism
27. Which is an example of a cybernetic system for Wiener?
A) A living body
B) A thermostat
C) A computer
D) All of the above
28. Which of the following ethical thought experiments has been discussed the most by engineers working on the development of self-driving cars?
A) The moral machine experiment
B) The ring of Gyges
C) The tunnel problem
D) The trolley problem
29. Which of the following machines was an important influence, according to Ada Lovelace, in her work with Charles Babbage on the Analytical Engine?
A) The Antikythera mechanism
B) Da Vinci's ornithopter
C) Jacquard's punched-card loom
D) The Tesla coil
30. Who wrote the following passage?
Only a small percentage of human mental processing is accessible to the conscious mind. Consciousness is correlated with novel learning tasks that require attention and focus. A superintelligence would possess expert-level knowledge in every domain, with rapid-fire computations ranging over vast databases that could include the entire Internet and ultimately encompass an entire galaxy. What would be novel to it? What would require slow, deliberative focus? Wouldn’t it have mastered everything already? Like an experienced driver on a familiar road, it could rely on nonconscious processing. The simple consideration of efficiency suggests, depressingly, that the most intelligent systems will not be conscious. On cosmological scales, consciousness may be a blip, a momentary flowering of experience before the universe reverts to mindlessness.
A) Daniel Dennett
B) G. W. Leibniz
C) Susan Schneider
D) Ada Lovelace
List of all articles:
Number  Author  Article  Questions
2  Justin  Internet of Snails  4, 5
· Classic issues of programming machines to “win wars” or “do good” – it's impossible to define these things properly.
· Self-replication must include replicating the functionality, not just the matter.
· Transducers: output determined by past inputs, invariant with respect to translation in time.
· He explains something about the statistical probability of certain machines/functions reproducing themselves because they are transducers, thereby showing that machine self-replication can happen.
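The "transducer" property can be made concrete with a toy example (my own illustration, not Wiener's notation): a system whose output is a fixed function of past inputs, so that shifting the input in time simply shifts the output.

```python
def transduce(inputs, window=3):
    """Toy transducer: the output at time t is the mean of the last
    `window` inputs up to t, so it depends only on past inputs."""
    out = []
    for t in range(len(inputs)):
        past = inputs[max(0, t - window + 1): t + 1]
        out.append(sum(past) / len(past))
    return out

signal = [0, 0, 1, 1, 0, 0]
delayed = [0] + signal  # the same signal, arriving one step later

# Invariance with respect to translation in time:
# delaying the input just delays the output.
assert transduce(delayed)[1:] == transduce(signal)
```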
Summary of Nick Bostrom: Are You Living in a Computer Simulation?
Because in the future our descendants could simulate high-IQ sentient beings, we are probably someone's ancestor-simulations. Basically, if in theory we'll be able to simulate our ancestors, then we have to take seriously the possibility that we are ourselves someone's simulated ancestors.
Substrate independence: provided a system implements the right sort of computational structures and processes, it can be associated with conscious experiences.
The argument we shall present does not, however, depend on any very strong version of functionalism or computationalism. For example, we need not assume that the thesis of substrate-independence is necessarily true (either analytically or metaphysically) – just that, in fact, a computer running a suitable program would be conscious.
The simulation argument works equally well for those who think that it will take hundreds of thousands of years to reach a “posthuman” stage of civilization, where humankind has acquired most of the technological capabilities that one can currently show to be consistent with physical laws and with material and energy constraints.
One estimate, based on how computationally expensive it is to replicate the functionality of a piece of nervous tissue that we have already understood and whose functionality has been replicated in silico (contrast enhancement in the retina), yields a figure of ~10^14 operations per second for the entire human brain. An alternative estimate, based on the number of synapses in the brain and their firing frequency, gives a figure of ~10^16–10^17 operations per second.
Moreover, since the maximum human sensory bandwidth is ~10^8 bits per second, simulating all sensory events incurs a negligible cost.
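A quick back-of-the-envelope comparison of the figures quoted above (the estimates are Bostrom's; the comparison itself and the variable names are mine):

```python
brain_ops_low = 1e14   # ops/s, from the retina-replication estimate
brain_ops_high = 1e17  # ops/s, from the synapse-count estimate
sensory_bits = 1e8     # bits/s, maximum human sensory bandwidth

# Sensory input is 6 to 9 orders of magnitude below the cost of simulating
# the brain itself, hence "negligible cost":
print(sensory_bits / brain_ops_low)   # 1e-06
print(sensory_bits / brain_ops_high)  # 1e-09
```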
Posthuman civilizations would have enough computing power to run hugely many ancestor-simulations even while using only a tiny fraction of their resources for that purpose.
This isn't the Doomsday Argument, because the Doomsday Argument rests on a much stronger and more controversial premiss, namely that one should reason as if one were a random sample from the set of all people who will ever have lived (past, present, and future) even though we know that we are living in the early twenty-first century rather than at some point in the distant past or the future. The bland indifference principle, by contrast, applies only to cases where we have no information about which group of people we belong to.
There are many ways in which humanity could become extinct before reaching posthumanity. Perhaps the most natural interpretation of (1) is that we are likely to go extinct as a result of the development of some powerful but dangerous technology. One candidate is molecular nanotechnology, which in its mature stage would enable the construction of self-replicating nanobots capable of feeding on dirt and organic matter – a kind of mechanical bacteria. Such nanobots, designed for malicious ends, could cause the extinction of all life on our planet.
The second alternative in the simulation argument's conclusion is that the fraction of posthuman civilizations that are interested in running ancestor-simulations is negligibly small. In order for (2) to be true, there must be a strong convergence among the courses of advanced civilizations. If the number of ancestor-simulations created by the interested civilizations is extremely large, the rarity of such civilizations must be correspondingly extreme. Virtually no posthuman civilizations decide to use their resources to run large numbers of ancestor-simulations.
There could be layers of simulations within simulations. Although all the elements of such a system can be naturalistic, even physical, it is possible to draw some loose analogies with religious conceptions of the world. In some ways, the posthumans running a simulation are like gods in relation to the people inhabiting the simulation: the posthumans created the world we see; they are of superior intelligence; they are “omnipotent” in the sense that they can interfere in the workings of our world even in ways that violate its physical laws; and they are “omniscient” in the sense that they can monitor everything that happens. However, all the demigods except those at the fundamental level of reality are subject to sanctions by the more powerful gods living at lower levels.
Supposing we live in a simulation, what are the implications for us humans? The foregoing remarks notwithstanding, the implications are not all that radical. Our best guide to how our posthuman creators have chosen to set up our world is the standard empirical study of the universe we see. The revisions to most parts of our belief networks would be rather slight and subtle – in proportion to our lack of confidence in our ability to understand the ways of posthumans. Properly understood, therefore, the truth of (3) should have no tendency to make us “go crazy” or to prevent us from going about our business and making plans and predictions for tomorrow. The chief empirical importance of (3) at the current time seems to lie in its role in the tripartite conclusion established above. We may hope that (3) is true since that would decrease the probability of (1), although if computational constraints make it likely that simulators would terminate a simulation before it reaches a posthuman level, then our best hope would be that (2) is true.
A technologically mature “posthuman” civilization would have enormous computing power. Based on this empirical fact, the simulation argument shows that at least one of the following propositions is true: (1) the fraction of human-level civilizations that reach a posthuman stage is very close to zero; (2) the fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero; (3) the fraction of all people with our kind of experiences that are living in a simulation is very close to one. If (1) is true, then we will almost certainly go extinct before reaching posthumanity. If (2) is true, then there must be a strong convergence among the courses of advanced civilizations so that virtually none contain any relatively wealthy individuals who desire to run ancestor-simulations and are free to do so. If (3) is true, then we almost certainly live in a simulation. In the dark forest of our current ignorance, it seems sensible to apportion one's credence roughly evenly between (1), (2), and (3).
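The "simple probability calculation" behind the trilemma can be sketched in a few lines (an illustration under my own naming; the bookkeeping follows Bostrom's idea that simulated observers swamp real ones once simulations are plentiful):

```python
def fraction_simulated(f_p, n_sims, observers_per_civ=1.0):
    """Fraction of all observers with human-type experiences who are simulated.

    f_p               -- fraction of human-level civilizations reaching posthumanity
    n_sims            -- average number of ancestor-simulations each one runs
    observers_per_civ -- average observers per pre-posthuman civilization
    """
    simulated = f_p * n_sims * observers_per_civ
    real = observers_per_civ
    return simulated / (simulated + real)

# Even if only 0.1% of civilizations reach posthumanity, a million simulations
# each makes almost every observer a simulated one:
print(fraction_simulated(f_p=0.001, n_sims=1_000_000))  # ≈ 0.999
```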
Unless we are now living in a simulation, our descendants will almost certainly never run an ancestor-simulation.
Summary of Susan Schneider: It May Not Feel Like Anything To Be an Alien
Humans are (probably) not the most intelligent species in the universe, and soon will not be the most intelligent even on Earth: we are already being, and soon will be completely, overtaken by synthetic intelligence. Therefore, in all likelihood, superhuman alien intelligence is postbiological. Postbiological intelligence can include biological (and not artificial) minds that have technological enhancements. Technological development entirely outpaces biological evolution, is endlessly improvable, can be backed up and stored in multiple locations, and is much better at surviving than a biological creature.
Ray Kurzweil: humanity merging with machines to form a techno-topia.
Hawking, Gates, Musk: AI will rewrite itself and we will lose control of it.
A machine clever enough will simply re-program whatever morality and kill-switches we program into it. Therefore, AI aliens are even more troublesome than biological ones, and we should be careful when actively signaling and drawing alien attention. We need to reach our own singularity before we start looking for AI aliens.
Consciousness is the parameter by which to judge whether something is a self and has an inner life, and is therefore relatable, as opposed to being an automaton. Thus, the question of whether or not alien AI has consciousness could influence how it relates to us. If it relates to us on a shared-consciousness level, that would be good. But because consciousness is also subjective, it could be so super-conscious that it sees our consciousness in a manner similar to how we perceive the consciousness of an apple.
There is current debate and research over artificial consciousness, with tests being conducted on silicon-based brain chips. But consciousness is seemingly something that does not just form; it has to be engineered into the AI. In fact, unconscious AI is preferable (it avoids moral questions of enslaving robots, etc.), so who, or what, would decide to engineer conscious AI?
“Soon, humans will no longer be the measure of intelligence on Earth. And perhaps already, elsewhere in the cosmos, superintelligent AI, not biological life, has reached the highest intellectual plateaus. But perhaps biological life is distinctive in another significant respect—conscious experience. For all we know, sentient AI will require a deliberate engineering effort by a benevolent species, seeking to create machines that feel. Perhaps a benevolent species will see fit to create their own AI mind-children. Or perhaps future humans will engage in some consciousness engineering, and send sentience to the stars.”
Summary of David Chalmers: The Singularity
“What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to ever-greater levels of intelligence, as each generation of machines creates more intelligent machines in turn. This intelligence explosion is now often known as the “singularity”.”
The singularity is defined differently by different academics, but Chalmers takes the approach of a moderate intelligence explosion, in which machines become better at designing machines than humans are, leading to an endless improvement in which each machine designs a machine better than itself, whether or not it is accompanied by a speed explosion, which describes the doubling of processing speed at regular intervals.
The Singularity: Is It Likely?
Chalmers focuses on the "intelligence explosion" kind of singularity, and his first project is to formalize and defend I. J. Good's 1965 argument. Defining AI as being "of human level intelligence," AI+ as AI "of greater than human level" and AI++ as "AI of far greater than human level" (superintelligence), Chalmers updates Good's argument to the following:
1) There will be AI (before long, absent defeaters).
2) If there is AI, there will be AI+ (soon after, absent defeaters).
3) If there is AI+, there will be AI++ (soon after, absent defeaters).
Therefore, there will be AI++ (before too long, absent defeaters).
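Logically, the numbered argument is just two applications of modus ponens, which can be checked mechanically; a minimal sketch in Lean (the proposition names are mine, and the temporal qualifiers are elided):

```lean
-- Chalmers' reconstruction of Good's argument as bare propositional logic.
-- `AI`, `AIPlus`, `AIPlusPlus` abbreviate "there will be AI / AI+ / AI++
-- (absent defeaters)".
theorem good_argument (AI AIPlus AIPlusPlus : Prop)
    (h1 : AI)
    (h2 : AI → AIPlus)
    (h3 : AIPlus → AIPlusPlus) :
    AIPlusPlus :=
  h3 (h2 h1)
```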
By "defeaters," Chalmers means global catastrophes like nuclear war or a major asteroid impact.
Chalmers is more conservative about predicting when true AI will occur, giving a 50% chance of it happening before 2100, and claiming that the true bottleneck is not hardware but rather software: our algorithms are not good enough yet.
One way to satisfy premise (1) is to achieve AI through brain emulation (Sandberg & Bostrom, 2008). Against this suggestion, Lucas (1961), Dreyfus (1972), and Penrose (1994) argue that human cognition is not the sort of thing that could be emulated. Chalmers (1995; 1996, chapter 9) has responded to these criticisms at length. Briefly, Chalmers notes that even if the brain is not a rule-following algorithmic symbol system, we can still emulate it if it is mechanical. (Some say the brain is not mechanical, but Chalmers dismisses this as being discordant with the evidence.)
Searle (1980) and Block (1981) argue instead that even if we can emulate the human brain, it doesn't follow that the emulation is intelligent or has a mind. Chalmers says we can set these concerns aside by stipulating that when discussing the singularity, AI need only be measured in terms of behaviour. The conclusion that there will be AI++, at least in this sense, would still be massively important.
Another consideration in favour of premise (1) is that evolution produced human-level intelligence, so we should be able to build it, too. Perhaps we will even achieve human-level AI by evolving a population of dumber AIs through variation and selection in virtual worlds. We might also achieve human-level AI by direct programming or, more likely, by systems of machine learning.
Premise (2) is plausible because AI will probably be produced by an extendible method, and so extending that method will yield AI+. Brain emulation might turn out not to be extendible, but the other methods are. Even if human-level AI is first created by a non-extendible method, this method itself would soon lead to an extendible method, and in turn enable AI+. AI+ could also be achieved by direct brain enhancement. Thus, he refutes the claim that intelligence has peaked.
Premise (3) is the amplification argument from Good: an AI+ would be better than we are at designing intelligent machines, and could thus improve its own intelligence. Having done that, it would be even better at improving its intelligence. And so on, in a rapid explosion of intelligence. He also notes that the fundamental assumption is the measurability of intelligence, and that an intelligent AI has the ability to create an even more intelligent AI.
In section 3 of his paper, Chalmers argues that there could be an intelligence explosion without there being such a thing as "general intelligence" that could be measured, despite the fact that the premises of section 2 rest on the assumption of general intelligence that can be measured.
In section 4, Chalmers lists several possible obstacles to the singularity: 1) structural obstacles, such as limits in intelligence space, failure to take off, and diminishing returns; 2) correlation obstacles, the possibility that an increase in intelligence will not lead to an ability to develop even more intelligent systems; 3) manifestation obstacles, such as motivational defeaters and situational defeaters (disasters and resource limitations). Chalmers believes that the most likely are motivational defeaters, and he addresses the rest briefly (as to why they are not true obstacles), but his argument is mainly his personal analysis.
Constraining AI
Next, Chalmers considers how we might design an AI+ that helps to create a desirable future and not a horrifying one. If we achieve AI+ by extending the method of human brain emulation, the AI+ will at least begin with something like our values. Directly programming friendly values into an AI+ (Yudkowsky, 2004) might also be feasible, though an AI+ arrived at by evolutionary algorithms is worrying.
Human-based AI (brain emulation etc.) is less dangerous, but non-human-based AI could come first, and this would require careful programming and design to ensure that it has desires and values that are beneficial to humans.
“So far, my discussion has largely assumed that intelligence and value are independent of each other. In philosophy, David Hume advocated a view on which value is independent of rationality: a system might be as intelligent and as rational as one likes, while still having arbitrary values. By contrast, Immanuel Kant advocated a view on which values are not independent of rationality: some values are more rational than others.”
Most of this assumes that values are independent of intelligence, as Hume argued. But if Hume was wrong and Kant was right, then we will be less able to constrain the values of a superintelligent machine; on the other hand, the more rational the machine is, the better values it will have.
Another way to constrain an AI is not internal but external. For example, we could lock it in a virtual world from which it could not escape, and in this way create a leak-proof singularity. But there is a problem. For the AI to be of use to us, some information must leak out of the virtual world for us to observe it. But then the singularity is not leak-proof. And if the AI can communicate with us, it could reverse-engineer human psychology from within its virtual world and persuade us to let it out of its box – into the internet, for example.
Therefore, a leak-proof singularity would also require the prevention of information leaking in. This, however, would hinder the performance and functionality of the AI. An alternative would be to design a virtual world with very simple physics and implement the AI separately, without giving it the ability to access its own processes. We could then study the AI very carefully, and only once we decide that it is entirely benevolent, slowly let it out into the world.
Our Place in a Post-Singularity World
Chalmers says there are four options for us in a post-singularity world: extinction, isolation, inferiority, and integration.
The first option is undesirable. The second option would keep us isolated from the AI, a kind of technological isolationism in which one world is blind to progress in the other. The third option may be infeasible because an AI++ would operate so much faster than us that inferiority is only a blink of time on the way to extinction.
For the fourth option to work, we would need to become super-intelligent machines ourselves. One path to this might be mind-uploading, which comes in several varieties and has implications for our notions of consciousness and personal identity that Chalmers discusses. Chalmers prefers gradual uploading (slowly replacing the brain through nano-transfer, as each part in turn learns to replicate the brain function), and considers it a form of survival. He also suggests what he calls non-destructive uploading, but there is no technology for this on the horizon.
The question of surviving an upload is divided into the questions of whether the uploaded self will be conscious, and whether it will retain the personal identity of the original 'owner' of the biological brain. The first part is almost impossible to answer: similarly, we are entirely able to describe every part of a mouse and how it lives and behaves, but we have no idea what it feels like to be a mouse. Moreover, we have no idea how a biological brain is conscious; thus Chalmers argues that a non-biological brain could be conscious too. Gradual uploading is also, potentially, the most effective way of preserving consciousness. He also mentions the challenge of convincing people that they will remain conscious post-upload, but expects that eventually it will catch on.
In terms of personal identity, Chalmers is undecided, but leans toward a view that considers the psychological continuity (as opposed to the physical, biological continuity) of a person as the prevailing indicator of the survival of that individual.
The pessimistic view of survival in uploading takes the following approach:
1. In non-destructive uploading, DigiDave is not identical to Dave.
2. If in non-destructive uploading DigiDave is not identical to Dave, then in destructive uploading, DigiDave is not identical to Dave.
3. Therefore, in destructive uploading, DigiDave is not identical to Dave.
In addition, Chalmers believes that if, in gradual uploading, a person retains consciousness and personal identity, then in instant uploading they should do the same. He also raises the possibility of post-mortem uploading, either through cryonic brain-preservation or through reconstruction.
“The further-fact view is the view that there are facts about survival that are left open by knowledge of physical and mental facts”, which Chalmers believes could be true; and if it is true, then the facts about destructive and non-destructive uploading are unclear, and it follows that the optimistic view can be adopted with good reason. However, he recognises that the further-fact view could be untrue, which could mean that the deflationary view would be true (which holds that our attempts to settle open questions about survival tacitly presuppose facts about survival that do not exist). “If a deflationary view is correct, I think that questions about survival come down to questions about the value of certain sorts of futures: should we care about them in the way in which we care about futures in which we survive? I do not know whether such questions have objective answers. But I am inclined to think that insofar as there are any conditions that deliver what we care about, continuity of consciousness suffices for much of the right sort of value. Causal and psychological continuity may also suffice for a reasonable amount of the right sort of value. If so, then destructive and reconstructive uploading may be reasonably close to as good as ordinary survival. What about hard cases, such as non-destructive gradual uploading or split-brain cases, in which one stream of consciousness splits into two? On a deflationary view, the answer will depend on how one values or should value these futures. At least given our current value scheme, there is a case that physical and biological continuity counts for some extra value, in which case BioDave might have more right to be counted as Dave than DigiDave. But it is not out of the question that this value scheme should be revised, or that it will be revised in the future, so that BioDave and DigiDave will be counted equally as Dave. In any case, I think that on a deflationary view gradual uploading is close to as good as ordinary non-Edenic survival. And destructive, non-destructive, and reconstructive uploading are reasonably close to as good as ordinary survival. Ordinary survival is not so bad, so one can see this as an optimistic conclusion.”
Conclusion
Chalmers concludes:
“Will there be a singularity? I think that it is certainly not out of the question, and that the main obstacles are likely to be obstacles of motivation rather than obstacles of capacity.
How should we negotiate the singularity? Very carefully, by building appropriate values into machines, and by building the first AI and AI+ systems in virtual worlds.
How can we integrate into a post-singularity world? By gradual uploading followed by enhancement if we are still around then, and by reconstructive uploading followed by enhancement if we are not.”
Summary of Regina Rini: Raising Good Robots
My understanding of the breakdown is as follows: celestial ethics are ethics taken from the point of view of "objectivity" or "how the universe sees it", and are therefore in no way inherent to those wishing to act ethically as such. This leads us to the conclusion that if animals (or AI, of course) were capable of resisting their "flawed" impulses and acting in a rational way – as humans can and do – then they too would undoubtedly be expected to act as ethically as humans do. Leading purveyors of this view are, famously, Plato and Kant.
On the other hand, organic ethics are "built in" to the actor that intends to perform them. Therefore the approach to a moral lifestyle is more of a self-search, a finding of and growing closer to one's natural, intrinsic ethical wants. We thus must constantly strive to develop these abilities, as opposed to searching for what they are in the cosmos. This view is famously held by Aristotle, Hume and Darwin.
Rini of course challenges both of these classical approaches as viable options for AI, on the following grounds: AlphaGo beat a human Go master, and whilst doing so performed moves that no human who was watching could understand. For Rini, this highlights an important difference between the way humans and AI can and do see the facts of the world, and therefore the way they explain and/or rationalise them. This is important because if AI were left to develop and go on to learn (of its own accord) ethics and morals, we could not and would not understand the conclusions they reached. This is bad enough, but coupled with our complete lack of comprehension of the plane in which these machines are acting and thinking, it would leave us with only two plausible paths of action – neither of which is good. The first option is that we'd effectively treat them as G-ds and do as they say – committing ourselves to their super-developed moral codes – even at a huge cost to our "humanity". The second – and more likely – option is that humanity would force itself to ignore the machine-produced ethical advice because it is too different from our current positions. In either case, why bother letting them develop positions?
The problem with organic morals, in Rini's view, is that however much we try to make the AI similar to us, by definition it will be different from us (otherwise it would just be us). They will, however, be as close to thinking, sentient beings as possible, and therefore there are many ethical hurdles in the way of us just taking advantage of this other "humanoid-style" being to serve us forever more. Cue the Robot Civil Rights Movement.
She concludes, therefore, that we should educate these AI beings in a way that we see as ethically fit. However, she contends that it is imperative that we be willing to accept them "growing up" and becoming their own thing, inevitably with the possibility that they will hold moral and ethical opinions that we might not like.