
Chapters in History & Philosophy - Course 32600


“May God have mercy on our souls”
 ‌
1. What does Walter J. Ong think are the principal differences between oral and textual cultures?
Ong claims that the principal differences between oral and textual cultures are these: in an oral culture one person can directly challenge and refute another person's sayings in open discourse, while in a textual culture, even if you refute a text, it still goes on saying the same thing. In addition, writing shapes our thought processes: our thoughts are not just what naturally occurs to us, but also a product of the texts we have read.
 ‌
2. What is Plato's critique of writing in the Phaedrus?
Plato's critique of writing in the Phaedrus is that writing pretends to establish outside our minds what can only really exist inside them; that writing is inhumane and manufactured; that it destroys memory; and that it weakens the mind.
 ‌
3. What does Ong think the history of writing can teach us about the place of computing technology in current society?
In Ong's view, the history of writing teaches us that computers will become as basic and fundamental a part of our lives as writing is, because the critiques levelled against computers are the same ones that were levelled against writing - and yet today writing is an inseparable part of our lives.
 ‌
4. What does the case of Jules Allix and his “snail telegraph” teach us about the history of telecommunications technology?
At the beginning of telecommunications (and not only at the beginning), scientists were looking for solutions in the only things in the world then known to be capable of communication - biological creatures.
 ‌
5. Why are Allix, Digby, and others interested in learning from biological systems and phenomena as a path towards innovation in communication technology?
They are interested in learning from biology in order to innovate in communications because communication is what sets biological creatures apart from everything else, and they want to harness those unique biological features for progress. Moreover, before biology became a separate branch of science, science as a whole was largely oriented around living things.
 ‌
6. How did the author of the original 1632 article on Captain Vosterloch have the idea of the possibility of recording technologies (notwithstanding the fact that the recording sponge itself is a complete fabrication)?
The author of the 1632 pamphlet got the idea from the technology around him: once books could be printed - recordings of words and even of pictures - it no longer seemed far-fetched for a sound-recording instrument to exist. It seemed like the logical next technology to be invented.
 ‌
7. What lessons does Ada Lovelace think information scientists can learn from the study of silk manufacturing?
Beyond historical curiosity, they can learn the basic principles of silk-manufacturing technology and how those early machines work - in general, the principles and basics of such early machines.
 ‌
8. Ada Lovelace says that the Analytical Engine she has invented with Charles Babbage is capable of “algebraic weaving”. What does she mean by this?
She means that the Analytical Engine composes algebraic patterns in the same way that Jacquard's punched-card loom weaves flowers and leaves: punched cards arrange the operations the Engine performs on symbols, so general algebraic results can be “woven” mechanically (see also question 29).
9. Why does Norbert Wiener think that in the 19th century the idea of the automaton was of a “glorified heat engine”?
Automata have been studied from different angles in different eras. Because the conservation and degradation of energy were the ruling principles of that period, the 19th-century idea of the automaton was of a “glorified heat engine”.
 ‌
10. What is the difference between the “Greek” and the “magical” automaton, in Wiener's view?
It was unclear to me what was meant by “Greek” automata - do “the clockwork music box” and the “glorified heat engine” fall under that category? The “magical” automata were clear, like the Golem.
If what I suggested above is correct, I would say that Wiener does not see them as fundamentally different at all; rather, he suggests that both have been attempts “to produce a working simulacrum of a living organism.” Basically, automata try to imitate living things.
 ‌
11. Wiener thinks that cybernetic automata are not part of some distant science-fiction future, but are already realized in, for example, thermostats and automatic gyrocompass ship-steering systems. What do these have in common with the AI systems of today? How do they differ?
“In such a theory, we deal with automata effectively coupled to the external world, not merely by their energy flow, their metabolism, but also by a flow of impressions, of incoming messages, and of the actions of outgoing messages. … The organs by which impressions are received are the equivalents of the human and animal sense organs. … The effectors may be electrical motors or solenoids or heating coils or other instruments of very diverse works. … The machines of which we are now speaking are not the dream of the sensationalist nor the hope of future time. They already exist as thermostats, automatic gyrocompass ship-steering systems…” (page 43 as it appears in the PDF)
Essentially, Wiener sees the technologies around him as ways of mimicking human functions, divided in particular into sensing and acting. He sees thermostats as “feeling” temperature, and ship-steering systems as finding their way in space - thus “feeling” and “directing” themselves like a human being. Wiener does not address modern AI in the article.
 ‌
12. Why does Wiener think it's easier to build learning machines than to build self-reproducing machines?
It is somewhat unclear: he only mentions at the end the idea that machines can, or will be able to, self-replicate, whereas machinery that can learn is far easier to accomplish. He gives the example of a machine that plays chess in a rigid fashion, always responding the same way to the same position, but that then takes time off between games, re-analyses its own moves, and learns with hindsight what would have been ideal in those situations; in the next game it will have learnt to play better. Self-replication, by contrast, he treats as a statistical matter involving these “transducer” things that I don't fully understand.
 ‌
13. What is the theory in the philosophy of mind that must be presupposed in order for Nick Bostrom's simulation argument to succeed?
Bostrom presupposes functionalism, in the form of the “substrate-independence” thesis - the claim that a machine could, in principle, be conscious given a suitable set of programs. He only needs a weak version of it: that a computer running the right program would in fact be capable of having subjective experiences.
 ‌
14. Why does Bostrom think that the fraction of human-level civilizations that reach a post-human stage is very small?
He does not claim this outright - it is proposition (1) of the three Bostrom proposes. The second is that the fraction of post-human civilizations interested in running simulations is very small, and the third is that most beings with our kind of experiences are already living in a simulation. Bostrom's argument is only that at least one of the three propositions is true. He reaches the three propositions through a simple probability calculation, together with the assumption that the number of simulations a post-human civilization would be able to run is extremely large.
 ‌
15. Why does Susan Schneider think that extraterrestrials might be intelligent without being conscious?
Schneider sees the progression from biological intelligence to synthetic intelligence as inevitable. Intelligent extraterrestrials therefore most probably evolved from biological life but are by now synthetic. Given that consciousness is something that needs to be deliberately engineered and does not develop on its own, it is unlikely that any biological extraterrestrial species would engineer consciousness into its artificial intelligence.
 ‌
16. What is “the Singularity”?
The Singularity is defined differently by different academics, but Chalmers takes the approach of a moderate intelligence explosion: machines become better at designing machines than humans are, leading to a runaway improvement in which each machine designs a machine better than itself - whether or not this is accompanied by a “speed explosion”, the doubling of processing speed at regular intervals. In a sentence, it is the point at which AI overtakes human intelligence.
 ‌
17. Does Dave Chalmers think the Singularity is likely? Why or why not?
Yes. Although Chalmers is conservative about when the Singularity will occur, he believes it is not a question of if but when. He argues that there will be true AI before long (evolution produced intelligence, so surely humans can build it too); since the methods for producing it are extensible, AI will be extended into AI+, which would in turn be better than we are at designing machines, leading to the Singularity. In addition, he rejects the idea that structural, correlational or manifestational obstacles would hinder the development of AI to the extent that true AI is never created. Such obstacles (particularly situational ones, like disasters and limited resources) may delay the Singularity, but will not prevent it.
 ‌
18. Does Chalmers think “self-uploading” is likely? Why or why not?
Yes - Chalmers thinks that “self-uploading” is likely (under numerous premises). He believes that in the case of gradual uploading there is a good chance that the original system (a human and their consciousness) survives (paragraph 2, page 45). At the same time he argues that there is no real difference between instant and gradual uploading, since as the technology improves, gradual uploading can be accelerated to the point where it is indistinguishable from instant uploading (last paragraph of page 45 and first paragraph of page 46). He himself says: “Still, I am confident that the safest form of uploading is gradual uploading, and I am reasonably confident that gradual uploading is a form of survival. So if at some point in the future I am faced with the choice between uploading and continuing in an increasingly slow biological embodiment, then as long as I have the option of gradual uploading, I will be happy to do so.” (page 47, first paragraph)
 ‌
19. What is “the Uncanny Valley”?
The uncanny valley (“uncanny” pointing to strange familiarity or strangeness; a “valley” being the dip a local minimum makes between two maxima on a graph) is the relationship between how human-like a robot's appearance or behaviour is and how we, as humans, feel about it - the emotions it provokes. On this theory, robots become more appealing the more human they are, but only up to a point. If they are highly realistic yet not quite real enough, they end up evoking a sense of unease, and we are likely to find them creepy and repulsive. This dip in our response is the “valley”; as the robot becomes still less distinguishable from a human, the feeling of repulsion ebbs away and the positive feeling returns. “This area of repulsive response aroused by a robot with appearance and motion between a 'barely human' and 'fully human' entity is the uncanny valley.”
 ‌
20. Why does Daniel Dennett think that AI designers are engaging in “false advertising”?
When Dennett mentions “false advertising”, it is in relation to the human-like qualities and quirks that AI designers add to their machines. These “false advertisements” might make us believe that, for example, an advice-giving AI is an actual person, which would lead us to take its advice as the right solution to our problems (possibly in life-or-death situations). Making systems more humanoid makes us trust them more, but that does not mean the system actually has sound judgement, is morally correct, or thinks and answers as a human would, since the inner operations of these machines are unfathomable. On page 3: “No matter how scrupulously the AI designers launder the phony 'human' touches out of their wares, we can expect a flourishing of shortcuts, workarounds and tolerated distortions of the actual 'comprehension' of both the systems and their operators.” On page 4: “artificial conscious agents is that, however autonomous they might become (and in principle they can be as autonomous, as self-enhancing or self-creating, as any person), they would not - without special provision, which might be waived - share with us natural conscious agents our vulnerability or our mortality.”
 ‌
21. What is the difference between “celestial” and “organic” ethics for Regina Rini?
Celestial ethics are ethics taken from the point of view of “objectivity”, or “how the universe sees it”, and are not inherent to those wishing to act ethically - so if animals were capable of resisting impulses and acting rationally, they too would be expected to act as ethically as humans. Organic ethics are built into the actor performing them, and the task is to strive to develop abilities already in our nature.
 ‌
22. Why does Rini think that a machine's ability to beat a human being at Go could have troubling ethical implications?
AlphaGo's ability to beat a human while making moves that no human watching could understand highlights an important difference between the way humans and AI see things and explain or rationalise them. This matters because if AI were left to develop and machine-learn ethics and morals, we would not understand the conclusions it reached or be able to comprehend them - so we would either treat such systems as G-ds and do as they say, or, more likely, ignore their ethical advice because it is too different from our current positions. In either case, why bother letting them develop positions at all?
 ‌
23. Is Rini's comparison of AI systems to human teenagers a good one? Why or why not?
This is hard to answer in summary format - it is a matter of opinion. Her whole article builds to the conclusion that we should treat them as teenagers. In short: we cannot create robots with a morality we can understand, nor justify forcing robots to follow our morals, so we should educate them as we see fit but be willing to accept them growing up and becoming their own thing - with opinions we might not like.
 ‌
24. Why do Basl and Schwitzgebel think AI systems are deserving of ethical protection?
 
25. What is the name Norbert Wiener uses for the study of feedback loops in living and artificial systems?
A) Metaphysics
B) Cybernetics
C) Epistemology
D) Artificial Intelligence
 ‌

 ‌
26. What theory in the philosophy of mind does Nick Bostrom presuppose in the course of making his argument for the simulation hypothesis?
A) Biologism
B) Dualism
C) Functionalism
D) Eliminative materialism
 ‌
27. Which is an example of a cybernetic system for Wiener?
A) A living body
B) A thermostat
C) A computer
D) All of the above
 ‌
28. Which of the following ethical thought experiments has been discussed the most by engineers working on the development of self-driving cars?
A) The moral machine experiment
B) The ring of Gyges
C) The tunnel problem
D) The trolley problem
 ‌
29. Which of the following machines was an important influence, according to Ada Lovelace, in her work with Charles Babbage on the Analytical Engine?
A) The Antikythera mechanism
B) Da Vinci's ornithopter
C) Jacquard's punched-card loom
D) The Tesla coil
 ‌
30. Who wrote the following passage?
Only a small percentage of human mental processing is accessible to the conscious mind. Consciousness is correlated with novel learning tasks that require attention and focus. A superintelligence would possess expert-level knowledge in every domain, with rapid-fire computations ranging over vast databases that could include the entire Internet and ultimately encompass an entire galaxy. What would be novel to it? What would require slow, deliberative focus? Wouldn't it have mastered everything already? Like an experienced driver on a familiar road, it could rely on nonconscious processing. The simple consideration of efficiency suggests, depressingly, that the most intelligent systems will not be conscious. On cosmological scales, consciousness may be a blip, a momentary flowering of experience before the universe reverts to mindlessness.

A) Daniel Dennett
B) G. W. Leibniz
C) Susan Schneider
D) Ada Lovelace
 ‌
List of all articles:

Number  Author                       Article                        Questions
1       Ong                          Literacy & Orality             1, 2, 3
2       Justin                       Internet of Snails             4, 5
3       Sutton & Sutton              Sponges                        6
4       Menabrea & Lovelace          Babbage's Analytical Engine    7, 8, 29
5       Wiener                       Cybernetics                    9, 10, 11, 12, 25, 27
6       Bostrom                      Living in a Simulation         13, 14, 26
7       Schneider                    Alien Consciousness            15, 30
8       Chalmers                     Singularity                    16, 17, 18
9       Dennett                      AI Consciousness               19, 20
10      Rini                         Robots' Moral Reasoning        21, 22, 23
11**    West, Whitaker & Crawford    Discriminating Systems         -
12      Basl & Schwitzgebel          AI Rights                      24

**In an email Justin told us not to focus too much on this article
 ‌
Summary of Norbert Wiener: Cybernetics
· Humans have always been fascinated by automation and simulation.
· Descartes considered animals lesser beings because the Church declared them soulless.
· Leibniz starts from a position similar to Spinoza and co.: basically, our ability to put will into action parallels divine intervention.
· He replaces “mind” and “matter” with “monads” - like souls, but each in its own closed universe, like a clock wound up and ready to go (but wound by G-d, and therefore perfectly).
· Monads can reflect each other but don't really influence the outside world.
· The new study of automata is communication engineering.
· It is something of a miracle that we can explain everything that acts on and affects the world with one theory and the mechanisms of physiology.
· Communication engineering belongs to Gibbsian statistical mechanics, not Newtonian mechanics.
· Two fundamental properties of living systems: learning and reproducing.
· Huxley: birds are bad at ontogenetic learning; they perform complicated behaviour without instruction from their parents.
· Man-made machines can learn and reproduce themselves.
· Von Neumann's theory of gaming: work backwards from the last move, trying to win or at least draw; the player one step earlier will try the same and try to prevent you from winning, and so on (a minimal code sketch of this backward-induction idea appears after this list).
· Learning machines do not just extrapolate linearly from previous moves.
· Mongoose vs snake: the mongoose seems to feint and cause the snake to strike, but the mongoose learns the rhythm, and when the snake is extended, lands the kill shot.
· The same holds in bullfighting, sword fighting, tennis, etc.
· You can't necessarily turn a machine off if you don't have all the information.
· The Sorcerer's Apprentice: the boy is too lazy to fetch water so he magics the broom into doing it, but the broom won't stop, the boy almost drowns, he breaks the broom (to turn it off), and both halves go on fetching water until the sorcerer comes home and un-magics it.
· The classic issue with programming machines to “win wars” or “do good”: it is impossible to define these things properly.
· Self-replication must include replicating the functionality, not just the matter.
· Transducers: output determined by past inputs, invariant with respect to translation in time.
· He explains something about the statistical probability of certain machines/functions reproducing themselves because they are transducers, thereby showing that machine self-replication can happen.
 ‌
Summary of Nick Bostrom: Are You Living in a Computer Simulation?
 ‌
Because a future civilization could simulate high-IQ sentient beings, we are probably simulations run by our descendants. Basically, if in theory our descendants will be able to simulate their ancestors, then we have to take seriously that we are someone's simulated ancestors.
 ‌
Substrate independence: provided a system implements the right sort of computational structures and processes, it can be associated with conscious experiences.
The argument we shall present does not, however, depend on any very strong version of functionalism or computationalism. For example, we need not assume that the thesis of substrate-independence is necessarily true (either analytically or metaphysically) - just that, in fact, a computer running a suitable program would be conscious.
 ‌
The simulation argument works equally well for those who think that it will take hundreds of thousands of years to reach a “posthuman” stage of civilization, where humankind has acquired most of the technological capabilities that one can currently show to be consistent with physical laws and with material and energy constraints.
 ‌
One estimate, based on how computationally expensive it is to replicate the functionality of a piece of nervous tissue that we have already understood and whose functionality has been replicated in silico (contrast enhancement in the retina), yields a figure of ~10^14 operations per second for the entire human brain. An alternative estimate, based on the number of synapses in the brain and their firing frequency, gives a figure of ~10^16-10^17 operations per second.
Moreover, since the maximum human sensory bandwidth is ~10^8 bits per second, simulating all sensory events incurs a negligible cost.
 ‌
Posthuman civilizations would have enough computing power to run hugely many ancestor-simulations even while using only a tiny fraction of their resources for that purpose.
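
A quick back-of-envelope check of the figures quoted above, as a minimal sketch: it uses only the higher per-brain estimate (~10^17 operations per second) and the sensory-bandwidth figure (~10^8 bits per second) from the text; the 100-year simulated lifetime is an added illustrative assumption.

```python
# Back-of-envelope arithmetic for the estimates quoted above (illustrative only).
OPS_PER_BRAIN_SECOND = 1e17      # upper, synapse-based estimate from the text
SENSORY_BITS_PER_SECOND = 1e8    # maximum human sensory bandwidth from the text
SECONDS_PER_CENTURY = 100 * 365.25 * 24 * 3600   # ~3.16e9 s (assumed lifetime)

ops_per_brain_century = OPS_PER_BRAIN_SECOND * SECONDS_PER_CENTURY
sensory_bits_per_century = SENSORY_BITS_PER_SECOND * SECONDS_PER_CENTURY

print(f"compute for one brain-century:      ~{ops_per_brain_century:.1e} operations")
print(f"sensory data for one brain-century: ~{sensory_bits_per_century:.1e} bits")
print(f"sensory / compute ratio:            ~{sensory_bits_per_century / ops_per_brain_century:.1e}")
```

Simulating one brain for a century comes out at roughly 3 x 10^26 operations, while feeding it a century of sensory input is about a billionth of that - which is why the text can say that simulating all sensory events incurs a negligible cost.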
 ‌
This isn't the Doomsday Argument, because the Doomsday Argument rests on a much stronger and more controversial premiss, namely that one should reason as if one were a random sample from the set of all people who will ever have lived (past, present, and future), even though we know that we are living in the early twenty-first century rather than at some point in the distant past or the future. The bland indifference principle, by contrast, applies only to cases where we have no information about which group of people we belong to.
 ‌
There are many ways in which humanity could become extinct before reaching posthumanity. Perhaps the most natural interpretation of (1) is that we are likely to go extinct as a result of the development of some powerful but dangerous technology. One candidate is molecular nanotechnology, which in its mature stage would enable the construction of self-replicating nanobots capable of feeding on dirt and organic matter - a kind of mechanical bacteria. Such nanobots, designed for malicious ends, could cause the extinction of all life on our planet.
 ‌

 ‌
The second alternative in the simulation argument's conclusion is that the fraction of posthuman civilizations that are interested in running ancestor-simulations is negligibly small. In order for (2) to be true, there must be a strong convergence among the courses of advanced civilizations: if the number of ancestor-simulations created by the interested civilizations is extremely large, the rarity of such civilizations must be correspondingly extreme, so that virtually no posthuman civilizations decide to use their resources to run large numbers of ancestor-simulations.
 ‌
There could be layers of simulations within simulations. Although all the elements of such a system can be naturalistic, even physical, it is possible to draw some loose analogies with religious conceptions of the world. In some ways, the posthumans running a simulation are like gods in relation to the people inhabiting the simulation: the posthumans created the world we see; they are of superior intelligence; they are “omnipotent” in the sense that they can interfere in the workings of our world even in ways that violate its physical laws; and they are “omniscient” in the sense that they can monitor everything that happens. However, all the demigods except those at the fundamental level of reality are subject to sanctions by the more powerful gods living at lower levels.
 ‌
Supposing we live in a simulation, what are the implications for us humans? The foregoing remarks notwithstanding, the implications are not all that radical. Our best guide to how our posthuman creators have chosen to set up our world is the standard empirical study of the universe we see. The revisions to most parts of our belief networks would be rather slight and subtle - in proportion to our lack of confidence in our ability to understand the ways of posthumans. Properly understood, therefore, the truth of (3) should have no tendency to make us “go crazy” or to prevent us from going about our business and making plans and predictions for tomorrow. The chief empirical importance of (3) at the current time seems to lie in its role in the tripartite conclusion established above. We may hope that (3) is true since that would decrease the probability of (1), although if computational constraints make it likely that simulators would terminate a simulation before it reaches a posthuman level, then our best hope would be that (2) is true.
 ‌
A technologically mature “posthuman” civilization would have enormous computing power. Based on this empirical fact, the simulation argument shows that at least one of the following propositions is true: (1) the fraction of human-level civilizations that reach a posthuman stage is very close to zero; (2) the fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero; (3) the fraction of all people with our kind of experiences that are living in a simulation is very close to one. If (1) is true, then we will almost certainly go extinct before reaching posthumanity. If (2) is true, then there must be a strong convergence among the courses of advanced civilizations so that virtually none contain any relatively wealthy individuals who desire to run ancestor-simulations and are free to do so. If (3) is true, then we almost certainly live in a simulation. In the dark forest of our current ignorance, it seems sensible to apportion one's credence roughly evenly between (1), (2), and (3).
Unless we are now living in a simulation, our descendants will almost certainly never run an ancestor-simulation.
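
The “simple probability calculation” behind this tripartite conclusion can be written out explicitly. The notation below is a sketch reconstructed from memory of Bostrom's paper (it is not spelled out in these notes): f_P is the fraction of human-level civilizations that reach posthumanity, N̄ the average number of ancestor-simulations run by a posthuman civilization, and H̄ the average number of people who lived in a civilization before it became posthuman.

```latex
% Fraction of all observers with human-type experiences who live in simulations
% (sketch; notation reconstructed from memory of the paper):
f_{\mathrm{sim}}
  = \frac{f_P \,\bar{N}\, \bar{H}}{f_P \,\bar{N}\, \bar{H} + \bar{H}}
  = \frac{f_P \,\bar{N}}{f_P \,\bar{N} + 1}.
% If posthuman civilizations can run astronomically many simulations, then either
% f_P \approx 0 (proposition 1), or almost no posthuman civilization is interested,
% making \bar{N} small (proposition 2), or f_{\mathrm{sim}} \approx 1 (proposition 3).
```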
 ‌
Summary of Susan Schneider: It May Not Feel Like Anything To Be an Alien
Humans are (probably) not the most intelligent species in the universe, and soon will not be even on Earth, as we are already being - and will soon be completely - overtaken by synthetic intelligence. Therefore, in all likelihood, superhuman alien intelligence is postbiological. Postbiological intelligence can include biological (not artificial) minds that have technological enhancements. Technological development entirely outpaces biological evolution; it is endlessly improvable, can be backed up and stored in multiple locations, and is much better at surviving than a biological creature.

Ray Kurzweil - humanity merging with machines to form a techno-topia.
Hawking, Gates, Musk - AI will rewrite itself and we will lose control of it.
Programming morality and kill-switches into a machine is of limited use: a machine clever enough will simply re-program itself. Therefore, AI aliens are even more troublesome than biological ones, and we should be careful when actively signaling and drawing alien attention. We need to reach our own singularity before we start looking for AI aliens.
Consciousness is the parameter for judging whether something is a self with an inner life - and therefore relatable - as opposed to an automaton. Thus, the question of whether or not alien AI has consciousness could influence how it relates to us. If it relates to us on a shared-consciousness level, that would be good. But because consciousness is also subjective, it could be so super-conscious that it sees our consciousness in a manner similar to how we perceive the consciousness of an apple.

There is current debate and research over artificial consciousness, with tests being conducted on silicon-based brain chips. But consciousness is seemingly not something that just forms; it has to be engineered into the AI. In fact, unconscious AI is preferable (it avoids the moral questions of enslaving robots, etc.), so who, or what, would decide to engineer conscious AI?

“Soon, humans will no longer be the measure of intelligence on Earth. And perhaps already, elsewhere in the cosmos, superintelligent AI, not biological life, has reached the highest intellectual plateaus. But perhaps biological life is distinctive in another significant respect - conscious experience. For all we know, sentient AI will require a deliberate engineering effort by a benevolent species, seeking to create machines that feel. Perhaps a benevolent species will see fit to create their own AI mind-children. Or perhaps future humans will engage in some consciousness engineering, and send sentience to the stars.”

Summary of David Chalmers: The Singularity

“What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to ever-greater levels of intelligence, as each generation of machines creates more intelligent machines in turn. This intelligence explosion is now often known as the “singularity”.”

The singularity is defined differently by different academics, but Chalmers takes the approach of a moderate intelligence explosion: machines become better at designing machines than humans are, leading to a runaway improvement in which each machine designs a machine better than itself - whether or not this is accompanied by a speed explosion, the doubling of processing speed at regular intervals.

The Singularity: Is It Likely?

Chalmers focuses on the "intelligence explosion" kind of singularity, and his first project is to formalize and defend I.J. Good's 1965 argument. Defining AI as being "of human level intelligence," AI+ as AI "of greater than human level" and AI++ as "AI of far greater than human level" (superintelligence), Chalmers updates Good's argument to the following (a bare logical skeleton of the chain is sketched after the premises):

1) There will be AI (before long, absent defeaters).

2) If there is AI, there will be AI+ (soon after, absent defeaters).

3) If there is AI+, there will be AI++ (soon after, absent defeaters).

Therefore, there will be AI++ (before too long, absent defeaters).
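
Written as a bare logical skeleton (just a restatement of the premises above, with "a defeater occurs" abbreviated D and the three stages abbreviated AI, AI+, AI++), the argument is a simple chain of conditionals:

```latex
% Logical skeleton of Chalmers's updated version of Good's argument (restated):
\neg D \rightarrow \mathrm{AI}, \qquad
(\mathrm{AI} \land \neg D) \rightarrow \mathrm{AI}^{+}, \qquad
(\mathrm{AI}^{+} \land \neg D) \rightarrow \mathrm{AI}^{++}
\;\;\vdash\;\; \neg D \rightarrow \mathrm{AI}^{++}
```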

By "defeaters," Chalmers means global catastrophes like nuclear war or a major asteroid impact. Chalmers is more conservative about predicting when true AI will occur, giving a 50% chance of it happening before 2100, and claiming that the true bottleneck is not hardware but rather software - our algorithms are not good enough yet.

One way to satisfy premise (1) is to achieve AI through brain emulation (Sandberg & Bostrom, 2008). Against this suggestion, Lucas (1961), Dreyfus (1972), and Penrose (1994) argue that human cognition is not the sort of thing that could be emulated. Chalmers (1995; 1996, chapter 9) has responded to these criticisms at length. Briefly, Chalmers notes that even if the brain is not a rule-following algorithmic symbol system, we can still emulate it if it is mechanical. (Some say the brain is not mechanical, but Chalmers dismisses this as discordant with the evidence.)

Searle (1980) and Block (1981) argue instead that even if we can emulate the human brain, it doesn't follow that the emulation is intelligent or has a mind. Chalmers says we can set these concerns aside by stipulating that, when discussing the singularity, AI need only be measured in terms of behaviour. The conclusion that there will be AI++ at least in this sense would still be massively important.

Another consideration in favour of premise (1) is that evolution produced human-level intelligence, so we should be able to build it, too. Perhaps we will even achieve human-level AI by evolving a population of dumber AIs through variation and selection in virtual worlds. We might also achieve human-level AI by direct programming or, more likely, through systems of machine learning.

Premise (2) is plausible because AI will probably be produced by an extendible method, and so extending that method will yield AI+. Brain emulation might turn out not to be extendible, but the other methods are. Even if human-level AI is first created by a non-extendible method, this method would itself soon lead to an extendible method, and in turn enable AI+. AI+ could also be achieved by direct brain enhancement. Thus, he rejects the claim that intelligence has peaked.

Premise (3) is the amplification argument from Good: an AI+ would be better than we are at designing intelligent machines, and could thus improve its own intelligence. Having done that, it would be even better at improving its intelligence, and so on, in a rapid explosion of intelligence. He also notes that the fundamental assumptions are that intelligence is measurable and that a more intelligent AI has the ability to create an even more intelligent AI.

In section 3 of his paper, Chalmers argues that there could be an intelligence explosion without there being such a thing as "general intelligence" that could be measured, despite the fact that the premises of section 2 rest on the assumption of a general intelligence that can be measured.

In section 4, Chalmers lists several possible obstacles to the singularity: 1) structural obstacles, such as limits in intelligence space, failure to take off, and diminishing returns; 2) correlation obstacles, i.e. the possibility that an increase in intelligence will not bring an increased ability to design even more intelligent systems; 3) manifestation obstacles, such as motivational defeaters and situational defeaters (disasters and resource limitations). Chalmers believes that the most likely are motivational defeaters; he addresses the rest briefly (explaining why they are not true obstacles), though his argument is mainly his personal analysis.


Constraining AI

Next, Chalmers considers how we might design an AI+ that helps to create a desirable future and not a horrifying one. If we achieve AI+ by extending the method of human brain emulation, the AI+ will at least begin with something like our values. Directly programming friendly values into an AI+ (Yudkowsky, 2004) might also be feasible, though an AI+ arrived at by evolutionary algorithms is worrying.

Human-based AI (brain emulation, etc.) is less dangerous, but non-human-based AI could come first, and this would require careful programming and design to ensure that it has desires and values that are beneficial to humans.

“So far, my discussion has largely assumed that intelligence and value are independent of each other. In philosophy, David Hume advocated a view on which value is independent of rationality: a system might be as intelligent and as rational as one likes, while still having arbitrary values. By contrast, Immanuel Kant advocated a view on which values are not independent of rationality: some values are more rational than others.”

Most of this assumes that values are independent of intelligence, as Hume argued. But if Hume was wrong and Kant was right, then we will be less able to constrain the values of a superintelligent machine; on the other hand, the more rational the machine is, the better values it will have.

Another way to constrain an AI is not internal but external. For example, we could lock it in a virtual world from which it could not escape, and in this way create a leak-proof singularity. But there is a problem: for the AI to be of use to us, some information must leak out of the virtual world for us to observe it - and then the singularity is not leak-proof. And if the AI can communicate with us, it could reverse-engineer human psychology from within its virtual world and persuade us to let it out of its box - onto the internet, for example.

Therefore, a leak-proof singularity would also require preventing information from leaking in. This, however, would hinder the performance and functionality of the AI. An alternative would be to design a virtual world with very simple physics and implement the AI separately, without giving it the ability to access its own processes. We could then study the AI very carefully, and only once we decide that it is entirely benevolent, slowly let it out into the world.

Our Place in a Post-Singularity World

Chalmers says there are four options for us in a post-singularity world: extinction, isolation, inferiority, and integration.

The first option is undesirable. The second option would keep us isolated from the AI, a kind of technological isolationism in which one world is blind to progress in the other. The third option may be infeasible because an AI++ would operate so much faster than us that inferiority would be only a blink of time on the way to extinction.

For the fourth option to work, we would need to become superintelligent machines ourselves. One path to this might be mind-uploading, which comes in several varieties and has implications for our notions of consciousness and personal identity that Chalmers discusses. Chalmers prefers gradual uploading (slowly replacing the brain through nano-transfer as each part in turn learns to replicate the brain's function), and considers it a form of survival. He also mentions what he calls non-destructive uploading, but there is no technology for this on the horizon.

The question of surviving an upload divides into the questions of whether the uploaded self will be conscious, and whether it will retain the personal identity of the original 'owner' of the biological brain. The first part is almost impossible to answer - much as we can describe every part of a mouse and how it lives and behaves, yet have no idea what it feels like to be a mouse. Moreover, we have no idea how a biological brain is conscious, so Chalmers argues that a non-biological brain could be conscious too. Gradual uploading is also, potentially, the most effective way of preserving consciousness. He also mentions the challenge of convincing people that they will remain conscious post-upload, but expects that eventually the idea will catch on.

In terms of personal identity, Chalmers is undecided, but leans toward a view that takes the psychological continuity of a person (as opposed to physical/biological continuity) to be the prevailing indicator of that individual's survival.

The‌‌pessimistic‌‌view‌‌of‌‌survival‌‌in‌‌uploading‌‌takes‌‌the‌‌following‌‌approach:‌  ‌

1.‌‌In‌‌non-destructive‌‌uploading,‌‌DigiDave‌‌is‌‌not‌‌identical‌‌to‌‌Dave.‌  ‌

2.‌‌If‌‌in‌‌non-destructive‌‌uploading,‌‌DigiDave‌‌is‌‌not‌‌identical‌‌to‌‌Dave,‌‌then‌‌in‌‌destructive‌‌uploading,‌‌ 
DigiDave‌‌is‌‌not‌‌identical‌‌to‌‌Dave.‌  ‌

3.‌‌Therefore,‌‌in‌‌destructive‌‌uploading,‌‌DigiDave‌‌is‌‌not‌‌identical‌‌to‌‌Dave.‌  ‌

In addition, Chalmers believes that if, in gradual uploading, a person retains consciousness and personal identity, then in instant uploading they should do the same. He also raises the possibility of post-mortem uploading, either through cryonic brain-preservation or through reconstruction.

“The further-fact view is the view that there are facts about survival that are left open by knowledge of physical and mental facts,” which Chalmers believes could be true; if it is true, then the facts about destructive and non-destructive uploading are unclear, and it follows that the optimistic view can be adopted with good reason. However, he recognises that the further-fact view might not be true, which could mean that the deflationary view is true (the view that our attempts to settle open questions about survival tacitly presuppose facts about survival that do not exist). “If a deflationary view is correct, I think that questions about survival come down to questions about the value of certain sorts of futures: should we care about them in the way in which we care about futures in which we survive? I do not know whether such questions have objective answers. But I am inclined to think that insofar as there are any conditions that deliver what we care about, continuity of consciousness suffices for much of the right sort of value. Causal and psychological continuity may also suffice for a reasonable amount of the right sort of value. If so, then destructive and reconstructive uploading may be reasonably close to as good as ordinary survival. What about hard cases, such as non-destructive gradual uploading or split-brain cases, in which one stream of consciousness splits into two? On a deflationary view, the answer will depend on how one values or should value these futures. At least given our current value scheme, there is a case that physical and biological continuity counts for some extra value, in which case BioDave might have more right to be counted as Dave than DigiDave. But it is not out of the question that this value scheme should be revised, or that it will be revised in the future, so that BioDave and DigiDave will be counted equally as Dave. In any case, I think that on a deflationary view gradual uploading is close to as good as ordinary non-Edenic survival. And destructive, non-destructive, and reconstructive uploading are reasonably close to as good as ordinary survival. Ordinary survival is not so bad, so one can see this as an optimistic conclusion.”

Conclusion

Chalmers concludes:

“Will there be a singularity? I think that it is certainly not out of the question, and that the main obstacles are likely to be obstacles of motivation rather than obstacles of capacity.

How should we negotiate the singularity? Very carefully, by building appropriate values into machines, and by building the first AI and AI+ systems in virtual worlds.

How can we integrate into a post-singularity world? By gradual uploading followed by enhancement if we are still around then, and by reconstructive uploading followed by enhancement if we are not.”

Summary of Regina Rini: Raising Good Robots
My understanding of the breakdown is as follows: celestial ethics are ethics taken from the point of view of "objectivity", or "how the universe sees it", and are therefore in no way inherent to those wishing to act ethically as such. This leads to the conclusion that if animals (or AI, of course) were capable of resisting their “flawed” impulses and acting in a rational way - as humans can and do - then they too would be expected to act as ethically as humans do. The leading purveyors of this view are, famously, Plato and Kant.
Organic ethics, on the other hand, are “built in” to the actor that intends to perform them. The approach to a moral life is therefore more of a self-search: finding, and growing closer to, one's natural intrinsic ethical inclinations. We must constantly strive to develop these abilities, as opposed to searching for what they are in the cosmos. This view is famously held by Aristotle, Hume and Darwin.

Rini challenges both of these classical approaches as viable options for AI, on the following grounds: AlphaGo beat a human Go master, and while doing so performed moves that no human who was watching could understand. For Rini, this highlights an important difference between the way humans and AI see the facts of the world, and therefore the way they explain and/or rationalise them. This matters because if AI were left to develop and go on to learn ethics and morals of its own accord, we could not and would not understand the conclusions it reached. That is bad enough, but coupled with our complete lack of comprehension of the plane on which these machines act and think, it would leave us with only two plausible paths of action, neither of which is good. The first is that we would effectively treat them as G-ds and do as they say, committing ourselves to their super-developed moral codes even at a huge cost to our “humanity”. The second - and more likely - option is that humanity would force itself to ignore the machine-produced ethical advice because it is too different from our current positions. In either case, why bother letting them develop positions at all?

The problem with organic morals, in Rini's view, is that however much we try to make the AI similar to us, by definition it will be different from us (otherwise it would just be us). These AIs will nevertheless be as close to thinking, sentient beings as possible, so there are many ethical hurdles in the way of our simply taking advantage of this other “humanoid-style” being to serve us forevermore. Cue the Robot Civil Rights Movement.

She concludes, therefore, that we should educate these AI beings in a way that we see as ethically fit. However, she contends that it is imperative that we be willing to accept them “growing up” and becoming their own thing, inevitably with the possibility that they will hold moral and ethical opinions that we might not like.

 ‌
 ‌

 ‌
