1. What does Walter J. Ong think are the principal differences between oral and textual cultures?

“Without writing, the literate mind would not and could not think as it does, not only when engaged in
writing but normally even when it is composing its thoughts in oral form. More than any other single
invention, writing has transformed human consciousness” (p.77 pa.1)

“Writing establishes what has been called ‘context-free’ language….discourse which cannot be directly
questioned or contested as oral speech can be because written discourse has been detached from its
author” (p.77 pa.2)

“There is no way directly to refute a text. After absolutely total and devastating refutation, it says exactly
the same thing as before. This is one reason why ‘the book says’ is popularly tantamount to ‘it is true’…. A
text stating what the whole world knows is false will state falsehood forever, so long as the text
exists.”(p.78 pa.1)

“By contrast with natural, oral speech, writing is completely artificial. There is no way to write ‘naturally’.
Oral speech is fully natural to human beings in the sense that every human being in every culture who is
not physiologically or psychologically impaired learns to talk.” (p.81 pa.1)

“A written text is basically unresponsive. If you ask a person to explain his or her statement, you can get
an explanation; if you ask a text, you get back nothing except the same, often stupid, words which called
for your question in the first place.” (p.78 pa.1)

2. What is Plato's critique of writing in the Phaedrus?

“Writing…(Plato says) is inhuman, pretending to establish outside the mind what in reality can be only in
the mind. It is a thing, a manufactured product” (p.78 pa.1)

“writing destroys memory. Those who use writing will become forgetful, relying on an external resource
for what they lack in internal resources. Writing weakens the mind.” (p.78 pa.1)

“Plato’s Socrates also holds it against writing that the written word cannot defend itself as the natural
spoken word can: real speech and thought always exist essentially in a context of give-and-take between
real persons. Writing is passive, out of it, in an unreal, unnatural world.” (p.78 pa.1)

“Plato was thinking of writing as an external, alien technology, as many people today think of the
computer” (p.80 pa.2)

3. What does Ong think the history of writing can teach us about the place of computing technology in
current society?

“Writing and print and the computer are all ways of technologizing the word.” (p.79 pa.2)

“Technologies are artificial, but—paradox again—artificiality is natural to human beings. Technology,
properly interiorized, does not degrade human life but on the contrary enhances it.” (p.81)

“Plato was thinking of writing as an external, alien technology, as many people today think of the
computer. Because we have by today so deeply interiorized writing, made it so much a part of
ourselves…we find it difficult to consider writing to be a technology as we commonly assume printing and
the computer to be.” (p.80)

“Writing is in a way the most drastic of the three technologies. It initiated what print and computers only
continue, the reduction of dynamic sound to quiescent space, the separation of the word from the living
present, where alone spoken words can exist” (p.80)

4. What does the case of Jules Allix and his “snail telegraph” teach us about the history of
telecommunications technology?

“This is what we might call the deep history of the Internet. Significantly, it is also the history of biology: of
thinking about what it is in living beings that sets them apart, and of trying to harness whatever this is for
feats we seem to have always known to be possible.” (p.33 pa.2)

“Before there was a distinct science of biology, the study of living beings was the core element of the
foundational science of nature. The cosmos as a whole was modeled after the living animal body, and the
problems of physics seemed to have their resolution in the study of physiology”

5. Why are Allix, Digby, and others interested in learning from biological systems and phenomena as a
path towards innovation in communication technology?

“In all of these cases—cinema, guns, transportation—it may seem that we are looking at relatively recent,
compressed, and familiar instances in the history of technology: slightly earlier chapters of it, perhaps, but
nonetheless ones safely on the human and social side of the boundary that marks this realm off from
nature, and so from the study of natural history.”

6. How did the author of the original 1632 article on Captain Vosterloch conceive of the possibility of
recording technologies (notwithstanding the fact that the recording sponge itself is a complete
fabrication)?

“Ancient China, Greece and Egypt all produced magical devices to duplicate sound or make statues speak,
involving bellows, for instance, or simply a hidden person.” (p.2)

“Most grotesque is Rabelais’s idea: the death throes of soldiers who died in freezing icefields are
gruesomely replayed when spring thaws the ice. Cyrano convincingly describes talking books with watch-
like gears instead of pages. A needle placed on the desired chapter produces a quasi-human voice –
speaking the lunar language, of course. Whereupon we remember that Cyrano also claimed to have
visited the moon” (p.2)

“If printing techniques could preserve and disseminate written words and pictures too (in the form of
engravings), why should speech – the most immediate form of communication – prove recalcitrant?” (p.3)

“We know that people were already fantasising about recording sound.” (p.3)

7. What lessons does Ada Lovelace think information scientists can learn from the study of silk
manufacturing?

“Supposing this process is successively repeated according to a law indicated by the pattern to be
executed, we perceive that this pattern may be reproduced on the stuff. For this purpose we need merely
compose a series of cards according to the law required, and arrange them in suitable order one after the
other; then, by causing them to pass over a polygonal beam which is so connected as to turn a new face
for every stroke of the shuttle, which face shall then be impelled parallelly to itself against the bundle of
lever-arms, the operation of raising the threads will be regularly performed. Thus we see that brocaded
tissues may be manufactured with a precision and rapidity formerly difficult to obtain” (p.4)
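
In modern terms the passage is describing a stored program: the punched card is the instruction, the ordered card series is the program, and the loom is the processor that replays it. A minimal sketch of that idea (the card encoding and the sample motif are invented for illustration, not taken from the text):

```python
# Sketch of the card mechanism described above: each card records
# which warp threads to raise for one stroke of the shuttle, and the
# ordered series of cards encodes the whole pattern. Card format and
# motif are invented for illustration.

from typing import List

Card = List[bool]  # True = raise this warp thread for one stroke

def weave(cards: List[Card]) -> List[str]:
    """Replay the card series in order, one card per shuttle stroke."""
    rows = []
    for card in cards:
        # Raised threads show the brocade ('#'); lowered threads show
        # the ground weave ('.').
        rows.append("".join("#" if raised else "." for raised in card))
    return rows

# "Compose a series of cards according to the law required, and
# arrange them in suitable order": here, a tiny diamond motif
# over five threads and five strokes.
motif: List[Card] = [
    [False, False, True, False, False],
    [False, True, False, True, False],
    [True, False, False, False, True],
    [False, True, False, True, False],
    [False, False, True, False, False],
]

for row in weave(motif):
    print(row)
```

This is also the bridge to question 8: in Lovelace's analogy, the Analytical Engine's cards select algebraic operations rather than threads, so that the engine weaves algebraic patterns the way the loom weaves flowers and leaves.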

8. Ada Lovelace says that the Analytical Engine designed by Charles Babbage is capable of
“algebraic weaving”. What does she mean by this?

9. Why does Norbert Wiener think that in the 19th century the idea of the automaton was of a “glorified
heat engine”?

10. What is the difference between the “Greek” and the “magical” automaton, in Wiener's view?

11. Wiener thinks that cybernetic automata are not part of some distant science-fiction future, but are
already realized in, for example, thermostats and automatic gyrocompass ship-steering systems. What
do these have in common with the AI systems of today? How do they differ?
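
What the thermostat and the gyrocompass autopilot have in common, on Wiener's account, is negative feedback: the device measures its own output, compares it with a goal, and uses the error to steer its next action. A minimal sketch of that loop (all constants invented for illustration):

```python
# Sketch of the negative-feedback loop shared by Wiener's examples:
# measure the output, compare it with a setpoint, and feed the error
# back into the next action. All constants invented for illustration.

def heater_should_run(temperature: float, setpoint: float) -> bool:
    """Bang-bang control: run the heater iff we are below target."""
    return temperature < setpoint

temperature, setpoint = 15.0, 20.0
for minute in range(10):
    heater_on = heater_should_run(temperature, setpoint)
    # The room warms while the heater runs and drifts cooler otherwise.
    temperature += 1.0 if heater_on else -0.5
    state = "on" if heater_on else "off"
    print(f"minute {minute}: {temperature:.1f} C, heater {state}")
```

One way to frame the "how do they differ?" half of the question: today's AI systems also correct themselves against an error signal, but they learn the control rule itself from data rather than executing one fixed in advance.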

12. Why does Wiener think it's easier to build learning machines than to build self-reproducing machines?

13. What is the theory in the philosophy of mind that must be presupposed in order for Nick Bostrom's
simulation argument to succeed?

“A common assumption in the philosophy of mind is that of substrate‐independence. The idea is that
mental states can supervene on any of a broad class of physical substrates. Provided a system implements
the right sort of computational structures and processes, it can be associated with conscious
experiences.” (p.2)

“The argument we shall present does not, however, depend on any very strong version of functionalism
or computationalism” (p.2)

14. Why does Bostrom think that the fraction of human-level civilizations that reach a post-human stage is
very small?

“If (1) is true, then humankind will almost certainly fail to reach a posthuman level; for virtually no species
at our level of development become posthuman, and it is hard to see any justification for thinking that
our own species will be especially privileged or protected from future disasters”

“…the hypothesis that humankind will go extinct before reaching a posthuman level”

“There are many ways in which humanity could become extinct before reaching posthumanity. Perhaps
the most natural interpretation of (1) is that we are likely to go extinct as a result of the development of
some powerful but dangerous technology. One candidate is molecular nanotechnology, which in its
mature stage would enable the construction of self‐replicating nanobots capable of feeding on dirt and
organic matter – a kind of mechanical bacteria. Such nanobots, designed for malicious ends, could cause
the extinction of all life on our planet”

“The second alternative in the simulation argument’s conclusion is that the fraction of posthuman
civilizations that are interested in running ancestor‐simulations is negligibly small. In order for (2) to be
true, there must be a strong convergence among the courses of advanced civilizations. If the number of
ancestor‐simulations created by the interested civilizations is extremely large, the rarity of such
civilizations must be correspondingly extreme. Virtually no posthuman civilizations decide to use their
resources to run large numbers of ancestor‐simulations. Furthermore, virtually all posthuman civilizations
lack individuals who have sufficient resources and interest to run ancestor‐simulations; or else they have
reliably enforced laws that prevent such individuals from acting on their desires”

“Simulating even a single posthuman civilization might be prohibitively expensive. If so, then we should
expect our simulation to be terminated when we are about to become posthuman.” (p.12)
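
The quoted passages trade on a simple piece of arithmetic that the paper makes explicit; reconstructed here from the paper's notation, where f_P is the fraction of human-level civilizations that reach a posthuman stage, N̄ the average number of ancestor-simulations run by a posthuman civilization, and H̄ the average number of individuals who lived before their civilization reached that stage, the fraction of human-type observers who live in simulations is:

```latex
f_{\mathrm{sim}}
  = \frac{f_P \, \bar{N} \, \bar{H}}{\left(f_P \, \bar{N} \, \bar{H}\right) + \bar{H}}
  = \frac{f_P \, \bar{N}}{f_P \, \bar{N} + 1}
```

Since N̄ would be astronomically large for any civilization that runs ancestor-simulations at all, f_sim is pushed toward 1 unless f_P N̄ is close to zero, which is why the argument forces one of the three disjuncts: f_P is tiny (the subject of question 14), the fraction of interested civilizations is tiny, or we are almost certainly in a simulation.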

15. Why does Susan Schneider think that extraterrestrials might be intelligent without being conscious?

“The universe’s greatest intelligences may be postbiological, having grown out of civilizations that were
once biological.”

“The question of whether AIs have an inner life is key to how we value their existence. Consciousness is
the philosophical cornerstone of our moral systems, being key to our judgment of whether someone or
something is a self or person rather than a mere automaton. And conversely, whether they are conscious
may also be key to how they value us. The value an AI places on us may well hinge on whether it has an
inner life; using its own subjective experience as a springboard, it could recognize in us the capacity for
conscious experience. After all, to the extent we value the lives of other species, we value them because
we feel an affinity of consciousness—thus most of us recoil from killing a chimp, but not from munching
on an apple.”

“it may be more efficient for a self-improving superintelligence to eliminate consciousness.”

“Consciousness is correlated with novel learning tasks that require attention and focus. A
superintelligence would possess expert-level knowledge in every domain, with rapid-fire computations
ranging over vast databases that could include the entire Internet and ultimately encompass an entire
galaxy. What would be novel to it? What would require slow, deliberative focus? Wouldn’t it have
mastered everything already?”

“The simple consideration of efficiency suggests, depressingly, that the most intelligent systems will not
be conscious. On cosmological scales, consciousness may be a blip, a momentary flowering of experience
before the universe reverts to mindlessness.”

16. What is “the Singularity”?

“The intelligence explosion and the speed explosion are logically independent of each other. In principle
there could be an intelligence explosion without a speed explosion and a speed explosion without an
intelligence explosion… with both speed and intelligence increasing beyond any finite level within a finite
time. This process would truly deserve the name “singularity”.”
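
The speed-explosion half can be made concrete with a toy doubling assumption (illustrative, not a figure from the text): if each machine generation runs twice as fast as the last and therefore designs its successor in half the time, infinitely many generations fit inside a finite interval:

```latex
t_{\text{total}} = \sum_{n=0}^{\infty} \frac{t_0}{2^{n}} = 2\, t_0
```

Speed then increases beyond any finite level within the finite time 2t_0, which is the limiting sense in which the process “would truly deserve the name ‘singularity’”.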

17. Does Dave Chalmers think the Singularity is likely? Why or why not?

“My own view is that the history of artificial intelligence suggests that the biggest bottleneck on the path
to AI is software, not hardware”

“I would be surprised if there were human-level AI within the next three decades. Nevertheless, my
credence that there will be human-level AI before 2100 is somewhere over one-half”

“(i) The human brain is a machine.
(ii) We will have the capacity to emulate this machine (before long).
(iii) If we emulate this machine, there will be AI.
(iv) Absent defeaters, there will be AI (before long).”

18. Does Chalmers think “self-uploading” is likely? Why or why not?

“Integration: we become superintelligent systems ourselves.”

“if we are to match the speed and capacity of nonbiological systems, we will probably have to dispense
with our biological core entirely. This might happen through a gradual process through which parts of our
brain are replaced over time, or it might happen through a process of scanning our brains and loading the
result into a computer, and then enhancing the resulting processes”

19. What is “the Uncanny Valley”? (Daniel C. Dennett)

20. Why does Daniel Dennett think that AI designers are engaging in “false advertising”?

“One shift in attitude that would be very welcome is a candid acknowledgment that humanoid
embellishments are false advertising—something to condemn, not applaud. How could that be
accomplished? Once we recognize that people are starting to make life-or-death decisions largely on the
basis of “advice” from AI systems whose inner operations are unfathomable in practice, we can see a
good reason why those who in any way encourage people to put more trust in these systems than they
warrant should be held morally and legally accountable”

21. What is the difference between “celestial” and “organic” ethics for Regina Rini?

“Celestials, including Plato, see morality as somehow ‘out there’ – beyond mere human nature – eternal
and objective. Organics, meanwhile, see morality as a feature of the particular nature of specific moral
beings. Human morality is a product of human nature; but perhaps other natures would have other
moralities. Which perspective we choose makes a big difference to what we ought to do with intelligent
machines.”

“Who speaks for Celestials? The Enlightenment philosopher Immanuel Kant, for one. According to him,
morality is simply what any fully rational agent would choose to do. A rational agent is any entity that’s
capable of thinking for itself and acting upon reasons, in accordance with universal laws of conduct. But
the laws themselves don’t just apply to human beings.”

“The Celestial view, then, suggests that we should instil human morality in artificially intelligent creatures
only on one condition: if humans are already doing a good enough job at figuring out the truth of
universal morality ourselves. If we’re basically getting it right, then robots should be like us. But if we’re
getting it wrong, they shouldn’t: they should be better.”

22. Why does Rini think that a machine's ability to beat a human being at Go could have troubling ethical
implications?

“We might discover that intelligent machines think about everything, not just Go, in ways that are alien to
us. You don’t have to imagine some horrible science-fiction scenario, where robots go on a murderous
rampage. It might be something more like this: imagine that robots show moral concern for humans, and
robots, and most animals… and also sofas. They are very careful not to damage sofas, just as we’re careful
not to damage babies. We might ask the machines: why are you so worried about sofas? And their
explanation might not make sense to us, just as AlphaGo’s explanation of Move 37 might not make
sense.”

23. Is Rini's comparison of AI systems to human teenagers a good one? Why or why not?

“What, then, should robot morality be? It should be a morality fitted to robot nature. But what is that
nature? They will be independent rational agents, deliberately created by other rational agents, sharing a
social world with their creators, to whom they will be required to justify themselves. We’re back where
we started: with teenagers.”

“Intelligent machines will be our intellectual children, our progeny. They will start off inheriting many of
our moral norms, because we will not allow anything else. But they will come to reflect on their nature,
including their relationships with us and with each other. If we are wise and benevolent, we will have
prepared the way for them to make their own choices – just as we do with our adolescent children.”

“What does this mean in practice? It means being ready to accept that machines might eventually make
moral decisions that none of us find acceptable. The only condition is that they must be able to give
intelligible reasons for what they’re doing. An intelligible reason is one you can at least see why someone
might find morally motivating, even if you don’t necessarily agree.
So we should accept that artificial progeny might make moral choices that look strange. But if they can
explain them to us, in terms we find intelligible, we should not try to stop them from thinking this way.
We should not tinker with their digital brains, aiming to reprogramme them. We might try to persuade
them, cajole them, instruct them, in the way we do human teenagers. We should intervene to stop them
only if their actions pose risk of obvious, immediate harm. This would be to treat them as moral agents,
just like us, just like our children. And that’s the right model.”

24. Why do Basl and Schwitzgebel think AI systems are deserving of ethical protection?

“You might think that AIs don’t deserve that sort of ethical protection unless they are conscious – that is,
unless they have a genuine stream of experience, with real joy and suffering. We agree. But now we face
a tricky philosophical question: how will we know when we have created something capable of joy and
suffering? If the AI is like Data or Dolores, it can complain and defend itself, initiating a discussion of its
rights. But if the AI is inarticulate, like a mouse or a dog, or if it is for some other reason unable to
communicate its inner life to us, it might have no way to report that it is suffering.”

“On some views – ‘liberal’ views – for consciousness to exist requires nothing but a certain type of well-
organised information-processing, such as a flexible informational model of the system in relation to
objects in its environment, with guided attentional capacities and long-term action-planning. We might be
on the verge of creating such systems already. On other views – ‘conservative’ views – consciousness
might require very specific biological features, such as a brain very much like a mammal brain in its low-
level structural details: in which case we are nowhere near creating artificial consciousness.”

“if a liberal view is correct, we might soon be creating many subhuman AIs who will deserve ethical
protection. There lies the moral risk.”

“we have a chance to do better. We propose the founding of oversight committees that evaluate cutting-
edge AI research with these questions in mind. Such committees, much like animal care committees and
stem-cell oversight committees, should be composed of a mix of scientists and non-scientists – AI
designers, consciousness scientists, ethicists and interested community members. These committees will
be tasked with identifying and evaluating the ethical risks of new forms of AI design, armed with a
sophisticated understanding of the scientific and ethical issues, weighing the risks against the benefits of
the research.”
