Preface
VOLUME 8
Editor-in-chief
Pieter Vermaas, Delft University of Technology, the Netherlands.
Editors
David E. Goldberg, University of Illinois at Urbana-Champaign, USA.
Evan Selinger, Rochester Institute of Technology, USA.
Ibo van de Poel, Delft University of Technology, the Netherlands.
The ultimate aim of this volume is to further the philosophical reflection on technology
within the context of Luciano Floridi’s philosophy of technology. Philosophical
reflection on technology is as old as philosophy itself, dating back to the Ancient
Greek philosophers. The themes that have dominated the philosophical discourse on
technology since then can be roughly categorized into three: (i) the social, cultural,
and political impacts of technological developments; (ii) the epistemological status
of technological knowledge, especially in relation to scientific knowledge; and (iii)
the ontological status of the products of technology, i.e., technological artifacts.
Luciano Floridi’s philosophy of technology, which is based on his philosophy of
information, has something to say about each of these themes. Not only that, his
philosophical analysis of new technologies leads to a novel metaphysical framework
in which our understanding of the ultimate nature of reality shifts from a materialist
one to an informational one, in which all entities, be they natural or artificial, are
analyzed as informational entities (Floridi 2010). This is the main rationale behind
choosing his philosophy of technology as the topic of this volume.
There is no doubt that the information and communication technologies of the
twentieth century have had a significant impact on our daily lives. They have brought
new opportunities as well as new challenges for human development. According to
Floridi, however, this is not the whole story. He claims that these new technologies
have led to a revolutionary shift in our understanding of humanity’s nature and its
role in the universe. By referring to an earlier categorization, he calls this the “fourth
revolution.” The Copernican revolution was the first, leading to the understanding
that we as humans are not at the center of the universe. The second revolution was
the Darwinian realization that we are not unnaturally distinct or different from the rest
of the animal world. The third was the Freudian revolution, which taught us that we are
not as transparent to ourselves as we once thought. With the fourth revolution, says
Floridi, “we are now slowly accepting the idea that we might be informational organ-
isms among many agents …, inforgs not so dramatically different from clever, engi-
neered artefacts, but sharing with them a global environment that is ultimately made
of information, the infosphere. The information revolution [the fourth revolution] is
not about extending ourselves, but about re-interpreting who we are” (Floridi 2008a).
This radical claim forms the basis of Floridi’s philosophy of technology. Given
this basis, philosophical reflection on technology is not only valuable in and of
itself, but also brings a completely new framework of analysis for philosophy.
In other words, philosophical reflection on technology takes a central role in
philosophical analysis. To give an example, Floridi’s analysis of object-oriented
programming methodology (Floridi 2002), which relies on a method borrowed from
a branch of theoretical computer science called Formal Methods, paves the way for
defining a new macroethical theory, i.e., Information Ethics. The method he borrows
from Formal Methods is the method of levels of abstraction. By using this method,
Floridi claims that the moral evaluation of human actions is not different in kind
from the moral evaluation of other informational objects. The idea behind the method of
levels of abstraction is simple and straightforward: reality can be viewed
from different levels. The roots of this simple idea go back to Eddington’s work in
the early decades of the twentieth century (Eddington 1928). Let me give a brief
example in Floridi’s own words:
Suppose, for example, that we interpret p as Mary (p = Mary). Depending on the LoA and
the corresponding set of observables, p = Mary can be analyzed as the unique individual
person called Mary, as a woman, as a human being, as an animal, as a form of life, as a
physical body, and so forth. The higher the LoA, the more impoverished is the set of observ-
ables, and the more extended is the scope of the analysis (Floridi 2002).
Perhaps the most crucial feature of the method of levels of abstraction is that the
identification relation between two variables (or observables) is never absolute.
Rather, the identification is always contextual and the context is a function of the
level of abstraction chosen for the required analysis (Floridi and Sanders 2004a).
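To make this concrete, the idea can be sketched in a few lines of code (a toy construction of my own, not Floridi's formal apparatus): an LoA is modeled simply as a chosen set of observables, and identification holds at an LoA exactly when two entities agree on every observable in that set.

```python
# A toy model of the method of levels of abstraction (illustrative only).
# Entities are bundles of observables; an LoA is a chosen set of observables.
mary = {"name": "Mary", "sex": "female", "species": "human", "alive": True}
john = {"name": "John", "sex": "male", "species": "human", "alive": True}

def identified_at(loa, a, b):
    """Two entities are identified at an LoA iff they agree on all its observables."""
    return all(a[obs] == b[obs] for obs in loa)

# At a lower LoA (a richer set of observables) Mary and John are distinct:
print(identified_at({"name", "sex", "species"}, mary, john))  # False
# At a higher, more impoverished LoA, they are identified (both forms of life):
print(identified_at({"species", "alive"}, mary, john))        # True
```

The point the sketch makes is the one in the quotation above: the higher the LoA, the fewer the observables, and the wider the class of entities that become indistinguishable at it.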
Floridi utilized his method not only in Information Ethics but also in several
other subfields of philosophy. The following quote from his Minds and Machines
article (2008), in which he responded to some objections raised against the method of
levels of abstraction, provides a list of the areas in which the method has been used.
Jeff Sanders and I were forced to develop the method of abstraction when we encountered
the problem of defining the nature of agents (natural, human, and artificial) in Floridi and
Sanders (2004b). Since then, we have been applying it to some long-standing philosophical
problems in different areas. I have used it in computer ethics, to argue in favour of the
minimal intrinsic value of informational objects (Floridi 2003); in epistemology, to prove that
the Gettier problem is not solvable (Floridi 2004c); in the philosophy of mind, to show how an
agent provided with a mind may know that she has one and hence answer Dretske’s question
“how do you know you are not a zombie?” (Floridi 2005a); in the philosophy of science, to
propose and defend an informational approach to structural realism that reconciles forms
of ontological and epistemological structural realism (Floridi 2004b); and in the philosophy of
AI, to provide a new model of telepresence (Floridi 2005b). In each case, the method
of abstraction has been shown to provide a flexible and fruitful approach (Floridi 2008c).
The jury is still out as to the truth value of the claim stated in the last sentence of
this quote. One thing, however, is certain. Floridi’s method borrowed from the
Formal Methods branch of theoretical computer science and its applications have
led to prolific and novel discussions in many different areas of philosophy. For
the purposes of this volume, one of the most important applications of the method
is in computer ethics. As mentioned above, Floridi claims that his Information
such as a software package. The second level is the designer’s perception of the
system. Their ultimate conclusion in that paper is that the ethical responsibilities of
a software designer significantly increase with the development of artificial agents
because of the more intricate relationship between LoA1 and LoA2. In their contri-
bution to this volume, they extend their original analysis by introducing a third level
of abstraction, LoAS, the level that refers to society’s perspective. This is important
because new artificial agents not only have effects on individuals but also on the
whole society that comprises those individuals. With this new addition, they test the
applicability of Floridi’s Information Ethics and the method of levels of abstraction
to two new computing paradigms: cloud computing and quantum computing. Their
overall conclusion is a positive one. They claim that although there are new chal-
lenges for Information Ethics in these two computational paradigms, Information
Ethics has the potential of successfully meeting those challenges. It should be noted
that their chapter also provides a nice and brief overview of the fundamental
concepts of quantum computing.
Lucas’ chapter is an extensive and detailed criticism of Information Ethics. He
criticizes three notions that form the fundamentals of Floridi’s theory, which are
interactivity, autonomy, and adaptability. Lucas’ ultimate conclusion is that Information
Ethics, mainly because it is only formally defined, is too artificial and
too simple for a natural characterization of morality. Although Floridi thinks that
Lucas’ understanding of Information Ethics is based on serious misunderstandings
and that Lucas’ chapter is beyond repair (please see Floridi’s reply at the end of this
volume), the chapter paves the way for a closer scrutiny of some of the arguments
that Floridi has provided in defense of Information Ethics. An example might be
helpful at this point. The essential motivation of Information Ethics is to be able to
count artificial agents as moral agents. It should be noted that this essential motiva-
tion is somewhat different than the motivation behind the earlier characterizations
of computer and information ethics. Moor (1985) is a good example of the classic
treatment of the subject. In one of their earlier characterizations of Information
Ethics, Floridi and Sanders consider a set of possible objections to their main claim
about the moral value of artificial agents. These are the teleological objection, the
intentional objection, the freedom objection, and the responsibility objection. They
then provide counterarguments against those objections. Lucas thinks that none of
these counterarguments sufficiently overcome the four possible objections that
Floridi and Sanders consider. Of course, whether Lucas is right in his assessment or
not is a matter of debate, but Lucas’ reasoning urges us to reevaluate the fundamental
arguments provided for the philosophical value of Information Ethics. In that
respect, it is a valuable contribution to this volume.
Russo, in her chapter, focuses on one particular aspect of Floridi’s Information
Ethics, the reconciliation of physis and techne in a constructionist manner. According
to Floridi, traditional macroethical theories take the situation which is bound to
moral evaluation as given, but this traditional approach ignores the poietic nature
of humans as ethical agents. Ignoring the poietic nature of humans is the ultimate
basis of the dichotomy between physis and techne (Floridi and Sanders 2003).
The demarcation line between these two has been disappearing because of digital
technologies. Russo agrees with Floridi’s analysis and attempts to take the analysis
one step further. For Russo, the gradual disappearance of the demarcation line
between physis and techne is not just a result of the new digital technologies; rather,
it is driven by new technologies in general. These new technologies, including
biotechnology and nanotechnology, are, in her words, “creating altogether new
environments that pose new challenges for the understanding of us in the world.”
Floridi’s Information Ethics, according to Russo, successfully accounts for the ethical
implications of these new technologies, but, she continues, the epistemological
implications are also at least equally important and need to be analyzed. This is what
she aims to achieve in her chapter. In that respect, it would not be wrong to say
that Russo takes Floridi’s original analysis of digital technologies and applies it to
a wider domain.
The two chapters in Part II provide novel ways of categorizing scientific and
technological advancements on the basis of metrics different from Floridi’s metric,
which is based on introverted effects of scientific changes on the way we understand
human nature. These are Anthony F. Beavers’ “In the Beginning Was the Word
and Then Four Revolutions in the History of Information” and Valeria Giardino’s
“I Mean It! (And I Cannot Help It): Cognition and (Semantic) Information.”
Beavers, in his chapter, gives us a different categorization of the technological
revolutions that mankind has experienced in its entire history. As mentioned above,
Floridi’s categorization of the information revolution as the fourth revolution is
based on the metric of the way scientific developments change our understanding
of ourselves. Thus, according to this metric, scientific developments that have led
to a reassessment of humanity’s fundamental nature and role in the universe are
counted as revolutionary. Of course, as Floridi himself states, other metrics are also
possible. In his chapter, Beavers offers a different metric that is not supposed to be
an alternative to Floridi’s metric, but rather complementary. The suggested metric
is the history of information flow itself. In other words, technological and scientific
advancements are categorized according to “the kind of information that can be
stored and transmitted, the speed of information transmission, its preservation, and
its reach.” This metric also gives us four revolutions: the Epigraphic Revolution,
the Printing Revolution, the Multimedia Revolution, and the Digital Revolution.
The last one, which corresponds to Floridi’s fourth revolution, is characterized by
the introduction of automated information processing. There are two interesting
features of Beavers’ categorization that I would like to mention in this short preface.
The first is that in his categorization, the Digital Revolution is not considered as a
discontinuity from the previous revolutions, because information transmission and
coding were also present, albeit in different forms, in the previous revolutions.
What the Digital Revolution has brought to the table is new and revolutionary
technological affordances that are made possible by automated information pro-
cessing. This interesting feature, perhaps, is what fundamentally differentiates
Beavers’ categorization from Floridi’s categorization. The second point is that the
trajectory of the history of information flow is not characterized merely by the
evolution of particular technologies, but also by the evolution of the informational
networks that those particular technologies enable. After establishing his new
(OLPC) program as an example of the top-down approach and shows the difficulties
involved in changing educational practices in this way. According to Pasquinelli,
change from the top is difficult, mainly because of the sheer size of educational
institutions and the long tradition of educational paradigms and practices. A second
reason, which is clearly seen in the OLPC case, is that top-down changes usually do
not include students, who are the ultimate users of education, in the design of chang-
ing programs. Then she proceeds to give an example of a bottom-up approach that
she claims to be more promising. Her fascinating example is the experience of Math
on MXit from South Africa. With this example, she urges educational institutions
and actors to implement the new technologies from the bottom up. The ultimate
goal of such changes, for her, is to turn students into infostudents and teachers into
infoteachers. During this transformation, which will be slow and gradual, she says,
the old paradigms of education will be challenged because of the new tools and
approaches of the information revolution. As the dominant example of the old
educational paradigms, she gives the Victorian school, which was defined by the
following three characteristics: (i) a dedicated and separated space for learning,
(ii) a dedicated time for learning, and (iii) well-defined roles for the learner and the
teacher. With the Internet, mobile phones, and digital media, she says, learning
could occur anywhere and anytime. Moreover, the demarcation line between the
student and the teacher will be blurred to the point of disappearance. In short,
Pasquinelli’s chapter is an informative and fascinating one in which she urges us to
reontologize and reconceptualize our environment for education.
Cohen-Almagor, in his chapter, uses Floridi’s Information Ethics in order to
identify the democratic regulative principles of freedom of speech on the Internet
and the responsibility of Internet Service Providers and Web Hosting Services. He
starts his analysis by distinguishing three different senses of “net neutrality”: (i) net
neutrality as a nonexclusionary business practice; (ii) net neutrality as an engineer-
ing principle, allowing traffic on the Internet in a nondiscriminatory manner; and
(iii) net neutrality as content nondiscrimination. He calls the third sense Content Net
Neutrality. Although he accepts the first two senses as the fundamental principles
that should underlie Internet regulation, he rejects Content Net Neutrality. Following
Floridi’s proactive approach to Ethics, which states that the ethical obligation in the
information age is not limited to ethical behaviors in the infosphere but needs to
extend to actively shaping the infosphere for the betterment of humanity,
Cohen-Almagor urges us to regulate the available content on the Internet. He argues that
content that is morally repugnant and/or at odds with democratic ideals should not
be made available on the Internet, and that the primary responsibility for this lies
with Internet Service Providers and Web Hosting Services. Throughout his discus-
sion, he uses several striking examples that seem to support his position.
As Silva and Ribeiro point out, Information Science appeared as an autonomous
field of study in the late 1950s. Since then, this new field of inquiry, which
could be seen as a continuation of the library sciences, has seen an immense and
rapid growth. Despite this rapid growth, however, its nature has not yet been pre-
cisely defined. This is perhaps due to the inherently interdisciplinary character of
the field. Most interdisciplinary fields, for example Cognitive Science, have gone
through a similar stage of development. Silva and Ribeiro, in their chapter, provide
an all-encompassing framework for the nature and identity of Information Science.
In their framework, Information Science is “a unitary yet transdisciplinary field of
knowledge, included in the overarching area of the human and social sciences,
which gives theoretical support to some applied disciplines such as Librarianship,
Archivistics, Documentation and some aspects of Technological Information
Systems.” After providing their framework, they turn to Floridi’s Philosophy of
Information with the aim of finding a firm philosophical grounding for Information
Science. While doing that, they state their own definition of information, which
implies the following properties: structured by an action, dynamically integrated,
possessing potentiality, quantifiable, reproducible, and transmissible. Their definition of
information has some differences from Floridi’s definition of semantic information.
Perhaps one of the crucial differences is their distinction between informational
data and noninformational data. The analysis of the differences and similarities
between their definition of information and Floridi’s semantic information is by
itself valuable. Moreover, along the way they also bring together different threads of
discussions, ranging from the French philosopher Ruyer’s work on visual sensation
to Søren Brier’s Cybersemiotics. Given their analysis of Information Science and
the connections they identify between Information Science and Philosophy of
Information, it is plausible to conclude that Information Science could be under-
stood as applied Philosophy of Information.
The main focus in Part IV is the epistemic and ontic aspects of Floridi’s Philosophy
of Information. The contributions here are Eric T. Kerr and Duncan Pritchard’s
“Skepticism and Information,” Joseph E. Brenner’s “Levels of Abstraction; Levels of
Reality,” and Steve T. McKinlay’s “The Floridian Notion of the Information
Object.”
It is almost a truism to say that information should be “adequately created, pro-
cessed, managed and used” (Floridi 2010). The bombardment of information that
we all face in this day and age requires proper information management. As rightly
pointed out by Kerr and Pritchard, proper information management requires paying
attention to the connection between information and knowledge. After all, informa-
tion is valuable as long as it paves the way for the acquiring of knowledge. In their
chapter, Kerr and Pritchard focus on this important issue, i.e., the epistemic value of
information. One of the milestones in the literature on the epistemic value of
information is Dretske’s book Knowledge and the Flow of Information, in which a
comprehensive epistemology based on information is provided. One of the controversial
features of Dretske’s framework is its denial of the principle of epistemic closure,
which simply states that if an agent knows a proposition and knows that the proposi-
tion in question implies another one, then the agent also knows the implied
proposition. Dretske’s main reason behind the denial of closure is that, for him,
information about appearances can never completely rule out skeptical doubts. Kerr
and Pritchard claim that Dretske is wrong and that there are ways in which informa-
tion could address skeptical doubts. They examine two such ways in their chapter:
Ram Neta’s contextual approach and John McDowell’s disjunctivism. Kerr and
Pritchard’s chapter is valuable in and of itself. Moreover, it opens doors for a different
References
Eddington, Arthur. 1928. The nature of the physical world. Cambridge: Cambridge University Press.
Floridi, Luciano. 2002. On the intrinsic value of information objects and the infosphere. Ethics and
Information Technology 4(4): 287–304.
Floridi, Luciano. 2003. On the intrinsic value of information objects and the infosphere. Ethics and
Information Technology 4(4): 287–304.
Floridi, Luciano. 2004b. The informational approach to structural realism. Final draft available as
IEG Research Report 22.11.04. http://www.wolfson.ox.ac.uk/~floridi/pdf/latmoa.pdf
Floridi, Luciano. 2004c. On the logical unsolvability of the Gettier problem. Synthese 142(1):
61–79.
Floridi, Luciano, and Jeffry W. Sanders. 2004a. On the morality of artificial agents. Minds and
Machines 14(3): 349–379.
Floridi, Luciano, and Jeffry W. Sanders. 2004b. On the morality of artificial agents. Minds and
Machines 14(3): 349–379.
Floridi, Luciano. 2005a. Consciousness, agents and the knowledge game. Minds and Machines.
15(3–4): 415–444.
Floridi, Luciano. 2005b. Presence: From epistemic failure to successful observability. Presence:
Teleoperators and Virtual Environments 14(6): 656–667.
Floridi, Luciano. 2008a. Artificial intelligence’s new frontier: Artificial companions and the fourth
revolution. Metaphilosophy 39(4/5): 652–654.
Floridi, Luciano. 2008b. A defence of informational structural realism. Synthese 161(2): 219–253.
Floridi, Luciano. 2008c. The method of levels of abstraction. Minds and Machines 18: 303–329.
Floridi, Luciano. 2010. The philosophy of information. Oxford: Oxford University Press.
Floridi, Luciano, and Jeffry W. Sanders. 2003. Internet ethics: The constructionist values of
Homo poieticus. In The impact of the internet on our moral lives, ed. R. Cavalier, 195–214.
New York: SUNY.
Moor, James. 1985. What is computer ethics? Metaphilosophy 16(4): 266–275.
Wiener, Norbert. 1948. Cybernetics: or control and communication in the animal and the machine.
New York: Technology Press/Wiley.
Wolf, M.J., K. Miller, and F.S. Grodzinsky. 2009. On the meaning of free software. Ethics and
Information Technology 11(4): 279–286.
Gordana Dodig-Crnkovic
1.1 Introduction
There are, however, “correct accounts” that may complement and reinforce each other, like
stones in an arch. Floridi (2008a, b, c, d)
Ten years after the introduction of Information Ethics (IE), which is an integral part
of the Philosophy of Information (PI) (Floridi 1999, 2002), Floridi’s contribution to
the subsequent production of knowledge in several research fields has been
reviewed. Among others, two recent special journal issues dedicated to Floridi’s
work, Ethics and Information Technology, Vol. 10, No. 2–3, 2008, edited by Charles
Ess, and Metaphilosophy, Vol. 41, No. 3, 2010, edited by Patrick Allo, bear witness to
the vitality of his PI research program. It is far from a closed chapter in the history
of philosophy. On the contrary, it is of great interest for many researchers today, and
its development can be expected to contribute to the elucidation of a number of
central issues introduced or enhanced by Information and Communication
Technologies, ICT.
For IE, moral action is an information processing pattern. It focuses on the fun-
damentally informational character of reality (Floridi 2008a) and our interactions
with it. According to Floridi, ICTs create our new informational habitat “consti-
tuted by all informational entities (such as informational agents, their properties,
interactions, processes and mutual relations)” which is an abstract equivalent of an
eco-system. IE is thus a generalization of environmental ethics towards a:
– less anthropocentric concept of agent, including non-human (artificial) and
distributed (networked) entities
G. Dodig-Crnkovic (*)
School of Innovation, Design and Engineering, Computer Science Laboratory,
Mälardalen University, Västerås, Sweden
e-mail: gordana.dodig-crnkovic@mdh.se
produce – through mutual interactions – a collective state that in its turn influences the
behavior of each of the bottom-state elements. It should be emphasized that this mech-
anism, though exhibiting circularity, does not produce “vicious circles” as it stands in
a continuous interaction with the environment which provides variation.1
The explication of the role of IE is based on the following Info-Computational
elements:
1. Ontology is informational; the fabric of reality is (proto) information.
(Informational Structural Realism, (Floridi 2008a))
2. Being is process of (natural) computation = Being is information processing,
based on natural computing, which is both digital and analog. (Pancomputationalism
2009)
3. Information (structure) and computation (process) are two basic complementary
concepts that constitute dual-aspect ontology.
4. Informational structures are physical; there is no information without physical
implementation.
5. Based on physical laws, informational structures interact, evolve, and build more
and more complex constellations, especially in intelligent living organisms that
use “raw information”/(proto) information from the world to construct knowl-
edge and form decisions. (Info-Computational Naturalized Epistemology
(Dodig-Crnkovic 2008))
6. Ethical norms are among mechanisms that humans have developed in order to
provide guidance in decision making and conduct. They can be understood as a
result of successive evolution of info-computational structures in goal-driven liv-
ing organisms.
7. Informational structures constitute complex systems which can be analyzed on
different levels of organization/levels of description/levels of abstraction. IE is
the first ethical approach focused on the fundamental level of information.
The above is based on the following fundamental principles, defined in
Dodig-Crnkovic and Müller (2010):
(IC1) The ontologically fundamental entities of the physical reality are information
(structure) and computation (change).
(IC2) Properties of a complex physical system cannot be derived solely from the
properties of its components. Emergent properties must be taken into account.
(IC3) Change of informational structures is governed by laws.
(IC4) The observer is a part of the system observed.
1 Among physical systems, living organisms are known to use this type of mechanism in diverse
contexts, such as metabolism, reproduction, growth and the like. On a theoretical level, Computing,
with Computer Science as its subset, presents a rich source of examples of self-referential, circular
systems that are not vicious, but perform intelligible functions, e.g. program loops, fractals and
other recursive functions.
One of the most important insights of PI and IE is their explicit addressing of different
Levels of Abstraction/Levels of Organization/Levels of Description in analysis:
LoAs are teleological, or goal-oriented. Thus, when observing a building, which LoA one
should adopt – architectural, emotional, financial, historical, legal, and so forth – depends
on the goal of the analysis. There is no “right” LoA independently of the purpose for which
it is adopted, in the same sense in which there is no right tool independently of the job that
needs to be done. (Floridi 2008a, b, c, d)
Some critics feel uneasy with the Levels of Abstraction for fear of ethical relativism,
but the fear is unfounded. Defining the Level of Abstraction adds to our understanding
1 Floridi’s Information Ethics as Macro-ethics… 7
Among the criticisms of IE, Capurro (2008) focuses on the intrinsic value of
informational objects; Brey (2008) proposes modifying IE from a value-based
into a respect-based theory in order to agree with the received view that
“inanimate things in the world deserve moral respect, not because of intrinsic
value, but because of their (potential) extrinsic, instrumental or emotional value
for persons”; while Søraker (2007) proposes attributing relational value to
informational objects, drawing the distinction between intrinsic, relational, and
instrumental value. All these critiques point towards humans as the nexus of our
ethical interest, which PI was from the outset constructed to avoid:
IE adopts this informational ontology (or better: the corresponding LoA) as a minimal
common denominator that unifies all entities. (Floridi 2008a, b, c, d)
This article concerns systems of humans and intelligent adaptive artifacts, and in
the first place the problem of (moral) responsibility distribution. It argues that for all
practical purposes, moral responsibility in autonomous intelligent systems is best
handled as a regulatory mechanism, with the aim of assuring their desirable behavior.
“Responsibility” is thus ascribed to an intelligent artifact in much the same way
as “intelligence”, and it is considered to be a matter of degree. We will expect a
(morally) responsible artifactual intelligent agent to behave in a way that is
traditionally thought to require human (moral) responsibility.
In order to make the point about artificial moral agency, Grodzinsky et al. (2008)
adopt the concept of Levels of Abstraction and discuss the difference between artificial
agents whose behavior is completely defined by their designers and agents able to
learn and adapt, changing their own programs autonomously. They conclude that
designers and other concerned stakeholders must maintain responsibility for those
artifacts, no matter how autonomous they may be. This conclusion should not
come as a surprise. The question Grodzinsky, Miller and Wolf ask, “Can an artificial
agent that changes its own programming become so autonomous that the original
designer is no longer responsible for the behavior of the artificial agent?”, gets an
obvious answer in the perspective of distributed responsibility discussed in detail
later on. Such an artificial agent, with an artifactual equivalent of “free will”, cannot be
more autonomous than a human within a techno-social system. Even though humans
have free will and autonomy, there is a distribution of responsibility in a system.2
Again, the idea of building moral responsibility into artificial agents does not mean
leaving those agents outside of techno-social control.
One of the central concepts in this context is the concept of agent. Unlike Himma
(2009), who concludes his essay with the claim that artificial moral agency is possible
if it is possible for ICTs to be conscious, in the field of Agent-Based Modeling
(http://www.scholarpedia.org/article/Agent_based_modeling) agents are taken
to include much simpler entities. Agent-Based Modeling (ABM) is individual-based
modeling of a phenomenon as a system of interacting agents (actors) such
that agents have internal states.3 Humans may in this context be seen as highly
complex agents.
Agents in general may be as simple as cellular automata, but they may also have
random-access memory, i.e., they can interact with the environment beyond concurrent
state communication by using memory to save representations of the environment.
Members of an agent society can share information and knowledge. Such agents are
dynamically incoherent, as their next state depends not only on the previous
state but also on their memory (which keeps the same value until it is accessed).
Agent interactions can be local, global or intermediate (small-world network). The
system evolves over time, and since agents behave individually in parallel, interactions
are generally asynchronous.4 ABMs are powerful modeling tools which relate
Artificial Life, Game Theory and Artificial Intelligence, and in this context they are
useful in the study of ethics in IE applications.
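The agent architecture described above can be made concrete with a toy model (our own illustrative sketch, not code from the literature discussed here; all names are hypothetical): each agent carries a discrete internal state and a memory cell, updates asynchronously in random order, and interacts locally with a neighbor on a ring.

```python
import random

class Agent:
    """Minimal ABM agent: a discrete internal state plus a memory cell.

    Because the next state depends on memory as well as on current
    states, the agent is dynamically incoherent in the sense used above.
    """
    def __init__(self, state):
        self.state = state    # internal state: a discrete variable
        self.memory = state   # saved representation of the environment

    def step(self, neighbor):
        # Local interaction: observe one neighbor's current state and
        # combine it with the remembered past observation.
        new_state = (self.state + neighbor.state + self.memory) % 2
        self.memory = neighbor.state  # memory keeps this value until read
        self.state = new_state

def run(n_agents=10, n_steps=5, seed=0):
    rng = random.Random(seed)
    agents = [Agent(rng.randint(0, 1)) for _ in range(n_agents)]
    for _ in range(n_steps):
        # Asynchronous updates: agents act one at a time, in random
        # order, each interacting with its right-hand ring neighbor.
        for i in rng.sample(range(n_agents), n_agents):
            agents[i].step(agents[(i + 1) % n_agents])
    return [a.state for a in agents]
```

Replacing the update rule or the interaction topology (local ring, global, or small-world) changes the collective dynamics, which is precisely what makes ABMs useful for studying patterns of interaction among agents.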
2 As long as artifacts are under human control, such as GPS devices, we have no problem following
their commands. But what kind of assurance do we need when artifacts with superior cognitive
capacities have their own agenda? I believe that we will get successively better insights into that
issue as we enhance our own cognitive capacities through distributed cognition in networks of
biological and artificial agents.
3 Internal states are represented by discrete or continuous variables.
4 In ABM, both time and space can be discrete or continuous.
10 G. Dodig-Crnkovic
5 Unlike the micro-ethical level, where one considers what an individual should do, at the
macro-ethical level the question is what macro-systems, such as political institutions, corporations
or professional organizations, should do.
When predicting global development we have to take into account that while we
are changing technology, technology in its turn is changing us (Becker 2006; Russell
and Norvig 2003). The next question is what happens when the cognitive capabilities of
autonomous intelligent artifacts surpass those of humans. Will we have any
need, or indeed any means, to control such systems? It is good to address these issues
as we are developing new intelligent and autonomous learning technologies and
anticipating their future advance.

6 Vol. 6 of the International Review of Information Ethics (IRIE, 2006) is dedicated to the Ethics
of Robotics; see http://www.i-r-i-e.net/archive.htm
While Roboethics focuses on phenomena at the level of traditional
applied ethics, relying on already existing insights in a sense close to Deborah
Johnson’s views, Floridi’s Information Ethics allows analysis beyond traditional
approaches. Operating at an informational level, IE makes it possible to search for the
underlying mechanisms – patterns and processes – that in a network of agents result in a
certain behavior. As already mentioned while discussing agent-based models, IE is
not only applicable to the modeling of artificial agent networks, but also includes
the possibility of modeling human behavior. Artificial agents are just the first and
simplest application, which may be made in a straightforward way.
Since IE uncovers an underlying layer of reality, its goal in ethical praxis may
be seen not as excluding existing ethical theory and practice, but as helping us to
understand the fine structure of phenomena. Next I will try to establish the
relationship between IE, Roboethics and other classical applied ethics. In particular,
I will focus on the issues of trust and responsibility as understood in different
frameworks.
Questions of the intentionality (Dennett 1994) and free will of an agent are difficult to
address in practical engineering circumstances, such as the development and use of
intelligent adaptive robots and softbots. Dennett and Strawson suggest
that we should understand moral responsibility not as an individual duty but instead
as a role defined by externalist pragmatic norms of a group (Dennett 1973; Strawson
1974). We will also adopt a pragmatic approach, closer to actual robotic applications,
where the question of free will is not the main concern. Moral responsibility
can best be seen as a social regulatory mechanism which aims at enhancing actions
considered to be good while simultaneously minimizing what is considered to be bad.
“Responsibility” can thus be ascribed to an intelligent artifact in much the same way
as “intelligence”. Dodig-Crnkovic and Persson (2008) and Adam (2008) both emphasize
the parallel between artificial intelligence and artificial morality.
Artificial/artifactual intelligence is the ability of artificial agents to accomplish
tasks that are traditionally thought to require human intelligence.
In the same way, we define artificial/artifactual morality as the ability of an artificial
agent to behave in a way that is traditionally thought to require human morality.
Does this mean that artifactual intelligence is the same thing as human intelligence?
No. It merely produces the same behavior and solves the same problems.
And that is why we build intelligent systems: we want them to solve problems for
us. As they become more and more intelligent and autonomous, we want them to
behave in accordance with our value systems and ethical norms.
We take the instrumental approach that while full-blown moral agency may be beyond the
current or future technology, there is nevertheless much space between operational moral-
ity and “genuine” moral agency. This is the niche we identified as functional morality.
(Wallach and Allen 2009)
7 Floridi (2008b) does not talk about the responsibility but rather the accountability of artificial
agents.
8 Coleman, K.G., Computing and moral responsibility. The Stanford Encyclopedia of Philosophy
(Spring 2005 Edition), Edward N. Zalta (ed.). Available: http://plato.stanford.edu/archives/
spr2005/entries/computing-responsibility/
between different actors in the system (Adam 2005). Commonly, the distribution of
responsibility for the production and use of a system can be seen as a kind of
contract, a manual of operations, which specifies how and under what circumstances
the artifact or system should be used (Matthias 2004). This clear distinction between
the responsibilities of the producer and the user was historically useful, but with
the increased distribution of responsibilities throughout a socio-technological system,
the distinction becomes less clear-cut. The production and use of intelligent systems
have increased the difficulty, as the intelligent artifacts themselves display autonomous,
morally significant behavior, which has led to a discussion about the possibility of
ascribing moral responsibility to machines; see Matthias (2004), Johnson (2006),
Floridi and Sanders (2004a, b) and Stahl (2004). Many of the practical issues in
determining responsibility for decisions and actions made by intelligent systems
will probably follow already existing models that are now regulated by product
liability laws (Stahl 2004). It is doubtful, however, whether this approach will be enough,
and alternative ways of looking at responsibility for the production and use of
intelligent systems may be needed (Stahl 2006).
In sum, having a system which “takes care” of certain tasks intelligently, learns
from experience and makes autonomous decisions gives us good reasons to talk
about a system as being “responsible for a task”. Technology is morally significant
for humans, so the responsibility for a task with moral consequences could be seen
as moral responsibility. The consequential responsibility which presupposes moral
autonomy will, however, be distributed through the system.
Numerous interesting questions arise when the issue of artificial agents capable
of moral responsibility in the classical sense is addressed by defining autonomous
ethical rules for their behavior. These are the issues addressed within the field of Machine
Ethics (Moor 2006), which includes developing ethical rules of behavior for, e.g.,
softbots, an endeavor that seems both useful and practical.
When it comes to practical applications, based on experience with safety-critical
systems such as aerospace, transportation systems and nuclear power, one
can say that the socio-technological structure which supports their functioning
consists of safety barriers preventing and mitigating malfunction. The central
and most important part is to assure the safe functioning of the system under
normal conditions, complemented by preparedness for mitigating abnormal or
accidental conditions. There are several levels of organizational and physical
barriers ready to cope with different levels of severity of malfunction (Dodig-
Crnkovic 1999).
Handling risk and uncertainty in the production of a safety-critical technical
system is done on several levels. Producers must take into account everything from
technical issues, through issues of management and of anticipating use and effects,
to larger issues on the level of societal impact (Huff 2004; Asaro 2007). The central
ethical concerns for engineers are: “How to evaluate technologies in the face of
uncertainty?” and “How safe is safe enough?” (Shrader-Frechette 2003; Stamatelatos
2000; Larsson 2004).
Any technology subject to uncertainty and with a potentially high impact on
human society is expected to be handled cautiously, and intelligent systems surely
fall into this category, where the precautionary principle (Montague 1998) applies.
Thus, preventing harm, and bearing the burden of proof of harmlessness, is something
that producers of intelligent systems are responsible for. An analogy might be
a state sending an army to a battlefield, where responsibility is organized hierarchically,
with the highest responsibility at the top of the hierarchy, but which includes the
responsibilities of each and every soldier, be they humans or artifacts.
1.8 Conclusions
Acknowledgements The author wants to thank Mark Coeckelbergh for insightful comments on
earlier versions of this paper.
References
Adam, Alison. 2005. Delegating and distributing morality: Can we inscribe privacy protection in a
machine? Ethics and Information Technology 7: 233–242.
Adam, Alison. 2008. Ethics for things. Ethics and Information Technology 10(2–3): 149–154.
Arkin, Ronald C. 1998. Behavior-based robotics. Cambridge: MIT Press.
Asaro, Peter M. 2007. Robots and responsibility from a legal perspective. In Proceedings of the IEEE
2007 international conference on robotics and automation, Workshop on RoboEthics, Rome.
Becker, Barbara. 2006. Social robots – emotional agents: Some remarks on naturalizing man-machine
interaction. International Review of Information Ethics 6: 37–45.
Becker, Barbara. 2009. Social robots – emotional agents: Some remarks on naturalizing man-
machine interaction. In Ethics and robotics, ed. R. Capurro and M. Nagenborg. Amsterdam:
IOS Press.
Brey, Philip. 2008. Do we have moral duties towards information objects? Ethics and Information
Technology 10(2–3): 109–114.
Capurro, Rafael. 2008. On Floridi’s metaphysical foundation of information ecology. Ethics and
Information Technology 10(2–3): 167–173.
Coeckelbergh, Mark. 2010. Moral appearances: Emotions, robots, and human morality. Ethics and
Information Technology 12(3): 235–241. ISSN 1388-1957.
Coleman, K.G. 2005. Computing and moral responsibility. In The Stanford encyclopedia of phi-
losophy, Spring edn, ed. Edward N. Zalta. Stanford: Stanford University. Available: http://
plato.stanford.edu/archives/spr2005/entries/computing-responsibility/
Crutzen, C.K.M. 2006. Invisibility and the meaning of ambient intelligence. International Review
of Information Ethics 6: 52–62.
Danielson, Peter. 1992. Artificial morality virtuous robots for virtual games. London: Routledge.
Dennett, Daniel C. 1973. Mechanism and responsibility. In Essays on freedom of action, ed.
T. Honderich. Boston: Routledge/Keegan Paul.
Dennett, Daniel C. 1994. The myth of original intentionality. In Thinking computers and virtual
persons: Essays on the intentionality of machines, ed. E. Dietrich, 91–107. San Diego/London:
Academic.
DIRC project. http://www.comp.lancs.ac.uk/computing/research/cseg/projects/dirc/projectthemes.
htm (accessed October 26, 2010).
Dodig-Crnkovic, Gordana. 1999. ABB atom’s criticality safety handbook, ICNC’99 sixth interna-
tional conference on nuclear criticality safety, Versailles, France. http://www.idt.mdh.se/
personal/gdc/work/csh.pdf (accessed October 26, 2010).
Dodig-Crnkovic, Gordana. 2005. On the importance of teaching professional ethics to com-
puter science students. In Computing and philosophy, Computing and philosophy confer-
ence, E-CAP 2004, Pavia, Italy, ed. L. Magnani. Pavia: Associated International Academic
Publishers.
Dodig-Crnkovic, Gordana. 2006a. Investigations into information semantics and ethics of computing.
Västerås: Mälardalen University Press. http://mdh.divaportal.org/smash/get/diva2:120541/
FULLTEXT01 (accessed October 26, 2010).
Dodig-Crnkovic, Gordana. 2006b. Professional ethics in computing and intelligent systems.
In Proceedings of the ninth Scandinavian Conference on Artificial Intelligence (SCAI 2006),
Espoo, Finland, October 25–27.
Dodig-Crnkovic, Gordana. 2008. Knowledge generation as natural computation. Journal of
Systemics, Cybernetics and Informatics 6: 12–16.
Dodig-Crnkovic, Gordana. 2009. Information and computation nets. Saarbrücken: VDM Verlag.
Dodig-Crnkovic, Gordana. 2010. The cybersemiotics and info-computationalist research pro-
grammes as platforms for knowledge production in organisms and machines. Entropy 12:
878–901. http://www.mdpi.com/1099-4300/12/4/878 (accessed October 26, 2010).
Dodig-Crnkovic, Gordana, and Margaryta Anokhina. 2008. Workplace gossip and rumor. The
information ethics perspective. In ETHICOMP-2008, Mantova, Italy.
Dodig-Crnkovic, Gordana, and Vincent Müller. 2010. A dialogue concerning two world systems:
Info-computational vs. mechanistic. In Information and computation, ed. G. Dodig-Crnkovic
and M. Burgin. Singapore: World Scientific Publishing Co.
Dodig-Crnkovic, Gordana, and Daniel Persson. 2008. Sharing moral responsibility with robots:
A pragmatic approach. In Tenth Scandinavian Conference on Artificial Intelligence SCAI 2008,
Frontiers in artificial intelligence and applications, vol. 173, ed. A. Holst, P. Kreuger, and
P. Funk. Amsterdam: IOS Press.
Epstein, Joshua M. 2004. Generative social science: Studies in agent-based computational modeling,
Princeton studies in complexity. Princeton/Oxford: Princeton University Press.
Eshleman, Andrew. 2004. Moral responsibility. In The Stanford encyclopedia of philosophy, Fall
ed, ed. Edward N. Zalta. Stanford: Stanford University. http://plato.stanford.edu/archives/
fall2004/entries/moral-responsibility (accessed October 26, 2010).
Fellous, Jean-Marc, and Michael A. Arbib (eds.). 2005. Who needs emotions?: The brain meets the
robot. Oxford: Oxford University Press.
Floridi, Luciano. 1999. Information ethics: On the theoretical foundations of computer ethics.
Ethics and Information Technology 1(1): 37–56.
Floridi, Luciano. 2002. What is the philosophy of information? Metaphilosophy 33(1/2): 123–145.
Floridi, Luciano. 2008a. A defence of informational structural realism. Synthese 161(2): 219–253.
Floridi, Luciano. 2008b. Information ethics: Its nature and scope. In Moral philosophy and infor-
mation technology, ed. Jeroen van den Hoven and John Weckert, 40–65. Cambridge: Cambridge
University Press.
Floridi, Luciano. 2008c. The method of levels of abstraction. Minds and Machines 18(3): 303–329.
Floridi, Luciano. 2008d. Information ethics: A reappraisal. Ethics and Information Technology
10: 189–204.
Floridi, Luciano, and J.W. Sanders. 2004a. On the morality of artificial agents. Minds and Machines
14(3): 349–379.
Floridi, Luciano, and J.W. Sanders. 2004b. On the morality of artificial agents. In Minds and
machines, vol. 14, 349–379. Dordrecht: Kluwer Academic Publishers.
Gilbert, Nigel. 2008. Agent-based models, Quantitative applications in the social sciences. Los
Angeles: Sage Publications.
Grodzinsky, Frances S., Keith W. Miller, and Marty J. Wolf. 2008. The ethics of designing artificial
agents. Ethics and Information Technology 11(1): 115–121.
Hansson, Sven Ove. 1997. The limits of precaution. Foundations of Science 2: 293–306.
Hansson, Sven Ove. 1999. Adjusting scientific practices to the precautionary principle. Human
and Ecological Risk Assessment 5: 909–921.
Himma, Kenneth E. 2009. Artificial agency, consciousness, and the criteria for moral agency:
What properties must an artificial agent have to be a moral agent? Ethics and Information
Technology 11(1): 19–29.
Hongladarom, Soraj. 2008. Floridi and Spinoza on global information ethics. Ethics and
Information Technology 10: 175–187.
Huff, Chuck. 2004. Unintentional power in the design of computing systems. In Computer ethics
and professional responsibility, ed. T.W. Bynum and S. Rogerson, 98–106. Kundli: Blackwell
Publishing.
Järvik, Marek. 2003. How to understand moral responsibility? Trames 7(3): 147–163. Tallinn:
Teaduste Akadeemia Kirjastus.
Johnson, Deborah G. 2006. Computer systems: Moral entities but not moral agents. In Ethics and
information technology, vol. 8, 195–204. Dordrecht: Springer.
Johnson, Deborah G., and Keith W. Miller. 2006. A dialogue on responsibility, moral agency, and
IT systems. In Proceedings of the 2006 ACM symposium on Applied computing table of con-
tent, Dijon, France, 272–276.
Johnson, Deborah G., and Keith W. Miller. 2008. Un-making artificial moral agents. Ethics and
Information Technology 10(2–3): 123–133.
Johnson, Deborah G., and T.M. Powers. 2005. Computer systems and responsibility: A normative
look at technological complexity. In Ethics and information technology, vol. 7, 99–107.
Dordrecht: Springer.
Larsson, Magnus. 2004. Predicting quality attributes in component-based software systems. PhD
thesis, Mälardalen University Press, Sweden. ISBN: 91-88834-33-6.
Latour, Bruno. 1992. Where are the missing masses, sociology of a few mundane artefacts,
originally. In Shaping technology-building society. Studies in sociotechnical change, ed. Wiebe
Bijker and John Law, 225–259. Cambridge, MA: MIT Press. http://www.bruno-latour.fr/
articles/1992.html (accessed October 26, 2010).
Mui, Lik. 2002. Computational models of trust and reputation: Agents, evolutionary games, and
social networks. PhD thesis, MIT. http://groups.csail.mit.edu/medg/ftp/lmui/computational%20
models%20of%20trust%20and%20reputation.pdf (accessed October 26, 2010).
Lomi, Alessandro, and Erik Larsen (eds.). 2000. Simulating organizational societies: Theories,
models and ideas. Cambridge, MA: MIT Press.
Magnani, Lorenzo. 2007. Distributed morality and technological artifacts. In 4th international con-
ference on human being in contemporary philosophy, Volgograd. http://volgograd2007.gold-
enideashome.com/2%20Papers/Magnani%20Lorenzo%20p.pdf (accessed October 26 2010).
Marino, Dante, and Guglielmo Tamburrini. 2006. Learning robots and human responsibility.
International Review of Information Ethics 6: 46–51.
Martin, Mike W., and Ronald Schinzinger. 1996. Ethics in engineering. New York: McGraw-
Hill.
Matthias, Andreas. 2004. The responsibility gap: Ascribing responsibility for the actions of learn-
ing automata. In Ethics and information technology, vol. 6, 175–183. Dordrecht: Kluwer
Academic Publishers.
Minsky, Marvin. 2006. The emotion machine: Commonsense thinking, artificial intelligence, and
the future of the human mind. New York: Simon and Schuster.
Montague, Peter. 1998. The precautionary principle. Rachel’s Environment and Health Weekly,
No. 586. http://www.biotech-info.net/rachels_586.html (accessed October 26, 2010).
Moor, James H. 2006. The nature, importance, and difficulty of machine ethics. IEEE Intelligent
Systems 21(4): 18–21.
Nissenbaum, Helen. 1994. Computing and accountability. In Communications of the ACM, vol. 37,
73–80. New York: ACM.
Pancomputationalism. 2009. http://www.idt.mdh.se/personal/gdc/work/Pancomputationalism.mht
(accessed October 26, 2010).
Prietula, Michael. 2000. Advice, trust, and gossip among artificial agents, chapter. In Simulating
organizational societies: Theories, models and ideas, ed. A. Lomi and E. Larsen. Cambridge,
MA: MIT Press.
Ramchurn, Sarvapali D., Dong Huynh, and Nicholas R. Jennings. 2004. Trust in multi-agent
systems. The Knowledge Engineering Review 19: 1–25. Cambridge: Cambridge University Press.
Roboethics links. http://www.roboethics.org, http://www.scuoladirobotica.it, http://roboethics.
stanford.edu, http://ethicalife.dynalias.org/schedule.html, http://www-arts.sssup.it/IEEE_TC_
RoboEthics, http://ethicbots.na.infn.it, http://www.capurro.de/lehre_ethicbots.htm (ETHICBOTS
seminar by Rafael Capurro), http://www.roboethics.org/icra2009/index.php?cmd=program
(ICRA 2009 Roboethics workshop at the IEEE Conference on Robotics and Automation)
(accessed October 26, 2010).
Russell, Stuart, and Peter Norvig. 2003. Artificial intelligence – a modern approach. Upper Saddle
River: Pearson Education.
Shrader-Frechette, Kristen. 2003. Technology and ethics. In Philosophy of technology – the
technological condition, ed. R.C. Scharff and V. Dusek, 187–190. Padstow: Blackwell
Publishing.
Silver, David A. 2005. A Strawsonian defense of corporate moral responsibility. American
Philosophical Quarterly 42: 279–295.
Siponen, Mikko. 2004. A pragmatic evaluation of the theory of information ethics. Ethics and
Information Technology 6(4): 279–290.
Sommerville, Ian. 2007. Models for responsibility assignment. In Responsibility and dependable
systems, ed. G. Dewsbury and J. Dobson. London: Springer. ISBN 1846286255.
Søraker, Johnny H. 2007. The moral status of information and information technologies: A relational
theory of moral status. In Information technology ethics: Cultural perspectives, ed. S. Hongladarom
and C. Ess, 1–19. Hershey: IGI Global.
Stahl, Bernd C. 2004. Information, ethics, and computers: The problem of autonomous moral
agents. In Minds and machines, vol. 14, 67–83. Dordrecht: Kluwer Academic Publishers.
Stahl, Bernd C. 2006. Responsible computers? A case for ascribing quasi-responsibility to
computers independent of personhood or agency. In Ethics and information technology, vol.
8, 205–213. Dordrecht: Springer.
Stamatelatos, Michael. 2000. Probabilistic risk assessment: What is it and why is it worth performing
it? NASA Office of Safety and Mission Assurance. http://www.hq.nasa.gov/office/codeq/
qnews/pra.pdf (accessed October 26, 2010).
Strawson, Peter F. 1974. Freedom and resentment. In Freedom and resentment and other essays.
London: Methuen.
Veruggio, Gianmarco, and Fiorella Operto. 2008. Roboethics. Ch. 64 in Springer handbook of
robotics. Berlin/Heidelberg: Springer.
Wallach, Wendell, and Colin Allen. 2009. Moral machines: Teaching robots right from wrong.
Oxford: Oxford University Press.
Chapter 2
Artificial Agents, Cloud Computing,
and Quantum Computing: Applying Floridi’s
Method of Levels of Abstraction
2.1 Introduction
In his paper “On the Intrinsic Value of Information Objects and the Infosphere,”
Luciano Floridi asserts that the goal of Information Ethics (IE) “is to fill an ‘ethical
vacuum’ brought to light by the ICT revolution, to paraphrase Moor” (Moor 1985).
He claims that “IE will prove its value only if its applications bear fruit. This is the work
that needs to be done in the near future” (Floridi 2002). Our chapter proposes to do
part of that work.
part of that work. Initially we focus on Floridi’s Method of Levels of Abstraction
(LoA). We begin by examining his methodology as it was first developed with J. W.
Sanders in “The Method of Abstraction” (Floridi and Sanders 2004) and expanded
in “The Method of Levels of Abstraction” (Floridi 2008b). Then we will demon-
strate the general applicability and ethical utility of the method of levels of abstrac-
tion by considering three different computational paradigms: artificial agents, cloud
computing, and quantum computing. In particular, we examine artificial agents as
systems that embody the traditional digital computer (modeled as a single Turing
machine). This builds on previous work by Floridi and Sanders (2004) and
Grodzinsky et al. (2008). New contributions of this chapter include the application
For Floridi, a LoA qualifies the level at which a system is considered and informs the
discussion of such a system. When we analyze a system, we do so from a particular per-
spective or level of abstraction. This often results in a model or prototype that identifies
the system at the “given LoA”. Floridi refers to this as the system-level-model-structure
scheme: “Thus, introducing an explicit reference to the LoA makes it clear that the model
of a system is a function of the available observables, and that it is reasonable to rank
different LoAs and to compare and assess the corresponding models” (Floridi 2008b).
When developers understand the particular LoA under which a system is being
built, the discussion of the analysis and design of the system and eventually its
realization can be more productive. Floridi (2008b) asserts that “[t]he definition of
observables is only the first step in studying a system at a given LoA. The second
step consists in deciding what relationships hold between the observables.” He
defines this as the concept of system “behaviour.” A behaviour of a system, at a
given LoA, is defined to consist of a predicate whose free variables are observables
at that LoA. The substitutions of values for observables that make the predicate true
are called the system behaviours. A moderated LoA is defined as a LoA together
with a behaviour at that LoA.
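The definitions above can be rendered as a short sketch (our own illustrative formalization, not Floridi's notation; all identifiers are hypothetical): a LoA is modeled as a set of observables, a behaviour as a predicate over assignments of values to those observables, and a moderated LoA as the pair of the two.

```python
from dataclasses import dataclass
from typing import Callable, Dict, FrozenSet

# An assignment maps each observable's name to a value.
Assignment = Dict[str, object]

@dataclass(frozen=True)
class LevelOfAbstraction:
    observables: FrozenSet[str]

    def admits(self, assignment: Assignment) -> bool:
        # An assignment belongs to this LoA if it values exactly
        # the observables the LoA makes available.
        return set(assignment) == set(self.observables)

@dataclass(frozen=True)
class ModeratedLoA:
    """A LoA together with a behaviour (predicate) at that LoA."""
    loa: LevelOfAbstraction
    behaviour: Callable[[Assignment], bool]

    def is_system_behaviour(self, assignment: Assignment) -> bool:
        # The substitutions of values for observables that make the
        # predicate true are the system behaviours.
        return self.loa.admits(assignment) and self.behaviour(assignment)

# Example: a traffic light observed at the LoA of a driver, where the
# only observable is the light's colour.
driver_loa = LevelOfAbstraction(frozenset({"colour"}))
legal = ModeratedLoA(driver_loa,
                     lambda a: a["colour"] in ("red", "amber", "green"))
```

On this reading, a Gradient of Abstractions would simply be a collection of such LoAs over the same system, ranked by the granularity of their observable sets.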
There can be many LoAs applied to the same system; a helpful distinction is that
of a Gradient of Abstractions. “A Gradient of Abstractions is a formalism defined to
facilitate discussion of discrete systems over a range of LoAs. Whilst a single LoA
formalizes the scope or granularity of a single model, a GoA provides a way of
varying the LoA in order to make observations at differing levels of abstraction”
(Floridi 2008b).
To work effectively with LoAs and GoAs, Floridi has created a Method of
Abstraction. The steps of the method are (Floridi 2008b):
• First, specifying the LoA means clarifying, from the outset, the range of ques-
tions that (a) can be meaningfully asked and (b) are answerable in principle.
Knowing at which LoA the system is being analyzed is indispensable, for it
means knowing the scope and limits of the model being developed.
• Second, being explicit about the LoA adopted provides a healthy antidote to
ambiguities, equivocations and other fallacies or errors due to level-shifting.
• Third, by stating its LoA, a theory is forced to make explicit and clarify its onto-
logical commitment. The ontological commitment of a theory is best understood
by distinguishing between a committing and a committed component. A theory
commits itself ontologically by opting for a specific LoA. A theory becomes
ontologically committed in full through its model, which is therefore the bearer
of the specific commitment.
We have seen that a model is the output of the analysis of a system, developed at
some LoA(s), for some purpose. So a theory of a system comprises at least three
components:
(i) an LoA, which determines the range of available observables and allows the
theory to investigate the system under analysis;
(ii) an elaboration of the ensuing model of that system; and
(iii) the identification of a structure of the system at the given LoA.
that same system. We extend those notions in this paper to refer generally to the
user’s view and to the designer’s view of each system under consideration. That
is, LoA1 is a set of observables available to a user of a system and LoA2 is a set
of observables available to the designer of a system.
In that paper, we focused on LoA2 and described a model of computation
whereby artificial agents could exhibit traits that at LoA1 appeared similar to, if not
indistinguishable from, human traits we call learning and intentionality. This explo-
ration of the interaction between these two LoAs demonstrated that if the designer
failed to consider an expansive enough set of observables at LoA1 to be given con-
sideration at LoA2, the designer might miss certain ethical responsibilities that arise
at LoA1. If the designer is focused on low-level observables (LoA2) such as the
changing of the value of a variable or the changing of the sequence of operations
carried out by the artificial agent, the designer may well get the code for the agent
“right.” However, the observables properly associated with LoA1 take on new
importance when the designer is producing an artificial agent that appears to be
learning or demonstrating intentionality. We demonstrated that these sorts of agents
are more prone to unpredictable future behaviors and are capable of emergent
behaviors not initially programmed by the developer. Thus, we concluded that a
designer of artificial agents is under an increased burden of care. That burden
requires a thorough examination of observables at LoA1 and their implications.
Once those are understood, the designer must consider the GoA, the interface
between LoA2 and LoA1 and design the system (an LoA2 endeavor) in such a way
as to minimize the risk of undesirable behaviors at LoA1.
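A toy sketch (our own, purely illustrative; all names are hypothetical) of the two views: the designer at LoA2 observes and manipulates an internal variable, while a user at LoA1 observes only outward responses, which drift toward a target in a way that appears as learning.

```python
class AdaptiveAgent:
    """Toy adaptive agent, for illustration only.

    At LoA2 the designer observes the internal variable `weight`;
    at LoA1 the user observes only the outward response of `act`.
    """
    def __init__(self):
        self.weight = 0.0                    # LoA2 observable

    def act(self, stimulus: float) -> float:
        return self.weight * stimulus        # LoA1 observable

    def learn(self, stimulus: float, target: float, rate: float = 0.1):
        # A simple error-correcting update; viewed only at LoA1,
        # the agent merely "appears to learn".
        error = target - self.act(stimulus)
        self.weight += rate * error * stimulus

# LoA2 view: the weight changes step by step.
# LoA1 view: responses converge toward the target.
agent = AdaptiveAgent()
for _ in range(100):
    agent.learn(stimulus=1.0, target=2.0)
print(round(agent.act(1.0), 2))              # prints 2.0
```

Even in this tiny example, reasoning only about the LoA2 observables (the weight update) would miss the LoA1 phenomenon (apparent learning) that carries the ethical weight for users.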
In this paper, we are still interested in LoA1 and LoA2, but we also introduce a
third Level of Abstraction: LoAS, where the “S” stands for “society.” LoAS is the
set of observables available to an observer of society. This set of observables con-
sists of those social structures and relationships that are prevalent in the functioning
of an information society. At LoAS, observables include a set of variables that
describes the characteristics of entities that are or could be affected by a piece of
software: descriptive observables concerning individuals, businesses, and governments
are all possible members of the set. Questions that might be addressed include,
for example: if individuals are among the buyers, is there a particular demographic that
dominates? The set of observables at LoAS might be available to a user
of a software system. It might be, however, that some observers at LoAS will have
access to certain research that is not typically available to a user or even a designer
of software systems. It might be that LoAS observables are largely disjoint from the
observables typically considered at LoA1 and LoA2.
Our ethical analysis at LoAS focuses on the changes in the users from using the
software, and on the changes in others because of the existence of the software in
society. Our observations at LoAS are concerned with identifying not only the
changes in individuals, but also the cumulative effect of these changes to larger
groups and organizations, effects that may be attributable to the software, or to the
software combined with other sociotechnical factors (Johnson and Miller 2009).
One particular GoA of interest is the combination of LoA2 and LoAS. The
designer might consider demographics in deciding who the users of the
system are and what their values are (see work on Value Sensitive Design, such as
Friedman 1996). For example, in designing e-voting software, the developers
had to consider the user interface for able-bodied users, and for those with dis-
abilities due to infirmities and age. In the state of Connecticut in the United
States, for example, several interfaces were tested at several sites, to see what
potential users actually preferred. The secretary of state contracted with a team
of University of Connecticut engineering faculty “to provide advice to the state
regarding new voting technology and to assist in the certification and acceptance
testing of the AccuVote Optical Scan voting machines…” (UCONN 2010). This
team conducted pre-election and post-election audits of the memory cards used
in the machines. Once these cards are programmed the integrity of the vote falls
upon the precinct polling personnel (LoAS). Misinterpretation of instructions,
failure to conduct pre-election tests, inadequate training of precinct personnel all
led to problems that were unlikely to have been anticipated at LoA1 or LoA2.
Concerned with fair voting practices, Connecticut is using several safeguards to
verify the accuracy of the election outcomes. In another example, developers of
social networking sites like Facebook and Twitter did not accurately predict the
impact of these products on the communication habits of the users when the
products were launched.
LoAS can frame earlier work by social psychologists, sociologists, and society
and technology scholars. In the early 1990s Chuck Huff developed a social impact
statement for software developers based on an idea of Ben Shneiderman's. Huff
encouraged software designers to “find out the social impact of the systems they
design in time to incorporate changes in those systems as they are built” (1996).
Cast in our terms, Huff was encouraging developers to consider LoAS as they
manipulated a program at LoA2.
The Embedded Values approach of Friedman and Nissenbaum concerned itself
with the ways in which biases emerge in computer systems. These authors exam-
ined preexisting biases of the individual or organization, technical biases and emer-
gent biases which arise when “the social contexts in which the system is used is not
the one intended by its designers.” For example, an ATM that relies heavily on writ-
ten instructions may be deployed in a neighborhood with an illiterate population
(Friedman and Nissenbaum 1996). If designers are aware of biases (at LoA2) that
have significant impacts at LoA1 and LoAS, they can use that awareness to design
systems that avoid problems. An analysis that incorporates LoAS could be an effec-
tive method for managing emergent biases.
In his piece "Moral Methodology and Information Technology," Jeroen van den
Hoven states, “We need to give computers and software their place in our moral
world. We need to look at the effects they have on people, how they constrain and
enable us, how they change our experiences, and how they shape our thinking”
(2008:50). He asserts that,
We are now entering a third phase in the development of IT, where the needs of human
users, the values of citizens and patients and some of our social questions are considered in
their own right and are driving IT, and are no longer seen as mere constraints on the suc-
cessful implementation of technology (van den Hoven 2008:60).
28 M.J. Wolf et al.
One theorist who has embraced this concept is Phillip Brey. Brey’s Disclosive
Ethics reveals embedded values in IT systems (2010). His theory concerns itself
with the question: “Is it possible to do an ethical study of computer systems them-
selves independently of their use by human beings?” (Brey 2010). His answer is
basically no. He espouses Disclosive Ethics as a method in which different parties
responsible for the design, adoption, use and regulation of computer technology
share responsibility for the moral consequences of using it, and in which the tech-
nology itself is made part of the equation (Brey 2010:53). The GoA of LoA1, LoA2
and LoAS suggests a formalism that could address Brey’s concerns.
We contend that the addition of LoAS to the method of levels of abstraction is
consistent with Floridi’s desire to formulate “an ethical framework that can treat the
Infosphere as a new environment worth the moral attention and care of the human
inforgs inhabiting it” (Floridi 2010:19). LoAS consolidates the concerns of those
working on embedding values in design and those concerned with the effect of tech-
nology on society. It expands Floridi’s method beyond the levels of designer and
user and includes society in the mix.
A plausible criticism of using LoAS is that LoAS adds nothing to the existing
work described above, and merely muddies the water with new (superfluous) termi-
nology. We disagree. Our contention is that the idea of LoAS is a concept that
unifies, rather than obscures, the underlying commonality in the existing work of
Huff, Friedman, Nissenbaum, van den Hoven, Brey, and others. The similarities in
their work derive, at least in part, from the high level of abstraction (as compared to
LoA1 and LoA2) at which they work. Their different emphases can be seen as a
consequence of their different choices of observables at LoAS.
In addition to providing a framework for better understanding existing work at
the sociotechnical level, LoAS also helps integrate work on technology and society
at the different levels LoAS, LoA1 and LoA2. When ethical analysis at these three
different levels is perceived as being in competition, or at odds with each other,
unnecessary conflicts can arise. If work at these different levels is seen as similar
analyses, recognizably using the same fundamental concepts, but using different
observables, we are convinced that a more effective coherence can be perceived and
refined. This theoretical coherence could, and we hope will, lead to practical nego-
tiations and agreements between academics and practitioners who will be better
able to understand, together, the important differences and similarities at LoAS,
LoA1, and LoA2. In the next sections, we explore how these levels of abstraction
can be used in concert to examine carefully the ethical significance of three comput-
ing paradigms.
the burden of care borne by those who design artificial agents (2008). We include a
brief description of it here to give readers a sense of how the concept of theoretical
computational machines complements Floridi’s notions of LoA, and how technical
details of an artificial agent’s implementation can have significant impacts for LoAS.
Readers are referred to the original work for a more detailed presentation.
Our model closely follows the Turing Machine model of computation and
includes a large mapping table with a mechanism for mapping inputs and the cur-
rent state to a next state and output values. In any practical situation, the mapping
table is prohibitively large, though finite. The table is a model for the programming
(and therefore the design) of the agent. We explored two variations of the model in
which the agent had the ability to modify part of its mapping table. In the first, the
agent can modify any part of the table that defines the intelligent agent’s behavior
during its execution; in other words, the agent can self-modify. In this variation, the
agent can add new entries to the table, delete entries from the table and modify
entries that exist in the table. Its execution proceeds as in the original case, except
when the table fails to contain a valid mapping. In this case, the agent is forced to
stop. An agent with a table with this variation (called “fully modifiable”) has enough
power to render itself useless by introducing changes that force it into a state for
which the table contains no mapping. Note that it is also possible for an agent with
such a table to add an entry to the table that would duplicate an existing entry except
with different outputs or a different next state. A table with multiple identical entries
except for the next state seemingly exhibits nondeterminism,1 since the same input/
state would have two different output/state mappings. Although the steps outlined
above are deterministic, the choice between the two mappings might indeed be
arbitrary, since the possibility of multiple mappings is not explicitly dealt with in
the fundamental behavior of the agent.
In the second variation (called “modifiable”), the mapping table is divided into
two parts: in one part, the mappings can be modified; in the other part, the mappings
cannot be modified. In other words, some parts of the mapping table are protected
from self-modification by the agent. Since the mapping table governs the entire oper-
ation of the agent, the designer may wish to prevent the artificial agent from carrying
out certain modifications like the one mentioned above. Thus, the designer may opt
to protect the entries that govern self-modification from self-modification. While this
idea has a certain appeal, especially from the perspective of designing the mapping
table in such a way that a modifiable agent always behaves properly, we showed that
the modifiable variation can readily promote itself to a fully modifiable machine.
This argument suggests that there is no absolute distinction between a modifiable
agent and a fully modifiable agent. However, the two models give different perspec-
tives on the ethical intentions of the designer, even if the designer’s intentions may
eventually be thwarted.
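The two variations can be sketched in code. The following Python illustration is our own (the class and method names are hypothetical; the original model is a formal mapping table, not a program), but it captures the key moves: execution driven entirely by a (state, input) → (next state, output) table, forced stopping when no mapping exists, and a set of protected entries in the "modifiable" variation.

```python
# A minimal sketch of the mapping-table agent model described above.
# Names and example entries are our own illustrative choices.

class TableAgent:
    """An agent driven entirely by a finite (state, input) -> (state, output) table."""

    def __init__(self, table, protected=()):
        self.table = dict(table)          # the (modifiable) mapping table
        self.protected = set(protected)   # entries shielded from self-modification
        self.state = "start"

    def step(self, symbol):
        key = (self.state, symbol)
        if key not in self.table:
            return None                   # no valid mapping: the agent is forced to stop
        self.state, output = self.table[key]
        return output

    def self_modify(self, key, entry=None):
        # Add, change, or (with entry=None) delete a table entry -- unless protected.
        if key in self.protected:
            raise PermissionError("entry is protected from self-modification")
        if entry is None:
            self.table.pop(key, None)
        else:
            self.table[key] = entry

# A fully modifiable agent can render itself useless by deleting its only entry:
free_agent = TableAgent({("start", "go"): ("start", "ok")})
free_agent.self_modify(("start", "go"))   # delete the entry
assert free_agent.step("go") is None      # forced to stop

# A "modifiable" agent with protected entries refuses the same change:
guarded = TableAgent({("start", "go"): ("start", "ok")},
                     protected=[("start", "go")])
assert guarded.step("go") == "ok"
```

As the argument above suggests, protection expressed in the table itself is not absolute: a modifiable agent may be able to promote itself to a fully modifiable one, so the sketch models the designer's intention rather than an enforceable guarantee.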
1 Note that we are referring to computational nondeterminism. Computational nondeterminism is a
theoretical construct that allows a device to be in two or more states simultaneously, with each state
experiencing independent sets of inputs and producing independent sets of outputs. This notion is
not to be confused with the philosophical notion of nondeterminism.
We can expand on our earlier work, which emphasized LoA1 and LoA2 for
artificial agents, by examining LoAS for artificial agents. A natural question at
LoAS is about the consequences on society as artificial agents become increasingly
common; one possible and ethically significant consequence is that many jobs pre-
viously held by people will be done by artificial agents. The use of machines to
replace human employees is nothing new, but the sophistication of modern artificial
agents may result in people being displaced in jobs that used to be considered
immune from automation. At LoAS, we could examine observables such as the
number of artificial agents deployed in, for example, care of the elderly or as recep-
tionists; then we could examine the number of humans in jobs in this same area.
Next, we could see if people displaced from jobs in these areas found work else-
where. Finally, we could examine what groups of people had gained from the
increased use of artificial agents (perhaps corporations and employers), and what
groups of people had lost from that same use (perhaps former employees)
(Grodzinsky et al. 2009). Another LoAS consideration might include benefits and
risks from the increased use of artificial agents; in health care, for example, an
observable might be accidental deaths among the elderly. Perhaps the use of artificial
agents to care for the elderly would, overall, reduce such deaths; perhaps not.
Paying attention to LoAS observables during and after artificial agents are
designed, developed and deployed should help computer professionals build
artificial agents that are more likely to benefit people, and less likely to harm them.
At LoA1, the people who directly use a computing artifact are directly in focus; at
LoAS, people who are affected by, but do not directly use, an artifact are also in
focus.
The paradigm for computation that most users have experienced since the advent
of the personal computer includes the user as owner of the hardware and software
that is holding and manipulating the user’s data. Typically at this level, the software
itself is opaque to users, except for software where the source code is freely avail-
able, e.g. free software and open source software. The user is familiar with his or her
data and its meaning, and by the locality of the media, controls access to the data –
at least to a first approximation. The user decides which software to install on the
computer, which programs get access to which data files and how long they get that
access.
The cloud computing paradigm brings a different set of access and control fea-
tures. For example, it is quite possible that the data is no longer stored on hardware
owned by the user, but stored “in the cloud.” Both Facebook and Google Docs are
early examples of this kind of service, and now many other providers have entered,
or are planning to enter, the market. Another distinction with cloud computing is
that the software that manipulates the data is not necessarily present on the same
device that is used to access or compute the data. Instead, the user submits data to a
software service, the service carries out the computation on its hardware with its
software and returns the result to the user. A search executed by a commercial search
engine is a common example of this protocol. Only the search query and the search
results are ever local to the user; the algorithms and data necessary to carry out the
search are owned by the search engine company, and are located on its servers.
Google is a prominent example of this kind of “in the cloud” service. To the user,
the observables are the same: click on an icon, the program runs and a result is pro-
duced. Thus at LoA1, the user may not even be in a position to distinguish software
as a service (SaS) from
a traditional program. At LoA2, there are significantly more observables in the
cloud computing paradigm, visible to a developer but not to a user. Everything from
web addresses to the type of compression makes a difference at the designer’s level
of abstraction.
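This LoA1 indistinguishability can be made concrete with a toy sketch (ours, not the chapter's; the class names and data are hypothetical): two search engines expose the same observable interface, and only at LoA2 does the local/cloud difference become visible.

```python
# A toy illustration of LoA1 indistinguishability: the user observes only
# search(query) -> results, whether the work is done locally or "in the cloud".
# Class and variable names here are hypothetical, for illustration only.

class LocalSearch:
    def __init__(self, documents):
        self.documents = list(documents)        # data on the user's own machine

    def search(self, query):
        return [d for d in self.documents if query in d]

class CloudSearch:
    def __init__(self, documents):
        self._provider_store = list(documents)  # data on the provider's servers

    def search(self, query):
        # In a real service this would be a network call: only the query and
        # the results are ever local to the user.
        return [d for d in self._provider_store if query in d]

docs = ["quantum ethics", "cloud computing", "quantum computing"]
for engine in (LocalSearch(docs), CloudSearch(docs)):
    # Identical observables at LoA1, regardless of where the data lives:
    assert engine.search("quantum") == ["quantum ethics", "quantum computing"]
```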
There are also regulatory issues and control issues that impact the user who may
or may not have the means of supplying the supporting data if it is stored in the
cloud. An analysis of cloud computing at the LoAS level would include issues of
trust between cloud providers and customers; issues of control, security and
confidentiality, standardization attempts, and consequences of the outcomes. In
each of these issues, knowledge of hardware and software are not sufficient; instead,
people, institutions and events would have to be taken into account. While these
issues do involve technical details at LoA2, they are driven by human values that
may be reflected in observables at the LoAS level; thus, forming a GoA is helpful.
Observables at LoAS can be used to gather empirical data useful in making an
ethical analysis. For example, it was initially thought that cloud computing could be
used to reduce overall energy consumption, but some scholars now dispute that
claim (Berl et al. 2010). The data necessary to test claims about energy consumption
and the cloud would be available in LoAS observables.
We briefly consider Facebook’s role as a cloud-based document storage service
provider as an example. Among other things, Facebook stores users’ pictures. In the
typical flow of operations, the user has complete control over who gets to see the
photos and how long the photos remain with Facebook. Facebook has a fiduciary
relationship with the user in which it agrees to show the photos to only those people
the user has identified and to delete the photos when the user asks for them to be
deleted. Of course, there are no assurances that Facebook complies with these sorts
of requests. This is especially true for any backup copies of the photos that Facebook
may have made to maintain a high quality of service level.
Facebook’s recent difficulties with users unhappy about its policies illustrate
that social forces can influence technical decisions (either proactively or
retroactively). Issues of privacy and confidentiality will be played out at the LoAS level as
cloud computing becomes an increasingly competitive marketplace. It may be that
ethical behavior and good business will coincide when users gravitate to vendors
that treat their users with respect. People’s trust in cloud computing (LoA1 and
LoAS) will be affected by whether cloud computing providers are trustworthy
stewards of users’ data. Users will have to trust cloud providers in order to be
comfortable giving up a large measure of control over their data and processes and should
choose vendors wisely. Therefore, as with artificial agents, we contend that comput-
ing professionals (at LoA2) should pay careful attention to LoAS observables as
part of the development process.
Cloud computing, more specifically SaS, presents a potential ethical impact on
the Free and Open Source Software (FOSS) communities. When the Free Software
Foundation developed version three of the GNU General Public License (GPLv3)
there was controversy surrounding provisions dealing with SaS. As a result of those
controversies the provisions addressing SaS were removed from GPLv3 and
included in a second, companion license, the Affero General Public License (AGPL).
Our analysis of GPLv3 and the AGPL identified a piece of software where the
observables at LoA2 were the same, yet the observables at LoAS were very differ-
ent and had a different social impact (Wolf et al. 2009). Depending on how the
software was deployed, the developer was under different legal obligations regard-
ing the release of modified source code. That is, in one scenario, the developer was
required by the AGPL either to share the modifications with the community, or in
another, seemingly ethically equivalent scenario, the developer was under no legal
obligation to share. Our interest here, however, is in showing how analysis of LoAS
raises the question of the impact that SaS will have on the sharing ethic that is preva-
lent in FOSS communities.
When considering quantum information, there are two different aspects that are of
importance. One is the notion of quantum computation and the other is the notion of
quantum information transfer. As a practical matter, both are currently feasible.
A quantum computer that factors an integer has been built. That integer is 15 (Blatt
2005:244). ID Quantique offers a quantum device that uses quantum principles
to generate truly random numbers (rather than the pseudo-random numbers common
in classical computers) for 1,000–2,500 €. The same company offers a quantum
computer embedded in a quantum networking device – a product that implements
secure classical information transfer using both quantum and classical means
(see: http://www.idquantique.com).
The quantum network device is an important example, since it uses a combination
of quantum techniques and classical (non-quantum) encryption algorithms to trans-
mit secret data. Researchers and others have made claims such as: “Quantum cryp-
tography makes an absolutely safe communication possible for the first time”
(Weinfurter 2005:166). The physics of quantum mechanics ensures that should an
eavesdropper “listen in” on the communication, both the sender and the receiver will
know that the communication has been intercepted. Yet, European researchers have
recently demonstrated that with easily obtainable components they can remotely
control a key component of the system and obtain the data in the communication and
remain undetected (Lyderson et al. 2010). Clearly, there are ethical problems lurking
in the development of, and understanding of, quantum computing.
General quantum computing and quantum teleportation are in the research stage
and barring an unexpected breakthrough will not be used or available in a general
sense for quite some time. However, much is known about the nature of quantum
information and fundamental quantum computation techniques, giving us the oppor-
tunity to begin exploration of ethical issues that are emerging along with this model
of computation. As in the previous sections we will draw attention to the three
different levels of abstraction. However, our main focus will be at LoA2 and how,
as in the artificial agent case, quantum developers will carry an increased burden of
care. We will find that due to the nature of quantum computation, “quantum devel-
opers” seems to include a broader range of people than we normally consider in the
development of traditional computing applications.
Next we give an overview of some of the fundamentals of quantum-information
processing and transfer. We are especially concerned with two distinctions that
make the quantum case different from the classical case: superposition and entan-
glement. We will then look at three applications of quantum techniques: factoring,
searching and cryptography. Once these ideas are presented we will consider the
impact these distinctions have on LoA2, and in particular quantum developers. We
will conclude this section with an analysis of quantum computation’s impact at
LoAS. We anticipate that fundamental differences between quantum and classical
computation will raise significant ethical issues when users routinely access
machines based on quantum computing.
Perhaps the most striking difference between classical computation and quantum
computation is the way that information is conceived. In classical computation, the
smallest piece of information is the bit – either a 0 or a 1 – and it is given a physical
realization. Once the bit is given a physical realization, it can be read again and
again and it should always yield the same information. Quantum information on
the other hand, is stored in a superposition of classical states. That is, to a first
(|00⟩ + |11⟩)/√2
Note that the state implicitly includes the description that under no decoherence
scenario will the two subsystems register different bits. Entanglement introduces
two additional properties that are important for quantum computation and quantum
information transfer systems and challenge usual assumptions about information
and computation. The first is that locality of information is no longer required. Once
two quantum systems are entangled to form a single quantum system, there is no
requirement that they be kept in close physical proximity. Thus, from the notation
example, the two subsystems can be separated, one of the two subsystems can
then be measured, and without measuring the other, its state can be known with
certainty.
Researchers on quantum teleportation systems have recently separated entangled
photon pairs 16 km (Jin et al. 2010). These photons were sent through free space,
rather than a fiber optics cable. While the current research is obviously experimen-
tal, our point is to demonstrate that locality of quantum systems should not be
assumed in consideration of ethical concerns.
The final property to consider is that it is possible to entangle multiple quantum
systems into a single quantum system. Under certain conditions, measurement of a
single subsystem can result in either complete decoherence or partial decoherence.
Roos et al. entangle three quantum systems in two different ways (2004). Using the
notation above, they are:
(|000⟩ + |111⟩)/√2
and
Note that all of the entanglements and superpositions are lost when any one of
the bits is read. The second one is different. Say that the first bit is read: if it is a 1,
then the second and third bits retain their coherence in the state (|01⟩ + |10⟩)/√2. If
the first bit is a 0, the second and third bits still retain their coherence, but in the state
|11⟩. Note that these experiments demonstrate similar behavior when reading any of
the three bits in the second entangled system.
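The measurement statistics just described can be mimicked classically. The sketch below is our own illustration (not a quantum program, and no real quantum library is involved): it reproduces the observable correlations of the state (|00⟩ + |11⟩)/√2, in which the two subsystems never register different bits, and of (|000⟩ + |111⟩)/√2, in which reading any one bit collapses all three.

```python
import random

# A small classical simulation (ours) of the measurement statistics of the
# entangled states discussed above. This mimics only the outcome
# correlations, not the underlying quantum mechanics.

def measure_pair(rng=random):
    # (|00> + |11>)/sqrt(2): equal amplitudes give 00 or 11 with equal
    # probability; the two subsystems never disagree, however far apart.
    bit = rng.choice("01")
    return bit, bit

def measure_ghz_first(rng=random):
    # (|000> + |111>)/sqrt(2): reading any one bit destroys all of the
    # superposition, so all three bits collapse to the same value.
    bit = rng.choice("01")
    return bit, bit, bit

for _ in range(500):
    a, b = measure_pair()
    assert a == b   # under no scenario do the subsystems register different bits
```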
Two of the more well-known quantum algorithms are for the factoring problem
(given an integer, find its prime factors) and the database searching problem. In
addition to their interesting technical attributes, these algorithms demonstrate that
practical implementations of “quantum computation” are really a combination of
to work, Bob needs to have the decryption key. Thus, Alice needs to send a key to
Bob via a secure medium. Obviously, the key cannot be encrypted, otherwise Alice
would need to send Bob a key to decrypt the encrypted key.
Quantum cryptography, or more properly, Quantum Key Distribution, solves this
problem in a secure way. Using superposition of photons, Alice transmits the key to
Bob. Through unsecured communication Bob and Alice agree on a key based on the
photons Alice sent. Quantum properties of the photons ensure that if the photons are
intercepted, both Alice and Bob know that the key has been compromised. Once a
key is agreed upon, Alice uses the key to encrypt the data and then transmits the
encrypted data to Bob. Bob can then decrypt the data. The actual transmission is
secure. But, as we will note in the next section, this does not mean that it is impos-
sible for an eavesdropper to intercept the message without being detected.
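The description above matches the well-known BB84 scheme, whose key-sifting step can be sketched classically. The following Python sketch is our own illustration (function and variable names are hypothetical, and the two-symbol basis encoding is a simplification): Alice prepares bits in random bases, Bob measures in random bases, and over an unsecured channel they keep only the positions where their bases matched.

```python
import random

# A classical simulation (ours) of the key-sifting step in a BB84-style
# quantum key distribution protocol. Only the bases are compared publicly;
# the bit values themselves are never transmitted in the clear.

def bb84_sift(n, rng=random):
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.choice("+x") for _ in range(n)]   # preparation bases
    bob_bases   = [rng.choice("+x") for _ in range(n)]   # measurement bases
    key_a, key_b = [], []
    for bit, basis_a, basis_b in zip(alice_bits, alice_bases, bob_bases):
        if basis_a == basis_b:
            key_a.append(bit)
            key_b.append(bit)   # with no eavesdropper, Bob reads Alice's bit
    return key_a, key_b

key_alice, key_bob = bb84_sift(64)
assert key_alice == key_bob   # Alice and Bob end up with the same secret key
```

In the full protocol, an eavesdropper measuring in randomly chosen bases would disturb a predictable fraction of the matched-basis bits, which Alice and Bob detect by publicly comparing a sample of the key; the attack of Lyderson et al. noted above works by manipulating the detectors rather than by measuring the photons.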
Quantum approaches to computation and information transmission can take
advantage of both superposition and entanglement. Superposition and entanglement
are the resources that lead to new possibilities for data transmission and the
speed-ups found in quantum algorithms (Werner 2005:183).
During its research stage, quantum computing has already begun to bring to light some
ethical concerns. If quantum computing becomes a common, practical technology,
we expect significant ethical issues will arise. Although there are important practi-
cal speed and efficiency advantages to quantum computing, qubits by their very
nature do not register information in the same way that conventional digital memo-
ries do, thus challenging some of the most fundamental assumptions of Information
Ethics. If quantum computing is to become practical for most users (many research-
ers believe that it will eventually), it seems likely that the probabilistic nature of
quantum memory and computation will be hidden from users. Further, users will
likely not even know when the quantum subprocessor has been used to determine a
result. Seen another way, when most users enter input, they are not going to include
a probability threshold to be used to determine whether an output is correct. Users
will want to assume confidently that the output is correct; it will be left to those
developing and implementing quantum algorithms to determine the level to set the
threshold for correctness. Only someone with an LoA that includes at least some
knowledge about the intricacies of quantum computing can make an informed ethi-
cal choice about picking the threshold (LoA2). There is power and responsibility in
that choice. The autonomy of users at LoA1 is impinged upon when they receive
output without knowing about the inherently probabilistic nature of quantum
computation.
For some algorithms (such as factoring, described above), a conventional pro-
gram can easily and efficiently check to see if the quantum algorithm has delivered
a correct answer. However, there are many useful applications of quantum computing
in which such “post-quantum checking” will not be practical. These applications
Floridi’s Information Ethics. Floridi and Sanders discuss the nature of an act by an
actor a to a patient p:
Evil action = one or more negative messages, initiated by a, that brings about a transformation
of states that (can) damage p’s welfare severely and unnecessarily; or more briefly, any patient
unfriendly message (Floridi and Sanders 2001:57).
It is important for our purposes at LoAS to note that the patient p in Floridi and
Sanders’ formulation may be human, biological but not human, or artificial.
We contend that this definition of evil means that the probabilistic nature of
quantum computing may be considered fundamentally evil, or at least not entirely
commendable. Quantum computing introduces an inherent uncertainty. Such uncer-
tainty can sometimes be managed (as in Shor’s quantum factoring algorithm), but
that does not remove the objection that quantum computing is, at its core, less
certain than traditional computing. If less certain, then, it can be argued, it is less good
in Information Ethics (Floridi 2005).
2.7 Conclusions
In this chapter we have approached three cases using Floridi’s Method of Levels of
Abstraction. It is clear to us that this method offers a usable framework in the analy-
sis and development of software applications. The addition of LoAS provides us
with an added dimension which addresses the direct and indirect effects of software
on society. The three levels that we have chosen to define, LoA1, LoA2 and LoAS,
are clearly applicable to Artificial Agents and the emerging paradigm of Cloud
Computing. The use of the method with Quantum Computing demonstrates its
effectiveness even with nascent notions of computing. The challenge for Quantum
Computing developers is to find a way to address the ethical concerns that the intrin-
sic nature of quantum computing presents at all three levels of abstraction, and the
challenge to IE theorists is to address how or if quantum applications fit into their
conception of the Infosphere and IE. We have begun part of that work; there is much
more to be done.
References
Floridi, L. 2002. On the intrinsic value of information objects and the infosphere. Ethics and
Information Technology 4(4): 287–304. doi:10.1023/A:1021342422699.
Floridi, L. 2005. Information ethics, its nature and scope. Computers and Society 35(2): 3. June
2005.
Floridi, L. 2008a. Foundations of information ethics. In The handbook of information and com-
puter ethics, ed. K. Himma and H. Tavani, 3–23. Hoboken: Wiley.
Floridi, L. 2008b. The method of levels of abstraction. Minds and Machines 18: 303–329.
doi:10.1007/s11023-008-9113-7.
Floridi, L. 2010. Ethics after the information revolution. In The Cambridge handbook of information
and computer ethics, ed. L. Floridi, 3–19. Cambridge: Cambridge University Press.
Floridi, L., and J.W. Sanders. 2001. Artificial evil and the foundation of computer ethics. Ethics
and Information Technology 3: 55–66.
Floridi, L., and J.W. Sanders. 2004. On the morality of artificial agents. Minds and Machines
14(3): 349–379.
Fogarty, K. 2009. Cloud computing definitions and solutions. http://www.cio.com/article/501814/
Cloud_Computing_Definitions_and_Solutions?page=1&taxonomyId=3024. Accessed June, 2010.
Friedman, B. 1996. Value-sensitive design. Interactions 3(6): 16–23.
Friedman, B., and H. Nissenbaum. 1996. Bias in computer systems. ACM Transactions on
Computer Systems 14(3): 335.
Grodzinsky, F.S., K.W. Miller, and M.J. Wolf. 2008. The ethics of designing artificial agents.
Journal of Ethics and Information Technology 10(2–3): 115–121. doi:10.1007/s10676-008-
9163-9.
Grodzinsky, F.S., K.W. Miller, and M.J. Wolf. 2009. Why Turing shouldn’t have to guess. Asia-
Pacific Computing and Philosophy Conference, Tokyo, October 1–2, 2009.
Grover, L. 1997. Quantum mechanics helps in searching for a needle in a haystack. Physical
Review Letters 79(2): 325–328.
Huff, Chuck. 1996. About social impact statements. http://www.stolaf.edu/people/huff/prose/SIS.
html. Accessed September, 2010.
Jin, X., J. Ren, B. Yang, Z. Yi, F. Zhou, X. Xu, S. Wang, D. Yang, Y. Hu, S. Jiang, T. Yang, H. Yin,
K. Chen, C. Peng, and J. Pan. 2010. Experimental free-space quantum teleportation. Nature
Photonics 4: 376–381. doi:10.1038/nphoton.2010.87.
Johnson, D., and K. Miller. 2009. Computer ethics: Analyzing information technology, 4th ed.
Upper Saddle River: Prentice-Hall.
Lyderson, L., C. Wiechers, C. Wittmann, D. Elser, J. Skaar, and V. Makarov. 2010. Hacking com-
mercial quantum cryptography systems by tailored bright illumination. Nature Photonics 4:
686–689. doi:10.1038/nphoton.2010.214.
Parnas, D.L. 2009. Document based rational software development. Knowledge-Based Systems
22(3): 132–141.
Roos, C.F., M. Riebe, H. Häffner, W. Hänsel, J. Benhelm, G. Lancaster, C. Becher, F. Schmidt-
Kaler, and R. Blatt. 2004. Control and measurement of three-qubit entangled states. Science
304(5676): 1478–1480. doi:10.1126/science.1097522.
Shor, P. 1994. Algorithms for quantum computation: Discrete logarithms and factoring. In
Proceedings of the 35th annual symposium on foundations of computer science, ed.
S. Goldwasser, 124–134. Los Alamitos: IEEE Computer Society Press.
University of Connecticut (UCONN). 2010. http://www.engr.uconn.edu/votercentertechnology.
php. Accessed October 25, 2010.
van den Hoven, Jeroen. 2008. Moral methodology and information technology. In The handbook
of information and computer ethics, ed. K. Himma and H. Tavani, 49–67. Hoboken: Wiley.
Weinfurter, H. 2005. Quantum information. In Entangled world: The fascination of quantum infor-
mation and computation, ed. J. Audretsch, 143–168. Weinheim: Wiley-VCH.
Werner, R.F. 2005. Quantum computers – The new generation of supercomputers? In Entangled
world: The fascination of quantum information and computation, ed. J. Audretsch, 169–201.
Weinheim: Wiley-VCH.
Wolf, M.J., K. Miller, and F.S. Grodzinsky. 2009. On the meaning of free software. Ethics and
Information Technology 11(4): 279–286. doi:10.1007/s10676-009-9207-9.
Chapter 3
Levels of Abstraction and Morality
Richard Lucas
3.1 Introduction
Floridi and Sanders’ work on Levels of Abstraction (LoA) is one of philosophical depth
and innovation, of significance for the philosophy of information generally and for
information ethics in particular. This significance, however, comes at a price: the
concept of LoA involves a number of innovative and controversial ideas that require
lengthy and careful examination to appreciate.
In many papers, Floridi (2004, 2008; and, with Sanders, in several papers, especially
2001) has persuasively argued the case that systems (in particular, artificial
agents) can be conceived of as moral agents. To do this, he and Sanders
introduced the notion of Levels of Abstraction and combined this with state-transi-
tion theory to produce what they call an effective characterisation of moral agents.
I examine this claim in general and LoAs in particular from the point of view of
systems as agents, ordered levels of abstraction, state transitions, moral agency,
LoA2, and interactivity, adaptability, autonomy, and cognition.
The structure of this chapter is as follows: first I will examine some basic termi-
nology, such as their view of action, their take on agency, and their view of morality.
I will then examine and critique some so-called natural LoA examples.
I critique their schema in two ways: their characterisation of morality as a thresh-
old function and the conception of LoAs as systems. I claim that there are difficulties
with LoAs as systems (especially LoAs as closed systems) and that most LoAs cannot
R. Lucas (*)
Head of Discipline, Information Systems, Faculty of Information
Sciences and Engineering, University of Canberra,
Canberra, ACT, Australia
CAPPE, Australian National University, LPO Box 8079 ANU,
Acton, ACT 2601, Australia
e-mail: richard.lucas@canberra.edu.au
say anything more than that the system meets the criteria. I find difficulties with
Floridi and Sanders then calling that meeting of criteria a kind of morality.
3.2.1 Action
For Floridi and Sanders, the evaluation of actions as moral actions is dependent
upon two things: thresholds and humans.
The idea of a threshold and a threshold function is used to define an action as a
moral action. So what is a threshold function?
A threshold function … is a function which, given values for all the observables … returns
another value (Floridi and Sanders 2004, p. 369).
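The quoted definition can be given a concrete shape in code. The following is only an illustrative sketch of my own: the observables, their values, and the equal weighting are all invented placeholders, not anything Floridi and Sanders themselves provide.

```python
# Hypothetical sketch of a threshold function: given values for all the
# observables at a LoA, it returns another value. Observables and weights
# here are invented purely for illustration.
def threshold_function(observables):
    """Map the values of all observables to a single number."""
    weights = {"interactivity": 1.0, "autonomy": 1.0, "adaptability": 1.0}
    return sum(weights[name] * value for name, value in observables.items())

value = threshold_function({"interactivity": 1, "autonomy": 1, "adaptability": 0})
```

The point of the sketch is only the shape of the function: many observables in, one value out.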
As for humans, Floridi and Sanders seem to hold that we human beings simply know
that we are moral agents and ought to be able to judge and recognise when other
agents are moral agents. The first part of this is relatively unproblematic, but the
second part is not. The problem here, of course, is an old one: there is much dispute
among humans about what counts as a moral agent and, under such uncertainty,
about which group is to be accorded the right to make the final determination of the
moral agency of another.
3.2.2 Agency
Much of what Floridi and Sanders say about the moral agency of artificial agents
hinges on their conception of agency. While they do spend some time on groups as
moral agents, I will not pursue that line here. I will instead concentrate on their
depiction of agents as individual units. I will take without argument that the term
“agent” includes both human and artificial agents. I also accept Floridi and Sanders’
assertion that both of these kinds of agents are legitimate sources of moral action
(though perhaps not sources of moral agency).
Agents vs. patients: Floridi and Sanders’ first move in defining agents is to suggest
that agents can be both moral patients and moral agents; moral agents are originators
of moral action and moral patients are receivers of moral action. This discrimination
has two purposes: to allow them to separately focus on the possibility of agents being
moral agents without having to consider whether they might also be moral patients,
and to thus narrow the focus of their exploration to moral agents only.
Agents as systems: Taking their inspiration from classical information systems
theory, Floridi and Sanders provide a new way of conceiving of agents. They begin
with the idea that agents are systems and show that, indeed, most things can be
systems. They further show that systems have necessary and sufficient conditions
for determining whether any suggested entity is a particular kind of system.
Floridi and Sanders do this because they recognise that it is impossible to always be
definite about a definition, or, in this case, to be definite about what an agent is. They
see treating agents as systems as a way around this problem. It offers a way out of the
vagueness because it allows us to think of notions such as agents in terms of sets;
after all, systems are just sets of values and processes that together produce some
kind of output. To make this notion concrete, they defer to the mathematical/logical
conception of a set, where a set is a collection of members (parameters). They then
say that, for a particular entity, it is possible to define a set of such parameters while
still allowing the definition of the entity itself to remain fuzzy. They call this act of
defining the set of parameters specifying a Level of Abstraction (LoA), and they call
the set of parameters itself a LoA. The idea of LoAs is explained in the next section.
Agents are, then, for Floridi and Sanders, simply systems that are examined
using a particular but not necessarily unique LoA. All agents are systems, but not all
systems are agents.
The important conclusion to take from this characterisation of agents is that
moral agents are systems as viewed through a particular LoA.
3.2.3 Levels of Abstraction
Everything in their account of the morality of artificial agents hinges on the idea of
Levels of Abstraction.
A LoA consists of a collection of observables, each with a well-defined possible set of
values or outcomes (Floridi and Sanders 2004, p. 354).
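This definition can be sketched as a small program. Everything below is my own illustrative construction: the observables chosen (for a thermometer-like system) and the projection function are invented, not part of Floridi and Sanders' formalism.

```python
# A LoA rendered as "a collection of observables, each with a well-defined
# set of possible values". Illustrative sketch only; the observables are
# my own invention.
loa = {
    "temperature": set(range(-40, 121)),   # possible readings, in whole degrees C
    "is_reporting": {True, False},         # whether the device outputs at all
}

def observe(system_state, loa):
    """Project a system's full state onto just the observables of the LoA,
    checking each value against its well-defined set of possibilities."""
    view = {}
    for name, allowed in loa.items():
        value = system_state[name]
        if value not in allowed:
            raise ValueError(f"{name}={value!r} is not a possible value at this LoA")
        view[name] = value
    return view

# Different LoAs give different views of one and the same system:
full_state = {"temperature": 20, "is_reporting": True, "serial_no": 42}
view = observe(full_state, loa)   # only temperature and is_reporting survive
```

The design point is that the LoA, not the system, fixes which features are visible; a different LoA over the same full state yields a different view.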
To further reinforce the idea that systems can be agents, Floridi and Sanders say
that they are particularly interested in those kinds of systems that can be seen as
agents and that agents are agents of change. They therefore see agents as systems
that must be capable of change.
A natural way to conceive of a system, and hence an agent, that changes is to use
the idea of states and state transitions.
State transitions: Standard computing and information science conceive of systems
as having states. Floridi and Sanders use this idea to show how LoAs can be attrib-
uted to systems. These states are merely the set of values that the variables of a
system have at some particular time, say, T0. Systems are always in a state; that is,
they always have a particular configuration of values and any system might change
from the state it is in to another state. This is known as a state transition: systems are
state-transition models.
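The state-transition idea can be made concrete in a short sketch. The variables and transition rules below are invented placeholders for illustration only.

```python
# A system sketched as a state-transition model: a state is just the set of
# values the system's variables hold at a time, and a transition moves it
# to another configuration. Variables and rules are invented placeholders.
state_t0 = {"power": "on", "count": 0}   # the state at some time T0

def transition(state, stimulus):
    """Return the next state; the system is always in some state."""
    new_state = dict(state)
    if stimulus == "tick":
        new_state["count"] += 1
    elif stimulus == "shutdown":
        new_state["power"] = "off"
    return new_state

state_t1 = transition(state_t0, "tick")
```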
By combining the idea of a LoA and state transitions, Floridi and Sanders claim
to be able to achieve the level of precision that they think is necessary to be able to
sufficiently characterise a system as a moral agent.
Floridi and Sanders (pace Allen et al. 2000) do this by saying that the right LoA
necessary for moral agenthood (called LoA2) is one that satisfies three criteria: inter-
activity, autonomy, and adaptability. Here I quote Floridi and Sanders’ definitions of
these three criteria:
Interactivity means that the agent and its environment (can) act upon each other. Typical
examples include input or output of a value, or simultaneous engagement of an action by
both agent and patient – for example, gravitational force between bodies. … Autonomy
means that the agent is able to change state without direct response to interaction: it can
perform internal transitions to change its state. So an agent must have at least two states.
This property imbues an agent with a certain degree of complexity and decoupled-ness
from its environment. … Adaptability means that the agent’s interactions (can) change the
transition rules by which it changes state. This property ensures that an agent might be
viewed, at the given LoA, as learning its own mode of operation in a way which depends
critically on its experience. Note that if an agent’s transition rules are stored as part of its
internal state then adaptability follows from the other two conditions (Floridi and Sanders
2004, pp. 357–8).
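The three criteria can be rendered schematically as predicates over a system description. The flag-and-dictionary representation below is my own simplification for illustration, not Floridi and Sanders' formalism.

```python
# Schematic rendering of the three LoA2 criteria as predicates.
def is_interactive(sys_desc):
    # the agent and its environment (can) act upon each other
    return sys_desc["has_inputs"] or sys_desc["has_outputs"]

def is_autonomous(sys_desc):
    # can change state without direct response to interaction; this
    # requires internal transitions and hence at least two states
    return sys_desc["internal_transitions"] and len(sys_desc["states"]) >= 2

def is_adaptable(sys_desc):
    # interactions (can) change the transition rules themselves
    return sys_desc["rules_mutable"]

def satisfies_loa2(sys_desc):
    """All three criteria together mark out LoA2."""
    return bool(is_interactive(sys_desc) and is_autonomous(sys_desc)
                and is_adaptable(sys_desc))

candidate = {"has_inputs": True, "has_outputs": True,
             "internal_transitions": True, "states": ["s0", "s1"],
             "rules_mutable": True}
```

Dropping any one flag makes the candidate fail the characterisation, which is exactly the threshold-like, all-three-criteria structure the chapter goes on to question.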
To further their effort to establish the idea of LoAs as legitimate constructs with
which to characterise agents, in particular moral agents, they provide what they call
an effective characterisation of agents. What they have in mind is a sufficient char-
acterisation for determining if a system/agent is a moral agent.
The first thing to notice about this characterisation is that the idea of a LoA is
given a subscript: for example, LoA1. This implies that there are many more LoAs
(LoA1, LoA2, …, LoAn). They go on to describe LoAs and imply that there is some
sort of hierarchy or organisation of LoAs.
Consider the following:
Described at this LoA1, Henry is an agent if Henry is a system, situated within and a part of
an environment, which initiates a transformation, produces an effect or exerts power on it, as
contrasted with a system that is … acted on or responds to it, called the patient. At LoA1,
there is no difference between Henry and an earthquake (Floridi and Sanders 2004, p. 357).
The difficulty with their description is that it implies that all, or at least most,
systems can be so organised. This is not so. While some LoAs may be hierarchically
related, most are not. Many systems have common elements, but few are embedded
in such a way as to accommodate a hierarchy. This hearkens back to their earlier
discussion of the relationship between moral agents and moral patients. It would
seem much stronger if these (LoA1, LoA2, …, LoAn) had some sort of ordering crite-
ria against which to judge a particular selected set of characteristics. That is, pick
some LoA and compare it to the scaling and see where it fits. This is not done, and
the reader is left wondering what such an ordering might be like. The idea of mul-
tiple (and related in some strong sense) LoAs also leads us to ask whether some
different LoAs might simply be a case of Wittgenstein’s seeing-as (Wittgenstein
1997). The account of abstraction theory would need to have something to say about
this. If there is a set of LoAs, then some might conclude that there is something that
can be said about the set of information that captures all of the LoAs for a given
entity. As described earlier, this can be either a fundamental subset/core/essence, a
minimal set, or a superset of all of the characteristics that might be called on in
creating all of the LoAs for a given entity.
3.2.4 Morality
Having accounted for the idea that LoAs can be ordered and that there is something
natural about that ordering, Floridi and Sanders move to the idea that morality is a
part of this natural ordering and offer a definition of the LoA that fits with their idea
of morality. Floridi and Sanders characterise morality as interactivity, autonomy,
and adaptability. They then match this definition with the conception of LoA and
say that a LoA that can be seen to naturally have moral characteristics (a moral LoA)
is called LoA2.
To support this claim, they describe two hypothetical systems, H and W, charac-
terising them as having interactivity, autonomy, and adaptability, and then ask the
question: are they moral? Answering this question, they say, requires expanding the
criterion of identification to the following:
An action is said to be morally qualifiable if and only if it can cause moral good or evil. An
agent is said to be a moral agent if and only if it is capable of morally qualifiable action
(Floridi and Sanders 2004, p. 364).
They continue with the example and add that, for us to be able to use this new
definition, H and W must perform some action that qualifies as moral. For this, they
say that H kills a patient and that W cures a patient. After some discussion, they
conclude that both H and W are moral agents, and then reveal that H is a human and
W is an artificial agent (AA).
Anticipating that some might object to them saying that W (the AA) is a moral
agent, Floridi and Sanders (2003a, pp. 16–19) discuss the reasons why someone
might object. Four of these objections centre on the idea of a responsible morality
and are called the teleological objection, the intentional objection, the freedom
objection, and the responsibility objection.
The teleological objection is that “an AA has no goals,” and that this matters
morally speaking. Here their characterisation of this objection seems incomplete.
Some might argue that, usually, simply having goals is not what is meant. Crucial to
a more complete teleological objection is that the goals are of the right kind and that
they are not simply added simpliciter. Their claim that the LoA can be “readily …
upgraded” so that both H and W have goals seems like merely changing the LoA so
as to meet the objection. The notion of upgrading a LoA seems arbitrary and self-
serving. Once a LoA is chosen, is one not obliged to stick with it? This analysis
simply does not counter the claim that it matters that an AA has no goals.
The intentional objection is that “an AA has no intentional states,” with the
implication that having intentional states is crucial to being a moral agent. Floridi
and Sanders’ (2004) counter to this is that intentional states are nice but
unnecessary for moral agency as they have conceived it, and that intentional states
require some form of privileged access (something like a God view), which is not
possible. But this is exactly what Floridi and Sanders rely on when describing the
examples in which they seem to want to have access to internal states without inter-
activity. Thus, their argument against intentional states, because they require this
privileged access, is the same one that they rely on to make their earlier case. They
cannot have it both ways.
The freedom objection is that “an AA cannot be held responsible for its actions.”
That is, an AA is not free. Floridi and Sanders’ counter to this is that AAs are “already
free in the sense of being non-deterministic systems” (Floridi and Sanders 2004,
p. 366), assuming the stance on determinism taken in their Sect. 2.4. I raise the same
objection here as I did above, that the use of “non-determinism” is confused. Floridi
and Sanders go on with the claim that the AAs “could have acted differently if they had
chosen differently and they could have chosen differently because they are informed,
autonomous and adaptive” (Floridi and Sanders 2004, p. 366). I contend that they have
not shown that their definitions of autonomy and adaptability give what is needed.
It seems that the dogs are not main players (agents), but rather tools used by the
main players, that is, those organising and doing the searching, towards moral
ends. Surely to count as a moral agent means that the agent must be aware of
being such an agent. As Floridi and Sanders say, the dogs have no sense that this
is anything other than a game, and there is no evidence that they are aware of
themselves as sources of moral action. If this is true, then these dogs are not moral
agents according to most other accounts of moral agency. This would seem to
reinforce the prevailing view that Floridi and Sanders’ characterisation is simply
wrong. This sort of example does nothing to convince doubters of the veracity of
their argument.
Regarding their third example, which cites the trials and tribulations of Oedipus, it
should be noted that while Oedipus did not set out to kill his father, he did try to kill
the king. That the person was his father is of less importance; of greater importance
is the intention to kill. The example of marrying his mother is more to the point;
there is nothing inherently wrong with marriage. In this example, as he is ignorant
of the fact that his bride is his mother, Oedipus is not morally responsible.
It is his ignorance that mitigates his responsibility. He is accountable in the sense
that we can account for or attribute the source of the moral wrong without attaching
responsibility. However, once his ignorance is addressed, then the responsibility
adheres.
If my understanding of Floridi and Sanders’ account of accountability is correct,
then many would say that what they are doing is simply attaching the word “moral”
as a field of study to the notion of “accountable.” They would then go on to say that
this, on its own, does not count as moral.
It does seem somewhat disingenuous to set up a LoA where the fact of his bride
being his mother was permanently omitted when, in fact, in the story this information
comes to light. Surely all of the morally relevant facts must be included in a LoA
that is being used to make an assessment of the moral agency of one subject to a
moral claim.
Interactivity:
Interactivity means that the agent and its environment (can) act upon each other (Floridi and
Sanders 2004, p. 357).
To reinforce their view of the interaction of a system with its environment, Floridi
and Sanders refer to the example of “gravitational forces between bodies” (Floridi
and Sanders 2004, p. 357), where there is simultaneous interaction. This hardly
seems to be in tune with the shift from mere agency to moral agency. Common
sense would reject gravity as even remotely analogous to a moral issue. It would
seem that one of the hallmarks of moral interactivity is choice, and, indeed, Floridi
and Sanders make this very point. The agent must be able to decide to be the source
of moral action. Deciding that the laws of gravity are as morally troublesome as,
say, local parking regulations flies in the face of sense. A much better example is
needed here. Might there be a better argument than the one put forward by Floridi
and Sanders?
It is true that LoAs that are moral agents must interact with their environment and
so must be of either type I3 or I4. Now, it is possible that moral agents exist that know
a priori all they need to know in order to be moral (that is, type I3 agents), but it
seems that neither humans nor artificial agents have such a priori knowledge. So,
type I3 agents can only be moral agents in the sense that humans are moral agents.
Autonomy:
Autonomy means that the agent is able to change state without direct response to interaction:
it can perform internal transitions to change its state (Floridi and Sanders 2004, p. 357).
The problem with this is the following: All computer systems change state
due to some stimuli. There are none that change state for no apparent reason.
Now, this stimulus can be either external or internal. This is a problem for Floridi
and Sanders because, while the idea of external stimuli equates to their notion of
inputs from the environment, they have no corresponding notion for internal
stimuli.
One account of internal stimuli might be the following.
Internal stimuli come from one of two sources. They can come from some
background subsystem that is always running while the machine is active; that
is, the subsystem checks for some particular internal state configuration
which, when detected, causes some action (state change) to take place.
Alternatively, they can come from the passing of time. Note that the passage of
time can trigger a state change in two ways: either when a predetermined time
has been reached, or when some previously determined time period has
elapsed.
In the background subsystem case, the particular state can be reached by one of
only three ways:
(i) External stimuli (Se), that is, a straightforward direct response to external
stimuli;
(ii) Passage of time (St) such that eventually the required internal state may occur; or
(iii) Creation of new information (Si), and hence new states, based on an analysis
of existing states.
From this, we can see that there are only two cases that we need to consider. The
first possibility (Se) is out, as it is just external stimuli. The only plausible candidates
for non-direct response stimuli are thus time and creation. This means that autonomy,
in the sense used here and discounting magic as stimuli, is equivalent to either time-
based transition St or the analysis algorithms necessary for Si.
The task now is to analyse St and Si: to provide an account that will decide whether St
can ever occur, and to analyse the set of state transitions to determine whether the
particular internal state can ever be reached and, if so, how.
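The Se/St/Si taxonomy can be made concrete in a small sketch. The event representation is entirely my own invention for illustration; only the three-way classification itself comes from the discussion above.

```python
# The three candidate sources of a state change, per the taxonomy above:
# Se (external stimulus), St (passage of time), Si (creation of new
# information from analysis of existing states).
def classify_stimulus(event):
    if event["origin"] == "environment":
        return "Se"   # a straightforward direct response to external input
    if event["origin"] == "clock":
        return "St"   # a deadline reached, or a time period elapsed
    if event["origin"] == "self-analysis":
        return "Si"   # new states derived from analysis of existing ones
    raise ValueError("unknown stimulus (discounting magic)")

# Se is ruled out as mere external stimulus, leaving St and Si as the only
# plausible candidates for non-direct-response change:
candidates = [s for s in ("Se", "St", "Si") if s != "Se"]
```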
Adaptability:
Adaptability means that the agent’s interactions (can) change the transition rules by which
it changes state (Floridi and Sanders 2004, p. 358).
If the changing of transition rules is itself governed by meta-rules, are there to be
meta-meta-rules, and, if so, when does this regression stop? For the
kinds of moral agents that humans are, there is, in principle, no limit to this regression,
and so, if Floridi and Sanders wish to continue relating the moral agency of artificial
agents with human agency, there seems to be no reason to place a limit on this
regression for artificial moral agents. This then turns into the halting problem that
Turing highlighted (see Lucas 2009, p. 74).
If Floridi and Sanders do not mean for their rule changing to be of the meta-
rule sort, then the fact that the rules change must simply mean that some (particular)
states change, but the fact that particular rules change is no different to any other
state that can change. This is saying that the particular pieces that allow for or
cause these rules to be changed are treated as just another ordinary state, no dif-
ferent than any other state within the system. A state is a state is a state, whether
it changes the rules for state transitions or it changes the values that variables in
the system can hold.
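The point that a state is a state is a state can be illustrated with a toy sketch, entirely my own construction: if the transition rules are stored as part of the state, then rewriting a rule is just an ordinary state update.

```python
# A system whose transition rules live inside its own state. "Adapting"
# (rewriting a rule) is then indistinguishable in kind from updating any
# other variable: both are just state changes.
state = {
    "x": 0,
    "rules": {"tick": lambda s: {**s, "x": s["x"] + 1}},
}

def step(state, stimulus):
    # apply whichever transition rule the current state holds
    return state["rules"][stimulus](state)

def adapt(state, stimulus, new_rule):
    # changing a transition rule is itself just another state change
    return {**state, "rules": {**state["rules"], stimulus: new_rule}}

state = step(state, "tick")                                  # x: 0 -> 1
state = adapt(state, "tick", lambda s: {**s, "x": s["x"] + 10})
state = step(state, "tick")                                  # x: 1 -> 11
```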
But this ordinariness of rule changeability as a state change eliminates the
specialness of adaptability and makes stating adaptability explicitly unnecessary at
best and probably pointless. Adaptability, on this view, is an inherent
characteristic of all systems. This is what I take Floridi and Sanders to mean by their
note that adaptability follows from interactivity and autonomy. If the rules are dis-
cernable at a LoA, then adaptability as a separate LoA characterisation is unnecessary.
However, if they are not, then it seems that adaptability is not possible at that LoA
unless there is a return to treating transition-rule changes as meta-rules.
Where this leads to a problem is in the thermometer example, which is frequently
invoked in arguments over agency. The case in which a thermometer becomes too
hot and bursts, with the mercury leaking out, could be viewed as just another
state change: the change from the state of being able to report the temperature to the
state of not being able to report the temperature. To say anything sensible about this,
we would then need to return to what the purpose of the thermometer is. If we change
the rules about what counts as a state change, as in the example above, then it seems that
we are changing its purpose from a device that reports the temperature to a device
that does not. Now, normally, this sleight of hand is seen for what it is: redefinition for
the sake of it, and it would be rejected as being not in the spirit of what was intended
when the thermometer was specified. It is no longer a thermometer.
If they have something substantial and different to say about adaptability, then all
of the above seems to imply that we cannot simply treat the rules for state change as
ordinary states, but rather must take the meta-rule stance. Of course, adaptability
might be a layered concept. In that event, it also implies that we must include this
stance and its layering parametric value in the LoA. It seems straightforward to say
that the LoA chosen will determine the depth of recursion.
Morality as a threshold function: Given the above, Floridi and Sanders also claim
that morality can be seen as a threshold function that can “in principle at least be
mathematically determined,” and that this threshold function can be subject to
“some pre-agreed value.” This value is called a tolerance. Once this tolerance is
reached, then the agent is considered to be a moral agent.
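The tolerance mechanism can be sketched as follows. The numbers, observables, and aggregation are my own invented placeholders; only the shape of the claim (a pre-agreed value, identified by human judgement, against which a threshold-function value is compared) comes from Floridi and Sanders.

```python
# Sketch of "morality as a threshold function": some pre-agreed value
# (the tolerance) is fixed by human agents exercising ethical judgement,
# and an agent whose threshold-function value reaches it is considered a
# moral agent. All concrete values here are invented for illustration.
TOLERANCE = 2.5   # the "pre-agreed value", supplied by human judgement

def threshold(observables):
    return sum(observables.values())

def is_moral_agent(observables, tolerance=TOLERANCE):
    return threshold(observables) >= tolerance

is_moral_agent({"interactivity": 1.0, "autonomy": 1.0, "adaptability": 1.0})
```

Note that the non-deterministic element is smuggled in through the choice of TOLERANCE, which is exactly where the chapter locates the tension with the freedom objection.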
This idea of a “pre-agreed” tolerance seems to present Floridi and Sanders with
a problem. It seems to conflict with the freedom objection and their claim that
artificial agents are non-deterministic (Floridi and Sanders 2004, p. 354). The way
out that Floridi and Sanders take is to claim that this tolerance is “identified by
human agents exercising ethical judgments” (Floridi and Sanders 2004, p. 369),
thus introducing a non-deterministic element.
Floridi and Sanders then consider the general and moral agency of two enti-
ties, H and W. In their description of the actions of H and W, they write: “They
both acted autonomously: they could have taken different courses of actions…”
(Floridi and Sanders 2004, p. 364). This is the first mention of the connection
between autonomy and the ability to take different actions. Originally, their
characterisation of autonomy was of the ability to change states. There is a dif-
ference between ability to change states and ability to take different actions.
What might Floridi and Sanders mean? Well, there are two senses of taking
action that might apply here: changing states, and acting in the world (i.e., the
outputs). The first refers to Floridi and Sanders’ original definition in their
Sect. 2.2, but the second does not. It is this second one that is both the normal,
default meaning of action and the only one that we have with which to judge
autonomy.
With the notion of morality being a threshold concept, there seems to be a
difficulty with the claim that “the types of all observables can in principle at least be
mathematically determined” (Floridi and Sanders 2004, p. 369). Why does it matter
that the types of observables can be so determined? Surely the only things that mat-
ter are the observables themselves. In the next paragraph, Floridi and Sanders write
that it is not known if “all relevant observables” can be determined. It seems that this
passage is comparing observables with types of observables. If it is not known that
“all relevant observables can be mathematically determined,” then how can it be
known that all types of variables can be known? Surely there might be an unknown
observable that is of an unknown type of variable. Even more, if not all relevant
observables can be known, then how can it be known if the threshold has been
reached? Floridi and Sanders do not say.
The idea that morality is a threshold function seems problematic, and Floridi and
Sanders do not adequately account for the difficulties noted above to make the con-
cept clear.
As part of the argument that LoAs generally have some natural correspondence with
natural entities in the physical world, Floridi and Sanders (2004, p. 359) offer a
number of examples to support the idea that LoAs can be interpreted as indicating
when a system is a moral agent. I reproduce the table here so that the reader can
compare it with my expanded conception.
There are, however, several problems with the examples in this table. I choose
two:
First, it seems difficult to make any sense out of the idea of being able to use a
video camera for 30 s to be able to comprehend a solar system qua solar system.
Perhaps it is my limited imagination, but I cannot find any way of conceiving of that
at all. I have asked others, but no systems analyst or philosopher I contacted was
able to offer an explanation.
Second, there is difficulty in understanding the example of a closed ecosystem.
The difficulty with closed systems is that if they are closed, then the existence
of particular ones can be speculated about, and perhaps deduced (in, say, a
Kantian sense), but not known. An analogy would be deducing the existence of
a planet from the effect it has on the orbits of other planets and not by direct
observation; another might be deducing the existence of a particular black
hole.
Merely identifying a system means that we have some information about it,
which must come from being (at least) able to discern its inputs/outputs; this means
that that discrimination must be part of the LoA. There is no other way of knowing
the states of a system other than either being sufficiently superior to it or having the
system make its states available for direct inspection. This availability just makes its
states some form of output.
Being able to pass judgment on its adaptability and its autonomy means having
access to its internal states. This level of accessibility must mean a different LoA.
This, in turn, must imply that there are different LoAs for interaction, autonomy, and
adaptability. It is either that, or all LoAs must have some kind of conception of inter-
nal states that allows for their inspection without that inspection taking the form of
an output; this seems implausible.
With the example of a video camera over 30 s, we cannot know about a closed
system’s adaptability or its autonomy because we have no access to the internal
states of a closed system. The camera cannot work. There is no unobserved
observer.
Now, I look at those cases where interaction is given as NO.
The difficulty of finding an example for the second of these follows directly from
the discussion above. If there is no interaction, then we can neither know nor
have any evidence for anything about either its autonomy or its adaptability. Simply
saying “yes” to complete the truth-table is misleading. This completion gives two
impressions: first, that all of the cases have been accounted for, and, second, that the
ability to determine the values for autonomy and adaptability is independent of
interaction.
Of course, I have not considered the multiple types of interaction that I specified
earlier.
Interaction type   Name         Inputs   Outputs
I2                 Black hole   Yes      No
As these, too, have no outputs, they would also be undecided for autonomy and
adaptability. To account for the other systems types, I1, I3, and I4, in this way would
further reduce the table size to five entries.
However, this is not the end of it. I need to return to the original list of agenthood
examples and revise it in light of the extra values that I have outlined above. This
gives a more comprehensive and complete account of these terms.
Interactivity would now have four values, and autonomy three. These take
fuller account of the concerns I specified in the previous section,
and this would give 24 possibilities:
Now combine the concerns expressed about the explanations of the three
conceptions, interaction, autonomy, and adaptability, into the table. As interactivity
now has four values and autonomy three values, the new table would be reduced to
14 possibilities, as follows:
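The combinatorics here can be sketched in a few lines. The value sets below are my assumed reconstruction, since the revised tables themselves are not reproduced in this excerpt:

```python
from itertools import product

# Assumed value sets (hypothetical reconstruction of the revised tables):
# four interaction types, three autonomy values, two adaptability values.
interaction = ["I1", "I2 (black hole)", "I3", "I4"]
autonomy = ["yes", "no", "undecided"]
adaptability = ["yes", "no"]

combinations = list(product(interaction, autonomy, adaptability))
print(len(combinations))  # 4 * 3 * 2 = 24
```

Which of the 24 rows then collapse to leave 14 depends on which interaction types expose outputs: where outputs are absent, autonomy and adaptability are undecidable, and the corresponding rows merge.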
To further bolster their claim, Floridi and Sanders provide detailed examples of
agents; they cite Webbots and a piece of software called MENACE.
MENACE: In the description of MENACE (a noughts and crosses piece of software),
Floridi and Sanders make reference to the program learning. I suspect that, excluding
adherents of GOFAI, many might take exception to this use of the term “learning.”
Normally, people would say that to be able to say that a system had learned some-
thing, it (in this case, MENACE) ought to be able to say not only what it had learned,
but that it had learned. It would need to be able to answer the question: What has
been learned? I do not know, though, whether it is necessary to be so strict about
such a limited use of the term “learning.”
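For readers unfamiliar with MENACE (Michie’s matchbox machine), the mechanism at issue can be sketched as follows. This is my own illustrative reconstruction, not Floridi and Sanders’ code, and it shows why the “learning” is purely a mechanical adjustment of weights:

```python
import random

# A minimal MENACE-style learner: each board state is a "matchbox"
# holding beads for each legal move; a move is drawn in proportion to
# its beads, and reinforcement merely adds or removes beads. Nothing
# in the machine can report *that* or *what* it has learned -- its
# entire "knowledge" is the bead counts.

class MatchboxLearner:
    def __init__(self, initial_beads=3):
        self.boxes = {}      # state -> {move: bead count}
        self.initial_beads = initial_beads
        self.history = []    # (state, move) pairs for the current game

    def choose(self, state, legal_moves):
        box = self.boxes.setdefault(
            state, {m: self.initial_beads for m in legal_moves})
        # Draw a bead: moves with more beads are proportionally likelier.
        pool = [m for m, n in box.items() for _ in range(n)] or list(box)
        move = random.choice(pool)
        self.history.append((state, move))
        return move

    def reinforce(self, won):
        # Add a bead to every move played if the game was won,
        # remove one (never below zero) if it was lost.
        delta = 1 if won else -1
        for state, move in self.history:
            box = self.boxes[state]
            box[move] = max(0, box[move] + delta)
        self.history.clear()
```

On this sketch, the objection in the text is easy to state: the system adjusts bead counts, but it has no means of answering the question of what has been learned.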
In the paragraph beginning with, “This distinction is vital for current software”
(Floridi and Sanders 2004, p. 361), they make availability the issue for determining
adaptability. There seems, however, to be more to it than this. Along with availability
(of outputs), the knowledge-ability of the system doing the evaluation seems to be
crucial. With sufficient knowledge of how particular systems work, every (or no)
system will be judged an agent. Is the system doing the judging to suspend what
it knows and attend merely to what is at hand (the inputs and outputs of the LoA)? But this
cannot be so, because of the difficulties immediately raised by two related questions:
How is it decided what I am to suspend my beliefs about in order to make the LoA
sensible? What am I to use to be able to even start thinking about determining the
status of the system being evaluated? These require answers before the availability
issue can be settled. As for the second question, merely being able to recognise a
LoA must imply a certain level of knowledge of that system as a system and of systems
as systems generally. Without that, nothing sensible can be said at all about the
(or indeed, any) system. If the LoA is to predetermine what I can count as being able
to be used in my determination of the system-at-hand’s agency, then the whole process
seems to pre-determine the outcome. Any sufficiently ignorant determiner would
see all (or no) entities as (moral) agents. “Indeed only since the advent of applets
and such downloaded executable but invisible files has the issue of moral account-
ability of AAs become critical” (Floridi and Sanders 2004, p. 361).
It seems that downloadability per se is not the problem; rather, two other
things are: the relative ignorance of those affected by their execution (dealt with
above), and their reach. The downloading of executable software has been around
since the 1950s; it is the now-extensive reach of downloading that extends its
sphere of influence and makes it a commonplace concern. Even then, most people
would still not think of it as a problem of the morality of the software agent, but
rather of the originators of such software. To make the case for the criticality that
downloading gives to the AA, it would seem that Floridi and Sanders would need to
show that either the reach was within the intentional grasp of the AA or that there
was something special in the relationship between reach and accountability. This
last part is not clear in their work.
Floridi and Sanders also write: “There are natural LoA’s at which such systems are
agents.” Given my questioning of LoAs so far, I claim that there seem to be no LoAs
that might be called natural, at least not in the sense of the natural world. More
explanation is needed to spell out what is meant by this. They are quite right to say
that the two LoAs are at variance, but there is more at issue than simply the ‘open
source’ versus the ‘commercial’ view. Again, all of this assumes a particular char-
acterisation (dare I say LoA?) of system A, the one doing the comparison.
Webbot: Floridi and Sanders claim that “Since we value our email, a webbot is
morally charged” (Floridi and Sanders 2004, p. 370). Surely my valuing my email
must have nothing to do with the moral status of a webbot. I, personally, do not
value email, while my brother does: does that mean that a webbot is not always
morally charged? That cannot be right. A webbot cannot be both morally charged
and not morally charged: a contradiction cannot prevail.
In the example of Webbots, the phrase “abstracting the algorithm” (Floridi and
Sanders 2004, p. 362) seems almost too convenient. This seems like engineering the
selection to get the outcome rather than morality being a consequence of some
independent selection of abstraction. Further on, they say, “…we do not have access
to the bot’s code” (Floridi and Sanders 2004, p. 362). Access is not the only crucial
problem; whether the evaluator’s knowledge is admissible is another. Access is not
necessary if we are allowed to generalise from prior knowledge of bots. Floridi and Sanders’ demand
must make us necessarily ignorant of how bots work. We are not allowed to use any
knowledge that we have concerning bots. How are we to do anything that resembles
analysis of the bot without this? It does not seem possible. It also seems that one
conclusion that can be arrived at is that the moral agency of another is a function of
the evaluator’s ignorance.
Floridi and Sanders write that the difficulties humans face in attributing the
creation of programs to groups rather than to single persons can be solved by
making AAs morally accountable. I cannot see how this follows. The trail of attribution
in the development, testing, implementation, and maintenance of software is
indeed long and complex, and some of that trail may become lost in the mists of
time and bureaucracy, but this is equally said of human beings. The notion of col-
lective accountability may give the general impression of attributing accountability
across this process, but this also does not necessarily follow. Simply saying that
extending the class of moral agents to include groups as well as artificial agents, as
Floridi and Sanders do, does not mean that such groups are moral agents. Nor does
saying that a group is accountable make it accountable. Surely individual AAs must
suffer from the same difficulties that humans do and have the same difficulties in
relation to groups as humans do if they are to have equal moral status. I do not take
up these further claims in detail here.
3.4 Conclusion
Floridi and Sanders’ program of ethics for artificial agents depends upon two things:
an effective characterisation of agents and a specifiable definition of ethics. Both of
these they claim to have provided, centrally through the use of LoAs; however, I have
found that there are difficulties with each and have suggested where they might be
strengthened. In the end, the construction of LoA2 is too artificial and too simple to
count as a natural characterisation of morality.
References
Allen, C., G. Varner, and J. Zinser. 2000. Prolegomena to any future artificial moral agent. Journal
of Experimental and Theoretical Artificial Intelligence 12(3): 252–261.
Chaitin, G. 1998. The limits of mathematics. Singapore: Springer.
Chaitin, G. 1999. The unknowable. Singapore: Springer.
Clarke, A.C. 1972. Report on planet three. New York: Harper & Row.
Floridi, L. (ed.). 2004. The Blackwell guide to the philosophy of computing and information.
Malden: Blackwell Publishing, Ltd.
Floridi, L. 2008. The method of levels of abstraction. Minds and Machines 18: 303–329.
Floridi, L., and J.W. Sanders. 2001. Artificial evil and the foundations of computer ethics. Ethics
and Information Technology 3(1): 55–66.
Floridi, L., and J.W. Sanders. 2003. The method of abstraction. In The yearbook of the artificial,
ed. M. Negrotti. Bern: Peter Lang.
Floridi, L., and J.W. Sanders. 2004. On the morality of artificial agents. Minds and Machines 14:
349–379.
Gill, A. 1962. Introduction to the theory of finite-state machines. New York: McGraw-Hill Book
Company.
Lucas, R. 2009. Machina Ethica. Berlin: Verlag Dr. Müller.
Putnam, H. 1975. The meaning of meaning. In Mind, language and reality, ed. H. Putnam, 215–271.
Cambridge: Cambridge University Press.
Weckert, J. 1986. Putnam, reference and essentialism. Dialogue 25: 509–521.
Wittgenstein, L. 1997. Philosophical investigations, 2nd ed. Cambridge, MA: Basil Blackwell.
Chapter 4
The Homo Poieticus and the Bridge
Between Physis and Techne
Federica Russo
Very few would deny that the advent of computers radically changed our lives,
let alone science and society. Some—notably Luciano Floridi (2008, 2009)—even
equate the ‘digital revolution’ in importance to the Copernican, the Darwinian,
and the Freudian revolutions. The first, putting the Sun at the centre of the universe,
radically changed the position of Man and his perception of himself with respect to
Nature. The second, finding common ancestors of the various species, dissolved the
supposedly privileged place of Man in the biological kingdom. The third, discovering
the unconscious dimension of the mind, made Man realise that he is neither fully
rational nor transparent, even to himself.
The core change behind the digital revolution is that we are becoming aware of
our status as informational organisms among many others—an idea that traces back
to Alan Turing. In his pioneering paper ‘Computing machinery and intelligence’
(1950), Turing asked the controversial—perhaps even irreverent—question of
whether machines can think, and discussed the imitation game as a test for intelligence.
Reading Turing some 60 years later, with hindsight, we more easily realise that his
arguments contained not simply the seeds of a new area of research—artificial
intelligence—but of an altogether different way of looking at intelligent beings
(we, the humans) in relation to ourselves, to the environment, and to the (digital)
artefacts we are the creators of.
This is the immense change the digital revolution carries forward. We humans
lose our privileged place in an anthropocentric world and slowly become aware and
F. Russo (*)
Center Leo Apostel, Vrije Universiteit Brussel
Centre for Reasoning, University of Kent
Department of Philosophy, University of Kent, Kent, UK
e-mail: f.russo@kent.ac.uk
accept that we are informational organisms, or, as Floridi says, inforgs. Being inforgs
means that we are not, after all, so different from other intelligent engineered
artefacts—in fact, Turing was not embarrassed at all in asking whether
machines—that is things, artefacts—can think and be intelligent. As a matter of
fact, we share with intelligent engineered artefacts something essential: the infor-
mational environment or, as Floridi says, the infosphere. The infosphere is the global
space of information, which includes the cyberspace as well as classical mass media
such as libraries and archives. If the infosphere is the whole space of possible
information, then nature belongs to the infosphere too. Thus, recognising that we,
intelligent humans and intelligent engineered artefacts, equally share this space
brings to the fore the need to reinterpret Man’s position in reality—that is, Man’s
position in the infosphere.
The strength of Floridi’s arguments about the digital revolution is that we do not
have to think of post-modern science-fiction environments in which humans are
de-humanised and AI technology has taken over. The digital revolution is a revolution
that we have been living through since the pioneering work in information technologies,
and one that is nowadays blossoming—just think of how many ‘digital’ actions we
perform from when we wake up in the morning until we go to bed. The digital
revolution, in other words, changed at once our interaction with the external world
and our views about who we are. Whilst Floridi argued that such a radical change
concerns our role as ethical agents, I will further argue that the radical change also
concerns our role as epistemic agents, in the sense of agents that aim to acquire
knowledge about the surrounding world, and as agents that engage in poietic, that is,
creative and productive, activities.
More importantly, the digital revolution, according to Floridi, brings up again
questions about the relations between physis and techne, respectively understood as
nature and reality on the one hand and as practical science and creation of artefacts
on the other hand. The digital revolution, in particular, is increasingly changing the
physis, in the sense of the ‘off-line’ world. Our off-line world, made of real physical
objects, is itself becoming part of the ‘digital’ infosphere, because the distinction
between ‘on-line’ and ‘off-line’ is becoming more and more blurred, until it
disappears. Information technologies are creating altogether new e-nvironments
that pose new challenges for our understanding of ourselves in the world.
But those arguments, I will argue, are not confined to the digital revolution.
The question of the tension between physis and techne is raised by technology, in
general, and in particular by the emerging technologies, such as bio- or nanotech-
nologies, and consequently by digital technologies too. The digital revolution is in
fact a technological revolution and as such it encompasses extrovert and introvert
changes in our understanding of the world, of ourselves, and of ourselves-in-relation-
with-the-world. Whilst Floridi emphasises that the fourth revolution is digital and
that it therefore affects the position and role of man as ethical agent, I want to
emphasise that the fourth revolution is a technological revolution and that it there-
fore affects the position and role of man as epistemic agent engaging in various
poietic activities.
As I shall discuss thoroughly in Sect. 4.3, the revolution technology brings in is
a shift in the tools to acquire knowledge about the world. As soon as we understand
that intervening on nature grants us epistemic access to nature and opens up new
possibilities for the creation of artefacts, (pure) science ceases to be the privileged
lieu for knowledge. The new configuration is that of a techno-science, where the
poietic aspect is no less important than the noetic one.
It is along those lines that, I think, we have to read the considerations that
Nordmann (2004) makes about technoscience. Technoscience, he says, is character-
ised by a shift of focus from representing to intervening, plus a change in societal
expectations and in the way researchers see themselves. The vocabulary chosen by
Nordmann is borrowed from Hacking’s well-known Representing and Intervening
(1983). The choice is certainly not accidental, and is in fact well calibrated. Hacking
gives us grounds to cultivate the idea that the importance of intervening on nature lies
in the fact that it changed the way we, as epistemic agents, relate to nature.
Interestingly, Ihde (1991) even reverses the perspective: he talks about science’s
embodiment in technology and in the experiment, rather than technology and exper-
iment entering the scientific realm.
Carrier (2004) also investigates the tension and the possible reconciliation between
physis and techne, albeit in slightly different terms. He argues for a sort of reconcili-
ation between the two approaches, on the grounds that there is no substantial difference
between scientific (theoretical) modelling and modelling in the applied sciences.
Carrier’s argument ultimately aims to undermine the view of those who claim that
the applied sciences are inferior on the grounds that their modelling is more
local in scope. But, the argument goes, more local models also contribute significantly
to theoretical research and are not a distinctive feature of applied science.
I would like to further argue that what is special about the emerging technologies
is that they are not only making new discoveries, but are altogether creating new
environments. Those environments are at once cognitive—in the sense of the space
of knowledge—and applied—in the sense of the space of application of such knowl-
edge. Nanotechnologies exemplify this situation quite well. On the one hand, nano-
science is discovering that materials have different properties at the nanoscale and
at the macroscale. These new properties are opening up possibilities for a new
understanding of matter, because the same material displays different properties
depending on the scale of analysis, as well as for new applications in domains as
different as nanomedicine and the food sector. But for this very same reason new
ethical challenges arise. The reason, simply put, is the following. There is uncertain
and partial knowledge about the nanoscale, and at the same time there is strong
enthusiasm and élan for new applications, the creation of new artefacts, etc. The
question arises whether there exist unknown risks for health and the environment. Unknown,
because the biological activity of nano-materials depends on parameters that are not
considered by classical toxicology. This situation leads the various stakeholders
(nanoscientists, technologists, policy makers, lay-people, philosophers) to worry
about the consequences of licensing the use of nanoartefacts, for instance.
But perhaps the ethical worries arising from the emerging technologies ought to
be accompanied, and even preceded, by epistemological worries. It is in this sense
that, it seems to me, the new environments created by technology once more put up
front questions about the relation between ‘physis’, to be passively observed, and
‘techne’, as a practical and applied science. The gap, as is typically understood, concerns the
Elsewhere, Floridi suggested that the reconciliation between physis and techne
might be provided by the notion of homo poieticus (see Floridi and Sanders 2003).
The homo poieticus is the ethical agent in the era of technology: she is the creator
of the situations subject to ethical appreciation. Such a constructionist framework
goes beyond traditional ethics and is suited to the new environments created by
technology. The advantage of a constructionist ethics lies in the fact that, unlike
traditional ethics, it does take into account the genesis and the various circumstances
that led the agent to be in the situation she is facing. Instead, traditional ethical
accounts, whether in the framework of consequentialism or virtue ethics, take the
situation as ‘given’, so to speak. But this, argues Floridi, neglects what is perhaps
the most important feature of the ethical agent in the digital era: her poietic skills.
In this paper, I also take up the challenge of reconciling physis and techne.
The underdeveloped notion of homo poieticus, I will argue, is the bridge between
physis and techne. Following in Floridi’s footsteps, I want to argue that the homo
poieticus is not just the ethical agent. The homo poieticus is also the technoscientist,
as a creator of crafts and of knowledge, and the philosopher, as a creator of concepts.
On the one hand, the technoscientist uses technology both as a means to know the
world and as a means to create new ‘objects’. Unlike the Aristotelian scientist that
passively observes the world, the Baconian technoscientist is a ‘constructionist
epistemologist’ that builds, designs, and models reality to create knowledge. On the
other hand, the philosopher, in this perspective, becomes a ‘conceptual constructionist’:
facing new epistemological and ethical environments, the philosopher cannot content
herself with applying old concepts or perhaps with adjusting them to the new setting.
The philosopher has to integrate herself in this ‘poietically enabling’ environment
and create new modes of thinking.
The paper is organised as follows. Section 4.2 presents the figure of the homo
poieticus in Floridi’s work on computer ethics. Section 4.3 extends the notion of the
homo poieticus first to the technoscientist, and then to the philosopher. Section 4.4
closes the paper by drawing general conclusions about the relations between ethics and
epistemology.
As mentioned earlier, Floridi introduces the notion of homo poieticus in the context
of what he calls the ‘fourth revolution’, which is the digital revolution. Notably,
Floridi is interested in developing a new ethical approach able to cope with the
situations that ethical agents, as inforgs, create in the infosphere.
The reason to look for a new approach is that traditional ethical theories all
encounter the same problem. Traditionally, ethical discourse focused on what is
right and what is wrong to do in a given situation. Floridi stressed the point that
hardly any traditional ethical approach considers how the ethical agent got into
the situation she is in. This is why Floridi groups traditional ethical theories under
the label ‘reactive approaches’. The only aspects that count are the values (in virtue
ethics) or the consequences (in consequentialist ethics) of the action taken in a given
situation. Nevertheless, the point Floridi wants to make is that behaving morally is
not just to be judged a posteriori based on values or on consequences. Behaving
morally starts much earlier than the moral judgement: it has in fact to do with
“constructing the world, improving its nature and shaping its development in the
right way” (Floridi and Sanders 2003). Moral behaviour has to do, in Floridi’s view,
with the poietic skills of ethical agents. This poietic dimension is even pushed further
(Floridi and Sanders 2003):
In a global information society, the individual agent (often a multi-agent system) is like a
demiurge. Her ontic powers can be variously exercised (in terms of control, creation or
modelling) over herself (e.g. genetically, physiologically, neurologically and narratively),
over human society (e.g. culturally, politically, socially and economically) and over natural
or artificial environments (e.g. physically and informationally).
Thus, what is needed to cope with the poietic skills of the ethical agent is a
‘proactive approach’, that is a ‘constructionist’ approach to ethics. A proactive,
rather than reactive, approach emphasises that the agent plans and initiates action
responsibly, thus reducing reliance on ‘moral luck’.
Moral luck refers to the problem of morally assessing an agent for facts, factors,
or situations over which she has no full control. On the face of it, it is an acceptable
principle, in any ethical theory, that agents should be morally assessable only for
what is under their control (the Control Principle). However, everyday life shows that
this is not the case—i.e., that we do not have full control of the situations we are in.
Moreover, everyday life also shows that agents indeed undergo moral assessment
in such situations. An apparent impasse thus arises because, adhering to a narrow
version of the Control Principle, we end up in a situation where we cannot assess
anyone for anything (for an introduction and discussion on the problem of moral
luck, see Nelkin 2008).
A constructionist ethics can overcome the problem of moral luck because, if
moral behaviour is but one of the poietic actions of the agent, then there will
certainly be at least some factors over which the agent had control and that led her
to be in the situation undergoing moral assessment.
The environment created by the digital revolution is a “poietically-enabling
environment, which both enhances and requires the development of a constructionist
ethics” (Floridi and Sanders 2003). The moral agent in such an environment is, as
Floridi calls it, a homo poieticus. The homo poieticus focuses not only on the results
of her actions in order to use and exploit them, but also on the processes that lead to
those results. Thus, she is truly the ‘maker’, that is the creator and initiator, of both
the situation she happens to be in and of the actions she decides to take. She is not
simply a homo faber—who uses and exploits natural resources—nor simply a homo
oeconomicus—who produces, distributes, and consumes wealth. In the infosphere,
the homo poieticus herself creates and alters digital constructs. This does not neces-
sarily mean being ourselves the creators of some digital artefact such as a computer
program, or of a technological device to get connected to the internet, etc. It may
simply mean using any object that takes us into the ‘online’ dimension. Floridi uses
the example of following the instructions of a GPS: in spite of appearances, this
simple and now very common action already has an online dimension. But there is
more than that. As Floridi says, “as a new social space and digital environment, it
has also greatly enhanced the possibility of developing egopoietic, sociopoietic
and ecopoietic projects” (Floridi and Sanders 2003), that is, as the words suggest,
projects about the individual as a persona, about the social environment she shares
with other individuals, and about the larger environment she is in.
In Floridi’s view, the ‘homo poieticus’ is a successful way of describing the
ethical agent in the ‘cyberspace’ (as well as in the world ‘out there’) because it goes
beyond the approach of ‘situated action ethics’ by appreciating the artefacts and the
new technology, as well as the creator of these new artefacts. In other words, a
constructionist ethics suits the emerging information technology exactly because it
puts up front its main characteristic: the creation of a special kind of artefact—the
digital artefact.
Galimberti (1999) insists that the origins of man’s poietical skills are to be seen
in the intrinsic biological and instinctual incompleteness of man, leading him to
develop technological tools and methods to overcome this situation. It is thus in this
sense that techne is the very essence of man. The thesis of an instinctual incomplete-
ness of man, leading him to develop other skills to survive in the world, has been
anticipated by a number of thinkers from Plato to Bergson, passing through Aquinas,
Kant and Nietzsche. The Greeks had illustrated it vividly in the myth of Prometheus.
Prometheus steals technical wisdom and fire from Hephaestus and Athena and gives
them to man in order to supply a lack: unlike the other creatures, man is naked,
barefoot, and defenceless. But Prometheus could not give man practical and political
wisdom, as these were with Zeus.
In the next section I want to argue that there is much more about the homo
poieticus. Whilst Floridi focused on the homo poieticus as the ethical agent, I develop
this notion further: the homo poieticus is also a technoscientist and a philosopher.
The Greeks were perhaps the first who tried to study the world scientifically, that is
independently of religious questions. The Greeks were in fact interested in finding
the physical principles governing the cosmos (Ficham 1993). Many would agree
that Aristotle was indeed a pioneering scientist, especially in the field of biology.
Many others would argue, though, that science—at least in its modern acceptation—
could not begin until some ‘basic principles’ of the Aristotelian method had been
discarded. In particular, Aristotle and his students at the Lyceum carried out scientific
investigations through empirical observations and the collection of facts.
The idea that the natural world is known by passive observation stands in sharp
contrast with the modern conception of science and of the scientific method.
Arguably, more than in discarding the basic principles of the Aristotelian method,
the main change in modern science concerned the introduction of new tools to
acquire knowledge. One such new tool is experimentation. For Aristotle, experimen-
tation is not a means to acquire knowledge but just a means to illustrate knowledge
already acquired (for a discussion, see Harris 2005, ch. 1). The scientist, according
to Aristotle, aims to establish the ‘first principles’—science is episteme, namely
knowledge of the physis through its contemplation (theoria). On the contrary, science
is not techne, namely practical or practically oriented science. In other words,
science is characterised by noetic goals. Poiesis, instead, is confined to the arts, to
techne, and does not allow one to reach the upper kingdom of episteme.
Let us now make a very long jump forward in time. Since the Scientific Revolution
(ca 1550–1700), the natural world is a world that the scientist actively interacts with
and manipulates in order to both know and create. The shift is from an ‘organic’
view of the cosmos, typical of the Greeks and perpetuated in the Middle Ages, to a
‘mechanical philosophy’ that bright and pioneering scholars such as Francis Bacon,
René Descartes, Galileo Galilei and Isaac Newton started to develop. The change
has been so profound that ‘science’ does not just connote ‘knowledge’ and
‘understanding’, but also embodies, rather than opposes, practical skills (Ficham 1993).
It is in fact with Bacon that science becomes a scientia operativa (Klein 2008,
2009): to come to know about the world the scientist does not just passively observe
it, but she interacts with it. The modern scientist is a maker; she performs
experiments, namely she actively manipulates factors to find out what causes what
(Ducheyne 2005). Experiments, in Bacon’s view, are tools to acquire new information;
for Galileo, they are also tools to test theories (Ficham 1993).
Making experiments is thus a way to make, build, construct truth—this is in
opposition to an ancient truth of physis simply to be discovered. Galimberti also lucidly
explains the tension between physis and techne. He sees a deep difference between
the way the Greeks and the Moderns mathematise Nature. He says (1999, p. 313):
In this respect the difference is abyssal: whilst for the Greek mathematics is the order of
nature in its making itself manifest (aletheia) to man, for the scientist in the Modern age
mathematics is the order that man assigns to nature, forcing it to respond to the anticipated
hypotheses.1
In sum, there are two major innovations introduced by scholars of the Scientific
Revolution: (i) in order to know we need to make, and (ii) what we know is going to
be of some practical use. These are, in short, the cornerstones of the concept of
technoscience. As a corollary, the technoscientist, as I will discuss next, is a homo
poieticus, that is an epistemic agent that creates both crafts and knowledge.
Let us consider the creation of crafts first. The technoscientist produces the
‘objects of technology’, e.g. computers, nuclear weapons, medical devices. In general,
these are humanly fabricated artefacts. In a classic treatment, Lewis Mumford proposed
a categorisation of technological objects that included utensils, apparatus, utilities,
tools, and machines (see for instance Mumford 1934). Later, Mitcham (1994)
added the following to Mumford’s categorisation: clothes, structures, and
automata or automated machines. This list of technological artefacts includes ‘tools
of doing’ and ‘tools of making’ alike. Needless to say, there are interesting remarks
to be made about the distinctions between ‘tools of doing’ and ‘tools of making’.
Also, one may debate about alternative categorisations of technological tools.
Much can indeed be learned from the phenomenology of artefacts investigating, for
instance, their personal or societal effects, or the way they may extend human capa-
bilities and, consequently, alter our experience with the external world (Ihde 1979).
But I will not enter those debates here. What interests us the most is that technological
objects—crafts—are the products of the poietic activity of the technoscientist.
In other words, the technoscientist is essentially a homo poieticus. Although Floridi’s
homo poieticus was essentially a creator of e-nvironments, it is legitimate to extend
the notion to the technoscientist because she also creates.
But there is another aspect of the poietic activity of the technoscientist that is of
relevance here: the technoscientist creates knowledge. This, we shall see, is
somehow the trait d’union between the homo poieticus in her role of technoscientist
and in her role of philosopher.
Let us then turn our attention to the creation of knowledge. As before (namely
concerning the creation of artefacts), Floridi does not explicitly consider the homo
poieticus to be a creator of knowledge. Yet, some insights about the technoscientist
1 Qui la differenza è abissale: se per il greco la matematica è l’ordine della natura nel suo manifestarsi (aletheia) all’uomo, per lo scienziato dell’epoca moderna è l’ordine che l’uomo assegna alla natura, costringendola a rispondere alle ipotesi su di essa anticipate. (My translation.)
4 The Homo Poieticus and the Bridge Between Physis and Techne 73
What characterises the homo poieticus is her making, producing, not only (digital)
artefacts but also knowledge through technoscience. I want to further argue that
‘making’ also involves different and, perhaps, higher spheres of the process of
making: producing and using thought and ideas.
Again, the seeds are in Floridi’s work, and hopefully the discussion that
follows will provide fertile ground for them to grow. Floridi (2010, ch. 1) embraces a particular
view of philosophy, namely as conceptual engineering: “Philosophy is the art of
identifying conceptual problems and of designing, proposing and evaluating explan-
atory solutions.”
In this perspective, philosophical investigation is neither fully logico-mathematical
nor fully empirical. This view clearly goes against early stances à la Carnap (1935)
and Reichenbach (1951), but also against very recent formal trends in philosophy—
see for instance the work of groups in Tilburg, Leuven, or Konstanz, just to mention
some scattered over Europe.
Reichenbach (1951, p. 123), for instance, expressed his viewpoint about the need
for logical analysis of scientific problems thus:
It was not until our generation that a new class of philosophers arose, who were trained in
the techniques of the sciences, including mathematics, and who concentrated on philo-
sophical analysis. These men saw that a new distribution of work was indispensable, that
scientific research does not leave a man time enough to do the work of logical analysis, and
that conversely logical analysis demands a concentration which does not leave time for
scientific work—a concentration which because of its aiming at a clarification rather than
discovery may even impede scientific productivity. The professional philosopher of science
is the product of this development.
Through the greatness of the universe which philosophy contemplates, the mind also is
rendered great, and becomes capable of that union with the universe which constitutes its
highest good.2
Nevertheless, Russell doesn’t tell us yet what the philosopher does exactly. Gilles
Deleuze and Felix Guattari (1994) are instead much more specific about that.
The philosopher, they argue, creates concepts. Philosophy is not just contemplation,
reflection, or communication. These are activities that any discipline or science can
do without claiming to do philosophy. Here is the lengthy passage from What is
philosophy? (Deleuze and Guattari 1994, pp. 5–6):
More rigorously, philosophy is the discipline that involves creating concepts. […] We can
at least see what philosophy is not: it is not contemplation, reflection, or communication.
This is the case even though it may sometimes believe it is one or the other of these, as a
result of the capacity of every discipline to produce its own illusions and to hide behind its
own peculiar smokescreen. It is not contemplation, for contemplations are things them-
selves as seen in the creation of their specific concepts. It is not reflection, because no one
needs philosophy to reflect on anything. It is thought that philosophy is being given a great
deal by being turned into the art of reflection, but actually it loses everything. Mathematicians,
as mathematicians, have never waited for philosophers before reflecting on mathematics,
nor artists before reflecting on painting or music. So long as their reflection belongs to their
respective creation, it is a bad joke to say that this makes them philosophers. Nor does
philosophy find any final refuge in communication, which only works under the sway of
opinions in order to create ‘consensus’ and not concepts.
What the philosopher does is to find new concepts that explain and account for
the phenomena around us. Given the ever-changing character of reality, we cannot
expect philosophy to find eternal and everlasting concepts. As the world
changes, so do the concepts we philosophers create to make sense of it. Paradigmatic
examples of concepts created by philosophers in the past are, in the eyes of Deleuze
and Guattari, the ‘I’ of Descartes, that is, the concept of self, or the concept of
One and the concept of Idea in Plato’s philosophy. Deleuze and Guattari employ the
term ‘constructivism’ exactly to denote this philosophical activity of making up
concepts.
Consider now present-day philosophy. Philosophy of information invented the concepts
of the infosphere and of inforgs. Philosophy of technology invented the concept of
technoscience. The corresponding sciences could not have invented these concepts. The reason is
that such concepts are the answers to philosophical questions about the surrounding
phenomena, not to scientific problems. At best, scientific disciplines can give new
names to scientific objects or phenomena, but these are not philosophically loaded
per se, nor through the reflection of the scientist. To give another example, scientists—
notably von Bertalanffy (1968)—introduced the concept of ‘system’ and made a start
in what is now called system analysis or systemics; but philosophers—e.g. Bunge
(1979a and 2000)—created the concept of ‘system’ to explain a new approach to
reality and knowledge.
2 Quoted from the online version of the book http://www.ditext.com/russell/rus15.html, accessed 4 May 2010.
So far, I have presented the homo poieticus in the clothes of the ethical agent, and I have
argued that she also wears the clothes of the technoscientist (who creates artefacts and knowledge)
and of the philosopher (who creates concepts). In this final section I would like to draw
some conclusions about what I think is really at stake, philosophically speaking, in this
reconciliation between physis and techne, through the figure of the homo poieticus.
78 F. Russo
Let me start with an insightful quote from Carl Mitcham’s work. He says that
even a history of ideas about technology should be “the study of how
different periods and individuals have conceived of and evaluated the human
making activity, and how ideas have interacted with technologies of various sorts”
(Mitcham 1994, p. 116).
Now, the homo poieticus allows us to do just that. As a ‘maker’, the homo
poieticus embodies the many aspects of the human making activity: the creation of
situations liable to be morally assessed, the creation of crafts and knowledge, and
the creation of (philosophical) concepts.
Seen through the eyes of the homo poieticus, technology can be conceived of, with
no further tension or contradiction, both as ‘knowledge’—that is, as a means to
acquire knowledge about technological artefacts as well as natural objects—and as
creation of artefacts in the strict sense of the Greek technē or of the Latin ars. But in
a constructionist perspective, technology can also be conceived of as an activity.
Mitcham (1994) lists the following as possible technological activities: crafting,
inventing, designing, manufacturing, working, operating, maintaining. Here, the
activity may concern the ‘action of making’ or the ‘process of using’.
Once we refer to the purpose or end for which the technical artefact is used,
this action is ipso facto subject to ethical evaluation. The challenge of ethical theory
in response to the rise of technology is not only to enlarge its scope in order to cope
with new situations—think of issues raised with regard to the environment (e.g.,
nuclear weapons) or to the individual (e.g., cloning, transplants), or to the
consequences of the information society (e.g., individual privacy, corporate security). The
challenge is also, as Floridi rightly noticed, to change the ethical theory in order to
cope with the roles—technoscientist, ethical agent or philosopher—man has in the
era of technology. There is one word that summarises those roles: poiesis.
The original tension between physis and techne lay in forces apparently pulling in
opposite directions: passive observation of the world versus active manipulation of it.
But technology is to be seen as an opportunity for the agent to better know and act
upon the surrounding world, not as the culprit responsible for that tension. Technology asks
new questions with respect to ‘classical’ epistemology. Interestingly enough, many of
the questions and worries technology raises (particularly emerging technologies
such as bio- or nanotechnology) crucially depend on what we know about these
emergent spaces of possibilities. Until we make clear how we can know about
the new environments created by technology, any ethical appreciation, especially if
anchored to traditional ethical accounts, will be partial and inappropriate.
In other words, if a constructionist ethics is needed (according to Floridi) for the
poietic environments created by the digital revolution, a constructionist epistemol-
ogy is in turn needed for a constructionist ethics (according to the arguments given
here). The reason is that, in Floridi’s words, “the chances of constructing an
ethically good x increase the better one knows what an ethically good x is, and vice
versa. Constructionism depends on a (satisfactory epistemic access to, or under-
standing of, the) relevant ontology” (Floridi and Sanders 2003).
Floridi is not an isolated voice in promoting this alliance between epistemology
and ethics. For instance, Ferrari (2010) urges a contextualisation of the ethical
discourse within ontological, epistemological, socio-economic, and political reflection.
Ferrari’s arguments are tested against the specific case of nanotechnology; she is
particularly interested in discussing the limits of ethical approaches, such as
consequentialist or deontological approaches, that frame all issues in terms of
cost-benefit analyses. The consequentialist, for instance, cannot make reliable predictions
(due to the high uncertainties at the nanoscale) and therefore cannot perform reliable
risk-benefit analyses. To this pars destruens, Ferrari (2010) adds a pars construens:
“A rigorous unpicking of the ways in which trust informs the work of scientists,
affects their social embeddedness, and plays a role in the social construction of
technology is still lacking.”
Ferrari’s overall conclusion is thus that epistemological issues do have a bearing
on ethical issues. The main epistemological issue she identifies is, for the case
of nanotechnology, the following: “The absence of a commonly accepted definition
of nanotechnologies has precise epistemological implications, because it influences
the setting and legitimisation of scientific research areas and therefore the scope of
the research” (Ferrari 2010). But this situation is not confined to nanotechnologies.
Her argument, in fact, generalises to technologies in that “the setting of goals clearly
has ethical implications, because goals and aims are shaped by society and because
goals are matters of research policy—in particular through priority-setting”.
Floridi, recall, urged us to work towards a successful reconciliation between
physis and techne. The stumbling block seems to be, though, the non-neutral char-
acter of technology. Galimberti (1999) cogently argues that the non-neutrality also
stems from the fact that techne is already the environment we are in, not simply the
object of our choice. To be sure, the tension between physis and techne arose because
the Moderns, by manipulating Nature, overstepped its insuperable limits. In the Greek
world, men could not dominate the order of Nature but only ‘reveal’ it. It is for this
reason that revealing the truth (a-letheia) of Nature (physis), that is, contemplating
Nature (theoria), leads to the kind of knowledge that regiments human action and
production (praxis and poiesis). This was the origin of the supremacy of theory over
praxis in the Greek world. As Galimberti accurately explains again, for the Greeks
there cannot be correct technological or political action without knowledge of the
immutable laws of Nature.
But the situation has changed. On the one hand, techne, that is, poiesis, also
contributes to acquiring knowledge of the physis. On the other hand, science and
technoscience do not discover immutable and eternal truths. Yet, with due amend-
ment, we should follow the advice of the Greeks: sound knowledge of the world
positively contributes to making better decisions and taking better actions both in
technological and in political contexts.
In sum, a successful marriage between physis and techne, to echo Floridi, is
achievable and also utterly desirable. The reason is not only a ‘restyling’ of the
ethical agent in the clothes of the homo poieticus, but also the need for an improved
awareness of the technoscientist with respect to her poietic skills. The two should
not travel on parallel tracks that never cross. Instead, they should aim to cross paths
to improve our experiences as moral agents and as technoscientists. One may then
wonder how to make those tracks cross one another. It seems to me that it is the task
of the ‘conceptual engineer’, i.e. of the philosopher, to engage with such a poietic
activity.
Acknowledgements I wish to thank Hilmi Demir for organising this volume on Luciano Floridi’s
philosophy of technology and for encouraging me to contribute to the debate. I would also like to
thank Luciano Floridi for discussing with me the core idea of the paper at the very beginning of its
gestation. Phyllis Illari was (as always!) kind enough to provide very useful and stimulating
comments at the mid-stage draft of the paper. Thanks to the pressing suggestions of Cristiano
Turbil, I undertook the reading of the complex work of Galimberti. Finally, financial support from
the British Academy is also gratefully acknowledged.
References
Mitcham, C. 1994. Thinking through technology. London: The University of Chicago Press.
Mumford, L. 1934. Technics and civilisation. New York: Harcourt Brace.
Nelkin, D.K. 2008. Moral Luck. In The Stanford encyclopedia of philosophy, Fall 2008 ed, ed.
Edward N. Zalta. Stanford: Stanford University. http://plato.stanford.edu/archives/fall2008/
entries/moral-luck/. Accessed 4 June 2010.
Nordmann, A. 2004. Collapse of distance. Epistemic strategies of science and technoscience.
Danish Yearbook of Philosophy 41: 7–34. http://www.unibielefeld.de/ZIF/FG/2006Application/
PDF/Nordmann_essay2.pdf. Accessed 4 June 2010.
Reichenbach, H. 1951. The rise of scientific philosophy. Berkeley/Los Angeles: University of
California Press.
Russell, B. 1912. The problems of philosophy. Oxford: Oxford University Press.
Turing, A.M. 1950. Computing machinery and intelligence. Mind 59: 433–460.
van de Poel, I. 2009. The introduction of nanotechnology as a societal experiment. In Technoscience
in progress. Managing the uncertainty of nanotechnology, ed. S. Arnaldi, A. Lorenzet, and
F. Russo. Amsterdam: Ios Press.
von Bertalanffy, L. 1968. General system theory: Foundations, development, applications.
New York: Braziller.
Part II
The Information Revolution and Alternative Categorizations of Technological Advancements
Chapter 5
In the Beginning Was the Word and Then Four Revolutions in the History of Information
Anthony F. Beavers
In the beginning was the word, or grunt, or groan, or signal of some sort. This, however,
hardly qualifies as an information revolution, at least in any standard technological
sense. Nature is replete with meaningful signs, and we must imagine that our early
ancestors noticed natural patterns that helped to determine when to sow and when
to reap, which animal tracks to follow, what to eat, and so forth. Spoken words at
first must have been meaningful in some similar sense. But in time the word became
flesh (corpus) and dwelt among us, as “inscription” (literally, to put into writing)
inaugurated the dawn of human history. This did not happen instantly. One place to
enter the story is with the clay tokens used to represent trade transactions, which in time became
accounting tablets and, then, the world’s first literature (Enmerkar and the Lord of
Aratta, The Epic of Gilgamesh, etc.) and codes of law (The Codes of Ur-Nammu,
Lipit-Ishtar, Hammurabi, and so forth). This event happened around the north shore
of the Persian Gulf sometime in the fourth millennium BCE and was enshrouded in
mystery as the role of the scribe trained in the art of inscribing and deciphering signs
belonged to the priest (Deibert 1997). With the sanction of religion, writing gave
birth to “civility” (literally, life in the city) and defined the line between “history”
and “pre-history,” the latter being a term designating everything that happened
before. There is little doubt that the invention of writing was significant and that it
deserves recognition as the first revolution in the history of information. Life as we
live it today would have been impossible otherwise.
Innovations in writing technologies happened with significant effects, but at various
points in the history of information, changes in technology were so dramatic that
they reshaped the course of human history in radical ways. The revolution in printing
is well-studied; the invention of the printing press and movable type (c. 1450) has
been credited as the catalyst for the Reformation (sixteenth–seventeenth centuries)
and for allowing the Renaissance (fourteenth–seventeenth centuries) to take hold,
both as necessary contributors to the Enlightenment (seventeenth–eighteenth centuries),
which gave birth to the modern state and innovations in philosophy and science
(Martin 1993; Deibert 1997; Eisenstein 2005). A ripple effect followed the printing
press, requiring a reassessment of the theological enterprise that redefined our under-
standing of the human being’s place in the world and the cosmos, as we went from
being an imago dei (a divine “imprint” made in the image of God) living in nature,
God’s creation just outside the Garden of Eden, to human individuals set afloat in a
solar system, though quite able and endowed with curiosity and reason.
More transformative still was the revolution in information technologies that
began in the middle of the nineteenth century. The invention of the Daguerreotype
(1839) signaled the birth of practical photography; and other mechanical and elec-
trical technologies including the telegraph (1836), the telephone (1877), the phono-
graph (1878), radio (1906) and television (1926) made a multiplicity of informational
media move quickly, crossing spatial and temporal boundaries at an alarming rate to
bring a world of people closer in the span of a few short years. The rise of the modern
corporation and, of course, international, world-wide warfare are tied inextricably
to this information revolution, since neither could have emerged without these
technologies, which also allowed friends and family members to migrate across
geographical locations while remaining “in touch.”
Of recent interest and often credited as the start of the information age is what we
might call the “digital revolution” that began with Alan Turing (1937) and firmly
took hold with the popularization of the PC in the 1980s. It accelerated the flow of
multimedia information so far beyond what was possible in the previous era that
even information visionaries like Thomas Edison and Alexander Graham Bell could
not have imagined its extent, though, as we will see, they anticipated it nonetheless.
Moreover, the introduction of computers into communications technologies added
another dimension to this history by introducing automated information processing.
No longer was informational technology restricted to the mere storage, transmission
and retrieval of information; machines could be built to manipulate it as well.
We live in this context today. Inter-networked digital technologies afford com-
munications between human and artificial information processors (both “inforgs” in
Floridi’s language) that interact together in a collective space (the “infosphere”) to
produce a collective body of information that is archived for easy retrieval. Of
course, these technologies have produced their own variety of toys and with them
mechanisms for several forms of social interaction that range from the trivially (though
not unimportantly) entertaining to the educationally, and even interpersonally,
complex. No doubt, something major is happening around us informationally with the
addition of automated digital information processing to the technological
affordances of previous generations. Sitting at the start of what will surely be an
unimaginably transformational revolution involving everything human and historical,
it is impossible to know now what all of it can mean. But we see its effects emerging
as the geopolitical scene explodes into a global arena populated with multi-national
corporations richer than many countries, and where the mechanisms of civil (and
uncivil!) control rely significantly on the politics of information flow, all the while
we comprehend it through the lenses of computer-mediated information technologies
and interact with each other via email, text message, chat client, Twitter, and
other social networking sites such as Facebook.
A transformation of this magnitude must certainly qualify as a revolution, the
fourth in the history I have outlined here. For the sake of clarity in what follows,
I name them the (1) Epigraphic, (2) Printing, (3) Multimedia, and (4) Digital
Revolutions, making no claims to have discovered them, since each has been studied
in extreme detail. In what follows, I will comment on each revolution in turn before
offering a discussion spawned by Floridi’s notion of “the Fourth Revolution” (see
2008, 2009, 2010, for instance), which corresponds to the last I have enumerated
here. Though we share a name for the fourth, Floridi designates his three previous
revolutions differently. I say this without criticism, because he intends to draw
out the implications of the “Fourth Revolution” in different relief. That is, he largely
situates his revolutions “in the process of dislocation and reassessment of humani-
ty’s fundamental nature and role in the universe” (Floridi 2009, p. 156). Thus, he is
primarily concerned with shifting identities (of both self and world) across
revolutions and their philosophical implications. My comments are of a more
historical nature. Nonetheless, because this reflection is offered as broad commentary
on the context in which Floridi situates the “Fourth Revolution,” it is important to
say something about his taxonomy. Perhaps it is best here to let him speak for
himself:
Science has two fundamental ways of changing our understanding. One may be called
extrovert, or about the world, and the other introvert, or about ourselves. Three scientific
revolutions have had great impact in both ways. They changed not only our understanding
of the external world, but also our conception of who we are. After Nicolaus Copernicus
(1473–1543), the heliocentric cosmology displaced the Earth and hence humanity from the
centre of the universe. Charles Darwin (1809–1882) showed that all species of life have
evolved over time from common ancestors through natural selection, thus displacing
humanity from the centre of the biological kingdom. And following Sigmund Freud (1856–
1939), we acknowledge nowadays that the mind is also unconscious and subject to the
defence mechanism of repression. So we are not immobile, at the centre of the universe
(Copernican revolution), we are not unnaturally separate and diverse from the rest of the
animal kingdom (Darwinian revolution), and we are very far from being Cartesian minds
entirely transparent to ourselves (Freudian revolution). (Floridi 2009, p. 156)
To be clear, I do not doubt the historical reality of these revolutions and the
meaning that Floridi attaches to them, even though we must recognize that any such
talk is pretty coarse-grained (as Floridi does, and as I do of my own views here).
However, we could just as well have added the “Marxist Revolution” into this mix,
citing Marx’s conception of human beings as workers situated in a network of
bureaucratic relations in the midst of industrial and economic transformation and
the incredible efficacy it enacted on the geopolitical stage. This would make Floridi’s
“Fourth Revolution” a 5th, and possibly a 6th or 7th, depending on how one
carves up history. There is also the philosophical question of whether the named
revolutions have come and gone or whether they continue to fight it out in the effort
to reinterpret who we are (see Floridi 2008). (Consider, as an example, the battle
that continues between creationism and evolution in the United States.) These are
mere quibbles, since it is clear that Floridi enumerates his revolutions to provide a
context for characterizing what is happening today as a result of life within the
infosphere. A taxonomy of every historical revolution that has influenced our under-
standing of human identity and its context is not his immediate concern. (To be sure,
this would be an impossible project, in any case.)
Floridi, of course, is not blind to the fact that the information revolution could be
said to begin with writing, noting that this historical usage is “not what is typically
meant by the information revolution” (2010a, p. 4). Nevertheless, casting the Digital
Revolution against the backdrop of these others (the Copernican, Darwinian,
Freudian, etc.) lends focus to what to target in analyzing the information age; so
perhaps something complementary can be said, if Floridi’s “Fourth Revolution”
were to be plotted on the trajectory of the history of information flow itself. My hope
here then is to resituate this central concept in Floridi’s work for just a moment to
help fill out the context for the philosophy of information. To this end, I will present a
short (over-generalized and abridged) characterization of each revolution as I have
laid them out, and then offer a bit of discussion. The next section presents caricatures
that I hope are true enough in their generalities to set the stage for comment.
When speech takes to writing, it transcends the moment to make its mark in space.
Whether this occurrence is a recipe for remembering or forgetting, as Plato ques-
tions in his Phaedrus, the event signals the spatialization of temporal information
and the emergence of an early form of hard storage, useful because it off-loads
information from a brain into a shared environment. Its elegant simplicity is almost
unfortunate, since it easily leads us to overlook its magnitude; in fact, only fairly
recently has research on the impact of cognitive technology (e.g., Norman 1994;
Clark 1997, 2001) made this significance clear. Marks of some sort serve as “stand-ins”
for (or representations of) words, things, or ideas that are etched onto a surface
that preserves them for however long. A technique governs this art, one that in essence
inscribes temporal streams of thought into a spatial arrangement in the act of writing
itself, to be temporally resequenced later in the act of reading. The precise spatial
arrangement is unimportant, whether proceeding from the top of the “page” to the
bottom, from left to right, right to left, or alternating back and forth in tracks like
those left by a plow, so long as the technique of reading follows the proper order for
deciphering signs.
Other technicalities (literally) are quite important. The encoding strategy
(whether using pictographs, ideographs, logographs, a syllabary or letters of an
alphabet) is critical, because it determines the granularity of information that can be
spatial (and thus historical and political) boundaries. Whether polis follows logos,
or logos polis, civility is irrevocably tied to the spread of information. Where one
goes, so does the other; and as the lines of textual dissemination go farther and
faster, polis grows to empire. The Epigraphic Revolution is thus tied to the age of
civilization. Soon, however, Christianity will learn the power of the word, and as
people learn to worship it, the Church will rise to become the curator of ancient wisdom. The
city will decline, only to reawaken toward the end of the Middle Ages, in the thirteenth
century, about the time that paper was introduced into the West from China and
the great Medieval universities were founded. We still live in the wake of this
reawakening.
The Renaissance began in the fourteenth century before the printing press (c. 1450),
advocating a new humanism and supplying a need for texts. Even long before the
Renaissance, books were already on the scene (Diringer 1982). Though the revolu-
tion in printing follows a spark that it therefore could not have ignited, it nonetheless
can be credited in large part for contributing to the Enlightenment, including inno-
vations in philosophy, politics, mathematics and science that brought with them a
new worldview and a new sense of self-awareness (Deibert 1997). It definitely
facilitated the Reformation which depended on the quick duplication and the wide-
spread dissemination of texts (Deibert 1997; Edwards 1993; Eisenstein 2005). So,
when the fifteenth century opened, inventors and an industry were waiting, ready to
respond with what might best be described as the mass production of writing. They
moved quickly too. Citing Saxby (1990) and Febvre and Martin (1976), Deibert
(1997) aptly describes the situation: “About 20 million books were printed before
1500 in Europe among a population at the time of about 100 million. This number
of books, produced in the first fifty years of printing, eclipsed the entire estimated
product of the previous thousand years” (p. 65). He goes on to note that “Febvre and
Martin estimate that 150 million to 200 million were then produced in the next
hundred years.”
Of course, with the demand for books, an industry immediately responded.
Deibert continues:
By 1475, printing workshops had been established throughout the Rhineland, and in Paris,
Lyons, and Seville. By 1480, printing centers had sprouted through all of Western Europe
… in all to 110 towns.... By 1500, the number of towns … had risen to 236. By the sixteenth
century, western Europe had entered a new communications environment at the center of
which were cheap, mass-produced printed documents emanating from the many printing
presses stretched across the land. (pp. 65–66)
Thus, one can perhaps surmise that by 1600 there were approximately two books
in circulation in Europe for every literate and non-literate person, nothing like what
we will see in terms of the information explosion of today, but significant
nonetheless.
It was especially significant in terms of resituating authority and creating a spirit
of individualism. By becoming the primary vehicle through which the Protestant
Revolution would take hold, the Printing Revolution challenged the hegemony of
the Catholic church. Equally important, as Lawhead (2002) points out, is that it
engendered a sense of epistemic Protestantism as well. Just as with regard to theology
Protestantism provided the faithful with a direct line to God, human beings were
resituated with regard to the study of what was the case. Individual minds were now
conceived as having direct access to a truth that could be discovered by following
the proper methods. The scientific revolution unfolded in this light, and with it, the
sense of rationally-enlightened individualism that would support the rise of our
modern democracies. Coupled with the rapid increase in texts published in the
vernacular, a new sense of national identity also emerged (Deibert 1997).
In broad terms it seems fair to say that by the eighteenth century it was more
fashionable to be a well-informed individual than a child of God, or, at least, that
God had been redefined as a divine architect whose essence could be read directly
off of “the book of nature,” in which case being a child of God meant being a well-
informed inquirer in pursuit of truth, metaphorically “enlightened” no longer by
mystery or divine inspiration, but by reason. The appearance of two texts bears
witness to this transformation even in their titles: John Toland’s Christianity Not
Mysterious: or, a Treatise Shewing That There Is Nothing in the Gospel Contrary to
Reason, Nor Above It, and That No Christian Doctrine Can Be Properly Call’d a
Mystery, first published in 1696, and Matthew Tindal’s Christianity as Old as
Creation; or, the Gospel as a Republication of the Religion of Nature, published in
1730. In the beginning was the word, and in the emerging religion of the Enlightenment
it was printed in nature itself and republished in the form of scripture.
The printing press, and indeed the printing metaphor itself, will thoroughly take
hold before the eighteenth century closes, spreading literacy, a new authority in a
new institution of authorship, and a collection of enlightened minds, empowered
and able to govern themselves as informed citizens of democratic states. Indeed, as
a result of the Printing Revolution, the word was now set free. Though several will try
in subsequent generations, there will be no taking it back, and as free inquiry, indi-
vidual invention and experimentation carry us through the next century and physics
transforms into mechanical and electrical engineering, the flow of information itself
will be industrialized. We still live in this era of industrialized information flow.
The Multimedia Revolution started with a distant sound beeping out in dashes and
dots, taking letters that originally code for sound and matching them to other audible
92 A.F. Beavers
patterns that could be easily sent over a wire. Just two tokens could represent every
letter, readily affording the transmission of writing over distances. This event is
significant because it decoupled the flow of information from the exigencies of
transportation technology. Where previously text could be transmitted only by physically
moving it around, now it could move on its own, independent of the courier, caravan
and wagon cart. Before this revolution is finished, technology will increase the
speed of transmission in ways never before imaginable, transcending the wires in
time to take to the airwaves, sending moving text, pictures and sound directly to our
living rooms thanks to the marvels of radio and television (Winston 1998).
The history of technological innovation during the Multimedia Revolution is
convoluted and complex. Even trying to describe it primarily in terms of the reach
of information would exceed the space allowed, since the industrial revolution
industrialized information flow itself, providing a sudden escalation in the develop-
ment and spread of information-based technologies. Some of these (along with their
approximate date of invention) include: Telegraphy in 1836; The Daguerreotype in
1839; The Telegraphic Printer in 1856; The Stock Ticker in 1863; The Telephone in
1877; The Phonograph in 1878; The Light Bulb and the Photophone in 1880;
Wireless Telegraphy, Wax Cylinder Phonography and the Motion Picture Camera,
all in 1891; The Rotary Telephone in 1898; Radio and Teletype in 1906; Television
in 1926; Electric Phonography in 1927; and Magnetic Tape in 1928. Innovation
continued into the second half of the twentieth century with Cable Television in
1948; Cassette Tape Recorders in 1958; Touch Tone Phones in 1963; Color
Television in 1966; and the VCR in 1969.
Though innovation in information technologies continues to this very day (altered
greatly by the digitization of information and the sudden popularity of the personal
computer in the early 1980s), even early on in the Multimedia Revolution major
effects were already being felt (Beavers and Sigler 2010). Just 34 years after the
invention of the telephone, a full length history of it appeared. Herbert Casson’s
History of the Telephone in 1910 paints a vivid picture of the social changes engen-
dered by its arrival on the scene. He writes:
What we might call the telephonization of city life, for lack of a simpler word, has remark-
ably altered our manner of living from what it was in the days of Abraham Lincoln. It has
enabled us to be more social and cooperative. It has literally abolished the isolation of sepa-
rate families, and has made us members of one great family. It has become so truly an organ
of the social body that by telephone we now enter into contracts, give evidence, try lawsuits,
make speeches, propose marriage, confer degrees, appeal to voters, and do almost everything
else that is a matter of speech. (p. 199)
When we look back from the perspective of today, it might initially seem that the
trajectory of technologies that culminate in our networked world was accidental.
But the inventors behind this revolution were conscious of what they were doing,
what was happening around them as a result and where we were headed with regard
to information technology. Of the ten affordances that Edison promoted with the
invention of the phonograph, using it to record music is named fourth. Distance
education, or at least asynchronous learning outside the presence of a teacher, is
indicated in his list. More important is what we might call the Edison/Bell vision
5 In the Beginning Was the Word and Then Four Revolutions in the History… 93
of an information network. Enumerated last, Edison notes that one affordance of the
phonograph is “connection with the telephone, so as to make that instrument an
auxiliary in the transmission of permanent and invaluable records, instead of being
the recipient of momentary and fleeting communication” (Edison 1878). We can
easily see here a system of hard storage accessible over telephone lines, a point
emphasized more poignantly by the fact that Turing noted in 1946 that his ACE
computer could also be connected to the telephone system (Hodges).
Though initially tied to wires, almost immediately, Bell and others were working
on wireless telephone transmission. A variety of techniques were tried; Bell’s favorite
invention, the photophone, invented in 1880, for instance, could send signals 200
yards on a beam of light (Bell Family Papers 1862-1939), thereby anticipating mod-
ern fiber optic information transmission. Furthermore, long before the turn of the
century, the promise of a global communications network with and without wires
was in place, so much so that in a lecture at the Imperial Institute in 1897, W. E. Ayrton
made an apt prediction:
There is no doubt that the day will come, maybe when you and I are forgotten, when copper
wires, gutta-percha coverings, and iron sheathings will be relegated to the Museum of
Antiquities. Then, when a person wants to telegraph to a friend, he knows not where, he will
call in an electro-magnetic voice, which will be heard loud by him who has the electro-
magnetic ear, but will be silent to everyone else. He will call, ‘Where are you?’ and the
reply will come, ‘I am at the bottom of the coal-mine’ or ‘Crossing the Andes,’ or ‘In the
middle of the Pacific’. (Fahie 1900, p. vii)
Regular cell phone service still might not reach to the bottom of a mine shaft or
to the top of the Andes, but Ayrton’s prediction was correct in its generalities.
The world was about to change, and these early inventors knew it even before the
twentieth century began. Bell’s introduction of helpful services not only answered a
need for the telephone in society; soon people would wonder how they had ever lived
without it. By mid-century the backbone and the vision of an information superhighway
were firmly in place, awaiting the digitization of information. Something
of global significance was about to happen. We now live in the early days of this
transformation.
hard disks. […] Five exabytes of information is equivalent in size to the information
contained in 37,000 new libraries the size of the Library of Congress book collections’
(Lyman and Varian [2003]). In 2002, this was almost 800 MB of recorded data produced
per person. It is like saying that every newborn baby came into the world with a burden of
30 feet of books, the equivalent of 800 MB of data on paper. This exponential escalation has
been relentless: ‘between 2006 and 2010 […] the digital universe will increase more than
six fold from 161 exabytes to 988 exabytes.’ (2009, p. 154)
5.3 Discussion
The tracks cut into history by the current exposition are far too broad, even if
viewed only from the perspective of the history of information. The control of infor-
mation in each of these ages by civil, religious and economic authority means that a
politics of information and, equally, an economics of information, must be taken
into account in understanding these historical transformations, along with the role
that the computational sciences (math, logic, computer science and computer
engineering) exercised in processing information and advancing human understanding.
Outside of informational phenomena, a variety of other scientific and technological
changes must also be considered. The transportation industry itself continues to
support the circulation of information, as it did early on. Today, planes, trains and
automobiles afford easy changes in physical presence as people come together
across the globe to visit, speak and exchange ideas. And, of course, the history of
educational institutions themselves and their curricula is more significant than many
are inclined to acknowledge. Even so, a broad outline of these information revolutions
in terms of the history of information technology tells a salient part of the story.
Where there is change, something must remain the same, or we are dealing with
entirely different phenomena. A revolution implies a change and thus occurs in the
wake of the one that came before, preserving something of what was there as one
epoch unfolds into the next. Thus, revolutions should be thought of as overlapping
waves rather than as a sequence of discrete eras. This is especially clear in the
case of information revolutions. The fact that I am sitting here writing text through
the medium of a digital computer while using a variety of computational tools to
access text for research connects me to the Epigraphic and Digital Revolutions.
Reading and writing have yet to vanish, and human beings still think through the
vehicle of words. That I’m writing this at 11:30 p.m. in my study lit by bulbs indi-
cates that I remain bound to the Multimedia Revolution, the television on with the
sound down and airing news of the Gulf oil spill, as I listen to a (digitized) Strauss
Opera on iTunes. Furthermore, that this “paper” will be disseminated through the
vehicle of the publishing industry still shows vestiges of the Printing Revolution.
But these superficial traces barely touch the unifying elements that tie these
revolutions together.
These common elements are founded on old ideas, though transposed into the
language of the Digital Revolution, they can sound quite new. This is unfortunate,
because it lends the appearance that we are reading the future back into the past
when, in fact, we are not. Whether coded digitally and sent over the airwaves or
coded alphabetically and pressed into a tablet, information is encoded, stored, trans-
mitted and received. These basic elements thus comprise the unifying components
of continuity across epochs, until the Digital Revolution adds what might appear at
first as a new technological affordance, namely, information processing. (I use the
word “appear” here for reasons that will be clear in the next section, even though
mechanical information processing is indeed reserved for the advent of automated
computational devices.) Though this is not insignificant, the primary substrate for
the changes from one revolution to another then concerns the kind of information
that can be stored and transmitted, the speed of information transmission, its preser-
vation, and its reach. Indeed, these elements allow information to transcend the
moment to make its mark in space and time, thereby allowing it to cross temporal
and spatial boundaries.
By contrast, what changes are the specific techniques and technologies that allow
these elements to have their play. Thus, with the invention of writing we see a tech-
nology for off-loading information into the environment. As other changes in infor-
mation encoding and improvements in the materials for information storage are
made, the speed of information transmission and its reach increase. Minor techno-
logical improvements (with major historical consequences) occur until the invention
of the printing press, which affords a sudden escalation in the speed of information
flow and its reach, because mass produced text allows information to travel along
various routes in parallel inexpensively. Multiple copies of a text stored in various
places, of course, also affect the preservation of information. As we move through
the Printing Revolution, this escalation in the reach of information is inextricably
tied to the collapse of the Medieval world and the rise of the Enlightenment, which
brought with it new understandings of self and world.
The industrialization of information flow that began at the end of the nineteenth
century represents yet another sudden leap in the speed of information transmission
and its reach, but this time with machines that could also move pictures and sound, not
just text. It also decoupled the mobility of information from the transportation industry.
Telephone, teletype, radio and television ushered in a new world order, one on the
basis of which something like a World War first became possible. It also
allowed a new kind of communicative presence between persons (and nations!),
both synchronous and asynchronous, that brought interlocutors together without
making them present in the flesh.
As the technologies of the Multimedia Revolution start moving digitized
information and new digital machines emerge, we find ourselves once again at the
beginning of an unfathomable leap in the availability of information, the speed of its
transmission and its reach. To overstate the case just slightly, massive amounts of
information are globally ubiquitous, though respect requires that we acknowledge a
new division between the information rich and the information poor. The Digital
Revolution affords such easy mobility of information that one-on-one audio-visual
communication via tools like Skype, private news sources in the form of blogs with
international readerships, and the fact that anyone anywhere can make a movie for
all the world to see are quickly becoming omnipresent, so quickly, in fact, that it is
impossible for governmental legislation and scholarly analysis to keep up. Even so,
does the transition from the Multimedia to the Digital Revolution represent a mere
difference in degree, more information moving faster and farther, or is something
different in kind also going on?
asymmetrical, they nonetheless collided with other networks, once again expanding
the reach of information, but with a cost. The situation was aptly summarized by
Emmanuel Levinas in 1982, who characterized society at the time as one …
whose boundaries have become, in a sense, planetary: a society, in which, due to the ease of
modern communications and transport, and the worldwide scale of its industrial economy,
each person feels simultaneously that he is related to humanity as a whole, and equally that
he is alone and lost. With each radio broadcast and each day’s papers one may well feel
caught up in the most distant events, and connected to mankind everywhere; but one also
understands that one’s personal destiny, freedom or happiness is subject to causes which
operate with inhumane force. One understands that the very progress of technology—and
here I am taking up a commonplace—which relates everyone in the world to everyone else,
is inseparable from a necessity which leaves all men anonymous. Impersonal forms of
relationship come to replace the more direct forms, the ‘short connections’ as Ricoeur calls
them, in an excessively programmed world. (p. 212)
Some of us who were inhabiting the academy at that time lamented or praised the
end of “logocentrism” that inaugurated a postmodern worldview in which the
representation replaces the presentation and in which the forces of dissemination
empowered Hermeneutics, Deconstructionism, Post-structuralism, Critical Theory
and a host of other ways to approach the communications environment of the day.
None of us were ready, it is fair to say, for the onslaught of multimedia, two-way,
synchronous and asynchronous communications between individuals and groups
that would come with the Internet and that would allow individuals to interact infor-
mationally with the collective. Indeed, the same year that Levinas offered the
description above, Time Magazine named the computer its “Machine of the Year,”
noting that …
in 1982 a cascade of computers beeped and blipped their way into the American office, the
American school, the American home. The “information revolution” that futurists have
long predicted has arrived, bringing with it the promise of dramatic changes in the way
people live and work, perhaps even in the way they think. America will never be the same.
(Friedrich 1983)
Nothing could have been closer to the truth, as we now all know, and not only
for America, but also for the world. The Digital Revolution had now begun, and,
even in the context of historical time, it immediately exploded (within 20 years) into
an information network of global proportions uniting human and automated informa-
tion processors, thereby significantly rearranging the communicative playing field.
In terms of the network perspective I am taking here, interactivity changes every-
thing, and in the emerging world of the Internet, an arena in which all information
is, in principle, retrievable from anywhere and in which any two people or a com-
munity can communicate instantaneously, it is making a staggering difference at a
rate beyond our ability as humans to comprehend. It is difficult to say what it all
means, to determine whether it was destined from the start, and to say where it will
end, but there is no doubt that it matters more than any of the three revolutions pre-
viously mentioned in the evolution of our species, even though it depends on each
previous revolution in important ways. For the first time, with the Digital Revolution
our species can relate interpersonally through the mediation of machines that
process information along the way and thereby affect who relates to whom, which
facets of our social life and interests will develop, what kind of economic and political
action we may take, and our sense of self (or selves, as the case may be). Thus,
something different in kind does arrive with the Digital Revolution. Consequently,
if the previous revolutions altered so greatly the shape of human history, there can
be no doubt that this one will do so with greater force, thereby, as with the past,
raising foundational philosophical questions and inviting new methodologies for
addressing them. The stage is thus set for historically contextualizing the philosophy
of information.
to find a way to travel from mind to mind. The Republic of Letters, a community of
“enlightened” individuals in the seventeenth and eighteenth centuries that crossed
national boundaries (and, indeed, the Atlantic ocean), took up arms in the form of
writing and publishing, making use of the Printing Revolution and giving birth not
solely to modern science and modern philosophy, but also to learned societies and
academic journals. The creation of the Royal Society in 1662 fostered the spirit of
individual inquiry according to the doctrine of epistemic Protestantism mentioned
above that afforded individuals direct access to the truth in spite of ancient authority.
This spirit is aptly present in philosophers of the Early Modern period, such as
Descartes (1596–1650), Spinoza (1632–1677), Locke (1632–1704), Leibniz
(1646–1716), Berkeley (1685–1753), Hume (1711–1776) and Kant (1724–1804).
Private thoughts, and with them the notion of privacy more generally (private property,
individual and inalienable rights, etc.), gave birth to the notion of society as a
community of individuals engaging in collective action within a new kind of public
state, the modern democracy. At its core was the idea that the collective was defined
by the individuals who lived within it.
Hegel (1770–1831) challenged this picture, looking for a more integrated
relationship between the individual and the “system,” culminating, at least according
to Kierkegaard’s reactive reading in Fear and Trembling (1843), in a doctrine that
defined individuals in terms of the collective rather than the reverse. In between,
Samuel Morse set to work in 1825 in search of a quick means for communicating
information over distance, and the Multimedia Revolution was about to begin. Its
effects would be felt on both sides of the Atlantic as the vision of a long distance
international communications network would become a reality. Philosophically, the
presence of a networked conception of humanity was visible in two forms: one
positive, advocating a new communitarianism, as in Marx (1818–1883), and one
negative, advocating emancipation from the herd, as in Nietzsche (1844–1900). In
epistemology, similar effects were apparent in a reaction against Cartesianism and,
particularly, against the notion that knowledge could be validated on the basis of
independent thought. In America, pragmatism overtook the quest for privately-
validated truth in the works of Peirce (1839–1914), who situated truth as a public
agreement among a community of inquirers, while, on the European Continent, the
Vienna Circle (founded in 1922) advanced a doctrine of logical positivism that
would constrain meaning itself to empirical verifiability and, thus, to public visibility.
Wittgenstein’s posthumously-published Philosophical Investigations (1946/1953)
famously argued that there can be no private language the very year before
Heidegger’s Letter on Humanism (1947/1993) asserted that the very distinction
between private and public is itself problematic, even as he tried to rescue philo-
sophical thought from a techno-scientific conception inherited from Husserl, who
sought to make philosophy an exact science.
Though twentieth century philosophy is notoriously characterized as divided
between the “Continental” and “Analytic” traditions, in the context of the networked
global communications environment sketched here, they seem more concerned with
a similar set of issues rather than different ones, even though they disagree on
method. As we move past the initial shock of the “telephonization of city life”
We find here not a philosophy of evolution, but the notion that philosophy is
evolutionary, that it belongs to a community of inquirers, who as responsible
processors of information, disseminate their findings to build an information com-
mons beyond the comprehension of any single individual, yet, in hyperbolic terms,
accessible as needed to all, not a Republic of Letters but a networked community
of informants. We are, to situate this in the language of the Digital Revolution,
information processors who read from and write to a common tape and who will,
in time, find each other as needed and when relevant, thanks to the mediation of
socially networked computer technologies.
In advocating the philosophy of information as a new philosophia prima, Floridi
sets out on a new frontier, “not by putting together pre-existing topics, and thus
reordering the philosophical scenario, but by enclosing new areas of philosophical
inquiry—which have been struggling to be recognized and have not yet found room
in the traditional philosophical syllabus…” (p. 24). From the perspective of this
paper, Floridi is not merely calling for a new philosophy suited to an old communications
environment but is among the first to respond within the constraints of a new
one. What will philosophy look like as we become aware of our place as inforgs
within the infosphere? What indeed will our presence in the infosphere do to the
history of philosophy? It is far too soon to say. But in a world where the speed of
informational change is so rapid that legislation and analysis cannot keep up,
we will either adapt to new methods of inquiry and new informational tools or let
the forces of technological change roll over us (or perhaps, worse yet, both). We are
undergoing something dramatic, and we do not yet know what. Perhaps this very
imperative will necessitate the transformation of philosophy in the Fourth Revolution
that digital technologies both afford and require with the philosophy of information
at its foundation.
Acknowledgments I wish to thank Dick Connolly, Christopher Harrison and Brent Sigler for
their help with research on this paper, and, of course, Luciano Floridi, for providing something
provocative to which I could react.
References
Beavers, Anthony, and Brent Sigler. 2010. Mechanists of the revolution: The case of Edison and
Bell. In Proceedings of the VIII European conference on computing and philosophy, ed. Klaus
Mainzer, 426–430. Munich: Verlag Dr. Hut.
Casson, H. 1910. The history of the telephone. Chicago: A. C. McClurg and Co.
Clark, A. 1997. Being there: Putting brain, body and world back together again. Cambridge, MA:
MIT Press.
Clark, A. 2001. Mindware: An introduction to the philosophy of cognitive science. Oxford: Oxford
University Press.
Deibert, R. 1997. Parchment, printing and hypermedia: Communication in world order transformation.
New York: Columbia University Press.
Diringer, D. 1982. The book before printing: Ancient, medieval and oriental. New York: Dover.
Edison, Thomas. 1878. North American Review. U. S. Library of Congress. http://memory.loc.gov/
ammem/edhtml/edcyldr.html. Accessed 1 Aug 2010.
Edwards, M. 1993. Printing, propaganda and Martin Luther. Berkeley: University of California
Press.
Eisenstein, E. 2005. The printing revolution in early modern Europe, 2nd ed. New York: Cambridge
University Press.
Fahie, J. 1900. A history of wireless telegraphy, 1828–1899, including some bare-wire proposals
for subaqueous telegraphs. London: William Blackwood and Sons.
Febvre, Lucien, and Henri-Jean Martin. 1976. The coming of the book: The impact of printing
1450–1800 (trans: David Gerard). New York: Verso. (Orig. pub. 1958.)
Fischer, H. 1989. The origins of Egyptian hieroglyphs. In The origins of writing, ed. W. Senner,
59–76. Lincoln: University of Nebraska Press.
Floridi, L. 2008. Artificial intelligence’s new frontier: Artificial companions and the fourth revolution.
Metaphilosophy 39(4/5): 652–654.
Floridi, L. 2009. The information society and its philosophy: Introduction to the special issue on
‘The philosophy of information, its nature and future developments’. The Information Society
25(3): 153–158.
Floridi, L. 2010. Information: A very short introduction. New York: Oxford University Press.
Floridi, L. 2011. The philosophy of information. New York: Oxford University Press.
Friedrich, Otto. 1983. The computer. Time Magazine, January 4th. http://www.time.com/time/
subscriber/personoftheyear/archive/stories/1982.html. Accessed 14 Feb 2011.
Green, M. 1989. Early Cuneiform. In The origins of writing, ed. W. Senner, 43–57. Lincoln:
University of Nebraska Press.
Heidegger, Martin. 1993. The letter on humanism. In Martin Heidegger: Basic writings, ed. David
Krell, 213–266. New York: HarperCollins. (Orig. pub. 1947.)
Hodges, Andrew. The Alan Turing Internet scrapbook. http://www.turing.org.uk/turing/scrapbook/
ace.html. Accessed 4 Aug 2010.
Kierkegaard, Søren. 1983. Fear and trembling (trans: Howard Hong and Edna Hong). Princeton:
Princeton University Press. (Orig. pub. 1843.)
Lawhead, W. 2002. The modern voyage: 1400–1900, 2nd ed. Belmont: Wadsworth.
Levinas, Emmanuel. 1989. The pact. In The Levinas reader, ed. Seán Hand, 211–226. Cambridge,
MA: Blackwell. (Orig. pub. 1982.)
Logan, R. 1986. The alphabet effect: The impact of the phonetic alphabet on the development
of Western civilization. New York: William Morrow and Company.
Lyman, Peter, and Hal Varian. 2003. How much information? 2003. http://www2.sims.berkeley.
edu/research/projects/how-much-info-2003/. Accessed 14 Feb 2011.
Martin, Henri-Jean. 1993. The history and power of writing (trans: Lydia Cochrane). Chicago:
The University of Chicago Press. (Orig. pub. 1988.)
Norman, D. 1994. Things that make us smart: Defending human attributes in the age of the
machine. Cambridge, MA: Perseus Books.
Saxby, S. 1990. The age of information: The past development and future significance of computing
and communications. London: Macmillan.
The Alexander Graham Bell Family Papers at the Library of Congress: 1862–1939. U. S. Library
of Congress. http://memory.loc.gov/ammem/bellhtml/bellinvent.html. Accessed 1 Aug 2010.
Tindal, Matthew. 1730. Christianity as old as creation; or, the Gospel as a republication of the
religion of nature. Google Books. Accessed 14 Feb 2011.
Toland, J. 1696. Christianity not mysterious: or, a treatise shewing that there is nothing in the
gospel contrary to reason, nor above it, and that no Christian doctrine can be properly call’d
a mystery. Google Books. Accessed 14 Feb 2011.
Turing, A. 1937. On computable numbers, with an application to the Entscheidungsproblem.
Proceedings of the London Mathematical Society s2-42(1): 230–265.
Winston, B. 1998. Media technology and society—a history: From the telegraph to the internet.
New York: Routledge.
Wittgenstein, Ludwig. 1953. Philosophical investigations (trans: G.E.M. Anscombe). Oxford:
Blackwell.
Chapter 6
I Mean It! (And I Cannot Help It):
Cognition and (Semantic) Information
Valeria Giardino
To introduce Luciano Floridi’s theses, I will start from what I believe is his own
starting point: defining the role and the challenges of philosophy in the contempo-
rary world. In his writings, Floridi presents to his readers a scenario that is very
familiar to anyone who is a member of contemporary society and who pursues
every day all the typical activities of that society. It is before our eyes: in recent
decades, the world has changed dramatically, and so fast that even relatively
young people have witnessed some of these changes in person. The metamorphosis
is still in progress: it is easy to predict that in the coming years the world will
continue to change and evolve. The question now is: where will these changes
take our world and us? Moreover, are we ready for such a new world, and
are we aware of what is happening at all?
It is at this point of the story that philosophy enters the scene, offering the
conceptual tools necessary to answer these questions. Some critics might think
that an ‘old’ discipline such as philosophy has nothing to say about
the dramatic transformations happening today. As a consequence, it could play
no role either in finding solutions to the challenges this new world
presents us or in predicting what is going to happen next.
The same critics might think that philosophy has nothing to offer because other kinds
of expertise are needed today: the contemporary world calls for people who
can speed up these changes, for example by pushing the new technologies
to their limits and beyond, or by creating tools that allow for better
interactions between humans and machines. Floridi shows that these critics are
wrong, and, being a philosopher myself, I think he is right.
V. Giardino (*)
Institut Jean Nicod (CNRS-ENS-EHESS), Paris, France
e-mail: Valeria.Giardino@ens.fr
is everywhere – for instance, more and more archives of all kinds of
information are accessible from the Internet. It is more correlated because the very
notion of what an interaction is has changed: as informational beings, we are almost
always interconnected. Time, space and interactions as we knew and learnt them are
thus dramatically evolving into a ‘different’ time, a ‘different’ space and, finally,
a ‘different’ concept of what acting and reacting amount to.
There is even more to say: it is not only the environment that has changed; we
have changed too. As Floridi suggests, “we are probably the last generation that
will experience a clear difference between onlife and online” (Floridi 2007, p. 9),
since the onlife of the infosphere as a whole will always be online: at some point,
there will be no difference between processors and processed, online and offline,
and all interactions will become equally digital. As a consequence, a new form of
agent is emerging: a hybrid (multi)agent, partly artificial and partly human. In the
scene described, I am such a hybrid agent, equipped as I am with my laptop, my
phones and all my, let us say, technological extensions. I am an inforg: not merely
an organism but an informational one.
At the end of the twentieth century, a new myth emerged in films and novels:
popular science kept telling us that, at some point in the future, a new being would
appear, whose body would have partly human and partly mechanical features.
Today, the myth of cyborgs – half-human, half-machine beings
completely identical to us on the surface – has faded. We realize now that some
transformations have indeed taken place, but not in our bodies, as was predicted at
the dawn of the Artificial Intelligence program. The transformations occurred
through the re-ontologization of our environment and of ourselves: we have
evolved not into cyborgs, as we thought, but into inforgs. Our bodies have not changed
in any uncanny way, but we have found ways of augmenting our mental and
informational capacities. More drastically, our environment and we ourselves have
gone through a process of re-ontologization that has changed our way of seeing the
world and ourselves forever.
geographic, socio-economic and cultural divides”. Moreover, this gap will not be
reducible to the distance between industrialized and developing countries, since it
will cut across society (Floridi 2002). In this respect, our categories must once
again be revised: contemporary society is preparing the ground for tomorrow’s
digital favelas. To this picture I would add that, even within the informationalized
portion of society, an abyss will separate those who have access to all the
information from those who have only partial or, worse, controlled access to it. Another
important ethical issue concerns the very notion of Self, which assumes new features
in the infosphere.
The second scenario is instead epistemological. The question is: how are these
transformations affecting our way of perceiving the world and ourselves as agents?
In what respects do the intrinsically limited powers of our mind get augmented
when we become inforgs? Are we revising our criteria for something to count
as knowledge? And, finally, do we access meaning differently in the infosphere?
In the following sections, I will discuss some of these questions.
The ethical and the epistemological are the two main scenarios open to
philosophical analysis, not to mention another scenario that lies in between the first
two and raises both ethical and epistemological issues: education. As Floridi points
out in many passages, we are constructing a new environment that will be inhabited
by future generations. It is not something far away from us: it concerns our children.
In Floridi’s words, at the moment we are e-migrants, since the Umwelt as we knew
it is being absorbed by the infosphere. But this situation will not last long. Future
generations will be different from us because they will be digital natives and
not digital immigrants: our children will be born in the infosphere and will therefore
recognize themselves as inforgs from birth. The crucial question
is how this change will affect their way of learning and their criteria for what counts
as reliable knowledge. Consider, for example, the discussion about the so-called
‘wisdom of the crowd’ (Surowiecki 2004): new Web tools such as
Wikipedia are in most cases considered reliable, because the number of people
contributing to them is so high that it cancels out the potential errors or imprecisions
due to one or more individuals. Are educational systems and institutions ready for
this kind of transformation? What does an inforg-child need to learn and to know
to prepare for her future life in an informational society? Once we are able to
answer these questions, a further challenge will be to discuss and define what the
infosphere requires as the most appropriate and effective tools for teaching.
Of course, the scenarios I have just described are all intertwined and influence
one another. I would say: everything and everyone is interconnected in
the infosphere. The world we experience today is more and more permeated
by information; this information comes in different forms and formats and is
diffused through the new technologies, which are becoming more and more familiar
to us and are continuously improved. It is an epochal change – a fourth revolution,
as Floridi claims.
If the challenge of philosophy is to analyze how this revolution has changed our
understanding of the world and of ourselves, my challenge in this article will be to
claim that some of Floridi’s suggestions should be partly revised and further
discussed. In the remainder of the article, I will present the four revolutions Floridi
talks about, and I will claim that other revolutions can be identified in the history
of human culture. Some of them are interesting from the perspective of discussing
the reshaping of our new environment and of our new selves in the infosphere.
I will discuss an ambiguity in Floridi’s use of the term information
and propose to consider his fourth revolution as the Second Information revolution.
To resolve this ambiguity, I will distinguish between information and semantic
information, which implies meaning and understanding. If I am right, this distinction
will bring new difficulties to light in dealing with the new picture of the world,
and possibly in making predictions. Finally, I will present some questions that
emerge once we consider humans’ cognitive capacities in accessing meaning against
the background of the new context, the infosphere.
revolution, the Freudian one, which brought human beings to the discovery that
the mind has an unconscious side. The consequence was that a portion of our Self
became inaccessible to us.
This is not the end of the story: the recent transformation of the environment has
shown that we need to re-ontologize our picture of the world and of ourselves still
further. What revolution are we experiencing now? What is revolutionary about the
scene depicting me working at the airport with my computer on my knees?
According to Floridi, we are not – or at least not only – experiencing a computer
revolution. Acknowledging the widespread diffusion of computational devices is
not sufficient to describe what is happening today. Think once again of the scene
I described: there is a computer, it is true, but there are also mobile phones and the
possibility of connecting to the Internet. So, if not a computer revolution, are we
experiencing a digital revolution? Once again, Floridi’s answer is negative: what
about the success of enterprises such as Amazon, which are giving books – and
e-books – a new renaissance? Therefore, following Floridi, there is just one
possible answer: the revolutionary element in the new scenario is information
and the role it plays in it. Things evolve into energy and into information, and
what matters are the changes in the life cycle of this information. Going back to me
sitting in the airport hall, what matters there is the information flowing around
the scene and, through me, across the different devices near me and into the Internet.
It is the twenty-first century and we are part of the Information revolution. We have
coined a name for the society we live in, permeated by computer science
and ICTs: the information society. But what about us? We are informational
beings and the world has turned into an informational world. The Information
revolution is the fourth revolution, and it is happening now. Moreover, the Information
revolution has its hero as well: Alan Turing. His work, or rather what was in nuce in
his work, has changed forever our understanding of the world and of ourselves as
cognitive agents.
Though I am in general sympathetic to Floridi’s rational reconstruction, I would
argue that, in the course of human cultural evolution, it is possible to identify
other crucial steps in the transformation of our ontology before the Copernican
revolution. As I will show, once an evolutionary perspective is assumed, our
engagement in symbolic activities appears crucial for our cognition. For this
reason, I will claim that the Information revolution Floridi refers to is in fact the
Second Information revolution; moreover, according to some views, it can be
considered a degeneration of another revolution: the Cognitive revolution.
Let me first consider other topical moments in the evolution of human culture
and, more specifically, in the evolution of cognitive artifacts before Floridi’s first
revolution. Though they lie far back in time, it is unquestionable that cognitive
artifacts have played a major role in the shaping of our world and of us as cognitive
agents. To clarify, I am not arguing that Floridi is unaware of the relevance to our
cognitive history of the innovations introduced each time a new cognitive
technology was created. What I am suggesting is rather that we might assume
an evolutionary perspective and consider two very important events: first, the time
when human beings began to communicate by means of a language; second, after
that, the time when they invented writing, and thus began not only to produce
words but to share them in a public format that could be stored in archives.
Both steps were crucial in the evolution of human cognition, since they
revolutionized human beings’ access to meaning: new channels became available
to communicate and to make sense of the world around us and of ourselves.
Take numerical cognition as an example. As empirical research has shown,
humans are equipped with some spontaneous representations: whatever their
education and culture, they are able to make simple comparisons between
numerosities. Humans, together with some of their evolutionary precursors
and other animals, can represent numerosities up to 4 precisely and, for larger
numerosities, they approximate (Dehaene 1997). Given this, as some authors
suggest, once number words are acquired our representational powers are crucially
improved (Frank et al. 2008). In fact, numerals play a fundamentally compressing
role with respect to our more spontaneous representations. Experimental data
show that a subject who does not master the number-word system will not be able
to track numerosities across time and space. This is the case of the speakers of
Pirahã, an Amazonian monolingual hunter-gatherer tribe with a limited inventory
of words for numbers. Although they show the same spontaneous representations
as Western controls, their performance in matching tasks becomes inaccurate when
the tasks involve different spatial organization or memory. This suggests that number
words can be conceived as a cognitive technology: once available, they add a
second and usually preferred route for encoding and processing information.
The same could be claimed for other cognitive activities as well, such as color
recognition (Gilbert et al. 2006; Uchikawa and Shinoda 1996; Winawer et al. 2007)
and navigation (Hermer-Vazquez et al. 1999). In fact, also in distinguishing
colors and in orienting in space, humans have some spontaneous representations
at their disposal from very early on; it is only afterwards that these representations
are supplemented by an appropriate public code. These codes serve as cognitive
technologies, as in the case of numerals, because they constitute useful and effective
cognitive tools. This does not mean that once such a code is acquired the more
spontaneous representations are lost; on the contrary, when the appropriate code is
suppressed or not useful, speakers perform in the same way as speakers of
languages that lack the relevant technologies. In this perspective, other
cognitive technologies that offered new possibilities for our cognition can be
acknowledged. Consider, for example, writing, which improved the possibility
of sharing words – and therefore our ideas, our opinions, our knowledge – with
other members of our community, across time and space. A further improvement
was alphabetic writing in particular, which solved the problems of ideographic
writing by offering a reliable and public code for visualizing human speech.
My approach is in line with the idea that cognition is ‘distributed’: as Hutchins
(1995a, b) explains, cognitive events are not encompassed by the skin or skull of an
individual. If we look at human cognitive activity ‘in the wild’, we discover at least
three interesting kinds of distribution of cognitive processes: they can be distributed
(i) across social groups, (ii) in the coordination between internal and external
structure, be it material or environmental, and finally (iii) through time,
in such a way that the products of earlier events can transform the nature of later
events. We must consider these kinds of distribution if we want to understand
human cognition. In fact, as I suggest here, the invention and use of cognitive
artifacts as scaffolding structures for our reasoning are involved in the organization
of our functional skills into cognitive functional systems (Dror and Harnad 2008).
Human beings, despite the limitations of the cognitive systems we know they
are born with (Kinzler and Spelke 2007; Spelke 2004), were able to develop new
practices and new cognitive strategies to augment the powers of their minds,
showing an extraordinary capacity for creating tools that would help them both
describe the world around them and act upon it. Some of these tools had an
intrinsically cognitive function, which allowed them to enhance recognition,
communicate, economize their cognitive resources, and make faster and more
accurate transitions from premises to conclusions. Therefore, language is
special because it is cognitively primary, but not so special in the end. Our
relationship with language, as with complex mathematics, is analogous to our
relationship with chess. As Tomasello (1999, p. 208) claims, the cognitive skills involved
in language, complex mathematics or chess “are products of both historical and ontoge-
netic developments working with a variety of preexisting human cognitive skills, some
of which are shared with other primates and some of which are uniquely human”.
What, then, should we say about these important steps in the evolution of cognitive
artifacts if we consider Floridi’s view?
First, I want to point out that my objective is not to suggest that every
introduction of a new cognitive artifact, such as numerals or alphabetic writing, should be
considered a revolution in our way of relating to the world and to ourselves
as cognitive agents. What I want to claim is rather that the task of creating new
technologies to improve our more ancient and spontaneous cognitive capacities
and communication skills started a long time ago; we have always been driven
to produce semantic information by all possible means: speaking,
writing, printing books, painting pictures, shooting films… One objection to this
idea could be that there are also other reasons why these forms of externalization of
our thoughts were introduced. There may, for example, have been aesthetic
reasons, such as arousing emotions or evoking pleasure, or social
reasons, such as affecting action or promoting collaboration. I do not
deny this; I want, however, to focus on the cognitive and communicative reasons
for which they were created: my aim is to point out that, to some extent, we have
been living in an informational environment all along. Our culture deals
by nature with information and pursues ever newer means of reaching the world
and the others around us.
The consideration of these crucial moments in the cognitive history of human
beings thus helps reshape Floridi’s infosphere: the infosphere encompasses
informational cognitive agents – the inforgs – and informational cognitive tools – the
info-artifacts – which are a new form of cognitive artifact. Humans have
been creating and inventing cognitive artifacts all along; nevertheless, the most
recent artifacts they invented, beginning with the Information revolution, have
proved to be drastically different. They are not only cognitive tools at our disposal
but also seem to have an onlife of their own: not only do they memorize, as writing
does to some extent, but they also learn and, most of all, respond to us. Think of
Plato’s objection to the use of writing: written words speak as though they make
sense, but if one asks them for an explanation of what they are saying, they will go on
telling the same thing forever, over and over again (Plato 1997). The new tools,
instead, interact with us in a dialogue-like exchange, and this is not metaphorical
talk, since it faithfully describes our everyday interaction with them.
This brings us to another issue, which concerns the possibility of considering an
older Information revolution. Let us accept that from the beginning we have been
creating cognitive tools; one could think that, as a consequence, from the beginning
we have been living in some kind of proto-infosphere; against this background,
we were proto-inforgs: we were doing our best to share information
and to find new ways of improving our communication, and yet we lacked
the necessary technology – the one that Turing provided – to make this information
flow all around as it does today. This reconstruction, though, is ill-posed, since it
is given from the point of view of what happened ‘next’.
A more faithful reconstruction would rather show how the history of our cognition
has been deeply influenced by the fact that, from the very beginning, we engaged
in symbolic activities, and that these activities have become, in a long
historical and cultural process of creation and selection, more and more complex.
As Deacon (1997) observes, our ancestors found a way to create and reproduce a
simple system of symbols. Once available, these symbolic tools quickly became
indispensable. This was indeed a revolution in the ontology of information – the
first in the billions of years of the evolutionary process since living processes
became encoded in DNA sequences: “because this novel form of
information transmission was partially decoupled from genetic transmission, it sent
our lineage of apes down a novel evolutionary path – a path that has continued to
diverge from all other species ever since” (p. 45).
If this is correct, then I propose that the revolution Floridi talks about is the
‘Second’ Information revolution: from sequences of DNA to cultural transmission
(First Information revolution), from cultural transmission to online transmission
(Second Information revolution). Therefore, Floridi’s fourth revolution is a
revolution not because information is now everywhere – we have always
sought it – but because it is now conveyed and spread by genuinely new artifacts:
info-artifacts.
It is undeniable that the new tool available, the Internet, is revolutionary.
Nevertheless, we could ask whether it has really qualitatively changed our way of
accessing information. Though information is all around, it does not follow that we
will take advantage of it. In fact, we are driven by our choices – most
of the time biased – and Internet surfing is no exception. Our own interests guide us,
not dramatically different we are from smart, engineered artifacts, since we have,
as they do, an informational nature. But what kind of information is Floridi talking
about when he refers to ‘informational nature’ in the two cases? Are we referring
to the same notion of information? If we are not, as I believe is the case, then
there is still a sense in which we are indeed dramatically different from responsive
artifacts, as Floridi himself seems to admit in a recent paper (Floridi 2009).
In the previous section, I discussed the reasons why I believe we do not refer
to the same kind of information in the two cases: we as humans have a relation with
information – semantic information – that is different from the relation machines
have. Let us go back to the cognitive artifacts we use, and to the symbols alternative
to words that are available to us for communicating and externalizing thought, such as
gestures or diagrams. We invented them – perhaps taking inspiration from nature – in
order to convey information both to ourselves and to others (Tversky 2005
together with Kessell and Tversky 2006). Given that we now live in
the infosphere, is there a reason why we should renounce them? Would they
become useless, or would they disappear? I do not think so. We all keep on gesturing
and drawing diagrams, despite all the informational powers of computational devices.
Do machines make gestures or draw sketches? Despite the advances in technology,
at the moment they cannot.
A great number of cognitive activities imply the use of such cognitive tools,
which are widespread. Consider the case of mathematics. Avigad, in trying to
define what mathematical understanding is, claims that ascriptions of understanding
are best understood in terms of the possession of certain abilities (Avigad 2008).
The identification of such abilities is in fact a central issue for automated formal
verification, since improving human-machine interaction requires considering
how different sorts of agents have different strengths and weaknesses.
For example, when looking for a mathematical proof, humans commonly turn
spontaneously to diagrams, most likely because they are good at
recognizing symmetries and relationships in information when it is represented
that way. The same cannot be said for machines, which can keep track of gigabytes
of information and carry out exhaustive computations where humans would be forced
to rely on little tricks. Avigad’s suggestion is that our theories of understanding
should then be relativized to the particularities of the relevant class of agents. They
are all informational agents, but they treat information differently.
If our aim is simply to claim that machines as well as humans have access to
information and are cognitive agents of some sort, then we have to conclude that
from this point of view they are not different. Yet they are still dramatically different
in the way they deal with information. In a recent paper, Floridi shows himself to be
aware of this difference: he claims that humans are the only semantic engines so far
available in the universe: we produce meaning and we have always
produced it (Floridi 2009). By contrast, artificial agents are “syntactic engines,
cannot process meaningful data, i.e. information as content, only data at a lower- or
higher- level. … Humans are the only semantic engines available, the ghosts in the
machines” (my italics). This, though, seems to conflict with his earlier claim that
after the fourth revolution we re-ontologized ourselves in such a way that we have
become of the same nature as machines. Humans and machines may well have the
same informational nature, but it now appears that only humans process semantic
information, while machines are syntactically powerful. They possess two different
abilities. This is far from new and may seem to echo Searle (1980) and his
Chinese Room objection to Strong AI. Nevertheless, my aim in this article is more
modest than Searle’s: I simply want to discuss the possibility of distinguishing
among different cognitive subjects having different cognitive capacities, and I do
not want to take any stance on the nature of consciousness.1
There was, especially in the 1990s, an enormous effort in AI as well as in
computer science to create semantic machines or a Semantic Web; nonetheless,
the results have not yet met the target.2 Up to now, machines are still only syntactic
engines, and we know why – we are the ‘ghosts’ in them, in the end! – but what
should we say about our own powers? What are the conditions behind our being
semantic engines? This question was, as I will show, at the origin of what has been
called the Cognitive revolution.
I will now consider Bruner’s (1990) point of view on what he called the Cognitive
revolution, which took place in the 1950s. It is the same revolution Floridi
refers to; nevertheless, Bruner gives it a different interpretation. According to
Bruner’s reconstruction, the aim of that revolution at the beginning was to discover
and describe formally the meanings that human beings were able to create out of
their encounters with the world. The long-run objective was to set forth
hypotheses about which meaning-making processes were implicated in humans’
cognitive activity. As I have already discussed, human beings engage in
symbolic activities to construct and make sense of the world and of themselves.
Bruner’s hope was that such a revolution, as it was conceived at its origins,
would have led to the collaboration of psychology with its sister interpretative
disciplines, such as the humanities and the social sciences. Only a collaboration
of this kind can allow the investigation of such a complex phenomenon as
meaning-making. But the happily-ever-after did not work out. In Bruner’s
opinion, the emphasis began shifting from the construction of meaning to the
processing of information, which are profoundly different matters. The notion of
computation was introduced and computability became ‘the’ good theoretical
model; this led far from the original question – the revolutionary one – which
concerned the conditions of our meaning-making activity, whose answer would have
1 Moreover, I take the distinction between syntax and semantics from my work on the philosophy
of mathematical practice, and on the limits of the foundationalist approach to mathematics.
2 See Floridi (2009) for an interesting discussion on the contrast between the Semantic Web and the
Web 2.0 enterprises.
Consider the use of diagrams in particular, and the way they externalize thought
in order to facilitate reasoning. Why are they so effective? If we try to answer
this question by considering how propositional information might be extracted
from diagrams, we will be on the wrong track. There is no unique
and apparent propositional content that can be extracted at different times from a
diagram. Rather, diagrams are subject to both physical constraints – since they are
two-dimensional physical objects – and conceptual constraints – because they
require a user to interpret them. Topological relations, for example, are very basic
spatial relations, such as proximity or enclosure, that would not change in a diagram
if the diagram were printed on a rubber sheet and the sheet were stretched or
twisted (Willats 1997). Nevertheless, the recognition of such spatial relationships
must be accompanied by interpretation, so that the diagram can be used for the
purpose of drawing new conclusions within a specific theory. These two constraints
are integrated in the way the diagram is reproduced and manipulated: a diagram
is thus interpreted dynamically, and informal inferences take the form of physical
transformations. In fact, the rules of diagrammatic representations are normally
externalized as procedures and, as a consequence, what must be learnt in order to
master a diagrammatic system is not a set of abstract rules but instructions on how
to act on the diagrams and how to read and interpret them correctly. To sum up, the
correct interpretation of a diagram is intimately connected to the systematic actions
performed on it (Giardino 2010).
I want to argue, then, that representations must be considered in terms of the way
they are used in the meaning-making process. It would also be possible to think, as
Walton (1990) suggested, that some of the things in our environment prompt our
imagination in order to broaden our imaginative horizons: “imagining is a way of
toying with, exploring, trying out new and sometimes farfetched ideas. Hence the
value of luring our imaginations into unfamiliar territory” (p. 22). Children’s games
paradigmatically express the ability we show in coping with meaning flexibility and
meaning-making, even in situations that are completely new. This is a strong
characteristic of our cognitive capacities: we look for meaning and create meaning
where we don’t find it. And in many cases this primitive cognitive capacity is not
apparent in the subsequent stabilization of explicit and formal rules to constrain
information.
In his work in developmental psychology, Tomasello (1999) has pointed out to a
feature that could be one of the conditions for our meaning-making capacity.
According to his studies, we are not only cognitive agents but, above all, intentional agents. Human symbolic activity, of which language is a direct manifestation but not the only one, derives from the joint attentional and communicative activities that the understanding of others as intentional agents engenders. Once
we recognize the other as an intentional agent, then we are ready as individuals to
take an outsider’s perspective on our own behavior and cognition, and as a consequence we engage in representational redescription, iteratively re-presenting
in different representational formats what our internal representations represent
(Karmiloff-Smith 1992). Cognition then becomes more systematic: we become capable of using knowledge in a more flexible way in a wider array of relevant contexts.
6 I Mean It! (And I Cannot Help It): Cognition and (Semantic) Information 119
In the same spirit, Bruner proposes that it is precisely this ‘push’ to construct
narratives that determines the order of priority in which grammatical forms
are mastered by the young child. The child is not only reporting: she is trying to
make sense of herself and of the world around her, to structure her experience.
The achievement of the capacity to give representational redescriptions and to provide narratives is not simply a mental achievement but an achievement of social practice that lends stability to the child’s social life.
thing, what will happen to meaning and information? Will our capacity of negotiating
meaning be reduced?
Another crucial issue to be considered is the role of intentionality in the infosphere.
Following Tomasello, shared intentionality is a basic requirement for human cognition. If we accept his analysis, what, then, about our relationship with responding technological artifacts? I have shown that we have already re-ontologized them as agents, which are informational just as we are, but what about the possibility of
attributing intentions to them? If we believe we can attribute intentions to them,
then it would be really difficult to claim that they are different from us. If we believe
we cannot, then we still have to check whether their omnipresence has transformed our very attributions of intentions and our capacity to assume the other’s perspective. If information is everywhere, we should be able to develop ways of identifying where the relevant information is, namely the information that has been given to us with a communicative intention. Will we be able to do that?
The same considerations can be made for representational redescription and
narrative in the infosphere: what shape will they take in an environment that is always interconnected and always online? How will the child structure her experience in the infosphere in order to lend stability to her social onlife?
A final worry could be whether we should accept a sort of technological determinism, according to which technology will fix every social problem, or, on the contrary, assume a kind of social determinism of knowledge, in which case the inforgs will be identified as the ones who define the new technological scenarios.
These and other questions arise when we take into account the difference between information and meaning and project it into the infosphere. In my view, they are the most crucial issues that the philosophy of information should be concerned with, in order to provide predictions and to prepare the inforgs for what will happen next, to them and to their world, in their infosphere.
6.6 Conclusions
Let me go back to the scene at the beginning (though in reality, as you might expect, it has changed: hopefully I am no longer waiting for my flight in Hall 3 at Orly airport). There is the computer, the mobile phones, the Internet and me. I am
living in the infosphere, and I am an inforg, that is, a hybrid being, partly human and partly artificial, always – or almost always – online. I have an informational nature, but, unlike my computer and my phones, this informational nature derives from the fact that I am a symbolic being. I am the ghost in my smart phone and
in all the other info-artifacts I use. As Deacon (1997) pointed out, though I share
the same earth with millions of living creatures, I also live in a world that no
other species has access to, a world full of abstractions, impossibilities, paradoxes.
Like other members of my species, I was born with some cognitive systems ready to work, but they are limited – and I am limited as well, since I will not last forever. Yet, thanks to cultural transmission, I can overcome my limits: I have at my
disposal all sorts of cognitive artifacts ready to be used and shared, the most powerful of which is language. Moreover, after Floridi’s fourth revolution, which I define as the Second Information Revolution, information permeates all our surroundings, in such a way that I am always connected: my time, my space and
my interrelations with the others and with the very new and responding infoartifacts
have changed. I am a new form of symbolic being: this does not mean that my cognitive abilities have changed, but that the ontology of the world around me, and of myself, has changed. What will my representational redescription of my situation as an ‘e-migrant’ be?
The challenge of philosophy is to answer questions such as this. Floridi has uncovered a domain of research, the infosphere, which was calling for a new theorization and a conceptual analysis. The methodology he suggests is to reason in terms of the re-ontologization of our world and the re-discussion of our beliefs. My suggestion is that any further step in this direction requires the collaboration of what Bruner calls ‘the interpretative disciplines’, such as psychology and the human and social sciences. At least two scenarios are open to investigation, an ethical and an epistemological one, and issues in education have emerged as well.
In this article, I tried to show that a particularly interesting aspect to discuss is the
role, in this picture, of semantic information, which is the expression of a symbolic
activity that has up to now been shown to be specifically human. Will a fifth revolution one day come that takes even this ultimate illusion away from us? That day, will our own technology design intentional and semantically powerful machines?
At the moment, we do not know. The task of philosophy of information is to provide
the appropriate framework that would allow us to make useful predictions in order
to prepare the future generations and ourselves.
Acknowledgements I want to thank the group working on Public Representations at the Institut
Jean Nicod for all our useful discussions on similar topics, and in particular Elena Pasquinelli
and Giuseppe A. Veltri who read a preliminary version of this article. The research was supported
by the European Community’s Seventh Framework Programme (FP7/2007–2013) under a Marie Curie Intra-European Fellowship for Career Development, contract no. 220686 – DBR (Diagram-based Reasoning).
References
Avigad, J. 2008. Understanding proofs. In The philosophy of mathematical practice, ed. P. Mancosu,
317–353. Oxford: Oxford University Press.
Berger, P.L., and T. Luckmann. 1966. The social construction of reality: A treatise in the sociology
of knowledge. Garden City: Anchor Books.
Bruner, J. 1990. Acts of meaning. Cambridge, MA/London: Harvard University Press.
Deacon, T.W. 1997. The symbolic species. New York/London: W.W. Norton & Company.
Dehaene, S. 1997. The number sense. New York/Cambridge (UK): Oxford University Press/Penguin Press.
Dror, I.E., and S. Harnad (eds.). 2008. Cognition distributed: How cognitive technology extends
our minds. Amsterdam: John Benjamins.
122 V. Giardino
Floridi, L. 2002. Information ethics: An environmental approach to the digital divide. Philosophy
in the Contemporary World 9(1): 39–45.
Floridi, L. 2007. A look into the future impact of ICT on our lives. The Information Society 23(1):
59–64. An abridged and modified version was published in TidBITS.
Floridi, L. 2009. The semantic web vs. web 2.0: A philosophical assessment. Episteme 6: 25–37.
Frank, M.C., D.L. Everett, E. Fedorenko, and E. Gibson. 2008. Number as a cognitive technology:
Evidence from Pirahã language and cognition. Cognition 108(3): 819–824.
Giardino, V. 2010. Intuition and visualization in mathematical problem solving. Topoi 29: 29–39.
Gilbert, A.L., T. Regier, P. Kay, and R.B. Ivry. 2006. Whorf hypothesis is supported in the right
visual field but not the left. Proceedings of the National Academy of Sciences 103: 489–494.
Grosholz, E. 2007. Representation and productive ambiguity in mathematics and the sciences.
Oxford: Oxford University Press.
Hermer-Vazquez, L., E.S. Spelke, and A.S. Katsnelson. 1999. Sources of flexibility in human
cognition: Dual-task studies of space and language. Cognitive Psychology 39: 3–36.
Hutchins, E. 1995a. Cognition in the wild. Cambridge, MA: MIT Press.
Hutchins, E. 1995b. How a cockpit remembers its speeds. Cognitive Science 19: 265–288.
Karmiloff-Smith, A. 1992. Beyond modularity: A developmental perspective on cognitive science.
Cambridge, MA: MIT Press.
Kessell, A.M., and B. Tversky. 2006. Using gestures and diagrams to think and talk about insight
problems. In Proceedings of the 28th Meeting of the Cognitive Science Society, ed. R. Sun and
N. Miyake. Mahwah: Lawrence Erlbaum Associates, Inc.
Kinzler, K.D., and E.S. Spelke. 2007. Core systems in human cognition. Progress in Brain Research
164: 257–264.
Plato. 1997. Phaedrus (trans: Alexander Nehamas and Paul Woodruff). In Complete works, ed. John
M. Cooper and D.S. Hutchinson. Indianapolis/Cambridge: Hackett Publishing Company.
Searle, J.R. 1980. Minds, brains, and programs. The Behavioral and Brain Sciences 3(3): 417–457.
Spelke, E.S. 2004. Core knowledge. In Attention and performance: Functional neuroimaging of visual
cognition, vol. 20, ed. N. Kanwisher and J. Duncan, 29–56. Oxford: Oxford University Press.
Surowiecki, J. 2004. The wisdom of crowds: Why the many are smarter than the few and how collective
wisdom shapes business, economies, societies and nations. New York: Doubleday.
Tomasello, M. 1999. The cultural origins of human cognition. Cambridge, MA/London: Harvard
University Press.
Tversky, B. 2005. Visuospatial reasoning. In The Cambridge handbook of thinking and reasoning, ed. K. Holyoak and R. Morrison. Cambridge: Cambridge University Press.
Uchikawa, K., and H. Shinoda. 1996. Influence of basic color categories on color memory
discrimination. Color Research and Application 21: 430–439.
Walton, K.L. 1990. Mimesis as make-believe: On the foundations of the representational arts. Cambridge, MA/London: Harvard University Press.
Willats, J. 1997. Art and representation: New principles in the analysis of pictures. Princeton, NJ:
Princeton University Press.
Winawer, J., N. Witthoft, M.C. Frank, L. Wu, A.R. Wade, and L. Boroditsky. 2007. Russian blues
reveal effects of language on color discrimination. Proceedings of the National Academy of
Sciences 104: 7780–7785.
Part III
Applications: Education, Internet and Information Science
Chapter 7
What Happens to Infoteachers and Infostudents After the Information Turn?
Elena Pasquinelli
7.1 Introduction
The information revolution has changed the world profoundly, irreversibly and problematically,
at a pace and with a scope never seen before. It has provided a wealth of extremely powerful
tools and methodologies, created entirely new realities and made possible unprecedented
phenomena and experiences. It has caused a wide range of unique problems and conceptual
issues, and opened up endless possibilities hitherto unimaginable. (Floridi 2003)
E. Pasquinelli (*)
Department of Cognitive Studies, Ecole normale supérieure (Paris), Paris, France
Groupe Compas – Education, Technologies, Cognition, 29, rue d’Ulm,
75005, Paris, France
e-mail: elena.pasquinelli@gmail.com
are less and less relevant. Let us imagine walking in the street with our mobile
phone in our pocket (not a huge leap of imagination, in fact). Someone calls from
far away, we answer and engage in a conversation about a strange art object we are
looking at, right in front of us; a picture of the mysterious object is soon taken, and
sent to the phone-friend. The phone-friend, tickled by curiosity, searches the Internet
for street exhibitions in our town. Meanwhile, we approach the object, and find a
code; we then point the camera of our smart-phone onto the code, and an artist
appears next to the mysterious object – on the screen of our phone, of course – ready to explain the meaning of the artwork and to guide us – GPS activated – through an entire maze of no-longer-so-mysterious objects of art that are physically installed in town, and through another maze of artworks that the same artist has created with digital tools: representations that are activated by special codes disseminated in the town and that we see on the screen of our telephone when we point the camera at the real spot. By simply using a smart-phone one can experience that “The digital is
spilling over into the analogue and merging with it” (Floridi 2007, p. 64), and that
the real world is part of the infosphere (the picture we sent to our phone-friend).
This is why the infosphere is “now vast and infinite” (Floridi 2007, p. 62), ICT
(Information and Communication Technologies) being “among the most influential
factors that affect the ontological friction in the infosphere” (Floridi 2004, p. 63).
Friction is the force resisting the flow of information within a certain region of the
infosphere; when friction is low, information freely circulates in a way that makes
inforgs – as inhabitants of the infosphere – not necessarily savvy, but at least
informed: they have no right to claim ignorance, and they know that others know.
Mobile phones have done much to reduce friction. They are so portable, always in (the pocket) and always (switched) on, that they are much more similar to glasses for short-sighted people than to sophisticated ICT. But they are sophisticated ICT. This fact transforms those who wear them into nicely sophisticated ITentities with troubles in sight. Troubles mainly concern ethical issues, such as the risk that the
digital divide – the unequal distribution of information technologies, hence of friction in the infosphere – will generate new populations of “excluded” across and
within societies.
As a consequence of such re-ontologization of our ordinary environment, we shall be living
in an infosphere that will become increasingly synchronized (time), delocalised (space) and
correlated (interactions). … Although this might be read, optimistically, as the friendly face
of globalization, we should not harbour illusions about how widespread and inclusive the
evolution of information societies will be. The digital divide will become a chasm, generating
new forms of discrimination between those who can be denizens of the infosphere and those
who cannot, between insiders and outsiders, between information rich and information
poor. … But the gap will not be reducible to the distance between industrialized and devel-
oping countries, since it will cut across societies. (Floridi 2010, p. 9)
At the same time, developing countries are showing a great deal of ingenuity in
exploiting the potentialities of ICT so as to create economic and educational possibilities that would otherwise be absent. Forms of mobile banking in Kenya and in other
African countries (Greenwood 2009), as well as educational mobile practices in
South Africa and India – that I will illustrate later in this chapter – even suggest a
but also game consoles; more rarely mobile phones. The number of computers
naturally increases when moving from primary towards higher education, and the
same is true for Internet access and bandwidth (Rudd et al. 2009). Meanwhile,
teachers who have no digital literacy at all are becoming rare (less than 7% in
Europe in 2006). And yet, the information revolution has not happened: a rough picture of the use and uneven distribution of computers in schools – which is not limited to the gap separating developed and developing countries – of the challenges of an evolving digital literacy, and of the ambivalence towards new forms of interaction made possible by mobile phones, video games, wikis and other forms of social networking, shows a relative resistance of the world of education in terms of information friction.
In spite of the wide diffusion of ICT tools, even the very optimistic report produced
in 2009 by Becta (the late British agency for the introduction of technologies in
education) admits that there is still room for improvement, for instance in the use of otherwise widespread new technologies for interactive and engaging forms of learning and teaching that go beyond the projection of presentations on electronic whiteboards (Rudd et al. 2009, p. 26). In other words, even in technologically advanced contexts such as the British educational system, the use of digital tools is not as developed as one could hope. Electronic whiteboards and computers can still be used as traditional tools. This may also be explained by the fact that infoteachers are still far from being an established reality:
In the following five countries, more than 5% of all teachers are not using computers
because they say they see “no or unclear benefits”: Germany (10.5%), Latvia (8.6%),
France (7.5%), Belgium (5.8%) and the Czech Republic (5.5%). There exists a strong
correlation between this scepticism and lack of motivation to use ICT in class and the age
of teachers: the older the teachers, i.e. the longer they are teaching, the more likely they are
to lack motivation for ICT use in class because they do not see benefits in its use for pupils.
(Korte and Hüsing 2007, p. 22)
Moreover, the distribution of digital resources is not uniform. Let us take the
situation of Europe in 2006:
The clear European leaders are Denmark (27 computers per 100 pupils, 26 of which are
connected to the internet), Norway (24 computers per 100 pupils/23 internet connected),
the Netherlands (21/20) and the UK (20/19) and Luxembourg (20/18). The figures in these
countries are significantly higher than the European average of 11 computers per 100
pupils (of which 10 are internet computers). Almost all new member states belong to the
group of laggards which include countries such as Latvia, Lithuania, and Poland; however
Portugal and Greece also find themselves in this group of countries, with 100 pupils having
to share only 6 computers. (Korte and Hüsing 2007, p. 20)
The part of the picture devoted to the use and distribution of ICT would not be
complete without considering the situation of developing countries. It is true that
these are the best candidates for becoming digital slums (Floridi 2010); but develop-
ing countries can also surprise us, and inspire education in unpredicted ways. ICT
for education is a major concern for international organizations supporting development in poor countries – such as the World Bank – and it has become one of the topics of developing countries’ policies. However, as stressed by Kozma (2008),
policies and changes in the classroom practice (where classrooms exist, or are
attended) can significantly diverge. Accurate policies can crash against digitally
illiterate teachers, or pre-existing educational programs based on rote learning.
It should be added that ICT policies are expensive choices, and must be justified
against results, especially when developing countries are at stake. Outcomes
expected from the introduction of ICT in education should hence be stated in
measurable ways, and actually measured in order to monitor their effects (Wagner
et al. 2005). Maybe one good reason why information and communication technologies haven’t revolutionized education yet is that persuasive evaluations of the capacity of ICT to enhance, or to transform, education are still lacking.
It should also be remembered that, unlike other forms of literacy, digital literacy is an evolving competence. The 2010 edition of the Horizon Report (the annual issue of a research project established in 2002 with the aim of identifying emerging technologies likely to have a meaningful impact on education, training and research over the following 5 years) describes the situation as follows: everybody agrees on the
importance of digital literacy, but training in digital skills is still rare in education
programs; this lack is made more salient by the continuous transformation of the
technology, which changes the very notion of literacy. As opposed to learning to
write and compute, digital literacy is in fact always evolving, so training quickly
becomes obsolete (at least training focused on tools).
This reality is exacerbated by the fact that as technology continues to evolve, digital literacy
must necessarily be less about tools and more about ways of thinking and seeing, and of
crafting narrative. (Johnson et al. 2010, p. 7)
For example, the Horizon Report 2010 indicates the following four key trends for the period from 2010 to 2015. First, the pervasiveness of information, which challenges educators to revisit their capacity for sense-making and for credentialing information that is everywhere. Second, the desire and the possibility to work and learn wherever and whenever, accessing information just in time and on demand; this second trend is potentially disruptive for the distinction between formal (school) and informal learning, and is made possible above all by ubiquitous computing – pragmatically, by the development of mobile phones and mobile learning, and by the decentralisation of IT support. The third trend is that we are becoming more and more used to the idea of browser-based software independent of any specific hardware device. The fourth trend is collaboration. This looks more like wishful thinking,
when it comes to education, but the idea is that (some) schools have created an
environment and a climate in which students and teachers work together toward a
common goal. So, if these are the trends for the next 5 years, what do the technologies
to keep watch over (the emerging technologies that present remarkable potential
from an educational perspective) look like? Mobile computing devices (e.g. smart
phones) and open content are expected to reach mainstream use in the next year;
electronic books and augmented reality accessible to everyone should hit education
in 2–3 years; and finally, gesture-based computers and visual data analysis are
foreseen to have an impact on education over a 5-year horizon. Clearly, digital literacy cannot be bound to computer-related skills, but becomes a matter of gaining an attitude towards the opportunities (and side-effects) represented by new media technologies and practices. This situation could be another reason for education’s recalcitrance towards the information turn: the necessity of continuous updating, and of acquiring a general attitude, on the part of (info)teachers.
New media and practices also include controversial items such as mobile phones and videogames. The active engagement and diffusion of videogames among learners – representing an opportunity for more engaging educational experiences – is absent from the Horizon Report 2010, but strongly present in the reflection on educational technologies; e.g., the proceedings of the 2006 Summit on Educational Games sponsored by the Federation of American Scientists (FAS 2006) open with an enthusiastic endorsement of the introduction of videogames in education:
Modern video and computer games offer a rich landscape of adventure and challenge that
appeal to a growing number of Americans. Games capture and hold the attention of players
for hours as they struggle to operate a successful football franchise, help Romans defeat the
Gauls, or go through the strict regimen of Army basic training in virtual landscapes. People
acquire new knowledge and complex skills from game play, suggesting gaming could help
address one of the nation’s most pressing needs – strengthening our system of education
and preparing workers for 21st century jobs. (FAS 2006, p. 3)
Both mobile phones and video gaming are hence foreseen (by different communities)
as potentially disruptive technologies for learning and education. Their diffusion
forces training in digital literacy to evolve. At the same time, the two are strongly opposed by many teachers and parents – and the objection is not to the indiscriminate, compulsive use of videogames, or to the bad habits that teens sometimes show with mobile phones, but to their very existence and use by kids and pupils. So, while Mobile Learning
(or Mobile Computing) becomes a domain of research – structured by a community
of practice, a series of conferences, an association, a number of national and interna-
tional projects involving developed countries as well as developing ones1 – mobile
phones are banned from schools in a number of countries (BBC 2005; Bremner
2009). Health issues and misbehaviour (from cheating to bullying) are the reasons
adduced for the ban, not adverse effects on learning. The fate of educational videogames
is less dramatic, but still controversial. The success of videogames for simulating
1
E.g.: The International Association of Mobile Learning: http://mlearning.noe-kaleidoscope.org/; Handheld learning conference: http://www.handheldlearning.co.uk/; MoLeNET: http://www.molenet.org.uk/; MobileActive: http://mobileactive.org/
military and other ‘serious’ situations has reached the world of training, vocational training and education, and has produced a domain of studies called Game-Based Learning. Like Mobile Learning, Game-Based Learning gives rise to a number of conferences and projects, and provides the ground for a certain number of educational products (the diffusion of both is much more evident in the UK than in other countries, due to the activity of several organizations).2 It is, however, not easy to evaluate the
effective gain in learning that Game-Based Learning or Mobile Learning produce: controlled tests are an absolute rarity in classrooms; studies on the positive effects of (video)gaming mainly concern visuo-motor coordination (Byron 2008; Mitchell
and Savill-Smith 2004). Again, the lack of evidence, of proper measures – and more
generally: of methods for evaluating the effects of technologies as complex as
videogames on skills as complex as those required by schooling – could be one reason
for the slow penetration of this technology, when added to the fact that videogames
have raised strong, negative reactions. In the US, the National Institute on Media
and the Family3 (a private association) has been conducting a strong campaign against
video games, arguing from studies such as Gentile (2009) and Anderson et al. (2006)
on videogame addiction and on the arousal of violent behaviours (the data only refer
to immediate reactions after the game, long-lasting effects not having been measured).
Supporters of videogames at school reply to the naysayers that studies on videogame addiction do not prove any causal effect of video games on negative behaviours, but just show that – in a minority of children – negative schooling
attitudes and an excessive use of videogames occur together (Prensky 2006; Gee
2007a, b). However, the lack of a large, solid, shared body of evidence certainly undermines both the positive and the negative attitude. A gap exists between trends and penetration, a gap reinforced by the difficulty of updating skills in an evolving domain and by the absence of assessments capable of proving opportunities and measuring side-effects.
The purpose of this chapter is not to take a stand in the debate, but to show that
when it comes to school education and to young learners the introduction of digital
technologies is far from being neutral and technologies raise strong suspicions and
resistance. What about more common tools: blogs, Wikipedia, and all the manifestations of the Horizon Report’s trend number one – in a nutshell, the pervasive circulation of information? In educators’ talk we find the same ambivalence that affects games and mobile phones. Wikipedia is perhaps the most controversial issue, with its “cut and paste” easy solution to homework. On a par with searching for a solution on the web during in-class exams, or receiving tips via the mobile phone, cut and paste in homework research and composition is perceived as a form of cheating (Bulstrode 2008; Johnson 2007). Cheating is in fact one of the bad habits attributed to technology. It is
true that it becomes quite easy to answer pre-shaped, factual questions with Google,
2
E.g.: Games Based Learning initiative, The Consolarium, LTS Scotland: http://www.ltscotland.org.uk/ictineducation/gamesbasedlearning/index.asp; Games Based Learning conference: http://www.gamebasedlearning2010.com/; Educause: http://www.educause.edu/EDUCAUSE+Review/EDUCAUSEReviewMagazineVolume39/GameBasedLearningHowtoDelighta/157927
3
http://www.mediafamily.org/
or Wikipedia. But the question is: why blame the technology, and not (at least also) the questions? It is a fact that not all questions are easily answered by copying Wikipedia entries, and that simply asking students to write an entry would prevent them from copying one (and teach them about the process of writing and modifying entries). But this example shows that introducing and using new technologies in education might involve an additional change of attitude that goes beyond adjusting to evolving technologies: a change in the goals of education and in the understanding of how learning occurs. Before dealing with this issue in the second part of this chapter, I will analyse the paths the information revolution could take to overcome the recalcitrance of education.
One consideration arising from the quick tour we have taken through the (promised) land of infoteachers and infostudents is that formal education seems recalcitrant towards the information revolution, or at least approaches this revolution at the pace of a slow penetration, with a huge amount of doubt. Another consideration is that policy makers seem to be totally sold on the idea that the 4th revolution should/will change school.4 The information turn is thus truly a promised land for educational policies, e.g. for the European Commission.
The Commission’s policy of “information society for all” (European Commission, 2000,
2004) emphasizes the need to bring every business, school, home, and citizen into the digital
age. One goal of the policy is to promote digital literacy that would provide students with
new skills and knowledge that they will need for personal and professional development
and for active participation in an information-driven society. (Kozma 2008, p. 1086)
4
E.g.: Becta: http://www.becta.org.uk/; European Schoolnet: http://www.eun.org/web/guest; National Education Technology Plan: http://www2.ed.gov/about/offices/list/os/technology/plan/2004/site/edlite-default.html; InfoDev: http://www.infodev.org/en/index.html
that are commonly close in our words very distant from one another: professional courses in typewriting had adopted the QWERTY model and developed training that fitted that particular keyboard. Very soon, better machines were produced, and the original QWERTY model became useless. Nonetheless, nothing has changed since then: we are still struggling with our illogical keyboards, trying to explain to our kids why their technologically advanced computers and gesture-sensitive, born-for-natural-interaction devices do not allow letters to be identified quickly.
The story of education could be the same as QWERTY keyboards, resisting change
coming from top-down claims for rationality, efficiency, cognitive functioning. This
is a good reason, Papert says, for believing that change will come from outside the
system of education. In 1980 he prophesied that the day every child and adult would
possess a computer, learning would undergo a seachange, and schools would have
to follow (Papert 1980).
It is on these premises that the One Laptop Per Child (OLPC) project was born, claiming
that each and every child in the world should possess a computer, especially kids
from developing countries.5 The day this happens, the very idea of teaching and
learning, and of teachers and learners as we conceive of them, will dissolve (and a great
deal of social injustice will dissolve too, overcome by knowledge and global
participation). For this reason, low-cost (the goal, not quite reached, was to keep
the cost under $100), low-power, robust laptop computers (called “XO”) have been
designed and given to about 1.6 million kids around the world (with governments
paying for the computers).
paying for the computers). The XO is delivered with programs not directly aimed at
learning, but rather at creating and interacting: each laptop is in fact connected with
the XO laptops of the area so as to allow distance collaboration and sharing of the
contents that kids are able to create with their personal computer. At the heart of the
OLPC project hence lie the idea that a quantitative factor can translate into a qualita-
tive revolution (a revolution hitting both education and poverty), and the view that
learning is a constructive process: children are the agents of change, once they
become active in their learning, and in teaching as well – for instance teaching their
parents to read and write, as it happens in Peru. When this occurs, the information
revolution has a major effect in blurring the boundaries between teaching and learning,
as infostudents become infoteachers. But does this happen? In 2009, OLPC has met
a big objective: a contract with the government of Uruguay to bring a green computer
to every child in the country. In March 2010, Rwanda’s government decided to
endow every Rwanda kid from 9 to 12 with a XO laptop. Despite all this, many
consider OLPC a failure (Nussbaum 2007; Dukker 2007). The number of XO sold
to governments has not reached the expectations that could make it economically
viable – big ‘clients’ such as India and China have not followed the OLPC sirens.
Additionally, OLPC computers require maintenance and have to travel in difficult
conditions, requiring a large and distributed organization and lots of diplomacy.
5 The director of the project is Nicholas Negroponte, but Seymour Papert and Alan Kay (all three from MIT) are amongst the educational theorists and computer scientists recognized as being at once inspirers and supporters of the initiative, which was launched in 2005. OLPC: http://laptop.org/en/
134 E. Pasquinelli
Above all, even the OLPC project has somehow taken the top-down path to the
information revolution, rather than the bottom-up path. Customers (the kids) were
involved only at a late stage of the project; the laptop, its programs and its
concept were delivered as ready-to-use, unsolicited “gifts”. Everybody
knows how it feels to get a birthday gift one didn’t ask for: a big surprise, but, well,
we so badly needed or wanted that other beautiful whatever-it-is. This could in
part be the story of OLPC. Just in part, because OLPC remains an inspiring and influential
project, and because meanwhile the OLPC initiative has contributed to lowering the
price of laptops in a meaningful way. Nevertheless, some critics have expressed the
idea that OLPC designers should have spent more time in villages in India, Africa, and South
America, observing the uses and needs of the local populations, namely
children; and influential experts in ICT in education have contrasted the OLPC
top-down model with a truly bottom-up approach: the steady, spontaneous multiplication
of mobile phones in developed countries as well as in developing ones
(Trucano 2009).
6 Mxit: http://www.mxitlifestyle.com/
homework; he never solves their problems directly, but guides them step by step to
the solution, and to an understanding (as we can read from the transcriptions of the
interactions). This model cannot but remind us of the way forums work. Results of
the evaluation of the Dr Math project are still to come, but users’ comments are positive,
and a new project (Imfundo Yami Imfundo Yethu) has been launched which involves
a Finnish organisation, the South Africa Department of Education, Nokia, and 260
Grade 9 and 10 learners from six schools, in order to produce controlled evaluations
(Vecchiatto 2009).
What do we learn from the Dr. Math (on MXit) case?
Math on MXit takes advantage of the fact that teenagers are already using MXit to
communicate with their friends. (Butgereit 2007)
First, the educational activity provided by Math on MXit is not the reproduction
of something that already exists in traditional education: one-to-one tutoring is a very
desirable but expensive situation (Bloom 1984). We also know that African families
are hungry for tutors for their kids, but these tutors are often amateurish and expensive.
We thus have a clear need on the educational and social side, which traditional
systems can hardly fulfil. Secondly, rather than inventing new practices and trying
to make them popular, Dr Math colonises existing, common practices with educational
purposes. In a perspective that is coherent with the information revolution
described by Floridi, the principle of colonisation consists in grafting educational
purposes onto the ecology of the infosphere. When students go to school they are
taken away from the “real world”: the infosphere, with its practices and its ecology
made of cell phones, messages, and a wide variety of ways of producing and sharing
information. Once back home they are inforgs again (they start again mxing, gaming,
surfing the net). Colonisation represents an ecological approach to bringing the
information revolution into the domain of education.
What would the opposite scenario look like? Something like this: a brand new
technology that students (and also teachers) do not know how to use is added to the
classroom. Moreover, this unpractised technology does not bring a new function
into the educational panorama, but is limited to the electrification of pedagogical
activities, tools and roles that can very well be realized in more traditional ways
(Casati 2009). In other words, technology is used as a modernizing paintbrush, a
form of electrification of books, teachers and blackboards.
To conclude, the 4th revolution is yet to reach education, for several reasons,
among which we can cite: the lack of appropriate and shared evaluations of effects
and side-effects, the difficulty of keeping up with continuous changes in hardware and
practices, and the challenge to educational habits. Moreover, injecting technology is
not enough, and changing educational habits is a hard job, prone to the mistake
of simply adding some digital make-up to traditional activities. The way for
the 4th revolution to reach and transform education could then be better represented
by a form of ecological colonisation of existing, widespread technologies and
practices. A double colonisation, since this model comes from brilliant ideas
spreading from developing countries: will innovation in education be the place for
a counter-colonisation?
President Obama has not condemned ICT as a whole (it would have been awkward
for someone who has made exemplary use of the Internet, and is still making
unprecedented use of Twitter and YouTube). According to The Economist’s analysis,
infopresident Obama’s speech implicitly contains a distinction between good,
empowering information and bad, distracting information. Still, the speech was
addressed to students of Hampton University, and the quoted sentence could thus be
interpreted in the following alternative way: information is not good or bad in itself;
yet, when educational environments do not limit the pervasiveness and free circulation
of information (when the infosphere becomes frictionless), it becomes difficult
to attend to the information proposed by the teacher in the classroom. It is not untrue
that sending SMS messages, or consulting YouTube or even Wikipedia, is incompatible
with the Victorian model of the classroom, where a teacher speaks to listening
pupils. But this is not the only possible scenario.
In addition to critics, Papert introduces two other categories of attitudes towards
ICT in education: “optimists” and “sceptics”. Optimists believe that computers can
make a qualitative difference in learning; it is not just a matter of improving instructional
teaching and school education, but of empowering individuals to choose the
way they want to learn by creating learning tools that can be used outside schools:
ICT augments education, in the sense that it changes education into something
which can benefit from the entire infosphere. What grounds the optimistic attitude
is the view that learning is a cognitive process which goes beyond dedicated instruction
(school): people learn from their experience, all their lives; children learn from their
environment and culture. Changing the furniture of the environment, changing the
tools and habits that are part of the culture, also changes the way we learn and
think. Floridi would say that this produces a re-ontologisation of the learning
environment, a transformation of its intrinsic nature (Floridi 2007). For instance,
infolearners will ask for different schools that correspond to their way of learning, and
to their idea of knowledge: knowledge which is accessible anytime, anywhere;
knowledge which is constructed by multiple, interconnected intelligences; and
knowledge which is gained through active patterns of search, hence meaningful
from the searching individual’s perspective. In the framework of this massive change,
information overload is no longer a problem, because the very structure of educational
contexts, methods, and aims is transformed by the expansion of the infosphere.
On the opposite side, sceptics do not expect the presence of computers to
produce a massive change in how people learn and think; according to them, all that
ICT can do is enhance instruction (as opposed to augmenting education), by providing a
means for better teaching in schools. Interactive whiteboards can be considered
“enhanced” blackboards, which allow teachers to display multi-modal contents
(images, videos, charts) and to save exercises and notes; this is a lot more than can
be done with a traditional blackboard, but it does not represent (or at least, not
necessarily) a revolution in how students learn and teachers teach.
The three categories described by Papert do not belong to the same “natural
kind”. Critics and optimists both believe that ICT will produce a radical change in
learning and thinking, but they evaluate the desirability of its effects differently.
Sceptics neither hope nor fear; they simply do not believe (or estimate) that the information
turn represents a massive change for education. We thus obtain two axes along which
different positions can be aligned. Aviram and Talmi (2005), for example, draw a matrix
along two axes they call approaches and attitudes. Approaches range from the
assumption that technology can be subsumed under the traditional school and
curriculum – and that its introduction has a qualitative but not a “revolutionary”
effect – to challenges to the very notion of school as a physical space and to its
aims. The transition from one extreme position to the other is represented by seven
beliefs: a. that computers should simply be present at school as they are everywhere
else; b. that technology should serve curricular purposes, by becoming a discipline
(computer science) or by taking advantage of ICT for teaching the subject matters
included in the current curriculum (e.g. sciences or maths); c. that new technologies
are part of a change in the way contents are taught/learnt at school (for instance,
through more constructive and interactive methods); d. that the whole organization
of educational spaces and times, roles and curricula is changed by the advent of ICT;
e. that school disappears in favour of remote and even virtual schools; f. that ICT in
education is part of a deep cultural revolution; g. that change should be shaped by
values (Aviram and Talmi 2005). The cultural approach that characterises f. fits
particularly well with the philosophy of information proposed by Luciano Floridi,
because it recognises that ICT has a re-defining (re-ontologising) impact on our
way of living and thinking about things, and because it acknowledges the fact that
the educational revolution is part of a deeper revolution that has transformed
Western culture.
The cultural approach is quite rare in discussions on ICT and education. Those who rely on
it are mainly academics, intellectuals or futurists. The approach remains unknown to many
teachers, and even to many academics. Adherents of the cultural approach maintain that
educationists should be aware of the revolutionary, defining nature of ICT, and strive to
adapt the education system to the new culture. Such adaptation could take diverse routes.
One may judge the rising postmodern culture favorably and recommend radical changes in
the school structure in order to adapt it to the new ‘human situation’ (what we call below
the ‘radical’ attitude). Conversely, one might judge it unfavorably and opt for preserving
and strengthening the existing structure of education (the conservative attitude). (Aviram
and Talmi 2005, p. 171)
As for the second axis, Aviram and Talmi distinguish five attitudes, driven by
different goals: those of i. agnostics, ii. conservatives, iii. moderates, iv. radicals
and v. extreme radicals, ranging from those who do not care about what the impact
of ICT would or should be, to those who “believe that ICT is a Trojan horse inside
the base of the prevailing educational system, and that the latter will not (and, quite
often, should not) survive it” (Aviram and Talmi 2005, p. 172).
In what follows I will illustrate some examples of what Trojan horses could look
like, and of their potential effects on the Victorian school. The process of bringing the
effects of the information revolution into formal education is slow because formal
education has created special places for learning, and these special places tend to
keep learners separated from the world. Trojan horses can enter the heart of the
educational system, school; but they can also settle in the periphery of the citadel and
slowly change the perception of what education is (i.e., re-ontologise education).
“Hole in the wall” is an initiative aimed at slums and poor villages in India, imagined
and realized by Sugata Mitra (now professor of Educational Technology at the
School of Education, Communication, and Language Sciences, Newcastle
University). In 1999, in Kalkaji (a poor borough of New Delhi), a real hole was
made in the real wall separating the NIIT (the learning solutions corporation Mitra
was working with) from the adjoining slum: a computer was slipped into the hole,
for free use. Children came, spontaneously, and started using the computer to look
up information on the Internet (videogames and CDs have also been employed in
further settings). Many skills were required which the kids did not yet possess: using
a mouse, understanding how a web page is structured, and most of all reading
English – a major issue for education in India, where English is the mandatory
requirement for access to higher education. The observation of kids operating
the computer, coming back day after day, getting better at digital literacy, and
collaborating, came to reinforce the pedagogical stance that Mitra has since
identified as “Minimally Invasive Education” and “unsupervised learning”: learning
that develops from the natural exploratory activity of children, especially when
children are brought together and interact with an object which is able to deliver
information in different shapes (Mitra and Rana 2001). This same model has been
exported from India to Cambodia, to Africa, and even to the UK, as a project named
“Self Organised Learning Environments” (Mitra 2009).
“Minimally invasive” refers to the least possible, negligible, or the minimum help required
by the child to initiate and continue the process of learning basic computing skills. This mini-
mal amount of help from other children at the MIE learning station is necessary and sufficient
for the children to become computer literate. This “help”, which is the fundamental aspect
of MIE, could be from peers, siblings, friends, or any other child familiar with computers.
Children are found to collaborate and support each other. The learning environment is char-
acterized by its absence from adult intervention, openness and flexibility. Children are free
to operate the computer at their convenience, they can consult and seek help from any other
child/children, and are not dictated by any structured settings. (Mitra et al. 2005, p. 3)
The method is meant to apply whenever there are no real teachers at hand, or at
least no good teachers (because they will not accept work in remote parts of developing
countries, nor indeed of developed ones). Many solutions have been put in
place around the world to compensate for this absence – special books, radios transmitting
courses to the classroom (and the physical teacher), educational TV, open
universities, and what is called e-learning, or distance learning (through CDs, the
Internet or even mobile phones) – all with a common denominator: being addressed
to the individual learner, or to the individual learner as immersed in a typical classroom
structure (as when learners watch and listen to the radio and wait for questions
posed by the teacher, or take their exercises on a mobile phone) (Trucano 2005).
In Minimally Invasive Education this modality is challenged twice: first, learners
become teachers for other learners (they peer-teach each other in groups); and,
second, learners search for information, instead of receiving it in the form of instruction
or of tests (which is not to say that instruction and tests are not useful and effective).
MIE has turned out to be effective, at least for the acquisition of computer literacy:
children collaborating around a computer reach levels of digital literacy
comparable to those acquired by means of traditional classroom instruction
(though it should be acknowledged that they normally spend more time interacting with
the computer than children using the computer at school) (Mitra et al. 2005). This
means that a minimal investment (much less than one computer per child) could
make a difference in terms of the digital divide (which was cited at the beginning
of this chapter as a major ethical preoccupation of the information age). Moreover,
the Hole in the Wall experience points to an issue deeper than the positive
effects of MIE on digital literacy and the digital divide: a different way of learning is made
possible – or at least made easier – by the fact of living with other inforgs in a rather
frictionless infosphere (friction being reduced in this case by the presence of just
one computer). This form of learning is self-directed, collaborative, and independent
of formal structures and settings.
Would it have been possible to achieve the same result before the information
turn? In other words: is the fact that information can freely circulate a necessary
condition for this form of learning to exist? A thought experiment can shed light on
this question: let us imagine a group of children wandering among the scrolls of the
Ancient Library of Alexandria; certainly, they would have access to large quantities of information;
however, those scrolls could not respond to their actions: they would not close in
response to a bad search. Computers do. Sugata Mitra describes the discovery made by
one of the first “Hole in the wall children”: the kid touches the screen in a
certain way and sees one page disappear and another appear; he then goes back and
forth in search of new reactions from the machine. Tools of the information turn do
react to learners’ actions (and to states of the world, if they have a GPS or any other
kind of sensor) with a change in their informational content. They can thus become
part of a dialogue in a way that books, radios, and cinema (and even just reading a Wikipedia
page) cannot.
Hole in the wall and MIE are, however, a rather extreme form of no-schooling,
confined to places with no school to choose as an alternative. Few parents in Paris,
London, Rome, New York, Singapore, or Tokyo would choose to send their children
to play with a computer in the street rather than to school. But some
children, even in these big cities, do not want to go to school: they quit, disengage,
suffer school phobia, or illness. Is there an alternative to bringing them back to
the classroom? Forms of no-schooling or virtual schools have been tested: learners
do not meet physically and do not collaborate in presence, but only at a distance,
via the computer; they receive some form of follow-up which is somewhat more
“invasive” than Minimally Invasive Education. The Notschool project, originally
founded by Stephen Heppell, aims at re-engaging students in the learning process
without imposing a school environment: learners have access to chat rooms, mentors
(one for six learners or researchers), and a virtual community.7 In 2007 the project
included 1,000 learners, with 96% obtaining some form of accreditation. The
principle behind Notschool is that education can reach learners
everywhere: they do not need a physical space called school (which does not
mean that schools should not exist). What makes this possible is the existence of a
continuous flow of information, and of high bandwidth.
We have seen how computers can transform learners into infoteachers and breach
the walls of schools (a re-ontologisation of authority and space). Schools are also
time-organisers, defining which moment is for learning and which for entertainment
(not during the lesson: information overload), for play, and for socialising.
Does the information turn also affect (re-ontologise) our perception of time and, in
particular, the idea that there is a time for learning and a time for doing other things?
I think this particular re-ontologisation could depend upon the spread of two practices
which I cited as influential on education in the first part of this chapter: mobile
learning and serious gaming. Let us consider them in turn.
7 Not School: http://www.notschool.net
8 English in Action (Open University): http://www.englishinaction.com/
9 Millee: http://www.millee.org/
Or better, she searches for their definitions on Google, and writes SMS messages to course
mates and tutors to ask their advice on the particular problem she has encountered
(Kukulska-Hulme 2009).
Second, mobile phones sense the environment and respond to it: bar code
scanners, GPS, compasses, accelerometers and gyroscopes embedded in smartphones
are sensors that allow multiple applications: augmenting reality with digital
contents10; writing on the phone just by tracing words in the air (Agrawal et al. 2009);
or writing on a projected keyboard, so that even physical action in the world becomes
digital information and a command for the machine (Maes and Mistry 2009). Even
walking can become digital information, with an iPod (if one wears Nike shoes).11
Third, mobile phones are tools for communicating (via voice, texts, and images)
with other individuals, and also with machines (Sharples 2005). One can receive
tips about lessons, or be put in a network with other learners interested in the
same topics, by automatic systems for managing networking and administration on
a university campus (Brown 2008). In some cases individuals and machines can be
combined in such a way as to become one indistinguishable information station.
Imagine being in a remote village in Cameroon, and badly needing to know who is
the richest man in the world, or what the price of tomatoes is today, or which is the
right pesticide for your dying plantation. Imagine not having access to the Internet, at
least not directly, but owning a mobile phone (and some credit), and a number to call,
where an operator takes the question, searches the Internet, and provides the answer.
A smooth flow of information, a frictionless infosphere, is established up to the
remotest corners of the planet, if one can make a call, even in the absence of
computer functionalities. This is the lesson of the Question Box project: a service for
calling an operator who searches the infosphere, from dedicated phones distributed
in Indian villages, or from one’s own mobile phone in Cameroon.12
The three uses just mentioned can affect the way we conceive of access to information,
knowledge and education, in a way that goes beyond the anytime, anywhere refrain.
First of all, learners can access information when it is really needed,
when it is meaningful. Accessing information just in time has potentially large consequences,
which go beyond education. Knowledge does not necessarily need to be stored in
the mind when one knows where and how to find information, and when one is confident
of being able to access it at any moment. Mobility is hence a premise for
considering ICT a form of cognitive extension (Clark and Chalmers 1998), for
instance a memory extension which is not so different from “internal” or “brain”
memory. Like memory, mobile phones are always with us, always on. This does not
mean that there is no difference between internal processes and extended ones, but
that ICT tools can be used as cognitive tools with an effect on cognitive actions
and performances. It is obvious that the fact of possessing mobile phones would not
have changed the necessity for people living in Ray Bradbury’s Fahrenheit 451 world
10 E.g.: Layar: http://www.layar.com/
11 Apple-Nike: http://www.apple.com/ipod/nike/
12 Question Box: http://questionbox.org/
to learn entire books by heart in order to save them from burning. Mobile phones
can burn, too (actually, we could imagine a future in which memory can be selectively
erased; but that is the future).
Secondly, learning can happen in context. While other media tend to create a separation
between the learner and the physical world, and between digital information and
physical objects, mobile phones allow a perfect integration of the two: they take the
learner out of the box (Van der Klein 2008). Thus, in mobile conditions context can
affect learning in two complementary ways: objects raise questions which learners
can answer with the help of digital information (augmented reality mode); and
objects provide answers to questions raised by digital information (augmented
digital representation mode). An educational project developed by Waag, a Dutch
organisation, illustrates the second mode: grouped in small teams, young learners
follow a quest in the Medieval streets of Amsterdam; they walk in search of monuments
in order to answer the problems raised by a video game for mobile phones; at
the same time, they stay in contact with residential teams searching the Internet with
computers.13 Mobile phones thus allow a form of experiential education as Dewey
described it in the last century: where knowledge is acquired through experience
(active exploration), and connects to the learner’s experiences (interests, motivation,
life) (Dewey 1997).
Experience is a key word for the vision of education put forward by John Dewey, a
vision never fully realized. A lack of appropriate means could explain this. So, let us see
what happens when new technologies are employed to make experience possible. In
the 1990s, a group of researchers at Vanderbilt University, coordinated by John
Bransford, launched a long-term project devoted to designing and testing a method
for the learning of the sciences which would comply with Dewey’s considerations
about experience, and with the notion of inert knowledge as introduced in 1929 by
another philosopher, Alfred North Whitehead (CTGV 1990). Whitehead had
claimed, in front of his colleagues, that the Victorian school provides students with a
form of knowledge which is not used for anything but responding to tests
(Whitehead 1929). This knowledge is inert, because schools teach broadly but not
deeply, and because they disconnect knowledge from the reasons for its existence,
from the contexts of its application. But in Whitehead’s view, as well as in Dewey’s,
pieces of knowledge are nothing but tools which help people cope with the world.
This is also the perspective adopted by Bransford and colleagues (and they are certainly
not the only ones) in proposing an anchored instruction method for learning
maths, pivoting around the videotaped adventures of a fictional character, Jasper
Woodbury (CTGV 1990). Jasper finds himself driving a boat or a plane, and facing
13 Frequency 1550: http://freq1550.waag.org/
problems of fuel, distance, and time. At the end of the movie, students are asked to plan
the solution to the quantitative problems Jasper is faced with, and to carry out the
computations. Instruction is thus anchored to “real” contexts, and mathematical tools
serve to solve “real” problems.
Twenty years later, this same approach is proposed in the framework of serious
(video)gaming: not only are concrete problems posed to learners in the context of
the representation of a certain situation; learners are also asked to find the solution
and to implement it directly in the game (something that was impossible with non-interactive
technologies like videotapes). Serious games, as well as simulations
without gaming (the difference being that simulations have no winners, no rewards
and no competition), have spread in a number of domains: military training – including
the simulation of social interactions with civilian populations; surgical training –
including school dissections performed on virtual frogs; and the training of pilots
– even to earn a (real) licence to fly civil planes, with no other experience than flying
military planes. Going back to Jasper Woodbury, games have been designed for
teaching and learning biology, physics, history, and mathematics (Prensky 2005).
Commercial Off-The-Shelf games (COTS) are currently used in schools for stimulating
children to write and imagine scenarios, for inviting them to collaborate
around the organisation of events, for increasing efficiency and speed in elementary
computation, and for all those learning activities that inventive teachers can
devise by diverting commercial products from their original aim and colonising
them (again) with educational purposes (Felicia 2009).
Naturally, the idea that play is important for children, and even for learning, is
not new: it is not a product of the information age, or of the videogame industry.
Historically, the first theories of play were purely descriptive and aimed at finding a
role for play in the human development (from the end of the nineteenth century).
The normative idea that play should be exploited for learning is more recent; among
others it has been asserted by Maria Montessori, and is still implemented by schools
inspired by her vision. Still more recently this same idea has been revived by the
advent of videogames, and has given birth to what is called Game-Based Learning,
or better: Digital Game-Based Learning (Prensky 2005; Gee 2007a, b). It has even
been asserted that modern videogames (whatever their original purpose) are machines for learning: players must learn how to play in order to enjoy the game, and if the game does not facilitate learning, the designer is out of business (Gee 2007a, b). This strong
constraint would be the reason why videogames embed very efficient pedagogical
principles: learners feel like active agents because they make things happen; learners
form expertise by practicing skills until they are nearly automatic, then find those skills insufficient for new situations, so that they must think and learn anew; learners are placed in fish tanks that resemble real situations in structure, but without the dangers and excessive complexity
of the real world: only certain variables are selected and stressed (“With today’s
capacity to build simulations, there is no excuse for the lack of fish tanks in schools”:
Gee 2007a, p. 39); players do not start from the manual, but from playing the game, going back to the manual afterwards to learn more (“Game manuals, just like
science text books, make little sense if one tries to read them before having played
7 What Happens to Infoteachers and Infostudents After the Information Turn? 145
the game”: Gee 2007a, p. 38); hence learners can start from experience rather than
from general definitions and principles; and, naturally, learning and pleasure are
joined together.
Pleasure and learning: For most people these two don’t seem to go together. But that is a
mistruth we have picked up at school, where we have been taught that pleasure is fun and
learning is work, and thus that work is not fun. (Gee 2007a, p. 10)
So, at the same time, games (digital or not, but it is a fact that the discussion has
been revived by so-called Digital Game-Based Learning) question the distinction
between time for learning and time for pleasure, and make it possible or at least
easier to challenge the idea of education as the transmission to the new generation
of bodies of information and skills that have been worked out in the past (Dewey
1997), because digital fish tanks are ideal tools for experiencing simplified models
of reality that are designed for pedagogy.
New technologies, or better: the way they are practiced in some exemplary cases,
challenge some of the tenets of a model of schooling and education – a model which
probably is not realised in its complete form in any school of the twenty-first
century, but that is present in our vision of education, positively or negatively.
7.4 Conclusions
In the preceding sections, I have identified the Victorian school with a number of
characteristics: a dedicated space (separated from other social enterprises and phys-
ical places), a dedicated time (the time for learning), well-defined roles (one teaches,
the others learn), and contents (inert knowledge). I have shown that all these
characteristics are challenged by practices that have become possible after the
information revolution, even if this does not mean that they will be transformed.
The pervasive flux of information is a potential Trojan horse into the traditional
structure of education. Firstly, ICT practices spreading in developing countries, and
in especially “deprived” conditions (in terms of educational systems and access to
literacy), can be colonised for educational purposes; and, secondly, alternative educational practices with ICT can challenge all those who are interested in education
to revise their conceptions about education and learning.
Understanding that social structures (such as the school) and concepts (such as education) can also become different opens the door not to one but to a number of alternatives,
because it is a process of de-naturalisation. Concepts are not frozen, “natural”
entities. They live their life in the middle of contracts, negotiations, practices, and
debates. They have a history, and a context from which they take their meaning.
When the context changes, concepts can undergo mutations. If they don’t, they
become obsolete and are replaced (as happened to the notion of phlogiston).
That’s why examples are important to me, and I have used many in this chapter. It is
the old Wittgensteinian methodological rule: see it this way, and now see it that way, but do not stop seeing it in other ways. The effect is that we acquire what
146 E. Pasquinelli
Robert Musil called the sense of possibility, as opposed to the sense of reality. So,
the fact that new, alternative practices spread does not mean that schools will or
should be closed – unless they prove ineffective in relation to the objectives
they assign to themselves; or unless these objectives conflict with wider objectives,
which become dominant in the society surrounding and supporting schools
(principle of reality). However, the spread of new practices certainly forces us to
re-conceptualise what we intend when we talk about schools, education, learning,
and knowledge (principle of possibility). For example, peer-teaching practices and
self-directed learning induce a mutation in the notion of authority and of education
as the transmission of knowledge from someone who possesses information to
hollow learners. At the same time, the distinction between formal education and
informal learning becomes less important, because learning is no longer bound to
official places for transmission.
In the context of information, which is accessible anytime, anywhere, on demand
and just in time, even the notion of knowledge as something we possess in our brain
undergoes some adjustment. In certain respects, our mind can be considered an extended structure encompassing both the brain under the scalp and the tools in our pockets. In this perspective, the idea that school should transmit all the content that may be needed in the future becomes redundant.
Thus, it becomes more reasonable to concentrate on learning deep, rather than on
learning broad; also because of the possibility of learning from experience in
concrete – even if digital – settings that are models of reality with a stress on relevant
variables, and relevant variables only. If this re-conceptualisation sounds too extreme
let us just side with sceptics, and leave big re-conceptualisations to optimists.
As we have seen, the main difference between optimists and sceptics lies in the
following opposition: on the one side, the idea that when the infosphere extends to
schools, frontiers between schooling and non-schooling are redesigned, as happens
to physical and virtual artefacts in augmented reality (augmentation); on the other
side, the idea that friction in the circulation of information will always make a
difference between places inside the educational system and places outside it, because
ICT will be functional to enhance the present state of affairs (enhancement).
Evaluation (and the definition of proper systems of evaluation that are apt to
monitor the achievement of stated objectives) is a crucial condition for asserting that
(a certain) technology represents the best tool for enhancing education. If we can
prove it works, it will become easier to foster the use of new technologies in school,
in order to enhance students’ performance. Accountability and a systematic use of evaluation and testing to choose the best teaching strategies are the key to identifying good tools, and to spreading ICT tools (that are worth
spreading). Some (radicals) might argue that this is not a big gain, and certainly not
a revolution in education.
From the first years of their life, kids perceive the world as being structured: they
use criteria for parcelling the flux of stimuli into separate, consistent, dynamically
coherent objects; they distinguish between non-animated and animated entities;
they get habituated to regularities, and show surprise when faced with violations of
expectations. They also develop beliefs about how the physical, the biological, and
the psychological worlds work, and interpret events in terms of these beliefs, which,
quite often, can prove false when compared with scientific theories of the same
phenomena. Replacing or updating false beliefs is referred to as “conceptual
change”, and it is a big challenge for education. This would not be the case if the hollow box were a correct image: hollow boxes offer no resistance to being filled. But learners, however young they are, are not hollow boxes. They
are rather complicated interpreting machines that use what they know and their
previous experiences to make sense of new events and of the world.
Like new technologies, knowledge from cognitive science is challenging some
of the tenets of education and suggesting that education should start from how we
learn (hence from the observation of good practices and the study of mind) rather
than from the consideration of what is useful to learn (even in the twenty-first
century perspective). How their joint venture will be able to affect education is more
a matter of will than of divination.
References
Agrawal, Sandip, et al. 2009. PhonePoint Pen: Using mobile phones to write in air. In MobiHeld09.
Barcelona, Spain.
Ally, Mohamed. 2009. Mobile learning. Transforming the delivery of education and learning.
Edmonton: AU Athabasca Press.
Anderson, Craig Alan, Douglas A. Gentile, and Katherine E. Buckley. 2006. Violent video game
effects on children and adolescents. Theory, research, and public policy. Oxford/New York: Oxford University Press.
Aviram, Aharon, and Deborah Talmi. 2005. The impact of information and communication
technology on education: The missing discourse between three different paradigms. E-Learning
and Digital Media 2(2): 169–191.
BBC. 2005. Should mobile phones be banned in schools? May 27. http://news.bbc.co.uk/cbbcnews/
hi/newsid_4570000/newsid_4579100/4579159.stm
Bloom, Benjamin. 1984. The 2 sigma problem: The search for methods of group instruction as
effective as one-to-one tutoring. Educational Researcher 13(6): 4–16.
Bransford, John D., et al. 2000. How people learn: Brain, mind, experience, and school. Washington,
DC: National Academy Press.
Bremner, Charles. 2009. Mobile phones to be banned in French primary schools to limit health
risks. The Times online, May 27. http://www.timesonline.co.uk/tol/news/world/europe/
article6366590.ece
Brown, Tom H. 2008. Mlearning in Africa: Doing the unthinkable and reaching the unreachable.
In International handbook of information technology and primary and secondary education,
Springer International Handbooks of Education, vol. 20, no. 9, ed. Joke Voogt and Gerald
Knezek, 861–871. New York: Springer.
Bulstrode, Mark. 2008. Half of Cambridge students admit cheating. The Independent, October 31.
http://www.independent.co.uk/news/education/education-news/half-of-cambridge-students-
admit-cheating-980727.html
Butgereit, Laurie. 2007. Math on MXit: the medium is the message. In Proceedings 13th annual
national congress of the association of mathematics education of South Africa, White River,
South Africa.
Byron, Tanya. 2008. Safer children in a digital world. The report of the Byron review 2008. http://
publications.education.gov.uk/default.aspx?PageFunction=productdetails&PageMode=public
ations&ProductId=DCSF-00334-2008&
Casati, Roberto. 2009. Learning beyond electrification. Mobile technology offers opportunities for
redesigning the teaching process. Interdisciplines. http://www.interdisciplines.org/mobilea2k/
papers/5
Clark, Andy, and David Chalmers. 1998. The extended mind. Analysis 58: 10–23.
CTGV. 1990. Anchored instruction and its relationship to situated cognition. Educational
Researcher 19(6): 2–10.
Dewey, John. 1997. Experience and education. New York: Free Press.
Dukker, Stephen. 2007. Is the OLPC project doomed to failure? ZDNet, August 07. http://www.zdnet.
co.uk/news/it-strategy/2007/08/07/is-the-olpc-project-doomed-to-failure-39288450/
FAS. 2006. Harnessing the power of video games for learning. Summit on educational games.
http://www.fas.org/gamesummit/Resources/Summit%20on%20Educational%20Games.pdf
Felicia, Patrick. 2009. How are digital games used in schools? Complete results of the study.
European schoolnet. http://games.eun.org/upload/gis-full_report_en.pdf
Floridi, Luciano. 2003. Two approaches to the philosophy of information. Minds and Machines
13(4): 459–469.
Floridi, Luciano. 2004. The Blackwell guide to the philosophy of computing and information.
Malden: Blackwell.
Floridi, Luciano. 2007. A look into the future impact of ICT. The Information Society 23(1):
59–64.
Floridi, Luciano. 2010. The Cambridge handbook of information and computer ethics. Cambridge/
New York: Cambridge University Press.
Gee, James Paul. 2007a. Good video games + good learning: Collected essays on video games,
learning, and literacy. New York: P. Lang.
Gee, James Paul. 2007b. What video games have to teach us about learning and literacy. New York:
Palgrave Macmillan.
Gentile, Douglas A. 2009. Pathological video game use among youth 8 to 18: A national study.
Psychological Science 20: 594–602.
Greenwood, Louise. 2009. Africa’s mobile banking revolution. BBC News, August 12. http://news.
bbc.co.uk/2/hi/8194241.stm
Johnson, Rachel. 2007. A degree in cut and paste. The Times online, March 11. http://www.
timesonline.co.uk/tol/comment/columnists/rachel_johnson/article1496130.ece
Johnson, Lawrence F., et al. 2010. The Horizon report. Austin: The New Media Consortium. http://
wp.nmc.org/horizon2010/
Keating, Candes, and Murray Williams. 2006. Schools seek to ban addictive Mxit. IOLNews, August
23. http://www.iol.co.za/news/south-africa/schools-seek-to-ban-addictive-mxit-1.290620
Korte, Werner B., and Tobias Hüsing. 2007. Benchmarking access and use of ICT in European
Schools 2006. Final Report from Head Teacher and Classroom Teacher Surveys in 27 European
Countries. Elearning Papers, 2, 1. http://www.elearningeuropa.info/files/media/media11563.
pdf
Kozma, Robert. 2008. International handbook of information technology in primary and secondary
education. Berlin: Springer.
Kukulska-Hulme, A. 2009. Will mobile learning change language learning? ReCALL 21(2):
157–165.
Maes, Pattie, and Pranav Mistry. 2009. Unveiling the “Sixth Sense,” game-changing wearable tech.
In TED 2009, Long Beach, CA.
Mitchell, Alice, and Carol Savill-Smith. 2004. The use of computer and video games for learning.
A review of the literature. London: Learning and Skills Development Agency. http://gmedia.
glos.ac.uk/docs/books/computergames4learning.pdf
Mitra, Sugata. 2009. Remote presence: Technologies for ‘beaming’ teachers where they cannot go.
Journal of Emerging Technologies in Web Intelligence 1(1): 55–59.
Mitra, Sugata, and Vivek Rana. 2001. Children and the Internet: Experiments with minimally
invasive education in India. British Journal of Educational Technology 32(2): 221–232.
Mitra, Sugata, et al. 2005. Acquisition of computing literacy on shared public computers: Children
and the ‘hole in the wall’. Australasian Journal of Educational Technology 21(3): 407–426.
Nussbaum, Bruce. 2007. It’s time to call One Laptop Per Child a failure. Businessweek,
September 24. http://www.businessweek.com/innovate/NussbaumOnDesign/archives/2007/09/
its_time_to_call_one_laptop_per_child_a_failure.html
Papert, Seymour. 1980. Mindstorms: Children, computers, and powerful ideas. New York: Basic
Books.
Papert, Seymour. 2004. Entretien avec Seymour Papert. Education et Territoires – Conseil Général
des Landes. http://www.dailymotion.com/video/x5zdl4_seymour-papert-2004_webcam/
Prensky, Marc. 2005. What can you learn from a cell phone? Almost anything. Innovate. Journal
of Online Education 1(5). http://innovateonline.info/pdf/vol1_issue5/What_Can_You_Learn_
from_a_Cell_Phone__Almost_Anything!.pdf
Prensky, Marc. 2006. Don’t bother me mom, I’m learning! How computer and video games are
preparing your kids for twenty-first century success and how you can help! St. Paul: Paragon
House.
Resnick, Mitchell, et al. 2009. Scratch: Programming for all. Communications of the ACM 52(11):
60–67.
Rudd, Peter, et al. 2009. Harnessing Technology Schools Survey 2009 Analysis report. Berkshire:
National Foundation for Education Research. http://research.becta.org.uk/upload-dir/downloads/
page_documents/research/ht_schools_survey08_analysis.pdf
Sharples, Mike. 2005. Learning as conversation: Transforming education in the mobile age.
In Proceedings of the conference on seeing, understanding, learning in the mobile age,
Budapest, Hungary.
The Economist. 2010. Don’t shoot the messenger. America’s president joins a long (but wrong)
tradition of technophobia. May 13. http://the-economist.com/node/16109292/comments
Traxler, John, and Agnes Kukulska-Hulme. 2005. Mobile learning in developing countries.
Commonwealth of Learning. http://www.col.org/SiteCollectionDocuments/KS2005_mlearn.pdf
Trucano, Michael. 2005. Knowledge maps: ICTs in education. Washington, DC: infoDev/World
Bank. http://www.infodev.org/en/Publication.8.html
Trucano, Michael. 2009. Mobile phones: Better learning tools than computers? (An EduTech
debate). EduTech. http://blogs.worldbank.org/edutech/mobile-phones-better-learning-tools-than-
computers-an-edutech-debate-0
Van der Klein, Raimo (Thinkmobile). 2008. The box and beyond. Slideshare. http://www.slideshare.
net/Thinkmobile/the-box-and-beyond-web
Vecchiatto, Paul. 2009. Mxit becomes teachers’ pet. MyDigitalLife, April 20. http://www.mydigitallife.
co.za/index.php?option=com_content&task=view&id=1045673&Itemid=35
Wagner, Daniel, et al. 2005. Monitoring and evaluation of ICT in education. A handbook for
developing countries. Washington, DC: infoDev/World Bank. http://robertkozma.com/images/
ict_ed_ch2_monitoringandeval.pdf
Whitehead, Alfred North. 1929. The aims of education and other essays. New York: Free Press.
Chapter 8
Content Net Neutrality – A Critique
Raphael Cohen-Almagor*
8.1 Introduction
In a recent article, Luciano Floridi (2010a, p. 11) argues that we are now experiencing
the fourth scientific revolution. The first was that of Nicolaus Copernicus (1473–1543),
the first astronomer to formulate a scientifically based heliocentric cosmology that displaced the Earth, and hence humanity, from the centre of the universe. The second
was Charles Darwin (1809–1882), who showed that all species of life have
evolved over time from common ancestors through natural selection, thus displacing
humanity from the centre of the biological kingdom. The third was Sigmund Freud
(1856–1939), who showed that the mind is also unconscious and subject to the defence mechanism of repression, so that we are far from being Cartesian minds
entirely transparent to ourselves. And now, in the information revolution, we are
in the process of dislocation and reassessment of humanity’s fundamental nature
and role in the universe. Floridi argues that while technology keeps growing
bottom-up, it is high time we start digging deeper, top-down, in order to expand
and reinforce our conceptual understanding of our information age: of its nature, its less visible implications, and its impact on human and environmental welfare, and thus
give ourselves a chance to anticipate difficulties, identify opportunities and resolve
problems, conflicts and dilemmas (Floridi 2009, 2010a).
*All websites were accessed during December 2010. I am most grateful to Jacqueline Lipton and
Jack Hayward for their valuable comments.
Raphael Cohen-Almagor (D. Phil., Oxon) is an educator, researcher and human rights activist;
Chair in Politics, University of Hull, UK. To date, he has published 15 books, including two books
of poetry. http://www.hull.ac.uk/rca; http://almagor.blogspot.com/
R. Cohen-Almagor (*)
Department of Politics, University of Hull, Cottingham, UK
Floridi has made many contributions in his attempts to “dig deeper.” In this paper
I would like to focus on some of Floridi’s ideas on information ethics, which he describes as the study of the moral issues arising from the availability, accessibility
and accuracy of informational resources, independently of their format, type and
physical support. He further clarifies that information ethics, understood as information-as-a-product ethics, may cover moral issues arising, for example, in the context of accountability, liability, libel legislation, testimony, plagiarism, advertising, propaganda, and misinformation (Floridi 2008). I wish to add to this list answerability
and responsibility and to focus on these two concepts as well as on accountability.
Answerability is closely related to accountability. The former places more emphasis on the need to respond to external claims, pressures, and demands by providing an explanation for one’s conduct. The accompanying concept of responsibility refers to a person or organization that is able to answer for its own conduct and obligations. When we
speak of social responsibility we refer to the responsibility of individuals, groups,
corporations and governments to society. The difference between responsibility,
on the one hand, and answerability and accountability on the other is that the
first connotes a more voluntary and self-directed character. Responsibilities are
typically accepted, not imposed by force, although they can be contracted and
attributed. In contrast, answerability and accountability have a more external character,
although they can also be voluntary. The more voluntary the conduct is, the more it is compatible with freedom and even coterminous with responsibility. The accountable
person or organization is also answerable.
In other words, responsibility, answerability and accountability complement each
other, the one being an extension of the other (McQuail 2003, p. 306; Tavani 2011,
pp. 119–123). They are designed to improve the quality of the service or product,
promote trust of those who are using the service or product and protect the interests
of all parties concerned, including the business at hand. A business known to be responsible, answerable and accountable for its services and/or products enjoys a solid reputation and may attract more customers. Responsibility, answerability and
accountability are important as sometimes people and organizations seek indepen-
dence from their responsibilities. Ambrose Bierce (1911) described responsibility as
a “detachable burden easily shifted to the shoulders of God, Fate, Fortune, Luck or
one’s neighbor. In the days of astrology it was customary to unload it upon a star”.
In the Internet age, an interesting phenomenon has emerged that confuses the concept of moral and social responsibility. In the offline, real world, people know that they are responsible for the consequences of their conduct, speech as well as action. In the online, cyber world, we witness a shaking off of responsibility. You can assume your dream identity, and then anything goes. The Internet has a disinhibition effect. Its freedom allows language one would dread to use in real life, words one need not abide by, imagination that trumps conventional norms and standards. It is high
time to bring to the fore discussion about morality and responsibility. My discussion
focuses upon the concept of net neutrality.
In his recent book, The Philosophy of Information, Floridi (2010c) addressed the
issue of the truthfulness of data, which he termed alethic neutrality. I, in turn, wish
The issue of responsibility of ISPs and host companies is arguably the most intriguing
and complex. Their actions and inactions directly affect the information environ-
ment. An Internet Service Provider (ISP) is a company or other organization that
provides a gateway to the Internet, usually for a fee, enabling users to establish
contact with the public network. Many ISPs also provide e-mail service, storage
capacity, proprietary chat rooms, and information regarding news, weather, banking
or travel. Some offer games to their subscribers. A Web Hosting Service (WHS) is a
service that runs various Internet servers. The host manages the communications
protocols and houses the pages and the related software required to create a website
on the Internet. The host machine often uses the Unix, Windows, Linux, or Macintosh
operating systems, which have the TCP/IP protocols built in (Gralla 2007, p. 173).
It is generally agreed in both the United States and Europe that the access provider
should not be held responsible for the contents of messages. In Europe, this has
been codified in the E-Commerce Directive of the European Union as well as the
German Teleservices Act. In the United States, so-called “common carrier” provisions
allow certain carriers of communications to carry all manner of traffic without liability.
American courts tend to hold that ISPs are not liable for content posted on their servers, under Section 230(c)(1) of the Communications Decency Act (1996) (the “Good Samaritan” provision, to be discussed infra). More recently, Congress granted
limited immunity to access providers for violations of copyright law in the Digital
Millennium Copyright Act (National Research Council 2001, pp. 119–120).
WHSs, however, are a different story. A host provider may be a portal or a
proprietary service that gathers in one place a large amount of third-party content
for user access. Being closer to a virtual forum site or bazaar than to a postal system, it
provides Web space, helps its subscribers find material more easily, and establishes
“bulletin boards” and e-mail services. Generally, the host provider does not have
anything to do with the content placed on the server, but a good deal to do with its
organization in the “marketplace” (National Research Council 2001, p. 120).
Because the host provider offers more than a connection service, the question of
liability is more complicated. Legal systems have to determine when the value
added by the host provider’s services begins to make it look less like an access
provider and more like a content provider. The task is made all the more difficult as
new technologies create new business opportunities for inventive entrepreneurs,
and the services offered by host providers change. It is unlikely that a simple or
permanent resolution to this question will become available soon (National Research
Council 2001, p. 120).
Yahoo! has terms of service that forbid users to “upload, post, email or otherwise transmit any Content that is unlawful, harmful, threatening, abusive, harassing, tortious, defamatory, vulgar, obscene, libellous, invasive of another’s privacy, hateful,
or racially, ethnically or otherwise objectionable” (http://uk.docs.yahoo.com/info/
terms.html). However, if such content is not removed by the ISP, neither it nor
its partners assume any liability. In the United States, the guiding principle inspired
by the First Amendment and the special status that freedom of expression enjoys is that of net neutrality. The underlying belief is that the Internet should remain an open
platform for innovation, competition, and social discourse, free from unreasonable
discriminatory practices by network operators. All content, sites, and platforms
should be treated equally, free of any value judgment. In justifying this philosophy,
American new media experts explain that the Internet was built and has thrived
as an open platform, where individuals and entrepreneurs are able to connect and
interact, choose marketplace options, and create new services and content on a level
playing field. Richard Whitt, Google’s Washington Telecom and Media Counsel,
writes that “No one seems to disagree with that fundamental proposition,” arguing
for the need to “protect that unique environment” and supporting the adoption of
“rules of the road” to ensure that the broadband on-ramps to the net remain open and
robust (Whitt 2009). Jack Balkin, from Yale Law School, said that the open Internet
is crucial to freedom of speech and democracy because it allows people to actively participate in decentralized innovation, to form new digital networks, and to enjoy freedom from prior government constraints. People can reach all audiences and find
a way around gatekeepers with great new tools and applications (Naoum 2009).
I wish to take issue with these arguments: with net neutrality, and with the claim that the Internet environment is so uniquely a public domain that any speech should be freely available on it. I argue that some value-based screening of content may be worthwhile, and that the implications of affording the Internet the widest possible scope can be very harmful. Contra Balkin, I think that limitless
freedom of speech might undermine democracy and bring about its destruction.
Indeed, one of the dangers we face is that the principles that underlie and characterize
the Internet might undermine freedom. Because the Internet is a relatively young
phenomenon, people who use and regulate it lack experience in dealing with the pitfalls involved in its workings. Freedom of expression should be respected as long as it
does not imperil the basic values that underlie our democracy. Freedom of expression
is a fundamental right, an important anchor of democracy; but it should not be used
Net neutrality is one of the core principles of the Internet. In October 2009, a group
of the world’s largest Internet companies wrote a letter of support to the US Federal
Communications Commission (FCC). The letter is the latest in an ongoing debate
about “network neutrality” – or how data is distributed on the web. The letter, signed
inter alia by the chief executives of Google, eBay, Skype, Facebook, Amazon, Sony
Electronics, Digg, Flickr, LinkedIn and Craigslist, says that maintaining data
neutrality helps businesses to compete on the basis of content alone: “An open
internet fuels a competitive and efficient marketplace, where consumers make
the ultimate choices about which products succeed and which fail… This allows
businesses of all sizes, from the smallest start-up to larger corporations, to compete,
yielding maximum economic growth and opportunity” (BBC Reporter 2009).
This is yet another step in a sustained and, until now, quite successful effort to grant
Internet companies the widest possible freedom and independence to conduct their
affairs in a way that best serves their commercial interests. Their responsibility, as
these large companies see it, is to provide their customers with efficient service.
Net neutrality is also about the organization of the Internet. No one application
(WWW, email, messenger) is preferred to another. All applications should be treated
by Internet intermediaries equally. Information providers – which may be websites,
online services, etc., and who may be affiliated with traditional commercial
enterprises but who also may be individual citizens, libraries, schools, or nonprofit
entities – should have essentially the same quality of access to distribute their offer-
ings. “Pipe” owners (carriers) should not be allowed to charge some information
providers more money for the same pipes, or establish exclusive deals that relegate
everyone else (including small noncommercial or startup entities) to an Internet
“slow lane.” This principle should hold true even when a broadband provider is
providing Internet carriage to a competitor.1 With this I agree. The public is interested
in having a neutral platform that supports innovations and the emergence of the best
technological applications.
1 "Network Neutrality," American Library Association, http://www.ala.org/ala/issuesadvocacy/telecom/netneutrality/index.cfm
156 R. Cohen-Almagor
However, the American Library Association also holds that the principle of
net neutrality maintains that consumers/citizens should be free to get access to –
or to provide – the Internet content and services they wish, and that consumer
access should not be regulated based on the nature or source of that content or
service.2 Similarly, the Norwegian Post and Telecommunications Authority (NPT)
holds that netusers are entitled to an Internet connection that enables them to
send and receive content of their choice, as well as to an Internet connection that is
free of discrimination with regard not only to type of application and service
but also to content.3
The part of net neutrality that concerns content I find far more complicated
and problematic. It should be separated from the principle of net neutrality. I call it
content net neutrality.
Content net neutrality holds that we should treat all content that is posted on
the Internet equally. ISPs and WHSs should not privilege or in one way or another
discriminate between different types of content. Now, it is unclear what the
implications of such a view are. One possible implication against which content
net neutrality warns is that a specific search engine might pay ISPs fees to ensure
that responses from its Web site would be delivered to the user faster than the
results from a competing search engine that had not paid special fees. Another
possible wrong implication against which we all protest is that an ISP might
accord a lower priority to packets transmitting, say, video feeds – unless the
customer were to pay a special fee for higher-speed access. The most alarming
scenarios involve outright blockage of content by source or by type. An example
of blockage by source often cited in news stories is that of the Canadian ISP
Telus, which blocked subscribers’ access to a Web site of the Telecommunications
Workers Union, with which it was in conflict (Kabay 2006). Labour disputes
should never constitute grounds for content discrimination. The example of
type-based blocking most mentioned in the debate is that of the telecommunications
provider Madison River, which blocked voice over IP (VoIP) traffic from
Vonage as an anticompetitive move to protect its own conventional long-distance
telephony service.4
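The kind of type-based block just described can be sketched schematically. The sketch below is illustrative only: the port numbers, packet shape and rule are my assumptions, not a reconstruction of Madison River's actual configuration. SIP signaling, used by VoIP services such as Vonage, conventionally runs on port 5060.

```python
# Schematic of type-based blocking at an ISP: classify packets by destination
# port and drop a disfavored class. Real carriers use deep-packet inspection,
# not just port numbers; this is a minimal illustration.
BLOCKED_PORTS = {5060, 5061}  # conventional SIP (VoIP signaling) ports

def forward(packet: dict) -> bool:
    """Return True if the packet may pass, False if it is dropped."""
    return packet["dst_port"] not in BLOCKED_PORTS

traffic = [
    {"src": "10.0.0.5", "dst_port": 443},   # HTTPS - passes
    {"src": "10.0.0.5", "dst_port": 5060},  # VoIP signaling - dropped
]
passed = [p for p in traffic if forward(p)]
```

A few lines of configuration suffice to disable a competitor's entire service class, which is why this form of discrimination is so easy, and so tempting, for a carrier.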
This kind of brute discrimination, motivated by narrow economic interests, is
also illegitimate. Such incidents demonstrate the skewed incentives that ISPs
might have in controlling content and applications. The present debate is about
the extent that ISPs should be allowed to control the size of the pipes: Can ISPs
actively control the bandwidth available to certain websites based on the type of
content they provide, thus influencing the Internet speed available to netusers?
2 Ibid.
3 Network Neutrality – Guidelines for Internet Neutrality (Post-og teletilsynet, February 24, 2009).
4 "FCC Chairman Michael K. Powell Commends Swift Action to Protect Internet Voice Services," Federal Communications Commission News (March 3, 2005), at http://tinyurl.com/hscav
8 Content Net Neutrality – A Critique 157
Tim Wu helps us understand the logic behind net neutrality by arguing that a useful
way to understand the principle is to look at other networks, like the electric grid,
which are implicitly built on a neutrality theory. The general-purpose and neutral
nature of the electric grid is one of the things that make it extremely useful. The
electric grid does not care whether you plug in a toaster, an iron, or a computer.
Consequently it has survived and supported giant waves of innovation in the
appliance market. The electric grid worked for the radios of the 1930s and it works
for the flat-screen TVs of the 2000s. For that reason the electric grid is a model of a
neutral, innovation-driving network.5
However, does this mean that, just as you do not expect to control the content of
the electric grid, so you should not aim to control the Internet's content? If this
is the intended deduction, then the comparison is misleading. The electric grid
transmits power that enables the functioning of electric equipment. It carries no
content, messages, propaganda, instructions, or means to abuse or harm you.
The Internet, on the other hand, has all of this. As Floridi (2010a, p. 13) rightly writes, a
digital interface is a gate through which a user can be present in cyberspace.
The electric grid gives you nothing about which to form subjective notions. The
Internet, which contains the best and the worst products of its users, may lead you
to develop subjectivity. The Internet contains the power to influence your life in
constructive and destructive ways. As thinking people who are able to differentiate
between right and wrong, good and evil, and as morally responsible beings, we must
discriminate between contents. We cannot be neutral about it if we wish to continue
leading free, autonomous lives. The only meaningful aspect of the comparison between
the Internet and the electric grid is that in both we insist on some measures to
assure our security. These measures do not need to include subjectivity when
we consider the electric grid. They do require subjectivity when we consider the
Internet. Ethics requires us to care about the consequences of our actions and to
take responsibility for them. As Floridi and Sanders (2005, pp. 195–196) rightly
note, ethics is about constructing the world, improving its nature, and shaping its
development in the right way.
In testimony before the House Committee on the Judiciary's Telecom & Antitrust
Task Force, Wu (2006) said that the "instinct" behind protecting consumers' rights
on the network is very simple: let customers use the network as they please. With
due appreciation for instincts, which often serve as good guides for conduct, they
are by nature not thoughtful. Sometimes, after reflecting and pondering, we act
against our instincts,
for good reasons. I think there are ample reasons to doubt whether allowing customers
to use the network as they please is a good policy to follow. While the majority of
netusers appreciate this policy and would not abuse it, some people might opt for
abuse. We should respect the users and protect ourselves against the abusers.
5 Tim Wu, "Network Neutrality FAQ," at http://timwu.org/network_neutrality.html
In an earlier article, however, Wu (2003) explains that the basic principle
behind a network anti-discrimination regime is to give users the ability to use
non-harmful network attachments or applications, and provide innovators the
corresponding freedom to supply them. ISPs should have the freedom to reasonably
control their network (“Police what they own”) and, at the same time, the Internet
community should view with suspicion restrictions premised on inter-network
criteria (Wu 2003, pp. 142, 145).
What does "reasonably control their network" mean? First, ISPs prohibit netusers
from using applications or engaging in conduct that could harm the network or
other netusers.
For instance, Akamai Acceptable Use Policy states: “Customer shall not use the
Akamai Network and Services to transmit, distribute or store material that contains
a virus, worm, Trojan horse, or other component harmful to the Akamai Network
and Services, any other network or equipment, or other Users.”6
Blocking denial-of-service attacks and spam also falls within what we perceive as
legitimate network management.
Second, some companies market equipment aimed at facilitating application-based
screening and control for broadband networks. Companies like Check Point
Enterprise7 and Symantec Gateway Security8 provide traffic-management features
with highly-developed security-management tools. Allot Communications provides
facilities to manage traffic and produces a fully integrated, carrier-class platform
capable of identifying the traffic flows of individual subscribers.9
Packeteer tracks links and provides statistics per application – including peak and
average utilization rates (down to 1 min), bytes, availability, utilization, top talkers
and listeners, network efficiency and frames. It monitors use and performance
through proactive alarming and exception reporting or through comprehensive
central reporting tools.10
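The per-application reporting described above can be approximated in a few lines. The sketch below tallies bytes per application in one-minute windows and derives average and peak bit rates; the flow-log record format is a hypothetical assumption for illustration, not Packeteer's actual data model.

```python
from collections import defaultdict

def per_app_stats(flow_log):
    """flow_log: iterable of (app, timestamp_s, n_bytes) records.
    Returns per-application byte totals plus average and peak bit rates,
    computed over one-minute windows."""
    minutes = defaultdict(lambda: defaultdict(int))  # app -> minute index -> bytes
    for app, ts, n in flow_log:
        minutes[app][int(ts // 60)] += n
    stats = {}
    for app, buckets in minutes.items():
        total = sum(buckets.values())
        span_s = (max(buckets) - min(buckets) + 1) * 60  # seconds covered
        stats[app] = {
            "bytes": total,
            "avg_bps": 8 * total / span_s,            # average over the whole span
            "peak_bps": 8 * max(buckets.values()) / 60,  # busiest single minute
        }
    return stats

# Example: two web flows in minute 0, one VoIP flow in minute 1.
report = per_app_stats([("web", 0, 600), ("web", 30, 600), ("voip", 70, 120)])
```

The point of such tooling for the present argument is simply that per-application visibility is cheap and routine: once an ISP can see traffic by application, it can also discriminate by application.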
Third, ISPs may prohibit netusers from inflicting harm on others by posting or
promoting crime-facilitating speech designed to encourage harmful conduct. The
effort, writes Wu (2003, p. 168) quite rightly, is to strike a balance: prohibiting
ISPs, absent a showing of harm, from restricting what netusers do with their Internet
connection, while giving them general freedom to manage bandwidth consumption.
This non-discrimination principle works by recognizing a distinction between local
network restrictions, which are generally allowable, and inter-network restrictions
which are suspect. The effort is to develop forbidden and permissible grounds for
discrimination in broadband usage restrictions. Wu has in mind illegal activities. I
argue that ISPs and WHSs should also consider prohibiting hate speech, which is
legal in the USA. While racist Nazi speech is protected under the First Amendment,
the same speech is not protected in most European countries. Morally speaking,
such speech is repugnant.
6 "Acceptable Use Policy," http://www.akamai.com/html/policies/acceptable_use.html
7 http://www.checkpoint.com/products/enterprise/
8 http://www.symantec.com/avcenter/security/Content/Product/Product_SGS.html; http://www.symantec.com/business/products/allproducts.jsp
9 http://www.allot.com/index.php?option=com_content&task=view&id=2&Itemid=4
10 http://www.packeteer.com/solutions/visibility.cfm
8.3.2 Anti-perfectionism
Conceptually, both net neutrality and content net neutrality emphasize diversity and
plurality. Diversity entails openness and more opportunities for living a valuable
and richer life. Pluralism is perceived as indispensable for having the potential for a
good life. Methodologically, the idea of neutrality is placed within the broader
concept of anti-perfectionism. The implementation and promotion of conceptions
of the good, though worthy in themselves, are not regarded as a legitimate matter for
governmental action. The fear of exploitation, of some form of discrimination, leads
to the advocacy of plurality and diversity. Consequently, ISPs and WHSs are not to
act in a way that might favour some ideas over others. ISPs and WHSs ought to
acknowledge that every person has her own interest in acting according to her
own beliefs; that everyone should enjoy the possibility of entertaining alternative
considerations; and that there is no single belief about moral issues and values that
should guide all; therefore, each has to enjoy autonomy and to hold her ideals freely.
The concept of anti-perfectionism comprises the “political neutrality principle”
and the “exclusion of ideals” doctrine (Cohen-Almagor 1994). The “political neutrality
principle” holds that ISPs and WHSs’ policies should seek to be neutral regarding
ideals of the good. It requires them to make sure that their actions do not help
acceptable ideals more than unacceptable ones; to see to it that their actions will not
hinder the cause of false ideals more than they do that of true ones. The "exclusion
of ideals" doctrine does not tell ISPs and WHSs what to do. Rather, it forbids
them to act for certain reasons. The doctrine holds that the fact that some
conceptions of the good are true or valid should never serve as justification for
any action. Neither should the fact that a conception of the good is false, invalid,
unreasonable or unsound be accepted as a reason for political or other action.
The doctrine prescribes that ISPs and WHSs refrain from using anyone's conception
of the good as a reason for action. They are not to hold partisan (or non-partisan)
considerations about human perfection to foster social conditions (Raz 1986,
pp. 110–111).
11 Comments of the Motion Picture Association of America, Inc., in response to the Workshop on the Role of Content in the Broadband Ecosystem, before the Federal Communications Commission, Washington, DC 20554, In the Matter of a National Broadband Plan for Our Future (October 30, 2009).
Advocates of content net neutrality, in their striving to convince us of the necessity
of the doctrine, convey the assumption that the decision regarding the
proper policy is crucial because of its grave consequences. Content net neutrality
entails pluralism, diversity, freedom, public consensus, non-interference, vitality,
etc. If we do not adhere to neutrality, then we might be left with none of these
virtues. This picture leads to the rejection of subjectivity (or perfectionism), whereas
this essay suggests a rival view that situates policies on a continuum between
strict perfectionism, on the one hand, and complete neutrality on the other.
The policy to be adopted does not have to be either the one or the other.
It could well take the middle ground, allowing plurality and diversity without resorting
to complete neutrality; involving some form of perfectionism without resorting to
coercion. For perfectionism does not necessarily imply exercise of force, nor does
it impose the values and ideals of one or more segments of society on others, or
strive to ensure uniformity, as neutralists fear. On this issue my view comes close
to that of Joseph Raz (1986). I call his view the Promotional Approach (PA).
8.4.1 Terror
One of the gravest threats we face today is terrorism. Presently more than
40 active terrorist groups maintain an established presence on the Internet,
with hundreds of websites worldwide. These websites use slogans to catch attention,
often offering items for sale (such as T-shirts, badges, flags, and video or audio
cassettes). Frequently the websites are designed to draw local supporters, providing
information in a local language and giving information about the activities of a
local cell as well as those of the larger organization. The website is, thus, a
recruiting tool as well as a basic educational link for local sympathizers and
supporters (Combs 2006, p. 139).
The Internet is the single most important factor in transforming largely local
jihadi concerns and activities into the global network that characterizes al Qaeda
today (Atwan 2006, p. 124). The sheer accessibility of cyber warfare capabilities
to tens, perhaps hundreds, of millions of people is a development without historical
precedent. Thus the ethical dimensions of acts of war and terror conducted by
networks of individuals, operating via the virtual realm, might become just as
important as the considerations for nation-states (Arquilla 2010).
Unfortunately, many of the terrorist websites are hosted by servers in the western
world. In the United States alone, al Qaeda has received funds from numerous social
charities based on American soil. Some of the message boards and the "information
hubs” where terrorists post texts, declarations, and recordings are often included
in the “communities” sections of popular Western sites such as Yahoo!, Lycos,
and others (Wright 2004). The concept of content net neutrality, which rejects any
responsibility for content, facilitates this phenomenon. However, overconfidence,
arrogance, dismissiveness, laziness, dogmatism, incuriosity or self-indulgence are
no justification or excuse. The Internet is not outside the democratic realm. ISPs
and WHSs are a necessary part of it. They also know that democracy and terrorism
are mutually exclusive. A zero-sum game exists between them. The victory of one
comes at the expense of the other. Therefore, if the spirit and ideas of democracy
are dear to ISPs and WHSs, and if they wish the democracy that enables their
operation to prevail, they cannot shield themselves under the concept of content net
neutrality. It is necessary to take sides, distinguishing good from evil, adopting PA.
However, many Internet experts believe that all they need to do is provide
the structure and the rest is up to the public. They preach content net neutrality,
which amounts to ignorance. Such ignorant neutrality is aethical at best and
unethical at worst. Let me say something about their belief and conduct. I think all humane
people perceive bombing civilian targets – be they buses, trains, airplanes, shopping
malls, buildings – as immoral, wrong, wicked, and odious. We also think that
these views are true, i.e., in this case we might be sufficiently confident to say that
we know they are true, and that people who disagree are making a bad mistake. We
think, moreover, that our opinions are not just subjective reactions to the idea of
indiscriminate massacre of innocent lives, but opinions about its actual moral character.
We think that it is an objective matter – a matter of how things really are – that
terrorism is wrong and wicked. This claim that I am advancing now – that terrorism
is objectively wrong – is equivalent to the claim that terrorism would still be wrong
even if no one thought it was. That is another way of emphasising that terrorism
is plainly wicked, not wicked only because people think it is so (Dworkin 1996,
pp. 92–98). Therefore, advancing content net neutrality at the expense of social
responsibility serves wicked aims that undermine the platform people wish to
protect and the society that promotes the democratic spirit in which they thrive.
Terrorism, I trust, is not a contested issue. Hate speech, however, is contested and in
the United States is protected under the First Amendment. Morally speaking, it is
repugnant speech. Hate is a social evil that offends the two most basic principles
that underlie any democratic society: Respecting others and not harming others.
Generally speaking, hate is derived from one form or another of racism and modern
racism has facilitated and caused untold suffering. It is an evil that has taken
catastrophic proportions in all parts of the world. Notorious examples include
Europe under Nazism, Yugoslavia, Cambodia, South Africa and Rwanda. Elsewhere
I argued that in hate messages, members of the targeted group are characterized
as devoid of any redeeming qualities and are innately evil. Banishment, segregation
and eradication of the targeted group are proposed to save others from the harm
being done by this group. By using highly inflammatory and derogatory language,
with the tone of extreme hatred and contempt, and through comparisons to and
associations with animals, vermin, excrement and other noxious substances, hate messages
dehumanize the targeted groups (Cohen-Almagor 2010).
Hate messages undermine the dignity and self-worth of the targeted group members
and they erode the tolerance and open-mindedness that must flourish in democratic
societies committed to the ideas of pluralism, justice and equality. Furthermore,
hate speech might lead to hate crimes. Benjamin Smith and Richard Baumhammers
are two Aryan supremacists who in 1999 and 2000 respectively went on racially
motivated shooting sprees after being exposed to Internet racial propaganda. Smith
regularly visited the website of the World Church of the Creator, a notorious racist
and hateful organisation.12 He said: "It wasn't really 'til I got on the Internet, read
some literature of these groups that… it really all came together… It’s a slow, gradual
process to become racially conscious” (Wolf 2004). Rabbi Abraham Cooper
(1999) of the Wiesenthal Center argued that the Internet provided the theological
12 For information on 'World Church of the Creator', see http://www.volksfront-usa.org/creator.shtml; http://www.nizkor.org/hweb/orgs/american/adl/cotc/; http://www.reed.edu/~gronkep/webofpolitics/fall2001/yagern/creator.html; http://www.adl.org/poisoning_web/wcotc.asp; http://www.apologeticsindex.org/c171.html
of the material hosted by Fairview. Rather than sign the contract proposed by
BC Tel for renewal, Fairview gave up providing Internet service (Howard 1998; Matas 2009).
Many other ISPs, WHSs and social networks take a PA responsible stance against
hate, barring blatant expressions of bigotry, racism and/or hate.13 Facebook, the
largest social networking site with more than 845 million users,14 prohibits posting
content that is hateful or threatening.15 XOOM.com of San Francisco, California,
bans "hate propaganda" and "hate mongering."16 Lycos's Terms of Service forbid users to
“Upload, post, e-mail, otherwise transmit, or post links to any Content, or select any
member or user name or e-mail address, that is unlawful, harmful, threatening,
abusive, harassing, tortuous, defamatory, vulgar, obscene, pornographic, libelous,
invasive of privacy or publicity rights, hateful, or racially, sexually, ethnically or
otherwise objectionable.”17 Fortunecity requires its users to agree to “not upload,
post, email, transmit or otherwise make available (collectively, ‘Transmit’) any Content
that is unlawful, harmful, threatening, abusive, harassing, tortuous, defamatory,
vulgar, obscene, libelous, invasive of another’s privacy, hateful, or racially, ethnically
or otherwise objectionable.”18
In this context, let me mention that the U.S. Congress passed the "Good
Samaritan" provision, included in the 1996 Communications Decency Act (section
230(c)(2)), which protects ISPs that voluntarily take action to restrict access to
problematic material: "No provider or user of an interactive computer service shall be
held liable on account of – (A) any action voluntarily taken in good faith to restrict
access to or availability of material that the provider or user considers to be obscene,
lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable,
whether or not such material is constitutionally protected.”19
One may ask how we decide whether something on the Internet
is terroristic or hateful. Many cases are quite straightforward. But there might be
more obscure or contested cases. One solution is to approach law enforcement
agencies or the courts. This surely would be a long and costly process. An alternative is
to seek online arbitration. Online arbitration is a private dispute resolution process
that involves the intervention of a neutral decision maker, namely the arbitrator, who
listens to both parties’ arguments and renders a decision that is binding on them.
Compared with a court procedure, arbitration is faster, cheaper and also confidential.
Online arbitration is increasingly appreciated by companies active on the Internet
because it is more rapid and less costly than legal proceedings or classical arbitration.
13 For instance, Atlas Systems, http://www.atlas-sys.com/products/aeon/policy.html; Elluminate Online Services, http://www.elluminate.com/license_agreement.jsp; Evehosting.co.uk; Host2Host, http://host2host.com/contract.htm
14 http://www.facebook.com/press/info.php?statistics
15 http://www.facebook.com/terms.php?ref=pf
16 ADL, Combating Extremism in Cyberspace (2000): 11.
17 http://info.lycos.com/tos.php
18 https://secure.fortunecity.com/order/register/agreement.php?siteid=55527
19 CDA 47 U.S.C. at http://www4.law.cornell.edu/uscode/47/230.html
Everything is done online: the claimant fills out a form on the Cyber Tribunal site,
and the form is then sent to the other party. If the other party agrees to participate in
arbitration, he or she is asked to respond to the claim. When they undertake
arbitration, the parties agree to comply with the award, whatever its decision. In case
of non-compliance and in accordance with applicable laws and treaties, the injured
party can obtain enforcement of the award (Katsh and Rifkin 2001).20
8.5 Conclusions
Luciano Floridi (2001) argues that the ethical use of information and
communication technologies and the sustainable development of an equitable information
society need a safe and public infosphere for all, where communication and
collaboration can flourish coherently with the application of human rights and the
fundamental freedoms in the media. Sustainable development means that our interest
in the sound construction of the infosphere must be associated with an equally
important, ethical concern for the way in which the latter affects and interacts
with the physical environment, the biosphere and human life in general, both
positively and negatively (Floridi 2001, pp. 18–19). Ethical behavior considers
the consequences of one’s actions, and it is about being accountable for them.
Information professionals cannot be neutral regarding content, as such behavior is
irresponsible and unprofessional. They have a prima facie moral duty to provide
stakeholders with a certain level of security.
Ethics, Floridi (2010b) rightly notes, is not only a question of dealing morally
with a given world. It is also a question of shaping the world for the better. This is
a proactive approach which perceives agents as world owners, creators, game
designers, producers of moral goods and evils, providers, hosts. Accordingly, ISPs
should be able to plan and initiate action responsibly, in anticipation of future events,
in an attempt to control their course by making something happen, or by preventing
something from happening.
Moreover, I have argued that the Internet is a form of new media, but it is still a
medium. It is not reasonable to prohibit certain expressions in print and allow the
same objectionable expressions electronically. We cannot be neutral with regard to
conduct which falls within the parameters of harming others; otherwise the dangers
to democracy, to our fellow citizens, to the moral basis of society, and to values which
we hold dear might be too grave.
We need to take into account the temper of the time. The level of tolerance is in
flux. What is needed is to evoke awareness as to abuse of the Internet for promoting
anti-social, criminal activities and the appropriate ways to counter those activities.
The discussion, no doubt, will continue for many years to come.
20 See, for instance, CyberTribunal II at http://www.cybertribunal.org/index.en.html; net-ARB at http://www.net-arb.com/; WPO at http://www.wipo.int/amc/en/arbitration/online/index.html
References
Motion Picture Association of America Inc. 2009. Comments in response to the workshop on the
role of content in the broadband ecosystem. Before the Federal Communications Commission,
Washington, DC 20554. In the Matter of a National Broadband Plan for Our Future, October
30, 2009.
Naoum, C. 2009. Web content producers favor net neutrality, reject regulation of search engines.
BroadbandBreakfast.com, December 16.
National Research Council. 2001. Global networks and local values: A comparative look at
Germany and the United States. Washington, DC: National Academy Press.
Network neutrality – Guidelines for Internet neutrality. 2009. Post-og teletilsynet, February 24, 2009.
Network neutrality. American Library Association. http://www.ala.org/ala/issuesadvocacy/telecom/
netneutrality/index.cfm
Oliver Chronicle, July 24, 1996.
Raz, J. 1986. The morality of freedom. Oxford: Clarendon.
Sohn, G.B. 2009. Content and its discontents: What net neutrality does and doesn’t mean for
copyright. Yale Information Society Project, Yale Law School, New Haven, October 27, 2009.
http://www.publicknowledge.org/node/2740
Tavani, Herman T. 2011. Ethics and technology: Controversies, questions, and strategies for ethical
computing. Hoboken: Wiley.
Trans Atlantic Consumer Dialogue. 2008. Resolution on net neutrality. DOC No. INFOSOC 3608.
Whitt, R. 2009. Time to let the process unfold. Google Public Policy Blog, October 22. http://
googlepublicpolicy.blogspot.com/2009/10/time-to-let-process-unfold.html
Wolf, C. 2004. Regulating hate speech qua speech is not the solution to the epidemic of hate on the
Internet. In OSCE meeting on the relationship between Racist, Xenophobic and Anti-Semitic
Propaganda on the Internet and hate crimes, Paris, June 16–17, 2004.
Wright, L. 2004. The terror web. The New Yorker, August 2.
Wu, T. 2003. Network neutrality, broadband discrimination. Journal of Telecommunications and
High Technology Law 2: 141–179.
Wu, T. 2006. Testimony, hearing on "Network neutrality: Competition, innovation, and
nondiscriminatory access." House Committee on the Judiciary, Telecom & Antitrust Task Force.
Wu, T. Network neutrality FAQ, at http://timwu.org/network_neutrality.html
Chapter 9
Information Science and Philosophy
of Information: Approaches and Differences
It is usually accepted that Information Science (IS) has its early origins in
Documentation, conceived and implemented by Paul Otlet and Henri La Fontaine at
the end of the nineteenth century. However, the designation "Information Science"
only appeared at the end of the 1950s in close connection to scientific and technical
information, which was growing strongly at that time (Rayward 1997; Saracevic
1996; Shera and Cleveland 1977; Silva and Ribeiro 2002; Williams et al. 1997).
This new field of study and work developed alongside the traditional areas –
Archivistics and Librarianship – which emerged as scientific disciplines in the mid-
nineteenth century, in the framework of Historicism and Positivism, but with an
“auxiliary” status to History and characterized by high-level erudition.
A gradual technological revolution, initiated with the telegraph, the telephone,
the typewriter, the wireless set, cinema and photography, was at the origin of new
forms of communication and new information media, different from the traditional
paper format. Thus, new documents – graphic, sound and audiovisual – were
produced and joined books, journals and manuscripts, giving rise to thought
and reflection which differed from the norm. Paul Otlet and Henri La Fontaine
shared such concerns and searched for the foundations for a new area that they
called "Documentation".1 This field of work did not mean a break with the ways of
viewing and doing of the traditional disciplines, but put the emphasis on the technical
aspects of processing documents and the organization of services, in order to
improve access to and the use of information.
1 The major expression of their work appears in Paul Otlet's Traité de Documentation, published in 1934. A translation of this book was published some years ago by Universidad de Múrcia (Otlet 1996). On Paul Otlet's work, see for instance Day (1997).
A.M. da Silva (*) • F. Ribeiro
Faculty of Arts and Humanities, University of Porto/CETAC.MEDIA, Porto, Portugal
e-mail: malheiro@letras.up.pt
The growth of IS, as a continuation of Documentation and the expansion of its
technical aspects, accompanied technological development and has taken place in
close connection with scientific and technical information since 1958.2 Concerns
with its definition and theoretical foundations quickly arose. In fact, over the last
half century, the evolution of IS has been quite significant as regards its scientific
consolidation, above all in the academic sphere. As testimony to this growth we can
mention the proliferation of undergraduate programmes and advanced studies
(masters and doctorates) all over the world, but with great emphasis in Europe
and the USA, as well as the emergence of several journals connected to universities
and research groups that involve teachers and researchers from academic institutions
in the majority of the countries.
The technological revolution of recent decades and society's immersion in the
information phenomenon, today completely linked to digital media, provoked
profound changes in the IS field, because of the urgency of providing answers to new
problems and challenges, whose solutions demanded increasingly consistent
theoretical and methodological groundings, able to support applied research and
intervention in diverse organizational and social contexts. But, in spite of the
quick growth of IS, scientific consensus as to its nature and identity is, still
today, a problem, because its disciplinary construction did not occur at the same
time and in the same way across all countries and contexts; consequently, its
degree of development varies significantly, which makes unitary thinking on the
disciplinary field itself quite difficult.
These constraints, however, do not prevent us from taking a clear position as to the scientific nature and identity of IS, which may be understood as a contribution to the epistemological, theoretical and methodological foundation of this field of knowledge.
The perspective we defend, and have attempted to consolidate over the last decade at the University of Porto, assumes IS as a unitary yet transdisciplinary field of knowledge, included in the overarching area of the human and social sciences, which gives theoretical support to applied disciplines such as Librarianship, Archivistics, Documentation and some aspects of Technological Information Systems. The way in which we see the cartography of the IS scientific field at the University of Porto was explained in an epistemological work published in 2002, and represented in a diagram that supported the education model developed in the undergraduate and master’s curricula taught at the University of Porto
2 Anthony Debons states that, before 1958, the term information science rarely appeared in specialized literature (Debons 1986); according to Shera and Cleveland, the event that marked the transformation of documentation into IS was the International Conference on Scientific Information, held in Washington in 1958, the result of cooperation between the ADI, the FID, the National Academy of Sciences and the National Research Council. This meeting brought together the greatest names in documentation worldwide (Shera and Cleveland 1977).
9 Information Science and Philosophy of Information: Approaches and Differences 171
(Silva and Ribeiro 2002). Later on, this diagram was redesigned and improved in the context of another theoretical work (Silva 2006) and is presented below:
[Figure: diagram of the IS scientific field, which also encompasses literary and artistic studies]
2. integrated dynamically – the informational act is involved with, and results from,
conditions and circumstances both internal and external to that action
3. has potentiality – a statement (to a greater or lesser extent) of the act which
founded and modelled the information is possible
4. quantifiable – linguistic, numeric or graphic codification is capable of quantification
5. reproducible – information can be reproduced without limit, enabling, therefore,
its subsequent recording/memorization
6. transmissible – informational (re)production is potentially transmissible or
communicable.
These six properties, and especially the last two, characterize information, not
only as a phenomenon but also as a process. In this second dimension we include the
idea of information behaviour, as well as all the activities related to the creation,
organization, representation, storage, retrieval and use of information. Thus, infor-
mation comprises the core (single and cross-disciplinary) of an academic field,
which is itself dynamic and closely interrelated with other disciplines, as the diagram
in the Appendix demonstrates.
The assumption of social information as the object of knowledge has wide-
ranging and unexpected implications. The main one is the emergence of a scientific-
informational paradigm, shaped by the following factors:
(a) the value of information (and not the medium on which it is recorded) as a
human and social phenomenon/process, with its own historicity (organic and
contextual) and its cultural importance;
(b) the statement of the natural and continuous dynamism of information in oppo-
sition to documental immobility;
(c) the impossibility of keeping the traditional divisions of information according
to the institutional or technological space where it is preserved (archival service,
library or computer package) because such a criterion does not embrace the
dynamic context of its production, of its recording and of its use/access
(functionality);
(d) the need to know (to understand and to explain) social information through theoretical-scientific models, in an increasingly effective way, instead of through an empirical practice reduced to a set of technical procedures such as arrangement, description and retrieval;
(e) the replacement of the process-oriented perspective evident in the terms ‘records
management’ or ‘information management’ by a new scientific view that tries
to understand the information involved in the management process of any orga-
nization; this means that the informational practices/procedures are aligned
with managers’ conceptions and practices and with the organizational culture.
These characterizing elements, together with the definition of Information, can be
considered the minimum and fundamental basis of a scientific approach to that which
we consider to be the object of study and work of IS, understood as a theoretical and
practical field in consolidation that supports multifaceted professional competencies,
in accordance with the contexts and demands of professional activities.
In what concerns the methodological component of IS, we can sum up the ideas explored at length in the book mentioned previously (Silva and Ribeiro 2002). According to the topological model proposed by Paul de Bruyne, J. Herman and M. de Schoutheete for research in the social sciences (De Bruyne et al. 1974; Lessard-Hébert et al. 1994), the method of information science is achieving greater acceptance and tends to find consolidation through a quadripolar research dynamic, operated and continuously repeated within the field of knowledge itself. This dynamic combines quantitative approaches (there are aspects of the object which can be observed, experimented on and measured) and qualitative approaches, in which the subject’s interpretative/explanatory ability necessarily has modelling implications. The research dynamic thus implies permanent interaction among four poles: the epistemological, the theoretical, the technical and the morphological.
alternative to the epistemological impasse which has afflicted the debate on the
status of IS – the impasse of reducing this “science” to an inter-discipline, which
seems to be no more than a “non place” ….
Clearly, in this exploratory paper, we do not intend to survey all the implications, much less examine them in detail. But there is a need at least to identify and highlight those which arise as central points in structural and analytical reflection on the epistemological project in which we are deeply involved. To make this intention clear, we must return here to the operational definition of information put forward previously, and attempt a “deconstruction” which clearly reveals its underlying epistemological and, particularly, philosophical assumptions.
Since we began and broadened the epistemological debate on IS, we have felt that an operational definition of information, and another of communication, is strategically necessary in order to understand clearly the “texture” and contours of the object of study of our scientific discipline. While it is apparent that this object is a discursive and social construct of a group or community of practitioners/researchers, it also seems clear that the constructed object has to point to a phenomenal reality, or to phenomena arising from an external reality independent of the subject-researcher. This positioning can be seen, in philosophical terms or in terms of the theory of knowledge, as a mitigated realism which reconciles the representational subjectivity of the themes and related problems to be explored by IS with the objective rooting of those themes and problems in a reality to which the concepts of information and communication refer. Specifically, the socially embedded human mind and body are the ultimate instance which we intend to know, and which lies beyond the palpable materiality of the document, seen as an epiphenomenon of semiosis, that is, of the signifying and symbolic capacity of (human and social) meaning production. This view can and should be completed with that of Robert
Escarpit in L’Information et la communication: théorie générale, in which docu-
ment is defined as a visible or touchable informational object endowed with a dual
independence in relation to time: synchrony or internal independence of the mes-
sage which is no longer a linear sequence of events, but a multidimensional juxta-
position of traits; and the stability or global independence of the informational
object which is no longer an event registered in the course of time, but a material
medium of the trait which can be preserved, transported, reproduced (Escarpit
1991:123). Traits or codified representations? At the core of this question lies a certain divergence from Escarpit: the idea that information does not dematerialize, even when it is produced in the mind and can be absorbed by another person through direct phonetic communication or through a record on a physical medium (a document).
Within the historical specificity of IS, the concepts of information and communication emerge and are adopted and used less under the influence of the Mathematical Theory of Communication of Claude Shannon and Warren Weaver (1949) than through the reflexive deconstruction of the old notion of document. The importance of documental action as a potentially communicational practice, and the natural criticism of Shannon and Weaver’s mathematical theory, mark the specificity of the info-communicational object in the epistemological conception defended here.
This definition already establishes the bridge with human and social interaction, which the concept of communication substantiates and to which it is intrinsically complementary, although it is not to be confused with information, despite some authors having accepted this mistaken overlap.
Communication: Process of transmitting information among agents who share a set of signs and semiotic rules (syntactic, pragmatic and semantic), whose objective is the construction of meaning. Synonymous with human and social interaction, and necessarily assuming information in the form of messages or contents which are transmitted, shared, in sum, communicated (Silva 2006; DeltCI 2007).
3 Psychological information is quasi- or proto-information by reference to the mathematical and physical conception of Shannon and Weaver.
The present book goes further: it is an inter- and trans-disciplinary project in the philosophy
of science that analyses modern efforts to arrive at a unified conceptual framework, one that
encompasses the complex fields of information, cognition and communication science, and
semiotic scholarly studies – fields that together are often referred to as information science.
This book offers an interpretation of those “information science” research programs of
the sort which unified information science can offer; it also discusses what is needed to
supplement present approaches. As such, it is part of the Foundation of Information Science
(FIS) research program, in that it asks whether there can be a transdisciplinary informa-
tion science that encompasses the technical, natural, and social sciences, as well as the
humanities, in its understanding of understanding and communication, a vision that originally came from Norbert Wiener in his book Cybernetics; or, Control and Communication
in the Animal and the Machine (1961) (…)
This book aims to formulate a new transdisciplinary framework based on Peirce’s semi-
otics, second-order cybernetics, Luhmann’s systems theory, cognitive semantics, and lan-
guage game theory. I apply concepts found in second-order cybernetics and the semiotics
of Charles Sanders Peirce to solve various transdisciplinary conceptual problems at the
heart of cognitive science, since cybernetics was among the original contributors to modern
information and communication science. I will refer to this transdisciplinary framework as
‘Cybersemiotics’ (Brier 2008:3–4).
Bearing in mind these citations, used here for merely illustrative purposes, there is undoubtedly an intense dialogue to be developed with Brier’s Cybersemiotics, as with Luciano Floridi’s Philosophy of Information, although the latter presumes at the outset a less linear and more complex dialogic process. It is precisely for this reason that we seek to begin that dialogue here in this article, indicating points of convergence and deviation even more than points of divergence.
The very recent book by Floridi, Information: A Very Short Introduction (2010),
may prove a good starting point. The author summarizes the brief and recent “history”
of a concept which has been appropriated by and adjusted to different areas of activity
and scientific knowledge, much as Anthony Wilden did in the Information entry of
the Enciclopedia Einaudi (Wilden 2001:11–72). And among the several meanings
collected, we will see how Floridi presents semantic information, since this “concep-
tual variant” comes quite close to the psychological information of the operational
definition we used in our epistemological approach to IS. However, there is a prior
aspect to consider in the way Floridi conceives semantic information and which
has to do with the relationship that he establishes between data and information in
Chap. 2 – The Language of Information – of the above-mentioned book:
Over the past decades, it has become common to adopt a General Definition of Information
(GDI) in terms of data + meaning. GDI has become an operational standard, especially in
fields that treat data and information as reified entities, that is, stuff that can be manipulated
(consider, for example, the now common expressions “data mining” and ‘information
management’). A straightforward way of formulating GDI is as a tripartite definition
(Table 9.1):
According to (GDI.1), information is made of data. In (GDI.2), ‘well formed’ means
that the data are rightly put together, according to the rules (syntax) that govern the chosen
system, code, or language being used. Syntax here must be understood broadly, not just
linguistically, as what determines the form, construction, composition, or structuring of
something (…).
Regarding (GDI.3), this is where semantics finally occurs. ‘Meaningful’ means that the
data must comply with the meanings (semantics) of the chosen system, code, or language
9 Information Science and Philosophy of Information: Approaches and Differences 179
in question. Once again, semantic information is not necessarily linguistic. For example, in
the case of the car’s operation manual, the illustrations are supposed to be visually meaning-
ful to the reader (Floridi 2010:20–21).
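The tripartite structure of GDI (information as data that are present, well formed and meaningful) can be rendered as a toy sketch. Everything below, including the mini-code, its alphabet and the function names, is our own illustrative invention under stated assumptions, not Floridi's formalism:

```python
# Toy illustration of the General Definition of Information (GDI):
# information = data that are (GDI.1) present, (GDI.2) well formed, (GDI.3) meaningful.
# The "chosen system" here is an invented mini-code, purely for illustration.

MINI_CODE = {"SOS": "distress signal", "OK": "all is well"}  # semantics (GDI.3)
ALPHABET = set("SOK")                                        # syntax of the code (GDI.2)

def is_well_formed(data: str) -> bool:
    """GDI.2: the data conform to the syntax of the chosen code."""
    return len(data) > 0 and all(ch in ALPHABET for ch in data)

def is_meaningful(data: str) -> bool:
    """GDI.3: the data comply with the semantics of the chosen code."""
    return data in MINI_CODE

def qualifies_as_information(data: str) -> bool:
    """GDI: well-formed, meaningful data qualify as information."""
    return is_well_formed(data) and is_meaningful(data)

print(qualifies_as_information("SOS"))  # True: well formed and meaningful
print(qualifies_as_information("KSO"))  # False: well formed but meaningless
print(qualifies_as_information("XYZ"))  # False: not even well formed
```

The sketch makes Floridi's point about the breadth of syntax concrete: well-formedness is judged against whatever code is chosen, here a string alphabet, but the same shape would hold for Braille, Morse or musical notation.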
We retain and highlight the statement contained in this extract that information is
made of data that are rightly put together, according to the rules (syntax) that govern
the system, code, or language being used. It should also be noted that, for Floridi,
syntax here is meant broadly and is not restricted to the linguistic dimension. It covers
other codes and systems. This is a highly relevant aspect implied in the initial part
of our operational definition of information – codified (mental and emotional)
representations address a plurality of codes, from spoken and written word to Braille
code, Morse or “programming languages”, including musical notation, mathematical
codification (digits, propositions, equations, algorithms, etc.), geometry and chromatic
code. The intention is to identify, through the concept of information, a broad and
unified object of study, which aggregates types of representation which are still
persistently classified and “arranged” into different and even incompatible “categories”.
From this perspective, complex thought is assumed as the founding matrix of the info-communicational field, and an “IS approach” to people’s informational behaviour contemplates everything. To keep to Floridi’s example, this means the illustrations included in a car manual, intersected with text on the same support; but we can go further and connect the manual to publicity spots about that car model, the audiovisual pieces covering that car’s test-drives, and so on. Floridi seems to converge with our perspective by highlighting the amplitude of the notion of semantics and by emphasizing the importance of syntax (code, system and languages) in the correct structuring of the data that compose information. This defining strategy does not, however, clarify a distinction we have come to make between data.1 and data.2. Data.1, in Computer Science, is the conventional representation, through codification, of a piece of information, which enables its electronic processing. This allows us to say that there is absolutely no difference between data and information: they address the same phenomenon (cerebral, mental and psychological activity). Data.2 means the physical, electromagnetic, seismic, etc., impulse or vibration which, through specific technological devices, is converted into graphic representations (information); in this sense, data and information differ, in that they address distinct phenomena (Silva 2006:145).
Floridi seems to address this distinction, which we believe is enlightening, when he says that data can be lacks of uniformity in the real world. There is no specific name for such ‘data in the wild’. One may refer to them as dedomena, that is, ‘data’ in Greek (…). They are pure data, that is, data before they are interpreted or subject to cognitive processing (Floridi 2010:23). His remarks a little later, still in Chap. 2, nevertheless raise reservations and doubts, because they place data proceeding from nature or from mechanical and artificial systems, and not from human and social cognitive activity, as secondary data. Furthermore, we feel that the distinction between metadata and metainformation is redundant, since an indication of copyright, considered metainformation, is a codified mental representation, much as an indication of place of edition, considered metadata.
Continuing with this comparison of perspectives, we will now focus on Chap. 4,
Semantic Information, which begins with a highly relevant warning from Floridi:
the MTC [mathematical theory of communication] is not interested in the meaning,
reference, relevance, reliability, usefulness or interpretation of the information
exchanged, but only in the level of detail and frequency in the uninterpreted data
that constitute it (Floridi 2010:48). This is a fitting insight, particularly bearing in mind that, for Floridi, the difference between MTC and semantic information is of the same order, different yet related, as that between the Newtonian description of the physical laws that govern the dynamics of a tennis match and the narration of that same match by a sports commentator: “The two are certainly related, the question is how close” (Floridi 2010:48). In the perspective of IS, both descriptions are information in codes which obey different rules (syntaxes), thus configuring the respective object of study, which includes the way in which a certain type of information is produced, in which context and with which aims, and how it is organized, stored, made accessible, used and reproduced. For PI, on the other hand, what needs to be discussed is what type of relationship exists between apparently different phenomena, such as the physics of movement and the description, with mental signs and symbols, of a specific game taking place in time and space. Through this and other specifications, it is possible to show how IS and PI clearly function on different planes and either do not intersect or do so only contingently.
Throughout his instructive book, Floridi provides several diagrams, some of
which are reproduced with certain indications and signs for the readers, such as
“You are here”, letting them know where they are or the matter being discussed.
This image, reproduced several times, emerges as a “tree” scheme through which we come to perceive a precise conceptual sequence: (structured) data are divided into environmental (information) and semantic (content); the latter, in turn, are subdivided into instructional (with a trait linking them to environmental) and factual, which are further subdivided into untrue and true (information); true information generates knowledge, whereas untrue information is either unintentional (misinformation) or intentional (disinformation). Here, we are interested in the figure in which Floridi focuses
on the conceptual point of factual information, from which he guides us. The most
relevant distinction under factual is between “semantic content and semantic
information: the latter needs to be true, whereas the former can also be false”
(Floridi 2010:50). At the basis of this distinction lies the definition (DEF) of factual
semantic information thus formulated: p qualifies as factual semantic information if
and only if p is (constituted by) well-formed, meaningful and veridical data (Floridi
2010:50). And there are at least three advantages in this DEF: the first is that it
clarifies that false information is not a genuine type of information (when semantic
dinner tonight; or B. there will be some guests tonight; or C. there will be three
guests tonight; or D. there will and will not be some guests tonight – only (c) has a
maximum degree of informativeness because it fully corresponds to the truth of
situation w.
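Floridi's resolution of the paradox can be caricatured in a small possible-worlds sketch. The modelling choices below (worlds as guest counts from 0 to 5, the reconstruction of the truncated statement A as a tautology, the simple scoring rule) are our own illustrative assumptions, not Floridi's actual quantitative theory of strongly semantic information:

```python
# Toy possible-worlds sketch of degrees of informativeness, loosely inspired by
# Floridi's treatment of the Bar-Hillel-Carnap paradox. All modelling choices
# here are illustrative assumptions, not Floridi's own formal apparatus.

WORLDS = set(range(6))   # possible numbers of guests tonight: 0..5
ACTUAL = 3               # situation w: exactly three guests

statements = {
    "A (tautology)":     lambda n: True,    # e.g. "there will or will not be guests"
    "B (some guests)":   lambda n: n > 0,
    "C (three guests)":  lambda n: n == 3,
    "D (contradiction)": lambda n: False,   # "there will and will not be some guests"
}

def informativeness(pred) -> float:
    """The fewer worlds a TRUE statement leaves open, the more informative it is."""
    consistent = {n for n in WORLDS if pred(n)}
    if ACTUAL not in consistent:
        # Floridi's veridicality requirement: an untrue "statement" (including a
        # contradiction) does not qualify as semantic information at all.
        return 0.0
    return 1 - len(consistent) / len(WORLDS)

for name, pred in statements.items():
    print(f"{name}: {informativeness(pred):.2f}")
```

On this toy scoring, the tautology A and the contradiction D both score zero, while C, which pins down the actual situation exactly, is maximally informative among the true statements, mirroring the text's claim that only (C) fully corresponds to the truth of situation w.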
Floridi’s proposed solution to the Bar-Hillel-Carnap paradox naturally has direct implications for the scientific study of human and social communication: the interaction between a sender A and a receiver B will be the more complete and perfect the higher the degree of informativeness transmitted; that is, correct communication has to be based on the assimilation of this assumption. A good journalist, for example, depends entirely on it. However, this formulation may be at fault in reducing complexity to a well-conditioned “logical environment”, built on respect for prior good conditions, which may not translate or capture the incoherence, wear and irrationality of daily life.
This imbalance is aggravated if we take into account the concept of IS which we
put forward, which means that we are clearly positioned within the emergent post-
custodial, informational and scientific paradigm. Within this paradigm, IS faces the
complexity of the real world more clearly: to study information scientifically does
not exclude the recourse to hermeneutics, as has been suggested by Rafael Capurro
(2002), and in this sense, it is important to bear in mind the meaning of words,
images, drawings, colours, sounds, etc., but the internal or inherent meaning(s) of
each text do(es) not comprise IS’s main object of study. IS does not preferentially
search for truth in each unit of information or meaning produced and which can be
communicated, but rather, it seeks the (possible) truth in the info-communicational
cycle or process developed through a number of stages and significant moments,
such as the production of information in a certain context, the respective organiza-
tion, arrangement and storage in that context or another, its use according to the
specific needs of the user acting in situation and in context, its reproduction … Seen
from this perspective, information cannot be reduced to the notion of semantic information alone, a notion we otherwise accept without particular restrictions. However, the phenomenon underlying IS’s object of study intertwines individual psychology and social dynamics, thus making the situations studied much more complex. Consider, as a critical example, video games produced according to an inferential narrative logic involving the linear resolution of problems and obstacles, naturally distinct from, for example, the classical literary narrative. There is no concern with truth in this process, but rather with constructing the plausible, and the degree of informativeness cannot be measured by correspondence to the truth of the situation w to which it reports. In the perspective of IS, it is not the degree of informativeness that is studied, but who produces the video games, in which context and to what end; how they are accessed (which implies knowing how they are organized, accumulated and disseminated); what informational needs they satisfy (how these are generated, reproduced, modified and contextualized); and what impact they have on the personal and professional lives of their “consumers”.
We have here, undoubtedly, a topic for further analysis and debate, to which we hope to return in other contributions. Before concluding, there are two more topics we would like to mention.
4 Floridi believes that “we are now slowly accepting the idea that we might be information organisms among many others, significantly but not dramatically different from natural entities and agents and smart, engineered artifacts” (Floridi 2009:156).
to ownership; and (c) the criterion for existence is no longer being immutable (Greek metaphysics) or being potentially subject to perception (modern metaphysics) but being interactable (Floridi 2009:156).
Together with a Metaphysics, Floridi, in line with other authors such as Capurro, has come to lay the foundations of an Ethics, the two forming the fundamental components of PI and a space in which to reformulate the classical, crucial philosophical problems according to the current state of the World and Mankind. A project of this type does not override or substitute the specific path of Science and, particularly, of the information and computational sciences (ICS), on which Floridi explicitly focuses in his article, and among which IS stands in its own right; but it can and should accompany them in regular, intense debate, bringing clear benefits to all.
References
Brier, Søren. 2008. Cybersemiotics: Why information is not enough! Toronto: University of
Toronto Press. ISBN 978-0-8020-9220-5.
Capurro, Rafael. 2002. La hermeneutica y el fenómeno de la información. Available in: http://www.
capurro.de/herminf.html. Accessed 14 Apr 2010.
Day, Ron. 1997. Paul Otlet’s book and the writing of social space. JASIS – Journal of the American
Society for Information Science. 48(4): 310–317. New York. ISSN 0002-8231.
De Bruyne, P., et al. 1974. Dynamique de la recherche en sciences sociales de pôles de la pratique
méthodologique. Paris: PUF.
Debons, A. 1986. Information science. In ALA world encyclopedia of library and information
services, 2nd ed, 354–358. Chicago: American Library Association. ISBN 0-8389-0427-0.
DeltCI – Dicionário Eletrônico de Terminologia em Ciência da Informação. 2007. http://www.
ccje.ufes.br/dci/deltci/index.htm. Accessed on 14 Apr 2010.
Tiberghien, G. (dir.). 2002. Dictionnaire des sciences cognitives. Paris: Armand Colin. ISBN 2-200-26247-7.
Escarpit, R. 1991. L’Information et la communication: théorie générale. Paris: Librairie Hachette.
Floridi, L. (ed.). 2004. The Blackwell guide to the philosophy of computing and information.
Malden: Blackwell Publishing. ISBN 0-0631-22918-3.
Floridi, L. 2009. The information society and its philosophy: Introduction to the special issue on “The philosophy of information, its nature, and future developments”. The Information Society 25: 153–158. London: Routledge. ISSN 0197-2243.
Floridi, L. 2010. Information: A very short introduction. Oxford: Oxford University Press. ISBN
978-0-19-955137-8.
Le Coadic, Y.-F. 2004. A Ciência da Informação. Trad. de Maria Yêda F. S. de Filgueiras Gomes,
2.ª ed. Brasília: Briquet de Lemos – Livros. ISBN 85-85637-23-4.
Leclerc-Reynaud, S. 2006. Pour une documentation créative: l’apport de la philosophie de
Raymond Ruyer. Paris: ADBS-Association des Professionnels de l’Information et de la
Documentation. ISBN 2-84365-2.
Lessard-Hébert, M., et al. 1994. Investigação qualitativa: fundamentos e práticas. Lisboa: Instituto
Piaget. ISBN 972-9295-75-1.
Mella, P. 1997. Dai Sistemi al pensiero sistémico: per capire i sistemi e pensare com i sistemi.
Milano: Franco Angeli. ISBN 88-464-0336-3.
Nonaka, I., and H. Takeuchi. 1998. A theory of the firm’s knowledge-creation dynamics. In The dynamic firm: The role of technology, strategy, organization and regions, ed. A. Chandler, P. Hagström, and Ö. Sölvell, 214–241. New York: Oxford University Press. ISBN 019-829604-5.
Otlet, P. 1996. El Tratado de Documentación: el libro sobre el libro: teoria y práctica. Trad. Maria Dolores Ayuso Garcia. Múrcia: Universidad de Múrcia. ISBN 84-7684-766-1.
Rayward, W.B. 1997. The origins of information science and the International Institute of Bibliography/International Federation for Information and Documentation (FID). JASIS – Journal of the American Society for Information Science 48(4): 289–300. New York. ISSN 0002-8231.
Saracevic, T. 1996. Ciência da informação: origem, evolução e relações. Perspectivas em Ciência
da Informação 1(1): 41–62. Belo Horizonte. ISSN 1413–9936.
Shera, J.H., and D.B. Cleveland. 1977. History and foundations of information science. Annual
Review of Information Science and Technology, Washington 12: 249–275.
Silva, A.M. 2006. A Informação: da compreensão do fenómeno e construção do objecto científico.
Porto: Edições Afrontamento; CETAC.COM. ISBN 972-36-0859-3.
Silva, A.M., and F. Ribeiro. 2002. Das “Ciências” Documentais à Ciência da Informação:
ensaio epistemológico para um novo modelo curricular. Porto: Edições Afrontamento. ISBN
972-36-0622-4.
Wilden, A. 2001. Informação. In Enciclopédia Einaudi. Vol. 34. Comunicação-Cognição. Lisboa:
Imprensa Nacional-Casa da Moeda. ISBN 972-27-0923-2.
Williams, R.V., L. Whitmire, and C. Bradley. 1997. Bibliography of the history of information
science in North America, 1900–1995. JASIS – Journal of the American Society for Information
Science 48(4): 373–379. New York. ISSN 0002-8231.
Part IV
Epistemic and Ontic Aspects of the
Philosophy of Information
Chapter 10
Skepticism and Information
1 It should be noted that gathering, creating, processing, managing and using information is not always done for the acquisition of knowledge or other epistemic standings. Sometimes, for example, information is collected for the sake of collecting more information, or for justifying policy decisions. Nevertheless, the kind of information-based inquiry we explore here is that which is pursued with the final purpose of gaining knowledge about the matter at hand. This is the kind of inquiry pursued in Dretske (1981) and Floridi (2010), among others. These scholars accordingly view information as, in their own distinctive ways, an important component of epistemology.
E.T. Kerr (*) • D. Pritchard
School of Philosophy, Psychology and Language Sciences, University of Edinburgh,
Dugald Stewart Building, Charles Street, Edinburgh EH8 9AD, UK
e-mail: E.T.Kerr@sms.ed.ac.uk; duncan.pritchard@ed.ac.uk
I am in (Fallis and Whitcomb 2009). As has been noted elsewhere (e.g., Himma
2007), we are overloaded with information in the modern age. In this paper we
examine these paths from information to knowledge and how constricting the range
of relevant information is critical to information management.
With the development of fairly recent technology, information has become a
ubiquitous cultural buzz-word: the Information Age; information overload; the
Information Superhighway; freedom of information; information technology; infor-
mation science; and so on. Information and knowledge appear together frequently
both in popular writing and scientific disciplines either as conflated terms for the
same phenomena or related terms in some way involved in practices of inquiry,
discovery, knowledge acquisition, and so on. The job of relating these concepts
more precisely has tended to be undertaken by various academic disciplines that
take information as a key theoretical concept. Although it is disciplines such as
information technology, knowledge management, and library science that have
devoted the most sustained analysis to information, the growing cultural awareness of
information has also provoked some philosophers to comment on its societal, epistemological,
ontological, or axiological significance, and sometimes to use it as a component in
their philosophical work.2
Two philosophers of particular note in this regard are Fred Dretske and Luciano Floridi.
Both have developed technically complex epistemologies with information playing
a central role (See especially Dretske 1981, 1983, 2000, 2006; Floridi 2005, 2010).
Dretske connects information to knowledge via an ordinary dictionary definition of
the former:
[By information] I mean nothing very technical or abstract. In fact, I mean pretty much what
(I think) we all mean in talking of some event, signal or structure carrying (or embodying)
information about another state of affairs. A message (i.e., some event, stimulus or signal)
carries information about X to the extent to which one could learn (come to know)
something about X from the message. (Dretske 1983, 10)
According to this rendering of the relevant alternatives theory (the RAT view), our capacity to possess perceptual
knowledge is heavily affected by our environment. Pritchard (2009, 5) makes the
2
See, for example, Fallis (2004), Harms (1998), and Goldman (1999, 161–182).
3
The barn-façade case was first put forward in print by Goldman (1976), who credits the example
to Carl Ginet.
4
For Dretske’s initial rejection of epistemic closure, see Dretske (1970, 1971). See also his recent
exchange with Hawthorne (Dretske 2005a, c; Hawthorne 2005). For a critical discussion of the
implications of Dretske’s informational epistemology on epistemic closure see Jäger (2004) and
Shackel (2006).
Epistemic closure is the principle that if an agent knows one proposition, and knows
that it entails a second proposition, then that agent also knows the second proposition.
So, for example, if one knows that one is presently in Edinburgh, and one knows
that this entails that one is not a BIV on Alpha Centauri, then one knows that one is
not a BIV on Alpha Centauri. Although this principle has broad intuitive support,
Dretske rejects it.5 But why is it that on Dretske’s view I can acquire knowledge
about a proposition but not about a proposition which I know full well is entailed by
it? Dretske is led into this position through two closely related commitments: (i) that
perceptual information is never relevant to skeptical hypotheses, and (ii) that infor-
mation is essentially non-factive evidence.
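The closure principle that Dretske rejects can be stated schematically as follows (the formalization is ours, added for clarity; K_S is the knowledge operator for an agent S):

```latex
% Epistemic closure under known entailment:
% if S knows p, and S knows that p entails q, then S knows q.
\[
  \bigl( K_S\,p \;\wedge\; K_S(p \rightarrow q) \bigr) \;\rightarrow\; K_S\,q
\]
% Dretske's denial: one may have both conjuncts of the antecedent while
% lacking the consequent, e.g. with p = `I am in Edinburgh' and
% q = `I am not a BIV on Alpha Centauri'.
```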
We noted the first commitment above. Since, ex hypothesi, agents cannot discriminate
between normal scenarios and skeptical alternatives, it follows, according to
Dretske, that agents lack an informational basis for dismissing skeptical alternatives.
The second commitment becomes clear once we reflect that if information
could be factive evidence for what it is evidence for—if, that is, it could entail the
truth of what it is evidence for—then it would follow that the information we have
to support our beliefs in normal circumstances might well suffice to entail the denial
of the target skeptical scenario. Clearly, however, Dretske does not think that we
ever have evidence of this sort, and hence a non-factive view of the evidence
provided by information is clearly implicit here.
In order to more closely examine these commitments, consider the following
local skeptical hypothesis, which we will call ‘Zebra’:
Zebra
Fred is at the zoo. If he perceives what he takes to be a zebra, Fred can have no informa-
tional basis for believing that what he perceives is not, in fact, a cleverly-disguised mule. In
other words, the signal carrying this information does not allow him to discriminate between
‘a zebra in my perceptual field’ and ‘a cleverly disguised mule in my perceptual field’.
Fred may interpret the signal as evidence that there is a zebra in front of him as
a matter of habit, or perhaps relying on other evidence such as the sign on the fence
or assumptions about what kinds of animals are in a zoo. However, his information
is, it seems, non-factive. My receiving a signal such as this does not entail
that there is in fact a zebra in the pen. More generally, as Dretske claims, it appears
that none of the information that the subject possesses which indicates that he is
perceiving a zebra is information which offers him an adequate epistemic basis on
which he can dismiss the ‘cleverly disguised mule’ skeptical scenario.
This way of thinking about our evidential position with regard to skeptical
challenges has, however, been challenged. Ram Neta (2002, 2003), for example,
has argued that the scope of your evidence is affected by context. Under this account,
there is a range of contexts in which evidence (read: information) is factive. Neta
argues that the skeptic only appears to succeed by restricting what counts as
evidence. In normal contexts my evidence typically is factive, and it only becomes
5
Although there are few philosophers these days who deny this principle, it was also famously
denied by Nozick (1981), for reasons very similar to the reasons offered by Dretske.
non-factive in skeptical contexts in which very demanding standards for what counts
as evidence are in play. Hence, in the zebra case, my evidence for believing that
there is a zebra before me could well be factive in normal contexts. For example, if
my evidential state in normal contexts is that of seeing that there is a zebra before
me, then, since seeing that p entails p, my evidential state actually entails that there
is a zebra before me, and hence that I am not currently being presented
with a cleverly disguised mule. Relatedly, if my evidence, in normal contexts, for
believing that I have two hands is that I can see them before me, then I have
evidence which entails not only that I have two hands, but also that I’m not a handless
BIV on Alpha Centauri.
According to Neta, however, the context can change in such a way as to restrict
the scope of one’s evidence. If I were to gain evidence that cast doubt upon my
belief that I have hands—for example, if I were to witness a room of BIVs—then
this would make the possibility that I am a BIV a relevant alternative. This is effectively
what the skeptic does: to describe such a scenario and cast doubt upon what was
previously undoubted. There are two ways in which this may be done.
On the first, the skeptic may simply suggest the possibility of a skeptical hypoth-
esis that had previously been ignored or unexamined by the subject. This may place
an onus on the subject to now eliminate that possibility in order to be correctly said
to know the proposition. This intuition suggests that we cannot know a proposition
until we have ruled out all relevant alternatives and that the range of relevant alter-
natives is determined by the conversational context (Pritchard 2010, 19). In other
words, being made aware of an alternative, however implausible or absurd, can
make that alternative relevant.
The second way in which the skeptic can make the alternative relevant is by actually
offering evidence for thinking that a skeptical scenario has obtained. For example,
consider an extension to the case of Zebra:
Zebra*
Fred’s friend and skeptic, Frank, mentions to Fred that he once read a science-fiction story
in which all the world’s zebras are replaced by hologram zebras and the real zebras are
taken to a neighbouring planet. A little while later, Frank notices a pot of paint lying beside
the animal and brings this to Fred’s attention by gesturing towards it. He also tells Fred that
the sign on the outside of the pen appears to have been written over an older sign, suggesting
that a different message was once written there.
In this example, Frank initially merely presents Fred with a radical skeptical
hypothesis. In the view of some epistemologists such pronouncements can change
the conversational context in which evidence requirements and relevant alternatives
are set.6 Frank’s story may thus rob Fred of his knowledge that there is a zebra in the
pen before him. In the subsequent details of the story, however, Frank presents Fred
with perceptual information and testimonial evidence for calling into doubt Fred’s
knowledge of what is in the pen.
6
See, for example, DeRose (1995) and Lewis (1996).
Here are two apparent truisms. First, that our interest as inquirers in information
is often motivated by our desire to gain knowledge about something.7 Second, that
we are almost always faced with limited information about the target issue. At the
very least, one can always think that it would be better if one had more information
about this subject matter. What falls out of these two statements? One might think
that, as Aristotle claimed of knowledge (De Anima, 402a1), more information is
always better than less and so we should endeavor to collect as much information as
possible on the matter in question with the hope of, at some point, turning it into
knowledge. Cursory reflection reveals that this is evidently false (Himma 2007).
Internet search engines are a good example. Type in a random search string and it
will probably return hundreds of thousands of results. No human could sort through
that amount of information and so the search engine is designed to return those
results that are likely to be most beneficial to the user first. A great problem of the
Information Age is our inability to keep the technology for sorting and filtering
relevant information apace with the rapidly developing technology for collecting
information. This is a familiar problem for anyone tasked with making use of any of
the many web search engines out there. Access is almost always there, but relevancy
is sporadic and limited. Thus, in order to deal with problems as they arise one needs
to put constraints on what evidence and information is relevant. According to Neta,
the skeptic unduly restricts evidence in certain contexts. What information manage-
ment effectively does is make the same judgments about appropriate restrictions.
Dretske’s account is primarily an account of perceptual knowledge and informa-
tion. He therefore feels entitled to conclude that, since the mere appearance of an
object cannot communicate its non-skeptical status, any signal which carries infor-
mation about appearance cannot answer a skeptical doubt. However, we have pro-
vided examples (such as Bouwsma’s adventures and Zebra) where perceptual
information does justify a skeptical hypothesis or a non-skeptical proposition. It
would seem that Dretske is wrong to think that information is irrelevant to combating
local skeptical scenarios. Agents can receive information (even if we think of
information as non-factive) for dismissing such scenarios (once we do not limit their
information to the bare visual scene) (Pritchard 2010). Whether Dretske is right
about radical skeptical scenarios depends on whether information is ever factive.
If it is always factive then Dretske has no need to deny closure. Even if information
is only sometimes factive (i.e., in ordinary contexts, à la Neta) then Dretske is
still wrong.
7
For an extended discussion of the goal of information collection and dissemination see Fallis
(2002). Note that even those who deny that the goal of information services is for users to acquire
knowledge grant that in a large range of contexts our goal in collecting and disseminating informa-
tion is to acquire knowledge. For example, the information management scholar Chun Wei Choo
expresses, albeit in different terms, a widely held view when he states that the primary goal of
information management is to ‘harness the information resources and information capabilities of
the organization in order to enable the organization to learn and adapt to its changing environment’
(Choo 2002, xv). Later, Choo writes that the ‘transfiguration of information into knowledge is the
goal of information management’ (Choo 2002, xiv).
Let us consider an argument, due to John McDowell (1995), that reasons (under which
heading we may include perceptual evidence) are factive. Earlier in
the chapter, we discussed Neta’s comment that external world skepticism is not
meant to cast doubt upon certain ‘inner’ reasons such as ‘that I am not having a
visual experience of a white expanse before me’. McDowell argues against a tacit
assumption throughout epistemology that such inner reflections cannot encompass
factive empirical reasons (Pritchard 2008, 10).
McDowell does not, that is, think that no empirical reasons are factive. In the
case of veridical perception, we have a kind of perceptual evidence which is not
present in cases of non-veridical perception such as illusion or hallucination.
McDowellian epistemological disjunctivism presents an option for Dretske which
has so far been left unexplored but which may undermine his case against epistemic
closure, with concomitant implications for his theory of information. In brief, if
perceptual evidence is (sometimes) factive, then Dretske is wrong to say that there
is no perceptual evidence which can serve as evidence against skeptical hypotheses.
Dretske’s view is that all perceptual evidence is defeasible when it comes to radical
skeptical hypotheses. No matter how competently one receives and judges the infor-
mation one is presented with, these processes never amount to something which
entails the denial of the target skeptical hypothesis. The view is intuitive and persuasive,
but the McDowellian view offers one alternative: that there is a disjunction between
cases of factive and non-factive reasons. That is, there is some reason or warrant or
a kind of support missing in cases of radical skepticism that is present in so-called
‘ordinary’ cases.
Dretske takes it for granted that any given knowledge claim can be subject to a
skeptical rebuttal. Such rebuttals challenge the upgrading of an information-based
belief (that something appears to be the case) to information-based knowledge
(knowledge that something is the case). In the case of Zebra* there is information
that carries the signal to Fred that what is in the pen is a painted mule. Dretske might
insist that this does not undermine his thesis as these pieces of information may
themselves be subject to skeptical hypotheses and are providing only non-factive
evidence. However, if one follows McDowell down his disjunctivist path, then it is
not inevitable that Dretske takes such a position, and consequently not inevitable that
he is led to reject the principle of epistemic closure.
Neta presents a contextualist account of evidence or reasons in which the evidential
requirements for knowledge are affected by context. Dretske closely links informa-
tion to non-factive evidence but under the contextualist account there are cases of
factive evidence which would provide information-based knowledge of the denials
of skeptical hypotheses in some cases. Additionally, McDowell provides a non-
contextualist account of evidence or reasons in which there is an epistemic
component present in some cases, not present in others (such as cases of hallucina-
tion or illusion—the hallmark of skeptical hypothesizing), and in which factive
evidence warrants the denial of skeptical hypotheses (Gomes 2011). As a consequence,
these distinctions between skeptical and ordinary contexts or between factive and
non-factive evidence present alternatives to Dretske’s inference that perceptual
information can never give us evidence or reasons to refute skeptical hypotheses.
References
Bouwsma, O.K. 1965. Descartes’ evil genius. In Meta-meditations: Studies in Descartes, ed.
A. Sesonske and N. Fleming. Belmont: Wadsworth.
Choo, C.W. 2002. Information management for the intelligent organization: The art of scanning
the environment, 3rd ed. Medford: Information Today.
DeRose, K. 1995. Solving the skeptical problem. Philosophical Review 104: 1–52.
Dretske, F. 1970. Epistemic operators. Journal of Philosophy 67: 1007–1023.
Dretske, F. 1971. Conclusive reasons. Australasian Journal of Philosophy 49: 1–22.
Dretske, F. 1981. Knowledge and the flow of information. Cambridge, MA: MIT Press.
Dretske, F. 1983. The epistemology of belief. Synthese 55(1): 3–19.
Dretske, F. 2000. The pragmatic dimension of knowledge. In Perception, knowledge and belief:
Selected essays, ed. F. Dretske. Cambridge: Cambridge University Press.
Dretske, F. 2005a. The case against closure. In Contemporary debates in epistemology, ed. E. Sosa
and M. Steup, 13–26. Oxford: Blackwell.
Dretske, F. 2005b. Is knowledge closed under known entailment? In Contemporary debates
in epistemology, ed. E. Sosa and M. Steup, 13–26. Oxford: Blackwell.
Dretske, F. 2005c. Reply to Hawthorne. In Contemporary debates in epistemology, ed. E. Sosa and
M. Steup, 43–46. Oxford: Blackwell.
Dretske, F. 2006. Information and closure. Erkenntnis 64: 409–413.
Fallis, D. 2002. Introduction. Social Epistemology and Information Science, special issue of Social
Epistemology 16(1): 1–4.
Fallis, D. 2004. Epistemic value theory and information ethics. Minds and Machines 14(1):
101–117.
Fallis, D., and D. Whitcomb. 2009. Epistemic values and information management. The Information
Society 25(3): 175–189.
Floridi, L. 2005. Is semantic information meaningful data? Philosophy and Phenomenological
Research 70(2): 351–370.
Floridi, L. 2010. The philosophy of information. Oxford: Oxford University Press.
Goldman, A. 1976. Discrimination and perceptual knowledge. The Journal of Philosophy 73:
771–791.
Goldman, A. 1999. Knowledge in a social world. Oxford: Oxford University Press.
Gomes, A. 2011. McDowell’s disjunctivism and other minds. Inquiry 54(3): 277–292.
Harms, W.F. 1998. The use of information theory in epistemology. Philosophy of Science 65(3):
472–501.
Hawthorne, J. 2005. The case for closure. In Contemporary debates in epistemology, ed. E. Sosa
and M. Steup, 26–43. Oxford: Blackwell.
Himma, K.E. 2007. The concept of information overload: A preliminary step in understanding the
nature of a harmful information-related condition. Ethics and Information Technology 9:
259–272.
Jäger, C. 2004. Skepticism, information, and closure: Dretske’s theory of knowledge. Erkenntnis
61(2–3): 187–201.
Lewis, D. 1996. Elusive knowledge. Australasian Journal of Philosophy 74: 549–567.
McDowell, J. 1995. Knowledge and the internal. Philosophy and Phenomenological Research 55:
877–893.
Neta, R. 2002. S knows that P. Noûs 36: 663–681.
Neta, R. 2003. Contextualism and the problem of the external world. Philosophy and
Phenomenological Research 66: 1–31.
Nozick, R. 1981. Philosophical explanations. Cambridge, MA: Harvard University Press.
Pritchard, D.H. 2008. McDowellian Neo-Mooreanism. In Disjunctivism: Perception, action,
knowledge, ed. A. Haddock and F. Macpherson, 283–310. Oxford: Oxford University Press.
Pritchard, D.H. 2009. Wright Contra McDowell on perceptual knowledge and scepticism. Synthese
171: 467–479.
Pritchard, D.H. 2010. Relevant alternatives, perceptual knowledge and discrimination. Noûs 44:
245–268.
Shackel, N. 2006. Shutting Dretske’s door. Erkenntnis 64: 393–401.
Shope, R.K. 2002. Conditions and analyses of knowing. In The Oxford handbook of epistemology,
ed. P.K. Moser, 25–70. Oxford: Oxford University Press.
Chapter 11
Levels of Abstraction; Levels of Reality
Joseph E. Brenner
11.1 Introduction
Among the key issues in what van Benthem and van Rooy (2003) called the “lively
present stage of investigations of information” is the integration of its qualitative,
content-oriented and quantitative aspects. Theories of information as a process or an
operator, changing the states of receivers and embodying meaning, on the one hand,
and approaches that concentrate on how much information is communicated by
a message, on the other, coexist somewhat incoherently.
Hofkirchner (2009) among others has argued for the desirability of a unified
theory of information (UTI) that would encompass the different manifestations of
information processes. Such a UTI should be capable of balancing the apparently
contradictory properties of information – physical and non-physical, universal and
particular – without reduction. Its underlying principle should be “as abstract as
necessary but as concrete as possible at the same time.”
As an integral part of his Philosophy of Information (PI), in fact as its core strategy
for analyzing informational issues and solving information-related problems,
Luciano Floridi has constructed an epistemological notion of Levels of
Abstraction (LoAs), adapted from computer science. In applying LoAs in various
fields, Floridi correctly critiques other uses of ‘levels’ in philosophy (levelism),
especially the lack of a satisfactory concept of ontological levels.
This chapter approaches the problem of levels in the philosophy of information
from a novel perspective, namely, that of an extension of logic to complex real
processes, including those of information production and transfer. The proposed
non-propositional, non-truth-functional logic – Logic in Reality; LIR (Brenner
2008) – is grounded in the fundamental dualism (dynamic opposition) inherent in
energy and accordingly present in all real phenomena. The picture of the world
that is used is one of different, physical levels of reality, to all of which LIR applies.
As Capurro (1996) notes, technology is “non-neutral”, and hence
LIR is appropriate to it, rather than standard logics that are virtually required to be
topic-neutral and context-independent.
In Floridi’s method of abstraction, a typed variable consists of a
uniquely-named conceptual entity (the variable) and a set, called its type, consisting
of all the values that the entity may take. An observable is an interpreted typed
variable, that is, a typed variable together with a statement of what feature of the
system under consideration it represents. The additional key notion is that of behavior
of a system that defines the relationships holding between observables. Behavior at
a given LoA is a predicate whose free variables are those observables.
Being an abstraction, an observable does not necessarily result from quantitative
measurement or empirical perception. The feature of the system under consideration
may be a physical magnitude or an artifact of a conceptual model, constructed
for the purpose of analysis. Roughly, a LoA can account for the behavior of a
discrete system, describing the latter in a formalism that corresponds functionally
to that of differential calculus in analog systems. The output of a LoA is a model of
the system, comprising information, whose amount is lower at higher levels.
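These definitions can be sketched in code. The toy traffic-light system below, and all names in it, are our own illustration of the notions of typed variable, observable, and behavior, not Floridi’s notation:

```python
# A toy rendering of Floridi's method of abstraction (illustrative only).
# A typed variable pairs a name with its type: the set of values it may take.
# An observable is a typed variable interpreted as a feature of the system.
# A behavior at a LoA is a predicate over the observables of that LoA.

from dataclasses import dataclass

@dataclass(frozen=True)
class Observable:
    name: str          # the uniquely-named variable
    type_: frozenset   # its type: all values the variable may take
    feature: str       # which feature of the system it represents

# A LoA for a traffic light: a single observable with a three-valued type.
colour = Observable("colour", frozenset({"red", "amber", "green"}), "lamp shown")

def behavior(colour_value: str) -> bool:
    """Behavior at this LoA: the light shows exactly one admissible colour."""
    return colour_value in colour.type_

# The 'model' output by this LoA is the set of states the behavior admits.
model = {v for v in colour.type_ if behavior(v)}
```

Here the LoA consists of a single observable; a richer LoA would collect several observables and let the behavior predicate relate them.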
For Floridi, the purpose of introducing Levels of Abstraction and their combination
into Gradients of Abstraction (see Sect. 11.4) as a method is to bring additional
rigor into theories of information and the systems, models and structures that can be
constructed from experiential data. Floridi limits his discussion, however, to LoAs
as epistemological and avoids the question of whether the method of abstraction
used may be exported to, especially, ontological contexts. Rather, he defends a
version of epistemological levelism that is compatible with criticisms of other
forms of levelism. This position leaves open the option, however, that Floridi’s
constructionist view of information might be supported by an interpretation of
ontological levels that does not suffer from the weaknesses of the levelism he
correctly critiques (cf. Sect. 11.3). I will show that in fact application of Floridi’s
Levels of Abstraction (LoAs) to informational issues can be supported by a concept
of ontological levels of reality (LoRs) based on LIR, defined in terms of the different
but isomorphic laws applicable to them.
The concept of Levels of Abstraction can then be seen as a component of a
broader theory of information and information technology in which LoAs coexist
and interact with Levels of Reality. Such a joint theory might provide additional
explications of the properties of informational entities and of the behavior of the
informational component present in all phenomena. In this chapter, I claim that
Logic in Reality provides an interpretation of the ontological content and properties
of Levels of Reality that accomplishes this objective.
11.1.4 Outline
Since Logic in Reality is both relatively unfamiliar and is the framework in which
all the subjects in this chapter will be discussed, I give first a brief outline of
it in Sect. 11.2, as a complete but non-standard logic, including its approach to
information. In Sect. 11.3, I return to Floridi’s critique of ontological levels and
discuss the LIR categorial ontology and conception of ontological levels of reality.
These are contrasted with some different conceptions of the concept of Levels of
entity at a higher level of reality or complexity can take place at the point of
equilibrium or maximum interaction between the two.
LIR should be seen as a logic applying to processes, in a process-ontological
view of reality (Seibt 2009), to trends and tendencies, rather than to ‘objects’ or the
steps in a state-transition picture of change. Processes are described formally
as transfinite chains of chains of chains, etc., of alternating actualizations and
potentializations of implications, which, together with the other logical operators,
conjunction and disjunction, are considered real processes themselves. The directions of change
are either (1) toward stable macrophysical objects and simple situations, the result
of processes of processes, etc., going in the direction of a “non-contradictory”
identity or diversity; or (2) toward a state of maximum contradiction (T-state for
included third term) from which new entities can emerge. LIR is, therefore, a logic
of emergence, a new non-propositional, non-truth-functional logic of change. There
is an interesting connection to be explored between the LIR conception of potential
and Floridi’s use of ‘virtual’ information to by-pass (my term) standard deduction
(Floridi 2011, p. 171).
Standard logic underlies, rather, the construction of simplified models which fail
to capture the essential dynamics of biological and cognitive processes, such as
reasoning (Magnani 2002). LIR does not replace classical binary or multi-valued
logics but reduces to them for simple systems and situations. The interactive
relationships within or between levels of reality to which LIR applies are character-
istic of entities with some form of internal representation, biological or cognitive.
In contrast to standard logics, LIR has no difficulty in accepting inconsistency,
interpreting it as a natural consequence of the underlying oppositions in physical
reality. Many if not most of the problems in the (endless) debate about the nature
of change, as pointed out by Mortensen (2008), seem to require a fundamental
inconsistency in the world, which LIR naturalizes. Logic in Reality, then, is an infor-
mation system that is not “brittle, like a classical logic system” (Floridi 2011, p. 161)
in the presence of an inconsistency. Inconsistency in the former is not only not as
destructive as in the latter, but is accepted as an essential part of its ontology.
Floridi distinguishes information as reality (e.g. as patterns of physical signals, which are neither true nor false), also known
as environmental information; information about reality (semantic information,
alethically qualifiable); and information for reality (instructions, like genetic infor-
mation, algorithms, orders, or recipes).
Many extensionalist approaches to the definition of information as reality or
about reality provide different starting points for answering the question of what
information is, but the broad theory of information proposed by Floridi requires
an understanding of the properties and role of information at all levels of reality,
in all entities. Whatever contributes to this understanding must accordingly
be valuable for philosophy in general, and I propose this chapter as a clarification
of the relevant ontological properties of information.
The definition of information that is most congenial to LIR was made by
Kolmogorov (Mindell and Gerovitch 2003) to the effect that information is any
operator which changes the distribution of probabilities in a given set of events.
This is quite different from his well-known contribution to algorithmic information
theory, but fits the process conceptions of LIR. In LIR, logical elements of real
processes resemble (non-Kolmogorovian) probabilities, and the logical operators
are also processes, such that a predominantly actualized positive implication, for
example, is always accompanied by a predominantly potentialized negative
implication. It is possible to analyze both information and meaning (higher level
information, cf. Brenner 2010a) as having the potential or being a mechanism to
change the informational context.
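Kolmogorov’s operator reading can be illustrated with a minimal sketch, in which receiving a signal maps one probability distribution over events to another (the Bayesian-style update below is our own illustration, not Kolmogorov’s formalism):

```python
# Information as an operator that changes the distribution of
# probabilities over a given set of events (illustrative sketch).

def apply_information(prior: dict, likelihood: dict) -> dict:
    """Update a distribution over events given a signal's likelihoods."""
    unnorm = {e: prior[e] * likelihood.get(e, 0.0) for e in prior}
    total = sum(unnorm.values())
    if total == 0.0:
        raise ValueError("signal rules out every event")
    return {e: p / total for e, p in unnorm.items()}

# Events: what is in the pen. A striped-looking signal shifts the weights.
prior = {"zebra": 0.5, "painted mule": 0.5}
signal = {"zebra": 0.9, "painted mule": 0.3}  # P(striped look | event)
posterior = apply_information(prior, signal)
```

On this picture the signal counts as information precisely because the posterior distribution differs from the prior.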
LIR thus can provide bridging concepts or ‘glue’ between the concept of semantic
information that Floridi defines at the lowest data level and the broader applications
that he looks forward to. It is also Floridi’s view that higher LoAs subsume aspects
of semantic information. LIR places this concept, and thus the “superconcept”
(Hofkirchner 2009) of information, in a naturalized physical, metaphysical and
logical context. Information is both a means to model the world and part of the
world that is modeled (by LoAs), and LIR describes the dialectic relation between
them. Floridi finds the concept that semantic information is true if it points to the
actual state of the world somewhat equivocal, but I believe it fits the LIR processual
logic, in that logical (in the LIR sense) information is the actual state of the world.
11.3.1 Levelism
The idea that reality is divided into levels that are more or less distinct and involve
different degrees of complexity has been proposed, in various forms, since antiquity,
but it has received more rigorous attention since the advent of quantum mechanics
and insight into brain functioning. Floridi uses the term levelism to reflect a tendency
toward the end of the last century to make philosophical descriptions in terms of
ontological levels of reality and epistemological levels of observation or interpretation.
1
In a later paper, Heil modified his identity theory to permit some interaction between his key
notions of dispositions and qualities.
at the macroscopic level, like that being explored at the quantum level, provides a
principle of organization or structure in macroscopic phenomena that has been
neglected in science and philosophy.
The formal ontology that I propose is a theory that provides non-mathematical
formulations of the properties and relations of certain categories of phenomena at
different levels of reality or complexity. It is intended to be systematic in the sense
of stating formally at least some aspects of what all entities are, as well as relating
all entities of a certain kind to one another. The approach I have taken is that of
Hartmann (Werkmeister 1990) who developed the categories of his new ontology
“step-by-step from an observation of existing realities”. The fundamental assertions
of an ontology are about being and have the character of universal constitutive prin-
ciples. In my analysis, the realities are the manifold dualities of physics, biological
science and the dialectics of human thought and behavior. I define a constitutive
principle[2] here as one that establishes the relation to an object of experience, while
at the same time incorporating the even more fundamental LIR Principle of Dynamic
Opposition (PDO) that obtains throughout Nature.
The philosophy of LIR can thus be characterized as a non-naïve dualistic realism
that postulates a real, interactive, oppositional relation between all the classic dualities
when they are instantiated in reality. It is part of the new ontological turn in philosophy.
The LIR view, critical for any discussion of ethics and the origin of moral responsi-
bility, is that the world is ontologically deterministic and epistemologically indeter-
ministic, in the contradictorial relation suggested above.
Two kinds of levels physically exist in reality: those determined by (a) simple
macrophysical differences in a gravitational field (height) and (b) energy differ-
ences in quantum entities more complex than the quark or lepton. The notions
of levels of anything else, be it reality, complexity, abstraction or information, are
intellectual constructs, closely related to that of the emergence of the concepts,
phenomena or properties that are designated as “inhabiting” that Level of
Reality (O’Connor and Wong 2002). If the event, process or property is new, it
must also have an origin and/or be different in some fundamental way from that
origin, and/or from other entities designated as being in other levels. Since
the original discussions of the British emergentists, much debate has taken place
as to whether the entities at a new level have anything in common with the old ones,
posing the question of determinism.
The key issue in the discussion of Levels of Reality is not their number, but the
existence of ontological “intermediate” or “sub-levels” with real, significantly differ-
ent properties that are nonetheless tied together by intra- and inter-level interactions.
[2] Below and in Brenner (2008), I discuss the regulative aspects of the PDO.
The major problems in the notion of levels are to characterize (a) the relationships
that hold between the entities in a given level and between entities at different
levels; and (b) the theories proposed to account for such relationships.
I claim that the fundamental LIR ontology of energy enables a new, useful
interpretation of levels that cuts through much of the debate. I have formalized these
ideas further (Brenner 2008) in my Logic in Reality (LIR) as a Two-Level Framework
for Functional Analysis. In LIR, there are two types of tools for dealing with
complex interactive phenomena at the object- and meta-levels. For the structure
of theories and their inter-relations, in particular reduction, the PDO is used as a
metatheoretical methodological principle for looking at the relations between
entities in a domain of dualities or dichotomies, between either classes of entities
or two individual terms. For the structure of reality as revealed by physical and
biological science, PDO can be used as a quasi-natural law within the language of
the scientific theory itself.
Critical examples of interacting object level and meta-level entities, to which Non-
Separability applies in the LIR process ontology, are syntax and semantics; types
and tokens; data of theories and theories; theories and metatheories; and individuals
and groups. All are contradictorially related by the LIR axiom of the functional
association of any entity with its opposite or contradiction. Another relational
structure is that between processes or events and the explanations of those events.
According to LIR, any total separation between theoretic (epistemological) entities
and those of science is arbitrary, since the same object-level and meta-level relations
are involved in both. LIR refers to the non-separability of some pairs of those entities,
and their alternating actuality and potentiality, and states that both horizontal and
vertical part-whole relations are instantiated that follow this dialectics. LIR avoids the
difficulties resulting from classical mereology that closely mirrors classical binary
logic for the same reason as above: it is a restatement of the standard theory of classes
or sets as wholes and their elements as totally separated parts of those wholes.
LIR states that the relation of parts to wholes may be dynamic, that is, that parts
and wholes can share one another’s properties, in the sense that aspects of the
whole are potentialized in the parts, and aspects of the parts are potentialized in
the whole. The parts that constitute the content of the object level share properties
of the meta-level as a whole. At the level of physical individuals and groups, the
situation is the same: the group has some of the characteristics of the individuals
that comprise it and the latter have or have internalized aspects of the group.
The above discussion is based on the notion of a logic as instantiating the dynamic
opposition in energy and following the law of a logical included middle. It was first
proposed by Stéphane Lupasco (1987) and subsequently extended by Basarab Nicolescu.
The above attempt to answer the question: What is a level of reality? constitutes
what Poli terms an ‘objectual’ approach (2006), to which he offers his own
categorical approach as an alternative (Floridi includes Poli’s description of levels
of reality and analysis of the complex relations that obtain both between and within
levels (Poli 2001) in his compendium of ontological levelism). The following
[3] I use the term ‘mechanism’ here in an informal descriptive sense without implying that computable models exist for all the transitions between levels. Indeed, my position is that such models for living organisms cannot be constructed.
methodological steps summarize Poli’s approach, which takes the work of Hartmann
(Werkmeister 1990) as its starting point[4]:
1. Distinguish three strata, rather than levels, of reality: the material, the psychological
and the social (the latter encompassing all phenomena of history, language,
science, morals, in fact, the entire body of human knowledge and ideation).
2. Define the hierarchical relations of dependence between strata.
3. Define the hierarchical relations within strata, organized into levels (or layers).
The layers within strata correspond to “levels of organization”, different structur-
ings of the same fundamental laws (Nicolescu 2002).
Each stratum has its own principles, laws and ontological categories, and there are
clear discontinuities between strata. This approach is also realistic in that it seeks to
extract the relevant categories directly from objects. Levels of reality are radically
different from levels of organization; the latter do not presuppose a rupture of funda-
mental concepts. Several levels of organization or hierarchies can belong to one and
the same level of reality, that is, sets of different structures governed by the same
fundamental laws. On this point Nicolescu, Poli and I are in agreement. (Poli (2006)
also suggests an index of complexity based on the relations between levels and
sub-levels of reality defined by Hartmann which I will not develop here.)
Poli makes the further important distinction between ontological levels of reality and
epistemological levels of interpretation. In his view, only some of the latter can be
taken as levels of reality, namely those that are grounded on ontological categories.
Levels of reality constrain the ‘items’ (JEB: real entities) of the universe as to which
types of causation and agency are admissible. A level of reality can be taken to be a
level of interpretation endowed with an appropriate web of causes or an appropriate
type of agency. One might say that this concept offers a relation between Levels of
Abstraction and Levels of Reality, but it remains too abstract, and Poli admits that in
his approach, “the links connecting together the various levels of reality are still
unknown”. I have suggested above the LIR view of those ‘links’.
In a subsequent dialogue with Nicolescu (Poli 2010), Poli further emphasized the
categorical aspects of his approach, stating that the main reason for distinguishing
different levels is to identify “the entirely different new categorical series” that may
be needed for their respective analysis. In criticizing the grounding of Nicolescu’s
theory in the logic of energy (that of LIR), Poli stated that the logic appropriate for
his view of levels of reality was an intuitionist logic, which maintains an unmodified
principle of non-contradiction. This logic is adequate for the entities of classical
ontologies and their categories, but it does not describe real ontological levels in
the LIR sense, that is, involving contradictorial interactions. For example, the
tendencies in and between levels toward physical homogeneity or biological
heterogeneity are not independent but are related as discussed above.
[4] Hartmann’s “fourth law” of categorical relationships states that “each individual category implies all the others in the same stratum, where ‘implication’ does not mean standard logical implication, but is an ontic relationship basic to that stratum.” This is close to the LIR view of implication as a real process.
In summary, Logic in Reality offers a principled way of using some of the insights
of several approaches to levels, without conflating them. Let us now return to
the Floridi approach and discuss both his Levels of Abstraction and his Levels
of Organization in relation to the conception of ontological Levels of Reality (LoRs)
I have outlined.
11.4.1 Definitions
Floridi proposes the method of levels of abstraction (LoAs) “as a more inter-subjective,
socially constructible, dynamic and flexible way to further an approach to the
knowledge of reality that is still Kantian”. In Floridi’s terms, this is a step away
from internal realism (the kinds, categories and structures of the world are only a
function of our conceptual schemes), but not yet a step into external or metaphysical
realism (the kinds, categories and structures of the world belong to the world
and are not a function of our conceptual schemes, either causally or ontologically).
If necessary, it might be called liminal realism.
Going beyond the overview in Sect. 11.1.3 above, I note that Floridi further
defines the input of a LoA as consisting of the system under analysis, comprising
a set of data; and its output is a model of the system, comprising information.
The quantity of information in a model varies with the LoA: a lower LoA, of greater
resolution or finer granularity, produces a model that contains more information
than a model produced at a higher, or more abstract, LoA. Thus, a given LoA provides
a quantified commitment to the kind and amount of relevant semantic information
that can be extracted from the system. The choice of a LoA pre-determines the
type and quantity of data that can be considered and hence the information that can
be contained in the model. Knowing at which LoA any system is being analyzed
means knowing the scope and limits of the model being developed.
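The relation between a LoA and the information its model can contain may be rendered as a toy sketch; the system, the observable names and the values below are invented for illustration, not part of Floridi's formalism. The chosen LoA fixes which observables survive into the model, and the finer (lower) LoA yields a model containing more information than the more abstract one.

```python
# Illustrative sketch only: a LoA modeled as the set of observables it admits.
# The system under analysis: a set of raw data (values are invented).
system = {"temperature_c": 21.4, "humidity_pct": 55, "pressure_hpa": 1013}

loa_high = {"temperature_c"}                                 # more abstract
loa_low = {"temperature_c", "humidity_pct", "pressure_hpa"}  # finer granularity

def model(system, loa):
    """The model produced at a LoA: only the admitted observables survive."""
    return {k: v for k, v in system.items() if k in loa}

# The lower, finer-grained LoA produces a model with more information.
assert len(model(system, loa_low)) > len(model(system, loa_high))
```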
In the method of Levels of Abstraction, Floridi notes the following as important
ways to speak about the levels of analysis of a system:
1. Levels of explanation (LoEs) support an epistemological approach and do not
really pertain to the system or its model, but provide a way to distinguish between
different epistemic approaches and goals. In the LIR Two-Level Framework
for Analysis, it is not necessary to maintain an absolute dichotomy between
explanandum and explanans, as real processual entities (Brenner 2008), but this
issue will not be discussed further here.
2. Levels of Organization (LoOs) support an ontological approach, according to
which the system under analysis is supposed to have a (usually hierarchical)
structure in itself, or de re – its ‘Organization’ – which is allegedly captured and
In all descriptions of levels, not excluding those in this chapter, it is often difficult
to say what property or scalar or vector quantity distinguishes them. Floridi indicates
that LoAs can be discrete or analog, more or less abstract or concrete, can have a
higher or lower behavioral structure depending on the complexity of the relations
involved, or can differ in granularity. The concept of Gradients[5] of Abstraction
(GoAs) implies that different LoAs are of different kinds that differ by some parameter
or value, which may be, but does not have to be, their complexity, e.g., disjoint GoAs
(whose views are complementary) and nested GoAs (whose views provide successively
more information). One GoA may include a different number of LoAs, one or many.
As noted above, however, what is essential is not the number of Levels of
Abstraction or Levels of Reality or Complexity that it is convenient to designate, but
their fundamental characteristics or properties. LIR thus supports Floridi’s statement
that the assumption that reality must be digital/discrete (grainy) or continuous/
analog (smooth) is not justified. “Digital and analog are features of the LoA modeling
the system, not of the modeled system in itself.” This statement clears the ground for
[5] The concept of ‘gradient’ itself is suggestive. I feel that we are dealing with an epistemological ‘field’ that is something like a physical energy gradient. Albeit only metaphorically, the GoA points toward the non-separability I have proposed between the epistemology and the ontology of LIR.
Informational Structural Realism (ISR) that treats the ultimate nature of reality as
relational. In the same way, the fundamental principle of LIR leads to a contradic-
torial concept of reality which is both continuous and discrete, and in which the
relations between entities are as important as the entities (relata) themselves.
The fundamental principle of LIR leads to the conclusion that Levels of Abstraction
and Levels of Reality are not totally distinguishable or separable. There is always
something real about a level of abstraction, as a perspective, a method or a stance,
and always something abstract about a level of reality, even if one or the other
aspect predominates at a particular time. This can be seen also in the tension
in Floridi between the conceptualization of the method of Levels of Abstraction and
the experiential use of that method ‘in reality’.
In Floridi’s critique, metaphysics, when used as a negative label, is what
is done by ‘sloppy reasoning’ without taking into consideration, at least
implicitly, the level of abstraction at which, and hence the purpose for which, a
theory is being developed. “Metaphysics is that LoA-free zone where anyone
can say anything without fear of ever being proved wrong, as long as the basic
law of non-contradiction is respected.”
There is no place in the LIR picture of reality for a “basic law of non-contradiction”.
Rather, there is a principled theoretical relation possible between experience, in
which the evolution of contradictorial components is inferred, and the model
which is constructed in the process of employing the method of Levels of Abstraction
(MLA). Clearly, the model is not the experience, but LIR defines rules for the
evolution of that experience. The model can be seen as the result or consequence of
the MLA being a selection process not unlike a Husserlian bracketing, in which
important elements are (temporarily) set aside, without disappearing totally.
Model and reality constitute a dialectically related pair, with one or the other
predominating at any time, according to the Principle of Dynamic Opposition (PDO)
as a scientific principle. From this perspective, the MLA functions as a Kantian
regulative principle for LIR, in the sense of Cassirer (Brenner 2008): “A scientific
principle fulfills a regulative task of systematizing and conferring order on empirical
knowledge, while being an integral part of that knowledge”. The resulting
metaphysics is no longer a domain where “anything goes”.
In the definition of a Gradient of Abstraction, a surjective function guarantees
that a relation exists and can be described between the observables at the LoAs the
GoA “contains”. LIR postulates, on the other hand, that a given complex process
entity has a contradictory counterpart. A joint method would start as Floridi pro-
poses, by first stating explicitly the Levels of Abstraction of interest and their grouping
into a Gradient of Abstraction, as a rigorous method of limiting the domain of
analysis, and then making inferences about the behavior (evolution) of the elements
in a contradictorial, interactive process.
The following concepts are examples of where and how Floridi’s theory of LoAs
can be placed into correspondence with and supported by Logic in Reality:
Internal realism as defined in Sect. 11.4.1 is a basically anti-realist position. The realism
of LIR is external or metaphysical, but it can accept the existence of an intermediate
epistemological domain. This intermediate domain, which is that of Levels of
Abstraction that Floridi designates as liminal, can be considered to overlap or interact
dialectically with the ‘external’ domain.
Liminal realism is thus related to the informal description of LoAs as interfaces.
The description of Levels of Abstraction in Sect. 11.4 above is the formal description,
but the informal description is to look at LoAs as being conceptually positioned
between data sources and the information spaces of an agent, a ‘place’ where indepen-
dent systems meet, act on or communicate with each other. In this domain, the LIR
description of the dynamics of the processes involved in the ‘movement’ across the
interface would seem appropriate.
[6] LIR is thus clearly an anti-representationalist theory.
but the operation of the Principle of Dynamic Opposition avoids the problems
associated with standard Identity Theories of Mind.
Further points of convergence, adapted from Brenner (2010a), are outlined in the
next two sections.
Floridi’s position is that the ultimate nature of reality is informational. It thus makes
sense to select Levels of Abstraction (LoAs) that commit our theories to a view of reality
as mind-independent and constituted by structural objects that are neither substantial nor
material but at least informational. The ‘at least’ is my suggestion, which, without arguing
the entire case, would allow for a unified theory in which theories of LoAs and LIR
interact, both characterizing the structures of reality seen as dynamic processes.
Floridi’s Informational Structural Realism (Floridi 2008a) is a version of Ontic
Structural Realism (OSR) (cf. Ladyman and Ross 2007) that supports the onto-
logical commitment to a view of the world as a totality of informational objects
dynamically interacting with each other. I refer the reader to the historical devel-
opment of OSR as a response to the problems of naïve Scientific Realism (SR), the
anti-realist empirical critique of SR, and the limitations of simple Structural Realism
consequent on its primarily mathematical orientation in Floridi (2011).
ISR provides an ontology applicable to both sub-observable and to observable
structural objects by translating them into informational objects, defined as cohering
clusters of data, not in the alphanumeric sense of the word, but in an equally common
sense of differentiae de re, i.e. mind-independent, concrete points of lack of uniformity.
These cohering clusters of data as relational entities are the elementary relata
required by Floridi’s modified version of OSR. Thus, the structuralism in question
here is based on relational entities (understood structurally) that are particular, not
on patterns that are abstract and universal.
Another area of convergence, then, as noted in Brenner (2010a), is that, as Floridi
makes clear, the interpretation of structural objects as informational objects is
not meant to replace an ontology of concrete things (better, processes) with one of
virtual entities. By conceptualizing concrete differentiae de re as data structures
and hence as informational objects, he defends a version of structural realism that
supports ‘at least’ an irreducible, fundamental dualism as a more correct description
of the ultimate nature of reality.
My claim is that the epistemological point at which Floridi has arrived is the
ontological foundation of Logic in Reality as a dualist metaphysics, grounded, as
noted, in the self-dualities of quantum entities and the thermodynamic dualities
of our world of experience. LIR is thus compatible with the informational portion of
Floridi’s approach, and is available, so to speak, to offer insights into the dynamics
of processes at higher Levels of Abstraction and Organization.
The method of LoA is an efficient way of making explicit and managing the
ontological commitment of a theory. As stated by Floridi, ISR supports the adoption
just living systems in general, are raised to the role of agents and patients of any
action, with environmental processes, changes and interactions equally described
informationally.
Floridi says that he is not “limiting the analysis to (veridical) semantic contents –
as any narrower interpretation of IE, as a microethics inevitably does”. Floridi goes
beyond, here, a definition of information as solely meaningful, truthful and well-formed
data. This statement justifies, in my opinion, the use of Floridi’s conceptions as a
basis for a discussion of ethical issues at higher Levels of Abstraction, to which the
application of Logic in Reality and its concept of Ethical Information (Brenner 2010a)
may be useful (see also Marijuan 2009).
For LIR, the respect due to informational entities is a logical consequence of
our general dialectic relationships to “external” objects, and to ourselves as patients
as well as agents who have internalized these relationships. As Floridi assures us,
the minimalism advocated by IE is only methodological. Its intent is to support the
view that entities can be analyzed by focusing on their lowest common denominator,
represented by an informational ontology.
Logic in Reality operates at such a higher LoA[7] since it uses a definition of a moral
agent from a process standpoint rather than as a transition system that, like Floridi’s,
sees change as discrete steps from one state to the next. All the entities considered
by LIR are interactive and adaptable in Floridi’s sense, but they are not autonomous,
in any case not completely so. LIR accepts that higher level entities, in particular
human beings, share the basic informational aspects of their existence with all
entities, through their minimal common ontology. Other Levels of Abstraction
can then be adduced to deal with more human-centered values.
11.5.1 Hierarchies
[7] By Floridi’s definition, the level should be more abstract and involve less semantic information, but I would argue that this is offset by the increased functionality of the information (information-as-operator).
hierarchically organized. Hierarchy theory is neither more nor less than another
epistemological method of modeling events and interactions in the material
world.
Salthe defines a compositional hierarchy and a subsumption hierarchy
which differ in the following ways: (1) the level considered the ‘focal’ level; (2) the
kinds of complexity, intensional or extensional, which they embody; and (3) the
categorization of the elements in each (part-whole, classes-sub-classes) and their
conceptual evolution, that is, how new levels can be seen as appearing in the con-
struction, by interpolation and emergence respectively.
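The contrast between Salthe's two hierarchy types can be sketched informally in code; the biological examples and the helper function are my own illustration, not Salthe's formalism. A compositional hierarchy chains part-whole relations, while a subsumption hierarchy chains a transitive class-subclass ("is-a") relation.

```python
# Toy rendering of two hierarchy types (examples invented for illustration).
part_of = {"cell": "organ", "organ": "organism"}   # compositional: part-whole
is_a = {"lizard": "reptile", "reptile": "animal"}  # subsumption: class-subclass

def ancestors(relation, item):
    """Walk up a hierarchy, collecting everything above the given item."""
    chain = []
    while item in relation:
        item = relation[item]
        chain.append(item)
    return chain

# Transitivity of subsumption: a lizard is a reptile, hence also an animal.
assert ancestors(is_a, "lizard") == ["reptile", "animal"]
# Composition also chains upward, but the relation read off is "part of",
# not "is a": a cell is part of an organ, not a kind of organ.
assert ancestors(part_of, "cell") == ["organ", "organism"]
```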
Floridi’s view of the nesting of LoAs would seem to place them in the category
of synchronic compositional hierarchies. LIR has the properties of a subsumption
hierarchy since its informational relations are definitely transitive, and in fact
Salthe also uses the term operator to refer to the causal properties of information.
Salthe talks about “interpolation of a new level” when one goes from the real
world to the conceptual world (“a hierarchy is a conceptual construction”). In the
real world, it is new entities that emerge at another level of reality or complexity.
Salthe’s ‘included level’ is the epistemological equivalent or projection of a new
ontological entity.
According to Floridi, LoAs can be connected to form broader structures of
abstraction, going from linear hierarchies of abstractions to nets of abstraction.
Similar non-linear hierarchies of Gradients of Abstraction are possible, where the
relation is more complex than nesting. These constructions, however, remain in
the epistemological domain.
I conclude that hierarchical concepts and Levels of Abstraction can both be
considered as ‘pointers’ or ‘meta-pointers’ to reality that answer the question where.
‘Where’ means: where in the evolving real world the LIR Principle of Dynamic
Opposition is in operation. There is no problem in using standard set and category
theory (e.g., in defining nesting, classes and sub-classes, etc.) with regard to hierarchies
and LoAs because they do not have their own dynamics; they ‘point to’ where the
dynamics are.
in which emergent behavior, while real, does not necessarily correspond to what is
found empirically. There is an expression here, in another form, of the Gödel
principle: if a system is real, it cannot be modeled completely, and if it is modeled
completely, it cannot be real. As pointed out by Minati for collections of birds or
other creatures (swarms), neither the behavior of the group nor that of any individual
in it can be predicted in reality.
From the LIR standpoint, the difference between Levels of Abstraction and
Levels of Logical Openness is in their degree of approach to reality. Although the
Minati approach requires the participation of an external observer, exercising his
competence to effect classifications and analyses at different levels, this participation
falls short of an actual interaction, as in a process of information exchange, in which
the observer is physically involved.
References
12 The Floridian Notion of the Information Object

Steve T. McKinlay
12.1 Introduction
Ontological questions are questions about the nature, existence or reality of objects.
And whilst there is a deceptive air of simplicity about the most basic ontological
question,[1] “What is there?”, the equally simple and somewhat obvious answer,
“Everything”, leaves us somewhat unsatisfied. Obvious controversies arise when a
scientist or philosopher argues that there is something or other which she purports
exists, to which I or another scientist or philosopher would not agree. Thus, with
regard to questions of ontology, Quine reminds us that “there remains room for
disagreement over cases” (1953a, p. 1).
It’s perhaps no coincidence that the Object Oriented (OO) programming com-
munity has adopted a similar maxim to their own end. To the question, “What is an
object?” the OO analyst would also answer, “Everything”. Yet just what exactly an
“Object” is, is still by and large up for grabs, not only to ontologists, and informa-
tion theorists, but perhaps surprisingly to OO programmers themselves who, one
would have thought, had a mortgage on such terminology. Thus like information,
[1] This question was famously coined by Quine in his 1953a article “On What There Is”.
S.T. McKinlay (*)
School of Information Technology, Wellington Institute of Technology,
Buick Street, Petone, New Zealand
Faculty of Arts, Charles Sturt University, Wagga Wagga, NSW, Australia
e-mail: steve.mckinlay@weltec.ac.nz
[2] We are obliged to point out that Floridi does limit the scope of his adoption of OO concepts and theory by saying “OOP is not a viable way of doing philosophical ontology, but a valuable methodology to clarify the nature of our ontological components” (2004a, p. 5).
12 The Floridian Notion of the Information Object 225
Thus there are significant differences in the way Floridi talks about and wants to
utilise information objects, and the way in which an OO designer or application
uses OO objects and in the way their progenitors use object classes. Accordingly the
information objects’ unusualness is amplified by the way Floridi seems to want to
treat information objects, that is, as independent and external[3] objects in themselves,
almost as if they were something more than abstract and worthy of genuine
ontological status. It may be that this talk is merely a convenience or some kind of
metaphor about information objects; if indeed this turns out to be the case then our
job will be to clarify such talk.
Consequently this paper is about Floridi’s conception of the information object
and whether we can rightly confer ontological status upon such objects. During this
investigation I will consider the validity of using OO theory/terminology as a means
of “clarifying” the information object concept. As part of this investigation I will
argue that there appears to be a fundamental distinction between OO objects and,
from a wider perspective, the concept of the information object as discussed by
Floridi. It may be noted before long that I am something of a gentle nominalist with
regard to conceptual objects such as OO objects, their corresponding classes,
Floridian informational objects and the like. As such I will continue to draw upon
Quine’s ideas (as well as others) to support my arguments and will explain my
nominalist position shortly. I do see value, particularly with regard to Floridi’s
Information Ethics, in the notion of the information object, thus we shall see if we
can salvage the idea in the face of this critical analysis.
On the one hand, Floridi acknowledges questions surrounding the nature of
information as legitimate threads of enquiry (2004a, 2008b). If information is not an
independent ontological category then to which category could it be reducible? On
the other hand, if it (information) indeed does constitute a valid ontological category
then another problem emerges, just how does it relate to the objects to which it
usually refers? Such questions lead to enquiry vis-à-vis the nature of information
per se, its relationship with meaning and its status as a natural human independent
phenomenon or entity.
Although Floridi relies upon the terminology and the conceptual framework that
is representative of the OO programming and design paradigm, his literal applica-
tion of the concept of the information object differs considerably from the service
the OO object[4] is put to within OO computing. That the concept warrants any
ontological status seems, on the face of it, at odds with the OO conception of the object
[3] By “independent and external” I mean something whose existence is independent of human thinking or perceiving, and therefore would exist whether or not (for example) humans existed, in other words “observer-independent”.
[4] When I use the phrase “OO Object” I am talking about the structure and function of objects in the service of some OO application, design or model. I want to distinguish this from the phrase “Information Object” which embodies the meaning explicit in Floridi’s IE and Informational Realism, and while Floridi uses OO terminology and method to explain his conception of the information object I want to show how, even if we do accept the “information object” concept, they cannot really be like OO objects.
226 S.T. McKinlay
5 Development of an object-oriented domain class begins with the systematic identification,
modelling and diagramming of all the entities or objects, attributes, operations and relationships
that an OO designer perceives to be important about a particular problem domain, be it a
business-oriented problem such as invoicing or accounts receivable, or a scientific problem such
as the modelling of biological or genetic systems. Individuals within the domain class are then
generalised and represented as object classes which characterise the structure and behaviour
common to all objects in that class.
12 The Floridian Notion of the Information Object 227
6 The idea of theory-ladenness comes from the philosophy of science, whereby scientific
observations are said to be theory-laden when the language and terminology used to describe
them is largely derived from the theory itself. Thus, discussions about the nature of information
using OO terminology could be accused of being non-theory-neutral. Having said that, it is
difficult to see how any discussion of information could fail to be influenced by various aspects
of culture and language.
objects of that type. Of course, instantiated objects are not really concrete; rather,
they represent individual members of the particular abstract class that defines
them. Thus the class reptile represents the properties common to all reptiles: cold
blooded, scaly skin and the like. While we typically identify (real-world) individuals
in any class via ostension – this particular lizard or that particular crocodile – OO
objects are always an abstract expression representing an instance of the defining
class. The distinction between abstract classes and concrete instantiated objects is
relative. By analogy, an object class is a predicate and an object a proposition.
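The class/instance distinction just described can be sketched in a few lines of Python (the class name and attributes here are hypothetical illustrations, not drawn from the text): the class plays the role of the predicate “x is a reptile”, while an instantiated object asserts that predicate of one stylised individual.

```python
class Reptile:
    """The class: a predicate characterising what is common to all reptiles,
    akin to "reptileness" rather than to any individual reptile."""
    def __init__(self, name):
        self.name = name
        self.cold_blooded = True
        self.scaly_skin = True

# The instantiated object is still an abstraction: a stylised stand-in for
# "this particular lizard", asserting the predicate of one individual.
lizard = Reptile("this particular lizard")
print(isinstance(lizard, Reptile))   # True: the "proposition" Reptile(lizard)
```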
Inheritance is a function built into an object’s structure whereby objects in a
hierarchy inherit the data elements and behaviours of their parent object class. Thus
subtypes of the reptile class, such as crocodile, inherit all the reptilian attributes and
behaviours and then add a few properties specific to crocodiles. Encapsulation is an
OO-specific mechanism whereby an object’s components are restricted from being
directly accessed. That is, the internal representation of the object is hidden from the
outsider’s view – just how the attributes and behaviours of the class reptile are
implemented within the OO application is not available for scrutiny by users of the
system.
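Both mechanisms can be illustrated with a minimal Python sketch (hypothetical class names; note that Python enforces encapsulation only by the leading-underscore convention, not by access control as in stricter OO languages):

```python
class Reptile:
    def __init__(self):
        self._cold_blooded = True      # leading underscore: internal state,
        self._skin = "scaly"           # hidden from the outsider by convention

    def describe(self):                # the public interface to the hidden state
        return f"cold-blooded, {self._skin} skin"

class Crocodile(Reptile):              # inherits the reptilian attributes and
    def __init__(self):                # behaviours of its parent class...
        super().__init__()
        self.snout = "long"            # ...and adds crocodile-specific properties

croc = Crocodile()
print(croc.describe())                 # cold-blooded, scaly skin
```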
The method is the mechanism by which all object interactions and manipulations
are performed. Note that under all self-respecting OO development environments
objects are instantiated via what is often termed a class constructor method; they
certainly do not pop into existence spontaneously just because some corresponding
real-world object needs to be represented. It is directly due to the abstraction
approach that one cannot have direct access to data that might exist inside an object;
instead a method, sometimes utilising a message (often called parameters in
programming), must be invoked. The message may also contain some identity
condition – that is, a way of identifying which object you wish to refer to. Furthermore,
object creation within OO is explicit: objects are created as per the needs of the
application or the application user.
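A short sketch makes the point about constructors and messages concrete (the `Account` class and its names are illustrative assumptions, not from the text): the object is created only by an explicit constructor call, and its internal data is reached only by invoking a method with a message.

```python
class Account:
    def __init__(self, owner):         # the class constructor method: objects
        self.owner = owner             # are created explicitly, never popping
        self._balance = 0              # into existence spontaneously

    def deposit(self, amount):         # a method invoked with a "message"
        self._balance += amount        # (the parameter 'amount'); the internal
        return self._balance           # data is never accessed directly

acct = Account("Ada")                  # explicit object creation
print(acct.deposit(100))               # 100
```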
Polymorphism refers to the ability of an object’s methods (which might also be
implemented as operators), as designed by an OO designer or programmer, to be
utilised in more than one way depending upon the context in which they are used.
Thus, consider the operator “+”: appropriately defined, we might use it to add
numeric data types, or, if presented with text strings, the operator may concatenate
them or append them to a list, depending upon the usage intended by the designer.
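The “+” example can be run directly in Python, where the same operator behaves differently by context, and where a designer may extend it to a bespoke type (the `Bag` class below is a hypothetical illustration):

```python
print(2 + 3)              # numeric addition: 5
print("con" + "cat")      # string concatenation: 'concat'
print([1] + [2, 3])       # list concatenation: [1, 2, 3]

class Bag:                # a designer may also define "+" for a bespoke class
    def __init__(self, items):
        self.items = items
    def __add__(self, other):              # the operator's context-dependent
        return Bag(self.items + other.items)   # meaning, chosen by the designer

print((Bag([1]) + Bag([2])).items)     # [1, 2]
```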
Persistent objects usually refer to real-world objects or states of affairs – something
the OO designer wishes the OO application to represent. Objects may persist, in
which case they are correspondingly represented in a database somewhere after the
application closes, or they may be temporary, existing only while the OO application
is running. Temporary objects are often associated with the general operation of the
application. For example, in a Windows application, scroll bars, dialog boxes,
menus and the like are all instantiated objects which exist, and are represented in
the memory of the machine (or server), only whilst the application is running.
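The persistent/temporary contrast can be sketched with Python’s standard `pickle` module standing in for “a database somewhere” (the `Customer` class and file name are illustrative assumptions): the serialised object survives beyond the run that created it, whereas an unserialised object would vanish with the process.

```python
import os
import pickle
import tempfile

class Customer:                        # a persistent, domain-level object
    def __init__(self, name):
        self.name = name

path = os.path.join(tempfile.gettempdir(), "customer.pkl")
with open(path, "wb") as f:
    pickle.dump(Customer("Ada"), f)    # "represented in a database somewhere"

with open(path, "rb") as f:            # recreated as if after the application
    restored = pickle.load(f)          # had closed and restarted
print(restored.name)                   # Ada
```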
There is, no doubt, much more to say about the OO object concept; however, this
brief overview should provide a starting point for our discussion and comparison.
It should be clear by now that OO object classes and their instantiated objects are
nothing much like their real-world counterparts. The OO class reptile is an abstract
representation, something more akin to “reptileness” than to any individual reptile,
and any instantiated OO reptile object is a highly stylised, conceptual and extremely
simplified model of a reptile – nothing like an actual reptile. Furthermore, each
instantiation of an OO reptile object, provided it carries the same attribute values, is
logically identical to every other similarly defined reptile object. This, of course, can
never be the case for actual reptiles of the same species.
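This “logical identity” of same-valued objects, impossible for actual reptiles, is easy to exhibit (the dataclass below is a hypothetical sketch): two instances with identical attribute values compare as equal even though they remain distinct tokens in memory.

```python
from dataclasses import dataclass

@dataclass
class ReptileObject:
    species: str
    cold_blooded: bool = True

a = ReptileObject("crocodile")
b = ReptileObject("crocodile")
print(a == b)    # True: identical attribute values, "logically identical"
print(a is b)    # False: yet still two distinct tokens in memory
```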
These concepts and rules are general to all OO systems; nevertheless (perhaps
surprisingly) consensus regarding the concept of the object within OO design is far
from settled. For example, after introducing some key controversies observed
in the OO literature, Date and Darwen ask, “So what exactly is an object? Is it a
value? Is it a variable? Is it both? Is it something else entirely?” Due to this alleged
ambiguity they go on to assert, “As a matter of fact, it is largely because of this
confusion over what objects really are that we prefer … not to use object terminology
at all, except in a few very informal contexts” (2000, p. 10). Instead, Date et al.
prefer to rely upon a vocabulary that draws upon predicate logic and set theory to
explicate their model of data representation.
Whilst the sheer variety and volume of OO programming and design literature
available no doubt contributes to the confusion, even across a small sample
inconsistent points of view emerge. Booch, for example (1994, p. 35), offers a simple
truism: “What we can agree upon is that the concept of an object is central to anything
object-oriented.” Martin (1992, p. 241), perhaps in the tradition of Berkeley,7
prefers, “An ‘object’ is anything to which a concept applies”, and “A concept is an
idea or notion we share that applies to certain objects in our awareness”. On a first
reading James Rumbaugh, one of the founding OO methodologists, appears to get
closer to the mark with, “We define an object as a concept, abstraction or thing with
crisp boundaries and meaning for the problem at hand” (1991, p. 21). Whilst this
approach is certainly useful when it comes to defining an object class model for a
well-defined problem domain to be implemented as an OO application or database,
it seems problematic as the basis for a universally favoured ontology. Indeed, quite
the reverse seems to be the case: concepts and ideas seem to have vague rather than
crisp boundaries. This is particularly so when the applicability of a predicate
to its subject is tolerant – when, for example, does a child cease to be a child and
begin being an adult? The fact is that most concepts do not have easily defined
boundaries; reality is not crisp. On Rumbaugh’s definition, most of reality would be
thrown into the too-hard basket with regard to object modelling. One might be
tempted to argue that all we need is a set of clear semantic rules applying to our
artificial OO-style language, and that we could thereby by and large eliminate such
vagueness and ambiguity. However, this approach points to a required preciseness
of meaning which naturally gives way to an appeal to definitions, and hence does
not appear to solve our problem.
7 George Berkeley famously argued in his Treatise Concerning the Principles of Human
Knowledge that material objects are merely ideas or concepts.
8 By actual representation I mean the physical or internal codification or implementation of the
data as it exists on disk.
are conceptually crisp. Indeed there are a great many OO programs that have been
written to represent electronic versions of the game of chess. Designing an object
class with the appropriate attributes and methods that represent the physical pawn is
a relatively trivial exercise from an OO programming perspective.
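To show just how trivial such an exercise is, here is a minimal Python sketch of a pawn class (the attributes and the single behaviour chosen are illustrative assumptions; a real chess program would of course need capture rules, promotion and so on):

```python
class Pawn:
    def __init__(self, colour, square):
        self.colour = colour           # the selected attributes of the piece
        self.square = square           # e.g. algebraic notation, "e2"

    def advance(self):                 # one (highly simplified) pawn behaviour
        file, rank = self.square[0], int(self.square[1])
        step = 1 if self.colour == "white" else -1
        self.square = f"{file}{rank + step}"
        return self.square

pawn = Pawn("white", "e2")
print(pawn.advance())                  # e3
```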
Such analysis, according to Floridi, relies upon another computational concept,
that of levels of abstraction (LoA). Put simply, we can discuss computational systems
at differing levels of abstraction. High conceptual levels often involve abstract
diagrammatic models. At lower levels we might imagine some written computer code
or logical statements in SQL or the like, and at even lower levels, strings of scalar
variables and combinations of bits and bytes. Thus, according to Floridi, “The choice
of LoA pre-determines the type and quantity of data that can be considered and hence
the information that can be contained in the model” (2008a, p. 16). The entire notion
of the information object is thus couched within the levels-of-abstraction concept.
Yet this reductionist construal still seems odd to me. I wonder what utility a
terminology specific to one particular LoA might have at other levels. Construing
objects at one level as OO-like informational objects, and then naming related
concepts, objects or structures and their relationships or linkages across varied
levels of abstraction using OO terminology, surely cannot give us any extra insight
into the ontological nature of the corresponding real-world objects. The OO-like
model provides value insofar as it is a representation of its real-world counterpart,
but OO-like concepts are usually reserved exclusively for the development of OO
applications and are in part a matter of convention. Indeed, there are various valid
levels of abstraction used within the OO paradigm – these include unified modelling
language (UML) constructs such as use-case diagrams, system sequence diagrams
and state machine diagrams – but each of these levels introduces structures and
models that have very clear abstraction-relationship rules linking them with domain
class diagrams and their consequent computational implementations.9 However,
nothing in Floridi’s literature suggests that informational objects exhibit similar
relationships or rules across analogous levels of abstraction. This seems to be
particularly the case between information objects and the real-world objects that
define them.
Thus, whilst I agree that a pawn (with all its requisite behaviours and attributes)
may be imagined, and that a pair of chess players with sufficient memories could
somehow visualise an entire chess game, this is not the same thing as a pawn being
represented in terms of OO theory, nor is it the same as a physical pawn. Of course
there are certainly some properties that each pawn representation shares, but there
are a great many differences also. Floridi, however, clearly takes a certain selected
set of properties of the pawn quite seriously, and these seem to be more definitive or
significant to him. By way of example, when I imagine moving a pawn on a chess
9 I do not intend to rehearse the literature on the development of OO models and their abstraction
relationships and rules across differing levels of abstraction; any review of the UML OO
modelling literature will suffice should the reader wish to read further. The UML Wikipedia page
is perhaps a good starting point.
This analysis, I believe, raises several questions. Firstly, how do the information
objects defined by things such as mental images relate, if at all, to all the other
information objects that represent real-world objects which, by ostension, we would
agree are part of the same class or set, other than by loose consensus? They surely
relate at some level, since they are all supposed to represent the same thing – but
this, I contend, is largely folk talk. What Floridi seems to be talking about with
regard to pawns is a relativity of identity of type. The OO method of defining a
pawn necessarily relies upon pawns being of the same type and thus sharing some
well-defined properties. Floridi relies upon this methodology to clarify his
information object and as such is quite serious about these particular properties. He
takes it that these properties do exist and that they are constituent properties of
pawns. Hence two different tokens, be they a cork or a carved piece of wood, have
the same properties – such properties constitute the tokens’ identity, transcending
the material properties, that
identity presumably being, in this case, pawnhood. It seems that what Floridi is
talking about with regard to the information object (taking “pawn” as the example)
is something like the universal concept of pawnhood, for which we already seem to
have a theory, albeit a controversial one.10
However, it is clear that while the cork pawn and the wooden pawn (as well as
the imagined pawn and the OO pawn) share some properties, these properties are
not identical across all pawns. Each set of properties relating to each pawn is
particular to that pawn.
Another issue raised is this: although the LoA approach is well proven within
OO and computer-systems design, this is because there exist very explicit rules
about how differing levels of abstraction are linked to one another. Such rules about
how real-world objects and their informational counterparts are linked via LoA do
not seem to be addressed by Floridi. Instead he offers us a conceptual discussion
regarding ontological commitment and levels of abstraction (2008a, p. 17).11 The
structural approach taken by Floridi works well for classes, and perhaps for the
abstract entities that are information objects, but we initially learn about pawns not
through abstract structural discourse but via ostension. Thus, while we might agree
upon what qualifies as a pawn, my class of pawns can be quite different from yours.
Two categories of problems come to mind with regard to the Floridian information
object. The first I will call the Methodological Problem. Whilst Floridi draws
heavily upon OO programming and design terminology in order to explicate his
informational realism (as well as to support the role of the information object within
his IE), the object concept itself within OO programming or design (issues of clarity
aside) is heavily contextualised and specific. The rules linking different levels of
abstraction – from high-level conceptual models to much lower-level compiled
object classes and programs, and their consequent representation at the disk level –
are very explicit. To extract the OO object concept from its own theoretical
environment leaves us wondering what explanatory value such discourse could have
outside metaphor and analogy. The application of OO concepts is specific to their
domain, and use of them outside this domain requires considerable ad hoc addition
and modification, which mostly ends in confusion and misunderstanding. This is
the case even within computing circles, a clear example of which can be seen in
recent attempts to apply OO concepts to the relational model of data.12
10 The problem of universals was originally discussed by Plato and Aristotle and has captivated
philosophy ever since. Universals are generally considered repeatable or recurrent abstract
entities that can be instantiated in individual objects; classic examples are qualities shared by
entities, such as two green chairs sharing the qualities of “greenness” and “chairness”.
11 Floridi does attempt to address ontological commitment to different LoA by attempting to
reconcile epistemic and ontological structural realism. However, since I am concerned here with
the relationships between information objects, OO concepts and real-world objects, that
reconciliation is outside the scope of this particular paper.
12 Date and Darwen (2000, p. 371) call this a “great blunder”, arguing that it both dilutes OO
concepts and undermines the conceptual integrity of the relational model.
The object concept seems to have such a wide range of applicability that it ends
up somewhat ambiguous. We have noted that Date and Darwen (2000) dispense
with OO terminology in favour of a vocabulary based on set theory and predicate
logic in their discussions of data representation. Certainly, on Rumbaugh’s
description, it seems difficult to understand how OO-type objects could represent
anything within the range of our normal understanding of language.
The second difficulty I am calling the Identity Problem. This issue relates to
where and how a Floridian information object’s data members are represented or
manipulated. While Floridi bases his notion of the information object on OO
concepts and terminology, his goal for the information object is clearly quite
different from that of an OO application or data-model designer. It could be that a
Floridian information object isn’t meant to be a referent in the same way an OO
object is. The issue seems to be a problem of identity, or of the correspondence
relations between the abstract information object and its real-world counterparts.
The Methodological Problem is only a problem when OO concepts are used
outside an OO context. OO programming, for all intents and purposes, “works”,
and all the philosophical anxiety in the world over just what an object might be, or
whether it accurately addresses ontological problems, doesn’t really matter –
certainly not to the OO designer or programmer who is simply solving what is
usually an information-management problem using a particular development/design
environment. In other words, OO theory, or at least parts of it, is instrumentally
reliable with regard to the creation of “working” object-oriented programs and their
corresponding object structures. The question of whether or not they are ever
directly representative of, or answerable to, any external truth about the real world
is not at issue. OO programs (or OO databases) are structured collections of
relatively simple facts, represented by sets of values and governed behaviourally by
simple computational procedures and functions. However, the truth of such facts –
or, more precisely, the correctness of the data representations – depends entirely
upon whether the values are (a) consistent with the rules (usually the “business
rules”) upon which the OO application has been designed, and (b) correspondent
with the external environment, that is, not erroneous. Of course (a) is exclusively
the responsibility of the OO designer, whereas (b) is almost certainly contingent.
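The asymmetry between the two conditions can be sketched in code (the `InvoiceLine` class and its rules are hypothetical illustrations): condition (a), consistency with the designed business rules, is checkable and enforceable by the program itself, whereas condition (b), correspondence with the external environment, is contingent and lies beyond anything the code can verify.

```python
class InvoiceLine:
    def __init__(self, quantity, unit_price):
        # (a) consistency with the designed business rules can be
        # enforced in code by the OO designer:
        if quantity <= 0 or unit_price < 0:
            raise ValueError("violates business rules")
        self.quantity = quantity
        self.unit_price = unit_price

# (b) correspondence with the external environment - whether three units
# really were shipped at this price - is contingent and cannot be
# enforced here; the values may satisfy (a) and still be erroneous.
line = InvoiceLine(quantity=3, unit_price=9.99)
print(round(line.quantity * line.unit_price, 2))   # 29.97
```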
Whilst there are philosophical disagreements regarding a precise definition of
just what constitutes an object, the points above and the distinction between (a) and
(b) are generally accepted within the OO community. An OO model is designed at
a conceptual level by an OO designer conceptualising object classes which are
somewhat representative of the external (data) environment or, as it is sometimes
termed, “the problem domain”. Moving through the requisite LoA appropriate to
OO modelling, the OO model is implemented internally at a logical level as a set of
definable structures specific to a particular vendor’s database or programmatic
development environment. The resulting OO application or database by and large
serves some clearly delimited business, engineering or scientific function.
Of course, Floridi is not saying OO objects are the same as the informational
entities of which he supposes reality is comprised. He is not trying to directly
co-opt the OO object concept into the service of ontology. Yet he does borrow and
rely upon OO terminology and, unless this is clarified, it seems reasonable to argue
that the information object can only suffer from the same ambiguities that the OO
object endures. Thus it is not at all clear that OO terminology can play any pragmatic
role in explicating a universal ontology, other than providing some kind of loose
model of the ontological components of his informational realism.
The Identity Problem is a more complex philosophical problem and concerns
how we can have knowledge of abstract entities, the role their composite attributes
play, and how and what such components reference. We shall consider this problem
in the next section, which examines the philosophical position with regard to
abstract objects and some objections.
We have already stated that philosophy views “abstract objects” as those which are
non-spatiotemporal in nature, but it also often considers abstract objects to be
causally inert – that is, they generally have no direct ability to affect the “real world”.
Whilst there are varied theories about the nature of abstract objects, for the purposes
of this essay we shall follow a rather generalist approach. Thus we acknowledge a
general claim that all objects fall into two exclusive categories: the concrete and the
abstract. Whilst there is much debate surrounding the concrete/abstract distinction,
following our general approach we will consider, for the most part, concrete objects
to be spatiotemporally extended (like tables, chairs and mountains) and abstract
objects not (like sets, prime numbers, predicates, fictional characters and
informational objects). Further, we assume no object can straddle the distinction.
We also assume that whilst an information object is itself abstract, it can represent
both concrete and abstract objects. Thus an information object is equally capable of
representing Wonderland’s Alice as it is Aoraki.13
The information object, however, seems to exhibit a special kind of abstractness.
As discussed in the previous section, Floridi’s use of OO theory to “clarify” the
notion of the information object leads us to conclude that he must think there can
be groups of things of the same type (pawns, for example); that these things are all
members of the same class (which is something like an OO class); and that this
sameness is taken in a strict sense – that is, these things share some important,
selected, identical properties. This must be the case, since OO programming requires
strict relations of identity with regard to object construction via class methods.
Whilst this notion of abstraction is taken for granted in the OO world, in philosophy
it does not seem to be quite so simple. I tend to think (as per the illustrated case of
pawns above) that notions of identity across groups of resembling objects are not
strict at all, and whilst it is difficult to avoid talk of like properties across individuals
13 Aoraki is the indigenous Māori name for Mt Cook, New Zealand’s highest mountain.
14 Generally the claim in this context refers to non-trivial or non-tautological facts. For example,
1 + 1 = 2 and “unmarried men are bachelors” qualify as facts but have no cause. (I thank Morgan
Luck for pointing this out to me.) We might note as an aside that Floridi and many others doubt
the informative nature of tautologies or necessary truths.
the rub: while the referent doesn’t point to the external thing it attempts to model –
in the case of our example above, the OO pawn – it does point, literally, to some
arbitrarily complex piece of data, and that is what the referent is physically
referencing. Furthermore, there are direct, explicit and traceable causal links
between the various levels of abstraction.
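This sense in which an OO referent literally points at a piece of data can be shown in a few lines of Python (the data and names are illustrative assumptions): the referent picks out a datum in the machine’s memory, not any real-world pawn.

```python
pawn_data = {"colour": "white", "square": "e2"}   # an arbitrarily complex datum
referent = pawn_data                              # the referent: a reference to
                                                  # that data, not to a real pawn
print(referent is pawn_data)                      # True: the very same datum
print(id(referent) == id(pawn_data))              # True: one location in memory
```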
But what does an information object point to? Clearly the intention is that it
either points to, or emerges from, the thing that it references. However, in the case
of an information object there is no implementable physical layer of abstraction (as
there is with an OO object) which provides mappings between logical representations
at higher levels of abstraction and the physical, or actual material, data representation.
Thus in the case of an information object the abstraction-mapping process is either
a behavioural or a cognitive process, relating some vague sensory input between the
extant object and the agent’s or organism’s experience of it. This experience will be
different for each organism: humans experience exposure to the sun, for example, in
an entirely different way from the way algae experience it – if we can even call it
that.
There is a final, related distinction I will make between OO objects and Floridian
information objects. In this section I draw upon an argument made by Deborah
G. Johnson (2006) regarding the distinction between natural phenomena or natural
entities (which according to Floridi can be explained as “dynamically interacting
informational entities”) and human-made entities, or what are often termed artifacts.
I first outline Johnson’s argument and then apply it to my general discussion.
Johnson’s paper develops the thesis that computer systems can be moral entities
but not moral agents. Although I don’t intend to discuss the main thrust and
conclusions of Johnson’s argument regarding the moral status of computer systems,
in developing her argument she highlights two distinctions which are salient to the
present discussion. The first is between artifacts – what would normally be
considered human-made entities – and those entities which occur naturally. The
second is between artifacts and technology. Although she concedes that these
distinctions are inherently problematic, in the sense that there is no sharp boundary,
they are nevertheless significant. Any “rejection or re-definition of these distinctions
obfuscates and undermines the meaning and significance of claims about morality,
technology and computing” (2006, p. 196), Johnson asserts.
The challenges are illustrated as follows: a stick used by a tribesman as a spear
to hunt an animal, for instance, is a naturally occurring object which has been used
as a tool. Thus the stick, whilst seen as a natural entity, is also utilised as a form of
technology. Newer technologies such as genetic modification and nano-molecular
technology may also appear to blur the line between nature and technology: just
which parts are naturally occurring and which are human-made, artifactual parts
may be difficult to assess. The only difference Johnson notes between biotechnology
and other types of technology, such as computer systems and the like, is the extent
to which they manipulate nature, or the level at which the manipulation occurs
(2006, pp. 196–197). Thus, while challenges can be made to the distinctions,
Johnson argues this doesn’t mean the distinctions are incoherent or untenable. On
another level, according to Johnson, these distinctions allow us to make sense of
questions about what kind of effect human behaviour has on the planet, versus what
is independent of human behaviour, i.e. nature.
Although absolute definitions of technology are problematic, Johnson, referencing
Heidegger, attempts to avoid some of the debate by simply arguing that technology
is a contrivance and inherently refers to human-made things. This of course includes
computer systems at both logical and physical levels. While Johnson confines the
term artifact to physical objects,15 I think we can extend her definition. Computer
systems are made up of both physical and logical components, and the logical
components are just as artifactual in nature as the physical or material components.
This is clear, since there are explicit design and development processes associated
with the creation of all logical components of computer systems. The resulting
objects have unambiguous designed functions, can be identified, and map to specific,
arbitrarily complex data representations on disk or in the memory of the computer.
They are, indeed, virtual artifacts.
Furthermore, Johnson points out that “technology is a combination of artifacts,
social practices, social relationships and systems of knowledge… sometimes called
socio-technical systems” (2006, p. 197). Thus artifacts (as components of socio-
technical systems) cannot, according to Johnson, exist without systems of
knowledge, social practices and human interactions and relationships. Artifacts are
created, distributed, utilised and have meaning only within the context of human
social activity (ibid.). Whilst there are differences between logical and physical
artifacts, both are clearly the result of human behaviour and decision processes.
Object-oriented objects are artifacts: they are nothing but the result of explicit
design processes by humans. Informational objects, by contrast, seem to exhibit
some of the characteristics of both natural entities and human-made artifacts.
Information objects, however, may be just a way of talking about and interpreting
the things and events around us – in other words, mental entities.
This points toward the contingency of information objects, since the existence of
such objects relies upon some vague correspondence between the object’s internal
structures – whether we describe such structures as “attributes”, “methods” and so
on – and their relationship with the real-world physical entity (table, chair, mountain
etc.) they seek to represent. We should point out that Floridi’s construal of the
information object seems to differ from the role classes play in a similar discourse,
15 Johnson (2006, p. 197): “A common way of thinking about technology – perhaps the
layperson’s way – is to think that it is physical or material objects. I will use the artifact to refer
to the physical object.”
where the information object seems to pick out individuals, the role of classes – to
use Plato’s metaphor – is our attempt to carve the beast of reality at its joints.
The problem becomes more complex when we include in the mix informational
objects representing abstract entities, the kind that nominalism would typically
reject. For example, there could be any number of imaginable information objects
representing anything under any possible interpretation. To paraphrase Hayaki
(2006, p. 81), who considers similar problems associated with contingent objects:
we are not counting actual possible physical objects; we are counting the ways in
which an object might be represented (by an information object) by any possible
agent.
The information object suffers from an identity crisis. Following Johnson,
identifying an information object as an independent entity requires us to separate
the object from its context; however, in doing so we are extracting it from the very
context that gives it its meaning and function. This appears to be a problem from
which the analysis of information qua information suffers in general.
Thus whilst I agree it does not make sense to ask the question, “Where are these
information objects you talk of Luciano?” Abstract objects such as information
objects do not exist in space, I do think it legitimate to ask, “How do everyday (concrete)
objects map to their information object counterparts?” This question is answered in
the OO case since there are clear and explicit abstraction relations between different
levels of the model. This level of detail seems obscure with regard to informational
objects. I hope I have made it clear that OO system entities are never isomorphic
with any kind of external reality (but informational objects are supposed to be). OO
objects are merely a more or less accurate model of reality. Indeed OO models are
pragmatic by nature – the goal is to solve a business, engineering or scientific prob-
lem, that is, a problem that can be adequately solved with an OO application. The
Floridian account, however, seems to suggest that an object qua information object does indeed reference the real-world object it purports to represent, but just how this works is not explained. Floridi argues that “the ultimate nature of reality is informational, that is, it makes sense to adopt a level of abstraction at which our mind-independent reality is constituted by relata that are neither substantial nor material (they might well be but we have no reasons to suppose them to be so) but informational” (2004b, p. 5).
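By contrast, the “clear and explicit abstraction relations” of the OO case can be shown in a few lines. The sketch below is purely illustrative (the Chair class and helper functions are hypothetical, not taken from the chapter): each level of an OO model declares how it maps to the level beneath it, down to the bytes eventually stored, and no level claims to be the real-world object it models.

```python
from dataclasses import dataclass, asdict
import json

# Conceptual level: a domain entity, with only the observables chosen
# for the problem at hand.
@dataclass
class Chair:
    legs: int
    material: str

# Logical level: the same entity as a language-neutral record.
def to_record(chair: Chair) -> dict:
    return asdict(chair)

# Physical level: the record serialised to bytes for disk or memory.
def to_bytes(record: dict) -> bytes:
    return json.dumps(record, sort_keys=True).encode("utf-8")

chair = Chair(legs=4, material="oak")
record = to_record(chair)
stored = to_bytes(record)

# Each mapping between levels is explicit and inspectable; nothing at
# any level claims to *be* the real-world chair it models.
print(record)  # {'legs': 4, 'material': 'oak'}
print(stored)  # b'{"legs": 4, "material": "oak"}'
```

On this sketch, the mapping between levels is a matter of explicit, inspectable code, which is precisely the level of detail that seems obscure in the case of informational objects.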
The problem is not that informational entities are not materially evident; neither are classes, yet classes are an essential part of the natural sciences. Quine’s nominalism, for instance, admits abstract objects such as classes, numbers, sets and the like into his physicalist ontology because science simply could not proceed without them – this in essence reveals Quine’s pragmatism. However, is such a
comparison any argument for the admission of Floridian information objects as a
legitimate ontological category? While most of us would agree that there are con-
crete objects in the world that are both substantial and material, the question remains
as to how seemingly mind-independent, non-material, abstract relata causally inter-
act with the material world. For this gentle nominalist at least Quine’s (as well as
Occam’s) suspicions are aroused.
240 S.T. McKinlay
12.5 Conclusion
Whilst OO programming and design was in its infancy, the philosopher Gaifman (1975, p. 329) summed up our predicament: “objects are notoriously theory-laden; an informative discussion of objects of this or that kind presupposes already a whole conceptual scheme”. That our conceptual scheme supports a notion of the information object isn’t surprising: we live in the information age – our economy, social structure and culture are virtually defined by information and information technology. This technology is implemented in computer systems that are developed using
OO development environments such as Java and C#. These environments are supported by a popular design methodology, UML, which explicitly lays out rules for mapping high-level conceptual models to implementable OO class models and eventually to computer code. The notion that the most basic primitive ontological category is informational by nature is somewhat attractive; it fits our current and popular world view, and it appeals to our natural desire to impose order upon our world.
Having said that, implementations of levels of abstraction within computing environments are explicit and functional. A fundamental and necessary property of OO objects is that they are referents, and this referential property is implemented directly via LoAs between various conceptual and physical layers, ending with bits mapped to hard disk or memory addresses. Identity relations with regard to OO objects are explicit and clear: OO objects of the same type, sharing the same attributes, are logically (and digitally) identical. Nature does not operate in anything like the way an OO application does. Floridi doesn’t say as much, but this does raise the question as to what extent a seemingly primitive natural kind, the information object, can be made clear by appeal to human-made artifacts. Careful derivation of semantic information from objects and states of affairs led to the development of the OO model. It doesn’t then make sense that the OO model can clarify nature in any way beyond loose metaphor.
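The point about identity can be illustrated with a minimal, hypothetical Python sketch (the Pawn class is mine, chosen to echo the chapter’s pawnhood example): two OO objects of the same type with the same attribute values compare as logically equal and serialise to bit-identical byte strings, yet remain distinct runtime instances.

```python
from dataclasses import dataclass
import pickle

# A hypothetical OO type: identity is fixed entirely by type and
# attribute values.
@dataclass(frozen=True)
class Pawn:
    colour: str
    square: str

p1 = Pawn("white", "e2")
p2 = Pawn("white", "e2")

print(p1 == p2)                              # True: logically identical
print(pickle.dumps(p1) == pickle.dumps(p2))  # True: digitally identical bytes
print(p1 is p2)                              # False: still two distinct instances
```

The equality here is explicit and rule-governed all the way down to the serialised bits, which is exactly the clarity that, on the argument above, the Floridian information object lacks.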
Bas van Fraassen advises us that “Theories with some degree of sophistication always carry some metaphysical baggage” (1980, p. 68); just as with hidden-variable theories in quantum physics, the hope is that carrying the baggage will eventually pay off. What I have done is present a fallible argument as to how informational objects (if we at least entertain the possibility of such things) do not seem to be much like OO objects. The ontological argument for such objects, and hence for informational realism, is ultimately metaphysical in nature; but since Floridi adopts a structuralist methodology, what we can salvage with some conviction are the properties and relations that are part of these postulated entities. Whether these are captured by universals (as in the property of pawnhood) or are ultimately unique to instantiated objects (what Armstrong (1989) called particulars) has not been addressed by this paper, but perhaps it introduces a topic for future investigation. It should be evident (with respect to my own declared nominalistic tendencies) that my analysis of these problems has been cautiously parsimonious with regard to postulated entities.
David Armstrong concludes his significant 1989 text Universals with a quote that seems so fitting that I adapt it here for my own purposes. The topic of informational realism is most certainly an intellectually fascinating one for those interested in what D.C. Williams terms “grubbing around in the roots of being” (1966).
Acknowledgments I am indebted to both John Weckert and Morgan Luck who read earlier versions
of this article and kindly provided many thoughtful and inspiring comments. I also graciously thank
Skye Bothma for her indispensable editing and formatting assistance; of course any remaining errors
or omissions are mine alone.
References
Armstrong, D.M. 1989. Universals: An opinionated introduction, Focus series. Boulder, Colorado:
Westview Press.
Booch, G. 1994. Object-oriented analysis and design with applications, 2nd ed. Redwood City, CA: Benjamin/Cummings.
Date, C., and H. Darwen. 2000. Foundation for future database systems: The third manifesto, 2nd
ed. Boston/Reading, MA: Addison-Wesley.
Floridi, L. 2002. What is the philosophy of information? Metaphilosophy 33(1–2): 123–145.
Floridi, L. 2004a. Open problems in the philosophy of information. Metaphilosophy 35(4): 554.
Floridi, L. 2004b. Informational realism. In IEG research report, ed. G.M. Greco. Oxford:
Information Ethics Group.
Floridi, L. 2008a. A defence of informational structural realism. Synthese 161(2): 219–253.
Floridi, L. 2008b. Modern trends in the philosophy of information. In Philosophy of information.
Holland: Elsevier.
Gaifman, H. 1975. Ontology and conceptual frameworks, part I. Erkenntnis 9: 329–353.
Goodman, N., and W.V. Quine. 1947. Steps toward a constructive nominalism. The Journal of
Symbolic Logic 12(4): 105–122.
Hayaki, R. 2006. Contingent objects and the Barcan formula. Erkenntnis 64: 75–83.
Himma, E. 2004. There’s something about Mary: The moral value of things qua information
objects. Ethics and Information Technology 6: 145–159.
Johnson, D.G. 2006. Computer systems: Moral entities but not moral agents. Ethics and Information
Technology 8: 195–204.
Martin, J., and J. Odell. 1992. Object oriented analysis and design. Englewood Cliffs/Upper
Saddle River, NJ: Prentice-Hall Gale.
Quine, W.V. 1953a. On what there is. In From a logical point of view, 1–19. Harvard: Harvard
University Press.
Quine, W.V. 1953b. Two dogmas of empiricism. In From a logical point of view, 20–46. Harvard:
Harvard University Press.
Quine, W.V. 1957. Speaking of objects. Proceedings and Addresses of the American Philosophical
Association 31: 5–22.
Quine, W.V. 1974. The roots of reference. La Salle: Open Court.
Quine, W.V. 1992. Structure and nature. The Journal of Philosophy 89(1): 5–9.
Rumbaugh, J., M. Blaha, W. Lorensen, F. Eddy, and W. Premerlani. 1991. Object-oriented modeling and design. Upper Saddle River, NJ: Pearson Education.
Salmon, W. 1998. Causality and explanation. Oxford: Oxford University Press.
van Fraassen, B. 1980. The scientific image. Oxford: Oxford University Press.
Williams, D.C. 1966. The elements of being. In The principles of empirical realism. Springfield:
Charles C. Thomas.
Wittgenstein, L. 1961. Tractatus logico-philosophicus. London/New York: Routledge.
Part V
Replies by Floridi
Chapter 13
The Road to the Philosophy of Information
Luciano Floridi
13.1 Introduction
There are places, like the small village where I live, that are difficult to find. They lie in remote locations, not well indicated on the map; few people have ever heard of them, and hardly anyone can tell you how to get there. There are places, like the university where I work, that are difficult to reach. They are so big that, if you are driving by GPS, their postcodes actually take you miles away from the
campus, to a mail deposit. Sometimes, I fear that the philosophy of information
that I have been working on combines the geographical problems of my home and
working places: difficult to find and hard to reach. This is why the invitation to
contribute to this volume is not only a great honour, of which I am fully aware, but
also a very welcome opportunity, for which I am deeply grateful. For it allows
me to map some less tortuous paths that, if followed, should help the reader to get
to the philosophy of information that I have in mind, and alert the same reader to
some wrong turns, potential pitfalls and misleading road signs that have side-tracked
more than a fellow traveller. Of course, being able to indicate more clearly how to
reach a place does not mean that the place itself is worth visiting. I believe that the
philosophy of information is the philosophy of our time properly conceptualised for
our time, but then you might expect this level of commitment on my side. I also
hope that the journey to reach it will be rewarding, but on this I can only rely on the
traveller’s experience. What I may say is that the view from here is very interesting
and shows an immense conceptual space still virgin. If you join me, you will see.
L. Floridi (*)
UNESCO Chair in Information and Computer Ethics, University of Hertfordshire,
de Havilland Campus, Hatfield, Hertfordshire AL10 9AB, UK
Faculty of Philosophy and Department of Computer Science, University of Oxford, Oxford, UK
e-mail: l.floridi@herts.ac.uk
I am indebted to Dodig Crnkovic for her very perceptive, informative and insightful
chapter. In several cases, I doubt I could have put things any better. Her analysis of
the method of levels of abstraction is remarkable, especially insofar as she correctly
sees that
Some critics feel uneasy with Levels of Abstraction in fear of ethical relativism, but the fear
is unfounded. Defining Level of Abstraction adds to our understanding of a model.
I could easily carry on (see for example her comparison between computational
modelling in IE and the use of a microscope in medical diagnostics) but the reader
will have grasped the point. This is a chapter from which I have learnt a lot, and it
provides a very good introduction to information ethics as I understand it.
My shortest reply to the chapter by Wolf, Grodzinsky, and Miller is that I agree
with them wholeheartedly. Their application of the method of levels of abstraction
is clever and instructive. A slightly longer reply might include the clarification of
a minor point.
This is the point I am not sure that I fully grasp: in what sense is LoAS an “addition” to, or an “expansion” of, the method of LoA? The authors themselves clearly indicate that LoAS is just a third Level of Abstraction, in which the “S” stands for “society.” It is constituted by “the set of observables available to an observer of society.” But this means that a LoAS does not really expand, extend, or add anything to the method; it is just part of its application. Of the unlimited number of
LoAs and combinations of LoAs into Gradients of Abstraction that are possible,
the authors have chosen a “societal” one. This is correct, in terms of applicability
of the method, and very useful, given the goals of their analysis in the chapter.
Yet presenting it as an extension of the method would be like describing 12 − 5 = 7 as an addition to, or extension of, the general method of subtraction.
As I wrote above, this is really a small clarification, which should not cast any
doubt on my full agreement with their work and conclusion:
this method offers a usable framework in the analysis and development of software
applications.
They are right. It seems to me the right method to approach the ethical and
epistemological challenges emerging in our information society.
things straight again. There is such a thing as fatal and irreversible conceptual
damage, and I started wondering whether trying to improve the chapter at all
costs might actually be a case of futile medical care. Luckily, this reminded me of a
fundamental distinction, which I shall exploit, at the risk of disappointing the reader.
Replying is a right, not a duty. As such, it does not have to be exercised. So, in
this case, I hope the reader will accept my apology for being unable to engage with
a text that I am unable to improve. Perhaps it is unfixable. Perhaps others more able
than me will do better. They are welcome to try, but my suggestion is to follow
Virgil’s advice to Dante in the Third Canto of Inferno, verse 51:
Non ragionam di loro, ma guarda e passa.
Let us not talk (reason) about them, just look and move on.
Instead of fixing the chapter, I shall try to explain the method in simple terms.
It seems to me to provide an intuitive and powerful approach. I hope the reader will
agree. But just in case some cynic were to suspect that the problem lies with the
hapless doctor, not with the hopeless chapter, let me invite anyone interested in
understanding what the method of levels of abstraction is, and how it can be applied
to ethical issues, to read the chapters in this book by Wolf, Grodzinsky, and Miller and by Dodig Crnkovic. They are critical, but definitely worth your time. And now, here
is the method again.
The latest formalisation of the Method of Abstraction can be found in Floridi
(2011). The terminology has been influenced by an area of Computer Science,
called Formal Methods, in which discrete mathematics is used to specify and
analyse the behaviour of information systems. Despite that heritage, the idea is not
at all technical and for present purposes no mathematics is required, for only the
basic idea will be outlined.
Let us begin with an everyday example. Suppose we join Anne (A), Ben (B) and
Carole (C) in the middle of a conversation. Anne is a collector and potential buyer;
Ben tinkers in his spare time; and Carole is an economist. We do not know the object
of their conversation, but we are able to hear this much:
A. Anne observes that it (whatever “it” is) has an anti-theft device installed, is kept
garaged when not in use and has had only a single owner;
B. Ben observes that its engine is not the original one, that its body has been recently
re-painted but that all leather parts are very worn;
C. Carole observes that the old engine consumed too much, that it has a stable
market value but that its spare parts are expensive.
The participants view the object under discussion according to their own interests,
which determine their conceptual interfaces or, more precisely, their own levels
of abstraction (LoA). They may be talking about a car, or a motorcycle or even a
plane, since any of these three systems would satisfy the descriptions provided by
A, B and C above. Whatever the reference is, it provides the source of information
and is called the system. Each LoA (imagine a computer interface) makes possible
an analysis of the system, the result of which is called a model of the system
(see Fig. 13.1). For example, one might say that Anne’s LoA matches that of an
owner, Ben’s that of a mechanic and Carole’s that of an insurer. Evidently a system
may be described at a range of LoAs and so can have a range of models.
A LoA can now be defined as a finite but non-empty set of observables, which are
expected to be the building blocks in a theory characterised by their very choice.
Since the systems investigated may be entirely abstract or fictional, the term “observ-
able” should not be confused here with “empirically perceivable”. An observable
is just an interpreted typed variable, that is, a typed variable together with a
statement of what feature of the system under consideration it stands for. It may be
qualitative or quantitative, digital or analog, continuous or discrete. An interface (called a gradient of abstraction) consists of a collection of LoAs. An interface is used in analysing some system from varying points of view or at varying LoAs.
In the example, Anne’s LoA might consist of observables for security, method
of storage and owner history; Ben’s might consist of observables for engine
condition, external body condition and internal condition; and Carole’s
might consist of observables for running cost, market value and maintenance
cost. The gradient of abstraction might consist, for the purposes of the discussion,
of the set of all three LoAs.
The Method of Abstraction allows the analysis of systems by means of models developed at specific gradients of abstraction. In the example, the LoAs happen to
be disjoint but in general they need not be. A particularly important case is that
in which one LoA includes another. Suppose, for example, that Delia (D) joins the
discussion and analyses the system using a LoA that includes those of Anne and
Carole plus some other observables. Let’s say that Delia’s LoA matches that of a
buyer. Then Delia’s LoA is said to be more concrete, more finely grained, or lower, than Anne’s and Carole’s, which are said to be more abstract, more coarsely grained, or higher; for Anne’s and Carole’s LoAs abstract away some observables which are still “visible” at Delia’s LoA. Basically, not only does Delia have all the information about the system that Anne and Carole might have, she also has a certain amount of information that is unavailable to either of them.
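The whole example can be sketched in a few lines of code (illustrative only; the variable names and the is_lower helper are mine, not part of Floridi’s formalisation): observables are interpreted typed variables, a LoA is a finite non-empty set of them, and one LoA is lower than another when it keeps all of the other’s observables “visible”.

```python
# An observable is an interpreted typed variable: here, a name paired
# with a type. A LoA is a finite, non-empty set of observables.
anne   = {"security": str, "storage_method": str, "owner_history": str}
ben    = {"engine_condition": str, "body_condition": str, "interior_condition": str}
carole = {"running_cost": float, "market_value": float, "maintenance_cost": float}

# A gradient of abstraction collects several LoAs on the same system.
gradient = [anne, ben, carole]

# Delia's LoA includes Anne's and Carole's observables plus another one.
delia = {**anne, **carole, "asking_price": float}

def is_lower(loa_a, loa_b):
    """loa_a is lower (more concrete) than loa_b if it keeps every
    observable of loa_b 'visible' while adding at least one more."""
    return set(loa_b) < set(loa_a)

print(is_lower(delia, anne))    # True: Delia sees all Anne sees, and more
print(is_lower(delia, carole))  # True
print(is_lower(delia, ben))     # False: Ben's observables are abstracted away
```

Under this toy definition, the inclusion relation is exactly what makes Delia’s LoA “lower”: she has access to every observable Anne or Carole has, plus information unavailable to either.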
The chapter by Russo is, quite frankly, impressive. She combines, in a coherent
picture, several themes I developed in different writings, in a way that I can only
admire. I would definitely recommend the reader to start with this text, if she wishes
to have a clear, insightful, and at the same time critical and original analysis of
topics such as the fourth revolution, the nature of inforgs, and the development of
the infosphere. But enough of praise. Probably the best way to return the favour
is to contribute one more idea to the coherent picture provided by Russo. The idea is
that of enveloping the world. In order to explain it, I will need to introduce two
concepts, that of infosphere and that of re-ontologization (Floridi 2007).
Infosphere is a neologism I coined years ago on the basis of “biosphere”, a term
referring to that limited region on our planet that supports life. It denotes the
whole informational environment constituted by all informational entities (thus
including informational agents as well), their properties, interactions, processes
and mutual relations. It is an environment comparable to, but different from,
cyberspace (which is only one of its sub-regions, as it were), since it also includes
off-line and analogue spaces of information. It is an environment (and hence a
concept) that is rapidly evolving.
Re-ontologising is another neologism that I have recently introduced in order to
refer to a very radical form of re-engineering, one that not only designs, constructs
or structures a system (e.g., a company, a machine or some artefact) anew, but
that fundamentally transforms its intrinsic nature. In this sense, for example,
nanotechnologies and biotechnologies are not merely re-engineering but actually
re-ontologizing our world.
These two concepts are not indispensable – the reader is welcome to rely on any
other useful shortcuts – but they are helpful to formulate the claim that digital ICTs
are re-ontologizing the very nature of (and hence what we mean by) the infosphere,
while the infosphere is progressively becoming the world in which we live. It follows
that, while we are pursuing the development of digital technologies that can operate
in the world, we are actually re-ontologising the world to fit them. Especially in
recent years, the world as infosphere has been adapting to technologies’ limited
capacities increasingly well. Using a term from robotics, we have been enveloping1
the world without fully realising it. The example of a dishwasher is elementary but
still helpful in making the point. We do not build robots that wash dishes as we do; we envelop micro-environments around simple robots to fit, and best exploit, their limited capacities while still delivering the desired output. It is the difficulty of finding
the right enveloping that makes ironing (as opposed to pressing) so time-consuming.
Enveloping used to be either a stand-alone phenomenon (you buy the robot with the required envelope, like a dishwasher or a washing machine) or implemented within the walls of industrial buildings (in a mundane context, think of the tunnel-like system
1 In robotics, an envelope (also known as the reach envelope) is the three-dimensional space that defines the boundaries that the robot can reach.
of the conveyorised, automatic car wash in which you drive). Nowadays, enveloping
the environment into a technology-friendly infosphere has started pervading any
aspect of reality and is visible everywhere, on a daily basis. If driverless vehicles
can move around with decreasing trouble, this is not because AI has finally arrived,
but because the “around” they need to negotiate has become increasingly suitable
to AI applications.2 We do not have semantically proficient technologies, but we
have accumulated so much data, can rely on so many humans, and have such good
statistical tools that purely syntactic technologies can bypass problems of meaning
and understanding, and still deliver what we need: a translation, the right picture of
a place, the preferred restaurant, the interesting book, the right answer, and so forth.
The victory of Watson – the IBM computer that answers questions posed in natural language – over two human players during a two-game, combined-point match of Jeopardy! is only the most recent episode in this trend. Indeed, some of the issues
we are facing today, e.g., in e-health or in financial markets, already arise within
highly enveloped environments in which all relevant (and sometimes the only) data
are machine-readable, and decisions as well as actions may be taken automatically, by
applications and actuators that can execute commands and output the corresponding
procedures, from alerting or scanning a patient, to buying or selling some bonds.
Examples could easily be multiplied. Enveloping is a trend that is robust, cumulative and progressively refining: every day sees the availability of more tags, more humans
online, more documents, more statistical tools, more devices that communicate with
each other, more sensors, more RFID tags, more satellites, more actuators, more data
collected on all possible transitions of any system, in a word, more enveloping.
This is good news for the future of smart technologies, which will be exponentially
more useful and successful with every step we take in the expansion of the infosphere.
Enveloping is a process that has nothing to do with some sci-fi singularity, for it is
not based on some unrealistic (as far as our current and foreseeable understanding
of AI and computing is concerned) speculations about some super AI taking over
the world in the near future. But it is a process that raises some challenges. In order
to express the one I have in mind, let me use a parody.
Two people T and H are married and they really wish to make their relationship
work, but T, who does increasingly more in the house, is inflexible, stubborn,
intolerant of mistakes and unlikely to change, whereas H is just the opposite, but is
also becoming progressively lazier and dependent on T. The result is an unbalanced
situation, in which T ends up shaping the relationship and distorting H’s behaviours,
practically, if not purposefully. If the marriage works, that is because it is carefully
tailored around T. Now, AI and smart technologies play the role of T in the previous
analogy, whereas their human users are clearly H. The risk we are running is that,
by enveloping the world, our technologies might shape our physical and conceptual
environments and constrain us to adjust to them because that is the best, or sometimes the only, way to make things work. New humans are born inside pre-existing technological environments and plastically adapt to them. After all, if T is the stupid
2 See the progressive successes of the DARPA Grand Challenge.
but laborious spouse and humanity the intelligent but lazy one, who is going to
adapt to whom, given that a divorce is not an option? The reader will probably
recall many episodes in real life when something could not be done, or had to be
done in a very cumbersome or silly way because that was the only way to make the
technology in question do what it had to do. Here is a more concrete, trivial example
(philosophically, things are way more complex). The risk is that we might end up
building houses with round walls and furniture with sufficiently high legs in order
to fit the capacities of a Roomba (http://www.irobot.com/) much more effectively.
I certainly wish our house were more Roomba-friendly. The example is useful
to illustrate not only the risk but also the opportunity represented by ICT’s re-ontol-
ogising power and the enveloping of the world.
There are many “roundy” places in which we live, from igloos to medieval towers,
from bow windows to public buildings where corners of the rooms are rounded for
sanitary reasons. If we spend most of our time inside squarish boxes that is because
of another set of technologies related to the mass production of bricks and concrete
infrastructures, and the ease of straight cuts of building material. It is the mechanical
circular saw that, paradoxically, generates a right-angled world. In both cases, squarish
and roundy places have been built following the predominant technologies, rather
than through the choices of their potential inhabitants. Following this example, it is
easy to see how the opportunity represented by technologies’ re-ontologising
power comes in three forms: rejection, critical acceptance, and proactive design.
By becoming more critically aware of the re-ontologising power of AI and smart
ICT applications, we might be able to avoid the worst forms of distortion (rejection)
or at least be consciously tolerant of them (acceptance), especially when it does
not matter (consider the Roomba-friendly length of the legs of the furniture) or
when this is a temporary solution, while waiting for a better design. In the latter
case, being able to imagine what the future will be like and what adaptive demands
technologies will place on their human users may help to devise technological
solutions that can lower their anthropological costs. In short, intelligent design
should play a major role in shaping the future of our interactions with forthcoming
technological artefacts. After all, it is a sign of intelligence to make stupidity work
for you.
In his chapter, Beavers provides an interesting analysis of what I have called “the
fourth revolution”. He does so from the perspective afforded by the history of
the technologies that have made possible the recording and transmission of data.
The topic is immense and fascinating, and the chapter wisely highlights some of its
most significant aspects. As I mentioned in my reply to Giardino, there are indeed
many reasonable ways of interpreting the sort of radical information changes that
we are witnessing in these decades. Among them, Beavers’ approach is not only
plausible, but also fruitful. Likewise, if one were to look for further perspectives, it
might seem obvious to connect the information revolution to the agricultural and
the industrial revolutions that preceded it. This would also make sense, and the
reader keen on other ordinal numbers might wish to check the article “Lists of cultural,
intellectual, philosophical and technological revolutions” provided by the usual
Wikipedia. As for the “fourth revolution”, in this brief reply I would like to clarify
two points which may aid our understanding of what I mean by it, and why I believe
it to be a useful perspective to conceptualise our time.
The first point is historical. Some people (not Beavers) seem to think that the
origins of the fourth revolution can be dated back roughly to the invention of the
first computing machines and the work of Alan Turing or perhaps Claude Shannon.
This is fine, but it is not what I have been arguing. The information revolution under-
stood as a fourth revolution dates back to the animals scratched by our ancestors on
the walls of their caves and the rudimentary signs they used to communicate. Thus,
the information revolution is not an episode in human history, but what makes
history possible. The information revolution has always preceded us, for we are its
children. The crucial difference is that it is only in the last decades that it has begun
to be the most salient feature of our lives. And this leads me to the second point,
which is hermeneutical. If the information revolution began such a long time ago,
if it has been with us all along, why so much stress on it only now? And why call
it a fourth revolution? Why not a third, or a fifth, or … you number it. Of course
the number is not essential and other metrics are perfectly fine. What matters is that
“the fourth (or nth) revolution” is an interpretation of the information revolution as
a transformation whose greatest significance does not lie in the new ways in which
we manage data, nor in what such new data management enables us to do in our
interactions with the world, nor in how wealth and well-being is generated by such
interactions, but rather in the way in which we are rethinking our nature and role in
the universe. In other words, whatever number you think best captures the informa-
tion revolution (third, if you count the agricultural and the industrial, second, if you
count analogue then digital, etc.), the “fourth” refers to how many times we have already been through this radical change in our self-perception. We have looked into the mirror of science and technology before and realised that we had changed. It has happened with Copernicus, Darwin and Freud (or neuroscience, if you prefer), and it is now happening again with computer science and ICTs. So the fourth revolution
is not a serial number to label a sequence of historical transformations in our
technologies. It is a way of recalling that we find the transformations brought about
by our information and communication technologies so radical today because they
are now changing who we think we are and can be. And this is revolutionary.
The chapter by Giardino captures well several ideas I have articulated in recent years,
while providing insightful comments, some interesting suggestions and a wealth of
very valuable, if difficult, questions.
revolution that Giardino has in mind is literally vital, but it does not seem to me to
belong to the same line of development through which scientific advancements
about the world and how we interact with it indirectly ended up telling us a very
significant story about ourselves and our place in the universe. The beginning of life
on our planet and the evolution of DNA did not make us radically re-address the
question about our fundamental nature. They allowed us to pose such a question in
the first place, but that is a different story.
This leads me to a clarification that might be of interest to the reader. Giardino is
right in drawing a neat distinction between different ways in which we speak about
information. I share the same concern (Floridi 2011b, 2004a). Simplifying, one
might be talking about semantic information about something (consider the BBC
news), of ontic information as something (consider the fingerprints of an individual),
or of procedural information for something (consider a recipe for a cake). In Floridi
(2010a, b) I have provided an introductory map of these and other cognate concepts
and stressed, like Giardino, that much care needs to be exercised in order to avoid
misleading confusions. However, when talking about inforgs in the infosphere, one
must be able to use all three dimensions, the semantic, the ontic and the procedural,
or the analysis would be over-simplistic. Thus, I have argued both that human agents
are informational organisms – who share many features with artificial, biological
and hybrid agents – and that, to the best of our current and foreseeable knowledge,
our informational condition is utterly unique (the proviso is due to the possible
discovery of intelligent life elsewhere in the universe and to the logical, though
implausible, possibility of engineering real AI one day). There is no contradiction.
At a very reasonable level of abstraction, we are informational structures, which
process inputs in order to deal with their environments successfully, and as such
we are indistinguishable from other agents. Think of those cases when, in your
email exchanges with an online service, you are not sure whether you are dealing
with a person or a computer. Or consider how these days you might be asked to
prove that you are not a piece of software by completing a CAPTCHA (Completely
Automated Public Turing test to tell Computers and Humans Apart), a simple test
often involving pattern recognition, administered and evaluated by a computer,
which presumably a machine would be unable to pass. However, we are also the only
informational structures in the universe capable of intelligent, semantic structuring.
Humanity has informational organism as genus and structuring structure as species.
This, I hope, clarifies the apparent tension between similarity and uniqueness: we
are inforgs, but our intelligent anti-entropic nature is what makes us a special kind
of inforgs. There is, however, a further potential confusion that I would like to avoid.
We might be a glitch in the infosphere, what I like to call the wonderful mistake.
For as long as we are here, we realise and rightly boast that (to the best of our
current scientific knowledge) we are the infosphere’s only chance of having a mental,
conscious life. Such responsibility is enormous. However, unless there is a divine
plan (and I am happy to leave the answer to this question to the reader), we are
that portion of the infosphere that merely won the mental lottery. There was no
reason to be the owners of the lucky ticket, so amazement is more than justified
(the exclamation mark effect, the “we won the mind lottery!” attitude) but puzzlement
would be out of place (the question mark effect, the “why did we win the mind
lottery?” attitude). There are so many other lotteries that we lost. The cheetah, for
example, won the lottery for the fastest runner on earth, with its astonishing speed
of 70 mph, but lost the climbing lottery. We simply won the (possibly only) lottery
(ticket) in the universe that allows a justified sense of amazement and a (possibly, if
the atheist is right) mistaken sense of puzzlement. There is no why from wow.
Her analysis of the current state of education and what the future might bring is
both informative and enlightening. In this brief reply, I would like to offer one more
example that seems to go in the (right) direction outlined by Pasquinelli’s chapter,
and a broader suggestion of what education may look like after the fourth revolution,
to use a phrase from her chapter.
The example first. Through small appliances known as clickers, Classroom
Response Systems or CRSs (also known as Classroom Communication Systems,
Personal Response Systems, Electronic Response Systems, or Audience Response
Systems) allow a variety of interactions between students and teachers, e.g.,
by conveying yes/no answers to questions shown on the board. Such IT-mediated
interactions can increase participation or provide immediate feedback on whether
the material delivered has been understood, for example. In different forms,
CRSs have been available since the 1960s. Their increasing popularity today is
due not only to advancements in technology – clickers may easily be replaced by
mobile phones – but also to the synergy between the systems and students’ ordinary
habit of writing and sending SMSs while attending their classes. Instead of
prohibiting the use of any communication technology in the classroom as a mere
distraction, a better approach is to harness the relevant technology and the
corresponding skills in order to improve the learning experience. This example converges
on the same conclusion reached in the chapter about the use of Wikipedia.
It is pointless to try to stop students from relying on it. It is immensely more fruitful
to teach them how to use it critically, and ask them to contribute and edit new
entries, or improve old ones.
Similar examples point in the direction of a substantial change in our educational
practices. I still believe that the acquisition of some basic information and skills
is crucial. Of course, I do not mean learning by heart lists of names, dates, facts, or
grammatical rules and so forth, but possessing the sort of basic information that
allows one to understand a decent newspaper. There is little one can do intellectually
without some reliable and critically assimilated input. Which information needs to
be privileged today poses a challenge, but this too is hardly new. The novelty lies
in the interpretation of information societies as neo-manufacturing
organizations, in which the raw material is a zettabyte (10²¹ bytes) of data.
In such societies, learning by making, as was the case with artisans before the
Industrial Revolution, seems crucial. Informational goods require new skills that
will be increasingly important and can be kept updated only if properly learnt at
the right age. Many such skills are “linguistic”. By this I do not mean to refer to
natural languages, which of course are fundamental – especially one’s own mother
tongue on which clear and precise thinking depends so heavily – but to the
languages spoken by the information society: general mathematics, logic, statistics,
ICT. Such languages enable the critical and creative handling of data, the open-ended
acquisition of new skills and further information, and the intelligent production of
informational goods. Unfortunately, I am not very optimistic. Not because our
technologies are “making us stupid”. This is ridiculous. But because such technolo-
gies are making increasingly clear that the old hurdles of availability and accessibility
of information were merely eclipsing the real difficulty of understanding. Today, a
good Wikipedia entry is trivially available and accessible, but it might still be impos-
sible to grasp its contents, if one lacks the required competences, e.g., if one
does not speak “chemistry”. The truth is that once understanding is unveiled as the
real difficulty, it becomes clear that only time, patience, resolve and intelligence
can help. And these have always been scarce resources that no educational system
can miraculously multiply.
As a contribution to the debate on net neutrality I only wish to provide a brief and
general consideration.
Unfortunately, the debate on net neutrality has been affected, among other things, by
a loaded terminology, which has made the ecological aspect of the issue less visible. If
we had been speaking all along in terms of net diversity instead, we would have been
able to appreciate more easily the fact that, in a complex infosphere, more nuanced and
articulated rules about the various services that could be offered to end-users could
increase, and not decrease, the opportunities for growth and development, as long as a
fair entry-level is guaranteed to all participants. This holds true in many transport
systems, with different tickets for different classes in trains and airplanes, and in
postal services, where public and private providers compete and different tariffs apply.
The point is of course more complex, but it has been well argued in a recent paper
(Turilli et al. forthcoming), which I recommend to the reader. The take-home message
seems to me to be that, in net neutrality, what matters are the minimal conditions of
fair equality, not the maximal imposition of unfair sameness.
The chapter by Silva and Ribeiro provides a wealth of details about information
science (IS) and the philosophy of information (PI), and the connections between
the two disciplines. It deserves to be studied by anyone interested in the interactions
between PI and IS. In this reply, I would like to contribute to their effort by briefly
recalling the contents of two articles in which I argued that IS (or LIS, library and
information science, as it is known in the States) might be understood as applied PI,
which could provide its conceptual foundations.
In Floridi (2002) I analysed the relations between PI, IS and social epistemology
(SE). In that context, I argued that there is a natural relation between philosophy
and IS but that SE cannot provide a satisfactory foundation for IS. Rather, SE should
be seen as sharing with IS a common ground, represented by the study of infor-
mation, to be investigated by PI. In that context, I outlined the nature of PI as the
philosophical area that studies the conceptual nature of information, its dynamics
and problems, and then defined IS as a form of applied PI. The hypothesis supported
was that PI should replace SE as the philosophical discipline that can best provide
the conceptual foundation for IS. In the conclusion, I suggested that the “identity”
crisis undergone by IS has been the natural outcome of a justified but precocious
search for a philosophical counterpart that has emerged only recently, namely PI.
The development of IS should not rely on some borrowed, pre-packaged theory.
In a later contribution (Floridi 2004b), I defended the suggestion that, as applied
PI, IS can fruitfully contribute to the growth of basic theoretical research in PI itself
and provide its own foundation. We often hear about the differences between
the information worker, busily involved in managing and delivering information
services, and the information scientist or the IS expert, deep in theoretical speculations.
The line of reasoning here seems to be that a foundation for IS should satisfy both,
and that this is something that PI cannot achieve, hence the objection that PI is not
“social” enough. I accept the inference, but I disagree on the premise. For I think we
should distinguish as clearly and neatly as possible between three main layers.
There is a first layer where we deal with information contents and services.
Compare this with the accountant’s calculations and financial procedures. One may
wish to develop a theory of everyday mathematics and its social practices – surely
this would be a worthy and interesting study – but it seems impossible to confuse it
with the study of mathematics as a formal science. The latter is a second layer.
It is what IS amounts to, what one learns, with different degrees of complexity,
through the university curriculum that educates an information specialist. There is
then a third layer, in which only a minority of people is interested. We call it foundational.
For mathematics, it is the philosophy of mathematics. I suggested PI for IS. My
point here is that it is important to acknowledge and respect the distinction between
these three layers; otherwise one may criticize x for not delivering y when x is not
there to deliver y in the first place. When checking whether the bank charged
you too much for an overdraft, you are not expected to provide an analysis of
the arithmetic involved in terms of Peano’s axioms. Likewise, a scientist may be
happy with a clear understanding of statistics without ever wishing to enter into the
philosophical debate on the foundations of probability theory. So it seems to me that
IS could be provided with an equally theoretical approach, capable of addressing
issues that the ordinary practitioner and the expert would deem too abstract to deserve
attention in everyday practices (mind that I am talking about layers, not people;
one can wear different hats in different contexts; this is not the issue here). In the
end, I agree that PI should seek to explain a very wide range of phenomena and
practices. I would add that this is precisely the challenge ahead. The scope of PI
spans a whole variety of practices, precisely because the aim of PI is foundationalist.
IS seems to be well placed to benefit enormously from the development of a sound
philosophy of information.
Trivial, isn’t it? PKIC just states in a very verbose way that S holds the informa-
tion that q. This will not do. It would be interesting to understand better why the
translation deprives K of its conceptual value, but this would go well beyond
the scope of this reply, so let us not get side-tracked but check whether we can
obtain PIC by adapting another version of PEC, known as the straight principle
of epistemic closure. This states that:
SP) If S knows that p, and p entails q, then S knows that q.
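Schematically, the principles at stake can be sketched as follows (a reconstruction: K reads “S knows that” and I reads “S holds the information that”, as in the logic of being informed of Floridi (2006); the explicit statement of SPIC falls outside this passage):

```latex
% SP, the straight principle of epistemic closure:
\text{SP)}\quad \big(Kp \wedge (p \rightarrow q)\big) \rightarrow Kq
% Its informational counterpart SPIC, built on the distribution axiom:
\text{SPIC)}\quad \big(Ip \wedge I(p \rightarrow q)\big) \rightarrow Iq
% The distribution axiom itself:
I(p \rightarrow q) \rightarrow (Ip \rightarrow Iq)
```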
3. The interested reader is referred to the excellent (Luper 2010). I use K and SP for consistency
with the literature.
SPIC is not trivial, or at least not in the sense in which PKIC is. And it seems
exactly what we need to revise Dretske’s argument informationally, depending
on how we handle the entailment occurring in it. Mind, I do not say interpret it, for
this is another matter. In what follows, I shall simplify our task by assuming that
the entailment is interpreted in terms of material implication.
The entailment in SPIC can be handled in several ways. I shall mention two
first, for they provide a good introduction to a third one that seems preferable for
our current purpose.
A modest proposal is to handle p entails q in terms of feasibility. S could obtain
the information that q, if only S cares enough to extract it from the information
that p and the information that p entails q, both of which S already holds. Consider:
the bank holds the information that Peter, one of its customers, is unemployed.
As a matter of fact, the bank also holds the information (endorses the entailment)
that, if a customer is unemployed then that customer does not qualify for an overdraft.
So the bank can (but might not) do something with or about the entailment.
Peter might keep enjoying his overdraft for as long as the bank fails to use the infor-
mation at its disposal to generate the information that he no longer qualifies.
A slightly more ambitious proposal, which has its roots in work done by Hintikka
(1962), is to handle p entails q normatively: S should obtain the information that
q. In our example, the bank should reach the conclusion that Peter no longer qualifies
for an overdraft; if it does not, that is a mistake, for which someone (an employee)
or something (e.g., a department) might be reprimanded.
A further alternative, more interesting because it bypasses the limits of the previous
two, is to handle p entails q as part of a sufficient procedure for information extraction
(data mining): in order to obtain the information that q, it is sufficient for S to hold
the information that p entails q and the information that p. This third option, which
captures better the formulation of the closure principle based on the distribution
axiom, leaves unspecified whether S will, can or even should extract q. One way for
the bank to obtain the information that Peter does not qualify for an overdraft is
to hold the information that if a customer is unemployed that customer does not
qualify for an overdraft, and the information that Peter is unemployed. Handling
the entailment as part of a sufficient procedure for information extraction means
qualifying the information that q as obtainable independently of further experience,
or empirical evidence or factual input, that is, it means showing that q is obtainable
without overstepping the boundaries of the available database. This is another way
of saying that the information in question is obtainable a priori.
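The third, procedural reading can be given a minimal sketch in code (hypothetical, not Floridi’s own formalism): closing the bank’s database under modus ponens generates q from p and “p entails q” with no further empirical input, i.e., without overstepping the boundaries of the available database.

```python
# A sketch of the entailment handled as part of a sufficient procedure for
# information extraction: close the set of held items of information under
# modus ponens. Whatever is derived is obtained "a priori", from the
# database alone, with no new empirical evidence.

def extract(facts, rules):
    """Close a set of facts under modus ponens over (antecedent, consequent) rules."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)  # obtained from the database alone
                changed = True
    return derived

# p: the bank holds the information that Peter is unemployed.
# p entails q: unemployed customers do not qualify for an overdraft.
facts = {"Peter is unemployed"}
rules = [("Peter is unemployed", "Peter does not qualify for an overdraft")]
print(extract(facts, rules))
```

Whether the bank will, can, or should run such a procedure is exactly what this reading leaves unspecified.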
SPIC, with the entailment embedded in it handled in terms of a priori information
extraction, provides the necessary translation of the first step in Dretske’s revised
argument. The second and third step are very simple, for they consist in providing
an interpretation of the information that p and of the information that q such that
p entails q. Following Kerr and Pritchard, we have:
p := S is in Edinburgh
e := if S is in Edinburgh then S is not a brain in a vat on Alpha Centauri.
The fourth and final step is a negative thesis, already formulated by Dretske in an
informationally suitable vocabulary:
NT) information alone could never answer a skeptical doubt.
NT seems very plausible: I agree with Dretske that one cannot solve sceptical
doubts of a Cartesian nature by piling up information. One of the reasons for raising
them is precisely that they block such a possibility. We would have stopped
discussing sceptical questions a long time ago if this were not the case.
We can now reformulate Dretske’s argument informationally thus:
(i) if SPIC, p and e
(ii) then S can generate the information that q;
(iii) but q is sufficient for S to answer the sceptical doubt (in the example, S holds
the information that S is not a brain in a vat on Alpha Centauri);
(iv) and (iii) contradicts NT;
(v) but NT seems unquestionable;
(vi) so something is wrong with (i)–(iii): in a Cartesian scenario, S would simply
be unable to discriminate between being in Edinburgh or being a brain in a vat
on Alpha Centauri, yet this is exactly what has just happened;
(vii) but (iii) is correct;
(viii) and the inference from (i) to (ii) is correct;
(ix) and e in (i) seems innocent;
(x) so the troublemaker in (i) is SPIC, which needs to be rejected.
It all sounds very convincing, but I am afraid SPIC has been framed, and I hope
you will agree with me, once I show you by whom.
Admittedly, SPIC looks like the only suspicious character in (i). However, consider
more carefully what SPIC really achieves; that is, look at e. The entailment
certainly works, but does it provide any information that can answer the sceptical
doubt? Not by itself. For e works even if both p and q are false, of course. This is
exactly as it should be, since valid deductions, like e, do not generate new information,
a scandal (D’Agostino and Floridi 2009) that, for once, it is quite useful to expose.
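The point that e holds even when both p and q are false is just the truth table of material implication; a quick illustrative check (not part of the original text):

```python
# Reading "p entails q" as material implication, e = (p -> q) is false only
# when p is true and q is false. In particular e holds when both p and q are
# false, so e by itself carries no information about where S actually is.
from itertools import product

def implies(p, q):
    """Material implication: p -> q."""
    return (not p) or q

for p, q in product([True, False], repeat=2):
    print(f"p={p!s:5} q={q!s:5}  p -> q: {implies(p, q)}")
```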
NT has a logical counterpart: deductions alone could never answer a sceptical
doubt, either. If e did generate new information, we would have a case of synthetic
a priori reasoning (recall the handling of the entailment as a sufficient procedure
for information extraction), and this seems a straightforward reductio. The fact is
that the only reason why we take e to provide some anti-sceptical information about
S’s location is because we also assume that p in e is true. Ex hypothesi, not only is S
actually in Edinburgh, but S holds such information as well. So, if SPIC works
anti-sceptically, it is because q works anti-sceptically, but this is the case because e + p
work anti-sceptically, but this is the case only if p is true. Now, p is true. Indeed it
should be true, and not just in the chosen example, but in general, or at least for
Dretske and anyone else, like me, who subscribes to the veridicality thesis, according
to which p qualifies as information only if p is true. But then, it is really p that works
anti-sceptically. All the strength in the anti-sceptical interpretation of (i)–(iii) comes
from the truth and informativeness of p. This becomes obvious once we realise that
no shrewd sceptic will ever concede p in the first place, because she knows that, if
you concede p, then the sceptical challenge is over, as Descartes correctly argued.
Informationally (but also epistemically), it never rains, it pours: you never have
just a bit of information, you always have a lot more; Quine was right about this.
Allow a crack in the sceptical dam and the epistemic flooding will soon be inevitable.
This is why, in the end, local or circumscribed scepticism is either just critical
thinking or must escalate into global scepticism of a classic kind, e.g., Pyrrhonian
or Cartesian. So it is really the initial input quietly provided by p that is the real
troublemaker and SPIC is only following orders, as it were. For SPIC only
exchanges the higher informativeness of a true p (where S is located) into the
lower informativeness of a true q (where S is not located, being located where he is).
This is like exchanging a £20 banknote into many $1 bills. It might look like you
are richer, but of course you are just a bit poorer, in the real life analogy because of
the exchange rate and the commission charged, and in Dretske’s argument because
you moved from a positive statement (where you actually are located) to a negative
one (one of the infinite number of places where you are not, including places dear
to the sceptic). If you do not want the effects of q, do not blame SPIC, just never
concede p in the first place.
It follows that the informational answer to the sceptical doubt, which we agreed
was an impossibility, is provided not by q, but by p, and this disposes of Dretske’s
objection that SPIC is untenable because information can never provide an answer
to sceptical doubts. It never does because you may never be certain that you hold
it (you cannot assume p), not because, if you hold it, it does not.
One may object that all this leaves the last word to the sceptic. I agree, it does,
but it does so only in this context, and this is harmless. SPIC was never meant to
provide an anti-sceptical argument in the first place. It was the alleged accusation
that it did in a mistaken way that was seen to be the problem. So what happens next?
If being in Edinburgh means that I may not be sure that I am there, then we are talking
about a scenario in which no further information, no matter how far-reaching,
complex, sophisticated or strongly supported, will manage to eradicate once and
for all such Cartesian doubt. I believe this is the proper sense in which all the
information in the world will never meet the sceptical challenge. For information is
a matter of empirical facts, and sceptical doubts are based on logical possibilities.
The former just cannot cure the latter. Is this, then, finally a good reason to reject
SPIC? The answer is again in the negative. SPIC was not guilty when we were
assuming to have a foot in the door, a piece of information about how the world
really is, namely p. It is not guilty now that we are dealing with a web of information
items that might turn out to be a complete fabrication. On the contrary, in the former
case it is SPIC that helps us to squeeze some (admittedly rather useless) further
bits of information from p. In the latter case, it is SPIC (though of course not only
SPIC) that makes the coherence of the whole database of our information tidy and
tight. But if SPIC is to be retained in both cases, what needs to be discharged?
Either nothing, if we are allowed a foot in the door, because this is already sufficient
This is what happens when we apply the method of abstraction, but Plato’s well-
known metaphor of “carving nature at the joints” is unfortunate. Not merely because
we have acquired a different sensitivity about animals, but mainly because, through
his metaphor, a form of ontological interpretation quietly sneaks in. From the old
debate on universals to the more recent debate on natural kinds, the metaphor
might easily lead one to presuppose as uncontroversial the view that the structures,
invariants, patterns, types, universals, natural kinds and so forth, identified by our
4. Translation from Plato (1989).
analytic procedures, are entirely intrinsic to the system. They are discovered, in the
same way as we carefully discover where to carve a body. This is a mistake. It would
be like saying that, given the contents of the fridge, we simply discover the dishes
we can cook. Meaningless. The ingredients provide affordances and constraints,
but there are different ways in which we can take advantage of the former while
respecting the latter. The most important things to remember are, first, that the
choice of the level of abstraction and hence the purpose which orients such choice,
make a significant difference in the way in which we analyse the structure of any
system, be this biological, artificial, chemical, physical, social and so forth. And,
second, that the system under observation is – or, more cautiously, that it would be
safer to assume that it most likely is – a unity, in which articulations, organizing
patterns and so forth are still aspects of a single whole. As has been repeatedly
and convincingly argued,5 reality does not come in well-organised bits, all properly
disjoint in non-overlapping pigeonholes, which only need to be collected and
catalogued. Taxonomy is a teleological science of design (not invention nor discovery),
based on levels of abstraction.
There is nothing wrong with talking about carving nature at the joints as long as one bears
in mind that nature’s joints are not always disjoint. (Khalidi 1993, p. 112)
For all these reasons, I would be much happier to use either the cooking
metaphor, introduced above, or adapt from Leibniz a different carving analogy,
which he used in a related context, the debate on innate ideas (Leibniz 1996).
A system, anything from the smallest and simplest element to the all-encompassing
universe, is like a block of veined marble (Leibniz’s metaphor) or better, a gemstone.
We carefully carve and polish a cameo according to our goals (the purpose of the
analysis), skills (the specific level of abstraction chosen in view of the purpose)
and the veins or contrasting colour (the ontological constraints and affordances)
in the gemstone. The patterns in the gemstone encourage some outcomes but not
others. They allow for several different images to be carved, but not any: for
some will be impossible, some unlikely and require ad hoc, virtuoso solutions,
and some others will be much more feasible, say the picture of an eagle. Like
gemstones, systems are inhomogeneous, not disjoint. This does not undermine
the scientific realism and reliability of our analyses: the eagle is no less real just
because we could have carved a phoenix. The omelette is no more a fiction of
our imagination than the zabaglione we could have obtained from the same
eggs. How we structure the world depends both on us and on the world. It is a
mistake to underestimate either side of this interactive relation. I hope this
clarifies why I am so reluctant to accept any theory that postulates ontological
levels of organization existing independently of the level of abstraction at which
they are conceptualised.
5. For a great article with plenty of further references and scientific examples of the criss-crossing
nature of reality, see Khalidi (1993).
The interesting chapter by McKinlay tackles a very important issue, namely the
informational nature (or I should say conceptualization, but see below) of worldly
objects. If I do not misunderstand him, McKinlay quotes Quine approvingly, when
the latter states that:
the very notion of object, or of one and many, is indeed as parochially human as the parts
of speech; to ask what reality is really like, however, apart from human categories, is
self-stultifying. (Quine 1992, p. 9)
If so, then I could hardly agree more. Quine, McKinlay and I are on the same side
of the river, the other bank being populated by all those who hold that the world is
really made of objects, with the latter being pretty much the sort of objects with
which we deal in our kitchens. There is much more on which I agree with McKinlay.
In his chapter, for example, he provides a clear and perfectly shareable analysis
of the nature of objects as understood in Object Oriented Programming (OOP).
The reader who does not know much about this topic will find that part helpful.
He also spends a considerable amount of time arguing that
informational objects (if we at least entertain the possibility of such things) do not seem to
be much like OO objects.
Maybe (see the two problems discussed below), but the important fact is that this
is irrelevant. For I actually suggested that
OOP provides us with a rich concept of informational objects that can be used to concep-
tualize a structural object as a combination of both data structure and behaviour in a single
entity, and a system as a collection of discrete structural objects. Given the flexibility of the
conceptualization, it becomes perfectly possible, indeed easy, to approach the macroworld
of everyday experience in a structural-informational way. (Floridi 2008a)
Let me now turn to some more important problems with the chapter. McKinlay
highlights a crucial point when he writes that:
We are obliged to point out that Floridi does limit the scope of his adoption of OO concepts
and theory by saying “OOP is not a viable way of doing philosophical ontology, but a valuable
methodology to clarify the nature of our ontological components.” (2004a, b, p. 5)
Yet here I wish he had treated such “obligation” seriously, and used it to inform
the whole chapter, instead of relegating it to a footnote and then forgetting about it.
If he had taken his own advice seriously, then in the chapter we would have encountered
at least a passing reference to what I actually mean by informational objects. They are
the structural objects discussed in structural realism, and
A straightforward way of making sense of these structural objects is as informational
objects, that is, as cohering clusters of data, not in the alphanumeric sense of the word, but
in an equally common sense of differences de re, i.e. mind-independent, concrete points of
lack of uniformity. (Floridi 2008a)
For that is a perfectly sensible question to ask, which ultimately (this clause
requires some philosophical work, I admit) has a perfectly sensible answer: i-objects
are in the world (well, the world as experienced on this side of our human cognitive
interfaces, i.e. our levels of abstraction): you are sitting on one, and driving another.
It is his other question that makes no sense to me, namely:
I do think it legitimate to ask, “How do everyday (concrete) objects map to their informa-
tion object counterparts?”
What does he mean by “map”? This is like asking how everyday (concrete) water
maps to its chemical object counterpart, H₂O. Water does not map, it is H₂O, at the
chosen chemical level of abstraction. In OOP, a button does not “map” a button, it is a
button. Everyday concrete objects are (aggregates of) i-objects, at the informational-
structuralist level of abstraction. I hope this also clarifies why I find the following
remark utterly puzzling:
The Floridian account however seems to suggest an object qua information object does
indeed reference the real world object it purports to represent but just how this works is not
explained. Floridi argues, “the ultimate nature of reality is informational, that is, it makes
sense to adopt a level of abstraction at which our mind-independent reality is constituted by
relata that are neither substantial nor material (they might well be but we have no reasons
to suppose them to be so) but informational.” (2004b, p. 5)
References
D’Agostino, Marcello, and Luciano Floridi. 2009. The enduring scandal of deduction. Is propositional logic really uninformative? Synthese 167(2): 271–315.
Floridi, Luciano. 2002. On defining library and information science as applied philosophy of
information. Social Epistemology 16(1): 37–49.
Floridi, Luciano. 2004a. The Blackwell guide to the philosophy of computing and information.
Malden/Oxford: Blackwell.
Floridi, Luciano. 2004b. LIS as applied philosophy of information: A reappraisal. Library Trends
52(3): 658–665.
Floridi, Luciano. 2006. The logic of being informed. Logique et Analyse 49(196): 433–460.
Floridi, Luciano. 2007. A look into the future impact of ICT on our lives. The Information Society
23(1): 59–64.
Floridi, Luciano. 2008a. A defence of informational structural realism. Synthese 161(2): 219–253.
Floridi, Luciano. 2008b. The method of levels of abstraction. Minds and Machines 18(3): 303–329.
Floridi, Luciano. 2010a. Information – A very short introduction. Oxford: Oxford University Press.
Floridi, Luciano. 2010b. Information, possible worlds, and the cooptation of scepticism. Synthese
175(1): 63–88.
Floridi, Luciano. 2010c. Ethics after the information revolution. In The Cambridge handbook
of information and computer ethics (Chapter 1), ed. L. Floridi, 3–19. Cambridge: Cambridge
University Press.
Floridi, Luciano. 2011a. The philosophy of information. Oxford: Oxford University Press.
Floridi, Luciano. 2011b. Semantic conceptions of information. In The Stanford encyclopedia of
philosophy, ed. E. N. Zalta.
Floridi, Luciano. 2011c. A defence of constructionism: Philosophy as conceptual engineering.
Metaphilosophy 42(3): 282–304.
Hintikka, Jaakko. 1962. Knowledge and belief: An introduction to the logic of the two notions,
contemporary philosophy. Ithaca: Cornell University Press.
Khalidi, Muhammad Ali. 1993. Carving nature at the joints. Philosophy of Science 60(1): 100–113.
Luper, Steven. 2010. The epistemic closure principle. In The Stanford encyclopedia of philosophy,
ed. E. N. Zalta.
Plato. 1989. The collected dialogues of Plato: Including the letters, 14th ed, ed. Edith Hamilton
and Huntington Cairns, with introduction and prefatory notes. Princeton: Princeton University
Press.
Quine, Willard Van Orman. 1992. Structure and nature. The Journal of Philosophy 89(1): 5–9.
Turilli, Matteo, Antonino Vaccaro, and Mariarosaria Taddeo. forthcoming. Network neutrality:
Ethical issues in the internet environment. Philosophy & Technology.
Leibniz, Gottfried Wilhelm von. 1996. New essays on human understanding. Cambridge:
Cambridge University Press.
Index
F
Facebook, 27, 31, 32, 87, 90, 94, 155, 164
Fourth Revolution (4th Revolution), v, ix, x, 66, 68, 69, 87, 88, 101, 108, 109

K
Kant, Immanuel, xiii, 46, 70, 73, 100

L
Levels of Abstraction (LoA), the method of, vi, vii, viii, xii, xiii, 5–11, 17, 23–40, 43–63, 201–221, 231–233, 237, 239, 240, 246–250, 256, 265, 266, 269

M
Moral responsibility, vii, 4, 8, 9, 11–16, 18, 208
MXit, xi, 134, 135

N
Nondeterminism, 29, 48, 51

O
Object-oriented programming (OOP)–Java, vi, xiv, 217, 223, 224, 227, 234, 240, 267–269
One Laptop Per Child (OLPC), x–xi, 133, 134

P
Pancomputation, 5
Physis, vii, viii, ix, 65–79, 185
Plato, 46, 70, 76, 88, 99, 113, 233, 239, 265

Q
Quantum computation/computing, vii, viii, 23–40
QWERTY, 132, 133

S
Semantic information, ix, xii, 105–121, 178–184, 206, 212, 218, 224, 240, 256
Shor’s algorithm, 36, 39
Structural realism, vi, xiii, xiv, 5, 214, 216–217, 221, 233, 268

T
Techne, vii, viii, ix, 65–79, 185
Turing, Alan, 55, 65, 66, 86, 93, 110, 113, 254, 255
Turing machine, 23, 29, 114
Twitter, 27, 87, 136

W
Webbot, 48, 61, 62