Educational Philosophy and Theory, Vol. 39, No. 7, 2007
doi: 10.1111/j.1469-5812.2007.00314.x

© 2007 The Authors
Journal compilation © 2007 Philosophy of Education Society of Australasia
Published by Blackwell Publishing, 9600 Garsington Road, Oxford, OX4 2DQ, UK and
350 Main Street, Malden, MA 02148, USA


‘Ed Tech in Reverse’: Information
technologies and the cognitive revolution










Norm Friesen, Open Learning Division, Thompson Rivers University
Andrew Feenberg, School of Communication, Simon Fraser University


As we rapidly approach the 50th year of the much-celebrated ‘cognitive revolution’, it
is worth reflecting on its widespread impact on individual disciplines and areas of
multidisciplinary endeavour. Of specific concern in this paper is the example of the influence
of cognitivism’s equation of mind and computer in education. Within education, this paper
focuses on a particular area of concern to which both mind and computer are simultaneously
central: educational technology. It examines the profound and lasting effect of cognitive
science on our understandings of the educational potential of information and communi-
cation technologies, and further argues that recent and multiple ‘signs of discontent’, ‘crises’
and even ‘failures’ in cognitive science and psychology should result in changes in these
understandings. It concludes by suggesting new directions that educational technology
research might take in the light of this crisis of cognitivism.

Keywords: cognitive psychology, educational technology, artificial intelli-
gence, mindtools
With a precision that is perhaps characteristic of the field, the birth of cognitive
science has been traced back to a very particular point in space and time: September
11, 1956, at a ‘Symposium on Information Theory’ held in Cambridge at the
Massachusetts Institute of Technology. On that day, George Miller, Noam Chomsky,
Allen Newell and Herbert Simon presented papers in the apparently disparate fields
of psychology, linguistics and computer science. As Miller himself later recalls,
I went away from the Symposium with a strong conviction, more intuitive
than rational, that human experimental psychology, theoretical linguistics
and computer simulation of rational cognitive processes were all pieces of
a larger whole, and that the future would see progressive elaboration and
coordination of their shared concerns. (Miller, 2003, p. 143)
This remarkable sense of moment did not dissipate; neither did the synergy of
disciplinary interests listed by Miller. Instead, this significance and synergy
strengthened and spread to include not only psychology and linguistics, but also
philosophy, neuroscience and anthropology (Bechtel, Abrahamsen and Graham,
1998). And the shared concerns of all of these areas were increasingly understood as
being coordinated and elaborated under the aegis of a bold new multi-disciplinary
endeavour known as cognitivism (see also Piaget, 1971).
Given this momentous beginning, it is perhaps not surprising that some 15 to 20
years later, this revolutionary cognitive movement had all but ‘routed’ behaviourism
as a psychological paradigm, and had founded societies, journals, and university
departments in its name (e.g. Thagard, 2002; Waldrop, 2002, pp. 140, 139).
It is also not surprising that this ‘“revolution”’, as Bereiter & Scardamalia (1992,
p. 517) observe, is correspondingly ‘very much in evidence in education’. As is the
case in psychology, cognitivism has left its name on journals and programs of
education. Prominent examples include Cognition & Instruction, the Journal of
Cognitive Technologies, and the ‘Cognition and Technology Group’ at
Vanderbilt University. Indeed, the cognitive revolution can be said to have redefined
all of the key concepts in educational research: Learning itself is understood not as
an enduring behavioural change achieved through stimulus and response condition-
ing (Domjan, 1993); instead, it is seen as changes in the way information is
represented and structured in the mind (e.g. Ausubel, 1960; Craik & Lockhart,
1972; Piaget & Inhelder, 1973). Teaching is no longer conceptualized as the provision
of rewards for the successive approximation of target behaviours. Instead, it is
understood in terms of the support of the effective processing, representation and
structuring of information by the student’s cognitive apparatus. Effective pedagogy
then becomes the provision of these conditions and supports, such as computational
modelling and constructivist scaffolding, for these processes and representations
(Scardamalia & Bereiter, 2003; West et al., 1991, p. 27; Wolfe, 1998). Educational
research itself thus changes from the observation of persistent changes in behaviour
to the use of computational modelling of constructs such as memory, sensory and
information processing (e.g. Bereiter & Scardamalia, 1992).
As we approach the 50th anniversary of the birth of this powerful interdisciplinary
paradigm, it is worth reflecting on the impact of cognitive science on the field of
education and particularly educational technology. This paper will consider the
powerful influence of cognitivism on how we understand the value and use of
information and communication technologies, specifically for education. It will
examine the historical affiliations of cognitive science with educational technology,
and consider critically how recent and multiple ‘crises’ and ‘signs of discontent’ in
cognitive science (Bechtel, Abrahamsen and Graham, 1998; Gergen & Gigerenzer,
1991) are changing the disciplinary grounding available to educational technology.
In doing so, this paper does not seek to deny the value and importance of individual
contributions that cognitive science has made to education. (Prominent examples
of these contributions might include the functions of long and short term memory,
or the educational value of advance organizers.) What this paper does,
however, is to throw into question the overarching understanding of human
thought as fundamentally computational—an understanding that so often grounds
the most ambitious pretensions of cognitivism. In addition, this paper also indicates
how the non-computational conceptualizations of thought and learning might
change the ways in which we understand the value and potential of computer
technology in education.

Artificial Intelligence: Foundations

To understand the impact of cognitive science in educational technology and in
education more generally, it is important to begin with what many have recognized
as the foundational or ‘central discipline’ in cognitive science—the one ‘most likely
to crowd out, or render superfluous, other older fields of study’. This is Artificial
Intelligence or simply, AI (Boden, 1996, p. 46; Gardner, 1985, p. 40).
As a foundational element in cognitive science, and as an ambitious and contro-
versial movement in its own right, it is significant that AI itself has very deep roots
in the intellectual tradition of the West. This is confirmed by proponents and critics
of AI alike. One critic of traditional AI, Terry Winograd, emphasizes its origins in
what he refers to as the West’s ‘rationalistic tradition’ (Winograd, 1991). A second
AI critic, Hubert Dreyfus, characterizes the central concern preoccupying this
tradition as follows:
Since the Greeks invented logic and geometry, the idea that all reasoning
might be reduced to some kind of logic of calculation—so that all
arguments could be settled once and for all—has fascinated most of the
Western tradition’s rigorous thinkers. (Dreyfus, 1992, p. 67)
Among the earliest thinkers included in AI genealogies are Gottfried Wilhelm von
Leibniz, George Boole and Charles Babbage (e.g. see Buchanan, 2004; Haugeland,
1985). The first of these dreamt of identifying the ‘mathematical laws’ or logical
rules of ‘human reasoning’ (Leibniz, cited in Davis, 2000, p. 17) that could be
followed in a kind of ‘blind thinking’ (Leibniz, cited in Schroeder, 1997). The last
of these gave this dream a promising material instantiation in the form of the first
large-scale mechanical computer.
The earliest mechanized anticipations of the computer were control systems for
organs and music boxes, a bit like player piano rolls, and for weaving patterns.
These systems acted mechanically to produce an output that had meaning for
human hearers or viewers. Eventually similar techniques were applied to mathe-
matical computation. Babbage attempted to use mechanical parts to achieve what
we now achieve with electronic components. His initial goal was to calculate accurate
astronomical tables for the use of the British Navy and he received support in his
project from the 19th-century British equivalent of the Pentagon. When computers
finally became everyday facts of life for millions of people, they continued to
operate in much the same manner but fast enough to appear far more sophisticated
than their mechanical ancestors. All the time, underneath the surface, they
remained fundamentally the same. As John Haugeland puts it, computers—as well
as their mechanical ancestors—all operate as ‘formal systems … in which tokens
are manipulated according to rules, in order to see what configurations are
obtained’ (Haugeland, 1985, p. 48). The implications of this, as Haugeland further
explains, are substantial:
Since formal systems are (by definition) self-contained, they never permit
any (formal) role for meanings; as a result [the various aspects and
complexities] of meaning cannot infect them. [ … ] Interpretation and
semantics [therefore] transcend the strictly formal. (Haugeland, 1985,
pp. 87, 100)
What this means is that the results of computation, of tokens being manipulated
in formal systems, carry no meaning in and of themselves. It is only human users
who are able to understand the outputs of this switching and calculation. Purposeful
action and even meaning resulting from computational processes are ultimately a
ruse. In these instances, meaning appears as an epiphenomenon of the meaningless.
The basic idea of the reducibility of thought to these ultimately simple, formalized
logical, mathematical rules and processes lies at the heart of artificial intelligence.
It is already explicit in the first definition for this field: Artificial intelligence is
proposed as ‘a study’ proceeding from ‘the conjecture that every aspect of learning
or any other feature of intelligence can in principle be so precisely described that
a machine can be made to simulate it’ (McCarthy et al., 1955).
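Haugeland's point about formal systems can be made concrete with a small sketch. The following Python fragment is our own illustration, not anything proposed by Haugeland or McCarthy: it 'adds' unary numerals purely by rewriting tokens according to two syntactic rules, and the arithmetic meaning of the result exists only for the human reader who interprets the strokes as numbers.

# A formal system in Haugeland's sense: tokens ('|' and '+') manipulated by
# purely syntactic rules. Nothing in the system 'knows' it is doing addition;
# that interpretation is supplied by the human reader.

def step(tokens: str) -> str:
    if "+|" in tokens:
        return tokens.replace("+|", "|+", 1)  # shuttle the '+' one place right
    if tokens.endswith("+"):
        return tokens[:-1]                    # erase a trailing '+'
    return tokens                             # no rule applies: halt

def run(tokens: str) -> str:
    while True:
        successor = step(tokens)
        if successor == tokens:
            return tokens
        tokens = successor

print(run("||+|||"))  # prints '|||||', which we (not the machine) read as 2 + 3 = 5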
This early conjecture has later taken the form of a scientific, experimental
hypothesis that has firmly linked artificial intelligence with cognitive science
generally (e.g. Gardner, 1985; Sun, 1998). Simply put, this hypothesis is as follows:
if thinking is indeed a form of mechanical calculation, then development of
calculating machines (i.e. computers) has the potential to unlock the ‘mathematical
laws of human reasoning’ posited in the West’s rationalistic tradition. Carl Bereiter
and Marlene Scardamalia (1992) provide an illustration of how this hypothesis
works by referring to the spell-checking software familiar to any user of MSWord
or WordPerfect:
... if you use a spelling checker often enough, you are likely to wonder
and form hypotheses about how it selects the list of words it presents as
alternatives. You are, in effect, speculating about how the program
‘thinks’. If you formulate your hypotheses clearly enough, you may be
able to test them—for instance, by inserting words that you predict will
‘fool’ the spelling checker in a particular way. You could now claim to be
doing cognitive science, with the spelling checker as a subject. (Bereiter
& Scardamalia, 1992, pp. 518–519)
If thinking is information processing, in other words, then hypotheses about how
we think can either be proven or falsified by being successfully ‘modelled’—or by
being proven intractable—through the development and testing of software systems.
Conjectures about how we carry out cognitive tasks—for example, how we recognize
faces or sounds, or how we remember or solve problems—can be substantiated by
being modelled by a computer. If it were possible to get a computer to accomplish
the same kinds of tasks that we do, it would seem reasonable to conclude that our
minds accomplish the same task in the same way. As Gardner puts it, these com-
putational simulations provide a kind of ‘existence-proof ’ for invisible contents and
processes of learning and thought:

… if a man-made machine can be said to reason, to have goals, revise its
behavior, transform information and the like, human beings certainly
deserve to be characterized in the same way. (Gardner, 1985, p. 40)
As will be evident from the examples provided in this paper, such a computational
model can serve as a static structural or semantic representation or encoding, or it
can take the form of rule-bound process, procedure, or algorithm. Ideally, both
these static, representational and procedural, algorithmic aspects are to be united
in a single model.
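Bereiter and Scardamalia's spell-checker example can be given a concrete, purely illustrative form. The toy checker below is our own Python sketch, not the algorithm of MS Word, WordPerfect or any real product: it proposes alternatives that lie one edit away from the typed word in a tiny dictionary, and the 'cognitive-science experiment' then consists of feeding it a misspelling predicted to fall outside that model.

import string

DICTIONARY = {"their", "there", "receive", "separate", "believe"}

def candidates(word):
    """Return dictionary words one insertion, deletion or substitution away:
    a crude hypothesis about how the checker 'thinks'."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    edits = set()
    for left, right in splits:
        if right:
            edits.add(left + right[1:])                          # deletion
            edits.update(left + c + right[1:] for c in letters)  # substitution
        edits.update(left + c + right for c in letters)          # insertion
    return edits & DICTIONARY

print(candidates("beleve"))   # {'believe'}: a single missing letter is caught
print(candidates("recieve"))  # set(): the ie/ei transposition needs two edits,
                              # so this input 'fools' the model, as hypothesized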
Among AI researchers, this idea of the computer and mind being equal in one
or more fundamental aspects, as mirroring each other’s operations and contents,
has come to be known as the ‘strong AI’ thesis. According to this way of thinking,
the value of computer programs and models is not that they provide one predictive
or falsifiable account of mental phenomena among others. Instead, they reproduce,
make transparent or provide an existence proof for the contents and operations
of the mind. Representations and algorithms in this sense can serve both as
description and as proof for the kinds of ‘mathematical laws of reasoning’ that
Leibniz and his predecessors had only dreamed of bringing to light. These mental
entities can be construed in a variety of ways, but in each case, as John Searle
explains, strong AI assigns its computer programs or simulations a very special
significance: ‘programs are not mere tools that enable us to test psychological
explanations; rather, the programs are themselves the explanations’ (Searle, 1981,
pp. 282–283, our italics).
This has far-reaching consequences for psychology in general, and educational
psychology and technology in particular. For just as mind and computer can be
unequivocally equated, so too can the study of the mind be identified with the
testing and construction of intelligent computer systems. Psychologist Zenon
Pylyshyn describes such an identification as occurring between AI and his own field
of cognitive psychology: ‘both fields are concerned with the same problems and
thus must ultimately be judged by the same criteria of success’. ‘I believe’, he
concludes, ‘that the field of AI is coextensive with that of cognitive psychology’
(Pylyshyn, 1981, pp. 68–69). But then limitations of the computer become limitations
of the mind as psychology understands it.

Educational Technology: AI in Reverse

Like cognitive psychologists before them, educational technologists have seen in
the identification of mind and machine the possibility of a new identity or foundation
for their discipline. This new vision for educational technology research provides a
particular way of understanding the educational efficacy or potential of computer
technology. Specifically, it understands this power in terms of an intimate and
mutually-reinforcing correspondence between computation and the computational
processes occurring in the mind: ‘To be effective, a tool for learning must closely
parallel the learning process; and the computer, as an information processor, could
hardly be better suited for this’ (Kozma, 1987, p. 22).
This identification of mind and machine particular to cognitivism is also seen as
providing a new link between psychological theory and the design and practice of
educational technology. Just as cognitive psychological theories can be modeled
and tested against software systems, so too can the success of instructional
computer systems serve as means to validate the cognitive and learning theories
that have informed their design. It is in the context of this understanding that
prominent cognitive psychologists such as Roger Schank and Seymour Papert have
been readily regarded as experts also in the field of educational technology—with
their bold claims of radical educational change and transformation (e.g. Schank &
Cleary, 1995; Papert, 1996). It is in this context as well that Roy Pea (1985;
quoting an unpublished paper by Greeno) predicts that ‘“important advances in
instructional technology and in basic cognitive science will occur as an integrated
activity”’:
unified through the invention of research informed electronic learning
systems that work in education settings. [ ... ] I would argue that these
technologies can serve as the educational infrastructure linking psycho-
logical research to educational practice [ ... ]. (Pea, 1985, p. 179)
The design of instructional systems is seen by Pea as essentially the same as the
development of cognitive models in AI computer systems. Except that in the case
of educational technology, there is one additional factor: The mental phenomena
under study are by definition undergoing change and development, i.e. learning. As
a result, computer systems are seen as having the power to shape and remodel the
mental or cognitive phenomena to which they would otherwise correspond. This
idea is captured with remarkable clarity in the recurring use of the phrase ‘AI in
reverse’ in the literature of educational technology. In his popular book Computers
as Mindtools for Schools, David Jonassen defines this phrase in conjunction with the
key concept ‘mindtool’:
Mindtools represent AI in reverse: rather than having the computer
simulate human intelligence, get humans to simulate the computer’s
unique intelligence and come to use it as part of their cognitive
apparatus. (Jonassen, 1996, p. 7)
Human intelligence, in other words, is developed through the example of artificial
intelligence, with the goal of making human cognitive operations as efficient and
effective as those engineered and tested for computers. This same phrase was
originally introduced in an influential and widely-referenced 1988 paper by Gavriel
Salomon entitled ‘AI in Reverse: Computer tools that turn cognitive’. In it, Salomon
explains his coinage as follows:
The purpose of applying AI to the design of instruction and training is
basically to communicate with learners in ways that come as close as
possible to the ways in which they do or ought to represent the new
information to themselves. One expects to save the learners, so to speak,

unnecessary cognitive transformations and operations. (Salomon, 1988,
pp. 123–124)
Such an understanding of ‘AI in reverse’ incorporates not only the ‘strong AI’
equation of mind and computer, but also relies on Vygotskian or constructivist
notions of the influence of linguistic and other ‘cognitive tools’ on mental develop-
ment. These ideas, which first appear in the educational technology literature in
the latter half of the 1980s, are based on the idea that linguistic signs or symbols
act as ‘instrument[s] of psychological activity in a manner analogous to the role of
[tools] in labour’ (Vygotsky, 1978, p. 52). Like tools of physical labor, ‘cognitive’
tools can help form and shape the character of the labourer or tool user. The
‘primary role for computers’ in educational technology then becomes one not
simply of mirroring or augmenting cognitive operations, but is understood in terms
of fundamentally ‘reorganizing our mental functioning’ (Pea, 1985, p. 168). The
fact that computers represent ‘universal machines for storing and manipulating’
(Pea, 1985, p. 168) such signs and tools of thought is now seen as one of the
primary sources of their educational potential.
Similar and related elaborations or variations on the underlying correspondence
of mind and machine have been articulated under the aegis of ‘distributed’ and
‘situated’ cognition. These understandings expand the scope of ‘cognitive systems’
of mind and computer to include a variety of artifacts, conditions, and also other
minds in the environment. One of the first to articulate these understandings,
anthropologist Edwin Hutchins describes how cognition can be conceptualized as
distributed and situated: ‘Cognitive processes are seen to extend across the traditional
boundaries as various kinds of coordination are established and maintained
between “internal” and “external” [cognitive] resources’ (Hollan, Hutchins and
Kirsch, 2000, p. 174). In educational technology, it then becomes a question of
‘considering the software and the learner[s] as a single cognitive system, variably
distributed over a human and a machine’ (Dillenbourg, 1996, p. 165). Many of
the educational technology scholars cited above as enthusiastically supportive of
cognitive and computational approaches in the 1980s—including Greeno, Salomon
and Pea—have subsequently made significant contributions to understanding how
cognitive tools can be understood as operating in distributed and situated terms or
contexts (e.g. Greeno, 1998; Salomon, 1998; Pea, 1994).
It is in eclectic combination with ‘distributed’, ‘constructivist’ or other, related
approaches that the same ‘strong AI’ thesis appears and reappears in educational
technology literature to this day. One recent example that combines this thesis with
notions of ‘knowledge technology’ and ‘management’ is provided in an article by
Scardamalia (2003). In it, she describes how the design of a software product
known as ‘Knowledge Forum’
… rests on the deep underlying similarity of the socio-cultural and
cognitive processes of knowledge acquisition and knowledge creation. In
Knowledge Forum these normally hidden knowledge processes are made
transparent to users. Support for the underlying concept that these
processes are common to creative knowledge work across ages, cultures,
and disciplines comes from the fact that Knowledge Forum is used across
the whole spectrum [of educational levels], and for a broad range of ...
organizations involved in creative knowledge work. (Scardamalia, 2003,
pp. 23–24)
Cognitive operations are linked (as they are in constructivist accounts) to socio-
cultural processes of knowledge acquisition and creation. Both sets of processes are
understood as being mirrored or made visible or ‘transparent’—and even cleansed
of historical or cultural specificity—in specific applications of information and
communication technology.
A similar case for the instantiation of a different kind of mental phenomenon—
in this case interactive and dynamic ‘cognitive visualizations’—through the repre-
sentational ‘affordances’ of the computer is made in a 2004 article by Michael Jacobsen:
The design of cognitive visualizations (CVs) takes advantage of the
representational affordances of interactive multimedia, animation, and
computer modeling technologies. ... CVs are dynamic qualitative
representations of the mental models held by experts and novice learners.
(Jacobsen, 2004, p. 41)
Cognitive visualizations are, of course, just one of the many kinds of cognitive
technologies or tools that are discussed, researched and promoted in the recent
literature of educational technology (see, for example, Jonassen, Peck & Wilson,
1999; Jonassen, 2003; and Lajoie, 2000 for further book-length investigations of
these tools). Educational technologies appear to be conceptualized most clearly and
forcefully in these terms in the literature of the late 1980s. Later, and continuing
to this day, this conceptualization is reaffirmed and rearticulated in conjunction
with constructivist, distributed and situated variations on cognition. Whether
computer and information technologies are described as building knowledge,
representing mental models, processing information, or amplifying or restructuring
cognition, the underlying justification is the same: This technology gains its unique
educational potential from its underlying similarity to cognitive processes and
representations—to the similarly computational cognitive apparatus of human learners.

Strong AI Enfeebled

As the ramifications of the equation of mind and machine have been explored and
elaborated in educational technology scholarship over the last twenty years, the
field that provided the original justification and impetus for this activity has itself
undergone radical and far-reaching change. While the strong AI hypothesis has
been accepted, adapted, and even metaphorically ‘reversed’ in the field of educa-
tional technology, this same hypothesis has been criticized and ultimately rejected
in the field of AI.
Take for example the largest and most expensive of the strong AI projects. Tens
of millions of dollars and over twenty years in the making, this project is known

simply as ‘Cyc’ (short for encyclopedia). Cyc is a knowledge base not exactly of
encyclopaedic or specialized knowledge, but of encyclopaedic common sense. Cyc consists
of millions of encoded logical propositions, constitutive of ‘common sense’ or
human ‘consensus knowledge’ (Freedman, 1990, p. 66). As such, Cycorp president
Douglas Lenat explains, Cyc is ‘very closely related to the way humans think’ (in
Goldsmith, 1994).
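What ‘millions of encoded logical propositions’ amounts to can be suggested schematically. The sketch below is our own Python illustration, not Cyc's actual representation language or inference engine: a handful of common-sense propositions are stored as explicit tuples, and a mechanical forward-chaining step derives further propositions from them.

# A schematic common-sense knowledge base: explicit propositions plus
# purely mechanical inference rules (an illustration only, not CycL).

facts = {
    ("isa", "Fido", "Dog"),
    ("isa", "Dog", "Mammal"),
    ("haspart", "Mammal", "heart"),
}

def closure(kb):
    """Forward-chain two toy rules until no new propositions appear."""
    kb = set(kb)
    while True:
        new = set()
        for (p1, a, b) in kb:
            for (p2, c, d) in kb:
                if p1 == "isa" and b == c:
                    if p2 == "isa":
                        new.add(("isa", a, d))      # category membership is transitive
                    if p2 == "haspart":
                        new.add(("haspart", a, d))  # parts are inherited down categories
        if new <= kb:
            return kb
        kb |= new

print(("haspart", "Fido", "heart") in closure(facts))  # True: Fido has a heart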
Early in the project, Lenat claimed that Cyc would do things like ‘make scientific
discoveries’ and ‘counsel unhappy couples’ (Freedman, 1990, p. 65) and that it would
become ‘operational’ in this sense—‘learning on its own’ and otherwise being
‘smart’—by 1994 (Gams, 1997, p. 33; Goldsmith, 1994). In 1994, in an interview
that labelled his project somewhat derisively as ‘CYC-O’ (see Goldsmith, 1994),
Lenat moved this date back to 1997. By the end of the 1990s (if not earlier), it was
becoming clear that such scenarios would remain in the realm of science fiction,
and that Cyc would likely remain forever ‘as dull as any other program’ (Gams,
1997, p. 33).
This last derisive characterization of the fate of the Cyc AI project is only one
of a number of convergent findings about strong AI research readily available in
the literature. It and other, similar conclusions are voiced in a wide-ranging collec-
tion of papers from contributors in computer and cognitive science departments
around the world. The title of this collection asks whether two prominent AI critics
mentioned earlier were actually correct: Mind versus Computer: Were Dreyfus and
Winograd Right? (Paprzycki & Wu, 1997). Of course, this titular question is
answered in the affirmative: ‘Strong AI’ is described as ‘an adolescent disorder’ of
an emerging field (Michie, 1997, p. 1), as being in a state of ‘stagnation’ (Watt, 1997,
p. 47), or simply as an outright ‘failure’ (Amoroso, 1997, p. 147): ‘Computational
procedures are not constitutive of mind, and thus cannot play the foundational role
they are often ascribed in AI and cognitive science’ (Schweizer, 1997, p. 195).
Developments in AI have underscored the conclusion that thinking is fundamentally
different in nature from the encoding and processing associated with computer
technology. Take the fact of recognition, a familiar act that occurs each time we
understand a new concept, detect a pattern, or figure out a complex behaviour.
Recognition appears intuitively to have a holistic character very different from
formalized representation and rule-based reasoning. Meaning is an inextricable
part of this holistic activity, and not something that appears post facto, at the end
of a formalized process from which meaning is otherwise absent.
When Deep Blue defeated Kasparov, it did not recognize the situation on the board
faster and better than its human opponent. Rather it performed a different sort of
operation that corresponds mechanically to one process in which Kasparov also
engaged, representing and evaluating possible future moves and countermoves, but
left out many other aspects of his relation to the board such as, precisely, his ability
to recognize the meaning or pattern of the situation. The assumption that Kasparov
was simply an unconscious computer, and one of lesser power, is no more plausible
than the attribution of consciousness to Deep Blue.
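The ‘different sort of operation’ described above, representing and evaluating possible future moves and countermoves, has a well-known general shape: game-tree search. The sketch below shows plain minimax in Python. It is only a schematic stand-in, not Deep Blue's actual, hardware-assisted and far more elaborate search, and nothing in it recognizes a position; it merely propagates numeric evaluations up a tree of possible continuations.

def minimax(state, depth, maximizing, moves, evaluate):
    """moves(state) -> list of successor states;
    evaluate(state) -> score from the maximizing player's point of view."""
    successors = moves(state)
    if depth == 0 or not successors:
        return evaluate(state)
    scores = (minimax(s, depth - 1, not maximizing, moves, evaluate) for s in successors)
    return max(scores) if maximizing else min(scores)

# Toy usage: 'positions' are integers, each move adds or subtracts 1, and the
# evaluation is just the number itself. Best alternating play over four plies
# returns to the starting value.
print(minimax(0, 4, True,
              moves=lambda n: [n + 1, n - 1],
              evaluate=lambda n: n))   # 0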
The alternative to the strong AI hypothesis is widely seen as being some form of
‘cautious’ or ‘weak AI’. The computer in this context is no longer seen as mirroring
or paralleling the processes and contents of the mind. Instead, it provides a tool in
the study of the mind that is ultimately heuristic in character. ‘For example’, as
Searle says, the computer ‘enables us to formulate and test [psychological] hypotheses
in a more rigorous and precise fashion’ (Searle, 1981, p. 282). The computer
provides only one predictive or falsifiable account among others of how minds or
human subjects might behave or respond under particular circumstances. However,
the computer does not make transparent or provide an existence proof for the
contents or thought processes of the human mind. AI in this context has as its goal
the development of software that can simply produce the external appearance of
intelligence. A commonplace illustration of this kind of AI work can be found in
speech recognition software used in voice mail systems. Its value ultimately derives
from its practical application, rather than any claimed correspondence with underlying
human thought, decision or communication processes.

Learning from Failure

All of this implies a very different set of possible configurations and valuations of
the relationship of mind and machine for education and educational technology.
Understood in terms of their educational potential, computer technologies appear
disenchanted. They are no longer instruments directly involved with mental
amplification or reorganization, no longer ‘cognitive’ or ‘knowledge’ technologies,
or ‘mindtools’ that ‘closely parallel the learning process’. The notion of educational
technology as ‘AI in reverse’ is also robbed of its efficacy as a practice or guiding
principle. Software ceases to present any kind of paradigm or exemplar for optimal
human cognition—if such an absolutization of technological efficiency were ever
really desirable or palatable. Computer technologies must also abdicate their role
as constituting an ‘educational infrastructure’ that would closely link learning
theory with technology design practice. As will be illustrated below, contrary to Pea’s
prediction, important advances in instructional technology and in basic cognitive
science simply have not been occurring in any coordinated or integrated way.
But what then is to be the source of the educational power or potential of computer
and information technology? How can these technologies be reconceptualized to
have theoretically-supported value in education?
One possible answer can be found in developments in educational technologies
occurring over the last ten or more years. Instead of being linked ‘as an integrated
activity’ to progress in the cognitive revolution, what is new, exciting and promising
in educational technology has perhaps been linked above all with one overarching
technical, social and cultural phenomenon: the rise of the Internet and World Wide
Web, and the proliferation of communicative cultural and other practices that have
emerged with it.
Indeed, any number of the myriad developments associated with this more recent
and figurative revolution suggests ways of understanding the educational potential
of networked information and especially communication technologies. For example,
the recently growing popularity of chat and discussion tools, e-portfolios, blogs
and wikis in educational contexts shows that these applications of information

technologies have palpable and practical educational value. This is indicated by the
proliferation of such communication tools and applications—especially chat and
discussion tools—in both open source and commercial systems such as WebCT,
FirstClass and Moodle. However, unlike ‘cognitive visualization’ or ‘knowledge
construction’ technologies, the value of these technologies cannot be understood
as resulting from their support of cognitive or knowledge-building processes that
might be similar across ‘ages, cultures and disciplines’. Instead, this value is to be
found precisely in historically and culturally contingent communicative, discursive
and other practices and conventions that these technologies enable, encourage or
allow to develop. These emerging possibilities and practices can be understood in
a wide variety of ways: in terms of histories of technology use, in terms of cultural
and interpretive systems, communicative genres, social practices and norms,
ontological dispositions, and other frames of reference. Such frameworks are not
grounded first and foremost on the metaphors and understandings of the operations
of computer systems and networks—data ‘flows’ or ‘processing’ for example.
Instead they are affiliated above all with areas of research associated with the
human sciences rather than computational cognitive science. Examples of these
human sciences include post-structuralism, hermeneutics, semiotics, phenomenology,
ethnography and discourse analysis—approaches that have risen to prominence in
many areas of interdisciplinary activity during the precipitous rise and subsequent
decline of the cognitive revolution.
These approaches have as a common starting point the premise that human
meaning and experience cannot be adequately explicated and investigated using
technical and scientific terms and methods. In this sense, the human sciences
emphasize and reinforce the same separation mentioned earlier between human
meaning on the one hand, and computational rules and formalisms on the other.
Narrowing this opposition down to the world of human language and the objects
of physics and natural science, Hans-Georg Gadamer articulates this separation
as follows:
The world that appears in language and is constituted by it does not have,
in the same sense, being-in-itself, and is not relative in the same sense as
the object of the natural sciences. [ … ] We cannot see a linguistic world
from above in this way, for there is no point of view outside the
experience of the world in language from which it could become an
object. Physics does not provide this point of view, because the world—
i.e., the totality of what exists, is not the object of its research and
calculation. (Gadamer, 1989, p. 452)
There is no way to account for language and meaning, except through language
and meaning themselves. Because educational technologies—although dependent
on the laws of science and physics—ultimately find their utility in the context of
language and meaning, they need to be studied and understood in these humanistic
terms as well. And it is in this context that communication—rather than AI or sheer
computation—has arisen as a valuable and meaningful educational application of
networked computer technologies.
In this context, the case can be made that it is educational technology itself—
rather than AI—that appears to have been operating in metaphorical reverse: If
the arrival of cognitivism is itself construed as a revolution, then a different, commu-
nicative, cultural and technological Internet ‘revolution’ has more recently been
taking place. In addition, a host of related theoretical approaches have simultaneously
been introduced in a variety of educational sub-fields like curriculum studies and
adult education (e.g. Doll & Gough, 2003; Fenwick, 2003). All the while, much
activity in educational technology has been based on a cognitive hypothesis disavowed
by the very AI community that has been entrusted with its validation.
The decline of the broader cognitive endeavour has been registered—either
implicitly or explicitly—in different ways in a range of fields and disciplines. In
most cases, recognition of this downturn is prompted by factors less obvious than the
failure of a hypothesis as central as ‘strong AI’ has been to educational technology.
For example, the psychologist George Miller, whose enthusiastic assessment of the
‘birth’ of cognitive science was used to open this paper, admitted by 1990 that this
same scientific movement now appeared a ‘victim’. (In Miller’s own generous
assessment, it appeared victimized by nothing less than ‘its own success’ [Miller,
as cited in Bruner, 1991, p. 4].)
It is also around 1990 that a program of psychology that has subsequently
labeled itself ‘post-cognitivist’ (Potter, 2000) began to emerge, defining itself in
terms of its opposition to the mechanistic, deterministic and individualistic emphases
of cognitivism (see Still & Costall, 1991; Gergen & Gigerenzer, 1991). Areas of design,
too, such as human interface and even instructional design have shown a clear
loosening from their cognitivist moorings. A number of book-length studies have
recently appeared in the area of interface and systems engineering, systematically
applying ethnomethodological, phenomenological and other techniques to this
conventionally positivistic area of endeavour (e.g. Dahlbom & Mathiassen, 1993;
Dourish, 2001; Svanæs, 1999). Traditionally conceived in terms of engineering to
account for biologically-given human factors, this area has been making increasing
use of understandings of culturally contingent activities, meanings and artifacts.
Similarly, recent investigations into the everyday application of cognitively-based
instructional design methods and understandings have highlighted the irreducible
importance of ‘“moment-by-moment contingencies” not readily accommodated by
the algorithmic prescriptions of these same methods’ (Suchman, 1987, as cited in
Streibel, 1995, p. 131; see also Schwier, Campbell & Kenny, 2004). This has led
some to the conclusion that such ‘cognitive-science-based instructional systems
[ultimately] serve the “human interests” of the cognitive science community rather
than those of “the learner”. As such, these systems need to be radically re-considered’
(Streibel, 1995, p. 159).
Of course, this radical reconsideration extends far beyond specific disciplinary
activity and particular research programs. The political and practical implications
of the crisis of cognitivism are themselves radical and far reaching. For the
cognitivist vision of the symbiotic inter-operation of mind and machine has been
used to advocate much more than simply the use of cognitive or knowledge building
tools in existing classrooms and schools. This vision has also been closely associated

with imaginings of the inevitable decline or even the cataclysmic destruction of
entire educational institutions and systems. Seymour Papert, an aforementioned
student of the cognitive psychologist Jean Piaget, predicted in 1984 that the ‘power’
of the formal models provided by the computer was sufficient to ‘blow up the
school’—to end its social utility as an institution (as cited in Cuban, 1986, p. 72;
see also Papert, 1980). In keeping with this prediction, Papert has since made
claims that (for example) ‘game designers have a better take on the nature of
learning than curriculum designers’ (Papert, 1998, p. 88) and has advocated (and
in some cases achieved) the widespread acquisition of laptops for schoolchildren
(e.g. Witchalls, 2005). Papert has been consistently repeating variations on his
bold prophecy of the ‘end’ over the past 20 years (e.g. Papert, 1996; Papert, 2004).
He has been joined in making such predictions by other self-styled prophets of
educational doom and re-birth. These have included those advocating the use of
artificially intelligent tutors and instructional agents to achieve ‘scalability’ in
education (e.g. Taylor, 2002; Laws et al., 2003). These also include many of the
most prominent advocates of ‘learning objects’ and ‘e-learning standardization’
(e.g. Hodgins, 2004; Rehak, 2004)—who have also happily described how their
‘technology will destroy schools’ (Wiley, 2004).
This destruction of conventional educational practices and institutions through
the powerful symbiosis (of one form or another) of human and machine is both
politically powerful and perhaps superficially enticing. It is also substantially, if not
explicitly, aligned with an understanding of computers as devices of control and
management—control and management of things digital, physical and also human
(see, for example, Weizenbaum, 1976; Feenberg, 2002; Gere, 2002). Especially
among some of those advocating scalability, standardization and learning objects,
this computational control and management is envisioned on a grand scale: It
extends from models and monitoring of the smallest behaviours, needs and desires
of the learner to grand visions for a universally accessible infrastructure and
resources to respond to these individual requirements. Hodgins, for example,
envisions ‘a world where every person and every file can be connected directly, one
to one; indirectly through webs of such connections’ (Hodgins, in press). Hodgins
further envisions this as being accomplished not only through the use of sophisti-
cated technologies, but also through the deployment of ‘tremendous gains in
understanding how people learn’, recently made by ‘cognitive scientists’ (Hodgins,
in press).
But the failure of the cognitivist project undercuts these bold visions perhaps
even more radically than it does the other, more modest attempts at studying
cognitive or knowledge building tools. Given this fact, it seems all the more important
that we should look beyond any powerful or revolutionary correspondence between
mind and machine for an understanding of the educational value of information
and communication technologies. In addition, we might also want to recognize that
cultural change is more complex, and that institutions and practices more robust
and flexible than Papert and others might allow. Instead of looking to any alleged
correspondence between mind and machine as the grounds for institutional and
practical change, we might instead more productively understand the evolution of these
communities and practices in social, communicative and other, non-computational
terms. It is only in these ways that the educational value of information technologies—
otherwise so readily valorised in our society—can be given renewed and theoretically-
grounded understanding.


References

Amoroso, R. L. (1997) The Theoretical Foundations for Engineering a Conscious Quantum Computer, in: M. Paprzycki & X. Wu (eds), Mind Versus Computer: Were Dreyfus and Winograd right? (Amsterdam, IOS Press).
Ausubel, D. P. (1960) The Use of Advance Organizers in the Learning and Retention of Meaningful Verbal Material, Journal of Educational Psychology, 51.
Bechtel, W., Abrahamsen, A. & Graham, G. (1998) The Life of Cognitive Science, in: W. Bechtel & G. Graham (eds), A Companion to Cognitive Science (Malden, MA, Blackwell Publishers Ltd).
Bereiter, C. & Scardamalia, M. (1992) Cognition and Curriculum, in: P. W. Jackson (ed.), Handbook of Research on Curriculum (Old Tappan, NJ, Macmillan).
Boden, M. (1996) Artificial Intelligence, in: D. M. Borchert (ed.), The Encyclopedia of Philosophy: Supplement (New York, Macmillan).
Bruner, J. (1991) Acts of Meaning (Cambridge, MA, Harvard University Press).
Buchanan, J. (2004) Brief History of Artificial Intelligence. Accessed December 10, 2004 at:
Craik, F. & Lockhart, R. (1972) Levels of Processing: A framework for memory research, Journal of Verbal Learning & Verbal Behavior, 11.
Cuban, L. (1986) Teachers and Machines: The classroom use of technology since 1920 (New York, Teachers College Press).
Dahlbom, B. & Mathiassen, L. (1993) Computers in Context: The philosophy and practice of systems design (London, Blackwell Publishers Ltd).
Davis, M. (2000) The Universal Computer: The road from Leibniz to Turing (New York, Norton).
Dillenbourg, P. (1996) Distributing Cognition Over Brains and Machines, in: S. Vosniadou, E. De Corte, B. Glaser & H. Mandl (eds), International Perspectives on the Psychological Foundations of Technology-Based Learning Environments (Mahwah, NJ, Lawrence Erlbaum).
Doll, W. & Gough, N. (2003) Curriculum Visions (New York, Peter Lang).
Domjan, M. (1993) The Principles of Learning and Behavior (California, Brooks/Cole Publishing Co).
Dourish, P. (2001) Where the Action Is: The foundations of embodied interaction (Cambridge, MA, MIT Press).
Dreyfus, H. L. (1992) What Computers Still Can’t Do: A critique of artificial reason (Cambridge, MA, MIT Press).
Feenberg, A. (2002) Transforming Technology: A critical theory revisited (Oxford, Oxford University Press).
Fenwick, T. (2003) Learning Through Experience: Troubling orthodoxies and intersecting questions (Melbourne, FL, Krieger).
Freedman, D. H. (1990) Common Sense and the Computer, Discover, 11:8.
Gadamer, H. G. (1989) Truth and Method (New York, Continuum).
Gams, M. (1997) Is Weak AI Stronger than Strong AI?, in: M. Paprzycki & X. Wu (eds), Mind Versus Computer: Were Dreyfus and Winograd right? (Amsterdam, IOS Press).
Gardner, H. (1985) The Mind’s New Science: A history of the cognitive revolution (New York, Basic Books).
Gere, C. (2002) Digital Culture (London, Reaktion Books).
Gergen, K. J. & Gigerenzer, G. (1991) Cognitivism and its Discontents: An introduction to the issue, Theory and Psychology, 1:4. Accessed March 22, 2005 at: http://www.psych.ucalgary.ca/
Goldsmith, J. (1994) CYC-O, Wired, 2:04. Accessed December 10, 2004, at: http://wired.com/
Greeno, J. G. (1998) The Situativity of Knowing, Learning, and Research, American Psychologist,
Haugeland, J. (1985) Artificial Intelligence: The very idea (Cambridge, MA, MIT Press).
Hodgins, W. (2004) Interviews—me-Learning: What if the impossible isn’t? Online Educa Berlin
2004: SpecialSite. Accessed March 22, 2005 at: http://www.global-learning.de/g-learn/
Hodgins, W. (in press) Into the Future of meLearning EVERY ONE LEARNING; Imagine If: The Impossible Isn’t!, in: E. Masie (ed.), Learning: Rants, raves, and reflections—a collection of passionate thoughts about the world of learning.
Hollan, J., Hutchins, E. & Kirsch, D. (2000) Distributed Cognition: Toward a new foundation
for human-computer interaction research, ACM Transactions on Computer-Human Inter-
action, 7:2.
Jacobsen, M. (2004) Cognitive Visualizations and the Design of Learning Technologies, International Journal of Learning Technology, 1:1.
Jonassen, D. (1996) Computers as Mindtools for Schools: Engaging critical thinking (Upper Saddle
River, NJ, Prentice-Hall).
Jonassen, D. (2003) Using Cognitive Tools to Represent Problems, Journal of Research on
Technology in Education, 35:3.
Jonassen, D. H., Peck, K. L. & Wilson, B. G. (1999) Learning with Technology: A constructivist
perspective (Upper Saddle River, NJ, Merrill).
Kozma, R. B. (1987) The Implications of Cognitive Psychology for Computer-based Learning
Tools, Educational Technology, 27.
Lajoie, S. P. (ed.) (2000) Computers as Cognitive Tools (Volume 2): No more walls (Mahwah, NJ,
Lawrence Erlbaum Associates).
Laws, R. D., Howell, S. L. & Lindsay, N. K. (2003) Scalability in Distance Education: ‘Can we
have our cake and eat it too?’ Online Journal of Distance Learning Administration. 7(4).
Accessed March 22, 2005 at: http://www.westga.edu/∼distance/ojdla/winter64/laws64.htm
McCarthy, J., Minsky, M. L., Rochester, N. & Shannon, C. E. (1955) A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. Accessed March 22, 2005 at:
Michie, D. (1997) Strong AI: An adolescent disorder, in: M. Paprzycki & X. Wu (eds), Mind
Versus Computer: Were Dreyfus and Winograd right? (Amsterdam, IOS Press).
Miller, G. A. (2003) The Cognitive Revolution: A historical perspective, Trends in Cognitive
Sciences, 7:3. Accessed December 10 at: http://www.cogsci.princeton.edu/∼geo/Miller.pdf
Papert, S. (1980) Mindstorms: Children, computers, and powerful ideas (New York, Basic Books).
Papert, S. (1996) The Connected Family: Bridging the digital generation gap (Atlanta, GA,
Longstreet Press).
Papert, S. (1998) Does Easy Do It? Children, games, and learning, Game Developer, June.
Papert, S. (2004) The Tipping Point. Program of the International Conference on Educational Multimedia. Available at: http://www.rima2004.org/eng/programmation/default.html
Paprzycki, M. & Wu, X. (eds) (1997) Mind Versus Computer: Were Dreyfus and Winograd right? (Amsterdam, IOS Press).
Pea, R. D. (1985) Beyond Amplification: Using computers to reorganize human mental
functioning, Educational Psychologist, 20, pp. 167–182.
Pea, R. D. (1994) Seeing What We Build Together: Distributed multimedia learning environ-
ments for transformative communications, Journal of the Learning Sciences, 3:3, pp. 283–
Piaget, J. (1971) Insights and Illusions of Philosophy (New York, Meridian Books).
Piaget, J. & Inhelder, B. (1973) Memory and Intelligence (New York, Basic Books).
Potter, J. (2000) Post-cognitive Psychology, Theory & Psychology, 10:1, pp. 31–37. Accessed
December 10 at: http://www.lboro.ac.uk/departments/ss/depstaff/staff/bio/JPpages/
Pylyshyn, Z. (1981) Complexity and the Study of Artificial and Human Intelligence, in:
J. Haugeland (ed.), Mind Design (Bradford Books, MIT Press).
Rehak, D. (2004) Good&Plenty, Googlezon, Your Grandmother and Nike: Challenges for
ubiquitous learning & learning technology. Accessed March 14, 2005 at: http://
Salomon, G. (1988) AI in Reverse: Computer tools that turn cognitive, Journal of Educational
Computing Research, 4:2, pp. 123–139.
Salomon, G. (1998) (ed.) Distributed Cognitions. Psychological and educational considerations (New
York, Cambridge University Press).
Scardamalia, M. (2003) Knowledge Forum (Advances beyond CSILE), Journal of Distance
Education, 17 (Suppl. 3, Learning Technology Innovation in Canada), pp. 23–28.
Scardamalia, M. & Bereiter, C. (2003) Knowledge Building, in: Encyclopedia of Education, 2nd edn (New York, Macmillan Reference) pp. 1370–1373.
Schwier, R., Campbell, K. & Kenny, R. (2004) Instructional Designers’ Observations About
Identity, Communities of Practice and Change Agency, Australasian Journal of Educational
Technology, 20:1, pp. 69–100.
Schroeder, M. (1997) A Brief History of the Notation of Boole’s Algebra, Nordic Journal of
Philosophical Logic, 2:1.
Schweizer, P. (1997) Computation and the Science of Mind, in: M. Paprzycki & X. Wu (eds),
Mind Versus Computer: Were Dreyfus and Winograd right? (Amsterdam, IOS Press).
Searle, J. (1981) Minds, Brains, and Programs, in: J. Haugeland (ed.), Mind Design (Bradford
Books, MIT Press).
Schank, R. & Cleary, C. (1995) Engines for Education (Mahwah, NJ, Lawrence Erlbaum Associates).
Still, A. & Costall, A. (eds) (1991) Against Cognitivism: Alternative foundations for cognitive
psychology (Hemel Hempstead, Harvester Wheatsheaf).
Streibel, M. J. (1995) Instructional Plans and Situated Learning: The challenge of Suchman’s theory of situated action for instructional designers and instructional systems, in: G. J. Anglin (ed.), Instructional Technology: Past, present, and future (Englewood, CO, Libraries Unlimited).
Suchman, L. A. (1987) Plans and Situated Actions (Cambridge, Cambridge University Press).
Sun, R. (1998) Artificial Intelligence, in: W. Bechtel & B. Graham (eds), A Companion to Cognitive
Science (Malden, MA, Blackwell Publishers Ltd).
Svanæs, D. (1999) Understanding Interactivity: Steps to a phenomenology of human-computer
interaction. Unpublished Dissertation, available at: http://www.idi.ntnu.no/∼dags/
Taylor, J. C. (2002) Automating e-Learning: The higher education revolution, in: S. E. Schubert, B. Reusch & N. Jesse (eds), Informatik bewegt: Informatik 2002, 32. Jahrestagung der Gesellschaft für Informatik, pp. 64–82.
Thagard, P. (2002) Cognitive Science, in: Stanford Encyclopedia of Philosophy. http://
Vygotsky, L. S. (1978) Mind in Society: The development of higher psychological processes
(Cambridge, MA, Harvard University Press).
Waldrop, M. M. (2002) The Dream Machine: J.C.R. Licklider and the revolution that made
computing personal (New York, Penguin).
Watt, S. (1997) Naive Psychology and Alien Intelligence, in: M. Paprzycki & X. Wu (eds), Mind
Versus Computer: Were Dreyfus and Winograd right? (Amsterdam, IOS Press).
Weizenbaum, J. (1976) Computer Power and Human Reason: From judgment to calculation (San
Francisco, CA, W. H. Freeman).
West, C. K., Farmer, J. A. & Wolff, P. M. (1991) Instructional Design: Implications from cognitive
science (Englewood Cliffs, NJ, Prentice Hall).
Witchalls, C. (2005) Bridging the Digital Divide, in: Guardian Unlimited. February 17, 2005.
Accessed March 22, 2005 at: http://www.guardian.co.uk/print/0,3858,5128106-
Wiley, D. (2004) How Technology will Destroy Schools. Accessed March 14, 2005 at: http://
Winograd, T. (1991) Thinking Machines: Can there be? Are we?, in: J. Sheehan & M. Sosna
(eds), The Boundaries of Humanity: Humans, animals, machines. (Berkeley, CA, University
of California Press) pp. 198–223.
Wolfe, P. (1998) Revisiting Effective Teaching, Educational Leadership, 56:3, pp. 61–64.
