

EDITORIAL BOARD
Michael Lissack, Institute for the Study of Coherence and Emergence
<http://www.emergence.org/>; <lissack@lissack.com>
Max Boisot, Judge Management Institute, Cambridge, <Boisot@attglobal.net>
David Boje, New Mexico State University, <dboje@NMSU.Edu>
Jerry L.R. Chandler, George Mason University, <jlrchand@erols.com>
Robert Chia, University of Essex, <rchia@essex.ac.uk>
Colin Crook, Citicorp (retired), <Colin_Crook@email.msn.com>
Lynn Crawford, University of Technology, Sydney <Lynn.Crawford@uts.edu.au>
Keith Devlin, St. Mary's College, <devlin@csli.stanford.edu>
Kevin Dooley, Arizona State University, <kevin.dooley@asu.edu>
William Frederick, University of Pittsburgh, <billfred@katz.pitt.edu>
Raghu Garud, New York University, <rgarud@exchange.stern.nyu.edu>
Ben Goertzel, WebMind, <ben@goertzel.org>
Jeffrey Goldstein, Adelphi University, <jegolds@ibm.net>
Hugh Gunz, University of Toronto, <gunz@fmgmt.mgmt.utoronto.ca>
John Hassard, Manchester School of Management, <john.s.hassard@umist.ac.uk>
Heather Hopfl, Newcastle Business School, <h.hofpl@unn.ac.uk>
Alicia Juarrero, Prince George’s Community College, <juarreax@pg.cc.md.us>
Stu Kauffman, BiosGroup, <stu@biosgroup.com>
Ben Kutz, Ideatree, <ideatree@PLANET.EON.NET>
Hugo Letiche, University for Humanist Studies, <h.letiche@uvh.nl>
Steve Maguire, McGill University, <smaguire@management.mcgill.ca>
Bill McKelvey, University of California at Los Angeles, <mckelvey@anderson.ucla.edu>
Yasmin Merali, University of Warwick, <Yasmin.Merali@warwick.ac.uk>
Don Mikulecky, Virginia Commonwealth University, <mikulecky@hsc.vcu.edu>
Eve Mitleton-Kelly, London School of Economics, <E.MITLETON-KELLY@lse.ac.uk>
John Perry, Stanford University, <john@csli.stanford.edu>
Stanley Peters, Stanford University, <peters@csli.stanford.edu>
Steven Phelan, University of Texas, Dallas, <sphelan@utdallas.edu>
Larry Prusak, IBM Consulting, <lprusak@us.ibm.com>
Kurt Richardson, Institute for the Study of Coherence and Emergence, <kurt@kurtrichardson.com>
Jan Rivkin, Harvard Business School, <jrivkin@hbs.edu>
Johan Roos, Imagination Lab Foundation, <johan@imagilab.org>
Duska Rosenberg, Royal Holloway Institute, <d.rosenberg@rhbnc.ac.uk>
John Seely Brown, XEROX PARC, <John_Seely_Brown@pa.xerox.com>
Haridimos Tsoukas, University of Cyprus, <htsoukas@atlas.pba.ucy.ac.cy>
Willard Uncapher, University of Texas, <paradox@home.actlab.utexas.edu>
Robert Lincoln Wood, Scient Corporation, <rwood@scient.com>
Production Editor: Rebecca Vogt, Lawrence Erlbaum Associates, Inc., <rvogt@erlbaum.com>
Editorial Assistant: Jacco van Uden, <jacco@isce.edu>

Subscriber Information: Emergence is published four times a year by Lawrence Erlbaum Associates, Inc., 10
Industrial Avenue, Mahwah, NJ 07430-2262. Subscriptions are available only on a calendar-year basis.
Printed: Journal subscription rates are US $45 for individuals, US $160 for institutions, and US $20 for students within
the United States and Canada; US $75 for individuals, US $190 for institutions, and US $40 for students outside the
United States and Canada. Order printed subscriptions through the Journal Subscription Department, Lawrence
Erlbaum Associates, Inc., 10 Industrial Avenue, Mahwah, NJ 07430-2262.
Electronic: Full price print subscribers to Volume 2, 2000 are entitled to receive the electronic version free of
charge. Electronic only subscriptions are available at a reduced subscription price. Please visit the LEA Web site
at http://www.erlbaum.com for complete information.
Send information requests and address changes to the Journal Subscription Department. Address changes should
include the mailing label or a facsimile. Claims for missing issues cannot be honored beyond 4 months after mail-
ing date. Duplicate copies cannot be sent to replace issues not delivered due to failure to notify publisher of
change of address.

Copyright © 2000, Lawrence Erlbaum Associates, Inc. No part of this publication may be reproduced, stored in
a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, micro-
filming, recording, or otherwise, without permission of the publisher. Send special requests for permission to the
Permissions Department, Lawrence Erlbaum Associates, Inc., 10 Industrial Avenue, Mahwah, NJ 07430-2262.
Printed in the United States of America. ISSN 1521-3250.
Emergence
A Journal of Complexity Issues in Organizations and Management

a publication of The Institute for the Study of Coherence and Emergence

Volume #2, Issue #4, 2000


Special Editors Yasmin Merali & David J. Snowden,
Information Systems Research Unit, Warwick Business School

Editor’s Note 3

Special Editors’ Note: Complexity and Knowledge Management 5

Knowledge, Complexity, and Understanding
Paul Cilliers 7

The Organic Metaphor in Knowledge Management
Yasmin Merali 14

The Emergence of Knowledge in Organizations
Ralph Stacey 23

Out of Control into Participation
Brian Goodwin 40

New Wine in Old Wineskins: From Organic to Complex Knowledge Management Through the Use of Story
David J. Snowden 50

The Concept of Emergence in Social Science: Its History and Importance
Geoffrey M. Hodgson 65

Knowledge, Ignorance, and Learning
Peter M. Allen 78

Knowledge as Action, Organization as Theory: Reflections on Organizational Knowledge
Haridimos Tsoukas 104

“Shall I Compare Thee to … an Organization?”
Max Boisot & Jack Cohen 113

Complex Information Environments: Issues in Knowledge Management and Organizational Learning
Duska Rosenberg 136

Handling Complexity with Self-Organizing Fractal Semantic Networks
Jürgen Klenk, Gerd K. Binnig, & Günter Schmidt 151

About the Authors 163
EMERGENCE, 2(4), 3–4
Copyright © 2000, Lawrence Erlbaum Associates, Inc.

Editor’s Note

At the close of our second year, Emergence is pleased to
present this special issue on knowledge management. To
gather Paul Cilliers, Yasmin Merali, Ralph Stacey, Brian
Goodwin, David Snowden, Geoffrey Hodgson, Peter
Allen, Haridimos Tsoukas, Max Boisot, Jack Cohen, Duska Rosenberg,
Jürgen Klenk, Gerd Binnig, and Günter Schmidt in one place would be
the KM meeting of the year. The list of authors is unprecedented and the
collected wisdom nonpareil. For this all of us are indebted to Yasmin
Merali and Dave Snowden, who served as special editors for the issue,
and to the Institute for Knowledge Management (an affiliate of IBM),
which sponsored some of the research contained herein.
Knowledge management has developed a bad reputation in corporate
circles, due to a plethora of failed projects, poorly conceived initiatives,
and frankly too much hype too soon. As Thoreau wrote about building
castles in the air, “It is fine so long as you put a foundation beneath.” The
authors in this issue are dedicated to creating a sound foundation on
which knowledge management practice can rest. In so doing, much of the
hype of the past five years about unlocking the value of networked know-
ledge workers might become reality. The increasingly global nature of
organizations, enabled by technology and to a lesser extent by the growth
of the internet, has increased network connections to the point where the
old infrastructure of BPR, the balanced scorecard, and the learning
organization has started to break down.
Dave Snowden writes, “Knowledge management is a difficult and
challenging subject that has been subject to oversimplistic approaches
from a variety of authors and technology vendors.” The work of the
authors in both this issue and the issue before it is very different. No
longer will the august members of Boston’s Lunar Society (a monthly
meeting of the knowledge management “gurus”) need to demur from
using the words “knowledge management” when discussing knowledge
management practice. More importantly, no longer will CEOs roll their
eyes when hearing the words.
Knowledge, in my idiosyncratic definition, is whatever it takes to
create a willingness to act. Thus, knowledge management is about creat-
ing environments, opportunities, values, conditions, and constraints,
which promote such willingness. This issue of Emergence is an act of such
creation.

Michael Lissack
Editor

EMERGENCE, 2(4), 5–6
Copyright © 2000, Lawrence Erlbaum Associates, Inc.

Special Editors’ Note: Complexity and Knowledge Management

The content of this special issue of Emergence has its origins,
but not its entirety, in a meeting at Bedfont Lakes, UK, enti-
tled “Complexity and Knowledge Management.” The meet-
ing was attended by a number of academics and
practitioners from a range of disciplines and backgrounds. Many are con-
tributors to this issue and many of the topics are closely related to the dis-
cussions at the meeting. The title for the special issue was chosen to
represent a movement away from Tayloristic models of knowledge man-
agement to metaphors that are closer to the organic in their complexity.
Scholars and practitioners in management have always drawn on a
seemingly eclectic array of disciplines in order to gain insights into the
behavior and management of organizations. The emergence of complexity
“theory” and the labeling of organizations as complex adaptive systems
have created common ground for individuals from different disciplines to
engage in discourse and debate about the characteristics and capabilities of
organizations over time and space. This is a relatively new movement and
much of the work in the domain is exploratory. For academics, the multi-
paradigmatic nature of this work is both exhilarating and dangerous. There
is the potential to gain new insights by looking at organizations through dif-
ferent lenses, but there is also a danger of borrowing inappropriately from
“foreign” disciplines. The latter may be a result of inadequate understand-
ing of the fundamental concepts of unfamiliar disciplines, of facile and shal-
low use of borrowed metaphor, or of succumbing to the temptation of
sliding from metaphorical reference to attribution of analogous properties
based on an anthropocentric Weltanschauung.

Knowledge management as a subject for management attention rose
in popularity over the late 1990s. In the early years, its “productization”
by management consultants became inextricably linked with computer-
based knowledge management environments. The organizational imper-
ative was to extract so-called tacit knowledge from individuals and to
convert it into explicit knowledge that could be codified and stored in
computerized knowledge repositories for perpetual access. In the later
part of the decade there were expositions on the futility of such an
endeavor, asserting that knowledge and the social systems in which it
resided were too complex to be dealt with simplistically. The mainstream
discourse on knowledge management had reached an impasse with this
polarization of views.
However, away from the popular knowledge management journals, a
number of researchers and practitioners were interested in looking at
organizations as complex adaptive systems. Distancing themselves from
the schism between IT- and OD-dominated perspectives, they placed a
high value on the utilization of multiple perspectives for making sense of
emergent organizational properties in dynamic settings. The issues of
information, intelligence, meaning, values, action, and human interaction
raised in the knowledge management arena were integral to this work.
The meeting at Bedfont Lakes was designed to introduce this type of
thinking into the wider discourse on knowledge management.
Fourteen of us spent two days talking over what excited us about
research and practice in this arena. Among us we had perspectives from the
social and natural sciences, including biology, physics, linguistics, econom-
ics, philosophy, anthropology, computer science, strategy, and management.
Many of us had multidisciplinary backgrounds, and all of us were interested
in emergent properties of social systems. The purpose of the meeting was to
provide a space and a platform for uninhibited discourse, to generate dis-
cussions around the issues that were important to address in future
research. Far from seeking a consensus to define “valid” complexity research
in knowledge management, diversity of perspectives was celebrated. The
aim was to catalyze the emergence of a self-perpetuating movement.
The diversity of papers in this issue reflects the ethos of the Bedfont
Lakes meeting. Some of the papers contain contentious views, and the
purpose of the issue is to encourage debate.

Yasmin Merali & David J. Snowden


Special Editors

EMERGENCE, 2(4), 7–13
Copyright © 2000, Lawrence Erlbaum Associates, Inc.

Knowledge, Complexity, and Understanding
Paul Cilliers

The strange thing about television is that it doesn’t tell you everything. It
shows you everything about life on earth, but the mysteries remain.
Perhaps it is in the nature of television.
Thomas Jerome Newton in The Man Who Fell to Earth

During most events concerned with knowledge manage-
ment, someone starts a presentation by saying that they
will not revisit the problem of the distinction between
knowledge and data. Usually a sigh goes through the audi-
ence, seemingly signifying relief. But why relief? Is it because they will
not be bored with an issue that has been resolved already, or because they
are glad that they will not be confronted with these thorns again? I sus-
pect that they want to believe that the first reason is the case, but that in
fact it is the second.
In what follows I therefore want to problematize the notion of “know-
ledge.” I will argue that when talking about the management of “know-
ledge,” whether by humans or computers, there is a danger of getting
caught in the objectivist/subjectivist (or fundamentalist/relativist)
dichotomy. The nature of the problem changes if one acknowledges the
complex, interactive nature of knowledge. These arguments, presented
from a philosophical perspective, should have less influence on the prac-
tical techniques employed in implementing knowledge management sys-
tems than on the claims made about what is actually achieved by these
systems.


THE TRADITIONAL TRAP

The issues around knowledge—what we can know about the world, how
we know it, what the status of our experiences is—have been central to
philosophical reflection for ages. Answers to these questions, admittedly
oversimplified here, have traditionally taken one of two forms. On the
one hand there is the belief that the world can be made rationally trans-
parent, that with enough hard work knowledge about the world can be
made objective. Thinkers like Descartes and Habermas are often framed
as being responsible for this kind of attitude. It goes under numerous
names including positivism, modernism, objectivism, rationalism, and
epistemological fundamentalism. On the other hand, there is the belief
that knowledge is only possible from a personal or cultural-specific per-
spective, and that it can therefore never be objective or universal. This
position is ascribed, correctly or not, to numerous thinkers in the more
recent past like Kuhn, Rorty, and Derrida, and its many names include
relativism, idealism, postmodernism, perspectivism, and flapdoodle.
Relativism is not a position that can be maintained consistently,1 and of
course the thinkers mentioned above have far more sophisticated positions
than portrayed in this bipolar caricature. There are also recent thinkers
who attempt to move beyond the fundamentalist/relativist dichotomy,2 but
it seems to me that when it comes to the technological applications of
theories of knowledge, there is an implicit reversion to one of these tradi-
tional positions. For those who want to computerize knowledge, knowledge
has to be objective. It must be possible to gather, store, and manipulate
knowledge without the intervention of a subject. The critics of formalized
knowledge, on the other hand, usually fall back on arguments based on sub-
jective or culture-specific perspectives to show that it is not possible, that
we cannot talk about knowledge independently of the knowing subject.
I am of the opinion that a shouting match between these two positions
will not get us much further. The first thing we have to do is to acknow-
ledge the complexity of the problem with which we are dealing. This will
unfortunately not lead us out of the woods, but it should enable a discus-
sion that is more fruitful than the objectivist/subjectivist debate.

COMPLEXITY AND UNDERSTANDING

An understanding of knowledge as constituted within a complex system
of interactions3 would, on the one hand, deny that knowledge can be seen
as atomized “facts” that have objective meaning. Knowledge comes to be
in a dynamic network of interactions, a network that does not have dis-
tinctive borders. On the other hand, this perspective would also deny that
knowledge is something purely subjective, mainly because one cannot
conceive of the subject as something prior to the “network of knowledge,”
but rather as something constituted within that network. The argument
from complexity thus wants to move beyond the objectivist/subjectivist
dichotomy. The dialectical relationship between knowledge and the sys-
tem within which it is constituted has to be acknowledged. The two do
not exist independently, thus making it impossible to first sort out the sys-
tem (or context), and then identify the knowledge within the system. This
codetermination also means that knowledge and the system within which
it is constituted are in continual transformation. What appears to be
uncontroversial at one point may not remain so for long.
The points above are just a restatement of the claim that complex sys-
tems have a history, and that they cannot be conceived of without taking
their context into account. The burning question at this stage is whether
it is possible to do that formally or computationally. Can we incorporate
the context and the history of a system into its description, thereby mak-
ing it possible to extract knowledge from it? This is certainly possible (and
very useful) in the case of relatively simple systems, but with complex
systems there are a number of problems. These problems are, at least to
my mind, not of a metaphysical but of a practical nature.
The first problem has to do with the nonlinear nature of the inter-
actions in a complex system. From this it can be argued (see Cilliers,
1998: 9–10 and Richardson et al., 2000) that complexity is incompress-
ible. There is no accurate (or, rather, perfect) representation of the system
that is simpler than the system itself. In building representations of open
systems, we are forced to leave things out, and since the effects of these
omissions are nonlinear, we cannot predict their magnitude. This is not
an argument claiming that reasonable representations should not be con-
structed, but rather one that the unavoidable limitations of the represen-
tations should be acknowledged.
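
To make the point about omissions concrete, here is a minimal sketch (not part of the original argument; a logistic map is used purely as an assumed stand-in for nonlinear interaction). A “reduced” trajectory that leaves out a detail of negligible size soon diverges from the full one, so the magnitude of an omission’s effect cannot be read off the magnitude of the omission itself.

    # Minimal sketch: a logistic map as a stand-in for a nonlinear system.
    # The "reduced" trajectory omits a detail of size 1e-9; the divergence
    # that follows is not proportional to the omission.
    def logistic(x, r=3.9):
        return r * x * (1 - x)

    full, reduced = 0.5, 0.5 + 1e-9
    for step in range(1, 61):
        full, reduced = logistic(full), logistic(reduced)
        if step % 15 == 0:
            print(f"step {step:2d}: divergence = {abs(full - reduced):.6f}")
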
This problem—which can be called the problem of boundaries4—is
compounded by the dynamic nature of the interactions in a complex sys-
tem. The system is constituted by rich interaction, but since there is an
abundance of direct and indirect feedback paths, the interactions are con-
stantly changing. Any activity in the system reverberates throughout the
system, and can have effects that are very difficult to predict; once again
as a result of the large number of nonlinear interactions. I do not claim
that these dynamics cannot be modeled. It could be possible for richly
connected network models to be constructed. However, as soon as these
networks become sizable, they become extremely difficult to train. It also
becomes rather hard to figure out what is actually happening in them.
This is no surprise if one grants the argument that a model of a complex
system will have to be as complex as the system itself. Reduction of com-
plexity always leads to distortion.
What are the implications of the arguments from complexity for our
understanding of the distinction between data and knowledge? In the
first place, it problematizes any notion that data can be transformed into
knowledge through a pure, mechanical, and objective process. However,
it also problematizes any notion that would see the two as totally differ-
ent things. There are facts that exist independently of the observer of
those facts, but the facts do not have their meaning written on their faces.
Meaning only comes to be in the process of interaction. Knowledge is
interpreted data. This leads us to the next big question: What is involved
in interpretation, and who (or what) can do it?

KNOWLEDGE AND THE SUBJECT

The function of knowledge management seems to be either to supple-
ment the efforts of a human subject who has to deal with more data than
is possible, or to free the subject up for other activities (perhaps to do
some thinking for a change). Both these functions presuppose that the
human subject can manipulate knowledge. This realization leads to ques-
tions in two directions. One could debate the efficiency of human strate-
gies to deal with knowledge and then attempt to develop them in new
directions. This important issue will not be pursued further here.
There is another, perhaps philosophically more basic, question, and
that has to do with how the human subject deals with knowledge at all.
Given the complexities of the issue, how does the subject come to forms
of understanding, and what is the status of knowledge as understood by a
specific subject? This has been pursued by many philosophers, especially
in the discipline known as hermeneutics. However, I am not aware that
this has occurred in any depth in the context of complexity theory.5 How
does one conceive of the subject as something that is not atomistically
self-contained, but is constituted through dynamic interaction?
Moreover, what is the relationship between such a subject and its under-
standing of the world? A deeper understanding of what knowledge is, and
how to “manage” it, will depend heavily on a better understanding of the
subject. This is a field of study with many opportunities.


Apart from calling for renewed effort in this area, I only want to make
one important remark. It seems that the development of the subject from
something totally incapable of dealing with the world on its own into
something that can begin to interpret—and change—its environment is a
rather lengthy process. Childhood and adolescence are necessary phases
(sometimes the only phases) in human development. In dealing with the
complexities of the world there seems to be no substitute for experience
(and education). This would lead one to conclude that when we attempt
to automate understanding, a learning process will also be inevitable.
This argument encourages one to support computing techniques that
incorporate learning (like neural networks) rather than techniques that
try to abstract the essence of certain facts and manipulate them in terms
of purely logical principles. Attempts to develop a better understanding
of the subject will not only be helpful in building machines that can man-
age knowledge, they will also help humans better understand what they
do themselves. We should not allow the importance of machines (read
computers) in our world to lead to a machine-like understanding of what
it is to be human.
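
As a hedged illustration of the earlier contrast between techniques that incorporate learning and techniques that manipulate abstracted facts by purely logical principles, the sketch below is an invented example (data, parameters, and task are all assumptions): a single perceptron learns the logical AND relation from examples rather than having the rule encoded by hand.

    # Invented example: a perceptron that learns AND from examples,
    # instead of applying a hand-written logical rule.
    import random

    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b, lr = [random.uniform(-1, 1), random.uniform(-1, 1)], 0.0, 0.1

    def predict(x):
        return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

    for _ in range(200):                      # repeated exposure to the examples
        for x, target in data:
            error = target - predict(x)
            w[0] += lr * error * x[0]
            w[1] += lr * error * x[1]
            b += lr * error

    print([predict(x) for x, _ in data])      # converges to [0, 0, 0, 1]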

IMPLICATIONS
In Nicholas Roeg’s remarkably visionary film The Man Who Fell to Earth
(1976), an alien using the name Thomas Jerome Newton (superbly played
by David Bowie) tries to understand human culture by watching tele-
vision, usually a whole bunch of screens at the same time. Despite the
immense amount of data available to him, he is not able to understand
what is going on directly. It is only through the actual experience of polit-
ical complexities, as they unfold in time, that he begins to understand. By
then he is doomed to remain earthbound.
I am convinced that something similar is at stake for all of us. Having
access to untold amounts of information does not increase our under-
standing of what it means. Understanding, and therefore knowledge, fol-
lows only after interpretation. Since we hardly understand how humans
manage knowledge, we should not oversimplify the problems involved in
doing knowledge management computationally. This does not imply that
we should not attempt what we can—and certain spectacular advances
have been made already—but that we should be careful in the claims we
make about our (often still to be finalized) achievements. The perspective
from complexity urges that, among others, the following factors should be
kept in mind:


◆ Although systems that filter data enable us to deal with large amounts
of it more effectively, we should remember that filtering is a form of
compression. We should never trust a filter too much.
◆ Consequently, when we talk of mechanized knowledge management
systems, we can (at present?) only use the word “knowledge” in a very
lean sense. There may be wonderful things to come, but at present I
do not know of any existing computational systems that can in any way
be seen as producing “knowledge.” Real breakthroughs are still
required before we will have systems that can be distinguished in a
fundamental way from database management. Good data manage-
ment is tremendously valuable, but cannot be a substitute for the
interpretation of data.
◆ Since human capabilities in dealing with complex issues are also far
from perfect, interpretation is never a merely mechanical process, but
one that involves decisions and values. This implies a normative
dimension to the “management” of knowledge. Computational sys-
tems that assist in knowledge management will not let us escape from
this normativity. Interpretation implies a reduction in complexity. The
responsibility for the effects of this reduction cannot be shifted away
on to a machine.
◆ The importance of context and history means that there is no substi-
tute for experience. Although different generations will probably
place the emphasis differently, the tension between innovation and
experience will remain important.

These considerations should assist in developing an understanding of


knowledge management that could be called “organic,” but perhaps also
“ethical.”

NOTES
1 If relativism is maintained consistently, it becomes an absolute position. From this one
can see that a relativist is nothing but a disappointed fundamentalist. However, this
should not lead one to conclude that everything that is called postmodern leads to this
weak position. Lyotard’s seminal work, The Postmodern Condition (1984), is subtitled A
Report on Knowledge. He is primarily concerned with the structure and form of differ-
ent kinds of knowledge, not with relativism. An informed reading of Derrida will also
show that deconstruction does not imply relativism at all. For a penetrating philosoph-
ical study of the problem, see Against Relativism (Norris, 1997).
2 The critical realism of Bhaskar (1986) is a good example.
3 Complex systems are discussed in detail in Cilliers (1998).
4 The problem of boundaries is discussed in more detail in Cilliers (2001).
5 An important contribution was made by reinterpreting action theory from the perspec-
tive of complexity (Juarrero, 1999). Some preliminary remarks, more specifically on
complexity and the subject, are made in Cilliers & De Villiers (2000).

The financial assistance of the National Research Foundation: Social Sciences and
Humanities (of South Africa) toward this research is hereby acknowledged. Opinions
expressed and conclusions arrived at are those of the author, and are not necessarily to be
attributed to the National Research Foundation.

REFERENCES
Bhaskar, R. (1986) Scientific Realism and Human Emancipation, London: Verso.
Cilliers, P. (1998) Complexity and Postmodernism: Understanding Complex Systems,
London: Routledge.
Cilliers, P. (2001) Boundaries, Hierarchies and Networks in Complex Systems (forthcoming).
Cilliers, P. & De Villiers, T. (2000) “The Complex ‘I,’” in Wheeler, W. (ed.), The Political
Subject, London: Lawrence & Wishart.
Juarrero, A. (1999) Dynamics in Action: Intentional Behavior as a Complex System,
Cambridge, MA: MIT Press.
Lyotard, J. F. (1984) The Postmodern Condition: A Report on Knowledge, Manchester, UK:
Manchester University Press.
Norris, C. (1997) Against Relativism: Philosophy of Science, Deconstruction and Critical
Theory, Oxford, UK: Blackwell.
Richardson, K., Cilliers, P., & Lissack, M. (2000) “Complexity Science: A ‘Grey’ Science for
the ‘Stuff in Between,’” Proceedings of the First International Conference on Systems
Thinking in Management, Geelong, Australia, 532–7.

EMERGENCE, 2(4), 14–22
Copyright © 2000, Lawrence Erlbaum Associates, Inc.

The Organic Metaphor in Knowledge Management
Yasmin Merali

Information technology has enabled the world to become increas-
ingly interconnected. The competitive context is growing more
complex and dynamic as the terrain for competition assumes
global dimensions and the focus of competition extends from the
occupation of physical space to that of cyberspace. An associated spatio-
temporal contraction (resulting from the speed with which information
traverses the globe and the ability to coordinate geographically distrib-
uted processes and activities across organizational boundaries in real
time) means that changes in the competitive environment affect organi-
zations more quickly. In many industries, this phenomenon is marked by
an increased rate of organizational transformation, with firms developing
or acquiring capabilities that often lie outside the traditional trajectories
of their native industry sector. Organizations are both of and in the com-
petitive context: while they are affected by changes in the competitive
context, their own actions influence that context and the behavior of other
players within it.
As communication constraints are rapidly diminishing, some material
and physical constraints are becoming more pronounced, with the impact
of environmental and economic disasters being transmitted more directly
across the world. For example, the recent Far Eastern economic crisis had
significant impacts on the West, while the impact of dumping First World
technology rejects (like CFC refrigerators) on the Third World is rapidly
catching up with us. Global warming and other environmental issues are
perceived to be important for the futures of the First and Third Worlds
alike. According to media coverage, the future of the planet is uncertain at
a macro level. Although this pronouncement of uncertainty does not gen-
erally drive businesses to behave more responsibly for the greater good, it
does promote an awareness of the interconnectedness of individual (per-
sonal, organizational, national) actions and global outcomes.
The notion of success in this type of uncertain and dynamic environ-
ment raises three issues. The first is related to change: How can an organ-
ization organize itself to change quickly enough, efficiently, and
appropriately in an environment that cannot be predicted? The second
issue is one of sustainability: How does an organization ensure that any
transformation it undertakes leaves it with the requisite capabilities to
deal effectively with future changes in the environment? And finally, how
can an organization ensure that its actions do not diminish those aspects
of the environment that favor its survival?
Biological systems display many of the characteristics that appear
desirable for organizational survival in this type of context: coordination,
robustness, requisite variety, and adaptability in the face of significant
changes in the environment. The organization of such systems is there-
fore of interest to those who believe that emergent properties arise from
relationships and interactions of constituent components of systems with
each other and with their environment. In the following sections we look
at the utility of metaphor in general and that of the organic metaphor in
particular to gain insights into the organization and being of socially con-
structed intelligent organizations in transformational contexts.

THE USE OF METAPHOR

There is no “real” expression and no real knowing apart from metaphor.
But deception on this point remains … The most accustomed metaphors,
the usual ones, now pass for truths and as standards for measuring the
rarer ones. The only intrinsic difference here is the difference between
custom and novelty, frequency and rarity.
Nietzsche (1872: 50–51)

There is a substantial body of work propounding that the use of metaphor
is necessary for any conceptualization of reality (e.g., Nietzsche, 1872;
Lakoff & Johnson, 1980; Morgan, 1986; Lakoff, 1987). In this section
we consider the role of metaphor in the development, articulation, and
establishment of knowledge claims.
At a fundamental level, metaphor facilitates an understanding of
complex things by making reference to a known concept in a direct and
implicit way. Because it implies a comparison between two apparently
unconnected things, it invites the recipient to construct at an abstract
level a meaningful (to the recipient) attribution of aspects of correspon-
dence between the two things. By promoting exploration (the search for
suitable attribution) and participation (by the recipient taking the
sender’s metaphor and making it work), the metaphor provides the vehi-
cle and substrate for the evolution and diffusion of thought and concept.

CONNECTING WITH “REALITY”


The use of metaphor to articulate common knowing and shared experi-
ence enables the establishment of common points of reference for the
positioning of new concepts in the knowing of the participants. While the
map is not the territory, establishing a map enables mutual exploration of
the territory. The explicit use of metaphor can facilitate the individual and
collective action-learning cycles (Merali, 2000) through recursive experi-
ential mapping: the metaphor is used to conceptualize the experience and
the validity of the concept is tested empirically. The cycle is repeated
until the metaphor ceases to be useful or valid.
For the purposes of communication, the utility of a metaphor that
works is illustrated by words and expressions absorbed into our everyday
linguistic coinage (e.g., “daisy” [day’s eye], “time flies”). This represents
the intersubjective consensus about the validity of specific metaphors,
bounded by common semantic usage.
One commonly cited danger of the metaphor that works too well is
that its domination may lead to mistaking the map for the territory. This,
it is suggested, will lead to the bounding of the concept space. As a con-
sequence, when people turn from using the metaphor to conceptualize
reality in abstraction to realizing the concept in the world of action, they
get a nasty surprise from running into characteristics of the real world
that were not presaged by the metaphor.
At this juncture, it is useful to distinguish between metaphor and anal-
ogy. A metaphor operates referentially, whereas an analogy is powerful
because of its representational efficacy. The mapping between the
concept/object pairing in a metaphor is effective if there is a striking cor-
respondence of some property (or quality) between the pair. An analogy,
on the other hand, provides a model of the object/concept that it seeks to
explain. Metaphors can be, and are, used to generate analogies that can
then be embodied in models that appear congruent with real-world
objects and behaviors. The use of ant colony behavior patterns to solve
aircraft routing problems and that of insect locomotion patterns in the
design of robots are examples of this type of work, demonstrating the
exploitation of concepts emanating from biological metaphors.
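
As a sketch of how such a borrowed biological concept becomes a working model, the following toy ant-colony-style search over an invented five-node tour is offered purely for illustration (the distances, parameters, and problem are assumptions, not the aircraft-routing systems alluded to above): good tours deposit pheromone, pheromone evaporates, and later ants are biased toward edges earlier ants found useful.

    # Toy ant-colony-style optimizer on an invented 5-node tour (illustrative only).
    import random

    nodes = range(5)
    dist = {(i, j): abs(i - j) + 1 for i in nodes for j in nodes if i != j}
    pher = {edge: 1.0 for edge in dist}

    def tour_length(tour):
        return sum(dist[(tour[k], tour[k + 1])] for k in range(len(tour) - 1))

    def build_tour():
        tour, unvisited = [0], set(range(1, 5))
        while unvisited:
            here, options = tour[-1], list(unvisited)
            weights = [pher[(here, j)] / dist[(here, j)] for j in options]
            nxt = random.choices(options, weights)[0]   # pheromone-biased choice
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour + [0]

    best = None
    for _ in range(100):
        tours = [build_tour() for _ in range(10)]
        best = min(tours + ([best] if best else []), key=tour_length)
        for edge in pher:
            pher[edge] *= 0.9                           # evaporation
        for k in range(len(best) - 1):                  # reinforce the best tour
            pher[(best[k], best[k + 1])] += 1.0 / tour_length(best)

    print("best tour found:", best, "length:", tour_length(best))
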
For the purposes of exploration of ideas, the metaphor is useful while it
works and when it breaks down. Exploration with the metaphor that works
consistently generates incremental development of contiguous concepts
and may give us new vocabulary for the articulation of emergent concepts.
When a metaphor is stretched to breaking point, it reveals the limitation of
the associated concept space and catalyzes the search in a new direction.
We see this happening in knowledge management, with the breakdown of
the machine metaphor (as embodied in Tayloristic management practice)
and the surge of engagement with the organic metaphor.

THE APPEAL OF THE ORGANIC METAPHOR

An interesting aspect of competition in cyberspace is the emergent
nature of the space itself and of competitive structures within that space.
The associated management rhetoric is dominated by notions of uncer-
tainty, change, regeneration, and dynamism. The importance of sensing
and making sense of the dynamic context, and of adapting strategies and
behaviors that promote viability and sustainability of the organization in
interaction with that context, is reflected in the contemporaneous births
and deaths of dot.com ventures, and is theorized about in the strategy and
organizational development literatures.
The properties required to survive in the new context are:

◆ The ability to sense and make sense of the context.


◆ Intelligence (the ability to discern and choose) to act.
◆ The ability to respond to changing conditions rapidly and in a
coordinated manner.
◆ The ability to effect change efficiently.

The intellectual appeal of the organic metaphor lies in its easy accommo-
dation of these characteristics. It is wide ranging: It embraces living sys-
tems from primitive unicellular organisms through to humans, and it
scales from very simple, isolated life forms through to complex eco-
systems. There is a plethora of relevant concepts at our disposal when uti-
lizing the organic metaphor to explore emergent problem domains.
Like organizations, the living systems at the heart of the organic
metaphor are complex adaptive systems: they are self-organizing, self-
producing open systems capable of maintaining stable states under non-
equilibrium conditions. Viable systems appear to have a sustainable bal-
ance between requisite variety (necessary for responding to changes in
context), redundancy (necessary for robustness), and efficiency.
Self-organization represents considerable design elegance, accommo-
dating responsiveness to local changes while maintaining the organiza-
tion and integrity of the global form through relational contiguity and
intelligence networks. The capability for globally positioned, locally gen-
erated action supports speed, high granularity, and fineness of control in
interactions with the environment, and generates a robustness in the face
of changing conditions.
Of particular interest for designers of organizations is the metaphor of
self-organizing networks (q.v. Maturana & Varela’s (1973) self-producing
networks of production and Luhmann’s (1990) self-producing networks of
communication). The use of the self-organizing network metaphor for the
phenomenon of intelligent organizational behavior is significant because it
enables insights from a wide range of other disciplines (including physics,
computer science, artificial intelligence, biology, economics, and organi-
zational science) that have used the network metaphor to make analogies
that have been mapped down to the structural level for emergent systems
behaviors. Employing the self-organizing network metaphor has the addi-
tional advantage that it provides a dimension for congruence with other
work in intelligence (e.g., the use of neural nets in artificial intelligence)
and complexity (e.g., Kauffman’s (1995) NK networks).
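
As a concrete illustration of the kind of model meant here, the sketch below is an assumed, minimal rendering of an NK-style fitness landscape (the parameters and the adaptive walk are invented for illustration): each of N binary elements contributes a fitness that depends on its own state and on K others, and a simple one-bit hill-climb settles on a local peak.

    # Minimal assumed sketch of an NK-style fitness landscape (illustrative only).
    import random

    N, K = 8, 2
    random.seed(1)
    neighbours = [random.sample([j for j in range(N) if j != i], K) for i in range(N)]
    tables = [dict() for _ in range(N)]        # contribution tables, filled lazily

    def contribution(i, state):
        key = (state[i],) + tuple(state[j] for j in neighbours[i])
        if key not in tables[i]:
            tables[i][key] = random.random()   # drawn once, then fixed
        return tables[i][key]

    def fitness(state):
        return sum(contribution(i, state) for i in range(N)) / N

    state = [random.randint(0, 1) for _ in range(N)]
    for _ in range(500):
        trial = state[:]
        trial[random.randrange(N)] ^= 1        # flip one element
        if fitness(trial) > fitness(state):
            state = trial
    print("local peak:", state, "fitness:", round(fitness(state), 3))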

IMPLICATIONS FOR KNOWLEDGE MANAGEMENT—KNOWING AND
BEING IN A RECURSIVE RELATIONSHIP ENGAGED IN BECOMING
Many of the writers on knowledge management highlight the importance
of shifting our focus from knowledge to knowing (Cook & Brown, 1999)
and from being to becoming (Juarrero, 1999), underlining the context-
sensitive and transitory nature of organized states. For emergent behav-
ior in dynamic contexts, we can think of knowing and being as coupled in
a recursive relationship engaged in organizational becoming.
The emergent nature of the world coupled with the need to react
quickly has challenged the efficacy of traditional programmatic decision-
making modes (entailing the explicit definition of the problem, assembly
of all relevant information, analysis of the information, generation and
evaluation of options, selection of the best option) as the sole means of
deciding to act in organizations.
At a fundamental level, there is a problem with the notion that the
appropriateness of action in the present can be honed to perfection by
perfect knowledge about the present, encapsulated in the concept of spe-
cious present.1 Knowledge of the present only exists in the future: we can
know what has passed and we can make judgments about what may
come, but we have knowledge of the present only when we have lived
through it and so it becomes known as knowledge of the past. For organ-
izations that need to react quickly, the emergent pattern of actions has to
be based on something other than logical derivation from what is already
known.
The traditional knowledge management concept (of learning from
action, codifying what is learned, incorporating it into best-practice
guidelines, and disseminating it throughout the organization) is chal-
lenged not only because it is too slow as a mechanism, but also because
relevant knowledge is context-dependent knowledge; in rapidly changing
contexts the selection and reframing of what is known must happen in
conjunction with what is dynamically learned (discovered about, or
revealed by) in the contingent context.
So while some situations can still be dealt with effectively by basing
actions on the analysis of what is known about the past, the need to react
to contextual changes in real time demands that we find a different way
of looking at intelligent action.

THE METAPHOR OF THE INTELLIGENT LIVING SYSTEM


Against this background, the organic metaphor of complex self-organiz-
ing intelligent systems becomes a very compelling one. In these systems,
action is based jointly or variously on reflexes, learned reflexes, and con-
sidered decision making, involving different functional aspects of the
nervous system. The concept of learned reflexes is particularly interest-
ing, exploiting as it does the synergies between path-dependent learning
and experience with the real-time contingencies of being in the world.
Learning to drive or becoming a world-class champion table tennis
player are examples of instances where conscious thought is engaged to
the degree to which it is providing overall strategic direction. The
learned reflex (especially in the instance of the world-class champion
table tennis player) is faster, more detailed, and more precise than purely
conscious thought would allow.
This degree of coordination and speed in acting appropriately to
externally generated contingencies is supported by the underlying self-
organizing network (comprising different functional aspects of the
sensory, nervous, and motor systems) that recognizes relevant external
contingencies and generates the appropriate stroke, which not only
connects with the approaching shot but also positions the return strategi-
cally in relation to the opponent’s position and form.

THE QUESTION OF SENTIENCE


The notion of a recursive relationship between knowing and being res-
onates with Heidegger’s notion of Dasein (Heidegger, 1962). Being in the
world is inevitable; it is not conditional on doing, or on making decisions
about becoming. Choosing to act in different ways may lead to different
outcomes, but being is persistent regardless of whether decisions are made,
acted on or not. This highlights the importance of self-awareness. To be is
to have some impact on the world, and understanding the nature of this
impact is fundamental to making choices about how to act in the world.
Metaphors based on primitive organic systems clearly have no parallel
in this context of organizational being and doing. To develop an organic
metaphor for organizational being and doing presupposes some concept of
organizational self-consciousness and self-identity. Powerful, relevant
organic metaphors must of necessity come from higher organisms, i.e.,
those that can be described as sentient. The most complex (and possibly
the most interesting) sentient beings are humans, and it is quite common
to find metaphorical reference to human qualities and behaviors (such as
identity, values, and choice) when describing organizations. This type of
usage carries with it the danger of becoming excessively anthropocentric.
If we remain mindful of this danger, the human metaphor is a very power-
ful one, as we have first-hand knowledge of its referent domain.

CONCLUSIONS
The “real world” exists in its entirety. We choose a set of attributes to
define the world and to describe the behaviors that we see around us (q.v.
Nietzsche’s “There is no ‘real’ expression and no real knowing apart from
metaphor”). The world is not governed by Newtonian physics or quan-
tum mechanics or thermodynamics. There are features and behaviors that
we attribute to the world. We attempt to describe and explain them by
alluding to various bodies of knowledge as diverse as Newtonian physics,
quantum physics, miracles, and divine intervention.
This is particularly important in view of the current polarization of the
holistic and reductionist schools of thought. Within the holistic school
there is a tendency to reject the fruits of reductionist labor on the grounds
of ontological incompatibility, and the reductionist school is dismissive of
holistic sentiments on the grounds of incommensurability.

When attempting to make sense of something as complex as systems of
human activity embedded in dynamic contexts, multiple viewpoints give
the widest possible insight. Polarization of viewpoints (e.g., the reduction-
ist/holistic division) is limiting and potentially dangerous. There is nothing
wrong with the reductionist point of view (so far as it goes), nor is there any-
thing wrong with a holistic open systems view (so far as it goes). Both are
useful in so far as both have recognized limitations. Lose sight of the limi-
tations and Alice’s Adventures in Wonderland (Carroll, 1865) becomes
closer to reality.
The organic metaphor is useful in bridging the chasm between reduc-
tionist and holistic views in knowledge management. Although it is favored
by the holistic camp, there is no reason that it cannot be used in a reduc-
tionist way according to appropriateness and awareness of limitations. The
concept of the self-organizing network is an example of a metaphor whose
meaning and utility has wide applicability. It is, on the one hand, used in a
holistic manner for understanding communities of practice in knowledge
management and, on the other, reduced to an analog for the design of auto-
mated language-recognition software by computer scientists using reduc-
tionist programming techniques. Both endeavors are valuable for the
realization of viable, intelligent organizations.
This article has focused on the use of the self-organizing organic net-
work metaphor in knowledge management, and, as a specific example,
shown how the learned reflex, with its integration of multiple functional
aspects of the sensory, nervous, and motor systems, constitutes a power-
ful and apposite metaphor for looking at real-time knowledge manage-
ment in dynamic contexts.

REFERENCES
Carroll, L. (1865) Alice’s Adventures in Wonderland, reprinted in (1982) The Complete
Illustrated Works of Lewis Carroll, London: Chancellor Press.
Cook, S. D. N. & Brown, J. S. (1999) “Bridging epistemologies: The generic dance between
organizational knowledge and organizational knowing,” Organization Science, 10 (4):
381–400.
Heidegger, M. (1962) Being and Time, trans. J. Macquarrie & E. Robinson, New York:
Harper and Row.
Juarrero, A. (1999) Dynamics in Action: Intentional Behavior as a Complex System,
Cambridge, MA: MIT Press.
Kauffman, S. (1995) At Home in the Universe: The Search for Laws of Complexity, New York:
Oxford University Press.
Lakoff, G. (1987) Women, Fire and Dangerous Things, Chicago: University of Chicago
Press.
Lakoff, G. & Johnson, M. (1980) Metaphors We Live By, Chicago: University of Chicago
Press.
Luhmann, N. (1990) “The Autopoiesis of Social Systems,” in Essays on Self-Reference, New
York: Columbia University Press.
Maturana, H. & Varela, F. (1973) Autopoiesis and Cognition: The Realisation of the Living,
Netherlands: Reidel.
Merali, Y. (2000) “Individual and collective congruence in the knowledge management
process,” Journal of Strategic Information Systems, 9(2–3): 213–34.
Morgan, G. (1986) Images of Organization, Thousand Oaks, CA: Sage.
Nietzsche, F. (1872) Philosophenbuch, trans. in D. Breazeale (1979) Philosophy and Truth:
Selections from Nietzsche’s Notebooks of the Early 1870s, Sussex, NJ: Humanities Press,
50–51.

EMERGENCE, 2(4), 23–39
Copyright © 2000, Lawrence Erlbaum Associates, Inc.

The Emergence of Knowledge in Organizations
Ralph Stacey

This article argues for a particular way of interpreting analo-
gies from the complexity sciences as the basis for a perspec-
tive on knowledge creation in organizations called complex
responsive processes of relating (Stacey, 2000; Stacey,
Griffin, & Shaw, 2001; Stacey, 2001). From this perspective, knowledge is
continuously reproduced and potentially transformed in processes of
interaction between people. It follows that people cannot “share” know-
ledge, because one cannot share the actions of relating to others, only
perform them. It also follows that knowledge as such is not stored any-
where. All that can be stored is reifications in the form of artifacts, or
tools, which can only become knowledge when used in communicative
interaction between people.
It becomes impossible to talk about measuring knowledge as “intel-
lectual capital,” because knowledge itself does not exist in measurable or
any other reified form. Indeed, putting the words “intellectual” and “cap-
ital” together makes little sense. The notion put forward by some (for
example, Roos et al., 1997; Sveiby, 1997) that an organization can own
“intellectual capital,” that is, can own the attitudes, competence, and
intellectual agility of individuals, becomes highly dubious, since no one
can own relationships. The conclusion is that while it is possible to nur-
ture knowledge, it is impossible to “manage” it, when “manage” is under-
stood in its conventional sense.
The article first highlights the central concepts of mainstream think-
ing about knowledge creation and management in organizations, and
then outlines the perspective of complex responsive processes of relating.


MAINSTREAM THINKING ABOUT KNOWLEDGE CREATION IN ORGANIZATIONS

“Mainstream thinking” is a term used in this article to indicate the key
concepts to be found in the most quoted writings on organizational learn-
ing and knowledge creation (Senge, 1990; Nonaka & Takeuchi, 1995).
These writings in turn locate their theoretical frameworks in systems
dynamics (Forrester, 1961, 1969, 1971; Meadows, 1982); sender–receiver
models of knowledge transmission from information theory (Shannon &
Weaver, 1949); distinctions between tacit and explicit knowledge
(Polanyi, 1958, 1960); notions of individual mental models, single- and
double-loop learning (Bateson, 1973; Argyris & Schon, 1978; Argyris,
1990); and dialog as a special form of communication (Bohm, 1965, 1983).
These concepts are to be found in most of the literature on knowledge
management (for example, Burton-Jones, 1999; Davenport & Prusak,
1998; Kleiner & Roth, 1997; Leonard & Strauss, 1997; Sveiby, 1997;
Quinn, Anderson, & Finkelstein, 1996; Garvin, 1993; Brown, 1991).
Throughout the above body of work, the individual and the collective,
such as the group, the organization, and society, are always treated as two
distinct phenomenal levels requiring different explanations of how learn-
ing and knowledge creation take place. The connection between the two
levels is usually understood to lie in the interaction of individuals to cre-
ate the level of group/organization, which then constitutes the context
influencing how individuals interact. It is usually explicitly stated that it
is individuals who learn and create knowledge, although this is almost
always coupled with an emphasis on the importance of the teams within
which this takes place. A key question then becomes whether a team,
group, or organization can be said to learn, or whether it is just their indi-
vidual members who do so. In mainstream thinking, in the end, it is usu-
ally individuals who learn and create knowledge, and the principal
concern from an organizational perspective is how that individual learn-
ing and knowledge might be shared across an organization, and how it
might be captured, stored, and retained by the organization. Sometimes,
the group/social level is treated as a kind of transcendental group mind,
common pool of meaning, or flow of a larger intelligence, for example in
Bohm’s notion of dialog.
Mainstream thinking assumes that individuals communicate by trans-
mitting signals to each other and a distinction is usually drawn between
transmissions of data, information, knowledge, insight, and wisdom, all as
the basis of action (for example, see Davenport & Prusak, 1998). As
regards the transmission of knowledge, the distinction between explicit
and tacit knowledge (Nonaka & Takeuchi, 1995) is thought to be particu-
larly important. Explicit knowledge is systematic and easily transmitted
from one person to another in the form of language. Tacit knowledge
takes the form of mental models below the level of awareness and is dis-
played as skill or knowhow. Mental models are representations of the
world and the individual in that world, which are historically determined
by the experience of the individual. New knowledge is said to come from
tapping the tacit knowledge located in individual heads, and this process
of tapping is understood as one of translating the tacit knowledge in indi-
vidual heads into explicit forms available to the organization. However,
the process of translation does not explain how completely new tacit
knowledge comes to arise in individual heads; for an approach claiming
to explain the creation of knowledge, this is a major limitation. As know-
ledge is dispersed through an organization by the movement between
tacit and explicit, it must be tested, which requires discussion, dialog, and
disagreement.
This is where it becomes important to work and learn in teams. The
knowledge, information, and data that individuals transmit to each other
become shared routines; that is, they are stored in the form of culture,
social structure, organizational procedures, traditions, habits, and group
norms. This constitutes a level above that of the individual, which forms
the social context within which individuals live, act, and relate to each
other. In mainstream thinking, then, there is a circular, systemic inter-
action between individuals at one level and the group/organization/society
at a higher level. The nature of this circular interaction is considered to
be of central importance to the possibility of learning and knowledge
creation. It is widely held that effective learning and knowledge creation
require widespread sharing of values to do with openness, trust, affirma-
tion, dialog, and empowerment. The effectiveness of these processes is
also said to require particular forms of leadership that establish values of
this kind and provide a central vision to guide the learning and know-
ledge creation process.
Mainstream thinking is, therefore, firmly based in systems thinking
and an understanding of mind drawn from cognitivist psychology, which
holds mind to be a computing function of the brain (McCulloch & Pitts,
1943; Gardner, 1985).
Over the past few years, developments in the natural complexity
sciences (particularly Kauffman, 1995; Gell-Mann, 1994; Holland, 1998)
have attracted the attention of some writers concerned with organizational
knowledge. The tendency, however, is to regard the complexity sciences
as an extension of systems thinking (for example, Nonaka & Takeuchi,
1995; Boisot, 1998). It can be argued that this interpretation of the com-
plexity sciences does not lead to any significant change in the underlying
frame of reference described above (see Stacey, Griffin, & Shaw, 2001).
Consider now an alternative way of drawing on insights from the com-
plexity sciences.

ANALOGIES FROM THE COMPLEXITY SCIENCES

Most systems theories envisage the systemic unfolding of that which is
already enfolded, usually by a designer, in the definition or identification
of the system itself. In other words, the system unfolds mature forms of
an identity that is already there in some embryonic sense. This offers the
prospect of control from outside the system, by a designer, and any trans-
formation of the system’s identity must also be determined from outside
by a designer. However, at least some of those modeling complex adap-
tive systems (for example, Kauffman, 1995) are trying to simulate evolu-
tion as an internal dynamic that expresses identity and difference at the
same time. When this process of evolution is modeled as a “system” of
interacting entities, that “system” has a life of its own, rendering it much
less susceptible to control from outside, if at all.
However, extreme care needs to be taken in using such modeling as a
source domain for analogies with human action. The very act of modeling
requires an external modeler, and the specification of the model requires
the initial design of a system, even though what is being modeled is an
evolutionary process that is supposed not to depend on any outside
design. When one turns to this work as a source domain for human action,
therefore, it is important to realize that there is no analogy in human
action for the external designer, programmer, or model builder.
Furthermore, if one takes the “model” or the “system” as the analogy
for human interaction, one reifies human interaction and implies that one
can stand outside of it and observe it. However, as a human, one can
never stand outside of human interaction, since the very act of observing
others interacting is itself an interaction. Systems thinkers have tried to
deal with this problem by widening the boundary of the system to include
the observer, but in doing so they always locate some kind of agency out-
side the boundary. For example, an observer including themselves in the
system is then observing themselves observing. The argument leads to
infinite regress (see Stacey, Griffin, & Shaw, 2001). When one focuses
attention on the “system,” one tends to lose sight of the centrality of the
process of interaction, which perpetually constructs itself as continuity
and transformation.
It follows, therefore, that there is no analogy in human action for the
“system.” Instead, it is the process of interaction in the simulation that
provides an analogy for human action (Stacey, 2001). Although scientists
who work with the concept of complex adaptive systems are clearly doing
so within a systems framework, they are modeling processes that display
the internal capacity to produce coherence spontaneously, as continuity
and transformation, solely through local interaction in the absence of any
blueprint or external designer. This work demonstrates the possibility
that processes of interaction in local situations have the intrinsic capacity
for patterning themselves as continuity and transformation at the same
time. It is this insight that holds out the prospect of a different way of
thinking about knowledge creation in organizations.
Nevertheless, the modeling of abstract interactive processes cannot
directly say anything about human acting and knowing. It requires imag-
ination to avoid thinking about the abstract model from an external per-
spective as a system and think, instead, about what the modeling of
interaction might be saying from a perspective within that interaction. It
is for this reason that complexity theories cannot simply be applied to
human action; they can only serve as a source domain for analogies with
it. Furthermore, the models of complex adaptive systems are nothing
more than abstract sets of relationships demonstrating possible proper-
ties of those relationships. The abstract relationships are completely
devoid of the attributes of any real processes and, therefore, their use as
analogies requires imaginative acts of translation if they are to say any-
thing about real processes.
This article suggests that human interaction is analogous to the
abstract interaction modeled by complex adaptive systems. The sugges-
tion is that human relating intrinsically patterns living human experience
as the coherence of continuity and transformation. This coherence is
meaning, that is, knowledge emerging in the living present in local inter-
actions without any global blueprint, plan, or vision.

MODELING INTERACTION IN THE MEDIUM OF DIGITAL SYMBOLS

The action of complex adaptive systems is explored using computer sim-
ulations in which each agent is a computer program, that is, a set of
interaction rules expressed as computer instructions. Since each instruc-
tion is a bit string, a sequence of symbols taking the form of 0s and 1s, it
follows that an agent is a sequence of symbols, arranged in a particular
pattern specifying a number of algorithms. These algorithms determine
how the agent will interact with other agents, which are also arrange-
ments of symbols. In other words, the model is simply a large number of
symbol patterns arranged so that they interact with each other. It is this
local interaction between symbol patterns that organizes the pattern of
interaction itself, since there is no set of instructions organizing the global
pattern of interaction. The programmer specifies the initial symbol pat-
terns, then the computer program is run and the patterns of interaction
are observed. Simulations of this kind demonstrate the possibility of sym-
bolic interaction, in the medium of digital symbols arranged to algorith-
mic rules, patterning itself.
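As a bare-bones illustration of what such a model can look like in code (a minimal sketch of my own, not a reconstruction of any published simulation), each agent below is a short bit string together with a purely local rule for responding to whichever agent it happens to meet; nothing in the code specifies a population-wide pattern.

```python
import random

# Minimal sketch of agents as interacting symbol patterns: each agent is a
# bit string, and its "algorithm" is a purely local rule applied to whichever
# other agent it meets. Illustrative toy only; no published model is reproduced.

AGENT_LENGTH = 8
POPULATION = 50

def random_agent():
    return [random.randint(0, 1) for _ in range(AGENT_LENGTH)]

def interact(a, b):
    # Local rule (an arbitrary choice for this sketch): each agent copies one
    # randomly chosen bit from the other. No instruction anywhere specifies
    # the population-wide pattern that results from many such encounters.
    i = random.randrange(AGENT_LENGTH)
    j = random.randrange(AGENT_LENGTH)
    a[i], b[j] = b[i], a[j]

agents = [random_agent() for _ in range(POPULATION)]
for _ in range(1000):
    first, second = random.sample(agents, 2)   # a purely local, pairwise encounter
    interact(first, second)

# A crude observation of the global pattern: how many distinct symbol
# patterns remain after repeated local interaction.
print("distinct symbol patterns:", len({tuple(a) for a in agents}))
```

The published example discussed next is, of course, far richer than this; the architectural point is simply that only local rules are written down, and any global regularity has to arise from their interplay.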
For example, in his Tierra simulation, Ray (1992) designed one bit
string, one symbol pattern, consisting of 80 instructions specifying how
the bit string was to copy itself. He introduced random mutation into the
replication and limited the computer time available for replicating as a
selection criterion. In this way, he introduced chance, or instability, into
the replicating process and imposed conditions that both enabled and
constrained that process. This instability within constraints made it pos-
sible for the interaction to generate novel attractors. The first attractor
was that of exponentially increasing numbers of individual symbol pat-
terns, which eventually imposed a constraint on further replication. The
global pattern was a move from sparse occupation of the computer mem-
ory to overcrowding. However, during this process, the individual sym-
bol patterns were gradually changing through random bit flipping, so
coming to differ from each other. Eventually, distinctively different kinds
of symbol patterns emerged, namely, long ones and short ones. The con-
straints on computer time favored smaller ones, so that the global pattern
shifted from one of exponential increase, to one of stable numbers of long
bit strings, to one of decline in long strings accompanied by an increase
in short ones. The model spontaneously produced a new attractor, one
that had not been programmed in.
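Tierra itself is a full machine-code ecology and cannot be reproduced in a few lines, but the logic just described, imperfect replication under a shared "computer time" constraint, can be sketched in toy form. Everything in the sketch below (the mutation rate, the time budget, the way insertions and deletions are introduced) is an illustrative assumption rather than Ray's actual design.

```python
import random

# Toy sketch of imperfect replication under a shared "computer time" budget,
# loosely inspired by the logic described for Tierra. Parameters and the
# mutation scheme are illustrative assumptions, not Ray's actual settings.

MUTATION_RATE = 0.01       # per-symbol chance of a copying error
CPU_BUDGET = 4000          # total copying time available per generation

def replicate(bits):
    child = []
    for b in bits:
        if random.random() < MUTATION_RATE:
            r = random.random()
            if r < 0.4:
                child.append(1 - b)                      # bit flip
            elif r < 0.7:
                child.extend([b, random.randint(0, 1)])  # insertion
            # else: deletion (the symbol is simply dropped)
        else:
            child.append(b)
    return child

# One ancestor: 80 binary symbols standing in for Ray's 80 instructions.
population = [[random.randint(0, 1) for _ in range(80)]]

for generation in range(200):
    survivors = []
    budget = CPU_BUDGET
    random.shuffle(population)
    for bits in population:
        cost = len(bits)               # copying time grows with pattern length
        if cost > budget:
            continue                   # no time left to copy; this one is dropped
        budget -= cost
        survivors.extend([bits, replicate(bits)])
    population = survivors

lengths = sorted(len(b) for b in population)
print("population size:", len(population))
print("shortest / median / longest pattern:",
      lengths[0], lengths[len(lengths) // 2], lengths[-1])
```

Nothing in this sketch names pattern length as a goal, yet the shared time budget tends, over repeated runs, to favor patterns that replicate more cheaply, which is the qualitative point of the account given above.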
In other words, new forms of individual symbol patterns and new
overall global patterns emerged at the same time, since there can be no
global pattern of increase and decline without simultaneous change in the
length of individual bit strings, and there can be no sustained change in
individual bit string lengths without the overall pattern of increase and
decline. Individual symbol patterns, and the global pattern, are forming
and being formed by each other, at the same time. To repeat, the new
attractor is evident both at the level of the whole population and at the
level of the individual bit strings themselves at the same time.
Furthermore, the new attractors are not designed but emerge as self-
organization, where it is not individual agents that are organizing them-
selves but, rather, the pattern of interaction, and it is doing so
simultaneously at the level of the individuals and the population as a
whole. It is problematic to separate them out as levels, since they are
emerging simultaneously. No individual bit string can change in a coher-
ent fashion on its own, since random mutation in an isolated bit string
would eventually lead to a completely random one. In interaction with
other bit strings, however, advantageous mutations are selected and the
others are weeded out. What is organizing itself, through interaction
between symbol patterns, is then the change in the symbol patterns them-
selves. Patterns of interacting are turning back on themselves, imperfectly
replicating themselves, to yield changes in those patterns of interaction.
Ray, the objective observer external to this system, then interpreted
the changes in symbol patterns in his simulation in terms of biology, in
particular the evolution of life. Using the model as an analogy, he argued
that life has evolved in a similar, self-organizing, and emergent manner.
Other simulations have been used to suggest that this kind of emerging
new attractor occurs only at the edge of chaos where there is a paradoxi-
cal pattern of both stability and instability at the same time.
The computer simulations thus demonstrate the possibility of digital
symbols self-organizing, that is, interacting locally in the absence of a
global blueprint, in the dynamics at the edge of chaos to produce emer-
gent attractors of a novel kind, provided that the symbol patterns are
richly connected and diverse enough. Natural scientists at the Santa Fe
Institute and elsewhere then use this demonstration of possibility in the
medium of digital symbols as a source of analogy to provide explanations
of phenomena in particular areas of interest such as biology. The inter-
action between patterns of digital symbols can also provide an abstract
analogy for human interaction, if that interaction is understood from the
perspective of Mead’s thought on mind, self, and society.
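One widely used family of models in which the effect of connection and diversity can be seen directly is the random Boolean network associated with Kauffman's work, cited above. The sketch below is a generic network of that kind, offered purely as an illustration; the values of N, K, and the random seed are arbitrary choices, not parameters taken from the text.

```python
import random

# Generic random Boolean network in the style associated with Kauffman's work:
# N elements, each updated by a random Boolean function of K other elements.
# Illustrative sketch only; K = 2 tends to give short, orderly cycles, while
# larger K tends toward much longer, more erratic ones.

N, K = 30, 2
random.seed(1)                           # arbitrary seed, for repeatability

inputs = [random.sample(range(N), K) for _ in range(N)]
all_combos = [tuple((j >> b) & 1 for b in range(K)) for j in range(2 ** K)]
tables = [{combo: random.randint(0, 1) for combo in all_combos} for _ in range(N)]

def step(state):
    # Each element looks only at its own K inputs; there is no global rule.
    return tuple(tables[i][tuple(state[j] for j in inputs[i])] for i in range(N))

state = tuple(random.randint(0, 1) for _ in range(N))
seen = {}
for t in range(2000):
    if state in seen:
        print("settled into a cycle of length", t - seen[state],
              "after", seen[state], "steps")
        break
    seen[state] = t
    state = step(state)
else:
    print("no repeated state within 2000 steps")
```

Re-running the sketch with larger K typically produces far longer and more erratic cycles, which is one concrete sense in which richness of connection changes the kind of coherence that local interaction can produce.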

MEAD’S THEORY OF THE EVOLUTION OF MIND, SELF, AND SOCIETY

For Mead (1934), human societies are not possible without human minds,
and human minds are not possible in the absence of human societies.
Humans must cooperate to survive and they also have an intense, intrin-
sic need for relationship and attachment to others. Indeed, the human
brain seems to be importantly shaped by the experience of attachment
(Schore, 1994, 1997). Mead therefore sought an explanation of how mind
and society, that is, cooperative interaction, evolved together.
He adopted a phenomenological, action-based account of how mind
and society might have evolved from the interactive behavior of the
higher mammals. He pointed to how dogs relate to each other in a
responsive manner, with the act of one fitting into the act of the other, in
aggressive or submissive interactions. One dog might make the gesture of
a snarl and this might call forth a counter snarl on the part of the other,
which means a fight, or it might call forth a crouching movement, which
means submission. In other words, the gesture of one animal calls forth a
response from another and together gesture and response constitute a
social act, which is meaning. This immediately focuses on interaction,
that is, a rudimentary form of social behavior, and on knowing and know-
ledge as properties of interaction, or relationship. Meaning does not first
arise in an individual and then find expression in action, nor is it transmitted
from one individual to another. Rather, meaning emerges in the inter-
action between them. Meaning is not attached to an object, or stored, but
repeatedly created in the interaction.
Mead described the gesture as a symbol in the sense that it is an action
that points to a meaning. However, the meaning could not be located in the
symbol taken on its own. The meaning only becomes apparent in the
response to the gesture and therefore lies in the whole social act of
gesture–response. The gesture, as symbol, points to how the meaning might
emerge in the response. Here, meaning is emerging in the action of the liv-
ing present, in which the immediate future (response) acts back on the past
(gesture) to change its meaning. Meaning is not simply located in the past
(gesture) or the future (response), but in the circular interaction between the
two in the living present. In this way the present is not simply a point but
has a time structure. Every gesture is a response to some previous gesture,
which is a response to an even earlier one, thereby constructing history.
This process of gesture and response between biological entities in a
physical context constitutes simple cooperative, social activity of a mind-
less, reflex kind. The “conversation of gestures” is both enabling and con-
straining at the same time and it constitutes meaning, although animals
acting in this meaningful way are not aware of the meaning. At this stage,
meaning is implicit in the social act itself and those acting are unaware of
that implicit meaning.
Mead argued that humans must have evolved from mammals with
similar rudimentary social structures to those found in present-day mam-
mals. The mammal ancestors of humans evolved central nervous systems
that enabled them to gesture to others in a manner that was capable of
calling forth in themselves a range of responses similar to those called
forth in those to whom they were gesturing. This would happen if, for
example, the snarl of one called forth in itself the fleeting feelings associ-
ated with counter snarl and crouching posture, just as they did in the one
to whom the gesture of snarl was being made. The gesture, as symbol,
now has a substantially different role, namely, that of a significant symbol,
which is one that calls forth a similar response in the gesturer as in the
one to whom it is directed. Significant symbols, therefore, make it possi-
ble for the gesturer to “know” what they are doing.
This simple idea is a profound insight. If, when one makes a gesture
to another, one is able to experience in one’s own body a similar response
to that which the gesture provokes in another body, then one can “know”
what one is doing. It becomes possible to intuit something about the
range of likely responses from the other. This ability to experience in the
body something similar to that which another body experiences in
response to a gesture becomes the basis of knowing and of consciousness.
Mead suggested that the central nervous system, or better still the bio-
logically evolved whole body, has the capacity to call forth in itself feel-
ings that are similar to those experienced by other bodies. The body, with
its nervous system, becomes central to understanding how animals
“know” anything.
The neuroscientist Damasio (1994, 1999) argues that the human brain
continuously monitors and integrates the rhythmical activity of the heart,
lungs, gut, muscles, and other organs, as well as the immune, visceral,
and other systems in the body. At each moment the brain is registering
the internal state of the body and Damasio argues that these body states
constitute feelings. This continuous monitoring activity, that is, registra-
tion of feeling states, is taking place as a person selectively perceives
external objects, such as a face or an aroma, and experience then forms an
association between the two. Every perception of an object outside the
body is associated, through acting into the world, with particular body
states, that is, patterns of feeling. When a person encounters situations
similar to previous ones, they experience similar feeling states, or body
rhythms, which orient the way that person acts in the situation. In this
way, human worlds become affect laden and the feeling states uncon-
sciously narrow down the options to be considered in a situation. In other

31
EMERGENCE

words, feelings unconsciously guide choice, and when the capacity to feel
is damaged, so is the capacity to select sensible action options rapidly.
Damasio suggests that, from a neurological standpoint, the body’s moni-
toring of its own rhythmic patterns is both the ground for its construction
of the world into which it acts and its unique sense of subjectivity.
Possessing this capacity, the maker of a gesture can intuit, perhaps
even predict, the consequences of that gesture. In other words, they can
know what they are doing, just before the other responds. The whole
social act, that is, meaning, can be experienced in advance of carrying out
the whole act, opening up the possibility of reflection and choice in mak-
ing a gesture. Furthermore, the one responding has the same opportunity
for reflecting on, and so choosing, from the range of responses. The first
part of a gesture can be taken by the other as an indication of how further
parts of the gesture will unfold from the response. In this way, the two can
indicate to each other how they might respond to each other in the con-
tinuous circle in which a gesture by one calls forth a response from the
other, which is itself a gesture back to the first. Obviously, this capacity
makes more sophisticated forms of cooperation possible.
The capacity to call forth the same response in oneself as in the other
is thus a rudimentary form of awareness, or consciousness, and together
with meaning, emerges in the social conversation of gestures. At the same
time as the emergence of conscious meaning, there also emerges the
potential for more sophisticated cooperation. Human social forms and
human consciousness thus both emerge at the same time, each forming
and being formed by the other at the same time, and there cannot be one
without the other. As individuals interact with each other in this way, the
possibility arises of a pause before making a gesture. In a kind of private
role-play, emerging in the repeated experience of public interaction, one
individual learns to take the attitude of the other, enabling a kind of trial
run in advance of actually completing or even starting the gesture.
In this way, rudimentary forms of thinking develop, taking the form of
private role-playing, that is, gestures made by a body to itself, calling
forth responses in itself. Mead said that humans are fundamentally role-
playing animals. He then argued that the gesture that is particularly use-
ful in calling forth the same attitude in oneself as in the other is the vocal
gesture. This is because we can hear the sounds we make in much the
same way as others hear them, while we cannot see the facial gestures we
make as others see them, for example. The development of more sophis-
ticated patterns of vocal gesturing, that is, of the language form of signif-
icant symbols, is thus of major importance in the development of
consciousness and of sophisticated forms of society. Mind and society
emerge together in the medium of language.
emerge together in the medium of language.
However, since speaking and listening are actions of bodies, and since
bodies are never without feelings, the medium of language is also always
the medium of feelings. Furthermore, the public and private role-plays,
or conversations, which constitute the experience of the interacting indi-
viduals, actually shape the patterns of connections in the plastic brains of
each (Freeman, 1995). Both public and private conversations are shaping,
while being shaped by, the spatio-temporal patterns of brain and body.
This simultaneous public and private conversation of gestures takes place
in the medium of significant symbols, particularly those of language, and
it is this capacity for symbolic mediation of cooperative activity that is one
of the key features distinguishing humans from other animals.
As soon as one can take the attitude of the other, that is, as soon as one
can communicate in significant symbols, there is at least a rudimentary
form of consciousness. In other words, one can “know” the potential con-
sequences of one’s actions. The nature of the social has thus shifted from
mindless cooperation to mindful, role-playing interaction, made more
and more sophisticated by the use of language. Meaning is now particu-
larly constituted in gesturing and responding in the medium of vocal
symbols, that is, conversation. Mind, or consciousness, is the gesturing
and responding action of a body directed toward itself as private role-play
and silent conversation, and society is the gesturing and responding
actions of bodies directed toward each other. Conversational relating
between people is the process in which meaning, that is, knowledge, per-
petually emerges.
As more and more interactions are experienced with others, so,
increasingly, more roles and wider ranges of possible responses enter into
the role-playing activity that is continuously intertwined with public ges-
turing and responding. In this way, the capacity to take the attitude of
many others evolves and this becomes generalized. Each engaged in the
conversation of gestures can now take the attitude of what Mead calls the
generalized other. Eventually, individuals develop the capacity to take
the attitude of the whole group, that is, the social attitude, as they gesture
and respond. The result is much more sophisticated processes of cooper-
ative interaction.
The next step in this evolutionary process is the linking of the attitude
of specific and generalized others, even of the whole group, with a “me.”
In other words, there evolves a capacity to take the attitude of others not
just toward one’s gestures but also toward one’s self. The “me” is the
configuration of the gestures/responses of the others/society to one as a
subject, or an “I.” What has evolved here is the capacity to be an object
to oneself, a “me,” and this is the capacity to take the attitude of the
group, not simply to one’s gestures, but to one’s self. A self, as the rela-
tionship between “me” and “I,” has therefore emerged, as well as an
awareness of that self, that is, self-consciousness. Mead argued that this
“I” response to the “me” is not a given but is always potentially unpre-
dictable, in that there is no predetermined way in which the “I” might
respond to the “me.” In other words, each of us may respond in many dif-
ferent ways to our perception of the views that others have of us.
Here, Mead is pointing to the importance of difference, or diversity,
in the emergence of the new, that is, in the potential for transformation.
In addition to Mead’s argument, one could understand the response as
simultaneously called forth by the gesture of the other and selected or
enacted by the responder. In other words, the response of the “I” is both
being called forth by the other and being enacted, or selected by the his-
tory, biological, individual, and social, of the responder. Your gesture calls
forth a response in me, but only a response that I am capable of making
and that depends on my history. This adds a constructivist dimension to
Mead’s argument, suggesting a paradoxical movement in the response of
selection/enactment and evocation/provocation at the same time. In this
way, the reproduction and potential transformation of historical responses
in the living present are held in tension with the reproduction and poten-
tial transformation of evocation.
The social, in human terms, is a highly sophisticated process of coop-
erative interaction between people in the medium of symbols in order to
undertake joint action. Such sophisticated interaction could not take
place without self-conscious minds, but neither could those self-
conscious minds exist without that sophisticated form of cooperation. In
other words, there could be no private role-play, including silent conver-
sation, by a body with itself, if there were no public interaction of the same
form. Mind/self and society are all logically equivalent processes of a con-
versational kind.
However, the symbolic processes of mind/self are always actions,
experienced within a body as rhythmic variations, that is, feeling states.
Mind is the action of the brain, rather like walking is the action of the
body. One would not talk about walking emerging from the body, and it
is no more appropriate to talk about mind emerging from the brain. Note
how the private role-play, including the silent conversation of mind/self,
is not stored as representations of a pre-given reality. It is, rather, contin-
uous spontaneous action, in which patterns of action are continuously
reproduced in repetitive forms as continuity, sameness, and identity, and
simultaneously as potential transformation of that identity. In other
words, as with interaction between bodies, the social, so with interaction
of a body with itself, the mind, there is the experience of both familiar
repetition of habit and the potential of spontaneous change. The process
is not representing or storing, but continuously reproducing and creating
new meaningful experience. In this way, the fundamental importance of
the individual self and identity is retained, along with the fundamental
importance of the social. In this way, too, both continuity and potential
transformation are always simultaneously present. Furthermore, there is
no question of individuals at one level and the social at another. They are
both at the same ontological level.

THE CONNECTION WITH THE COMPLEXITY SCIENCES

The process of interaction between people is a continuous circular one
that takes place in the medium of embodied symbols, for example, in
sounds called words. However, as one imagines such interaction between
larger and larger numbers of individuals, one wonders how any kind of
global coherence could arise in such huge numbers of local interactions.
This is not an issue with which Mead dealt, but it is one where the com-
plexity sciences offer important insights.
Some of the work in the complexity sciences explores the properties
of abstract models of continuous circular processes of interaction
between computer programs in the medium of digital symbols. It is pos-
sible that certain properties of interaction demonstrated in the abstract
models might, therefore, offer analogies for human interaction, inter-
preted through Mead’s thought. The modeling of complex interactions
demonstrates the possibility that interactions between large numbers of
entities, each entity responding to others on the basis of its own local
organizing principles, can produce coherent patterns with the potential
for novelty in certain conditions, namely, the paradoxical dynamics at the
edge of chaos.
In other words, the very process of self-organizing interaction, when
sufficiently richly connected, has the inherent capacity to spontaneously
produce coherent pattern in itself, without any blueprint or program.
Furthermore, when the interacting entities are different enough from
each other, that capacity is one of spontaneously producing novel patterns
in itself. In other words, abstract interactions can pattern themselves
where those patterns have the paradoxical feature of continuity and nov-
elty, identity and difference, at the same time.
By analogy, the circular process of gesturing and responding between
people who are different to one another can be thought of as self-
organizing relating in the medium of symbols having intrinsic patterning
capacity. In other words, patterns of relating in local situations in the liv-
ing present can produce emergent global patterns in the absence of any
global blueprint. And emergent patterns can constitute both continuity
and novelty, both identity and difference, at the same time. This is what
is meant by a complex responsive process of relating and it amounts to a
particular causal framework, where the process is one of perpetual con-
struction of the future as both continuity and potential transformation at
the same time. Individual mind and social relating are patterning
processes in bodily communicative interaction, forming and being
formed by themselves.
The complex responsive process of relating perspective, then, is one
in which the individual, the group, the organization, and the society are
all the same kinds of phenomena, at the same ontological level. The indi-
vidual mind/self is an interactive role-playing process conducted pri-
vately and silently in the medium of symbols by a body with itself, and
the group, organization, and society are all also interactive processes in
the medium of the same symbols, this time publicly and often vocally
between different bodies. The individual and the social, in this scheme,
simply refer to the degree of detail in which the whole process is being
examined. They are fractal processes.
Culture and social structure are usually thought of as repetitive and
enduring values, beliefs, traditions, habits, routines, and procedures.
From the complex responsive process perspective, these are all social acts
of a particular kind. They are couplings of gesture and response of a pre-
dictable, highly repetitive kind. They do not exist in any meaningful way
in a store anywhere, but, rather, they are continually reproduced in the
interaction between people. However, even habits are rarely exactly the
same. They may often vary as those with whom one interacts change and
as the context of that interaction changes. In other words, there will usu-
ally be some spontaneous variation in the repetitive reproduction of pat-
terns called habits. These habits and routines, values, and beliefs are not
at some higher ontological level. They are part of the pattern of inter-
action between people.
Furthermore, there is no requirement here for any sharing of mental
contents, or any requirement that people should be engaging in the same
private role-plays. The only requirement for the social, understood as
habits, routines, and so on, is that people should be acting them out.
Systems, databases, recorded and written artifacts are usually thought
of as stores of knowledge. From the complex responsive process perspec-
tive, they are simply records that can only become knowledge when peo-
ple use them as tools in their processes of gesturing and responding to
each other. What is captured in these artifacts is inevitably something
about the meanings of social acts already performed. Since a social act is
ephemeral and since knowledge is social acts, it can never be stored or
captured. Habits here are understood not as shared mental contents but
as history-based, repetitive actions, both private and public, reproduced
in the living present with relatively little variation.

CONCLUSION
There are profound implications of this way of thinking for how one
understands learning and knowledge creation in organizations. From
mainstream perspectives, knowledge is thought to be stored in individual
heads, largely in tacit form, and it can only become the asset of an organ-
ization when it is extracted from those individual heads and stored in
some artifact as explicit knowledge. From a complex responsive process
perspective, knowledge is always a process of responsive relating, which
cannot be located simply in an individual head, then to be extracted and
shared as an organizational asset. Knowledge is the act of conversing and
new knowledge is created when ways of talking, and therefore patterns of
relationship, change.
Knowledge, in this sense, cannot be stored, and attempts to store it in
artifacts of some kind will capture only its more trivial aspects. The
knowledge assets of an organization then lie in the pattern of relation-
ships between its members and are destroyed when those relational pat-
terns are destroyed. Knowledge is, therefore, the thematic patterns
organizing the experience of being together. It is meaningless to ask how
tacit knowledge is transformed into explicit knowledge, since uncon-
scious and conscious themes organizing experience are inseparable
aspects of the same process. Organizational change, learning, and know-
ledge creation are the same as change in communicative interaction,
whether people are conscious of it or not. This perspective suggests that
the conversational life of people in an organization is of primary impor-
tance in the creation of knowledge.

NOTE
This paper is based on R. Stacey (2001) Complex Responsive Processes in Organizations:
Learning and Knowledge Creation, London: Routledge.

REFERENCES
Argyris, C. (1990) Overcoming Organizational Defenses: Facilitating Organizational
Learning, Needham Heights, MA: Allyn and Bacon.
Argyris, C. & Schon, D. (1978) Organizational Learning: A Theory of Action Perspective,
Reading, MA: Addison Wesley.
Bateson, G. (1973) Steps to an Ecology of Mind, St Albans, UK: Paladin.
Bohm, D. (1965) The Special Theory of Relativity, New York: W. A. Benjamin.
Bohm, D. (1983) Wholeness and the Implicate Order, New York: Harper and Row.
Bohm, D. & Peat, F. D. (1989) Science, Order and Creativity, London: Routledge.
Boisot, M. (1998) Knowledge Assets: Securing Competitive Advantage in the Knowledge
Economy, Oxford, UK: Oxford University Press.
Brown, J. S. (1991) “Research that reinvents the corporation,” Harvard Business Review,
Jan–Feb.
Burton-Jones, A. (1999) Knowledge Capitalism: Business, Work and Learning in the New
Economy, Oxford, UK: Oxford University Press.
Damasio, A. (1994) Descartes’ Error: Emotion, Reason and the Human Brain, New York:
Picador.
Damasio, A. (1999) The Feeling of What Happens: Body and Emotion in the Making of
Consciousness, London: Heinemann.
Davenport, T. H. & Prusak, L. (1998) Working Knowledge: How Organizations Manage
What They Know, Cambridge, MA: Harvard University Press.
Forrester, J. (1961) Industrial Dynamics, Cambridge, MA: MIT Press.
Forrester, J. (1969) Urban Dynamics, Cambridge, MA: MIT Press.
Forrester, J. (1971) “The counter intuitive behavior of social systems,” Technology Review,
Jan: 52–68.
Freeman, W. J. (1995) Societies of Brains: A Study in the Neuroscience of Love and Hate,
Hillsdale, NJ: Lawrence Erlbaum Associates.
Garvin, D. A. (1993) “Building a learning organization,” Harvard Business Review,
July–Aug.
Gell-Mann, M. (1994) The Quark and the Jaguar, New York: Freeman & Co.
Holland, J. H. (1998) Emergence: From Chaos to Order, New York: Oxford University
Press.
Kauffman, S. A. (1995) At Home in the Universe, New York: Oxford University Press.
Kleiner, A. & Roth, G. (1997) “How to make experience your best teacher,” Harvard
Business Review, Sept–Oct.
Leonard, D. & Strauss, S. (1997) “Putting your company’s whole brain to work,” Harvard
Business Review, July–Aug.
Mead, G. H. (1934) Mind, Self and Society, Chicago: Chicago University Press.
Meadows, D. H. (1982) “Whole earth models and system co-evolution,” Co-evolution
Quarterly, Summer: 98–108.
Nonaka, I. & Takeuchi, H. (1995) The Knowledge Creating Company: How Japanese
Companies Create the Dynamics of Innovation, New York: Oxford University Press.
Polanyi, M. (1958) Personal Knowledge, Chicago: Chicago University Press.
Polanyi, M. (1960) The Tacit Dimension, London: Routledge and Kegan Paul.
Quinn, J. B., Anderson, P., & Finkelstein, S. (1996) “Managing professional intellect: Making
the most of the best,” Harvard Business Review, March–April.
Roos, J., Roos, G., Dragonetti, N. C., & Edvinsson, L. (1997) Intellectual Capital: Navigating
the New Business Landscape, London: Macmillan Press.
Schore, A. N. (1994) Affect Regulation and the Origin of the Self: The Neurobiology of
Emotional Development, Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
Schore, A. N. (1997) “Early organization of the nonlinear right brain and development of a
predisposition to psychotic disorder,” Development and Psychopathology: 595–631.
Senge, P. (1990) The Fifth Discipline: The Art and Practice of the Learning Organization,
New York: Doubleday.
Shannon, C. & Weaver, W. (1949) The Mathematical Theory of Communication, Urbana, IL:
University of Illinois Press.
Stacey, R. (2000) Strategic Management and Organisational Dynamics: The Challenge of
Complexity, London: Pearson Education.
Stacey, R. (2001) Complex Responsive Processes in Organizations: Learning and Knowledge
Creation, London: Routledge.
Stacey, R., Griffin, D., & Shaw, P. (2001) Complexity and Management: Fad or Radical
Challenge to Systems Thinking?, London: Routledge.
Sveiby, K. E. (1997) The New Organizational Wealth: Managing and Measuring Knowledge-
Based Assets, San Francisco: Berrett-Koehler.

EMERGENCE, 2(4), 40–49
Copyright © 2000, Lawrence Erlbaum Associates, Inc.

Out of Control into Participation
Brian Goodwin

For nearly 500 years, western science has provided our cul-
ture with a remarkably successful procedure for gaining reli-
able knowledge of the natural world that can be used to
produce a great range of artifacts and technologies whereby
we can control natural processes. However, the dialectic of science has
now brought us face to face with a world that is intrinsically uncontrol-
lable, for reasons that remain scientifically intelligible. In a fundamental
sense, this is the end of a deeply pervasive cultural attitude of control that
has dominated most of our institutions, including those of government
and the corporate sector. We have reached a moment of truth in which
rationality itself points to a necessary new praxis, a different way of being
in the world from that which we have cultivated and practiced for
centuries.
This is deeply challenging; but there is a bridge of continuity across
what appears to be a sudden cultural abyss, from conventional western
scientific practice to a new mode of informed and extended rationality.
Complexity theory points to a path across the chasm, although it does not
itself provide the bridge. In this article, I shall examine the nature and
implications of this pointer, and how we can follow it into a new world.
The specific properties of that world are, however, necessarily still largely
out of sight. I begin with a brief description of the assumptions within
western science that give it both its power and its limitations.

GALILEAN SCIENCE

When Galileo started the great adventure of modern science with his sys-
tematic study of the motion of falling and projected bodies, cylinders
rolling down inclined planes, and the movements of the moons of Jupiter,
he was guided by a deep and powerful organizing idea: natural phenom-
ena can be described by mathematics. He wasn’t the first to explore this
ordering principle of nature. Egyptian, Greek, and Arab natural philoso-
phers had all contributed substantially to the realization that processes
involving the operation of levers, musical intervals, and harmony, and
particularly the movements of the heavenly bodies, are governed by
number, ratio, and geometry, so that there is a distinctly rational aspect to
natural processes.
What Galileo did was to define the methodology of science in terms
of the study of number and measure. Those properties of the natural
world that can be measured and expressed in terms of mathematical
relationships define the domain of scientific inquiry. These measurable
quantities, such as mass, position, velocity, momentum, and so on, are
the “primary qualities” of phenomena, as the philosopher John Locke
defined them. They originate from our experience of weight and force in
natural processes. Other experiences that we may have, such as the per-
fume and texture of a fruit or a flower, feelings associated with their
color, or the joy that we may feel at the beauty of a landscape or a sun-
set, which have no quantitative measure, are outside the legitimate
domain of scientific inquiry. Modern science is thus defined as the sys-
tematic study of quantities and excludes “secondary” qualities (experi-
ence of color, odor, texture, beauty of form, etc., which are often referred
to as qualia).
As a strategy for exploring an aspect of reality—the quantifiable and
the mathematizable—the restriction of modern science to primary quali-
ties is perfectly reasonable. It has also turned out to be remarkably suc-
cessful. The diversity of aspects of the natural world that fall under the
spell of number, measure, and mathematics is astonishing, ranging from
light, magnetism, and chemical reactions to the laws of biological inheri-
tance. But the impulse to mathematize nature takes scientific description
well beyond what is perceived as the “common-sense” behavior of clocks,
magnets, and chemical processes to the strange but self-consistent world
of quantum mechanics. Here, causality functions differently from
mechanical interactions. Quantum elements do not behave as independ-
ent entities whose properties can be varied in arbitrary ways.
The quantum realm is governed by principles of intimate entangle-
ment and coordination between its components, giving rise to coherent
holistic order that extends over any distance.
Mathematics also gives us new insights into the curious logic of the
weather, showing us why it is unpredictable but intelligible. The discov-
ery of deterministic chaos in dynamical systems allows us to reconcile
these two apparently contradictory properties. As is now widely known,
sensitivity to initial conditions means that any error in specifying these
for weather calculations, and rounding errors that inevitably accompany
computation, will grow exponentially so that errors rapidly overwhelm
the calculation and long-term prediction fails. This is the property of nat-
ural processes governed by deterministic chaos (Gleick, 1987), which is
now recognized to be a natural or generic pattern of behavior for most
nonlinear systems. It has been identified in the dynamics of our hearts
and brains, in the behavior of social insects, and in many other biological
processes (Goodwin, 1994; Kelso, 1995).
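The point about exponential error growth can be made concrete with the most standard textbook example of deterministic chaos, the logistic map; the map is my choice of illustration here, not something discussed in the article itself.

```python
# Deterministic chaos and sensitivity to initial conditions, illustrated with
# the logistic map x -> r*x*(1-x) in its chaotic regime (r = 4.0). A standard
# textbook example, chosen here purely for illustration.

r = 4.0
x, y = 0.200000, 0.200001    # two initial conditions differing by one millionth

for step in range(1, 41):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}  y = {y:.6f}  |x - y| = {abs(x - y):.6f}")

# The rule is exact and involves no randomness, yet the tiny initial difference
# grows roughly exponentially until the two trajectories are unrelated, so
# long-term prediction fails even though each step is fully determined.
```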

THE CREATIVITY OF NATURE

There is another source of unpredictability in natural processes that has
become the focus of a recently developed field of research called the sci-
ences of complexity (Kauffman, 1993, 1995; Cohen & Stewart, 1994).
Here, the problem is to understand how unexpected properties arise
from the interactions of the component elements of a complex system,
which can be physical, chemical, biological, or social. These are called
emergent properties because the system as a whole displays behavior that
is unpredictable from an observation of the interactions of its component
parts.
For instance, colonies of social insects such as bees, wasps, termites,
and ants achieve remarkable feats of organization and coordinated action
that go so far beyond the capacities of the individuals that the colony is
often described as a superorganism, an emergent whole with properties
of its own. Termites construct their beautifully intricate colonial
dwellings through processes that look anything but organized. Yet, out of
the activities of termite construction gangs that form and disperse in
apparently disorganized patterns, there emerge coherently structured
pillared halls and passageways, complete with air conditioning, that
accommodate thousands of inhabitants (O'Toole et al., 1999). Nature is
full of creative surprises, and the sciences of complexity explore, by
means of observation, mathematical modeling, and computer simulation,
how this creativity of natural process can be understood even if it cannot
be predicted or controlled. However, an important feature of emergent
properties is that they are always consistent with, although not necessar-
ily reducible to, the properties of the components of the system. Nature
doesn’t suddenly produce something out of nothing, so there are no mir-
acles (Solé & Goodwin, 2000).
Western science has now arrived at a dramatic turning point.
Scientific knowledge was intended to reveal the laws of nature, which we
could then use for prediction and control of natural processes. This
knowledge has given us a remarkable range of very useful technologies,
and this process will continue for those limited aspects of nature that con-
form to mechanical causality. But what has been revealed by science itself
is that much, probably most, of nature cannot be predicted and con-
trolled. We can now understand why the complex systems on which the
quality of our lives depends, such as the weather, ecological systems,
communities, organizations, economies, and health, are out of our control
except in very limited ways. But we have to interact with the complex sys-
tems that surround us because we are a part of them. What is the appro-
priate relationship to nature in view of our new understanding?

FROM QUANTITY TO QUALITY: INTUITION AND A SCIENCE OF QUALITIES

One of the main constraints on conventional science that limits the abil-
ity to gain insight into the realm of complex phenomena is the restriction
of data to quantifiable, measurable aspects of natural processes. There is
no intrinsic reason that this constraint should be accepted. What is
required in a science is some methodology whereby practicing subjects
come to agreement on their observations and experiences. This is the
basis of quantitative measurement: acceptance of a method whereby dif-
ferent practitioners can reach intersubjective consensus on their results.
Where there is no consensus, there is no “objective” scientific truth.
Why should this not be extended to the observation and experience of
“secondary” qualities? In fact, this extension is practiced in many areas,
an example being the healing professions, whether conventional western
medical practice or complementary therapeutic traditions. The present-
ing subject’s experience of pain and its qualities is certainly used in diag-
nostic practice, as are many other qualities such as color and texture of
skin, posture, tone of voice, etc. Paying close attention to these, as well as
to quantitative data such as temperature, pulse, and blood pressure, is a
significant part of the art of diagnosis. Conventional wisdom accepts that
these skills can only be acquired through practice and experience, which
hones the intuitive faculty to perceive reliably the underlying condition
that is the cause of change from health to disease. Health is an emergent
property that cannot be reduced to the sum of quantitative data about dif-
ferent aspects of the body. Its perception requires the healer to pay atten-
tion to qualities as well as quantities, and to make use of the intuition
(noninferential perception of wholes) in coming to a holistic judgment
about the condition presented.
Conventional scientists begin to get very nervous when this type of
procedure is described as science. They are suspicious of intuition, and
they mistrust qualitative observation. As far as intuition is concerned,
they need have no anxieties: it is a universally recognized subjective
component of scientific discovery. It is the intuitive faculty that makes
sense of diverse data and brings them into a coherent pattern of mean-
ing and intelligibility, although of course the analytical intellect is also
involved in sorting out the logic of the intuitive insight. What is not prac-
ticed in science is the systematic cultivation of the intuitive faculty, the
capacity to recognize the coherent wholes that emerge from related
parts. However, the study of emergent properties in the science of com-
plexity clearly requires the use of intuition to a high degree. It is what is
required to perceive the subtle order that characterizes the holistic
properties of complex systems—ecosystems, communities, organiza-
tions, health.
Furthermore, these emergent properties are closely associated with
“secondary” qualities. The health of an ecosystem is reflected in the qual-
ity of birdsong as well as in the (quantitative) diversity of species.
However, scientists are trained to pay attention only to quantities. As peo-
ple and as naturalists they are aware of qualities, which are often the pri-
mary indicators of change. But as scientists they factor them out of their
consciousness. This restriction is based on a convention that has worked
extremely well for “simple” systems, but it has severe limitations in the
face of complexity. It is time for a move into a science of qualities.
A science of qualities is not new in the western tradition. This is the
science that was practiced by Johann Wolfgang von Goethe in the late
eighteenth and early nineteenth century. Regarded for many years as an
aberration because of an apparent conflict with Newtonian science,
Goethe’s studies have been largely ignored within mainstream science.
However, it is now evident that Goethe’s approach to natural processes is
not so much in direct conflict with the dominant science of quantities as
different in emphasis from it (cf. Bortoft, 1996). In Goethe’s study of color,
for example, which is where he ran into trouble for challenging Newton’s
color theory, an explicit goal is not simply to understand the conditions
under which various colors emerge, but also to relate this to the experi-
ence we have of different colors, i.e., their qualia. The assumption is that
our feelings in response to natural processes are not arbitrary, but can be
used as reliable indicators of the nature of the real processes in which we
participate. Qualities include the realm of the normative, our assessment
of the rightness or wrongness, appropriateness or inappropriateness, of
particular actions in relation to our knowledge.
A science of emergent qualities involves a break with the positivist
tradition that separates facts and values and re-establishes a foundation
for a naturalistic ethics (Collier, 1994). The essential argument here is
that, if we believe that our knowledge is reliable and relates to a real
world, it guides our behavior toward that world. This is clearly true for
conventional scientific knowledge, such as understanding the properties
of gold and using it appropriately in technology. It also holds for qualities:
if we believe that it is in the nature of children to play, we will provide
opportunities for them to do so in order that they can have a good qual-
ity of life. This principle extends logically to treatment of other species
and to nature in general, within an epistemology that includes qualitative
evaluation as an intrinsic aspect of reliable knowing.

INTO PARTICIPATION

Participation now enters as a fundamental ingredient in the human expe-
rience of any phenomenon, which arises out of the encounter between
two real processes that are distinct but not separable: the human process
of becoming and that of the “other,” whatever this may be, to which the
human is attending. In this encounter where the phenomenon arises,
feelings and intuitions are not arbitrary, idiosyncratic accompaniments,
but direct indicators of the nature of the mutual process that occurs in the
encounter. By paying attention to these, we gain insight into the emer-
gent reality in which we participate.
Of course, there are idiosyncratic, personal components of the insight,
just as there are idiosyncratic elements of the integrating theories that
come with flashes of intuitive insight to individual scientists. These need
to be distinguished from the more lasting and universal aspects of the
insight, which is where the process of intersubjective testing comes in to
find consensus among a group of practitioners (cf. Wemelsfelder et al.,
2000). The same type of process is required to evaluate the insights
gained from paying attention to qualities of experience in order to under-
stand the subtle order of complex systems.
The sensitivity of these systems to initial conditions, to change in their
parts or their interactions, means that we must be finely tuned to the
process we seek to influence beneficially in order to monitor our effects,
as in any healing process. These are basic ingredients of a science of qual-
ities. In a sense, they are no more than a statement of what holistic prac-
titioners have been engaged in. However, it is time to develop such a
science systematically as an extension of quantitative science in a direc-
tion that is appropriate to the needs of our age.

METAPHORS AND PRAXIS

The sciences of complexity provide us with an extremely suggestive set of
metaphors, which give useful indications of the properties needed in a
new scientific praxis that could apply to human organizations as well as
to relations with the natural world (Reason & Goodwin, 2000; Stacey et
al., 2000). Moving away from control, letting go, living on the edge of
chaos where emergent order arises that can provide adaptive solutions to
problems: these indicate precisely where we want to be to deal creatively
with unexpected change. Why not simply use the insights of complexity
theory to take human organizations to the edge of chaos so that they can
operate more effectively? There are two reasons that this can’t be done
within our current science of control.
The first has already been described: the restriction of “reliable”
knowledge to quantitative variables and their coherent mathematical
relationships, so that qualia cannot contribute to “objective knowledge.”
The “agents” described in complex adaptive systems have no qualitative
experience and so cannot behave like humans except in a very restrictive,
mechanical sense. The other reason is the assumption that the scientist
must stand outside of and apart from the “system” in order to examine
and influence it. But we are inside the systems that are causing us prob-
lems, part of their intrinsic relationships. We cannot manipulate these
complex processes, taking them to a desired state, because they operate
in terms of principles of self-organization and we are part of the self,
along with all the other participants in the process in which we are
engaged. However, we can feel or intuit change as well as measure what-
ever may help us in assessing what is happening. This is of course how
we live our lives in relation to our fellows, so we have plenty of practice
at it. Once basic needs (of food, shelter, and clothing) are met, quantities
play a relatively small part in achieving a fulfilled life, which depends on
quality of relationships. Ways of systematically developing an appropriate
praxis within self-organizing communities that facilitate the emergence of
appropriate order have been explored and developed within several dif-
ferent traditions, prominent among them being cooperative inquiry or
participatory action research (Heron & Reason, 1997; Reason, 1998).
Science is not ahead in these developments; it is behind.
Our scientific and technological culture has emphasized quantities of
everything as the measure of achievement and fulfilment, and in doing so
has progressively isolated individuals from one another and from nature.
Quantification and control of nature, once acting through technology as a
liberating force for humanity, have now reached the point of enslaving
everything they touch, particularly life itself through patents that turn
organisms and their parts into salable commodities and humans into per-
fectible machines. The “bottom line” of profit as the constantly scruti-
nized criterion of success in the unregulated marketplace is a major
quantity that enslaves the corporate sector and prevents the transition of
most companies to a condition of freedom and creativity.
In physiology it is becoming recognized that such inflexibility of goal,
a kind of rigid homeostasis, is a clear sign of danger: a constant high heart
rate warns of proneness to sudden cardiac arrest. Such order indicates
that the body has lost its flexibility and responsiveness to change and has
fallen into a condition of disease. Health, on the other hand, carries with
it a signature of unpredictable variability in physiological variables, but
variability within limits as in a strange attractor. Indeed, it appears that
health is characterized precisely by a balance between order and chaos in
the body’s functions, which takes us back to the insights of complexity
theory: creative living occurs on the edge of chaos.
This suggests that present business practice, with its rigid focus on
maintaining constant high profits, has resulted in severe proneness to the
economic equivalent of sudden cardiac arrest, as observed in the increas-
ing rate of company failures. Again we have a suggestive metaphor, but
no prescription from complexity theory for healing the patient. There
isn’t one within the current scientific paradigm, for the reasons given
above: it still works within the tradition of separation of the
investigator/manipulator/leader from the system and restricts itself to
quantities, whereas we humans live most of our lives in terms of qualities
and relationships, as does the rest of living nature. Leadership in the new
context means facilitating processes and procedures that encourage high
quality of experience in the group. This then results in a robust creativity
and health such that profits look after themselves, remaining within rea-
sonable bounds but varying unpredictably in the short term.
There are very powerful economic and political forces that act against
such transformation, maintaining a culture of fear in organizations due to
the threat of loss of market share if high profitability is not maintained. It
therefore requires a remarkable act of courage to get to the point of
engaging financial analysts and shareholders in a conversation about the
goals and purposes of trade that could transform the objective of business
to good quality of life for all. Financial analysts are, of course, simply
reflecting to CEOs how the “market” expects them to behave. But the
collusion between analysts, managers, and shareholders is actually what
maintains the dangerous condition of high profits that is a primary symp-
tom of the current economic and environmental disease from which we
suffer.
A better quality of life can only be realized if all the members of our
planetary society are included in the new contract, for this is what par-
ticipation means. It makes no sense trying to achieve a good quality of life
for humans at the expense of the rest of nature, as we are now learning
the hard way through the destructive effects of environmental pollution,
unhealthy industrialized food, turbulent climate change, and species
extinctions. These were all foreseen as dangerous results of our actions by
the few who read the signs and understood mutual dependence through
complex networks.
A science of qualities extends the science of quantities to include the
different ways of knowing that we can use to understand the complex
webs of relationship within which we are embedded at every moment of
our lives. Focus on quality of life by the cultivation of the antennae
needed to participate responsibly in these webs is not new. We do it nat-
urally all the time, and all human cultures have developed these qualities
of participation to a greater or lesser degree. We have chosen to do so to
a lesser degree in our culture and there is a growing consensus that it is
time to recover our balance.

REFERENCES
Bortoft, H. (1996) The Wholeness of Nature: Goethe’s Way Toward a Science of Conscious
Participation in Nature, New York: Lindisfarne Press.
Cohen, J. & Stewart, I. (1994) The Collapse of Chaos, London: Viking.
Collier, A. (1994) Critical Realism: An Introduction to Bhaskar’s Philosophy, London: Verso.
Gleick, J. (1987) Chaos: Making a New Science, New York: Viking.
Heron, J. & Reason, P. (1997) “A participatory inquiry paradigm,” Qualitative Inquiry, 3(3): 274–94.
Kauffman, S. A. (1993) The Origins of Order: Self-Organization and Selection in Evolution,
New York: Oxford University Press.
Kauffman, S. A. (1995) At Home in the Universe: The Search for the Laws of Self-Organization
and Complexity, New York: Oxford University Press.
Kelso, J. A. S. (1995) Dynamic Patterns, Cambridge, MA: MIT Press.
O’Toole, D. V., Robinson, P. A., & Myerscough, M. R. (1999) “Self-organized criticality in
termite architecture: A role for crowding in ensuring ordered nest expansion,” Journal
of Theoretical Biology, 198: 305–27.
Reason, P. (1998) “Co-operative inquiry as a discipline of professional practice,” Journal of
Interprofessional Care, 12: 419–36.
Reason, P. & Goodwin, B. (1999) “Toward a science of qualities in organisations: lessons from
complexity theory and postmodern biology,” Concepts and Transformation, 4: 281–317.
Solé, R. & Goodwin, B. (2000) Signs of Life: How Complexity Pervades Biology, New York:
Basic Books.
Stacey, R., Griffin, D., & Shaw, P. (2000) Complexity and Management: Fad or Radical
Challenge to Systems Thinking?, London: Routledge.
Wemelsfelder, F., Hunter, E. A., Mendl, M. T., & Lawrence, A. B. (2000) “The spontaneous
qualitative assessment of behavioral expressions in pigs: First explorations of a novel
methodology for integrative animal welfare measurement,” Applied Animal Behaviour
Science, 67: 193–215.

EMERGENCE, 2(4), 50–64
Copyright © 2000, Lawrence Erlbaum Associates, Inc.

New Wine in Old Wineskins:
From Organic to Complex Knowledge Management Through the Use of Story
David J. Snowden

Knowledge management is a difficult and challenging topic
that has been subject to oversimplistic approaches from a
variety of authors and technology vendors. It is fashion-
able to reference its 2,500-year-old philosophical origins
in epistemology, which date back to the first use of the phrase “justified
true belief” in Plato’s Theaetetus. It is equally fashionable to claim that
the nature of knowledge is such that it cannot or should not be man-
aged. Both statements are misguided and diversionary for related
reasons.
Philosophy is concerned with the nature of what it means to “know,”
using the specialist language of epistemology, informed and burdened
by thousands of years of context; deploying that language for day-to-day
management action either trivializes it or becomes too academic. The needs of an
organization to stem the outward flow of intellectual capital and more
effectively deploy what has been called the only sustainable source of
competitive advantage (Stewart, 1997) are not best served by the trivial-
ization of “philosophy made simplistic.” Equally, to say that knowledge
management is a mere oxymoron is at best an abrogation of responsibil-
ity, only available to those who have ceased to manage and now merely
advise. At worst, it misrepresents management as an exercise of authority
within a bureaucratic command-and-control environment: a very
restricted use of the word that enters the English language from Latin via
the French for the ability to ride a horse in dressage. Knowledge is some-
thing that modern organizations, both commercial and governmental,
have to manage in the here and now; the question is how.
Modern knowledge management starts with Nonaka and Takeuchi’s
1995 book The Knowledge Creating Company, although Nonaka outlined
the ideas in an article of the same title in the Harvard Business Review
four years earlier (Nonaka, 1991). The 1995 book created the interest
from which many conferences, a few good, and far too many superficial,
books and articles have arisen. Key was its introduction of the distinction
between tacit and explicit knowledge in the much misused SECI (social-
ization, externalization, combination, internalization) model, highly
derivative of, but not completely true to, ideas originally put forward in
Polanyi’s 1962 Terry Lectures at Yale University (Polanyi, 1983).
The context of the 1995 publication, in contrast with that of 1991, was
that the limitations of the then dominant “fad,” business process reengi-
neering (BPR), had started to become evident. Knowledge was managed
before and during BPR, but the word was not problematic in day-to-day
business use. One of the main drivers of BPR was the desire to capture
and embed knowledge into processes in the interests of efficiency, and in
most cases to legitimize or excuse significant cost reduction through
downsizing. This is not the time or the place to argue the merits or other-
wise of BPR; what is relevant is that the attempts to embed knowledge
into processes made organizations increasingly aware of the human or
tacit components of knowledge and the problems and issues associated
with rendering that knowledge explicit. Nonaka’s SECI model was seized
on, whatever its intent, as providing a means by which the said tacit
knowledge could be rendered explicit.
At the same time, we had the first examples of attempts to produce
accounting standards for intellectual capital, most notably at Skandia
under the direction of the world’s first chief knowledge officer
(Edvinsson & Malone, 1997), coupled with the growth of balanced score-
cards and the first scalable collaboration software in Lotus Notes.
This combination of the intuitively attractive SECI model, a major
company managing knowledge within the accepted conventions of
accounting, and the growth of technology enablers gave rise to know-
ledge management and set its agenda.
Looking back in a decade or so, we will probably see 1995 and the
growth of knowledge management as the moment signaling the bounding
of Taylorism, just as quantum mechanics and the uncertainty principle
bounded the universal assumptions of Newtonian science, on which
Taylorism or “scientific management” was predicated. At any one time,
there is a limit to the range of ideas or concepts that managers can hold
within their attention zone of active awareness and conscious manage-
ment. This is not to say that other things are not being managed, but there
are always a limited number of erogenous zones within the body politic
of the organization. These zones receive attention, new investment, and
focus until their value is understood, its limits defined, and the practices
internalized as part of the day-to-day unconscious activities of the organ-
ization; at this point their capability to stimulate interest fails.
Until the period from 1995 to the present day (and for some organi-
zations a few years yet), the erogenous zones were often challenging and
frequently complicated, but benefit could be obtained through the appli-
cation of Taylorist principles. Business schools, the rapid and sometimes
parasitical growth of management consultancies, and the increasing num-
ber and volume of innovative technology developments reflected this in
an accepted process of “fads,” itself a manifestation of entrainment. An
HBR article or a book by a guru would define a new area of attention, usu-
ally at the boundary limits of the previous fad. Conferences and a popu-
lar journal or two would create the necessary level of interest, with
aspirational presentations and articles by industry practitioners providing
the critical mass for a phase shift transition from interest to investment.
At this point, a simple model or easily grasped concept or saying would
become commonplace, and standard recipe-book approaches based on
reductionist thinking would be put in place.
For a time this appeared to be happening with knowledge management,
and for certain technology-based solutions it is firmly established.
However, Nonaka’s separation of tacit from explicit knowledge brought the
commonplace discourse of managers into domains of cultural ambiguity,
human irrationality, and “knowing,” in which the level of interdependency
and interaction rendered the “awareness zone” complex rather than com-
plicated. The increasingly global nature of organizations, enabled by tech-
nology and to a lesser extent the growth of the internet, increased network
connections to the point where the old infrastructure of BPR, the balanced
scorecard, and much systems theory, including the learning organization
(Senge, 1990), started to break down and the space opened up for a new
organic metaphor of management theory informed by complexity thinking.
The deficiencies of the SECI model in practice are becoming evident
(Snowden, 2000a). In particular, organizations are increasingly realizing
that there is a body of tacit knowledge that cannot be made explicit, and
that even much of what can be made explicit shouldn’t be, on grounds of
either cost or flexibility (Snowden, 1997). It is also becoming accepted
that the reductionist assumptions of intellectual capital measurement sys-
tems cannot account for the difference between asset and market value,
and that technology cannot fully replace the need for human interaction.
Finally, and of most interest, there is the increasing realization that much
knowledge is held collectively within communities, and cannot be repre-
sented as the aggregation of individual knowledge. This has immense
consequences for reward and management systems: we are moving
slowly and steadily to a recognition that the modern organization is a
complex network of tribes, rather than a feudal landscape in which
ownership of budget replaces land as the means of enslavement.

MECHANISTIC TO ORGANIC TO COMPLEX

Complexity theory was also starting to appear on the radar of industry
practitioners at this time, but its links to knowledge management were
not immediately evident. However, as early as 1998 virtual conferences
were being set up on “organic knowledge management.” Many of the
early KM practitioners had very quickly reached the limits of what could
be achieved by treating the organization as a machine, the basic
metaphor of Taylorism. Four examples will illustrate this:

◆ We started to see the understandable, but dangerous, assertion of a
dualism between culture and technology in knowledge management
practice. This was understandable as a counterbalance to the use of
technology as a fetish; dangerous in its failure to recognize that tech-
nology is a tool, and that human culture is itself at least partly defined
by our tool-making and tool-using capabilities.
◆ Social capital was resurrected as a term, partly as a reaction to an
excessive focus on technology by many practitioners and vendors.
Such moves were, with a few honorable exceptions, predicated on
utilitarian and capitalist notions of exchange—the primary unit of
analysis remained the individual. Altruism, tribal identity and loyalty,
passion, and belief were excluded, or reduced to some primitive
notion of personal utility and rational acts.
◆ Early experiments took place with the use of story as a means of know-
ledge disclosure and communication, coupled with the use of inquiry
techniques derived from anthropology (Aibel & Snowden, 1998). Story
was adopted as a result of its capability to convey complex ideas in a sim-
ple form and its tolerance of ambiguity and uncertainty. In some ways,
story restored the context that was stripped out in the act of codification.
◆ Some software companies started to exploit the paradox that ensuring
privacy is more likely to lead to knowledge exchange. In these
systems, email records, a much underused knowledge resource, are
trawled to identify sources of expertise; each “expert” can then choose
to have their expertise known, or to retain privacy. Other members of
the organization utilizing the system to find an expert will only be
made aware of experts who have chosen the option of being publicly
known, but individuals who have chosen to be private are notified of
who is searching for their expertise; they can then choose to volunteer
or withhold that expertise depending on a complex set of factors, rang-
ing from the reputation of the searching individual to the level of fear
of abuse or time factors that motivated them to privacy in the first
place. This very simple intervention gives rise to complex behavior:
individuals who prove worthy of trust are more likely to gain access to
nonpublic expertise than are those who play the political game for
their own advantage at the expense of others, reversing the normal
balance of power in large organizations; a schematic sketch of such a mechanism follows this list.
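The following sketch is purely illustrative: the class names, sample data, and notification policy are assumptions, not a description of any specific product referred to above.

# Illustrative sketch of the privacy-preserving expertise-location mechanism
# described in the last item above. All names, data, and policies are hypothetical.

class Expert:
    def __init__(self, name, topics, public=False):
        self.name = name
        self.topics = set(topics)    # topics inferred, e.g., from trawled email records
        self.public = public         # has this person chosen to be publicly listed?
        self.notifications = []      # private experts learn who is searching for them

class ExpertiseDirectory:
    def __init__(self, experts):
        self.experts = experts

    def search(self, searcher, topic):
        """Return only publicly listed experts; notify the private ones of the request."""
        visible = []
        for e in self.experts:
            if topic not in e.topics:
                continue
            if e.public:
                visible.append(e.name)
            else:
                # The private expert sees who is asking and can choose to volunteer.
                e.notifications.append((searcher, topic))
        return visible

directory = ExpertiseDirectory([
    Expert("Ana", ["pricing", "risk"], public=True),
    Expert("Bei", ["risk"], public=False),
])
print(directory.search("Carlos", "risk"))    # ['Ana']: only the public expert is revealed
print(directory.experts[1].notifications)    # [('Carlos', 'risk')]: Bei decides whether to respond

Trust then accumulates outside the system: whether Bei responds to Carlos depends on reputation, fear of abuse, and time, which is precisely the complex behavior that this very simple intervention allows.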

Gradually, we are seeing a new pattern of knowledge management prac-
tice emerging, in which the organization is treated as a complex ecology.
The role of the manager is as gardener or game warden, not mechanic or
big game hunter; the consultant becomes a mentor or enabler of descrip-
tive self-awareness rather than the purveyor of prescriptions to manage
the symptoms of corporate failure. However, little of this work is properly
rooted in a coherent set of concepts. As such, it remains new wine in old
wineskins, with all the consequent problems of contamination and leak-
age. Even some of the more organic practitioners and academics con-
strain their written words and practices to a Taylorist model of
respectability, itself a sin of both omission and commission from which
many complexity writers are, regrettably, not exempt.
In parallel with this shift from mechanical to organic, we see an
increasing awareness of complexity theory in organizations. The last few
years have seen an increasing number of books seeking to popularize
complexity theory in the context of management science. Too many of
these are trivial, based on inadequate understanding and attempts to re-
label existing industry practices as “complex” in an attempt to take the
guru spot of an emerging fad. Others provide real insight in a form that is
readable to a more general population than the research community (e.g.,
Axelrod & Cohen, 1999). However, most good complexity writing is
descriptive, reflecting its research roots. For Taylorist thinking a descrip-
tion can readily lead to a prescriptive model: a hypothesis from the
description is tested, and a prescriptive and purportedly universal model
or recipe book created. Nevertheless, this is not true for the cases
described as complex; here we are identifying underlying principles or
concepts that result in practices. The practices themselves cannot be gen-
eralized into universal models, as they result from the application of the
principle but are not principles in themselves. In a complex world, best
practice is too context specific for universal application; it is always past
practice. Unfortunately, the market has been habituated to imitation of
best practice arising from models generalized from the practices of a sam-
ple of organizations, ideally blue chip, that are used to validate new “fads”
and provide security blankets. For understandable reasons, many work-
ers in complexity theory are conforming to this model.

INFORMING ORGANIC PRACTICE THROUGH COMPLEXITY

If we are to inform organic knowledge management by complexity, to
provide it with the necessary conceptual underpinning, then we need to
shift market awareness from imitation of generalized practice to the
application of basic concepts. Management has to realize that each solu-
tion will be unique, but the underlying principles remain the same: we
start intervention-based journeys open to discovery, rather than deter-
mining goals focused entirely on exploitation of what is currently known.
Expressed another way, organizations have to manage on the basis of
ambiguity and uncertainty if they are to take true advantage of the com-
plex system formed by their intellectual capital. This is likely to be bot-
tom up, rather than top down.
An important way to achieve this step change in thinking is to take an
existing knowledge management issue and its associated organic practice,
review it in the context of complexity theory, and apply the revisions in a
visible way that, once articulated, enables a shift in thinking and
understanding. By way of illustration, the remainder of this article will
examine enhancements to, and the consequent transformation of, com-
munication issues within organizations through the use of story,
enhanced by thinking from complexity.
Story allows the communication of complex ideas in a simple, memo-
rable form (Snowden, 2000b, 2000c). It also provides a highly effective
means of mapping knowledge within the organization and embedding
sustainable lessons learnt (Aibel & Snowden, 1998). Some approaches to
the construction of stories rely on identifying examples of good and bad
behavior and, through a process of exclusion and refinement, creating memo-
rable stories that can lead to behavior modifications (Denning, 2000). Others
use narratives to create a greater understanding of the organization through
the construction of story taxonomies (Gabriel, 2000), while some of the more
creative applications construct fairy stories to create a metaphorical envi-
ronment to enhance understanding and mutual respect (Spark Team, 2000).
These three approaches rely to a greater or lesser extent on the role of an
expert interpreter and story writer/creator. The resulting stories are either
fictional or purport, in varying degrees, to tell the truth in a compelling manner.
All such methods suffer from the danger of emerging antistories: sto-
ries that evolve very quickly in organizations as a cynical reaction to an
official story of “goodness” that exceeds the limits of the believable or
politically acceptable. For factual stories, the need to make the story
compelling requires a degree of emphasis and selection, which is fertile
ground for the emergence of antistory. For fantasy, the medium itself is
likely to induce cynicism; there are a limited number of executives pre-
pared to kiss frogs and save fair maidens with any degree of seriousness,
whatever the literary merits of the story.
Work under the author’s direction approached story from a different
perspective. The early emphasis was on the use of story circles, a facili-
tated technique to provide the raw material for extracting “knowledge dis-
closure points” in the form of decisions, judgments, problem resolution,
etc. (Snowden, 1998, 2000a), from which knowledge assets could be
derived and cataloged. Story circles typically lasted from half a day to a full day
and used a variety of techniques (Snowden, 2000b), including fiction to
provoke and elicit anecdotes from the participants that, considered as a
whole, express the learning, experience, and culture of the group. The
gathering of anecdotal material provided highly valuable content in its
own right, aside from its use as a disclosure medium. It was soon evident
that this offered the raw material for story construction; indeed, anecdotal
material was used to persuade executives to action in several assignments.
From this point, development of story-construction techniques could
easily have evolved in the same direction as the three approaches identi-
fied above, had it not been for a moment of serendipity. A large project
requiring significant behavior change in a rule-bound organization with
strong private networks coincided with early study and conference atten-
dance to scope a research project into the application of complexity sci-
ence to organizations. The eureka moment took place in a tortuous
workshop session in an airless room in Paris.

DEPRESSION, SERENDIPITY, AND REVELATION

The project in question was a lessons-learnt program in international
sales effectiveness. Several story circles had been conducted in a variety
of countries with failed, failing, and successful bid teams. A depressing
pattern was emerging: where teams were successful in winning business,
they ignored the processes designed to mitigate risk, provide manage-
ment control, and ensure responsible pricing. On the other hand, teams
who followed the process generally failed, and often spent the entire
gross margin of the contract on the bidding process itself. This was not
going to be a palatable conclusion to the project sponsor, hence the
depression.
In an attempt to create some meaning, anecdotal material from suc-
cessful projects was sorted into two groups: contracts won that were both
profitable and had created few significant problems in implementation;
and contracts that everyone would, in retrospect, prefer to have lost.
Each anecdote was printed in full or summary on a single sheet of paper
and the two classes of material were tacked to opposite walls. Workshop
participants were encouraged to walk and talk about the material.
Hypothesis, speculation, challenge, discussion, and disputation created a
rich information flow and suddenly a pattern became clear, self-evident
to all; to this day, most participants claim the original insight and proba-
bly all are correct in their claims. Successful bids that were not the
subject of bitter recrimination years later had all followed the spirit
behind the process. Even if the formal risk assessment had been down-
played, or the risk premium cut without proper authority in order to win
a bid, risk had been actively discussed. Further discussion and refine-
ment revealed that most bid teams who were successful in winning sus-
tainable business were operating from some simple, unarticulated
heuristics that were held in common across bid teams with little or no
contact, but with many common experiences.
Once they had been articulated, these heuristics were capable of
rational explanation, but only with the benefit of hindsight. For example,
one heuristic was that the risk premium could be halved with complete
safety. This was easy to remember and operated in many cases; study of
the bid process indicated a series of review stages in the calculation of the
risk premium, at each of which the actual practice was to add a further
safety margin so that the reviewers covered themselves. The net effect was that the premium
was doubled by the formal process, and then halved back to its original
level by human wit and ingenuity.
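As a purely arithmetical illustration (the number of review stages and the size of each safety margin below are hypothetical, not figures from the project):

# Hypothetical illustration of the "halve the risk premium" heuristic.
base_premium = 100.0        # initial risk premium, in arbitrary units
safety_margin = 0.19        # assumed uplift added at each review stage

premium = base_premium
for _ in range(4):          # assume four review stages, purely for illustration
    premium *= 1 + safety_margin    # each review adds its own margin "to be safe"

print(round(premium))       # ~201: the formal process has roughly doubled the premium
print(round(premium / 2))   # ~100: halving it restores roughly the original level

Compounding a few modest margins roughly doubles the premium, which is why the informally shared heuristic of halving it could operate with complete safety.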

This valuable insight would have remained unique to that project, had
it not been for the coincidence that the project leader and the initiator of
the complexity project was the author, with both activities concurrent. It
appeared that there was a strong correlation between the heuristics and the
rules governing flocking in the boids algorithm. In addition, the workshop
process had increased information flows to painful levels, but had in con-
sequence seen a breakdown of existing perceptions and beliefs and
resulted in new insights that emerged from the active discourse of informed
participants. Critically, no expert had analyzed the material and drawn con-
clusions; meaning had arisen from the community itself, but only where the
environment had been changed to create discomfort and disruption.
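For readers unfamiliar with the reference, the following is a schematic sketch of the three classic boids flocking rules (separation, alignment, cohesion); the weights, radius, and two-dimensional handling are illustrative assumptions, not the model actually used in the project.

# Schematic boids sketch: simple local rules from which flocking emerges.
import random

def step(positions, velocities, w_sep=0.05, w_ali=0.05, w_coh=0.005, radius=10.0):
    new_velocities = []
    for i, (px, py) in enumerate(positions):
        others = [j for j in range(len(positions)) if j != i]
        n = len(others)
        # Cohesion: steer toward the average position of the other boids.
        coh_x = sum(positions[j][0] for j in others) / n - px
        coh_y = sum(positions[j][1] for j in others) / n - py
        # Alignment: steer toward the average velocity of the other boids.
        ali_x = sum(velocities[j][0] for j in others) / n - velocities[i][0]
        ali_y = sum(velocities[j][1] for j in others) / n - velocities[i][1]
        # Separation: steer away from any boid closer than `radius`.
        near = [j for j in others
                if (px - positions[j][0]) ** 2 + (py - positions[j][1]) ** 2 < radius ** 2]
        sep_x = sum(px - positions[j][0] for j in near)
        sep_y = sum(py - positions[j][1] for j in near)
        new_velocities.append((velocities[i][0] + w_coh * coh_x + w_ali * ali_x + w_sep * sep_x,
                               velocities[i][1] + w_coh * coh_y + w_ali * ali_y + w_sep * sep_y))
    new_positions = [(p[0] + v[0], p[1] + v[1]) for p, v in zip(positions, new_velocities)]
    return new_positions, new_velocities

random.seed(0)
pos = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(30)]
vel = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(30)]
for _ in range(200):
    pos, vel = step(pos, vel)

No boid is told what the flock should look like; coherent collective movement, like the bid teams' shared heuristics, emerges from a few simple local rules.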
Subsequent work in a variety of projects, this time informed in
advance by complexity, validated the original insights. Once a critical
mass of anecdotes, in practice between 30 and 40, had been gathered,
increasing information flow between agents in a workshop environment
would lead to the emergence of articulated organizing principles, gener-
ally expressed as rules, values, or beliefs. This process was assisted if it
took place in a performance space with physical activity, movement, and
active, often contentious, dialog. By increasing the information flow to
the point where current perceptions or infrastructure broke down, organ-
izing principles could be articulated: emergent properties of a complex
system. In addition, the process of emergence involved a degree of phase
shift: dialog would appear meaningless for long periods and then sud-
denly meaning would emerge in the form of a memorable phrase.
However, there were still issues: sometimes the identification of an
organizing principle was difficult, and not always consistent. There
seemed to be a step missing in the process.

THE EMERGENCE OF ARCHETYPES

Now we reach the second moment of serendipity: a keynote address on
the use of story at a congress of librarians in Derbyshire, UK. Librarians
are interesting people: they are curious, collect eclectic knowledge, and
are willing to share it. In this case the inspiration came from Dr. Judy
Palmer, director of the Health Care Libraries Unit at John Radcliffe
Hospital, Oxford. She had little knowledge of what she was unleashing
when she recommended the Mulla Nasrudin stories from the Middle
East, collected by Idries Shah (1985). These stories follow a format com-
mon to many storytelling cultures in that they use archetypes to explore
aspects of human interaction. Other examples include Greek myths,
where each god represents an extreme form of human behavior; and Abo-
riginal stories in Australia, where individual animals display specific
aspects of human interaction. Archetypal stories, among many other
things, allow conversations to take place about aspects of human behav-
ior that cannot be talked about directly. This is one of the uses of the
Mulla Nasrudin stories in Sufi society: if you do something stupid, you
don’t tell people about it, you make up a story in which the Mulla did it.
The form, structure, and characters are well known and an amusing story
will spread quickly and naturally within the community, dispersing
knowledge with it. Mulla stories continue today, with warning stories
about the Mulla meeting British Immigration at Heathrow Airport. Peter
Hawkins of Bath Consulting in the UK has written, but not published, a
series of Mulla stories for today’s managers. The popular Dilbert cartoons
are a more modern version of this story form; pinned to walls, emailed, or
posted anonymously they provide a powerful, amusing learning mecha-
nism, with bite!
The Mulla Nasrudin stories evolved over many years; for every car-
toonist who succeeds there are many who fail. If we want to use arche-
types in organizational story, we cannot wait for evolution, nor can we
experiment with many options until one succeeds. We need to be able to
produce archetypes that resonate with the organization in an efficient and
timely manner. Again, the same workshop techniques that were used for
organizing principles come into use, but with the addition of a cartoonist
and possibly some actors, depending on the planned use of the arche-
types. The anecdotal database is once more key: workshop participants
converse about the anecdotes, their meaning and relevance, and as this
talking increases and more individuals connect with other individuals in
multithreaded conversation, characters start to emerge. The cartoonist is
there to funnel this dialog into a set of archetypal characters by drawing
what he or she hears and then redrawing in dialog with the workshop par-
ticipants. Far more easily than with organizing principles, a phase shift
takes place and the archetypes emerge from the discourse between the
participants, focused by the cartoonist. Importantly, an “expert” does not
“analyze” the material or use archetypes previously identified as “appro-
priate” or “best practice” for that industry sector. That would be a
Taylorist approach, old wineskins for new wine. Archetypes emerge from
the discourse of the community and thus resonate with that community
when they are used.

VALUES, RULES, BELIEFS, AND THE AVOIDANCE OF ANTISTORIES

Now we can return to the difficulties and inconsistencies in identifying
organizing principles. With the archetypes we have created an additional
and useful agent. The archetypes have already focused the extreme aspects
of the community and are represented by vivid cartoons. We can create a
new discourse between and about the archetypes. Here, actors can be used
to enhance the dialog. Actors are not inhibited in role-play, while most mem-
bers of an organization are. Someone engaging in a role-play also tends to
formulate strong opinions about what they said, regardless of what is heard.
By allowing workshop participants to coach an actor in role-play, the inhibi-
tion is removed and hearing is not impaired. Stories are created about the
archetypes by coaching actors into impromptu plays in which they character-
ize and represent the archetypes. This additional level of discourse provides
the organizing principles in the event of difficulties or inconsistencies.
Some of the uses of organizing principles are self-evident; they are a
measure of culture and an indication of the nature of the community. The
form of their expression is also valuable. Rules, values, and beliefs are dif-
ferent things and have different implications for knowledge creation and
community formation.
In the context of organizational storytelling, the value of the organiz-
ing principles, and to a lesser extent the archetypes, is in preventing anti-
stories. As stated, an antistory is generally a cynical and spontaneous
reaction to a script that is too far away from the reality of life within the
organization concerned, or where the powerful act in a hypocritical man-
ner. All organizations have antistories, ranging from initiative-weary cyn-
icism to self-righteous indignation. Most internal communication within
an organization attempts to create a script based on idealized behavior.
This is also a strong tendency in some of the other approaches to story-
telling identified earlier. The temptation to tell things how they should be
is, or becomes, irresistible.
The issue with organizing principles is that they rarely reflect the offi-
cial descriptions of organizational culture, but official messages presume
the official perception of culture. Now that we know the organizing prin-
ciples we can manage the story, by controlling the shift in organizing prin-
ciples. Reinforcing an existing value or achieving a change or modification
of a rule is possible without antistory. Small incremental changes work;
the catastrophic changes that are occasionally necessary are more unpre-
dictable in their outcome and more likely to generate antistory.

ILLUSTRATIVE USES OF ARCHETYPES

Archetype-based stories are very powerful for a variety of purposes. One
of the most obvious is that of the original Mulla Nasrudin scenario,
namely, as an indirect confessional device. Because the archetypes are
drawn from the community, they resonate in day-to-day use and can be
incorporated into lessons-learnt programs. Encouraging teams to relate
stories using the archetypes as well as telling the “official” story creates a
more complex and valuable learning environment. If the same archetypes
are incorporated into training programs, induction programs, and the
like, the confessional device is institutionalized within the organization.
This can produce drastic reductions in training times and increased
retention of learning through reference to the archetypes and their sto-
ries (Snowden, 2000d).
Another use, linked strongly to concepts of self-organization and
descriptive self-awareness, is of archetypes as an indexing tool for oral
histories. These are valuable means by which the experience of past
employees can be captured in a memorable form; they allow induction
times to be shortened by giving employees access to stories that it might
otherwise take them months or years to accumulate. They allow employ-
ees at all levels to explore alternative views, or investigate likely
responses to a situation from different viewpoints. While several compa-
nies have started work on oral histories, they have allowed themselves to
be constrained by Taylorist thinking. The general tendency is to construct
templates and interview guidelines and then go out and gather the mate-
rial. The reasons for this are understandable: there is a concern about
how to index and catalog the database; a desire to ensure that all neces-
sary information is gathered; a need to structure incoming data so that it
is easy to analyze. All of this misses both the point and the opportunity.
An oral history is a rich repository of stories told and retold across gener-
ations. If we look at the evolution of standard stories in communities, we
see that they evolve through constant telling and retelling to a form that
stands up and delivers a desired message. This cannot be anticipated or
designed; it has to evolve.
Designing an environment for that evolution to take place is simple, if
we treat the problem as complex rather than complicated. First, storage is
cheap and stories can be captured pervasively. Video booths, tape recorders
given to key staff before they leave for home each night and collected the
next day, children asked to enter a competition to interview their parent
about their work and present it to the company: the list is endless if one is
creative. Once a reasonable sample of these stories is available, in a work-
shop environment we create discourse levels to the point at which a lim-
ited number of archetypes emerges, say five. Each incoming oral history is
then self-classified using a simple point allocation; for example, the story-
teller allocates 100 points over no more than three archetypes.
Given this approach, we can take any number of incoming oral histo-
ries without involved structured interviews or similar. Now, in using
those histories in, say, a web environment, we present the archetypes on
the screen, preferably in cartoon format, and under each archetype we
place a slider bar. By manipulating the slider bar the user of the system
can navigate across a rich database of experiences, discovering material
anew on each occasion. Decision makers can look at the stories being told
by groups of individuals who categorize themselves in a similar way. We
have created a self-organizing learning ecology, through a simple process
that will itself give rise to complex behavior.
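A minimal sketch of how such self-classification and slider-based navigation might be represented follows; the archetype names, sample stories, and matching rule are assumptions for illustration only.

# Oral histories self-classified by allocating 100 points across at most three
# of five archetypes, then retrieved by the positions of per-archetype sliders.
# Archetype names, stories, and the scoring rule are hypothetical.
ARCHETYPES = ["Sceptic", "Fixer", "Storyteller", "Rule-keeper", "Pioneer"]

stories = [
    {"title": "The bid we wish we had lost", "weights": {"Rule-keeper": 60, "Sceptic": 40}},
    {"title": "How the night shift fixed it", "weights": {"Fixer": 70, "Pioneer": 30}},
    {"title": "Telling the client the truth", "weights": {"Storyteller": 50, "Sceptic": 30, "Pioneer": 20}},
]

def score(story, sliders):
    """Weighted overlap between a story's self-classification and the slider settings."""
    return sum(story["weights"].get(name, 0) * sliders.get(name, 0.0) for name in ARCHETYPES)

def navigate(sliders, top=2):
    """Return the stories that best match the current slider positions (0.0 to 1.0)."""
    return sorted(stories, key=lambda s: score(s, sliders), reverse=True)[:top]

# A user pushes the "Sceptic" and "Rule-keeper" sliders up and leaves the rest low.
for s in navigate({"Sceptic": 0.9, "Rule-keeper": 0.7}):
    print(s["title"])

Because the classification comes from the storytellers themselves rather than from a designed template, the catalog can grow without structured interviews, and different slider settings surface different material on each visit.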

CONCLUSIONS
Complexity-based thinking, whether through direct action or metaphor,
is a fundamental shift in the way we think about organizations. It is not
the latest in a set of fads or concepts that extend and develop the Taylorist
philosophy. Instead, it bounds Taylorism, limiting it to the execution of
stable and structured initiatives, just as Newtonian science was bounded
but not invalidated by the discoveries of modern physics. This is very dif-
ficult for individuals moving into this field, whether practitioners or aca-
demics, or the increasing population of individuals who straddle both
domains. Academic life and the day-to-day practices of consultancy com-
panies are firmly established in the norms and paradigms of scientific
management. The great and understandable temptation is to dress up the
new ideas in the models of the old: to put new wine into old wineskins.
This may be the only way to get funding, or it may be a necessity for sur-
vival. However, the general pattern of human history is that new ideas
gain currency only after a degree of martyrdom, or at least the courage to
risk being condemned for heresy.
Much knowledge management practice, and the associated failures
directly attributable to Taylorist assumptions, has provided an awareness
at a high level in many organizations that something is wrong. The old
models do not work, or work in different ways. Planned and structured
interventions result in unanticipated and surprising consequences.
Knowledge extracted from an employee and embedded in a database is
discovered to be unusable without the original creator. Investments in
large, sophisticated document databases in which experts codify their
knowledge provide no return or benefit; they are just used to find contact
names. The list is legion, depressing, and hope-bearing at the same time.
It is no longer necessary to apologize for thinking in terms of complex
rather than complicated systems.
If this change is to be consolidated and built on, there are several
important changes necessary in the way we think about and intervene in
organizations:

◆ We need to shift from experts who analyze and interpret, to facilita-
tors who through active discourse enable emergence of new under-
standing and perspective.
◆ We need to create a clear separation of complex from complicated in the
minds of organizational decision makers; internalizing this one distinction can
make all the difference to the reception that a more radical,
complexity-based solution will receive. We also associate complexity
with simplicity and complicated approaches with simplistic ones. In
dealing with a complex system, we need to draw boundaries and con-
struct simple interventions that result in complex activity.
◆ The seductive power of goals and board-level sponsorship must be
resisted in favor of starting journeys and building responsiveness into
the organization so that it can gain first-mover advantage from dis-
coveries on that journey.
◆ Taylorism is not rejected; it is bounded, just as Newtonian physics was
bounded. For many activities a complicated approach is the correct
approach and to do anything else is foolish.
◆ Techniques such as story, developed within the practices of organic
knowledge management, are better informed through complexity, which
provides conceptual roots. We need to take other, nonmechanistic suc-
cess stories and reinvent them in the context of complexity.
◆ Above all, a critical mass of workers in this field have to move from
description and research into action—not seeking to establish best-
practice replicable cases, which is the Taylorist mode, but actively
applying the principles of complexity theory to difficult areas of
organizational behavior, choosing those issues for which it is
accepted that more traditional routes have failed, then transforming
them through simple and creative interventions that lead to complex
behaviors.

Major shifts in thinking are rare. The connectedness of the web, the
breakdown of mechanistic knowledge management, globalization, and all
the words beginning with e: all of these signal a change in thinking at
least as great as the switch from medieval to Renaissance society. A change
of this magnitude will always mean that the inquisition of academic and
business orthodoxy will offer the Galileo option to pioneers, but it should
be resisted. However painful the alternative, putting new wine into old
wineskins always results in leakage and spoilage.

REFERENCES
Aibel, J. & Snowden, D. J. (1998) “Intellectual capital deployment: A new perspective,”
Focus on Change Management, 47(September): 15–20.
Axelrod, R. & Cohen, M. (1999) Harnessing Complexity: Organizational Implications of a
Scientific Frontier, New York: Free Press.
Denning, S. (2000) The Springboard, Oxford, UK: Butterworth Heinemann.
Edvinsson, L. & Malone, M. (1997) Intellectual Capital: Realizing Your Company’s True
Value by Finding its Hidden Brainpower, New York: HarperBusiness.
Gabriel, Y. (2000) Story Telling in Organizations: Facts, Fictions, and Fantasies, Oxford, UK:
Oxford University Press.
Nonaka, I. (1991) “The knowledge-creating company,” Harvard Business Review, Nov–Dec:
96–104.
Nonaka, I. & Takeuchi, H. (1995) The Knowledge-Creating Company: How Japanese
Companies Create the Dynamics of Innovation, Oxford, UK: Oxford University Press.
Polanyi, M. (1983) The Tacit Dimension, New York: Doubleday.
Senge, P. (1990) The Fifth Discipline: The Art and Practice of the Learning Organization,
New York: Doubleday Currency.
Shah, I. (1985) The Exploits of the Incomparable Mulla Nasrudin/The Subtleties of the
Inimitable Mulla Nasrudin (2 vols.), London: Octagon Press.
Snowden, D. (1997) “A Framework for Creating a Sustainable Programme in Knowledge
Management,” in Business Guide to Knowledge Management, S. Rock (ed.), London:
Caspian Publishing.
Snowden, D. (1998) “Thresholds of acceptable uncertainty: Achieving symbiosis between
intellectual assets through mapping and simple models,” Knowledge Management, May.
Snowden, D. (2000a) “Organic knowledge management Part I: The ASHEN model: An
enabler of action,” Knowledge Management, 3(7, April): 14–17.
Snowden, D. J. (2000b) “The art and science of story or ‘Are you sitting uncomfortably?’ Part
1: Gathering and harvesting the raw material,” Business Information Review, 17(3):
147–56.
Snowden, D. J. (2000c) “The art and science of story or ‘Are you sitting uncomfortably?’ Part
2: The weft and the warp of purposeful story,” Business Information Review, 17(4).
Snowden, D. J. (2000d) “Story Telling and Other Organic Tools for Chief Learning Officers
and Chief Knowledge Officers,” in D. Bonner (ed.) In Action: Leading Knowledge
Management and Learning, London: ASTD (www.astd.org).
Spark Team (2000) Story Telling, Stories and Narrative in Effecting Transition, Spark Press,
sparkteam@sparknow.net.
Stewart, T. (1997) Intellectual Capital: The New Wealth of Organizations, New York:
Doubleday.

EMERGENCE, 2(4), 65–77
Copyright © 2000, Lawrence Erlbaum Associates, Inc.

The Concept of Emergence in Social Science:
Its History and Importance
Geoffrey M. Hodgson

The terms “emergence” and “emergent property” date from
the last quarter of the nineteenth century. However, the
general idea behind these terms is older. It is redolent, for
example, of the “law of the transformation of quantity into
quality” laid down by G.W.F. Hegel in his Logic and subsequently taken
up by Karl Marx and Frederick Engels. The philosopher Auguste Comte
(1853, Vol. 2: 181) wrote of irreducible properties: “Society is no more
decomposable into individuals than a geometrical surface is into lines, or
a line into points.” The idea of emergence was also hinted at by John
Stuart Mill (1843, Bk 3, Ch. 6, Para. 2) with his idea of “heteropathic”
causation.
The word “emergent” in this context was first suggested by the
philosopher George Lewes (1875, Ch. 3: 412). Subsequently, the philoso-
pher of biology Conwy Lloyd Morgan (1927, 1933) wrote extensively on
the topic. Following Mill and Lewes, Morgan (1927: 3–4) defined emer-
gent properties as “unpredictable” and “non-additive” results of complex
processes. In more detail, Morgan (1932: 253) explained:

the hypothesis is that when certain items of “stuff,” say o p q, enter into
some relational organization R in unity of “substance,” the whole R(o p q)
has some “properties” which could not be deduced from prior knowledge
of the properties of o, p, and q taken severally.

Morgan saw such properties as crucial to evolution in its most meaning-
ful and creative sense, where (Morgan, 1927: 112):

the emphasis is not on the unfolding of something already in being but on
the outspringing of something that has hitherto not been in being. It is in
this sense only that the noun may carry the adjective “emergent.”

For Morgan, evolution creates a hierarchy of increasing richness and
complexity in integral systems “as new kinds of relatedness” successively
emerge (Morgan, 1927: 203). Also for Morgan, the “non-additive” char-
acter of complex systems must involve a shift from mechanistic to
organic metaphors: “precedence should now be given to organism rather
than to mechanism—to organization rather than aggregation” (Morgan,
1933: 58).
Morgan’s formulation of the concept was explicitly acknowledged by
a group of philosophers in the 1920s. Prominent among these was
Samuel Alexander (1920) at the University of Manchester in the UK and
Alfred Whitehead (1926) at Harvard University in the USA. The psy-
chologist and philosopher William McDougall (1929) was conspicuous in
the presentation and development of the concept. Similar ideas also
appeared independently at the time. The American philosopher Roy
Sellars (1922) developed the notion of “creative synthesis” to explain how
new properties could emerge in complex systems. Accordingly, a set of
ideas was established in philosophy in Britain and America in the 1920s
that are remarkably redolent of the ideas that emerged later in complex-
ity theory in the 1990s.
However, this earlier wave of complexity thinking did not last very
long. Within philosophy, ontological speculation about the nature and
properties of reality became highly unpopular, as positivism grew in
influence in the interwar period. Positivists believe that “metaphysical”
propositions that are not directly grounded on experience are “unscien-
tific.” The ontological concerns of Morgan, Alexander, Whitehead,
McDougall, and Sellars were dismissed—along with the concept of
emergence—as metaphysical and useless speculation.
Nevertheless, despite this setback, the concept of emergence did have
an enduring impact. It survived at the fringes of biology and social sci-
ence until its rediscovery later in the twentieth century.

PAST IMPACTS OF EMERGENCE ON THE SOCIAL SCIENCES

One of the first social scientists to be influenced by Morgan was the insti-
tutional economist Thorstein Veblen. Morgan visited Chicago in 1896
and Veblen was crucially influenced by his ideas (Dorfman, 1934;
Hodgson, 1998; Tilman, 1996). The influence of Morgan is evident in
Veblen’s treatment of institutions as phenomena that are dependent on
individuals but are not reducible to them.
Prior to Veblen, many social scientists believed that social phenomena
could be understood in terms of the biological characteristics of the pop-
ulations involved. Human society, it was thought, could evolve no more
rapidly than the individuals themselves. From such a standpoint, the
rapid evolution of human civilization could only be explained if there
were some Lamarckian process by which acquired characteristics could
be inherited. Otherwise, there would be no explanation of the rapid evo-
lution of the human genetic material that supposedly correlated with the
evolution of human civilization in a few thousand years. Like other anti-
Lamarckians, Morgan challenged this, arguing that the inheritance of
acquired characteristics was not plausible. But this anti-Lamarckian
standpoint created a problem: if there was no inheritance of acquired
characteristics, then how could the relatively rapid evolution of civiliza-
tion be explained?
Morgan (1896: 340) resolved this problem by suggesting that evolu-
tion occurs both at the level of human genes and at the level of human
social institutions. These social institutions act as a storehouse of accu-
mulating and evolving social customs, technology, and knowledge.
Furthermore, as this cultural heritage itself evolves, it provides a new cul-
tural and institutional environment for the development of each human
individual. This evolving social environment unleashes new possibilities
for each person, even if human nature and the human genetic endow-
ment remain more or less the same. As a Darwinian opponent of the
Lamarckian theory of biological inheritance, Morgan argued that it was
not the human genetic endowment that had evolved significantly in the
last few centuries, but the human social environment.
Morgan’s Darwinian understanding of evolution led him to promote the
idea of an emergent level of socioeconomic evolution that was not explica-
ble exclusively in terms of the biological characteristics of the individuals
involved. Evolution occurs at this emergent level as well, and without any
necessary change in human biotic characteristics. Accordingly, the crucial
concepts of emergence and emergent properties were liberated by the
insistence on a barrier between acquired habit and biotic inheritance. The
biological and the social spheres became partially autonomous, but
linked, levels of analysis. On this basis, in later works, Morgan developed
the philosophical concept of emergence.
Veblen did not use the concept of emergence explicitly. Nevertheless,
it is striking that after 1896 the concept of the “natural selection of insti-
tutions” began to appear explicitly in his work. Furthermore, some pas-
sages in his writings read almost as rephrased versions of Morgan’s texts
(Hodgson, 1998).
Following Veblen, the concept of emergence assumed a marginal exis-
tence in Anglophone social science. There are a few rare examples from
economics. The institutional economist Morris Copeland (1927) men-
tioned Morgan’s concept of emergent properties. While at Columbia
University, the New Zealand economist Ralph Souter (1933: 111) dis-
cussed the importance of emergent properties, acknowledging the prece-
dent set by Morgan and Alexander. He criticized the Austrian school of
economics for its omission of emergent properties: Austrian economists
proclaim an individualistic ontology and methodology, and insufficiently
acknowledge the existence of emergent properties at the systemic level.
The English institutional economist John A. Hobson (1936: 216) wrote in
his book on Veblen:

Emergent evolution brings unpredictable novelties into the processes of
history, and disorder, hazard, chance, are brought into the play of ener-
getic action.

A more significant aftermath of Morgan’s influence on Veblen was inex-
plicit, yet massive in its reverberations. The institutional economist
Wesley Mitchell, formerly a student of Veblen, established himself as the
leading American economist in the interwar period. At this time, macro-
economics was not yet established as a subdiscipline; mainstream eco-
nomics was microeconomic in character.
Mitchell tried to break from this individualist foundation. In his 1924
Presidential Address to the American Economic Association, Mitchell
(1937: 26) argued that economists need not begin with a theory of indi-
vidual behavior but with the statistical observation of “mass phenomena.”
Mitchell (1937: 30) explained that this was possible “because institutions
standardize behavior, and thereby facilitate statistical procedure.”
Mitchell thus hinted at a process of social conformism that stabilized
behavior in an institutional context. To rephrase this in the language of
emergence and complexity: Mitchell and others saw complex systems
involving positive feedback effects that led to relatively stable emergent
phenomena at the macroeconomic level.
Mitchell and his colleagues in the US National Bureau of Economic
Research in the 1920s and 1930s played a vital role in the development
of national income accounting. They suggested that aggregate, macro-
economic phenomena have an ontological and empirical legitimacy.
Arguably, these developments prepared some of the groundwork for the
Keynesian revolution in economics. Through the development of national
income accounting, the work of Mitchell and his colleagues helped to
establish modern macroeconomics, and in particular influenced and
inspired the macroeconomics of Keynes (Mirowski, 1989: 307; Colander
& Landreth, 1996: 141). Although the concept of emergence was implicit
rather than explicit in these intellectual developments, we can trace the
origins of this line of thinking in Veblen’s break from reductionism and his
establishment of institutions as units of analysis.
Elsewhere, the concept of emergence found a small refuge in sociol-
ogy. Talcott Parsons came to Harvard University in 1927 and was influ-
enced there by Whitehead. Parsons took on board aspects of Whitehead’s
organicist ontology and saw the existence of emergent properties as “a
measure of the organicism of the system” (1937: 749). However, although
Parsons played a role in preserving the concept, his use of the term was
idiosyncratic and unclear.
As noted above, the idea of emergence was largely submerged in the
positivist and reductionist phase of Anglo-American science in the inter-
war period (Ross, 1991). Although his use of such terminology was at odds
with Morgan’s original concept, Parsons helped to keep the concept of
emergence alive during this difficult period.

THE RE-EMERGENCE OF EMERGENCE

After the Second World War, Michael Polanyi (1967), Sir Karl Popper
(1974), Ernst Mayr (1985), and several others rehabilitated the idea of
emergent properties. The concept had never entirely disappeared, but it
took the decline of positivism to provide an opportunity for its redevelop-
ment. One of those involved in this process was the great polymath
Michael Polanyi. Perhaps significantly, Polanyi had worked in both the
natural and the social sciences. His classic book on tacit knowledge has a
chapter titled “emergence,” in which he wrote:

you cannot derive a vocabulary from phonetics; you cannot derive the
grammar of language from its vocabulary; a correct use of grammar does
not account for good style; and a good style does not provide the content
of a piece of prose. ... it is impossible to represent the organizing princi-
ples of a higher level by the laws governing its isolated particulars.
(Polanyi, 1967: 36)

Another person who played a crucial part in the rediscovery of the con-
cept was the philosophically inclined biologist Ernst Mayr. He argued
that the characteristics of a complex system:

cannot (not even in theory) be deduced from the most complete knowledge
of the components, taken separately or in other partial combinations. In
other words, when such systems are assembled from their components,
new characteristics of the new whole emerge that could not have been pre-
dicted from a knowledge of the components. (Mayr, 1985: 58)

Like his predecessors, Mayr established that the existence of emergent
properties at a particular level of reality means that explanations cannot
be reduced entirely to components and phenomena at lower levels. He
wrote:

Recognition of the importance of emergence demonstrates, of course, the
invalidity of extreme reductionism. By the time we have dissected an
organism down to atoms and elementary particles we have lost everything
that is characteristic of a living system. (Mayr, 1985: 58)

We are reminded of the words of William Wordsworth in his poem “The
Tables Turned”:

Sweet is the lore which Nature brings;
Our meddling intellect
Mis-shapes the beauteous forms of things:—
We murder to dissect.

After these statements by Polanyi, Popper, and Mayr, the development of
the story of emergence was to take another turn. By the 1980s, the
development of computer technology had greatly facilitated the simula-
tion of nonlinear dynamic systems. This led to a related development
known as chaos theory.
Working on nonlinear mathematical systems, chaos theorists have
shown that tiny changes in crucial parameters can lead to dramatic con-
sequences, known as the “Butterfly Effect—the notion that a butterfly
stirring the air today in Peking can transform storm systems next month
in New York” (Gleick, 1988: 8). There are parallels here with the account
of “bifurcation points” in the work of Prigogine and Stengers (1984). After
behaving deterministically, a system may reach a bifurcation point where
it is inherently impossible to determine which direction change may take;
a small and imperceptible disturbance could lead the system in one direc-
tion rather than another.
Accordingly, chaos theory suggests that apparent novelty may arise
from a deterministic nonlinear system. From an apparently deterministic
starting point, we are led to novelty and quasi-randomness. Consequently,
even if we knew the basic equations governing the system, we would not
necessarily be able to predict the outcome reliably. The estimation of “ini-
tial conditions” can never be accurate enough. This does not simply
undermine the possibility of prediction: in addition, the idea of a reduc-
tionist explanation of the whole in terms of the behavior of its component
parts is challenged. As a result, the system can be seen to have emergent
properties that are not reducible to those of its constituent parts.
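A minimal numerical sketch can make this concrete (the example is mine, not drawn from the works cited here): in the logistic map, a standard illustration of deterministic chaos, two trajectories whose starting values differ by only one billionth soon bear no resemblance to each other.

# A minimal sketch, assuming nothing beyond standard Python: the logistic map
# x_{t+1} = r * x_t * (1 - x_t) with r = 4 is fully deterministic, yet an
# "imperceptible" disturbance of the initial condition soon produces a
# completely different trajectory.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000000)
b = logistic_trajectory(0.200000001)   # differs by one billionth

for t in (0, 10, 20, 30, 40):
    print(f"t={t:2d}  x_a={a[t]:.6f}  x_b={b[t]:.6f}  |difference|={abs(a[t] - b[t]):.6f}")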
Chaos theory was closely followed by another phase in the history of
the concept of emergence. This work was centered at the Santa Fe
Institute (Arthur, 1995; Arthur et al., 1997; Anderson et al., 1988;
Kauffman, 1993, 1995; Holland, 1998; Waldrop, 1992). Instead of focus-
ing largely on disorder and chaos, complexity theorists at Santa Fe and
elsewhere also stressed the emergence of “order out of chaos” and the
sustained behavior of complex systems “at the edge of chaos” (Cohen &
Stewart, 1994). One clear outcome of the work of the Santa Fe Institute
has been to bring respectability and prominence to the concept of emer-
gence, even in disciplines where it had been neglected.
Part of the impact of this work has been achieved through the use of
powerful, heuristic computer simulations. Many such simulations have
created artificial social worlds, in which modeled agents interact in vari-
ous ways, often to create surprising, systemic outcomes. Much of this
work shows the emergence of order and other “higher-level” properties
in complex systems. Reviewing the modeling of such “artificial worlds,”
David Lane (1993: 90) writes that a main thrust “is to discover whether
(and under what conditions) histories exhibit interesting emergent prop-
erties.” His extensive review of the literature in the area suggests that
there are many examples of artificial worlds displaying such attributes.

THE CRITIQUE OF REDUCTIONISM

At this stage we can take stock. Although the concept of emergence is over
100 years old, its rise to prominence has not been steady. The first three
decades of the twentieth century saw relatively sophisticated developments
of the concept in the sphere of philosophy. Many of these earlier insights and
debates have since been neglected and are worth revisiting. In the last two
decades of the twentieth century, the concept of emergence was brought
from the margins into the limelight of discussion of the evolution of complex
systems. Computer simulations have powered much of this recent work.
However, these developments involve a powerful challenge to pre-
vailing conceptions of how both the natural and the social sciences should
work. The impact of this challenge is not yet widely appreciated.
Furthermore, old habits die hard, and the old ways of thinking about sci-
ence have strong adherents. The most important issue over which this
debate between the old and the new science is articulated is the question
of reductionism.
Reductionism sometimes involves the notion that wholes must be
explained entirely in terms of their elemental, constituent parts. More
generally, reductionism can be defined as the idea that all aspects of a
complex phenomenon must be explained solely in terms of one level, or
type of unit. According to this view, there are no autonomous levels of
analysis other than this elemental foundation, and no emergent proper-
ties on which different levels of analysis can be based.
Consider biology. Although many biologists acknowledge the exis-
tence of emergent properties, there are many theorists and practitioners
who still cling to the view that all biological phenomena can and should
be explained in terms of “lower-level” components, such as genes.
Furthermore, reductionism is still conspicuous in social science today
and typically appears as methodological individualism. This tends to be
defined as “the doctrine that all social phenomena (their structure and
their change) are in principle explicable only in terms of individuals—
their properties, goals, and beliefs” (Elster, 1982: 453). It is thus alleged
that explanations of socioeconomic phenomena must be reduced to prop-
erties of constituent individuals and relations between them. Allied to
this is the sustained attempt since the 1960s to found macroeconomics on
“sound microfoundations.” This “microfoundations revolution” meant the
rejection of much of Keynesian macroeconomics. Indeed, it is the
antithesis of the approach discussed above, as developed by the institu-
tionalist and Keynesian economists of the 1920s and 1930s.
Reductionism should be distinguished from reduction. Reduction
involves the partial decomposition of elements at one level into parts at a
different level. Measurement is an act of reduction. The general idea of a
reduction to parts is not being overturned here. Some degree of reduc-
tion to elemental units is inevitable. Science cannot proceed without
some dissection and some analysis of parts. However, although some par-
tial reduction is inevitable and desirable, a complete analytical reduction
is both impossible and a philosophically dogmatic diversion. What is
important to stress is that the process of analysis cannot be extended to
the most elementary subatomic particles presently known to science, or
even to individuals in economics or genes in biology. A complete reduc-
tion would be hopeless and interminable. Reduction is necessary to some
extent, but it can never be complete. As Popper (1974: 260) has declared:

there is almost always an unresolved residue left by even the most
successful attempts at reduction.

Essentially, if socioeconomic systems have emergent properties, then
reductionism is confounded. Emergent properties by definition are not
entirely explicable in terms of constituent elements at a more basic level.
Accordingly, the idea of explaining the macro behavior of socioeconomic
systems completely in terms of individuals and individual actions
(methodological individualism) is misconceived. Similarly, explanations
of macroeconomic phenomena completely in terms of microeconomic
postulates (the microfoundations project) are confounded. There are
strong arguments to suggest that neither methodological individualism
(Udéhn, 1987) nor the microfoundations project (Rizvi, 1994) can ever be
successful in reducing all features of the system to its micro components.
Given this incipient failure, in explaining complex systems we may be
obliged to rely on emergent properties at a higher (macro) level.
In general, reductionism is clearly countered by the notion that com-
plex systems display emergent properties at different levels that cannot be
completely reduced to or explained wholly in terms of another level. By
contrast, antireductionism generally emphasizes emergent properties at
higher levels of analysis that cannot be reduced to constituent elements.

EMERGENCE DISTINGUISHED FROM SUPERVENIENCE

The concept of supervenience was developed by Alexander Rosenberg
(1976, 1985) in both economics and biology. In part, this alternative and
weaker concept is a symptom of a reluctance by some to adopt the con-
cept of emergence. Supervenience applies to the situation where the
identity of two or more entities at the macro level does not assume iden-
tity at the constituent micro level, but identity at the micro level does
guarantee identity at the macro level. In this case the macro level can be
said to be supervenient. Accordingly, similar properties at the macro level
cannot all be explained by a single set of micro-level components, but
identical configurations of micro-level components all give rise to identi-
cal macro-level phenomena. The concept of supervenience is used to
defend a qualified form of reductionism. Supervenience retains the onto-
logical priority of the micro level over other, higher levels.
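Stated compactly (the notation is mine, not Rosenberg's): writing m for a complete micro-level configuration and M(m) for the macro-level properties to which it gives rise, supervenience asserts

\forall\, m_1, m_2:\quad m_1 = m_2 \;\Rightarrow\; M(m_1) = M(m_2),
\qquad\text{whereas in general}\qquad
M(m_1) = M(m_2) \;\not\Rightarrow\; m_1 = m_2 .

The paragraphs that follow argue that, once chaotic context sensitivity is admitted, even near identity of m_1 and m_2 fails to fix the macro outcome, which robs the first implication of its practical force.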
By contrast, modern concepts of emergence suggest that different
outcomes are possible with near identical configurations and interactions
of micro-level elements. As chaos theory suggests, tiny, seemingly
insignificant differences can lead to quite different systemic outcomes. In
chaotic systems, almost exact identity at the micro level does not guaran-
tee identity at the macro level, and supervenience is eluded. Notably, the
supervenience concept was developed before chaos theory posed a
severe challenge to reductionism. The possibility of a high degree of con-
text sensitivity undermines the application of the supervenience concept.
Accordingly, in biology, identical genes do not lead to identical organ-
isms or behaviors. The biologist Conrad Waddington (1975: vi) showed
that in evolution identical genetic makeups (genotypes) do not generally
give rise to identical characteristics (phenotypes). This is
partly because “genotypes, which influence behavior, thus have an effect
on the nature of the selective pressures on the phenotype to which they
give rise.” This introduces “an inescapable indeterminism” in evolution-
ary theory. In a similar vein, Cohen and Stewart (1994: 3) give a related
set of examples and cases. They argue that “simplicity of form, function,
or behavior emerge from complexities on lower levels because of the
action of external constraints.”
The fact that different outcomes are possible with near identical con-
figurations and interactions of micro-level elements confounds the con-
cept of supervenience. It is no longer tenable to infer from almost exact
identity at the micro level some identity at the macro level.

CONCLUSIONS
One of the reasons that the concept of emergence is important for social
science is that it provides a necessary means to focus on higher-level units
and relations and to avoid the potentially intractable problem of analyti-
cal reduction to lower-level units.
However, while emergent properties provide indispensable hooks to
bring analysis up to a higher level, we must never lose sight of the
dependence of these higher-level properties on lower-level units.
Indeed, if it were possible to explain a higher-level phenomenon entirely
in terms of lower-level units, then we should do so (Bunge, 1980). Biology
cannot ignore chemistry and macroeconomics cannot ignore the micro-
economics of individuals and firms. Emergence does not give license to
neglect the constituent elements of which an entity or structure is
composed.
What is required is the development of a methodology that is sensi-
tive to the “marks of emergence in their most telling form.” As Kyriakos
Kontopoulos (1993: 22–3) elaborates further, the marks of an emergent
property include its novelty, its association with a new set of relations, the
stability and boundedness of these relations, and the emergence of new
laws or principles applicable to this entity. We find emergent properties
in any complex, evolving system, throughout the natural and the social
realm.
The theory of emergence offers a nonreductionist account of complex
phenomena. Indeed, with emergent properties a reductionist account is
impossible. We are just beginning to grasp the full implications of this,
although, as we have seen, the concept of emergence has been around for
more than 100 years.
At the beginning of the twenty-first century, we are presented with an
exciting agenda of research in which philosophical concepts such as
emergence can find their place and impact within the flowering sciences
of complexity. At least as far as the social sciences go, we are just at the
beginning of this process.

REFERENCES
Alexander, S. (1920) Space, Time and Deity (2 Vols), London: Macmillan.
Anderson, P. W., Arrow, K. J., & Pines, D. (eds) (1988) The Economy as an Evolving Complex
System, Reading, MA: Addison-Wesley.
Arthur, W. B. (1995) “Complexity in economic and financial markets,” Complexity, 1(1):
20–25.
Arthur, W. B., Durlauf, S. N., & Lane, D. A. (eds) (1997) The Economy as an Evolving
Complex System II, Redwood City, CA: Addison-Wesley.
Bunge, M. A. (1980) The Mind-Body Problem: A Psychobiological Approach, Oxford, UK:
Pergamon.
Cohen, J. & Stewart, I. (1994) The Collapse of Chaos: Discovering Simplicity in a Complex
World, London/New York: Viking.
Colander, D. C. & Landreth, H. (eds) (1996) The Coming of Keynesianism to America:
Conversations with the Founders of Keynesian Economics, Aldershot, UK: Edward Elgar.
Comte, A. (1853) The Positive Philosophy of Auguste Comte (2 Vols), trans. H. Martineau
from the French volumes of 1830–42, London: Chapman.
Copeland, M.A. (1927) “An instrumental view of the part-whole relation,” Journal of
Philosophy, 24(4, 17 Feb): 96–104.
Dorfman, J. (1934) Thorstein Veblen and His America, New York: Viking Press.
Elster, J. (1982) “Marxism, functionalism and game theory,” Theory and Society, 11(4):
453–82. Reprinted in J. E. Roemer (ed.) (1986) Analytical Marxism, Cambridge, UK:
Cambridge University Press.
Gleick, J. (1988) Chaos: Making a New Science, London: Heinemann.
Hobson, J. A. (1936) Veblen, London: Chapman and Hall.
Hodgson, G. M. (1998) “On the evolution of Thorstein Veblen’s evolutionary economics,”
Cambridge Journal of Economics, 22(4, July): 415–31.
Holland, J. H. (1998) Emergence: From Chaos to Order, Oxford, UK: Oxford University
Press.
Kauffman, S. A. (1993) The Origins of Order: Self-Organization and Selection in Evolution,
Oxford, UK/New York: Oxford University Press.
Kauffman, S. A. (1995) At Home in the Universe: The Search for Laws of Self-Organization
and Complexity, Oxford, UK/New York: Oxford University Press.
Kontopoulos, K. M. (1993) The Logics of Social Structure, Cambridge, UK: Cambridge
University Press.
Lane, D. A. (1993) “Artificial worlds and economics,” Parts I and II, Journal of Evolutionary
Economics, 3(2, May): 89–107 and 3(3, Aug): 177–97.
Lewes, G. H. (1875) Problems of Life and Mind, Vol. 2, London: Trubner.
McDougall, W. (1929) Modern Materialism and Emergent Evolution, London: Methuen.
Mirowski, P. (1989) More Heat Than Light: Economics as Social Physics, Physics as Nature’s
Economics, Cambridge, UK: Cambridge University Press.
Mitchell, W. C. (1937) The Backward Art of Spending Money and Other Essays, New York:
McGraw-Hill.
Morgan, C. L. (1896) Habit and Instinct, London/New York: Edward Arnold.
Morgan, C. L. (1927) Emergent Evolution, 2nd edn, London: Williams and Norgate.
Morgan, C. L. (1932) “C. Lloyd Morgan,” in C. Murchison (ed.) A History of Psychology in
Autobiography, Vol. 2, New York: Russell and Russell: 253–64.
Morgan, C. L. (1933) The Emergence of Novelty, London: Williams and Norgate.
Polanyi, M. (1967) The Tacit Dimension, London: Routledge and Kegan Paul.
Popper, K. R. (1974) “Scientific Reduction and the Essential Incompleteness of All Science,” in
F .J. Ayala & T. Dobzhansky (eds) Studies in the Philosophy of Biology, London/Berkeley,
CA/Los Angeles, CA: Macmillan/University of California Press: 259–84.
Prigogine, I. & Stengers, I. (1984) Order Out of Chaos: Man’s New Dialogue With Nature,
London: Heinemann.
Rizvi, S. A. T. (1994) “The microfoundations project in general equilibrium theory,”
Cambridge Journal of Economics, 18(4, Aug): 357–77.
Rosenberg, A. (1976) “On the interanimation of micro and macroeconomics,” Philosophy of
the Social Sciences, 6(1): 35–53.
Rosenberg, A. (1985) The Structure of Biological Science, Cambridge, UK: Cambridge
University Press.
Ross, D. (1991) The Origins of American Social Science, Cambridge, UK: Cambridge
University Press.
Sellars, R. W. (1922) Evolutionary Naturalism, Chicago: Rand-McNally.
Souter, R. W. (1933) Prolegomena to Relativity Economics: An Elementary Study in the
Mechanics and Organics of an Expanding Economic Universe, New York: Columbia
University Press.
Sperry, R. W. (1991) “In defense of mentalism and emergent interaction,” Journal of Mind
and Behavior, 12(2): 221–46.
Tilman, R. (1996) The Intellectual Legacy of Thorstein Veblen: Unresolved Issues, Westport,
CT: Greenwood Press.
Udéhn, L. (1987) Methodological Individualism: A Critical Appraisal, Uppsala, Sweden:
Uppsala University Reprographics Centre.
Waddington, C. H. (1975) The Evolution of an Evolutionist, Edinburgh/Ithaca, NY:
Edinburgh University Press and Cornell University Press.
Waldrop, M. M. (1992) Complexity: The Emerging Science at the Edge of Order and Chaos,
New York: Simon and Schuster.
Whitehead, A. N. (1926) Science and the Modern World, Cambridge, UK: Cambridge
University Press.

EMERGENCE, 2(4), 78–103
Copyright © 2000, Lawrence Erlbaum Associates, Inc.

Knowledge, Ignorance,
and Learning
Peter M. Allen

If we look at the dictionary definitions of science, we find “sys-
tematic and formulated knowledge.” Systematic is defined as
“methodical, expressed formally, according to a plan.” If we look
up complexity, we find “consisting of parts, composite, compli-
cated, involved.” And the definition of complicated is “intricate, involved,
hard to unravel.”
Putting these together, and excluding the “intricate and hard to
unravel” part, a definition of systems science could be the systematic, for-
mulated knowledge that we have about situations or objects that are
composite. But already we can see the origins of the major branches of
“complexity thinking,” whether one is discussing a complicated system of
many parts, or alternatively what might be a relatively simple system of
parts whose mutual interactions make them “hard to unravel.” We also
see the fundamental paradox that is involved in the science of complex-
ity, since it purports to be the systematic knowledge that we have about
objects of study that will be either “intricate or hard to unravel.”
Systematic and formulated knowledge about a particular situation or
object may be of different kinds. It could be what type of situation or
object we are studying; what it is “made of;” how it “works;” its “history;”
why it is as it is; how it may behave; how and in what way its behavior
might be changed.
The science of complexity is therefore about the limits to the creation
of systematic knowledge (of some of the above types) in situations or
objects that are either “intricate or hard to unravel.”
This has two basic underlying reasons:

◆ Either the situation considered contains an enormous number of
interacting elements making calculation extremely hard work,
although all the interactions are known.
◆ Or the nonlinear interactions between the components mean that
bifurcation and choice exist within the situation, leading to the possi-
bility of multiple futures and creative/surprising responses.

With today’s enormously increased computational power, the first case is
not necessarily a problem, whereas the second corresponds to what we
shall call the science of complexity. We shall create a framework of sys-
tematic knowledge concerning the limits to systematic knowledge. This
will take us from a situation that is so fluid and nebulous that no system-
atic knowledge is possible to a mechanical system that runs along a pre-
dicted path to a predicted equilibrium solution. Clearly, most of the
problems that we encounter in our lives lie somewhere between these
two extremes. What the science of complexity can do for us is allow us to
know what we may reasonably expect to know about a situation. It can
therefore banish false beliefs in either total freedom of action or total lack
of freedom.
The identification of “knowledge” with “prediction” arose from the
success and dominance of the mechanical paradigm in classical physics.
This is understandable, but erroneous. While it is impressive to be able
to predict eclipses, it is indeed “knowledge” to know, in a particular situ-
ation (playing roulette?), that prediction is impossible. In the natural sci-
ences, for many situations it was possible to know what was going to
happen, to predict the behavior, on the basis of the (fixed) behavior of the
constituent components. The Newtonian paradigm based on planetary
motion, the science of mechanics, applied to situations where this was
true, and indeed in which the history of a situation, and knowledge as to
“why it was as it is,” were not required for the prediction.
But of course, “mechanical systems” turn out to be a very special case
in the universe! They may even exist only as abstractions of reality in our
minds. The real world is made up of coevolving, interacting parts where
patterns of interaction and communication can change over time, and
structures can emerge and reconfigure.

KNOWLEDGE GENERATION

In the next two sections we set out a systematic framework of knowledge
about the limits to knowledge. The different aspects of knowledge may be:

79
EMERGENCE

◆ What type of situation or object we are studying (classification—“prediction” by similarity).
◆ What it is “made of” and how it “works.”
◆ The “history” of process and events through which it passed.
◆ The extent to which its “history” explains why it is as it is.
◆ How it may behave (prediction).
◆ How and in what way its behavior might be changed (intervention
and prediction).

In previous papers Allen has presented a framework of “assumptions”
that, if justified, lead to different limits to the knowledge one can have
about a situation (Allen, 1985, 1988, 1993, 1994, 1998).

ASSUMPTIONS USED TO REDUCE COMPLEXITY TO SIMPLICITY


What are these assumptions?

1 We can define a boundary between the part of the world that we want
to “understand” and the rest. In other words, we assume first that
there is a “system” and an “environment.”
2 We have rules for the classification of objects that lead to a relevant
taxonomy for the system components, which will enable us to under-
stand what is going on. This is often decided entirely intuitively. In
fact, we should always begin by performing some qualitative research
to try to establish the main features that are important, and then keep
returning to the question following the comparison of our under-
standing of a system with what is seen to happen in reality.
3 The third assumption concerns the level of description below that of
the chosen components or variables. It assumes that these compo-
nents are “homogeneous:”

— either without any structure;
— made of identical subunits;
— or made up of subunits whose diversity is at all times distributed
“normally” around the average.

This assumption, if it can be justified, will automatically lead to a
description over time that cannot change the average properties of the
components. The “inside” of a component is not affected by its expe-
riences. It leads to a description based on components with fixed,
stereotypical insides. This is a simplification of reality that fixes the

80
VOLUME #2, ISSUE #4

nature and responses of the underlying elements inside the compo-
nents. It creates “knowledge” in the short term at the expense of the
long. When we make this simplifying assumption, although we create
a simpler representation, we lose the capacity for our model to “rep-
resent” evolution and learning within the system.
4 The fourth assumption is that the individual behavior of the sub-
components can be described by their average interaction para-
meters. So, for example, the behavior of different employees in a
business would be characterized by the average behavior of their job
type. This assumption (which will never be entirely true) eliminates
the effects of “luck” and of randomness and noise that are really in the
system.

The mathematical representation that results from making all four of
these assumptions is that of a mechanical system that appears to “predict”
the future of the system perfectly.
A fifth assumption that is often made in building models to deal with
“reality” is that of stability or equilibrium. It is assumed in classical and
neoclassical economics, for example, that markets move rapidly to equi-
librium, so that fixed relationships can be assumed between the different
variables of the system. The equations characterizing such systems are
therefore “simultaneous,” where the value of each variable is expressed
as a function of the values of the others. Traditionally, then, “simple” mod-
els have been used to try to understand the world around us, as shown in
Figure 1 overleaf. Although these can be useful at times, today we are
attempting to model “complex” systems, leaving their inherent complex-
ity intact to some extent. This means that we may attempt to build and
study models that do not make all of these simplifying assumptions.
What is important about the statement of assumptions is that we can
now make explicit the kind of “knowledge” that is generated, provided
that the “necessary” assumptions can be made legitimately. Relating
assumptions to outcomes in terms of types of model, we have the science
of complexity.

THE SCIENCE OF COMPLEXITY

Having made explicit the assumptions that can allow a reduction in the
complexity of a problem, we can now explore the different kinds of
knowledge that these assumptions allow us to generate.

[Figure 1 here: the successive assumptions (boundary, classification, average types, average events, the attractors) move left to right from "Reality" and rich descriptions, through evolutionary and learning models and self-organization, to nonlinear dynamics and its attractors.]
Figure 1 The assumptions made (left to right) in trading off realism and
complexity against simplification and hence ease of understanding

EQUILIBRIUM KNOWLEDGE
If we can justifiably make all five assumptions above, and consider only
the long-term outcome, then we have an extremely simple, hard predic-
tion. That is, we know the values the variables will have, and from this can
“calculate” the costs and benefits that will be experienced. Such models
are expressed as a set of fixed relationships between the variables, calcu-
lable from a set of simultaneous equations.
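Purely as an illustration (the market form and the numbers are invented, not taken from any model discussed here), the kind of calculation this licenses is nothing more than the solution of simultaneous equations, for instance a linear demand and supply schedule:

# A minimal sketch, assuming a toy linear market: demand q = a - b*p and
# supply q = c + d*p. Under the equilibrium assumption, "knowledge" of the
# system reduces to solving these simultaneous equations for price and
# quantity; the parameters a, b, c, d are illustrative, not calibrated data.

a, b = 100.0, 2.0   # demand intercept and slope
c, d = 10.0, 1.0    # supply intercept and slope

# demand = supply:  a - b*p = c + d*p  =>  p = (a - c) / (b + d)
p_eq = (a - c) / (b + d)
q_eq = a - b * p_eq

print(f"equilibrium price    = {p_eq:.2f}")
print(f"equilibrium quantity = {q_eq:.2f}")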
Of course, these relationships are characterized by particular para-
meters appearing in them, and these are often calibrated by using regres-
sion techniques on existing data. Obviously, the use of any such set of
equations for an exploration of future changes under particular exogenous
scenarios would suppose that these relationships between the variables
remained unchanged. In neoclassical economics, much of spatial geo-
graphy, and many models of transportation and land use, the models that
are used operationally today are still based on equilibrium assumptions.
Market structures, locations of jobs and residences, land values, traffic
flows, etc. are all assumed to reach their equilibrium configurations “suf-
ficiently rapidly” following some innovation, policy, or planning action, so
that there is an equilibrium “before” and one “after” the event or action,
vastly simplifying the analysis.
The advantage of the assumption of “equilibrium” lies in the simplicity
that results from having only to consider simultaneous and not dynamical
equations. It also seems to offer the possibility of looking at a decision or
policy in terms of stationary states “before” and “after” the decision. All
cost/benefit analysis is based on this fundamentally flawed idea.
The disadvantage of such an approach, where an equilibrium state is
simply assumed, is that it fails to follow what may happen along the way. It
does not take into account the possibility of feedback processes where
growth encourages growth, decline leads to further decline, and so on (non-
linear effects), which can occur on the way to equilibrium. In reality, it
seems much more likely that people discover the consequences of their
actions only after making them, and even then have little idea of what would
have happened if they had done something else. Because of this, inertia,
heuristics, imitation, and postrationalization play an enormous role in the
behavior of people in the real world. As a result, there is a complex and
changing relationship between latent and revealed preferences, as individ-
uals experience the system and question their own assumptions and goals.
By simply assuming “equilibrium,” and calibrating the parameters of the
relationships on observation, one has in reality a purely descriptive approach
to problems, following, in a kind of post hoc calibration process, the changes
that have occurred. This is not going to be very useful in providing good
advice on strategic matters, although economists appear to have more influ-
ence on governments than do any other group of academics.

NONLINEAR DYNAMICS
Making all four assumptions leads to system dynamics, a mechanical rep-
resentation of changes. Nonlinear dynamics (system dynamics) are what
results generally from a modeling exercise when assumptions 1 to 4 above
are made, but equilibrium is not assumed. Of course, some systems are
linear or constant, but these are both exceptions, and also very boring. In
the much more usual case of nonlinear dynamics, the trajectory traced by
such equations corresponds not to the actual course of events in the real
system but, because of assumption 4, to the most probable trajectory of an
ensemble of such systems. In other words, instead of the realistic picture
with a somewhat fluctuating path for the system, the model produces a
beautifully smooth trajectory.
This illusion of determinism, of perfectly predictable behavior, is cre-
ated by assuming that the individual events underlying the mechanisms
in the model can be represented by their average rates. The smoothness
is only as true as this assumption is true. Systems dynamics models must
not be used if this is not the case. Instead, some probabilistic model based
on Markov processes might be needed, for example.
If we consider the long-term behavior of nonlinear dynamical sys-
tems, we find different possibilities. They may:

◆ Have different possible stationary states. So, instead of a single, “opti-
mal” equilibrium, there may exist several possible equilibria, possibly
with different spatial configurations, and the initial condition of the
system will decide which it adopts.
◆ Have different possible cyclic solutions. These might be found to cor-
respond to the business cycle, for example, or to long waves.
◆ Exhibit chaotic motions of various kinds, spreading over the surface
of a strange attractor.

An attractor “basin” is the space of initial conditions that lead to particular
final states (which could be simple points, or cycles, or the surface of a
strange attractor), and so a given system may have several different possi-
ble final states, depending only on its initial condition. Such systems can-
not by themselves cross a separatrix to a new basin of attraction, and
therefore can only continue along trajectories that are within the attractor
of their initial condition. Compared to reality, such systems lack the “vital-
ity” to jump spontaneously to the regime of a different attractor basin. If
the parameters of the system are changed, however, attractor basins may
appear or disappear, in a phenomenon known as bifurcation. Systems that
are not precisely at a stationary point attractor can follow a complicated
trajectory into a new attractor, with the possibility of symmetry breaking
and, as a consequence, the emergence of new attributes and qualities.
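A toy illustration (my own, using an arbitrary one-variable equation rather than any model from the literature cited): the deterministic dynamics dx/dt = x - x^3 has two point attractors, at x = -1 and x = +1, separated by an unstable state at x = 0, and the initial condition alone decides which stationary state the smooth trajectory reaches.

# A minimal sketch of multiple attractors in a nonlinear dynamical system
# (assumptions 1-4 made, no equilibrium assumed). Simple Euler integration
# of dx/dt = x - x^3: every initial condition below the separatrix at x = 0
# ends at the point attractor x = -1, every one above it at x = +1.

def run(x0, dt=0.01, steps=2000):
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3)
    return x

for x0 in (-0.5, -0.01, 0.01, 0.5):
    print(f"x0 = {x0:+.2f}  ->  long-run state = {run(x0):+.3f}")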

[Figure 2 here: a phase plane with axes X and Y containing several point attractors and cyclic attractors, separated by separatrices]

Figure 2 An example of the different attractors that might exist for a
nonlinear system. These attractors are separated by “separatrices”

SELF-ORGANIZING DYNAMICS
Making assumptions 1 to 3 leads to self-organizing dynamic models,
capable of reconfiguring their spatial organizational structure. Provided
that we accept that different outcomes may now occur, we may explore
the possible gains obtained if the fourth assumption is not made.
In this case, nonaverage fluctuations of the variables are retained in
the description, and the ensemble captures all possible trajectories of our
system, including the less probable. As we shall see, this richer, more
general model allows for spontaneous clustering and reorganization of
spatial configuration to occur as the system runs, and this has been
termed “self-organizing.” In the original work, Nicolis and Prigogine
(1977) called the phenomenon “order by fluctuation,” while Haken (1977)
called it “synergetic,” and mathematically it corresponds to returning to
the deeper, probabilistic dynamics of Markov processes (see, for example,
Barucha-Reid, 1960) and leads to a dynamic equation that describes the
evolution of the whole ensemble of systems. This equation is called the
“master equation,” which, while retaining assumption 3, assumes that
events of different probabilities can and do occur. So, sequences of events
that correspond to successive runs of good or bad “luck” are included,
with their relevant probabilities.
Each attractor basin is defined as the domain in which the initial con-
ditions all lead to the same final result. But, when we do not make assumption
4, we see that this space of attractors has “fuzzy” separatrices, since
chance fluctuations can sometimes carry a system over a separatrix across
to another attractor, and to a qualitatively different regime. As has been
shown elsewhere (Allen, 1988) for systems with nonlinear interactions
between individuals, what this does is to destroy the idea of a trajectory,
and gives to the system a collective adaptive capacity corresponding to
the spontaneous spatial reorganization of its structure. This can be imi-
tated to some degree by simply adding “noise” to the variables of the sys-
tem. This probes the stability of any existing configuration and, when
instability occurs, leads to the emergence of new structures. Such self-
organization can be seen as a collective adaptive response to changing
external conditions, and results from the addition of noise to the deter-
ministic equations of system dynamics. Methods like “simulated anneal-
ing” are related to these ideas.
Once again, it should be emphasized that self-organization is a natu-
ral property of real nonlinear systems. It is only suppressed by making
assumption 4 and replacing a fluctuating path with a smooth trajectory.
The knowledge derived from self-organizing systems models is not
simply of its future trajectory, but instead of the possible regimes of oper-
ation that it could potentially adopt. Such models can therefore indicate
the probability of various transitions and the range of qualitatively differ-
ent possible configurations and outcomes.
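Continuing the toy one-variable example used above for nonlinear dynamics (again my own illustration, not a model from the literature cited), relaxing assumption 4 can be mimicked crudely by adding noise to the same bistable equation: the fluctuations occasionally carry the system across the separatrix at x = 0 into the other attractor basin, something the smooth, deterministic version can never do.

# A minimal sketch of "order by fluctuation" in its simplest possible form:
# Euler-Maruyama integration of dx/dt = x - x^3 plus Gaussian noise. With
# noise, the trajectory sometimes crosses the separatrix at x = 0; without
# noise, starting in one basin, it never does.
import random

random.seed(1)

def separatrix_crossings(x0, noise, dt=0.01, steps=20000):
    x, crossings = x0, 0
    for _ in range(steps):
        previous = x
        x += dt * (x - x**3) + noise * (dt ** 0.5) * random.gauss(0.0, 1.0)
        if previous * x < 0:       # sign change = crossing into the other basin
            crossings += 1
    return crossings

print("crossings with noise:   ", separatrix_crossings(1.0, noise=0.6))
print("crossings without noise:", separatrix_crossings(1.0, noise=0.0))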

EVOLUTIONARY COMPLEX SYSTEMS


System components and subcomponents all coevolve in a nonmechanical
mutual “learning” process. Such models arise from a modeling exercise in which
neither assumption 3 nor assumption 4 is made. This allows us to clarify
the distinction between “self-organization” and “evolution.” Here, it is
assumption 3 that matters, namely, that all individuals of a given type, x
say, are either identical and equal to the average type, or have a diversity
that remains normally distributed around the average type. But in reality,
the diversity of behaviors among individuals in any particular part of the
system is the result of local dynamics occurring in the system. However,
the definition of a “behavior” is closely related to the knowledge that an
individual possesses. This in turn depends on the mechanisms by which
knowledge, skills, techniques, and heuristics are passed on to new indi-
viduals over time.
Obviously, there is an underlying biological and cultural diversity due to
genetics and to family histories, and because of these, and also because of
the impossibility of transmitting information perfectly, there will necessarily
be an “exploration” of behavior space. The mechanisms of our dynamical
system contain terms that both increase and decrease the populations of dif-
ferent “behavioral” or “knowledge” types, and so this will act as a selection
process, rewarding the more successful explorations with high payoff and
amplifying them while suppressing the others. It is then possible to make
the local micro diversity of individuals and their knowledge an endogenous
function of the model, where new knowledge and behaviors are created and
old ones destroyed. In this way, we can move toward a genuine, evolution-
ary framework capable of exploring more fully the “knowledge dynamics” of
the system and the individuals that make it up.
Such a model must operate within some “possibility” or “character”
space for behaviors that is larger than the one initially “occupied,” offer-
ing possibilities that our evolving complex system can explore. This space
represents, for example, the range of different techniques and behaviors
that could potentially arise. This potential will itself depend on the chan-
neling and constraints that result from the cultural models and vocabulary
of potential players. In any case, it is a multidimensional space of which
we would only be able to anticipate a few of the principal dimensions.

Figure 3 If eccentric types are always suppressed, then we have non-
evolution. But, if not, then adaptation and speciation can occur

In biology, genetic mechanisms ensure that different possibilities are
explored, and offspring, offspring of offspring, and so on spread out in
character space over time, from any pure condition. In human systems,
the imperfections and subjectivity of existence mean that techniques and
behaviors are never passed on exactly, and therefore that exploration and
innovation are always present as a result of the individuality and contex-
tual nature of experience. Human curiosity and a desire to experiment
also play a role. Some of these “experimental” behaviors do better than
others. As a result, imitation and growth lead to the relative increase of
the more successful behaviors, and to the decline of the others.
By considering dynamic equations in which there is an outward “dif-
fusion” in character space from any behavior that is present, we can see
how such a system would evolve. If there are types of behavior with
higher and lower payoff, then the diffusion “uphill” is gradually ampli-
fied, while that “downhill” is suppressed, and the “average” for the whole
population moves higher up the slope. This is the mechanism by which
adaptation takes place. It demonstrates the vital part played by
exploratory, nonaverage behavior, and shows that, in the long term, evo-
lution selects for populations with the ability to learn, rather than for pop-
ulations with optimal, but fixed, behavior.
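The following sketch (the payoffs, rates, and one-dimensional character space are all invented for illustration, not taken from the models cited) shows the mechanism in its simplest form: a population spread over discrete behavioral types grows in proportion to each type's payoff, imperfect transmission leaks a small fraction of each type into neighboring types, and the population average duly climbs the payoff slope.

# A minimal sketch of selection acting on "error-making" diffusion in
# character space. Growth is proportional to payoff, a small mutation rate
# spreads each type into its neighbors, and the average type drifts uphill.

N = 21
payoff = [0.5 + 0.05 * k for k in range(N)]   # payoff rises with the type index
pop = [0.0] * N
pop[2] = 1.0                                  # start concentrated on a low-payoff type
mutation = 0.02                               # imperfect transmission to neighbors

for _ in range(400):
    grown = [pop[k] * (1.0 + 0.1 * payoff[k]) for k in range(N)]
    new = []
    for k in range(N):
        left = grown[k - 1] if k > 0 else 0.0
        right = grown[k + 1] if k < N - 1 else 0.0
        new.append((1 - 2 * mutation) * grown[k] + mutation * (left + right))
    total = sum(new)
    pop = [x / total for x in new]            # renormalize: track relative shares

mean_type = sum(k * pop[k] for k in range(N))
print(f"average type after selection plus error making: {mean_type:.2f} (started at 2)")

With the mutation rate set to zero the distribution simply stays where it started, which corresponds to the "non-evolution" of Figure 3.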

In other words, adaptation and evolution result from the fact that
knowledge, skills, and routines are never transmitted perfectly between
individuals, and individuals already differ. However, there is always a short-
term cost to such “imperfection,” in terms of unsuccessful explorations, and
if only short-term considerations were taken into account, such imperfec-
tions would be reduced. But without this exploratory process, there would
be no adaptive capacity and no long-term future in a changing world.
If we return to our modeling framework in Figure 1, where we depict
the tradeoff between realism and simplicity, we can say that a simple, appar-
ently predictive system dynamics model is “bought” at the price of assump-
tions 1 to 4. What is missing from this is the representation of the
underlying, inner dynamic that is really running under the system dynamics
as the result of “freedom” and “exploratory error making.” However, if it can
be shown that all “eccentricity” is suppressed in the system, evolution will
itself be suppressed, and the “system dynamics” will then be a good repre-
sentation of reality. This is the recipe for a mechanical system, and the ambi-
tion of many business managers and military people. However, if instead
micro diversity is allowed and even encouraged, the system will contain an
inherent capacity to adapt, change, and evolve in response to whatever
selective forces are placed on it. Clearly, therefore, sustainability is much
more related to micro diversity than to mechanical efficiency.

Figure 4 Without assumption 3 we have an “inner” dynamic within the
macroscopic system dynamics. Micro diversity, in various possible
dimensions, is differentially selected, leading to adaptation and emer-
gence of new behaviors

Let us now examine the consequences of not making assumptions 3,
4, and 5. In the space of “possibilities,” closely similar behaviors are con-
sidered to be most in competition with each other, since they require sim-
ilar resources and must find a similar niche in the system. However, we
assume that in this particular dimension there is some “distance” in char-
acter space, some level of dissimilarity, at which two behaviors do not
compete. In addition, however, other interactions are possible. For exam-
ple, two particular populations i and j with characteristic behavior may
have an effect on each other. This could be positive, in that side effects of
the activity of j might in fact provide conditions or effects that help i. Of
course, the effect might equally well be antagonistic, or neutral. Similarly,
i may have a positive, negative, or neutral effect on j. If we therefore ini-
tially choose values randomly for all the possible interactions between all
i and j, these effects will come into play if the populations concerned are
in fact present. If they are not there, then obviously there can be no pos-
itive or negative effects experienced.

Figure 5 A population i may affect population j, and vice versa

A typical evolution is shown in Figure 6 overleaf. Although competition
helps to “drive” the exploration process, what is observed is that a system
with “error making” in its behavior evolves toward structures that express
synergetic complementarities. In other words, although driven to explore
by error making and competition, evolution evolves cooperative struc-
tures. The synergy can be expressed either through “self-symbiotic”
terms, where the consequences of a behavior in addition to consuming
resources is favorable to itself, or through interactions involving pairs,
triplets, and so on. This corresponds to the emergence of “hypercycles”
(Eigen & Schuster, 1979) and of “supply chains” in an economic system.
The lower right-hand picture in Figure 6 shows the evolution tree
generated over time.

Figure 6 A two-dimensional possibility space is gradually filled by the error-
making diffusion, coupled with mutual interaction. The final frame shows
the evolutionary tree generated by the system

We start off an experiment with a single behavioral
type in an otherwise “empty” resource space. The population initially
forms a sharp spike, with eccentrics on the edge suppressed by their
unsuccessful competition with the average type. However, any single
behavior can only grow until it reaches the limits set by its input
requirements, or, in the case of an economic activity, by the market limit
for any particular product. After this, it is the “eccentrics,” the “error-
makers,” that grow more successfully than the “average type,” as they are
less in competition with the others and the population identity becomes
unstable. The single sharply spiked distribution spreads, and splits into
new behaviors that climb the evolutionary landscape that has been
created, leading away from the ancestral type. The new behaviors move
away from each other, and grow until in their turn they reach the limits
of their new normality, whereupon they also split into new behaviors,
gradually filling the resource spectrum.
While the “error-making” and inventive capacity of the system in our
simulation is a constant fraction of the activity present at any time, the
system evolves in discontinuous steps of instability, separated by periods
of taxonomic stability. In other words, there are times when the system
structure can suppress the incipient instabilities caused by innovative
exploration of its inhabitants, and there are other times when it cannot
suppress them and a new behavior emerges. It illustrates the fact that the
“payoff ” for any behavior is dependent on the other players in the system.
Success of an individual type comes from the way it fits the system, not
from its intrinsic nature. The important long-term effect introduced by
considering the endogenous dynamics of micro diversity has been called
evolutionary drive, and has been described elsewhere (Allen & McGlade,
1987a; Allen, 1990, 1998).
One of the important results of “evolutionary drive” was that it did not
necessarily lead to a smooth progression of evolutionary adaptation. This
was because of the “positive feedback trap.” This trap results from the
fact that any emergent trait that feeds back positively on its own “pro-
duction” will be reinforced, but that this feedback does not necessarily
arise from improved performance in the functionality of the individuals.
For example, the “peacock’s tail” arises because the gene producing a
male’s flashy tail simultaneously produces an attraction for flashy tails in
the female. This means that if the gene occurs, it will automatically pro-
duce preferentially birds with flashy tails, even though they may even
function less well in other respects. Such genes are essentially “narcissis-
tic,” favoring their own presence even at the expense of improved func-
tionality, until such a time, perhaps, as they are swept away by some
much more efficient newcomer. This can give a punctuated type of evo-
lution, as “inner taste” temporarily dominates evolutionary selection at the
expense of increased performance with respect to the environment. This
is like culture within an organization where success is accorded to those
who are “one of us.” For example, the “model” or “conjecture” within an
organization about the environment it is in and what is happening will
tend to self-reinforce for as long as it is not clearly proved wrong. There
is a tendency to institutionalize “knowledge” and “practice” so that actors
are “qualified” to act providing they share the “normal” view. Such con-
formity and unquestioning acceptance of the company line are of seem-
ing benefit in the short term, but are dangerous over the long term. In a
static or very slowly changing world, problems may take a long time to
surface, but in a fluid, unstable, and emergent situation, this would be
disastrous in the longer term.
Much of conventional knowledge management therefore concerns the
generation and manipulation of databases, and the IT issues raised by
this. However, knowledge only exists as the interpretive framework that
assigns “meaning” and “action” to given data inputs as the result of a par-
ticular system view. But the evolution of complex systems tells us that
structural systemic change can and will occur, and therefore that any par-
ticular “interpretive framework” will need to be able to change. This
means that any particular “culture” and set of practices should be contin-
ually challenged by dissidents, and rejustified by believers. Consensus
may be a more frequent cause of death than is conflict.
Perhaps the real task of “management” is to create havens of “stabil-
ity” for the necessary period within which people can operate with fixed
rules, according to some set of useful “stories.” However, these would
have to be transformed at sufficiently frequent periods if the organization
is to continue to survive. Therefore, as a counterweight to the “fixed” part
of the organization engaged in “lean” production, there would have to be
the exploratory part charged with the task of creating the next “fixed”
structure. This is one way in which knowledge generation and mainte-
nance could be managed in some sectors.

THE LIMITS TO KNOWLEDGE


If we now take the different kinds of knowledge in which we may be
interested concerning a situation, then we can number them according
to:

1 What type of situation or object we are studying (classification—“prediction” by similarity).
2 What it is “made of” and how it “works.”
3 Its “history” and why it is as it is.
4 How it may behave (prediction).
5 How and in what way its behavior might be changed (intervention
and prediction).

Then we can establish Table 1, which therefore in some ways provides us
with a very compressed view of the science of complexity.

Assumptions                   5             4                      3                   2
Type of model                 Equilibrium   Nonlinear dynamics     Self-organizing     Evolutionary
                                            (including chaos)
Type of system                Yes           Yes                    Yes                 Can change
Composition                   Yes           Yes                    Yes, but            Can change
History                       Irrelevant    Irrelevant             Structure changes   Important
Prediction                    Yes           Yes                    Probabilistic       Limited
Intervention and prediction   Yes           Yes                    Probabilistic       Limited

Table 1 Systematic knowledge concerning the limits to systematic
knowledge

Here, we should comment on the fact that we have considered models
with at least assumptions 1 and 2. These concern situations where we
believe we know how to draw a boundary between a system and its envi-
ronment, and where we believe we know what the constituent variables
and components are. But these could be considered as being a single
assumption that chooses to suppose that we can understand a situation on
the basis of a particular set of influences—a typical disciplinary academic
approach with a ceteris paribus assumption. A boundary may be seen as
limiting some geographic extension, while the classification of variables
and mechanisms is really a region within the total possible space of
phenomena.
A representation or model with no assumptions whatsoever is clearly
simply subjective reality. It is the essence of the postmodern, in that it
remains completely contextual. In this way, we could say that it does not
therefore fall within the science of complexity, since it does not concern
systematic knowledge. It is here that we should recognize that what is
important to us is not whether something is absolutely true or false, but
whether the apparent systematic knowledge being provided is useful.
This may well come down to a question of spatio-temporal scales.
For example, if we compare an evolutionary situation to one that is so
fluid and nebulous that there are no discernible forms, and no stability for
even short times, we see that what makes an evolutionary model possible
is the existence of stable forms, for some time at least. If we are only inter-
ested in events over very short times compared to those usually involved
in structural change, it may be perfectly legitimate and useful to consider
the structural forms fixed (i.e., we can make assumption 3). This doesn’t
mean that they are, it just means that we can proceed to do some calcu-
lations about what can happen over the short term, without having to
struggle with how forms may evolve and change. Of course, we need to
remain conscious that over a longer period forms and mechanisms will
change and that our actions may well be accelerating this process, but
nevertheless it can still mean that some self-organizing dynamic is useful.
Equally, if we can assume not only that forms are fixed but that in addi-
tion fluctuations around the average are small (i.e., can make assumptions
3 and 4), we may find that prediction using a set of dynamic equations pro-
vides useful knowledge. If fluctuations are weak, it means that large fluc-
tuations capable of kicking the system into a new regime/attractor basin
are very rare and infrequent. This gives us some knowledge about the
probability that this will occur over a given period. So, our model can
allow us to make predictions about the behavior of a system as well as the
associated probabilities and risk of an unusual fluctuation occurring and
changing the regime. An example of this might be the idea of a 10-year
event and a 100-year event in weather forecasting, where we use the sta-
tistics of past history to suggest how frequent critical fluctuations are. Of
course, this assumes the overall stationarity of the system, supposing that
processes such as climate change are not happening. Clearly, when 100-
year events start to occur more often, we are tempted to suppose that the
system is not stationary, and that climate change is occurring.
These are examples of the usefulness of different models and the
knowledge with which they provide us, all of which are imperfect and not
strictly true in an absolute sense, but some of which are useful.
Systematic knowledge, therefore, should not be seen in absolute
terms, but as being possible for some time and in some situations, pro-
vided that we apply our “complexity reduction” assumptions honestly.
Instead of simply saying that “all is flux, all is mystery,” we may admit that
this is so only over the very long term (who wants to guess what the uni-
verse is for?). Nevertheless, for particular questions in which we are inter-
ested, we can obtain useful knowledge about their probable behavior by
making these simplifying assumptions, and this can be updated by contin-
ually applying the “learning” process of trying to “model” the situation.

A FISHERIES EXAMPLE

The example of Canadian Atlantic Fisheries has been presented in several
articles (Allen & McGlade, 1986, 1987b, 1988; Allen, 1997). It includes
models of different types: equilibrium, nonlinear dynamics, and self-
organizing. Here, we shall only describe a model that generates, explores,
and manages the “knowledge dynamics” of fishermen in different ways.
Our model (or story) is set in a spatial domain of 40 zones, with two fish-
ing ports on the coast of Nova Scotia. The model/story recounts the fish-
ing trips and catches of fishermen based on the knowledge they have of
the location of fish stocks and the revenue they might expect by fishing a
particular zone. This information comes essentially from the fishing activ-
ity of each boat and of the other boats, and therefore there is a tendency
for a “positive feedback trap” to develop in which the pattern of fishing
trips is structured by that of the catch—leading to self-reinforcement.
Areas where fishing boats are absent send no information about potential
catch and revenue. The parameters of the cod, haddock, and pollock are
all accurate, as are the costs and data about the boats.
The equation governing the spatial distribution of fishing boats has two
essential terms. The first takes into account the increase or decrease of
fishing effort in a given zone i, by a given fleet, according to how profitable
it is. If the catch rate is high for a
species of high value, revenue greatly exceeds the costs incurred in fish-
ing there, provided that the zone is not too distant from the port. Then
effort will increase. If the opposite is true, effort will decrease. The sec-
ond term takes into account the fact that due to information flows in the
system (radio communication, conversations in port bars, spying, etc.), to
a certain extent each fleet is aware of the catches being made by others.
Of course, boats within the same fleet may communicate freely the best
locations, and even between fleets there is always some “leakage” of
information. This results in the spatial movement of boats, and is gov-
erned by the “knowledge” they have concerning the relative profits that
they expect at the different locations.
This “knowledge” is represented in our model/story by the “relative
attractivity” that a zone has for a particular fishing boat, depending on
where it is, where its home port is, and what the skipper “knows.” For
these terms we use the idea of “boundedly rational” decision makers who
do not have “perfect” information or absolutely “rational” decision-
making capacity. In other words, each individual skipper has a probabil-
ity of being attracted to zone i, depending on its perceived attractivity.
Since a probability must lie between 0 and 1, the attractivity A must always
be defined as positive. A convenient form, quite usual in economics, is:

A_i = e^{R U_i}


where U_i is a “utility function.” U_i constitutes the “expected profit” of
zone i, taking into account the revenue from the “expected catch” and the
costs of crew, boat, and fuel, etc. that must be expended to get it.
In this way, we see that boats are attracted to zones where high
catches and catch rates are occurring, but the information only passes if
there is communication between the boats in zones i and j. This will depend
on the “information exchange” matrix, which will express whether there
is cooperation, spying, or indifference between the different fleets.
However, responses in general will be tempered by the distances
involved and the cost of fuel.
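How much of the other fleets’ experience enters a given fleet’s picture of each zone can be captured with a simple weighting. The sketch below assumes that the “information exchange” matrix is just a table of leakage fractions between fleets (1 for free communication within a fleet, smaller values for spying or casual leakage in port); all the numbers are invented for illustration, not taken from the model.

```python
import numpy as np

# Rows: the fleet forming its picture; columns: the fleet whose catches it hears about.
# 1.0 = free communication within a fleet; smaller values = partial "leakage".
info_exchange = np.array([
    [1.0, 0.2],   # fleet 1 hears all of its own catches, 20% of fleet 2's
    [0.3, 1.0],   # fleet 2 spies a little more effectively on fleet 1
])

# Catch rates observed by each fleet in each of four zones (arbitrary numbers).
observed_catch = np.array([
    [5.0, 0.0, 2.0, 0.0],   # fleet 1's own observations
    [0.0, 3.0, 0.0, 4.0],   # fleet 2's own observations
])

# Each fleet's perceived catch rate per zone is a leakage-weighted sum of
# everyone's observations; zones nobody has fished stay invisible (zero).
perceived = info_exchange @ observed_catch
print(np.round(perceived, 1))
```

Zones that nobody has fished recently contribute nothing to any fleet’s picture, which is exactly the self-reinforcing information trap described above.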
In addition to these effects, however, our equation takes another very
important factor into account, factor R. This expresses how “rationally,”
how “homogeneously,” or with what probability a particular skipper will
respond to the information he is receiving. For example, if R is small, then
whatever the “real” attraction of a zone i, as expressed in U_i, the
probability of going to any zone is roughly the same. In other words,
“information” is largely disregarded and movement is “random.” We have
called this type of skipper a stochast. Alternatively, if R is large, it means
that even the smallest difference in the attraction of several zones will
result in each skipper going, with probability 1, to the most attractive
zone. In other words, such deciders put complete faith in the information
they have, and do not “risk” moving outside of what they know. These
“ultra rationalists” we have called Cartesians.

Figure 7 An initial condition of our fishing story. Two trawler fleets and
one long liner fleet attempt to fish three species (cod, haddock, and pollock)

The movement of the boats around the system is generated by the difference
at a given time between
the number of boats that would like to be in each zone, compared to the
number that actually are there. As the boats congregate in particular loca-
tions of high catch, so they fish out the fish population that originally
attracted them. They must then move on to the next zone that attracts them,
and in this way there is a continuing dynamic evolution of fish popula-
tions and of the pattern of fishing effort.
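A minimal sketch of this choice-and-movement mechanism is given below. It assumes that the zone-choice probabilities take the logit form implied by the attractivity A_i = e^{R U_i}, and that the actual distribution of boats relaxes part of the way toward the desired one at each step; the utilities, the number of zones, and the relaxation rate are illustrative placeholders rather than parameters of the published fisheries model.

```python
import numpy as np

def zone_choice_probabilities(utilities, R):
    """Probability of a skipper heading for each zone, p_i proportional to
    exp(R * U_i).  Small R ("stochasts") gives nearly uniform choices;
    large R ("Cartesians") sends almost everyone to the best-looking zone."""
    a = np.exp(R * (utilities - np.max(utilities)))  # subtract max for numerical stability
    return a / a.sum()

def move_boats(boats, utilities, R, relaxation=0.3):
    """Shift the actual distribution of boats part-way toward the distribution
    the skippers would currently like, given their information."""
    desired = boats.sum() * zone_choice_probabilities(utilities, R)
    return boats + relaxation * (desired - boats)

# Illustrative expected profits for five zones (arbitrary numbers).
U = np.array([1.0, 0.2, 0.8, 0.1, 0.5])
boats = np.full(5, 20.0)
print("stochast  (R = 0.5):", np.round(zone_choice_probabilities(U, 0.5), 2))
print("Cartesian (R = 3.0):", np.round(zone_choice_probabilities(U, 3.0), 2))
print("one movement step  :", np.round(move_boats(boats, U, 2.0), 1))
```

In the full model each U_i would itself be recomputed from catches, prices, costs, and whatever information leaks between fleets, so the probabilities keep shifting as zones are fished out.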
Let us describe briefly a key simulation that we have made of this
model. We consider the competition between fleets 1 and 2 fishing out of
port 1 in South West Nova Scotia. We start from the initial condition
shown above and simply let the model run, with catches and landings
changing the price of fish, and the knowledge of the different fleets
changing as the distribution of fish stocks changes. We let the model run
for 20 years, and see how the “rationality” R affects the outcome.
The result, shown in Figure 8, is remarkable. The higher the value of R, the better the
fleet optimizes its use of information (knowledge management?) and in
the short term increases profits (e.g., R=3 or more). But this does not
necessarily succeed in the long term. Cartesians have a tendency to “lock
in” to their first successful zone and stay fishing there for too long,
because it is the only information available. These Cartesians are not “risk
takers” who will go out to zones with no information, and hence they get
trapped in the existing pattern of fishing. The stochasts (R=.5), provided
that they are not totally random (R less than .1), succeed in both discov-
ering new zones with fish stocks, and also in exploiting those that they
have already located.

Figure 8 Over 20 years the stochast (R=.5) beats the Cartesian (R=2),
after initially doing less well

This paradoxical situation results from the fact that in order to fish effec-
tively, two distinct phases must both be accomplished. First, the fish must
be “discovered.” This requires risk takers who will go out into the
“unknown” and explore—whatever present knowledge is. The second
phase, however, requires that when a concentration of fish has been dis-
covered, the fleet will move in massively to exploit this, the most profitable
location. These two facets are both necessary, but call on different qualities.
Our fishing model/story can be used to explore the evolution of strate-
gies over time. Summarizing the kinds of results observed, we find the
following kind of evolutionary sequence:

◆ Fleets find a moderate behavior with rationality between .5 and 1.


◆ Cartesians try to use the information generated by stochasts, by
following them and by listening in to their communications.
◆ Stochasts attempt to conceal their knowledge, by communicating in
code, by sailing out at night, and by providing misleading information.
◆ Stochasts and Cartesians combine to form a cooperative venture with
stochasts as “scouts” and Cartesians as “exploiters.” Profits are shared.
◆ Different combinations of stochast/Cartesian behavior compete.
◆ In this cooperative situation, there is always a short-term advantage to
a participant who will cheat.
◆ Different strategies of specialization are adopted, e.g., deep-sea or
inshore fishing, or specialization by species.
◆ A fleet may adopt “variable” rationality, adapting its search effort
according to the circumstances.
◆ In all circumstances, the rapidity of response to profit and loss turns
out to be advantageous, and so the instability of the whole system
increases over time.

The real point of these results is that they show us that there is no such
thing as an optimal strategy. As soon as any particular strategy becomes
dominant in the system, it will always be vulnerable to the invasion of
some other strategy. Complexity and instability are the inevitable conse-
quence of fishermen’s efforts to survive, and of their capacity to explore
different possible strategies. The important point is that a strategy does
not necessarily need to lead to better global performance of the system in


order to invade. Nearly all the innovations that can and will invade the
fishing system lead to lower overall performance, and indeed to collapse.
Thus, higher technology, more powerful boats, faster reactions, etc. all
lead to a decrease in output of fish.
In reality, the strategies that invade the system are ones that pay off
for a particular actor in the short term. Yet, globally and over a long
period, the effect may be quite negative. For example, fast responses to
profit/loss or improved technology will invade the system, but they make
it more fragile and less stable than before. This illustrates the idea that
the evolution of complexity is not necessarily “progress,” and the system
is not necessarily moving toward some “greater good.”

EVOLUTIONARY FISHING: LEARNING HOW TO LEARN


We can now take our model/story to a further stage: We can build a model
that will learn how to fish. To do this, we shall simply run fleets in competi-
tion with each other. They will differ in the parameters that govern their strat-
egy: rationality, rate of response to profit and loss, which fleets they try to
communicate with, etc. By running our model, some fleets will succeed and
some fail. If we take out the failures when they occur, and relaunch them with
new values of their strategy parameters, then we can see what evolves.
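The selection loop just described, taking out the failures when they occur and relaunching them with fresh parameters, can be sketched in a few lines. The profit function below is a pure stand-in (random noise), because the point is only the failure-and-relaunch structure; the parameter ranges and the starting capital are likewise hypothetical.

```python
import random

def new_strategy():
    # Two illustrative strategy parameters: "rationality" R and the
    # rate of response to profit and loss (others could be added).
    return {"R": random.uniform(0.1, 4.0), "response_rate": random.uniform(0.1, 1.0)}

def yearly_profit(strategy):
    # Stand-in only: in the full model this would come from simulating
    # catches, prices, costs, and competition between the fleets.
    return random.gauss(0.5, 2.0)

fleets = [new_strategy() for _ in range(8)]
capital = [10.0] * len(fleets)

for year in range(30):
    for i, strategy in enumerate(fleets):
        capital[i] += yearly_profit(strategy)
        if capital[i] <= 0.0:            # the fleet has failed...
            fleets[i] = new_strategy()   # ...so relaunch it with new parameters
            capital[i] = 10.0

print("strategies still in play after 30 years:")
for s in fleets:
    print({k: round(v, 2) for k, v in s.items()})
```

With profits supplied by the full spatial model instead of noise, watching which parameter values persist in this loop is what reveals the succession of "best" strategies reported in Figure 9.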

Figure 9 After 30 years fleet 7 has the best strategy (fast response to
profit/loss), although previously fleet 2 was best and then fleet 3


The importance of this model is that it demonstrates how, as we relax


the assumptions we normally make in order to obtain simple, mechanis-
tic representations of reality, we find that our model can really tell us
things we did not know. Our stories become richer and more variable, as
different outcomes occur and different lessons are learnt. In the spatial
model, we found that economically suboptimal behavior (low R) was nec-
essary in order to fish successfully, and that there was no single “optimal”
behavior or strategy, but there were possible sets of compatible strategies.
The lesson was that the pattern of fishing effort at any time cannot be
“explained” as being optimal in any way, but instead is just one particular
moment in an unending, imperfect learning process involving the ocean,
the natural ecosystem, and the fishermen.
What is important lies even further from the “observed” behavior of the
boats and the fish. It concerns the “meta” mechanisms by which the
parameter space of possible fishing strategies is explored. By running
“learning fleets” with different learning mechanisms in competition with
each other, our model can begin to show us which mechanisms succeed
in generating and accumulating knowledge of how to succeed, and which
do not. Our model has moved from the domain of the
physical—mesh size, boat power, trawling versus long lining, etc.—to the
nonphysical—How often should I monitor performance? What changes
could I explore? How can I change behavior? Which parameters are
effective?—in short, how can I learn? This results from relaxing both
assumptions 3 and 4. We move from a descriptive mechanical model to an
evolving model of learning “inside” fishermen, where behavior and
strategies change, and the payoffs and consequences emerge, over time.

DISCUSSION
We have shown that knowledge generation results from making “simpli-
fying” assumptions. If these are all true, we have a truly appropriate and
useful model; but if the assumptions do not hold, our model may be com-
pletely misleading. Our reflection considered:

◆ Equilibrium assumptions where change is assumed to be exogenous.


◆ Assumptions of fixed “average” mechanisms leading to nonlinear
dynamics (system dynamics), where interacting actors make changing
choices governed by the invariance of their preference functions.
◆ Self-organizing systems of interacting actors of fixed internal nature,
that interact through discrete events that are not assumed average,
allowing the nonaverage to test the stability of any regime, and to
explore other basins of attraction and possible regimes of operation.
◆ Evolutionary, learning models in which behaviors and preferences, goals
and strategies, can change, but the “learning mechanisms” are invariant.

These represent successively more general models, each step containing


the previous one as a special case, but also of increasing difficulty, and
indeed still probably far from “reality.” It is small wonder that most peo-
ple probably have a very poor understanding of the real consequences of
their actions or policies.
The discussion above also brings out a great deal concerning the ideas
that underlie the “sustainability” of organizations and institutions. This
does not result from finding any fixed, unbeatable set of operations that
correspond to the peak of some landscape of fitness. Any such landscape
is a reflection of the environment and of the strategies of the other play-
ers, and so it never stops changing. Any temporary “winner” will need
continually to monitor what is happening, reflect on possible conse-
quences, and possess the mechanisms necessary for self-transformation.
This will lead to the simultaneous “sustaining and transformation” of the
entities and of the system as new activities, challenges, and qualities
emerge. The power to do this lies not in extreme efficiency, nor can it be
had necessarily by allowing free markets to operate unhindered. It lies in
creativity. And, in turn, this is rooted in diversity, cultural richness, open-
ness, and the will and ability to experiment and to take risks.
Instead of viewing the changes that occur in a complex system as nec-
essarily reflecting progress up some pre-existing (if complex) landscape,
we see that the landscape of possible advantage itself is produced by the
actors in interaction. The detailed history of the exploration process itself
affects the outcome. Paradoxically, our scientific pursuit of knowledge tells
us that uncertainty is therefore inevitable, and we must face this. Long-
term success is not just about improving performance with respect to the
external environment of resources and technology, but also is affected by
the “internal game” of a complex society. The “payoff” of any action for an
individual cannot be stated in absolute terms, because it depends on what
other individuals are doing. Strategies are interdependent.
Ecological organization is what results from evolution, and it is com-
posed of self-consistent “sets” of activities or strategies, both posing and
solving the problems and opportunities of their mutual existence.
Innovation and change occur because of diversity, nonaverage individu-
als with their bizarre initiatives, and whenever this leads to an exploration


into an area where positive feedback outweighs negative, growth will


occur. Value is assigned afterward. It is through this process of “post-hoc
explanation” that we rationalize events by pretending that there was some
pre-existing “niche” that was revealed by events, although in reality there
may have been a million possible niches and one particular one arose.
The future, then, is not contained in the present, since the landscape
is fashioned by the explorations of climbers, and adaptability will always
be required. This does not necessarily mean that total individual liberty
is always best. Our models also show that adaptability is a group or pop-
ulation property. It is the shared experiences of others that can offer
much information. Indeed, it pays everyone to help facilitate exploration
by sharing the risks in some cooperative way, thus taking some of the
“sting” out of failure. There is no doubt that the “invention” of insurance
and of limited liability has been a major factor in the development of our
economic and social system. Performance is generated by mutual inter-
actions, and total individual freedom may not be consistent with good
social interactions, and hence will make some kinds of strategy impossi-
ble. Once again, we must differentiate between an “external game,”
where total freedom allows wide-ranging responses to outside changes,
and an “internal game,” where the division of labor, internal relations, and
shared experiences play a role in the survival of the system.
Again, it is naïve to assume that there is any simple “answer.” The
world is just not made for simple, extreme explanations. Shades of grey,
subjective judgments, postrationalizations, multiple misunderstandings,
and biological/emotional motivations are what characterize the real
world. Neither total individual freedom nor its opposite are solutions,
since there is no “problem” to be solved. They are possible choices among
all the others, and each choice gives rise to a different spectrum of possi-
ble consequences, different successes and failures, and different
strengths and weaknesses. Much of this probably cannot really be known
beforehand. We can only do our best to put in place the mechanisms that
allow us always to question our “knowledge” and continue exploring. We
must try to imagine possible futures, and carry on modifying our views
about reality and about what it is that we want.
Mismatches between expectations and real outcome may either cause us
to modify our (mis)understanding of the world, or simply leave us perplexed.
Evolution in human systems is therefore a continual, imperfect learning
process, spurred by the difference between expectation and experience, but
rarely providing enough information for a complete understanding.
Instead of the classical view of science eliminating uncertainty, the


new scientific paradigm accepts uncertainty as inevitable. Indeed, if this


were not the case, then it would mean that things were preordained,
which would be much harder to live with. Evolution is not necessarily
progress, and neither the future nor the past was preordained. Creativity
really exists: it is the motor of change, the hidden dynamic that underlies
the rise and fall of civilizations, peoples, and regions, and evolution both
encourages and feeds on invention. Once this is recognized, the first step
toward wisdom is the development and use of mathematical models that
capture this truth.

REFERENCES
Allen, P. M. (1985) “Towards a New Science of Complex Systems,” in The Science and Praxis
of Complexity, Tokyo: United Nations University Press: 268–97.
Allen, P. M. (1988) “Evolution: Why the Whole is Greater than the Sum of its Parts,” in
W. Wolff, C.-J. Soeder, & F. R. Drepper (eds) Ecodynamics, Berlin: Springer Verlag:
2–30.
Allen, P. M. (1993) “Evolution: Persistent Ignorance from Continual Learning,” in R. H. Day
& P. Chen (eds) Nonlinear Dynamics and Evolutionary Economics, Oxford, UK: Oxford
University Press: 101–12.
Allen, P. M. (1994) “Evolutionary Complex Systems: Models of Technology Change,” in L.
Leydesdorff & P. van den Besselaar (eds) Chaos and Economic Theory, London: Pinter.
Allen, P. M. (1994) “Coherence, chaos and evolution in the social context,” Futures, 26(6):
583–97.
Allen, P. M. (1998) “Evolving Complexity in Social Science,” in G. Altmann & W. A. Koch
(eds) Systems: New Paradigms for the Human Sciences, Berlin/New York: Walter de
Gruyter.
Allen, P. M. & McGlade, J. M. (1986) “Dynamics of discovery and exploitation: The Scotian
Shelf fisheries,” Canadian Journal of Fisheries and Aquatic Sciences, 43(6): 1187–200.
Allen, P. M. & McGlade, J. M. (1987a) “Evolutionary drive: The effect of microscopic diver-
sity, error making and noise,” Foundations of Physics, 17(7): 723–38.
Allen, P. M. & McGlade, J. M. (1987b) “Modelling Complex Human Systems: A Fisheries
Example,” European Journal of Operational Research, 30: 147–67.
Allen, P. M. & McGlade, J. M. (1989) “Optimality, Adequacy and the Evolution of
Complexity,” in P. L. Christiansen & R. D. Parmentier (eds) Structure, Coherence and
Chaos in Dynamical Systems, Manchester, UK: Manchester University Press.
Bharucha-Reid, A. T. (1960) Elements of the Theory of Markov Processes and Their
Applications, New York: McGraw-Hill.
Eigen, M. & Schuster, P. (1979) The Hypercycle, Berlin: Springer Verlag.
Haken, H. (1977) Synergetics, Heidelberg, Germany: Springer Verlag.
McGlade, J. M. & Allen, P. M. (1985) “The fishing industry as a complex system,” Canadian
Technical Report of Fisheries and Aquatic Sciences, No 1347, Ottawa, Canada: Fisheries
and Oceans.
Nicolis, G. & Prigogine, I. (1977) Self-Organization in Non-Equilibrium Systems, New York:
Wiley.

EMERGENCE, 2(4), 104–112
Copyright © 2000, Lawrence Erlbaum Associates, Inc.

Knowledge as Action,
Organization as Theory:
Reflections on Organizational Knowledge
Haridimos Tsoukas

If you desire to see, learn how to act


Reality = community
Heinz von Foerster (1984)

Government officials in the UK recently conducted an
inspection of the teaching methods and aims of
Summerhill, an independent school well known for its
libertarian philosophy. In their report, the inspectors
charged the school with lack of discipline and clear structures, and were
categorical that Summerhill’s modus operandi leaves its students inade-
quately prepared for the rigors of life after school. Reacting to the report,
several supporters of Summerhill claimed the reverse. The school’s
libertarian philosophy, they said, gives students what conventional
schools fail to give: freedom to explore themselves and find out what
they are good at.
This episode (and several others like it) is interesting because the
same set of activities (teaching and school management) is assessed so dif-
ferently by two different observers. What differentiates the observers is
that they use different assessment criteria, derived from different
domains of action. The inspectors do exactly what they are supposed to
do: ensure that schools broadly conform to a set of criteria defined by the
government. Summerhill supporters, on the contrary, subscribe to an


unconventional pedagogical philosophy and, predictably, they want their


school to do things differently from mainstream schools.
If we ask the question “What is Summerhill’s competence?” or “What
is Summerhill’s organizational knowledge?” it is not clear that there can
be a single answer. In fact, this may be the wrong question to ask, since
it misleads us into searching for a definitive answer beginning with
“knowledge is…,” as if knowledge were some thing out there to be
grasped and described (Reyes & Zarama, 1998: 21; Cook & Yanow, 1996).
What is forgotten in such a mode of thinking is that knowledge pre-
supposes a subject or, in the language of second-order cybernetics, an
observer. Knowledge is of someone about something. For the government
inspectors, Summerhill has a confusing curriculum, knows very little
about pedagogy, and is ignorant about school management. The reverse
is the case in the view of those supporting Summerhill.
What is more realistic to say is that knowledge is an assessment of an
entity’s pattern of actions, made by an observer situated in a particular
domain of action, drawing on a particular set of criteria. Knowledge, in
other words, cannot be defined in abstracto, but is a particular observer’s
assessment, derived from applying particular criteria to a set of particular
actions. As Reyes and Zarama (1998: 21) aptly remark, knowledge is an
ascription, not a description; an assessment, not an assertion.
Notice that such a definition of knowledge not only applies when an
observer passes a judgment on someone else (such as a teacher of a stu-
dent), but also when an observer passes a judgment on themselves. How
do I know whether I can ride a bicycle? Because, looking back at the rel-
evant activity, I see that I have been effective in riding a bicycle. How
does a company know what it “knows”? Or, as Mintzberg (1994: 276) asks,
“How can we know that a strength is a strength without acting in a spe-
cific situation to find out?” Finding out what an entity knows is not a cere-
bral exercise, a purely cognitive act, but primarily an empirical question
to be settled in the context of action (Cook & Yanow, 1996: 431). The
observer making the assessment must specify the relevant evidence (or
be open to new experiences) and the criteria for evaluation in order to
arrive at a conclusion. Organizational knowledge is observer dependent
and action based. As such, it cannot be given an objective description in
the way that a bank statement provides us with an objective description
of our last month’s transactions (Lakoff, 1995; Tsoukas, 1997).
Knowledge is the outcome of the process of knowing, that is, the
process of someone drawing distinctions (Maturana & Varela, 1988).
When we draw a distinction, we split the world into “this” and “that;”


through language we constantly bring forth and ascribe significance to


certain aspects of the world (including, of course, our own behavior)
(Schutz, 1970; Taylor, 1985; Winograd & Flores, 1987). In von Foerster’s
(1984: 48) formulation, cognitive processes are never-ending recursive
processes of computation. Cognition consists in computing descriptions
of descriptions, namely, in recursively operating on, transforming, modi-
fying symbolic representations. In doing so, cognizing subjects rearrange
and reorder what they know, thus creating new distinctions and, there-
fore, new knowledge (Bell, 1999: lxiv; Dewey, 1934).
Observers generate distinctions, but they do so within a “form of life”
(Wittgenstein, 1958), a “practice” (MacIntyre, 1985), a “consensual domain”
(Maturana & Varela, 1988), a language-mediated domain of sustained inter-
actions. For example, the meaning of notions such as “shame,” “trust,”
“work,” “loyalty,” is inextricably bound up with the life of a subject of
experience; they are what Taylor (1985: 54) calls “subject-referring prop-
erties.” Language is constitutive of subject-referring properties and, by
implication, of the forms of life from which those properties derive their
meaning. Different vocabularies constitute differently carved-up seman-
tic spaces, within which particular distinctions are located. For example,
having an experience, such as “shame,” involves seeing that certain
descriptions apply—our language marks certain qualitative distinctions
concerning what is shameful (and by implication what is dignified) and
how we ought to respond to it. This accounts for the fact that in different
cultures there are different things to be ashamed about. Knowing how to
act within a domain of action is to make competent use of the distinctions
constituting that domain (Reyes & Zarama, 1998: 24; McDermott, 1999:
106; Cook & Yanow, 1996).
As Spender (1989) has shown, on entering a particular industry, man-
agers learn a particular “industry recipe,” that is, a set of distinctions tied
to a particular field of experience. The distinctions pertain to a number of
issues, ranging from how markets are segmented to the kind of employ-
ees suited to an industry or the technology used. To be a competent
member of an industry is to make competent use of its key distinctions—
and this needs to be learnt on the job.
As individuals increase and refine their capacity for making distinc-
tions (something that happens with practice), they increase their capacity
for knowing. Knowledge is what is retained as a result of this process
(McDermott, 1999: 106). Consider, for example, the case of an operator at
a call center of a mobile telecommunications company (Tsoukas &
Vladimirou, 2000). A particular customer complained that he did not have


the caller identification service, whereby a caller’s phone number


appears on the receiver’s mobile phone display, although he had paid for
it. This could have been due to a technical problem, an error on the part
of the company in having failed to activate that service, or the fact that
certain callers did not wish to have their number appear on other people’s
mobile phone displays. An experienced operator knew that the first two
possibilities were not very common and that she should focus on the
third. Indeed, as ethnographic studies show (Orr, 1996; Hutchins, 1995),
this is what experienced practitioners do: they see through a problem,
shortcut formally known procedures of reasoning involving a set of crude
distinctions, in order to make more refined distinctions (Schon, 1983).
To know is to act. The process of making distinctions, of recursively
computing descriptions of descriptions, involves a historically constituted
cognizing entity actively engaging with the world and selecting, carving
up, bringing forward, highlighting certain aspects of the world. At the
level of the individual, as Polanyi (1975) perceptively noted, knowing is
acting in the sense that, in order to know something, the individual acts
to integrate a set of particulars of which they are subsidiarily aware. To
make sense of our experience, we necessarily rely on some parts of it sub-
sidiarily in order to attend to our main objective focally. We comprehend
something as a whole (focally) by tacitly integrating certain particulars,
which are known by the actor subsidiarily. Knowing has a from–to struc-
ture: the particulars bear on the focus to which I attend from them. Thus,
knowing always has three elements: subsidiary particulars, a focal target,
and, crucially, a person who links the two.
Polanyi’s (1975: 36) classic example is the man probing a cavity with
his stick. The focus of his attention is at the far end of the stick, while
he attends subsidiarily to the feeling of holding the stick in his hand. On
this view, knowledge is inevitably and irreducibly personal, since it
involves personal participation (action) in its generation. In Polanyi’s
(1975: 38) words, “the relation of a subsidiary to a focus is formed by the
act of a person who integrates one to another.”
Polanyi’s view of individual knowledge-as-action can be extended to
apply at the collectivity level (Tsoukas, 1998: 58–9). The stories and arti-
facts that practitioners share in a community constitute a certain type of
knowledge (we may call it “heuristic knowledge,” Collins, 1990) that has
been historically generated in response to remarkable events (such as
contingencies, breakdowns, failures, and successes). Individual practi-
tioners subsidiarily draw on such collective knowledge (heuristic know-
ledge) while tackling a particular problem. They are focally aware of a


problem by tacitly integrating the heuristic–knowledge subsidiaries


(involving stories about similar problems in the past) with the focal prob-
lem. Narratively organized experiences (both personal and vicarious) pro-
vide practitioners with the subsidiary particulars, which bear on the focal
activity to which practitioners are attending from (Tsoukas, 1998). In his
ethnographic study of photocopier repair technicians, Orr (1996) has
shown how the stories shared by the community of technicians constitute
an important part of its collective memory on which technicians individ-
ually draw in the course of their repair activities.

ORGANIZATION AS THEORY

If knowledge is irreducibly personal, how could it ever be organizational?


In a weak sense, knowledge is organizational simply by its being gener-
ated, developed, and transmitted within the context of organizations.
That is obvious and deserves no elaboration. In a strong sense, however,
knowledge becomes organizational when members of an organization
draw distinctions in the course of their work, by taking into account not
only the situatedness of their actions but also the generalizations pro-
vided to them by the organization, in the form of generic rules.
Let me explain. A distinguishing feature of organization is the gener-
ation of recurring behaviors by means of institutionalized roles that are
explicitly defined. To say that an activity is organized implies that types
of behavior in types of situations are connected to types of actors (Berger
& Luckmann, 1966: 22; Scott, 1995). An organized activity
provides actors with a given set of cognitive categories and a typology of
action options (Kreiner, 1999; Scott, 1995; Weick, 1979). Such a typology
consists of rules of action, typified responses to typified expectations
(Berger & Luckmann, 1966: 70–73). Rules are prescriptive statements
guiding behavior in organizations and take the form of propositional
statements, namely, “If X, then Y, in circumstances Z.”
On this view, organizing implies generalizing: the subsumption of
heterogeneous particulars under generic categories. In that sense, formal
organization involves abstraction, typification (Kreiner, 1999: 14). Since
in a formal organization the behavior of its members is guided by a set of
propositional statements, it follows that an organization may be seen as a
theory—a particular set of concepts (or cognitive categories) and the
propositions expressing the relationship between concepts.
Organization-as-theory enables organizational members to “take a find-
ing and generalize from any context to another context” (Bell, 1999: lxiii).


For example, the operators of the call center mentioned above have been
instructed to issue standardized responses to standardized queries: if this
type of problem appears, this type of solution is normally appropriate.
From a purely organizational point of view, the contextual specificity sur-
rounding every particular call (a specificity that callers tend to expand on
in their calls) is removed through the application of generic organiza-
tional rules.
Rules, however, exist for the sake of achieving specific goals. The gen-
eralizations selected and enforced are selected from among numerous
other possibilities. To have as a rule, for example, that “no caller should
wait for more than one minute before their call is answered” is not self-
evident. It has been selected by the firm in order to increase its customer
responsiveness, hoping that, ultimately, it will contribute to attracting
more customers, thus leading to higher market share, and so on. In other
words, a rule’s factual predicate (“If X…”) is a generalization selected
because it is thought to be causally relevant to a justification—some goal
to be achieved or some evil to be avoided (Schauer, 1991: 27). A justifica-
tion (or, to be more precise, a set of logically ordered justifications) deter-
mines which generalization will constitute a rule’s factual predicate. This
is an important point, since it highlights the fact that rules exist for the
sake of some higher-order preferences, which may have been explicitly
formulated in the past, but which, in the course of time, tend to become
part of an activity’s background and, thus, have probably faded.
Moreover, rules do not apply themselves; members of a community of
practice, within specific contexts, apply them (Gadamer, 1980; Tsoukas,
1996; Wittgenstein, 1958). Members of a community must share an inter-
pretation of what a rule means before they apply it. As Barnes (1995: 202)
remarks:

nothing in the rule itself fixes its application in a given case, … there is no
“fact of the matter” concerning the proper application of a rule, … what a
rule is actually taken to imply is a matter to be decided, when it is
decided, by contingent social processes.

Since rules codify particular previous examples, an individual following a


rule needs to learn to act in proper analogy with those examples. To fol-
low a rule, therefore, is to extend an analogy.
Notice that, on this Wittgensteinian view of rules, the proper applica-
tion of a rule is not just an individual accomplishment but a collective
one, since it is fundamentally predicated on collectively shared meanings.


If formal organization is a set of propositional statements, then those


statements must be put into action by organizational members, who
“must be constituted as a collective able to sustain a shared sense of what
rules imply and hence an agreement in their practice when they follow
rules” (Barnes, 1995: 204, emphasis added). The justifications underlying
rules need to be elaborated and their meaning agreed by the organiza-
tional collective. Organizational tasks are thus accomplished to the extent
that individuals are able to secure a shared sense of what rules mean (by
agreeing, reinforcing, and sustaining a set of justifications) in
the course of their work. This suggests the notion of the organization as a
densely connected network of communication through which shared
understandings are achieved.

CONCLUSIONS
We may now stand back and review the whole argument. Formal organi-
zations are three things at once: contexts within which individual action
takes place; sets of rules in the form of propositional statements; and his-
torical communities. Knowledge is what remains when individuals draw
distinctions in the course of their work, based on an appreciation of con-
text and/or the application of theory.
From the above, it should be clear that organizational knowledge is
three things at once. First, it is personal knowledge. As members of
organizations, individuals draw distinctions in the course of their work;
select what they take to be the relevant aspects of both the context within
which their actions take place and the tradition within which they are
embedded; and decide how strong the analogy is between current and past
instances. Secondly, organizational knowledge is propositional.
Propositional statements explicitly articulating the tasks of an organiza-
tion guide individual action. And thirdly, organizational knowledge is col-
lective (or cultural). It consists of the shared understandings of a
community as they have evolved over the course of time, thanks to which
concerted action is rendered possible (Collins, 1990: 109).
If the above is accepted, it follows that the management of organiza-
tional knowledge is broader than the development of ever more sophisti-
cated propositional statements (what is often referred to as “codified” or
“canonical” knowledge) and their management through digital informa-
tion systems, as some seem to suggest (Gates, 1999). That is the easy part.
More painstaking is the refinement of individual perceptual skills
through systematic organization-wide reflection on past experiences


(Weick’s [1979, 1995] sensemaking), as well as the continuous effort to


sustain a shared sense of what organizational rules mean in practice.
Since knowledge does not apply itself but is applied by people, it is per-
haps worth stressing that it is still people and organizations that need
management (Kreiner, 1999: 1, 26), rather than some hypostatized body
of “knowledge” existing in the Platonic realm of “pure forms.”
At the same time, what makes the knowledge economy distinct, and
what, therefore, differentiates management today from management in
the past, is that managing people and organizations needs to be done
from the perspective of building and refining knowledge assets. It is now
widely realized what sophisticated theorists and practitioners have
known all along: when companies hire people to work they do not just
hire pairs of hands, or even brains, but whole human beings whose
knowledge and expertise, properly used and constantly developed in the
organizational context, can make a difference to how resources are
deployed. That means that the key to achieving effective coordinated
action does not so much depend on those “higher up” collecting more and
more information about what is going on in the organization (which has,
traditionally, been the panoptical managerial ideal), as on those “lower
down” finding ever more sophisticated ways of interrelating their actions.
The challenge for theorists is to explore how this happens and what the
role of individual and organizational knowledge is in such a process.

REFERENCES
Barnes, B. (1995) The Elements of Social Theory, London: UCL Press.
Bell, D. (1999) “The Axial Age of Technology Foreword: 1999,” in D. Bell, The Coming of
the Post-Industrial Society, special anniversary edition, New York: Basic Books:
ix–lxxxv.
Berger, P. & Luckmann, T. (1966) The Social Construction of Reality, London: Penguin.
Collins, H. M. (1990) Artificial Experts, Cambridge, MA: MIT Press.
Cook, S. & Yanow, D. (1996) “Culture and Organizational Learning,” in M. D. Cohen & L.
S. Sproull (eds) Organizational Learning, Thousand Oaks, CA: Sage: 430–59.
Dewey, J. (1934) Art as Experience, New York: Perigee Books.
Gadamer, H. G. (1980) “Practical philosophy as a model of the human sciences,” Research
in Phenomenology, 9: 74–85.
Gates, B. (1999) Business @ the Speed of Thought, London: Penguin.
Hutchins, E. (1995) Cognition in the Wild, Cambridge, MA: MIT Press.
Kreiner, K. (1999) “Knowledge and mind: The management of intellectual resources,”
Advances in Management Cognition and Organizational Information Processing, 6:
1–29.
Lakoff, G. (interviewed by I. A. Boal) (1995) “Body, Brain, and Communication,” in J. Brook
& I. A. Boal (eds) Resisting the Virtual Life, San Francisco: City Lights: 115–30.
MacIntyre, A. (1985) After Virtue, 2nd edn, London: Duckworth.
Maturana, H. & Varela, F. (1988) The Tree of Knowledge, Boston: New Science Library.


McDermott, R. (1999) “Why information technology inspired but cannot deliver knowledge
management,” California Management Review, 41: 103–17.
Mintzberg, H. (1994) The Rise and Fall of Strategic Planning, New York: Prentice Hall.
Orr, J. (1996) Talking About Machines, Ithaca, NY: ILR Press.
Polanyi, M. (1975) “Personal Knowledge,” in M. Polanyi & H. Prosch, Meaning, Chicago:
University of Chicago Press: 22–45.
Reyes, A. & Zarama, R. (1998) “The process of embodying distinctions: A re-construction of
the process of learning,” Cybernetics and Human Knowing, 5: 19–33.
Schauer, F. (1991) Playing by the Rules, Oxford, UK: Clarendon.
Schon, D. (1983) The Reflective Practitioner, New York: Basic Books.
Schutz, A. (1970) On Phenomenology and Social Relations, ed. H.R. Wagner, Chicago:
University of Chicago Press.
Scott, W. R. (1995) Institutions and Organizations, Thousand Oaks, CA: Sage.
Spender, J.-C. (1989) Industry Recipes, Oxford, UK: Blackwell.
Taylor, C. (1985) Philosophy and the Human Sciences, Vol. 2, Cambridge, UK: Cambridge
University Press.
Tsoukas, H. (1996) “The firm as a distributed knowledge system: A constructionist
approach,” Strategic Management Journal, 17 (Special Winter Issue): 11–25.
Tsoukas, H. (1997) “The tyranny of light,” Futures, 29: 827–43.
Tsoukas, H. (1998) “Forms of Knowledge and Forms of Life in Organized Contexts,” in R.
Chia (ed.) In the Realm of Organization, London: Routledge: 43–66.
Tsoukas, H. & Vladimirou, E. (2000) “On Organizational Knowledge and its Management:
An Ethnographic Investigation,” Working Paper No. 00/02, University of Essex,
Department of Accounting, Finance and Management.
Twining, W. & Miers, D. (1991) How to Do Things with Rules, 3rd edn, London: Weidenfeld
and Nicolson.
von Foerster, H. (1984) “On Constructing a Reality,” in P. Watzlawick (ed.) The Invented
Reality, New York: W.W. Norton: 41–61.
Weick, K. (1979) The Social Psychology of Organizing, 2nd edn, Reading, MA: Addison-
Wesley.
Weick, K. (1995) Sensemaking in Organizations, Thousand Oaks, CA: Sage.
Winograd, T. & Flores, F. (1987) Understanding Computers and Cognition, Reading, MA:
Addison-Wesley.
Wittgenstein, L. (1958) Philosophical Investigations, trans. G. E. M. Anscombe, Oxford, UK:
Basil Blackwell.

EMERGENCE, 2(4), 113–135
Copyright © 2000, Lawrence Erlbaum Associates, Inc.

“Shall I Compare Thee to …
an Organization?”
Max Boisot & Jack Cohen

For years, organization theory has been pulled in opposite
directions by the implicit themata of economics and of biol-
ogy (Holton, 1973). Economics, on the one hand, has been
Platonic and Newtonian in its orientation (Mirowski, 1989).
It evolved with a focus on stable equilibria and, for that reason, found it
difficult to handle discontinuous change—until recently, comparative
statics was as near as it got. Economics also tried, in a reductionist fash-
ion, to minimize the part played by context in its explanatory schemes.
Biology, on the other hand, has been more Aristotelian and context ori-
ented. It has also been more willing to admit of irreversible time-related
processes and far-from-equilibrium phenomena. All biologists have to
come to terms with irreversible change, if only in their own lives!
The term “organization” is common to both biology and the social sci-
ences. Within the latter, the term is of particular interest to the fields of
management and economics. Indeed, the organization of firms and the
organization of industries can be said to form a large part of the core of
these fields of study. But do biologists, management scholars, and econo-
mists refer to the same underlying phenomena when they use the term
“organization”? Is there a common set of concepts that underpins the use
of the term in these different disciplines?
The term “organization” is clearly not the exclusive property of orga-
nizational scientists. Biologists, on etymological grounds alone, have a
greater claim to it. Indeed, students of social and economic organization
have been much influenced by biological thinking, as witnessed, for
example, by work on the population ecology of organizations (Hannan &


Freeman, 1989; Carroll, 1987; Aldrich, 1979), on product and organiza-


tional lifecycles, on organizational growth and development, etc. In each
case the borrowing has been partial, often undisciplined, and sometimes
arbitrary.
Until the 1980s, the closest that organization theory came to thinking
in rigorous biological terms was through general systems theory and
cybernetics (Von Bertalanffy, 1976; Katz & Kahn, 1978; Ashby, 1954;
Wiener, 1962). But, as Paul Cilliers observes, general systems theory and
cybernetics are ahistorical in character, i.e., they are reversible and have
no memory (Cilliers, 1998). To that extent, they represent a mechanistic
tradition within biology rather than an organismic approach to organiza-
tional issues. For biology in its early days, under the spell of Descartes’
mechanical philosophy, was almost exclusively mechanistic—think, for
example, of Harvey’s pump model of the heart or of de la Mettrie’s
homme-machine.
The question we wish to address in this article is whether, when they
talk of organization, the biological and managerial disciplines are holding
on to different parts of the same animal or on to different animals that
merely show a passing resemblance to each other. Certainly, three
decades ago, organization theorists believed that general systems theory
offered them a theory of organization that they intimately shared with
biologists. But systems theory à la 1960s (which was never part of the bio-
logical mainstream) has been superseded by the complex adaptive sys-
tems (CAS) paradigm, and organization theorists have not yet fully
adopted this new thinking. Their implicit models still owe more to the
1960s than to the 1980s or the 1990s. Biologists, for their part, have in
general moved on, accepting complex systems thinking in disciplines as
different as ecology and endocrinology. Does the concept of complex
adaptive systems move the two disciplines closer to each other? Does it
foreshadow a universal theory of organization?
Both social science and biological concepts of organization have to
deal with the problem of agency and hence with the problem of know-
ledge. Yet, whereas in the social sciences knowledge has been framed pri-
marily as a cognitive issue, in biology knowledge shows up more as the
driver of purposeful behavior, whether or not cognition as commonly
understood is involved. In economics, for example, knowledge is a cog-
nitive object. In contrast, for biology, knowledge is a disposition or capac-
ity to respond. As Popper put it, from the amoeba to Einstein, organisms
are in the business of formulating and testing hypotheses (Popper, 1972).
Are the social science and the biological perspectives on knowledge


incommensurable in a Kuhnian sense (Kuhn, 1962)? Or can they be rec-


onciled? Our hypothesis is that through learning, biological entities trans-
form capacities into objects and objects into capacities. We thus take
learning—in its context of organization—to be the key to any reconcilia-
tion between them.
Stewart and Cohen’s concept of extelligence illustrates how objects
generate capacities (Stewart & Cohen, 1997): by blurring their bound-
aries with their context. Extelligence, then, can be thought of as a rela-
tional property that links a system at a given level with its proximate
environment at that same level, i.e., with its context. There is extelligence
at the level of the individual agents within an organization—the organi-
zation itself then becomes a form of extelligence at this level. And there
is also extelligence at the level of the organization itself—here, markets
and other institutions now constitute extelligences for it.
So how, then, should we consider the term “organization”? Within
each discipline, the meaning of the term has been evolving over the
years.

ORGANIZATION IN THE MANAGEMENT SCIENCES

The challenge of organizing was felt by political units long before it was
felt by economic ones. The modern nation state would be unthinkable
without a significant increase in organizational capacity. The rapid spread
of education and literacy in western societies over the last three centuries
allowed a phenomenal growth in the organs of modern governments and
in their ability to monopolize the means of coercion and of administration
over increasingly large tracts of territory. Within the modern nation state,
coercion and administration have become inversely related: control by
force gradually gives way to control by regulation, i.e., over time, infor-
mation and intelligence substitute for energy, both constructive and
destructive. But increases in the size of administrative units spawn
impersonal bureaucracies—a shift from community to organization, from
Gemeinschaft to Gesellschaft (Tönnies, 1955). The core values of these
new entities suggest that bureaucracies are efficient machines in the serv-
ice of the state. They have no goals of their own. They are means rather
than ends (Weber, 1978; Elias, 1939).
With the advent of the railway and the telegraph in the second half of
the nineteenth century—and later that of the telephone—the focus of
economic organizations was still primarily on the transportation and dis-
tribution of physical goods. In the space of two decades, in a number of


industries, the US evolved from a collection of disjointed regional mar-


kets into a single integrated national market. The growth of the modern
corporation in the US at the end of the nineteenth century was a natural
consequence of these technological changes. Yet, the sheer size of these
giant firms posed the problem of managerial coordination in an acute way.
Timetables had to be met, production scheduled, inventories monitored,
etc. Unsurprisingly, the first modern managers were production engi-
neers, and their prime concern was the planning and control of produc-
tive activity. Organization, then, was first and foremost the organization
of production (Chandler, 1962), and the planning and control processes
through which production was organized were akin to—to use a biologi-
cal term—a nervous system.
The giant corporation grew through a process of differentiation and
integration of increasingly specialized functions (Lawrence & Lorsch,
1967). Emile Durkheim’s distinction between organic and mechanical
solidarity is relevant to this process (Durkheim, 1933). Durkheim was a
turn-of-the-century French sociologist who claimed that in premodern
societies, social solidarity was “mechanical” in nature, i.e., it bound
together undifferentiated agents into a homogeneous whole. In modern
industrial societies, by contrast, solidarity has become “organic,” i.e.,
because of the division of labor, agents are now specialized, so that where
bonding occurs, it now takes place between functionally differentiated
units. The first type of solidarity generates simple social objects, each
with a “nervous system,” whereas the second generates complex social
systems, and is much more like an ecosystem of diverse creatures;1 see
below. This is reminiscent of Toffler’s First Wave and Second Wave; it is
his Third Wave (1980) that matches our later informational concerns.
The implicit model that underpinned the concept of early twentieth-
century organizations—both state and nonstate—was drawn from a
nineteenth-century physics deeply committed to a mechanical philosophy.
With large-scale production, the concerns were focused on energy expen-
ditures and achieving efficiency, i.e., “work put in” over “work got out.” The
aim was to minimize the energy expenditures needed—animate or inani-
mate—to obtain a given result. “Second Law of Thermodynamics” think-
ing dominated: you couldn’t win, you were doing well to break even! In the
“machine bureaucracies” thus created (Mintzberg & Waters, 1985), good
management was first and foremost the management of costs, and cost
accountants reigned supreme. They even had their own god: Frederick
Taylor. Note that the reduction of input was seen, under the eye of the cost
accountant, to be at least as virtuous an activity as increasing the output.


Associating the efficient use of machines with effective information


flows and feedback relationships made industrial managers particularly
receptive to cybernetic concepts when these first appeared in the 1940s
and 1950s (Wiener, 1962; Ashby, 1954). Yet, instead of treating their organ-
izations as open systems—as was increasingly being done in biology (see
below)—industrial managers were continually trying to close them up in
order to assert managerial control. In other words, they were constantly
pushing the organization-as-machine model rather than the organization-
as-living-thing model. They were trying, in a way, to emulate closed sys-
tems by reducing their inputs, improving “efficiency,” not efficacy. And
even when the open-systems perspective finally took root—in the 1960s
and 1970s—it promoted the view of organization-as-individual-organism
rather than as anything larger or more complex. Organism here equaled
firm (Burns & Stalker, 1960). Serebriakoff (1975) coined the word “org”
specifically to refer to organism and to organization, with both having a
“brain” between a sensorium and a motorium, with external feedbacks at
least.
Open systems theory introduced a biological perspective to manage-
ment. It naturally produced its own tensions. Organization theorists
found themselves pulled in opposite directions by the Newtonian tradi-
tions that still pervaded economics and the Heraclitian traditions of biol-
ogy. On the one hand, economists have never been particularly
concerned with organizational matters. They have typically treated firms
as atoms that collide competitively with each other in markets. What goes
on inside firms—internal organization—is of no particular interest to
them. Taking it as an axiom of their system that firms can be treated as if
they behaved rationally, they can afford to ignore the complexities of what
goes on within their organizational boundaries as epiphenomenal to their
concerns. Rational firms will be driven to respond in identical fashion to
a given set of external forces. (This had many prejudices in common with
the Skinner behaviorist psychology, where organisms—and children—
were considered as if there was no internal structure, simply the
input/output system.)
On the other hand, and drawing on biological analogies, organization
theorists had become increasingly uncomfortable with this simple
mechanical representation of economic organizations. Looking inside
firms, they sensed a degree of complexity and novelty in organizational
processes that constantly challenged the toy-world models bequeathed to
them by economists. Equally, the psychologists of the 1970s repudiated
Skinner.


The massive densification of communications that has taken place


over the past few decades has gradually eroded that facile distinction
between “internal” and “external” organization that allowed economists
to demarcate themselves from organizational theorists. External organi-
zation may have been treatable by the economists’ analytical tools. It
could be managed through the impersonal and mechanical operations of
the “market mechanism.” Internal organization, however, was too com-
plex for that. It required managerial coordination (Coase, 1937;
Williamson, 1975). Furthermore, with fast, low-cost modern communica-
tions technologies, external organization has now become as complex as
internal organization. If 100 years ago the railway and the telegraph ini-
tially created islands of complexity in the form of the modern corporation,
the internet is today extending this complexity and making it ubiquitous.
The rapid growth of interorganizational networking made possible by
the arrival of the internet has promoted an ecological perspective on
economic/business organization, as well as fostering a growing interest in
complex systems. Organizations are now viewed less as tightly coupled
objects than as loosely coupled systems in interaction. This shift in per-
spective clearly challenges the traditional assumption that we can associ-
ate economic organizations with bounded entities called firms. It has
always been taken for granted that firms were both instruments of pro-
duction and of distribution. But they turn out, on closer inspection, to be
only instruments of distribution—a ranked structure of claims to the out-
put of production.2 With the spread of outsourcing, downsizing, and
strategic alliancing, productive organization today reaches out far beyond
the boundaries of the firm as conventionally understood. The distinction
between internal and external organization has now not so much disap-
peared as become intractably fuzzy.
Both social and biological organizations first have to construe their
environments in order to act in them. Each does it in its own way. What
it is to be a frog (Lettvin et al., 1959) thus differs significantly from what
it is to be a bat (Nagel, 1989); frogs and bats construe their relevant envi-
ronments differently. So do Microsoft and the Birmingham city authori-
ties in the UK. We are here dealing with problems of representation and
of meaning. It may be argued that complex living systems—and, more
controversially, some complex nonliving systems—do not react directly to
events but to their internal representations of events. The focus thus shifts
imperceptibly to issues of knowledge and the emergence of patterns, i.e.,
the organization of information to generate plausible representations of
an environment as the basis of action. Enacting such representations subsequently shapes an organization’s capacities and hence our conception of what it should or could be.3 When Wittgenstein (1968) refers to “forms of
life,” is he not simply referring to the way that a capacity for having or gen-
erating knowledge is constrained by organization, biological or otherwise?
We should keep in mind, for organisms as well as organizations, that
the context is not an unchanging “environment,” which the internal
knowledge can assume remains the same. The structure of the effective
context is determined recursively, like rivers carving their way across a
landscape and changing its geography, then being guided by the new
topography. Tadpoles see a different landscape from frogs, but themselves
change the landscape. A puppy sees your home differently from the dog
it becomes, but the dog’s life is modified by what it did as a puppy.
Equally, the existence of a company in the “industrial ecology” should
now be seen as a modifier of the very environment which that company
sees as its context. This is new thinking whose characteristic word—and
thought—is recursion.
Organizations, then, bring forth their worlds. Their knowledge is as
much about the possible worlds that they are capable of construing as
about the probable worlds to which they need to respond if they are to
survive in the short term. They therefore act more in a matrix of plausi-
bilities than of certainties. This is because, like biblical prophets, they are
caught by their own forecasts and must look at the phase space of possi-
bilities for the future; by now, they must have grown out of the “five-year
plan” mentality of the 1930s. The complex adaptive systems approach is
well aligned with this perspective. It offers us a much more provisional
and potentially richer conception of what organization is about. It makes
us less disposed to ontologize traditional forms of organization and more
inclined to tolerate a “Cambrian” explosion of organizational possibilities
(Gould, 1990).

ORGANIZATION IN BIOLOGY

Has biology experienced the same transition as did management from tightly coupled conceptions of organization to more loosely coupled
ones? Has it moved from consideration of objects, to processes, to com-
plex systems thinking? What have been the consequences? Has there also
been a shift from an energy-based view of biological processes to an
information-based one?
This change of perspective in thinking about human cultural organi-
zations parallels earlier, arguably homologous, changes in biological thinking. Here are some examples. In the 1930s to 1950s, anatomy—a science of objects and their relationships—was treated as a major focus in
schools and in medical education in general, with dissection as the cen-
tral teaching exercise and a significant tool in research. This was replaced
by physiology—a science of processes—in the 1950s to 1970s.
Physiology started with closed-system, equilibrium assumptions (homeo-
stasis), but then moved into the 1980s with progressively open-systems
thinking regulated by internal feedbacks.
Endocrinology started in the 1920s with lists of hormones: until the
1970s, measures of “amounts” such as blood levels of oestrogens or high-
density lipids, for example, were the usual clinical parameters. After the
1980s, endocrine questions began to mesh in with more complex ques-
tions about feedback relationships. Today, endocrinologists have devel-
oped a vast list of chemical messengers, whose amounts have become less
important than their timing and their responsivity within complex inter-
active systems.
Embryology in the 1920s was mostly descriptive; it then became pro-
gressively more interactive in outlook, looking to the organism develop-
ing in context, and regulating not its state (homeostasis) but its
developmental path (homeorhesis). From this it was but a step to the epi-
genetic view, i.e., an organism-level trajectory operating in context.
Ecology was originally made up of object-oriented lists of species and
numbers. During the course of the twentieth century, a more interactive
and open-systems style of thinking gradually led to the nonequilibrium
ecosystems perspective that we teach and manage today, drawing on pro-
gressively more complex-systems models whose mathematical basis
includes bounded chaos. “Balance of nature” thinking finally died among
professional ecologists in about 1985, leading to a dissonance between
folk ecologists and conservationists that somewhat resembles that
between “command-and-control” and “open-systems” managers in eco-
nomic organizations.
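
To make “bounded chaos” concrete, here is a minimal sketch; it is our own illustration rather than a model drawn from the ecological literature cited above, and uses the logistic map, the textbook example of dynamics that never settle into a “balance” yet remain confined within bounds.

```python
# Minimal illustration (not from the article): the logistic map, a standard
# toy model of "bounded chaos" -- trajectories that never reach equilibrium
# yet stay confined within fixed bounds.
def logistic_map(r, x0, steps):
    """Iterate x -> r * x * (1 - x) and return the trajectory."""
    trajectory = [x0]
    for _ in range(steps):
        x = trajectory[-1]
        trajectory.append(r * x * (1 - x))
    return trajectory

if __name__ == "__main__":
    # At r = 3.9 the dynamics are chaotic, but every value stays between 0 and 1:
    # no "balance of nature", yet no runaway collapse or explosion either.
    chaotic = logistic_map(r=3.9, x0=0.2, steps=200)
    print(min(chaotic), max(chaotic))   # both remain within the unit interval
```

Run for a few hundred steps, the trajectory wanders unpredictably but never leaves the unit interval; nonequilibrium ecosystem models trade on the same qualitative behavior.
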
In both the biological and the managerial disciplines, one sees two
significant shifts: from objects to interaction between objects; and from
objects as things to objects as spatio-temporal states in wider processes.
In each of the two shifts, the time dimension looms larger. In both disci-
plines, one is now moving toward the study of interacting processes; in
other words, toward the study of complex, possibly adaptive systems. The
two shifts in our thinking—from objects to interactions and from objects
to spatio-temporal states—vastly increase the degrees of freedom of what
one understands by an organization, as well as the scope for creative and emergent processes to drive its evolution and development. This introduces the issue of organizational autonomy, which differs in substantial
ways from Maturana and Varela’s concept of biological autonomy and
autopoietic closure.4 It also moves us ever further away from the concept
of the organization as a machine that is first “designed” from the outside
and then externally directed, whether by William Paley’s watchmaker or
by some other first-mover. To some extent, it also moves us away from the
simplistic biology promoted by the popularizers of a DNA-driven world.
Dawkins’ The Selfish Gene and The Blind Watchmaker return readers to
a mechanical, object-centered view that is not widely shared by today’s
working biologists, in the (mistaken) belief that it is a simpler view more
congenial to the readers.
The ways of looking at the evolution of reproductive systems that
were pioneered by Waddington in the 1960s (Waddington, 1957) are now
becoming popular among biologists, for they fit this new-style thinking
(but are not easy to meld with the DNA-centered, organisms-are-just-
DNA-writ-large approach so common in popular books and the media).
These epigenetic landscape models and their kin aid understanding of
evolving lineages by presenting each development as a series of balls
rolling down an “epigenetic landscape” of hills and valleys; where the
balls end up defines the resulting organism. The useful element of this model lies in working out what determines the topography of these
successive landscapes. Think of each developing organism’s landscape as
an elastic sheet. The gene system pulls from underneath, a series of tang-
led threads each attached separately to the sheet (for few genes have sim-
ple, single effects), while the environment pushes from above—think of a
hand whose fingers are resting on the sheet. As each generation of organ-
isms is produced, its existence modifies the shape of the landscape for
subsequent generations, so different genetic patterns are selected.
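
The recursive element of the model can be sketched in a few lines of code. The toy simulation below is our own illustration, not Waddington’s mathematics: a one-dimensional landscape with two valleys, balls that roll downhill to a phenotype, and a tilt term through which each generation deforms the sheet seen by the next.

```python
import random

# A toy, one-dimensional rendering of an epigenetic landscape (our own
# illustration). The landscape is a height function over a phenotype axis;
# each "ball" rolls downhill into a valley, and the population produced in
# one generation deforms the landscape presented to the next.

def height(x, tilt):
    # Two valleys near x = -1 and x = +1; `tilt` carries the recursive
    # contribution of earlier generations and of the environment.
    return (x ** 2 - 1) ** 2 + tilt * x

def roll_downhill(x, tilt, step=0.01, iters=2000):
    # Follow the numerical gradient down to a local valley floor.
    for _ in range(iters):
        grad = (height(x + 1e-4, tilt) - height(x - 1e-4, tilt)) / 2e-4
        x -= step * grad
    return x

tilt = 0.0
for generation in range(5):
    phenotypes = [roll_downhill(random.uniform(-2, 2), tilt) for _ in range(100)]
    mean = sum(phenotypes) / len(phenotypes)
    tilt += 0.1 * mean          # this generation reshapes the next one's sheet
    print(f"generation {generation}: mean phenotype {mean:+.3f}, tilt {tilt:+.3f}")
```
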
This recursive genetic malleability is alien to the Mendelian, purely
mutation-based explanations of evolution. Only a small proportion of
organisms grow up to become breeders, and it is these which have
recombined the parental genes in useful ways. Nearly all evolution pro-
ceeds by rearrangement of the very diverse genetics of natural popula-
tions. Only in laboratories, in folk biology, and in some science-fiction
films are new mutations taken to be the sole source of variability.
At the scale of human organizations, our questions have also gradually
shifted from “What is the secret process that a particular firm uses to
remain successful?”—a “mutational” approach—to “How has this organ-
ization promoted a responsiveness to changes in the market that it has itself caused in order to remain successful?”—an evolutionary strategy used by Darwin’s finches.

THE MANAGEMENT OF ECOLOGIES AND GENOMES


Practice, as always, lags behind. Our perceptual apparatus is more com-
fortable resolving the world into objects than into processes. Objects
have a lower dimensionality than do processes. “Folk” conceptions of
organization thus reflect the fact that “being” is an altogether easier con-
cept to deal with than “becoming” (Juarrero, 1999), i.e., we cannot easily
cope with games of chess where the rules evolve or keep changing. For
everyday purposes, stable objects tax our limited data-processing capaci-
ties less by sparing us the need to deal with complex dynamics.
Nevertheless, given the need to cope with ever-increasing turbulence in
our environment, our new concepts of organization are gradually moving
away from “folk” approaches toward something more sophisticated.
Being, as Parmenides pointed out, is ideally surprise free. It is therefore
boring as well as misleading. Becoming, by contrast, is Heraclitian; as
flux, its trajectory is open ended. More risky than being, to be sure, but
in a turbulent world, also both more realistic and more fun.

METAPHORS, ANALOGIES, AND ABSTRACTIONS

In discussing how biologists and organization scientists use the term “organization” in their reasoning, we first need briefly to distinguish
between reasoning by means of metaphor, by means of analogy, and by
abstraction. How do these three forms of reasoning differ? Simplifying
somewhat, we might say the following:

◆ Reasoning by metaphor treats things that are different as if they were similar in one significant respect that is left largely implicit.
Metaphors achieve their effects by connoting rather than denoting.
◆ Reasoning by analogy treats things that are different as if they were
the same in a number of significant respects that are more rigorously
defined than in the case of the metaphor. Analogies denote rather than
connote.
◆ Reasoning by abstraction treats things that are different as if they were
the same in all significant respects (Dretske, 1981). As we move from
metaphor to abstraction, our reasoning shifts from the poetic to the
propositional.

There are seductive metaphors that link human activities and the biolog-
ical world, built into our everyday language and therefore difficult to ana-
lyze. We talk, for example, of the head of the firm, the body of the church,
the long arm of the law, the minister’s right-hand man, and so on. In the
middle ages the “body corporate” made its appearance. It allowed a
monarch, for example, to treat a group of people sharing a common inter-
est or concern as if it were a single unified entity. Charters of incorpora-
tion, given to craft guilds or city corporations, allowed them to relate to
the crown and other parties as a single body (Turnbull, 1997). The goals
and interests shared by members of the corporation justified the assump-
tion that this body had a mind of its own and that it could act ration-
ally with respect to such goals and interests. It was thus endowed with a
“legal personality;” the joint-stock companies of the nineteenth century
were built on the same assumption.
The dangers of metaphorical reasoning are well known. Metaphors provide tools for explanation, giving us insights rather than understanding, and
insights can often prove illusory. They must be considered points of
departure for the reasoning process rather than points of arrival.
As epistemological strategies, analogies fare somewhat better; but
how useful are they? Most such analogies are useful in illustrating a par-
ticular point or illuminating a specific problem. Let us get back to bio-
logical analogies of social entities for our examples, particularly
misleading ones. Usually the comparison is with the “folk” idea, and the
biology is very different—we don’t accept the argument that if the “folk”
idea is common to explainer and explainee, it doesn’t matter what the
reality is. Comparison with a cell, for example, is useless if you get the cell
all wrong because you’re making the comparison with the “lies-to-
children version” (Stewart & Cohen, 1997) from the elementary biology
textbook. On the other hand, some published analogies have been useful
and fairly true to the realities of biology. Victor Serebriakoff ’s “orgs”
(1975), for example, were a good way of comparing sensory and motor
physiologies of organisms and factories, and Popper’s view of knowledge
as a capacity to act (“from the amoeba to Einstein”) is also a useful image
for both economists and biologists.
The use of analogies presents two dangers. Either superficial charac-
ters are being analogized, as in the biological arena—for example, we
might compare zebras and giraffes and tigers, and conclude “camouflage
features.” The problem then arises when we move out along the yellow-
and-black axis and find wasps, whose pattern serves a totally different
function, being a conspicuous warning. Alternatively, perhaps, we might explore the deeper analogy between carnivores that have their eyes more
or less at the front where they can rangefind and concentrate on their
prey, with the open-field herbivores that are their prey and that have eyes
on the sides, where they can observe almost 360 degrees. Nice idea, but
again there are some contradictions: one must ask, for example, if herbi-
vores have any special way to look close and down at what they are eat-
ing, and then again look at wasps, which are carnivores with 360-degree
eyes.
Every analogy is biased by the theories of the two items, or the two
processes, that are being compared. Yet, we do feel that we can do more,
after we have worked out that a thermostat is like a ballcock valve on a
cistern is like an essay returned with tutor’s comments. We can say “neg-
ative feedback.” Our query in this article relates to the feeling that we can
do more when we say that, for example, cybernetics underlies both indus-
tries and ecologies. With the growth of the managerial sciences and par-
ticularly the development of open-systems theory, it began to look as if
one could go beyond metaphorical references to biology and reason more
rigorously by analogy. Are we today able to move beyond analogy and fur-
ther toward an abstract concept of organization—one that would be
shared by the biological and the social sciences?
The cybernetic insight was an important one, but it remains limited in
scope. We have moved from object to process, to feedback among
processes (cybernetics and homeostasis), and then to phase spaces and
epigenetic landscapes (homeorhesis and phase spaces). Indeed, we have
moved on further, to the recursional processes that can be discerned in
phase space (evolutionary fitness landscapes, cubes of Information
Space). Now we can imagine that one company (or ecosystem) can take a
particular developmental trajectory, adapting to a changing context as it
progresses, even as it shapes it.
Knowledge involves a selective representation of important “features”
(Stewart & Cohen, 1997) of an external or internal environment for the
purposes of choosing and acting. But choosing and acting do not of them-
selves imply a living entity. Any feedback mechanism requires some
degree of representation; indeed, that is precisely what the word “repre-
sentation” means—feedback! The structure of a thermostat, for example,
includes a thermometer (bimetal strip) that generates representations for
itself in that sense. Living things, however, usually have more complex
representations and can do more things with them than can inanimate
things like thermostats. Beyond a certain level of complexity, the choos-
ing and acting associated with representations can lead to agency.
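
A thermostat’s feedback loop makes the minimal case explicit. The sketch below is our own illustration of the point, not a claim about any particular device: the sensed temperature is the representation, and action follows from its mismatch with the set-point.

```python
# A minimal sketch (our own illustration): the sensed temperature is the
# thermostat's "representation" of the room, and the mismatch with the
# set-point is what drives the action.
def thermostat_step(sensed_temp, set_point, heater_on, deadband=0.5):
    """One cycle of negative feedback: compare the representation with the target."""
    if sensed_temp < set_point - deadband:
        return True            # too cold: switch the heater on
    if sensed_temp > set_point + deadband:
        return False           # too warm: switch it off
    return heater_on           # within the deadband: leave it alone

room, heater = 17.0, False
for _ in range(30):
    heater = thermostat_step(room, set_point=20.0, heater_on=heater)
    room += 0.4 if heater else -0.2    # crude stand-in for room dynamics
print(f"room settles near {room:.1f} degrees")
```
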

One more word about the limitations of cybernetics. Like many of the
icons that became a source of analogies for managers—the second law of
thermodynamics, homeostasis, and the balance of nature—cybernetics
was the engineering manifestation of a reductionist strategy. It was an
attempt to reduce the living to the mechanical. It was only ever inter-
ested in the feedback processes of simple systems, or those of systems
that it could eventually simplify. It neither sought nor could cope with
complex and interwoven feedback mechanisms, those that could give rise
to emergent properties. Cybernetics was not interested in emergence; it
was interested in control (Wiener, 1962). Its goal was the production of
servomechanisms, i.e., of agency at the most primitive level. So, if orga-
nizational thinking traces a shift from objects to processes—i.e., inter-
action between objects—cybernetics looks at the subset of interactions
that is characterized by feedback. But, as the feedback loops increase in
density, the interactions generate a level of complexity that puts them
beyond the reach of cybernetics, rooted as it is in the concept of mecha-
nism (Wiener, 1962).
Emergence then comes to the fore as a phenomenon in its own right.
Being a generator of hierarchies, emergence challenges the reductionist
strategies on which the concept of mechanism was originally built.
Simply put, reductionism creates simple objects, whereas emergence
creates complex objects. Social and biological organizations are instances
of complex objects, the outcome of dense interwoven processes unfolding
over time.

EMERGENCE IN BIOLOGICAL AND SOCIAL ORGANIZATION

With the time dimension now coming into the picture, it becomes neces-
sary to introduce the idea of irreversible processes. This is what the sec-
ond law of thermodynamics is all about. But the second law turns out to
be limited. Inheriting as it does the physicist’s closed-system perspective,
it implicitly equates irreversible processes with degenerative processes.
Yet, irreversibility in time refers to nothing more than the system’s loss of
memory, i.e., its access to past states. It kicks over its tracks and hence
cannot retrace its steps. The assumption that this automatically leads to
degeneration or disorder is unwarranted. Through the phenomenon of
emergence, it can also lead to order (Prigogine & Stengers, 1984; Nicolis
& Prigogine, 1989). In this sense, emergence is the antithesis of entropy.
Emergence does not actually violate the second law—it respects the idea of irreversibility—but rather gives rise to the idea that temporal processes can be the source of a local order. This order may well be paid
for in the coin of entropy generated somewhere outside the system,
something that still presents difficulties for many physicists who then
have to deal with the phenomenon of negative entropy (Schrödinger,
1967). Such a local order must be considered ontologically privileged,
i.e., it skews outcomes in favor of certain states at the expense of com-
peting alternatives. Stewart and Cohen discuss the concept of privilege5
in the biological sphere in their book Figments of Reality. We here extend
the concept of privilege even further, to all physical processes.
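
The conventional way of stating that local order is paid for elsewhere is the entropy balance of an open system; the following is our gloss on the Prigogine references above, written in standard notation rather than taken from the article.

```latex
\[
  \Delta S_{\mathrm{total}}
  \;=\; \Delta S_{\mathrm{system}} + \Delta S_{\mathrm{environment}}
  \;\geq\; 0 ,
\]
% A local decrease in the system's entropy, \(\Delta S_{\mathrm{system}} < 0\),
% is therefore admissible whenever the entropy exported to the environment
% satisfies \(\Delta S_{\mathrm{environment}} \geq \lvert \Delta S_{\mathrm{system}} \rvert\).
```
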
In sum, we argue that emergence and entropy are two sides of the
same thermodynamic coin. Both are characterized by irreversibility, but
only in the case of entropy does irreversibility lead to increasing disorder.
In the case of emergence, it leads to increasing and ordered complexity.
Both social and biological organizations are subject to these thermo-
dynamic effects. They are privileged sites of what Schumpeter (1961)
labeled “creative destruction.” At such privileged sites, we witness a
replay of the ancient battle between Parmenides and Heraclitus, that is,
between the conflicting forces of stability and instability. According to
Alicia Juarrero, Parmenides won (Juarrero, 1999). Yet, with the spread of
the internet and the evolution of ever more turbulent organizational envi-
ronments, it may soon be time for Heraclitus to make a comeback.
Emergence is the ultimate Heraclitean process. It is a generator of
ontological hierarchies. It first builds these from the bottom up and then
these, in turn, control the resulting articulated systems from the top
down. There is a range of levels over which comparisons are often made
between biological hierarchies and social ones. Thus, evolution, ecolo-
gies, colonies, bodies, cells, bacteria, viruses, prions, etc. find echoes in
our thinking on human organizations at the level of nations (ecologies),
industries (colonies), firms (bodies), individual divisions or departments
(cells), bacteria (employees, ugh!), communications (viruses?), prions
(memes?).
At each level in a system’s evolving organizational hierarchy there is
the challenge of establishing a unit of agency and its structure, as well as
its characteristic processes and the constraints that act on these. The
processes and constraints then change the rules for the system’s next iter-
ation. Recursion produces changes which are progressive, which build
difference each time around. Emergence is one possible outcome—
entropy being the other—when the system’s history is forgotten: what-
ever information might have been available as to its causal structure is lost to view. One must bear in mind, however, that the view in question
is only ever the observer’s. Emergence is an observer-dependent phe-
nomenon, discernible only from outside the system. Yet, remove the
observer and you get what Thomas Nagel has labeled “the view from
nowhere” (Nagel, 1989)—we must observe to discern emergence,
because many systems, like a swimmer in a current, cannot “notice” their
context changing.
Emergence exploits the contextual properties of structures and/or sys-
tems. It also exploits the degrees of freedom available to systems. Where
the elements of a system are tightly coupled, for example, the scope for
variation—and hence for emergent processes to take root—is quite lim-
ited (Boisot, 1998). It resides, if anywhere, in the combinatorial potential
of its constituent elements: both the properties and identity of the system
are largely determined by the combinations that it can achieve. Where,
on the other hand, the system is characterized by a loose coupling of its
elements, we get not only combinatorial power, but also behaviors, that
is, combinations that vary over time. Thus the binding strength of sub-
atomic particles is so high that not only does it allow little combinatorial
potential and hence variation—they are too tightly coupled for that—but
it also allows little in the way of behaviors. The weaker binding strength
of atoms, by contrast, gives us all the potential combinations available to
us through chemistry, together with a variety of behaviors that can now
take place within and between molecules. Some of these behaviors are
complex enough to allow organic molecules and autocatalytic processes
to emerge (Kauffman, 1993, 1994). Finally, when transmitted information,
rather than energy, becomes the primary binding agent—as it is between
autonomous organisms that have representations of each other and can
signal to each other—potential variations register increasingly, not at the
level of what the thing is, but of what the thing does, i.e., as choices pred-
icated on sequences of behaviors. Iterative behavior evolves, too.
We can frame the tight/loose coupling issue as a relationship between
two kinds of resource that are needed to establish and then stabilize inter-
actions between objects: binding energy and what we will call binding
information. Hydrogen and oxygen rely primarily on the first of these; our immune system relies primarily on the second. Energy and
information are both required to keep a system together, but in different
mixes. Thus, for example, whereas simple systems will rely mostly on
binding energy to keep themselves together, complex systems will look to
binding information to maintain their integrity. This can be illustrated by
means of a simple graph. We can place the system’s binding energy on the x axis and its binding information on the y axis. In the scheme just out-
lined, the amounts of binding energy and binding information required
to keep the system together are inversely related. With very high binding
energies, the elements of the system covary, so that knowledge of the
state of one element is sufficient to give you knowledge of the state of the
system as a whole. Little binding information is needed. As the binding
energy decreases, however, the number of possible configurations that
the system can adopt goes up and so does its potential information con-
tent. It then requires a larger amount of binding (constraining) informa-
tion to keep it within a range of viable configurations. The graph allows
us to conceptualize the distinction between tightly coupled and loosely
coupled systems in information terms. It also allows us to distinguish sys-
tems from aggregations or piles. What we call objects, in effect, corre-
spond to tightly coupled systems, whereas what in this article we refer to
as organizations correspond to loosely coupled systems.6
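
The inverse relationship can be given a schematic numerical form. The sketch below is our own illustration under crude assumptions (treating binding energy as the fraction of elements forced to covary): the tighter the coupling, the fewer configurations remain accessible, and the less binding information, measured in bits, is needed to hold the system within a viable range.

```python
import math

# A schematic calculation (our own, under simple assumptions): the number of
# viable configurations falls as binding energy rises, and the binding
# information needed to constrain the system is the log of the configurations
# that remain open to it.
def accessible_configurations(n_elements, states_per_element, binding_energy):
    # Higher binding energy -> elements covary -> fewer independent states.
    independent_elements = max(1, round(n_elements * (1 - binding_energy)))
    return states_per_element ** independent_elements

for binding_energy in (0.9, 0.5, 0.1):   # from tightly to loosely coupled
    W = accessible_configurations(n_elements=10, states_per_element=4,
                                  binding_energy=binding_energy)
    binding_information = math.log2(W)   # bits needed to constrain the system
    print(f"binding energy {binding_energy:.1f} -> "
          f"{W} configurations, {binding_information:.1f} bits of binding information")
```
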
Many problems in the history of organizations stem from attempts to
treat organizations as if they were objects. We submit that the reason for
this has to do with the cognitive challenge of dealing with loosely coupled
systems. Owing to their higher degrees of freedom, they are inherently
more complex than objects, and such complexity is not intuitively acces-
sible. Organizations tax our powers of abstraction to a much higher
degree than objects.
We can perhaps now better understand the nature of emergence as
ontological privilege. As the number of possible states that a system can
adopt goes up, it becomes increasingly difficult to understand why it first
“chooses” certain states over others—i.e., it “bifurcates”—and then com-
mits to these by subsequently self-organizing around them. We cover up
our incomprehension by saying that novelty has been introduced into the
system. In effect, though, the system’s “choices” reflect nothing more
than an extreme sensitivity to the numberless discontinuities to which its
own complexity gives rise.

TO SUMMARIZE

How far do the above remarks apply to both biology and the social sci-
ences? To what extent do they suggest a set of abstract organizational
principles common to both? We do not attempt to give a definite reply to
these questions. Instead, in the form of bullet points, we offer a number
of pointers:

◆ A set of interrelated elements exhibit both recurrence and stability in their relationships; complexity here is but a measure of the organiza-
tion or disorganization present in such relationships. Binding energy
and binding information both contribute to recurrence and stability
and, in so doing, they foster emergent processes.
◆ In both biological and social systems, hierarchies are bottom-up
emergent outcomes of complex organization. In both cases, however,
once they have been created, hierarchies act in a top-down fashion to
constrain and thus to further organize the system. This is also true of
social networks. Sooner or later, they exhibit hierarchy, and such hier-
archy then helps to shape the organizational behavior of actors in the
network. Iterative behavior will change the rules through time.
◆ In such a description, the antithetical relationship between entropy
and emergence needs to be clarified. Both reflect the irreversible
nature of the passage of time and the erasure of memory within a sys-
tem. But, as Prigogine has shown in his work on dissipative structures,
entropy and emergence lead off in opposite directions: the first
toward increasing disorder, the second toward increasing order.
Entropy has largely been absent from the discourses on social and
economic organization, and emergence has only recently been admit-
ted to it, but both need to be treated together as two sides of the same
coin. The idea of ontological privilege depends on it.
◆ In biology, the entropy concept has been applied both to energy
processes (Brooks & Wiley, 1986) and to information processes (Atlan,
1979; Ayres, 1994; Küppers, 1990). We think that these approaches
have been unnecessarily tethered to second-law degenerative ideas.
It follows from what we have said above that the concept of construc-
tive emergence should also be applied to energy and information
processes respectively.
◆ Above a certain level of complexity, biological systems have ways of
representing their environment to themselves. They now respond to
their environment partly directly and partly indirectly via the repre-
sentations that they have of it. This will be equally true of social
organizations. In the case of social organizations, however, there is a
choice of representations to which one can respond. That is to say,
representations compete with each other in a meme-like fashion for
ascendency (Dawkins, 1986; Blackmore, 1999). Which representation
is finally selected reflects the outcome of a political process. There is
a seductive analogy here with Dennett’s “pandemonium” (Dennett,
1997).

◆ In both cases, these representations form one of the terms of what we might call a “comparator;” the other term is given by actual outcomes
as they occur in the real world. Serebriakoff (1975) developed the
ideas of sensorium and motorium for all “orgs,” and the brain was a
“comparator” so that behavior could become appropriate. The com-
parator shapes expectations and dispositions to react and behave in
particular ways. It is the mismatch between expectations and experi-
enced outcomes, whether recorded biologically or cognitively, that
shapes behavior. In economic organizations, planning and control sys-
tems should fulfill the function of comparators.
◆ A comparator is thus an embodiment of the knowledge held by the
organization. It is also the main source of agency in the system. But
agency implies choice and choice in turn implies alternative possible
states of the system. Thus, agency only makes sense in complex adap-
tive systems that are capable of generating multiple alternative repre-
sentations for themselves. Only such systems are capable of agency.
◆ So we see that knowledge and agency are closely related concepts. In
some deep sense, agency is predicated on the possession of know-
ledge, taken here to mean a disposition to act in one particular way
when faced with alternative possibilities. We do not need to get “epis-
temological” about this. We could simply take knowledge to be that
subset of your beliefs on which you are prepared to act.
◆ We can then hypothesize that, to the extent that structure conditions
a disposition to act, structure is itself an embodiment of knowledge.
The implication is that any structured organization is also an embod-
iment of knowledge. Not only does an organization have knowledge;
an organization is knowledge. The analogy with organisms is again
helpful here: an organism is a process, a capacity to act that imposes
its order locally—a whirlpool or a fountain rather than a rock.
◆ If we follow Brooks and Wiley in arguing that biological systems are
concerned to minimize their rate of entropy production per unit of
work performed (Brooks & Wiley, 1986), we can see something simi-
lar happening in social and economic organizations as they relent-
lessly seek out greater efficiencies. This is good 1960s thinking. It was
applied both to their energy processes and to their information
processes. Yet, we now know that a quest to minimize entropy pro-
duction must be balanced out with a need to handle variety, as itera-
tion proceeds and the organization changes its own context.
Emergence builds on such variety; a system that was totally efficiency
oriented could never evolve. Thus, ultimately, it is the turbulence with which a system has to deal that determines how it will balance
out the competing claims of entropic and emergent processes. (The
cost accountant is precisely the spanner in the recursive works!)
◆ Minimizing entropy production in both biological and social systems
is an economizing activity. It takes place in two steps: first, substitute
data processing for energy processing through cumulative learning;
second, reduce the data-processing load through acts of data structur-
ing. Boisot (1998) takes data structuring to consist of codifications and
abstractions. This second step requires insight, which is itself the out-
come of an emergent process. Once more, we find entropy and emer-
gence to be inextricably intertwined.

CONCLUSION
Until the last two decades of the twentieth century, economists remained
tethered to an energy metaphor drawn from nineteenth-century physics
(Mirowski, 1989). Alfred Marshall had hinted that biology would be a
more appropriate source of concepts for economists than was physics, but
the hint was never taken up. Today, evolutionary economics has become
a branch of economics in its own right (Vromen, 1995). The study of
human organizations has aligned with biological thinking earlier and
more extensively than has physics (Aldrich, 1999). But has it moved
beyond metaphor?
Physicists, while trying to create a toy universe based on linear math-
ematical thinking, committed themselves to a Boltzmann heat-engine
thermodynamics that led to a heat–death picture of the universe and an
interpretation of living things as feeding on negentropy. Yet, this was but
one choice open to them. If, for example, they had included gravity as
well as perfectly elastic billiard balls in their thinking, their closed system
would have gone up in order instead of down! Early conceptions of
organisms as machines (Descartes, 1989) reflected the billiard ball view,
but most biologists have now moved beyond it.
Von Bertalanffy developed systems theory mainly to deal with homeo-
stasis in physiology. He realized that feedback processes implied that time had a direction to it, and that physical “laws,” unlike those of Newtonian systems, could not simply be read backward and forward in time (compare his position with that of
Schumpeter, e.g., in Boisot, 1998). From Newton you could explain orbits
and predict Neptune from anomalies in the orbit of Uranus. From endocrinology you
could explain certain behaviors, especially reproductive behaviors and
pathologies, and occasionally predict: the hormone activin, for example, was successfully predicted, but it excited less media attention than the
discovery of Neptune. From Darwin, however, although you could explain
Galapagos finch species, you could not predict the appearance or the
characteristics of another one. Understanding and explaining carried
both different meanings and different requirements for physicists on the
one hand, and for biologists, geologists, or astrophysicists on the other.
The former are concerned to treat events that are ostensibly different as
if they were the same, i.e., they abstract (Dretske, 1981; Boisot, 1998).
The latter are more comfortable taking events that ostensibly look the
same—in that sense, they also rely on abstractions—but then focusing on
what makes them unique.
Biologists, in sum, have come to realize that individuals, including
humans, all differ genetically by quite a lot: not only does a species not
have a single genetic blueprint, but all its individuals work in different
ways. They are not Ford Model Ts but handmade cars, each adjusted in a
different way to give collectively much the same results in spite of the
presence of good or bad mutations. Likewise, at the level of human
organizations, it became apparent that standardized blueprints for com-
pany startups did not work as well as strategies that led to differences.
The resulting heterogeneity effectively reduced competition and
improved a startup’s chances of survival.
The new sciences of complexity explain but do not predict. Prediction,
as conventionally understood, is beyond their reach. Such a limitation is
inherent in their subject matter. Yet, can a science that does not predict
be useful? The “failure” of complexity sciences to predict is due at least
in part to their dependence on a loose coupling among the elements of
the systems that they study for versatility in behavior. Couplings must be
loose enough to allow different—and sometimes novel—behaviors to
appear at successive iterations. Hence, no prediction. Such loosely cou-
pled systems lead to evolving and emergent processes because their com-
binatorial potential is greater.
Complexity sciences have self-organizing processes as one of their
central concerns. With the passage of time, the universe tends to compli-
cate itself through a process of symmetry breaking, changing its own rules
as it does so (Kauffman, 2000). This “evolution” of purely physical
processes has led physicists to start thinking more like biologists. Yet,
what applies to physical processes also seems to apply with even more
force to social ones. As organizations evolve, they become more complex,
i.e., more informative. Organizations thus tend to metamorphose over
time in ways that cannot be predicted ex ante. Indeed, some would argue that we are witnessing precisely such a metamorphosis in the realm of social organizations as a result of the enhanced interconnectivity made
possible by the internet.
The fact that both the “hardest” of the natural sciences and the “soft”
social sciences are now both focusing on emergent processes and self-
organization suggests that we are indeed gradually moving away from
metaphor and toward a single science of organization. Biological forms of
thought are thus increasing the size of their basin of attraction and meta-
morphosing as they do so.

NOTES
1 Mechanical and organic solidarity correspond to the distinction that ecologists draw
between commensalism—i.e., competition and cooperation between similar organ-
isms—and symbiosis—i.e., cooperation between dissimilar organisms. For a fuller dis-
cussion of the distinction, see Aldrich (1999).
2 Marx, in the distinction that he drew between the “sphere of production” and the
“sphere of exchange,” was making essentially the same point. He drew different con-
clusions, however, to those put forward here; see his Capital.
3 Note here that the employees have a conception, an idea of what the organization
should be; this may differ from the managers’ conception. And both probably differ
from the consultant’s conception of what the organization should be; the consultant probably sees a different context for it and so understands it differently.
4 Maturana and Varela once more reproduce the separation between internal and exter-
nal organization that economists once imposed on the social sciences. They thus pro-
mote the idea of an organization as an object with an inside and an outside.
Organizational autonomy is maintained in part by closed boundaries. This will be true
of some but not all organizational forms. In some cases, the integrity of an organiza-
tional form is achieved by an attractor exercising its influence over a field. Here, there
is no clear boundary that separates the inside from the outside, only a gradient of field
strength.
5 The extended usage of this concept is introduced in Cohen (1977).
6 Clearly, all “objects” are characterized by some measure of organization. We recognize
that organization is a matter of degree. Some objects can be highly organized without
constituting organizations for our purposes.

REFERENCES
Aldrich, H. (1979) Organizations and Environments, Englewood Cliffs, NJ: Prentice Hall.
Aldrich, H. (1999) Organizations Evolving, Thousand Oaks, CA: Sage.
Ashby, W. (1954) An Introduction to Cybernetics, London: Methuen.
Atlan, H. (1979) Entre le cristal et la fumée: Essai sur l'organisation du vivant, Paris: Seuil.
Ayres, R. (1994) Information, Entropy and Progress: A New Evolutionary Paradigm, Woodbury, NY: AIP Press.
Blackmore, S. (1999) The Meme Machine, Oxford, UK: Oxford University Press.
Boisot, M. (1998) Knowledge Assets: Securing Competitive Advantage in the Information
Economy, Oxford, UK: Oxford University Press.
Brooks, D. R. & Wiley, E. O. (1986) Evolution as Entropy: Towards a Unified Theory of Biology, Chicago: University of Chicago Press.
Burns, T. & Stalker, G. (1961) The Management of Innovation, London: Tavistock.
Carroll, G. (1987) “Publish and Perish: The Organizational Ecology of Newspaper
Industries,” Monographs in Organizational Behavior and Industrial Relations, Vol. 8.
Chandler, A. (1962) Strategy and Structure: Chapters in the History of the American
Industrial Enterprise, Cambridge, MA: MIT Press.
Cilliers, P. (1998) Complexity and Postmodernism: Understanding Complex Systems,
London: Routledge.
Coase, R. (1937) “The nature of the firm,” Economica N.S., 4: 386–405.
Cohen, J. (1977) Reproduction: a Textbook of Animal and Human Reproduction, London:
Butterworth.
Dawkins, R. (1982) The Extended Phenotype: The Gene as the Unit of Selection, Oxford, UK:
Oxford University Press.
Dawkins, R. (1986) The Blind Watchmaker, Harmondsworth, UK: Penguin.
Dawkins, R. (1990) The Selfish Gene, Oxford, UK: Oxford University Press.
Dennett, D. (1997) Kinds of Minds: Towards an Understanding of Consciousness, London:
Basic Books.
Descartes, R. (1989) Discourse on Method, Cambridge, UK: Prometheus Books.
Dretske, F. (1981) Knowledge and the Flow of Information, Cambridge, MA: MIT Press.
Durkheim, E. (1933) The Division of Labor in Society, New York: Free Press.
Elias, N. (1939) The Civilizing Process: Vol. 2. State Formation and Civilization, Oxford, UK:
Basil Blackwell.
Gould, S. J. (1990) Wonderful Life: The Burgess Shale and the Nature of History, New York:
W.W. Norton.
Hannan, M. T. & Freeman, J. (1989) “Organizations and Social Structure,” in Organizational
Ecology, Cambridge, MA: Harvard University Press.
Holton, G. (1973) Thematic Origins of Scientific Thought: Kepler to Einstein, Cambridge,
MA: Harvard University Press.
Juarrero, A. (1999) Dynamics in Action: Intentional Behavior as a Complex System,
Cambridge, MA: MIT Press.
Katz, D. & Kahn, R. (1978) The Social Psychology of Organizations, New York: Wiley.
Kauffman, S. (1993) The Origins of Order, Oxford, UK: Oxford University Press.
Kauffman, S. (1994) At Home in the Universe, New York: Viking.
Kauffman, S. (2000) Investigations, Oxford, UK: Oxford University Press.
Kuhn, T. (1962) The Structure of Scientific Revolutions, Chicago: University of Chicago Press.
Küppers, B.-O. (1990) Information and the Origin of Life, Cambridge, MA: MIT Press.
Lawrence, P. & Lorsch, J. (1967) Organization and Environment: Managing Differentiation
and Integration, Homewood, IL: Richard Irwin.
Lettvin, J. Y., Maturana, H. R., McCulloch, W. S., & Pitts, W. H. (1959) “What the frog’s eye
tells the frog’s brain,” Proceedings of the IRE, 47(11, Nov): 1940–59.
Maturana, H. R. & Varela, F. J. (1992) The Tree of Knowledge: the Biological Roots of Human
Understanding, Boston: Shambhala.
Mintzberg, H. & Waters, J. (1985) “Of strategies, deliberate and emergent,” Strategic
Management Journal, July–Sept.
Mirowski, P. (1989) More Heat Than Light: Economics as Social Physics, Physics as Nature’s
Economics, Cambridge, UK: Cambridge University Press.
Nagel, T. (1989) The View From Nowhere, Oxford, UK: Oxford University Press.
Nicolis, G. & Prigogine, I. (1989) Exploring Complexity: An Introduction, New York: W.H.
Freeman.
Popper, K. R. (1972) Objective Knowledge: An Evolutionary Approach, Oxford, UK: Clarendon Press.
Prigogine, I. & Stengers, I. (1984) Order out of Chaos: Man’s New Dialogue with Nature,
Toronto: Bantam Books.
Schrödinger, E. (1967) What is Life?, Cambridge, UK: Cambridge University Press.
Schumpeter, J. (1961) The Theory of Economic Development: An Inquiry into Profits,
Capital, Credit, Interest and the Business Cycle, London: Oxford University Press.
Serebriakoff, V. (1975) Brain, London: Davis-Poynter.
Stewart, I. & Cohen, J. (1997) Figments of Reality: The Origins of the Curious Mind,
Cambridge, UK: Cambridge University Press.
Toffler, A. (1980) The Third Wave, London: Advent Books.
Tönnies, F. (1955) Community and Association, London: Routledge and Kegan Paul.
Turnbull, S. (1997) “Corporate governance: Its scope, concerns and theories,” Corporate
Governance, 5: 180–206.
Von Bertalanffy, L. (1976) General System Theory: Foundations, Development, Applications,
New York: George Braziller.
Vromen, J. (1995) Economic Evolution: An Enquiry into the Foundations of New Institutional
Economics, London: Routledge.
Waddington, C. H. (1957) The Strategy of the Genes: A Discussion of Some Aspects of
Theoretical Biology, London: George Allen and Unwin.
Weber, M. (1978) Economy and Society, G. Roth & C. Wittich (eds), Berkeley, CA:
University of California Press.
Wiener, N. (1962) Cybernetics: Communication and Control in Animals and Machines,
Cambridge, MA: MIT Press.
Williamson, O. (1975) Markets and Hierarchies: Analysis and Antitrust Implications,
Glencoe, IL: Free Press.
Wittgenstein, L. (1961) Tractatus Logico-Philosophicus, London: Routledge and Kegan Paul.
Wittgenstein, L. (1968) Philosophical Investigations, Oxford, UK: Basil Blackwell.

EMERGENCE, 2(4), 136–150
Copyright © 2000, Lawrence Erlbaum Associates, Inc.

Complex Information Environments: Issues in Knowledge Management and Organizational Learning
Duska Rosenberg

Making the best use of knowledge as one of the most important organizational resources is a major aim of
sound management practice, striving to reduce risk, duplication of effort, and uncertainty in the day-to-day work of an organization (Galbraith, 1994; Sanchez & Heene,
1997). A survey of literature on organizational requirements for know-
ledge management indicates that the matter is also of some theoretical
importance, having brought to light several reasons for the attention that
knowledge management is currently receiving.
Knowledge sharing, learning from experience, and open access to
information about the ways an organization works are made possible by
advanced information and communications technologies (ICT). ICT have
increased access to and decentralization of informational resources and
thus stimulated organizational learning (Walsh & Ungson, 1991).
Furthermore, the emergence of the global business community through
ecommerce and ebusiness has brought to light considerable benefits that
enhanced conditions for partnership and trust can bring to distributed
organizations. In particular, their access to global markets is considered
to be the key to their success, as it enables timely and effective response
to changing customer demands (Nonaka, 1991).
The concepts of knowledge management and organizational learning have also been related to the particular, situated aspects of day-to-day working practices of a distributed organization. They are applied to the
study of teamwork in physical, media, and virtual environments, where
cognitive and social aspects of organizational processes determine the
requirements for technological platforms and applications (Davenport et
al., 1998). Studies of individual workers’ circumstances, in the workplace,
at home, and on the move, emphasize their need to access an organiza-
tion’s knowledge networks, through both formal and informal channels
(Baker, 1994).
In this context, there is a clear distinction between “hard” and “soft”
knowledge. Hard knowledge is treated as some kind of commodity that
can be exchanged, transferred, and coded into documents, records,
charts, and other similar informational artefacts (Teece, 1998).
Increasingly, however, we consider the importance of “soft” knowledge,
the implicit communication and tacit understanding of the particular sit-
uations in which knowledge is put to use. Tacit knowledge can be trans-
mitted through concrete examples, experience, and practice in
personal interactions (Boisot et al., 1997).
The work presented in this article is based on the study of tacit know-
ledge that is communicated within and between teams of knowledge
workers “on the move.” Their experiential knowledge is shared through
personal interactions in real-life tasks, carried out by agile teams in con-
struction who work together on a joint project while being employed by
different subcontracted firms. Construction teams are agile in the sense
that they have flexible, ad hoc organizational structures, which increase
their mobility and their responsiveness to continuously changing circum-
stances on the construction site. Although they have strict formal channels
of communication and established procedures and protocols, most of their
day-to-day working life is oriented toward informal social networks
created through experience. It is those informal networks that help indi-
viduals cope with the continual change that characterizes their work envi-
ronment, and enable them to work as part of distributed teams, across
physical, cultural, and organizational divides. For these reasons, agile
teams in construction are taken as an example of a distributed organization
held together by informal social networks that support shared tacit knowl-
edge of the ways the teams can work together effectively.
The study builds on recent research within the academic community
concerned with the study of distributed organizations, but focuses on a
practical knowledge management project, CICC.1 The main aim of the
project was to improve relationships between people, technology, and information content in the real-world environment of a large construction project. CICC technologies were designed and implemented bearing in
mind that introducing them into a real-life working environment requires
sensitivity, awareness, and understanding of the human dimensions of
construction teams’ work.
As teamwork is essentially social in nature, the CICC technologies
predominantly address the need for increased connectivity between peo-
ple in order to improve knowledge sharing within and between con-
struction teams. Knowledge management applications developed within
CICC include intranets, organizational memories, and general know-
ledge management tools. Intranets enable networked access to people
and the informational networks to which they belong (Rosenberg, 2000).
Knowledge repositories and organizational memories provide access to
the pool of knowledge that an organization has about its past projects
(Perry et al., 1999). Management tools are focused on knowledge net-
working practices (Rosenberg & Holden, 2000), recognizing not only the
strategic importance of data resources that underpin the creation of
knowledge assets, but also the need to support exploratory learning of
individuals in organizations that “will enhance their personal effective-
ness and their overall capacity to interpret and make sense of experi-
ences” (Boisot, 1999: 266).

COORDINATING AGILE TEAMS

Agile teams working on a shared problem face challenges at two levels. First, the usual composition of such teams requires that participants have
diverse expertise and thus their distinct contributions to the shared task
have to be coordinated. Second, agile teams are created to work together
on a specific project and are dissolved when the project is finished. Thus,
the team members rarely know each other well prior to starting work on
a joint task, and consequently their individual perspectives on the collab-
orative situation must be coordinated “on the fly.”
In the study of coordination of work within agile teams, the focus was
on the effect that these two factors—diverse expertise and team agility—
have on the coordination of joint activities. The main aim was to find out
how various informational resources available in the work environment
aid (or obstruct, as the case may be) the coordination of both task-
oriented and communication-oriented aspects of collaboration. Particular
attention was paid to the role of advanced communications technology in
facilitating such coordination when the teams are not only agile, but also

138
VOLUME #2, ISSUE #4

geographically dispersed. New technologies such as multimedia, mobile


telephones, wearable computers, and video have the potential for sup-
porting working environments characterized by professional and cultural
diversity.

It is important that this technology fits in unobtrusively with the working practices of particular communities, supporting the interactions and activ-
ities that have naturally evolved as part of social life in the workplace.
(Winograd & Flores, 1986)

Coordination of distributed work activities is crucial for the success of a collaborative task. Such coordination is also central to the study of col-
laboration within CSCW (computer-supported cooperative work) where
the focus is on “how co-ordination emerges, how it is maintained and
what happens when it breaks down” (Rogers, 1993: 295). In particular, the
entire working environment is examined to find the features that facilitate
coordination of distributed activities. The most important of these are the
shared artifacts, also referred to as common artifacts (Robinson, 1993),
boundary objects, communicative artifacts, collective cognitive artifacts,
or common objects, that are used by teams to create and maintain mutual
understanding of the work process.
Shared artifacts provide the resources for people to focus their col-
laborative activities and to obtain facilitative feedback from each other
about the current state of the activity. They can be regarded as loci, or
physical manifestations of what is essentially purposeful social action. For
example, a simple common object such as a set of pigeonholes in a uni-
versity department has a structure consisting of slots for holding mail,
documents, and messages for individual staff. It also has an informative
function, since it makes it easy to see if an individual has any mail. A
pigeonhole that is full up may “mean” that the owner is not back from a
vacation or conference, or perhaps that there is substantial marking to be
done. The set of pigeonholes also shows at a glance the size of the depart-
ment, thus providing some information about both individuals and their
organization.
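
The structural and informative functions of such a common artifact can be caricatured in a few lines of code; the sketch below is our own illustration and not part of the CICC work.

```python
# A toy rendering (ours, not from the CICC project) of the pigeonholes as a
# common artifact: a structure of named slots whose visible state carries
# information for the whole group as well as for each owner.
class Pigeonholes:
    def __init__(self, staff_names):
        self.slots = {name: [] for name in staff_names}    # structural function

    def deposit(self, name, item):
        self.slots[name].append(item)

    def at_a_glance(self):
        """Informative function: what anyone passing by can read off."""
        return {name: len(items) for name, items in self.slots.items()}

holes = Pigeonholes(["Ann", "Bo", "Cy"])
holes.deposit("Bo", "exam scripts")
holes.deposit("Bo", "conference mail")
print(holes.at_a_glance())    # a full slot may "mean" absence, or marking to do
```
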
The starting assumptions behind the work presented in this article are
that shared artifacts are the key to investigating the coordination of dis-
tributed activities, and that this coordination takes place at two levels
simultaneously. At the task level, participants coordinate their perspec-
tives on the job in hand in order to establish the common ground for joint
problem solving. At the communicative level, participants negotiate in
order to reach agreements on how what is said and done should be inter-
preted. The distributed activities are thus coordinated by participants
monitoring the progress, noting the changes, and providing feedback
about their own actions and reactions, relying on the shared artifacts for
this purpose.
Personal interactions follow a general pattern: the participants estab-
lish contact, make their situation visible to others, and then together
build a shared environment where they cooperate to solve the current
problem (cf. Gumperz & Hymes, 1972). Participants focus on shared
artifacts in the process of negotiating the meanings of words or images
presented there (Robinson, 1993). They thus create the common ground:
“a sine qua non for everything we do with others … the sum of [the par-
ticipants’] mutual, common or joint knowledge, beliefs, and suppositions”
(Clark, 1996: 92).
Within the boundaries of the common ground, the participants can
identify the objects referred to, come to understand each other’s goals
and purposes, cooperate, and coordinate their actions. Indeed, common
ground is regarded as fundamental to all coordination activities and to
collaboration (Clark & Brennan, 1991). One of the key research questions
for the CICC project was how people create the common ground in situ-
ations where the contact between them is influenced or mediated by
technology (Rogers, 1993; Hindmarsh et al., 1998; Dourish & Bellotti,
1992).
Assuming that common ground is fundamental, it is important for the
designers of technology to understand what representations should be
employed in order to design usable interactive technology. In her analy-
sis of technology-based shared artifacts, Rogers (1993: 296) distinguishes
between two main kinds of mediating mechanisms:

explicit representations which are intended to provide specific informa-
tion about the status and properties of actions and artifacts that constitute
the work, and implicit representations where an action or some form of
communication signifies a change in status of an object or process.

Some of the mediating mechanisms have universally accepted meaning,
while the meaning of others is negotiated and interpreted anew in each
particular context. A further dimension to the mediating mechanisms is
added in the practice theories of culture that emphasize the importance
of language use in “cultural meaning making.” Placing language at the
center of the creation of social meaning and the consequent building of
communities, practice approaches also take into account the resources
available in the community’s environment. The resources that enable
participants to create distinct communities include various designed arti-
facts, such as documents, instruments, computer applications, and others,
that are at the center of personal interactions among team members. In
such situations, the technology provides the shared artifact needed for
the coordination of distributed activities, and the nature of computerized
representations, both explicit and implicit, becomes a critical issue. Such
technology is expected to facilitate not only access to the data that an
organization has, but also the processes that shape human
cognition and communication in the significant social and cultural con-
texts, thus fitting in with normal human activities in the workplace.

THE PEOPLE AND INFORMATION FINDER

The working assumption behind the design of usable interactive technol-
ogy within the CICC project was that it forms an integral part of the
entire information environment created by the interaction of people,
organizations, and artifacts, where information is generated, exchanged,
stored, processed, internalized, and externalized. An interactive multi-
media technology prototype, the People and Information Finder (PIF),2 was
designed to provide explicit representations of people, their organiza-
tions, and teams, as well as the projects on which they were working.
Multimedia facilities were used to increase the richness of the informa-
tion about the PIF people and their workplace. These included visual
channels with both abstracted and image-based data. Such increased
media richness was expected to enhance the creation of common ground
among the PIF users, thus enabling tacit knowledge sharing in commu-
nication mediated by the PIF. The main research questions concerned
the content of multimedia displays and explicit representations, as well as
their informativeness and implicit representations.
As the main aim was to improve communications and to offer a richer
information environment for the PIF people, a key research issue was to
find out how the explicit and implicit representations in the PIF could
help them to initiate and maintain contact with one another. Such a facil-
ity is particularly useful in situations where a stable organizational form
is absent, as a team is created for the purposes of a particular project or a
task and is dissolved once this is completed. The work on the task itself
represents a period during which cooperation is vital. Problems of lack of
shared culture have to be addressed and this is usually done with a series
of induction meetings and seminars at the start of the project. This is a
vital yet time-consuming team-building stage, and is undermined by
individuals joining projects at later stages with consequent integration
problems (Rosenberg et al., 1997). Poor communication causes serious
problems in day-to-day activities that require continuous cooperation and
coordination (Fruchter, 1998).
One of the key roles intended for the PIF was to become an impor-
tant shared artifact and a vehicle for soft knowledge management prac-
tices, designed to help those who need to learn about a given project and
its organization. Such learning would take place in “communication at
arm’s length” mediated by the PIF, which would enable people to assim-
ilate the project culture without disturbing the established flow of activi-
ties on the site. The social relevance of the PIF as the shared artifact
could then be analyzed as a matter of system-in-use that systematically
interacts with the organization that uses it.
As a technological artifact, the PIF is a web page populated with infor-
mation about its owners, using standard web technology and program-
ming in HTML underpinned by JavaScript. It comprises a technological
platform; more precisely, a configuration of communications technologies
such as telephone, network technology such as the web, and advanced
interactive technology, such as virtual and augmented reality. It thus pro-
vides an integrated service that helps the user to choose between various
channels of communication, from voice link to videoconferencing, and to
browse through a collection of similar pages created by different owners
according to a predesigned template.
It also provides a richer experience of people and their work environ-
ment in a particular organization; in other words, it supports the creation
of common ground between the PIF people. They obtain information
from people, from informational resources such as documents or data-
bases, or from the real world where PIF owners live and work, and the
structure of the PIF page reflects this. It is divided into three main com-
ponents. The first contains information that helps visitors to locate the
owners in their physical space, giving name, address, phone number, and
email address, as well as photographs and video images showing their
offices, desks, and terminals. This makes it possible for the visitor to
choose how to contact them, via telephone, ordinary mail, email, or
videoconferencing. The second, “nearest neighbors,” provides informa-
tion about the organizational space that a team occupies, giving similar
information about accessing other PIF owners who are engaged in com-
parable or related work. The third component describes the owners’ proj-
ects and activities, both present and past, comprising the part of the real
world in which the owner’s work is done.
As a knowledge management tool, the PIF provides a facility for
accessing information about an organization or a team in the construction
industry, the people who work for it, and the activities in which they
engage. The status of information ranges from personally owned (or pri-
vate), which is accessible only to the owner, to information that is distrib-
uted (or shared) between members of a group. Alternatively, information
may be public (or visible), which the owner is willing to show to the gen-
eral public or to a selected group of collaborators. This nonproprietary
information forms part of the informational resources used by the organ-
ization or the group as a whole.3
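
As a purely illustrative aid, the sketch below models such a page record in Python: three components and a per-item visibility status, as described above. It is not the CICC prototype’s actual code; the class names, fields, and the visible_to_visitor helper are assumptions introduced here.

```python
from dataclasses import dataclass, field
from enum import Enum

# Purely illustrative sketch of a PIF-like page record (not the CICC
# prototype's code); class names, fields, and helper are assumptions.

class Visibility(Enum):
    PRIVATE = "private"   # accessible only to the owner
    SHARED = "shared"     # distributed among members of a group
    PUBLIC = "public"     # visible to the general public or selected visitors

@dataclass
class Item:
    label: str
    content: str
    visibility: Visibility = Visibility.PUBLIC

@dataclass
class PIFPage:
    contact: list = field(default_factory=list)    # 1: locating the owner in physical space
    neighbors: list = field(default_factory=list)  # 2: "nearest neighbors" in organizational space
    projects: list = field(default_factory=list)   # 3: present and past projects and activities

    def visible_to_visitor(self):
        """Items a casual visitor may see (public items only)."""
        everything = self.contact + self.neighbors + self.projects
        return [item for item in everything if item.visibility is Visibility.PUBLIC]

page = PIFPage(
    contact=[Item("email", "owner@example.org"),
             Item("office camera", "live view of the office", Visibility.SHARED)],
    projects=[Item("current project", "site redevelopment")],
)
print([item.label for item in page.visible_to_visitor()])  # ['email', 'current project']
```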
The PIF thus has the potential for enabling innovation in cooperative
work within agile teams. In order to understand more about this poten-
tial, a preliminary study of communication mediated by the PIF proto-
type was carried out in interviews with a small group of informants. Their
responses were elicited in open and in structured interviews, as well as
focus groups. The data analysis was oriented toward explicating the infor-
mational links between what visitors could observe in the communicative
situations displayed on the pages and how they interpreted the informa-
tion presented there.
The main aim was to find out what general interactive strategies peo-
ple use when learning about a team, its people, and their work. In par-
ticular, the focus was on discovering what knowledge, presuppositions,
and beliefs people bring with them to joint activities, and how the exter-
nal representations designed in the PIF could influence the creation of
common ground.

ANALYZING SOCIAL KNOWLEDGE

In the analytical framework developed for studying the use of the PIF in
a real-life setting, the Common Ground Framework was extended and
adapted for the study of representations in the PIF pages in order to iden-
tify the features that would facilitate the communication it mediated.
The original framework views communication as joint activity
between two or more participants engaged in a shared task (Clark &
Brennan, 1991; Grosz & Sidner, 1986; Clark & Schaefer, 1989) who, in
the process of carrying out this task, have to establish the common
ground for mutual understanding and trust. Clark (1996: 43) divides the
common ground into three parts:

1 Initial common ground. This is the set of background facts, assump-
tions, and beliefs the participants presupposed when they entered the
joint activity.
2 Current state of the joint activity. This is what the participants presup-
pose to be the state of the activity at the moment.
3 Public events so far. These are the events the participants presuppose
have occurred in public leading up to the current state.

In the modified framework, the presuppositions determining the initial


common ground were classified into those related to the background
knowledge characterizing participants’ expertise and the more personal,
experiential knowledge of teamwork on construction projects. The for-
mer includes, for example, individual expertise in architecture, construc-
tion management, structural engineering, or any other relevant
profession that determines the nature of their contribution to the process
and task activities. The latter is based on the experience of working on
construction projects, with particular partners, and the trust that such an
experience generates. These form the starting context for joint activities
and influence the manner in which participants recognize standard pro-
cedures, identify their own roles and responsibilities as well as those of
others, and decide how to organize joint activities. Special attention was
given to the social knowledge that forms part of the initial common
ground and, more specifically, how the explicit representations of the PIF
interact with the creation of the tacit knowledge about its owners.
The visitors to the PIF pages were observed in a “conversation at
arm’s length” with the owners, where the main goal was to learn about the
“project people.” Observations of visitors navigating through the sample
were focused on ease of use and on the extent of the support for learning
that it offered. Two main research questions guided both the data collec-
tion and the interpretation of the informants’ responses. The first was
concerned with the informativeness of various pages in terms of content.
The second question was whether or not the principles that govern social
action and face-to-face interaction in the real world would be equally
valid in the mediated interaction in the media space and the virtual
world.

REPRESENTATIONS OF THE PHYSICAL AND SOCIAL SPACES


The information that the visitors picked up from the web pages was con-
siderably richer than that explicitly expressed. Much of this richness was
related to the social and organizational characteristics of the owners’
physical environment. For example, the “god’s eye view” of the office lay-
out helped them to infer the social structure of the group inhabiting the
space. The assumptions normally made about the social significance of
physical spaces in the real world were also made about the virtual world.
Knowledge of the ways in which real organizations structure their
working space helped visitors to interpret the social aspects of the situa-
tion presented on the screen. For example, they understood that people
who have their own offices are higher up in the organizational hierarchy,
and that those who are physically colocated usually belong to the same
working group. The external representation of the office space thus plays
an important part in identifying the owners’ status and relationships.

REPRESENTATIONS OF PERSONAL SPACES


Interpretations of individual web pages confirmed the importance of
links between the images of the physical environment and the socially
relevant information inferred from it. Different views of the physical loca-
tion reflect the personal image of the owners as if they were wearing
“electronically augmented clothes.” For example, the scene of papers
scattered all over the desk makes the owner appear as an untidy but
creative person, someone who works with paper and computers, which
indicates a high level of education and might be seen as “serious, reliable
and responsible.”
A detailed view of the workspace also contributes to familiarization
with the owner. The owner of a particular space may be assumed to be
creative and clever because they work with computers and have lots of
paper scattered around; they appear to work on knowledge-intensive tasks
and, if there is a comfortable chair in front of the computer, to spend much
of their time at the terminal.

THE INFORMATION ENVIRONMENT


Originally, the design intention was for the personal page to provide infor-
mation about the owners’ availability by making their situation at work
more visible to potential visitors. The possibility of intrusion into the
owner’s space at an inopportune time would thus be reduced. A camera
placed in the owner’s office was the main source of this kind of information,
enabling visitors to make informed decisions about whether the owners
should be approached via an intrusive medium such as telephone, or video-
conferencing, or whether an email message would be more appropriate.
Messages left on the screen when there is no activity are a very impor-
tant guide to availability. For example, Post-it® notes saying “Back this
afternoon/tomorrow/next week” give immediate guidance on when the
owner can be contacted. A collection of such messages creates the con-
text for interpreting similar situations when there is no action on the
screen. Thus, the view of an empty office without a message would mean
that the owner is momentarily absent, perhaps talking to a colleague in
the corridor or having lunch, while a message saying “Goodbye” would
mean that a longer, or maybe indefinite, period of absence was intended.
People who share the working environment with the owner can pro-
vide an alternative point of contact if the owner is not available.
Particularly useful features are the links with closest collaborators within
the intellectual space they share, indicating to the visitor that close col-
laboration is no longer restricted to people inhabiting the same physical
working space. The PIF is therefore an informational resource that can
transcend the limitations of the real world; this is where the visitors saw
possible new uses of this technology in their own lives. The general
image was of an impressive technological achievement providing a novel
service.
The overriding principle in visitors deciding who to contact was the
awareness that they might be intruding into other people’s working spaces
and they were therefore looking for signs of invitation. In general, they
expected a virtual host to help them orient themselves in the organiza-
tional environment and to act as a focal point to which they would return
when they got lost in the virtual space they were navigating. The host was
also expected to sanction the appropriate forms of contact and determine
the acceptable degree of intrusion in any given circumstances.
Particularly important was the realization that the owners had full
control over the cameras in their offices, so when they did not wish to be
watched they could turn the camera off. This meant that if the camera was
on, the visitors were invited to enter, but they were still aware of differ-
ent degrees of intrusion possible in the circumstances. For example,
accessing a database or clicking on a document was equivalent to a per-
mission to “enter without knocking,” sending an email message was com-
parable to leaving a message on the door, but videoglance helped with the
choice whether to knock or not. Videoconferencing, being the most intru-
sive call for contact, required the most explicit permission.
The scenes offered by the camera were interpreted in the light of such
social considerations. The “degree of closeness” between owners and vis-
itors was often determined by interpretation of symbols: for example, a
highlighted name in the “nearest neighbors” section was seen to be an
invitation to contact that person rather than somebody else. Rules that are
observed in face-to-face interaction also played a part in decisions to
establish contact, so that people whose faces were turned toward the
camera were generally considered to be more friendly and approachable.
Most visitors saw the intranet page as providing much more than a
home page on the web. They used it as a traveling map helping them to
locate the owners within the wider context of their organization. The con-
text was referred to simultaneously as the physical and informational
space the owners occupied. The global function of the page from the vis-
itors’ point of view was that of an augmented map enabling a view of indi-
viduals in relation to their neighbors and to other sources of information
available in the real and virtual worlds.

CONVERSATIONS IN PERSONAL SPACES

Conversations among construction workers are normally centered on the
object being designed or under construction and the actions required to
get the design or construction process going. In this context, people sug-
gest new tasks, justify their actions, and check that the tasks have been
done. At the same time, they negotiate their responsibilities, evaluate
contributions, and establish individual authority.
The content of their conversations also refers to actions on design or
media objects, such as showing a file or a drawing, or creating the design
objects and the media objects, such as PowerPoint slides, that represent
them.
be made, both choosing among the alternative solutions to design prob-
lems and about the responsibilities and contributions of individual team
members. In the traditionally organized physical workspace, these con-
versations have naturally evolved. The powerful new technology prom-
ises that such naturalness can be maintained in conversations where
people are not physically present together, but are in contact through
telephones, videoconferencing, email, virtual reality, and other similar
technology applications.
The findings of this study suggest that more work needs to be done
before this promise is fully realized in practice. They also suggest that a
significant aspect of future work should be devoted to theoretical issues
of interdisciplinary research. A main problem that needs to be addressed
at the start involves the study of interactions in physical, media, and vir-
tual spaces, in particular the linguistic approaches to conversation analy-
sis. These are traditionally focused on the study of face-to-face
conversations where two participants jointly create the “frame” of the
conversation. The findings of this study show that the initial common
ground is much more varied and comprises considerably more social
knowledge and assumptions than the traditional studies within the com-
mon ground framework have so far acknowledged.
Furthermore, in a real-life workplace there is a greater number and a
greater variety of conversations than just face-to-face encounters. It is often
impossible for people to participate actively in a conversation, so they
may overhear, monitor, or ignore it, while still being aware that a conver-
sation has taken place. The understanding of the overhearers is different
from the understanding of the active participants (Clark, 1996), and this
should be taken into account, both in the scientific study of interactions
in the workplace and in the design of technology that supports them.
Many joint activities of teams are carried out “at arm’s length,” instead
of face to face, where the possibilities of misunderstanding are greater
and the facilities for repair reduced. It is therefore important for commu-
nications technologies to observe the established representations of infor-
mation in terms of rules, regulations, and etiquette that are easily accepted
by team members. It is even more important for the technologies to
enhance the representations of information to include more personal and
less structured requirements, such as helping people to build the com-
mon ground that ultimately leads to trust.

CONCLUSIONS
Several principles governing social interaction in establishing face-to-
face contact also apply to the situation of virtually visiting an organization
and its people using the intranet example described in this study. The
prototype makes a person’s environment visible to others at remote loca-
tions, and helps in creating a common ground to underpin joint activities.
An important social (and design) principle is related to intrusion into
another person’s space, be it physical, organizational, or informational.
Invitations into such spaces can be communicated indirectly and visitors
will look for them in the objects presented on the page, such as the
videoglance, shrunken screen, and highlighted names. This will happen
even if the designers of the page have not explicitly intended for these
objects to have such a communicative function.
Another important principle concerns the construction of initial com-
mon ground. Visitors’ interpretations of the information presented on a
personal page will be considerably richer and with more social detail than
the literal meaning of the text, pictures, or graphics presented. This is
where the metaphor of “electronically augmented clothes” is particularly
apt. Decisions are made not only about the appropriate means to obtain
information by accessing the sources that it makes available, but also
about personalities and possible relationships between them, the nature
of the organization, and maybe even the quality of work that can be
expected of them.
The information displayed on the intranet pages is therefore inter-
preted within a rich context of natural human communication, where the
accepted social norms that determine appropriate behavior apply. The
extended common ground framework developed here provides the con-
ceptual basis, as well as methods and techniques for discovering and for-
mulating the social constraints that regulate joint activities and
communication at arm’s length. It is the framework for the design of tech-
nology that brings together people, their social relationships, and the
resources they need in order to carry out the joint activities in the work-
place, which are the stated aims of knowledge management in practice.

NOTES
1 EU AC017, CICC (Collaborative Integrated Communications for Construction).
A detailed report is available on the author’s website, http://www.rhbnc.ac.uk/~uhtm059/index.html.
2 The People and Information Finder is a multimedia prototype developed as part of the
CICC project.
3 The team involved in the development of the PIF prototype consisted of Nicholas
Farrow, David Leevers, Mark Perry, and Duska Rosenberg (cf. Rosenberg et al., 1997).

REFERENCES
Baker, W. (1994) “Building Intelligence Networks,” in Networking Smart, New York:
McGraw-Hill.
Boisot, M. (1999) Knowledge Assets, Oxford, UK: Oxford University Press.
Boisot, M., Griffiths, D., & Moles, V. (1997) “The Dilemma of Competence: Differentiation
Versus Integration in the Pursuit of Learning,” in R. Sanchez & A. Heene (eds),
Strategic Learning and Knowledge Management, New York: John Wiley.
Clark, H. (1996) Using Language, Cambridge, UK: Cambridge University Press.
Clark, H. & Brennan, S. (1991) “Grounding in Communication,” in L. B. Resnick, J. M.
Levine, & S. D. Teasley (eds) Perspectives on Socially Shared Cognition, American
Psychological Association: 127–49.
Clark, H. & Schaefer, E. (1989) “Contributing to discourse,” Cognitive Science, 13: 259–92.
CSCW ’92 (1992) “Sharing perspectives,” ACM 1992 Conference on Computer Supported
Cooperative Work, Toronto, Canada, October 31 to November 4.
Davenport, T. H., De Long, D. W., & Beers, M. C. (1998) “Successful knowledge manage-
ment projects,” Sloan Management Review, 39(2, Winter): 43–57.
Dourish, P. & Bellotti, V. (1992) “Awareness and Coordination in Shared Workspaces,”
Proceedings of CSCW ’92, ACM, New York.
Fruchter, R. (1998) “Roles of computing in P5BL: Problem-, project-, product-, process-,
and people-based learning,” Artificial Intelligence for Engineering Design and
Manufacturing, 12: 65–7.
Galbraith, J. (1994) Competing with Flexible Lateral Organizations, 2nd edn, Boston:
Addison-Wesley.
Grosz, B. & Sidner, C. (1986) “Attentions, intentions and the structure of discourse,”
Computational Linguistics, 12: 175–204.
Gumperz, J. & Hymes, D. (1972) Directions in Sociolinguistics: The Ethnography of
Communication, New York: Holt, Rinehart and Winston.
Hindmarsh, J., Fraser, M., Heath, C., Benford, S., & Greenhalgh, C. (1998) “Fragmented
Interaction: Establishing Mutual Orientation in Virtual Environments,” ACM98
Proceedings of CSCW98, Seattle, November 14–18: 217–26.
Nonaka, I. (1991) “The knowledge-creating company,” Harvard Business Review, 69(6):
96–104.
Perry, M., Fruchter, R., & Rosenberg, D. (1999) “Co-ordinating distributed knowledge: A
study into the use of an organisational memory,” Cognition, Technology and Work, 1:
142–52.
Robinson, M. (1993) “Design for Unanticipated Use,” in De Michaelis, Simone, & Schmidt
(eds) Proceedings of the third European Conference on Computer-Supported
Cooperative Work, September 13–17, Milan, Italy, Amsterdam: Kluwer: 187–202.
Rogers, Y. (1993) “Coordinating computer-mediated work,” Computer Supported
Cooperative Work, 1: 295–315.
Rosenberg, D. (2000) “3 steps to ethnography: A discussion of inter-disciplinary contribu-
tions,” Artificial Intelligence and Society, special issue ed. L. Pemberton,
“Communications in Design,” 15(1).
Rosenberg, D. & Holden, T. (2000) “Interactions, technology, and organizational change,”
Emergence, 2(2, March).
Rosenberg, D., Perry, M., Leevers, D., & Farrow, N. (1997) “People and Information Finder:
Informational Perspectives,” in R. Williams (ed.) The Social Shaping of Multimedia:
Proceedings of International Conference COST-4, Luxembourg: European Commission
DGXIII.
Sanchez, R. & Heene, A. (eds) (1997) Strategic Learning and Knowledge Management, New
York, NY: Wiley.
Teece, D. J. (1998) “Capturing value from knowledge assets: The new economy, markets for
know-how and intangible assets,” California Management Review, 40(3): 55–79.
Walsh, J. P. & Ungson, G. R. (1991) “Organizational memory,” Academy of Management
Review, 16(1): 57–91.
Winograd, T. & Flores, F. (1986) Understanding Computers and Cognition: A New
Foundation for Design, Norwood, NJ: Ablex.

EMERGENCE, 2(4), 151–162
Copyright © 2000, Lawrence Erlbaum Associates, Inc.

Handling Complexity with Self-Organizing Fractal Semantic Networks
Jürgen Klenk, Gerd Binnig, & Günter Schmidt

In today’s globally networked world, people are facing dramati-
cally increasing complexity. Part of this complexity is due to the
rapidly growing amount of information that is becoming available
online. Because knowing more usually leads to better decisions
than knowing less (even if the “less” seems clearer and more definite),
complex problems require a complex analysis (Davenport & Prusak,
1997), i.e., being aware of the multitude of relevant information available
for the decision process, and being able to select the right pieces of infor-
mation from this multitude. Despite the fact that dealing with complexity
is one of the strengths of the human mind, because of the current infla-
tion of information there is a growing need for tools that intelligently
assist people in handling complexity.
Current research and state-of-the-art tools for handling complexity are
concentrated around topics such as searching and indexing, data mining,
classification and clustering, summarization and information extraction,
and case-based reasoning, among others. The underlying mathematical
models come from disciplines such as statistics, logic, and optimization
theory. Depending on the complexity, these methods achieve their tasks
with varying degrees of success. In particular, searching on the extremely
complex internet is still rather unsatisfactory in many cases due to both low
precision and low recall. This is not too surprising, as most tools do not
exploit the possibly most important aspect of the human mind when deal-
ing with complexity, the network-like structure of knowledge and thinking.
Network-based approaches (Quillian, 1967) such as neural networks
(Müller et al., 1995) and semantic networks (Minsky, 1988a; Reimer, 1991)
do concentrate on this aspect. The theory of neural networks even attempts
to reinvent the evolution of the brain (in a rather short period). From a sci-
entific point of view, this is certainly an exciting approach. However, we
believe that semantic networks provide a more powerful approach for the
development of tools that can deal with the exploding complexity of today’s
world, as they jump into this evolution on a higher semantic level. In this
case, the evolving network does not have to reinvent human knowledge
and thinking to the extent that these can be built right into the network.
While the network-like structure of knowledge and thinking is one
important aspect when dealing with complexity, complexity theory (as
originally developed in the biological sciences as a means of understand-
ing how organic entities and communities form) also focuses on the
aspect of self-organization. The problem of self-organization itself is
extremely complex, because the number of causal factors and the degree
of interdependence between them are too great to permit the use of mech-
anistic prescriptive models to solve it. Instead, self-organization of a com-
plex system is driven on a small (local) scale by a collection of (typically)
relatively simple processes, the causal factors. The global outcome very
much depends on these local processes and the initial state of the system.
Despite the fact that individual local processes are deterministic, the
global outcome usually cannot be predicted because the system exhibits
chaotic behavior due to its nonlinearity caused by the interdependence of
its local processes (Badii & Politi, 1997).
When combining ideas and theories about self-organization with ideas
and theories about knowledge and thinking—that is, when dealing with a
self-organizing semantic network—one can model complex problems such
as cognition and learning, which relate to the internal state of an entity, as
well as the behavior of such entities in communities or collectives, which
relate to the external state of the entities. However, in modeling this exter-
nal state one really deals with self-organizing semantic networks on two
different scales, the internal self-organizing semantic network of the com-
munity or collective, and those of the entities that make up the collective.
This idea can be extended quite naturally to more than two scales. In fact,
since most semantic networks typically have a hierarchical structure, all
self-organizing semantic networks actually consist of a number of smaller
self-organizing semantic networks across different scales.
If one wants to drive the self-organization of the network locally with
a few simple processes that are the same on all scales (such as in cellular
automata, Gutowitz, 1991), then the network must look the same every-
where, so that these processes can be applied at every location in the net-
work. In other words, the self-organizing semantic network must be
self-similar across all scales by being constructed out of a few fundamental
building blocks and construction principles. Mathematical objects that are
self-similar across hierarchical scales are called fractals, which is why we call
the model described in this article a self-organizing fractal semantic network.

DEFINITION OF A SELF-ORGANIZING FRACTAL SEMANTIC NETWORK

We begin with the basic definitions of semantic and hierarchical
networks:

◆ Definition 1: A semantic network is a directed or nondirected graph with
the additional property that its nodes and links carry semantic informa-
tion. Nodes and links of a semantic network are called semantic units.
The semantic information provides a particular meaning for a semantic
unit. Frequently, semantic information is encoded in natural language,
i.e., specific names or processes are used to label semantic units.
◆ Definition 2: A hierarchical network is a directed or nondirected graph
with the additional property that some of its links carry scaling infor-
mation. This means that one of the nodes connected to a scaling link is
at a higher level of hierarchy than the other node.

It should be noted that this definition does not yield an absolute value for
the level of hierarchy, i.e., it does not assign to every node an integer that
corresponds to its level of hierarchy. Instead, it gives a relative definition
for the level of hierarchy. While this approach is more general, it can
cause conflicts because one can have cycles (loops) of hierarchical links.
This conflict exists only on a global scale and can be resolved if one con-
siders only local neighborhoods of the whole network at any one time.
Local neighborhoods are introduced by the notion of a topology.

◆ Definition 3: A topological network is a directed or nondirected graph
with the additional property that for every node and every link one or
several local neighborhoods are defined. The local neighborhoods of a
node or a link are sets consisting of this node or link and other nodes
or links of the network.

In many cases, topological networks are obtained by assigning a weight (a
value between 0 and 1) to every link of the network. The negative loga-
rithm of this weight then yields a metric or distance function on the net-
work. The local neighborhoods are defined with the help of some
threshold mechanism applied to this metric. Every metric determines a
topology, but the converse is not true; therefore, the above definition is
more general than the method of assigning weights.
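
For concreteness, the following minimal Python sketch applies this construction to a toy edge list: each weight between 0 and 1 yields a distance through the negative logarithm, and a local neighborhood is obtained by thresholding that distance. The Link class, the threshold value, and the neighborhood function are illustrative assumptions, not part of the formal definitions.

```python
import math

# Minimal sketch of definition 3 in practice (illustrative names throughout):
# link weights in (0, 1] induce a distance d = -log(weight), and a local
# neighborhood is obtained by thresholding that distance.

class Link:
    def __init__(self, source, target, weight):
        self.source = source
        self.target = target
        self.weight = weight                 # value between 0 and 1
        self.distance = -math.log(weight)    # metric induced by the weight

def neighborhood(node, links, max_distance=1.0):
    """Nodes within max_distance of `node` via a single link (plus the node itself)."""
    near = {node}
    for link in links:
        if link.distance <= max_distance:
            if link.source == node:
                near.add(link.target)
            elif link.target == node:
                near.add(link.source)
    return near

# Two strong links and one weak link around node "a"
links = [Link("a", "b", 0.9), Link("a", "c", 0.8), Link("a", "d", 0.1)]
print(neighborhood("a", links))   # {'a', 'b', 'c'} -- "d" is too far away
```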
In the special case of semantic networks, it is often required that a
semantic unit be used both as a node and as a link. For example, a seman-
tic unit labeled friendship can be viewed both as a node (“friendship is in
particular a relation,” where it is a node connected hierarchically to the
node relation) and as a link (“friendship between two people,” where it is
a link between the two nodes representing the two people). This gives
rise to the following definition.

◆ Definition 4: A higher-order network is a directed or nondirected
graph in which links can at the same time be nodes. This means that a
link can connect two nodes, one node and one link, or two links.

In the next definition we capture what we mean by a fractal network. We
will not give an exact mathematical definition (for ideas on how to do this
see Edgar, 1990; Mandelbrot, 1982), but rather a working definition,
which will suffice for the scope of this article. In particular, our formula-
tion will allow us more easily to define the self-organizing fractal seman-
tic network and to understand the example given in the last section. But
this definition will not allow us to explore in more detail the fractal struc-
ture of the network. In particular, we will not be able to give a definition
for the fractal dimension, which, according to separate studies, seems to
be related to the more subjective quantity of complexity. This topic will
be covered in a forthcoming paper.

◆ Definition 5: A fractal network is a hierarchical network with the addi-
tional property that all of its nodes and links are derived from a small
set of basic building blocks. In this sense, a fractal network exhibits a
self-similar structure because it looks the same everywhere on all lev-
els of hierarchy.

The following definition deals with the processes that perform the self-
organization of the network.

◆ Definition 6: A (locally) self-organizing network is a directed or non-
directed graph with the additional property that at least some of its
nodes and links are connected to one or several (local) processes out of
a set of processes. A process is an algorithm that performs a transfor-
mation of the network, while a local process is an algorithm that per-
forms a transformation of the network only in the local neighborhood
of the node or link to which it is connected. The (local) processes are
triggered by the state of the node or link to which they are connected,
which in turn is a function of the entire network.

For practical purposes, the state of a node or link is often only a function
of its local neighborhood. For example, when dealing with semantic net-
works, a semantic unit is often connected to attributes, and the values of
these attributes determine the state of the semantic unit.
It is conceivable that the processes themselves form a hierarchical
network. This is a topic under current investigation. The fundamental
processes will be covered below.
After making all of the above definitions, we are now in a position to
define a self-organizing fractal semantic network, our fundamental model
that we use to study complexity problems.

◆ Definition 7: A (locally) self-organizing fractal semantic network is a
hierarchical, topological, higher-order, fractal, (locally) self-organizing
semantic network.

Figure 1 overleaf shows a self-organizing fractal semantic network. Nodes
are depicted as spheres, links as cylinders, and processes as Janus heads
(see next section for details). The hierarchical structure is clearly visible
from the fact that nodes contain internal networks. Note, too, that some
links are nodes, as their cylinders have spheres in their center that con-
nect them to other nodes or links.

Figure 1 A sample self-organizing fractal semantic network

THE BASIC BUILDING BLOCKS

As specified in definition 1, nodes and links of the network are called
semantic units. All semantic units are subdivided into concepts and
instances, as is usual (Cohen, 1989). We further subdivide nodes into
information units, attribute units, and process units or Janus units. In our
model we use the metaphor of a Janus (pl. Jani) for a process. In Roman
mythology, Janus is a two-faced god looking in opposite directions, i.e.,
taking into account a multitude of perspectives. This is exactly what a
process in a fractal network does, if one interprets the two directions as the
internal, lower-scale and the external, higher-scale parts of the network with
respect to the semantic unit to which the process is connected. Information
units are general elements that can represent concepts or instances, and
they are identified by specific names. Attribute units are identified by spe-
cific names and values, which can be set, retrieved, or computed.
All links of the network are either scaling or nonscaling. Standard
inheritance principles (Cohen, 1989) are defined across all scaling links,
making use of the network’s topology or neighborhood concept. Links are
further subdivided into comparison units, interaction units, description
units, and controller units. Nonscaling comparison units allow us to
describe the degree of similarity or dissimilarity of two semantic units,
while scaling comparison units allow us to describe how close one seman-
tic unit comes to being an instance of another semantic unit, or how close
one semantic unit comes to being a special case of another semantic unit.
Nonscaling interaction units allow us to describe the type of interaction
of two semantic units, while scaling interaction units allow us to describe
the role that one semantic unit plays as part of another semantic unit.
Description units connect semantic units to their attribute units, which
describe the semantic units in more detail. Finally, controller units con-
nect semantic units to their Janus units, which in turn control and act on
the semantic units’ local neighborhoods.
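
One possible, and deliberately simplified, encoding of these building blocks as data types is sketched below. The class names mirror the terms introduced above, but the concrete fields and defaults are assumptions made for illustration only.

```python
from dataclasses import dataclass
from enum import Enum, auto

# An illustrative sketch (not the authors' implementation) of the basic
# building blocks described above; all fields are assumptions.

class LinkKind(Enum):
    COMPARISON = auto()   # similarity; "is an instance/special case of" when scaling
    INTERACTION = auto()  # interaction; "plays a role as part of" when scaling
    DESCRIPTION = auto()  # connects a semantic unit to one of its attribute units
    CONTROLLER = auto()   # connects a semantic unit to one of its Janus units

@dataclass
class SemanticUnit:
    name: str

@dataclass
class InformationUnit(SemanticUnit):      # general element: concept or instance
    pass

@dataclass
class AttributeUnit(SemanticUnit):        # named value that can be set, retrieved, or computed
    value: object = None

@dataclass
class JanusUnit(SemanticUnit):            # process acting on a local neighborhood
    process: object = None                # e.g. a callable transformation

@dataclass
class LinkUnit(SemanticUnit):             # a link is itself a semantic unit (definition 4)
    kind: LinkKind = LinkKind.COMPARISON
    scaling: bool = False
    source: SemanticUnit = None
    target: SemanticUnit = None

# Example: a soccer ball described by a "pressure" attribute unit
ball = InformationUnit("soccer ball")
pressure = AttributeUnit("pressure", value=0.9)
describes = LinkUnit("describes", kind=LinkKind.DESCRIPTION, source=ball, target=pressure)
```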
Figure 2 shows how the basic building blocks are used to construct a
network. Note that each building block labeled “semantic unit” can be
replaced with any basic building block. Information units do not appear
in this diagram, as there is no restriction on their use. In practice, most of
the building blocks labeled “semantic unit” are information units.

Figure 2 The basic building blocks [diagram: semantic units connected by scaling and nonscaling comparison and interaction units, with description units linking them to attribute units and controller units linking them to Janus units]

In the self-organizing fractal semantic network, each semantic unit
has a specific meaning, which it receives indirectly through its multitude
of links to other semantic units. Thus, in our model meaning is defined
(and is definable) not in an absolute way, but only in a relative way, in
accordance with Minsky’s philosophy (Minsky, 1988b).
The state of a semantic unit is a (possibly complex) function of all
semantic units of its local neighborhood. A simple example is the state of
usefulness of the information unit representing a soccer ball. The soccer
ball’s local neighborhood with respect to the state of its usefulness may
consist of the attribute unit size and the attribute unit pressure, together
with their respective values. The state of the soccer ball’s usefulness can
then be computed from the current values of its two attribute units,
preferably in terms of fuzzy set theory.
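
A toy version of such a state function might look as follows; the membership ranges, the size and pressure values, and the use of min as a fuzzy AND are assumptions chosen only to show how a state can be derived from attribute values.

```python
# Toy state function for the soccer ball (all ranges and thresholds are
# assumptions): usefulness is derived from the attribute units "size" and
# "pressure" using simple fuzzy memberships combined with min (fuzzy AND).

def fuzzy_in_range(value, low, high, margin):
    """Membership in [0, 1]: 1 inside [low, high], falling off linearly over `margin`."""
    if low <= value <= high:
        return 1.0
    distance = (low - value) if value < low else (value - high)
    return max(0.0, 1.0 - distance / margin)

def usefulness(size_cm, pressure_bar):
    ok_size = fuzzy_in_range(size_cm, 21.0, 23.0, 2.0)          # roughly a full-size ball
    ok_pressure = fuzzy_in_range(pressure_bar, 0.6, 1.1, 0.3)   # roughly playable pressure
    return min(ok_size, ok_pressure)

print(usefulness(22.0, 0.8))  # 1.0 -> a perfectly useful ball
print(usefulness(22.0, 0.3))  # 0.0 -> flat ball, not useful
```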
A more complex example is the mood state of a human being. Here
clearly, the attribute units describing the human being, such as height and
weight, among others, are not sufficient to describe his mood state.
Psychologists know that the whole context within which the human being
lives affects his mood state. There are many theories about what factors
should and should not be taken into account when determining this context,
and what kind of influence each of these factors has on the mood state. In
our model, it is precisely this context that makes up a human being’s local
neighborhood with respect to his mood state, and it is precisely the kind of
influence each of these factors has that determines the mood state function.

THE BASIC PROCESSES

To motivate the choices for the basic processes in our self-organizing frac-
tal semantic network, we continue with our examples from the last section.
It is important to note that there are Janus units that influence the
usefulness of the soccer ball, or the mood state of the human being.
Semantic units that are in the local neighborhood of the soccer ball or the
human being typically control and trigger these Janus units. If the soccer
ball becomes useless because it goes flat (caused by a Janus unit), it may
get thrown away (caused by another Janus unit) and thus removed from
the network, and a new soccer ball may be purchased (caused by yet
another Janus unit), and thus a new information unit representing the
new soccer ball is created. As for the human being, he may identify him-
self as the typical representative of a certain group, and may conse-
quently join this group to improve his mood state, or he may found a new
interest group and invite others to join it. To this extent, he may even
acquire new skills or knowledge.
These examples illustrate the basic processes required in our model.
We have Janus units that are able to create new semantic units, to mod-
ify or destroy existing semantic units (and even themselves), to create
links between semantic units, to classify or identify semantic units as
other semantic units, and to create new groups or segments in the net-
work. Other Janus units must be able to set, retrieve, or compute values
of attribute units and to determine the states of semantic units. Finally,
there are Janus units that are able to perform a learning task in the form
of knowledge acquisition or restructuring.
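
The soccer-ball story above can be sketched as a small chain of triggered processes. The dictionary-based network, the 0.3 pressure threshold, and the function names are illustrative assumptions rather than the model’s actual machinery.

```python
# Toy chain of Janus-like processes triggered by a unit's state (dictionary
# network, threshold, and names are illustrative assumptions).

network = {"soccer ball": {"pressure": 0.9}}
ball_counter = 1

def janus_deflate(name):
    network[name]["pressure"] = 0.0            # the ball goes flat

def janus_discard(name):
    if network[name]["pressure"] < 0.3:        # triggered by the "useless" state
        del network[name]                      # destroy the semantic unit...
        janus_purchase()                       # ...which triggers another Janus

def janus_purchase():
    global ball_counter
    ball_counter += 1
    network[f"soccer ball {ball_counter}"] = {"pressure": 0.9}  # new information unit

janus_deflate("soccer ball")
janus_discard("soccer ball")
print(list(network))   # ['soccer ball 2']
```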
As can be seen from the Janus units described above, the processes car-
ried out by Janus units can range from generic to specific, meaning that
they use generic properties of the basic building blocks that make up the
self-similar structure of the network, or very specific properties of the local
neighborhood of a certain semantic unit, respectively. Therefore, some
Janus units can be connected to any semantic unit, while others require the
presence of particular semantic units in the local neighborhood of the
semantic unit to which they are connected. Clearly, the Janus units that
simply create, modify, or destroy semantic units (including the ones that
create links, as this is a special case of creating semantic units) perform very
generic tasks. Therefore, they can be connected to any semantic unit.
However, they are usually not triggered directly, but rather invoked as
parts of more complex processes such as classification or segmentation.
The Janus units that perform the evaluation of attribute units’ values typ-
ically have a set of mathematical tools at hand from areas such as fuzzy set
theory, statistics, geometry, topology, and algebra, among others. Therefore,
these Janus units are more specific, as they can be applied only to attribute
units whose values satisfy certain type constraints. Finally, the Janus units
that determine the states of semantic units are even more specific, as their
processes might only be applicable to one or a small group of semantic units.

CLASSIFICATION AND SEGMENTATION

The process of classification stands for the common task of comparing one
semantic unit to others. The goal here is to find comparable semantic
units in the sense that they are alike, can perform similar tasks, have sim-
ilar goals, are more general or more specific, are constituents or groups,
or are in similar states, among other things. Psychology tells us that this
is a very important task, as human beings are constantly in search of their
identities by comparing themselves to and (even more importantly) dif-
ferentiating themselves from others. In our model, the process of classi-
fication is performed through extensive local neighborhood analyses. This
means that the degree of similarity of two semantic units is determined
by the degree of similarity of their local neighborhoods with respect to
the above comparison factors. As with determining the state of a seman-
tic unit, when comparing semantic units it is not enough simply to take
into account the values of the attribute units of these semantic units.
Instead, the topology of the network, i.e., the entire local neighborhood
structures of the semantic units, must be considered.
Therefore, the process of classification deals with the more general
task of finding similar structures and not just similar values of attribute
units. Because of the self-similar structure of the network, this classifica-
tion process can be implemented in a generic way, thus allowing the
Janus unit representing the classification process to be used throughout
the entire network.
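
As a rough stand-in for such neighborhood analysis, the sketch below scores the similarity of two units with a Jaccard index over the units they are directly linked to. The real comparison factors described above are richer, so this is an illustration under simplifying assumptions.

```python
# Simplified neighborhood-based classification (illustrative only): two
# units count as similar when the sets of units they are directly linked
# to overlap strongly, measured here with a Jaccard index.

def neighborhood_labels(unit, links):
    """All units directly linked to `unit` (its local neighborhood)."""
    labels = set()
    for a, b in links:
        if a == unit:
            labels.add(b)
        elif b == unit:
            labels.add(a)
    return labels

def similarity(u, v, links):
    nu, nv = neighborhood_labels(u, links), neighborhood_labels(v, links)
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

links = [("farmer_1", "market"), ("farmer_1", "dairy"),
         ("farmer_2", "market"), ("farmer_2", "dairy"),
         ("consortium", "market")]
print(similarity("farmer_1", "farmer_2", links))    # 1.0 -- identical neighborhoods
print(similarity("farmer_1", "consortium", links))  # 0.5 -- only "market" is shared
```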
While classification focuses on finding similar structures among seman-
tic units, segmentation focuses on grouping semantic units according to sim-
ilarities found during classification. There are two main types of groupings
with which our model deals, corresponding to the two types of scaling links
we defined, the scaling comparison units and the scaling interaction units.
These two types of groupings correspond again to well-known results from
psychological studies, people’s desire to categorize and organize informa-
tion and knowledge, and people’s desire to form working groups to better
achieve a common goal (Wenger, 1998; Prusak & Lesser, 2000). While the
categorization and organization of knowledge predominantly use comparing
and contrasting mechanisms, working groups are formed according to skills
and common goals. Here, it is often the case that diversity is more impor-
tant than similarity, because working groups are often more successful if
they consist of group members with the right mix and variety of skills.
From a process point of view, the results of classification processes deter-
mine and trigger segmentation processes. In particular, the classification
results determine which new links from semantic units to other semantic
units representing categories or groups should be created, and trigger the
appropriate segmentation processes that create these links. The classification
results also determine which new categories or groups should be created or
formed if a number of semantic units have been classified as being similar or
having a common goal, and thus should be combined into such new
categories or groups. The triggered segmentation
processes then create these new categories or groups and also create all links
to members of these categories or groups. Again, because of the self-similar
structure of the network, this segmentation process can be implemented in a
generic way, thus allowing the Janus unit representing the segmentation
process to be used throughout the entire network.
Finally, the creation of new semantic units during the segmentation
process triggers a new classification process, this time based on the new
network structure and topology. This sequence of classification and seg-
mentation, which continuously determines and changes the neighbor-
hood structure and thus influences the states of the semantic units, is the
main driver of the self-organization of the network.
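
This classification and segmentation cycle can be sketched in a few lines of Python. The dictionary network, the state labels, and the rule that a new group inherits its members' links are illustrative assumptions that loosely echo the farmers' example in the next section.

```python
# Toy classification/segmentation cycle (dictionary network, state labels,
# and the "group inherits its members' links" rule are assumptions).

def classify(states, target_state):
    """Units whose current state matches the target state."""
    return {unit for unit, state in states.items() if state == target_state}

def segment(network, members, group_name):
    """Create a new group node linked to all members and take over their links."""
    network[group_name] = set()
    for member in members:
        network[group_name].add(member)                    # scaling "member-of" link
        network[group_name] |= network.get(member, set())  # group takes over members' links
    return network

network = {"farmer_1": {"market"}, "farmer_2": {"market"}, "farmer_3": {"market"}}
states = {"farmer_1": "dissatisfied", "farmer_2": "dissatisfied", "farmer_3": "satisfied"}

dissatisfied = classify(states, "dissatisfied")
segment(network, dissatisfied, "working_group")
print(sorted(network["working_group"]))  # ['farmer_1', 'farmer_2', 'market']
```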

AN EXAMPLE

In this section we illustrate how a self-organizing fractal semantic net-
work can be used to model a complex economic and ecological society.
Our model is derived from a real-world situation in a rural area in south-
ern Germany and is based on the following assumptions:

1 A number of farmers are running small agricultural firms. These firms
are influenced by standard ecological and economic factors.
2 Agricultural goods are traded on a market following standard market
rules.
3 There is a dominant (almost monopolistic) market player, a consor-
tium that purchases almost all agricultural products and tries to dic-
tate prices. The consortium performs certain refinement processes,
such as making dairy products from milk or running a slaughterhouse.
4 Farmers have an internal knowledge about producing and selling
agricultural goods. Some of the farmers also have a good understand-
ing of the economic situation in marketplaces.
5 The farmers as well as the consortium are connected to the market-
place. Each has a limited influence on the marketplace, depending on
their relative size and importance.
6 There are consumers of varying size—supermarket chains, butchers,
stores, and individuals—purchasing the refined agricultural products.
7 The goals of the farmers are to maximize their revenue and at the
same time to minimize their workload. Their states are determined by
the degree of success or failure to achieve their goals.
8 The goals of the consortium are to maximize its revenue by minimiz-
ing the prices for agricultural products, while at the same time ensur-
ing sufficient supply of goods.

When the network evolved out of a given initial state, because of the
dominance of the consortium, the prices for agricultural goods dropped
significantly, causing the farmers’ revenues to decrease substantially,
despite a similar or even higher workload. As a consequence, the farmers’
states changed from satisfactory to unsatisfactory, triggering their classifi-
cation Jani, which attempted a reclassification of the farmers within the
network, based on the new state of the network.
The result of the classification process was that most farmers were in
a similar state of dissatisfaction, so that in the subsequent segmentation
process a new working group was created within which the farmers
organized themselves and shared their knowledge. It was this newly
created working group that took over the connection to the marketplace
from its individual members. Because of its greater importance, accord-
ing to the implemented economic rules it could balance the pressure on
the prices exercised by the consortium. The prices went up again, giving
individual farmers higher revenues and thus greater satisfaction.
One could imagine how this scenario might continue, given that the
players have enough information and knowledge at hand to adapt their
strategies to the evolution of the network. However, the assumptions
made were too simplistic to expect this model to reach a final stable state.
Studies based on more refined models are currently under investigation.

CONCLUSIONS
We have shown that aspects of complexity can be modeled with self-
organizing fractal semantic networks, where generic processes drive the
self-organization of the network on a local scale. Necessary requirements
of this model are the existence of a topology or neighborhood structure
and the self-similarity of the network on all scales of hierarchy. The two
processes of classification and segmentation are fundamental in driving
the self-organization of the network, and it appears that cognition and
learning can be derived from them. However, more research is necessary
to precisely determine the nature of this relation.
Despite the fact that we have given some guidelines on how to extend
the concept of fractals from geometry to topological hierarchical networks
(definition 5), the question of how to do this in strict mathematical terms
is still open. If an answer could be obtained, then it would be interesting
to study how the fractal dimension in this network is related to the sub-
jective term of complexity, because this may shed some light on driving
mechanisms for learning.
Finally, we have applied the concept of a self-organizing fractal semantic network to the problems of natural language understanding and image recognition. In these cases, texts or images were transformed into initial input networks. The task of understanding these texts or images was then accomplished by structuring the input networks and connecting them to world knowledge networks with the help of the classification and segmentation methods described above.
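
As a schematic illustration of this last step (our own simplification, with a hand-made world knowledge table and made-up concept names, not the system described here), a text can be turned into an initial input network of word nodes with adjacency links and then anchored to known concepts:

# Toy "world knowledge": concept -> known relations (illustrative only).
WORLD_KNOWLEDGE = {
    "farmer":     {"is_a": "agent", "produces": "crop"},
    "crop":       {"is_a": "product"},
    "consortium": {"is_a": "agent", "buys": "crop"},
}

def text_to_input_network(text):
    """Initial input network: one node per word, links between adjacent words."""
    words = [w.strip(".,").lower() for w in text.split()]
    nodes = set(words)
    links = set(zip(words, words[1:]))
    return nodes, links

def connect_to_world_knowledge(nodes):
    """Classification step (sketch): attach input nodes to known concepts."""
    return {w: WORLD_KNOWLEDGE[w] for w in nodes if w in WORLD_KNOWLEDGE}

nodes, links = text_to_input_network("The farmer sells the crop to the consortium.")
print("adjacency links:", sorted(links))
print("anchored concepts:", connect_to_world_knowledge(nodes))

In this picture, segmentation would then group the anchored nodes into higher-level units (phrases, scenes, objects) in the same way as in the simulation sketch above.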

NOTE
Gerd Binnig and Günter Schmidt would like to thank the Deutsche Bundesstiftung Umwelt
for supporting the simulation project mentioned above.

REFERENCES
Badii, R. & Politi, A. (1997) Complexity, Cambridge Nonlinear Science Series 6, Cambridge,
UK: Cambridge University Press.
Cohen, G. (1989) Memory in the Real World, Hove, UK: Lawrence Erlbaum Associates.
Davenport, T. H. & Prusak, L. (1997) Working Knowledge, Boston: Harvard Business School
Press.
Edgar, G. A. (1990) Measure, Topology, and Fractal Geometry, New York: Springer Verlag.
Gutowitz, H. A. (ed.) (1991) Cellular Automata: Theory and Experiment, Cambridge, MA:
MIT Press.
Mandelbrot, B. B. (1982) The Fractal Geometry of Nature, San Francisco: Freeman.
Minsky, M. L. (ed.) (1988a) Semantic Information Processing, Cambridge, MA: MIT Press.
Minsky, M. L. (1988b) The Society of Mind, New York: Simon & Schuster.
Müller, B., Reinhardt, J., & Strickland, M. T. (1995) Neural Networks: An Introduction, 2nd
edn, Berlin: Springer Verlag.
Prusak, L. & Lesser, E. (2000) “Communities of practice, social capital, and organizational
knowledge,” Knowledge Connections, 2(1, Feb).
Quillian, M. R. (1967) “Word concepts: A theory and simulation of some basic semantic
capabilities,” Behavioral Science, 12: 410–30.
Reimer, U. (1991) Einführung in die Wissensrepräsentation (“Introduction to Knowledge
Representation”), Stuttgart, Germany: B.G. Teubner.
Wenger, E. (1998) Communities of Practice: Learning, Meaning, and Identity, Cambridge,
UK: Cambridge University Press.

About the Authors

Peter M. Allen is Head of the Complex Systems Management Centre in the School
of Management at Cranfield University. He was a Royal Society European
Research Fellow from 1970–71 and a Senior Research Fellow at the Université
Libre de Bruxelles from 1972–87, where he worked with Nobel Laureate Ilya
Prigogine. Since 1987 he has been at Cranfield University. For many years
Professor Allen has been working on the mathematical modeling of change and
innovation in social, economic, financial, and ecological systems, and the develop-
ment of integrated systems models linking the physical, ecological, and socio-
economic aspects of complex systems as a basis for improved decision support
systems. Email P.M.Allen@cranfield.ac.uk.

Gerd K. Binnig (PhD, Johann Wolfgang Goethe University, 1978) has been a
research staff member of the IBM Zurich Research Laboratory since 1978, inter-
rupted by a sabbatical at the IBM Almaden Research Center in San Jose (1985–6)
and a guest professorship at Stanford University (1985–8). In 1987 he received an
honorary professorship at the University of Munich. For the development of the
scanning tunneling microscope, which he invented together with Heinrich Rohrer,
he received numerous awards, including the Nobel Prize in Physics in 1986. This
and the atomic force microscope invented later made it possible to image and study
structures and processes on the atomic scale. These instruments, which serve as
tools for investigations of phenomena of the smallest dimensions, play a key role in
nanoscience and nanotechnology. Dr. Binnig’s present fields of research include
micro- and nanosystem techniques and the theory of “Fractal Darwinism,” which
he developed to describe complex systems. Email gbi@zurich.ibm.com.

Max Boisot is Professor of Strategic Management at ESADE in Barcelona, Senior
Associate at the Judge Institute of Management Studies at the University of
Cambridge, Poh Seng Yeoh Research Fellow at the Snider Center for
Entrepreneurial Research, The Wharton School, University of Pennsylvania, and
Associate Fellow of the Information Systems Research Unit at Warwick Business
School. He holds a BA and Diploma in Architecture from Cambridge University,
an MSc in Management from MIT, and a doctorate in technology transfer from the
Imperial College of Science, Technology and Medicine, London University. He
has published numerous research articles and is the author of Information Space:
A Framework for Analyzing Learning in Organizations, Institutions, and
Cultures (Routledge, 1995) and Knowledge Assets: Securing Competitive
Advantage in the Information Economy (Oxford University Press, 1998). Email
boisot@attglobal.net.

Paul Cilliers is Senior Lecturer in the Philosophy Department of the University of
Stellenbosch. His current research is focused on the implications of complexity the-
ory for our understanding of ethics, law, and justice. He is the author of
Complexity and Postmodernism (Routledge, 1998). Email fpc@akad.sun.ac.za.

Jack Cohen is an internationally known reproductive biologist and has written
numerous books on embryology, reproduction, and evolution. His present position
at the University of Warwick bridges the Ecosystems Unit of the Biology
Department and the Mathematics Institute. His brief includes bringing more sci-
ence to public awareness. He now works with the mathematician Ian Stewart, with
whom he has explored issues of complexity, chaos, and simplicity, producing sev-
eral joint papers and three books. He is consultant to well-known science fiction
authors (McCaffrey, Gerrold, Harrison, Niven, Pratchett) designing alien creatures
and ecologies, is frequently heard on BBC radio programs, and has initiated and
participated in the production of several TV documentaries. Email
DrJackCohen@cs.com.

Brian Goodwin studied biology at McGill, mathematics at Oxford, and received
a PhD from Edinburgh University. He worked at McGill, MIT, the University of
Sussex, and the Open University, and has a long association with the Santa Fe
Institute. He is now at Schumacher College in Devon, where he uses the sciences
of complexity to study emergent phenomena and to understand health in various
contexts, involving the development of a science of qualities. Email
BCGood1401@aol.com.

Geoffrey M. Hodgson is Research Professor in Business Studies at the University
of Hertfordshire. He was formerly Reader in Institutional and Evolutionary
Economics at the University of Cambridge. He has published several books includ-
ing Economics and Institutions (1988), Economics and Evolution (1993),
Economics and Utopia (1999), and Evolution and Institutions (1999). In addition,
he has published over 50 articles in academic journals. His next book is entitled
How Economics Forgot History and will be published in 2001 by Routledge.
Email g.m.hodgson@herts.ac.uk.

Jürgen Klenk, PhD in Mathematics (Tübingen University, 1997), is a Research
Staff Member at IBM’s Zurich Research Laboratory. He is heading a project on the
development of a Natural Thinking Machine, a system that transfers human know-
ledge and thinking to computers to enable them to become intelligent assistants.
His current research focuses on the use of Natural Thinking Machines for the simulation of complex dynamical behaviors. Email jkl@zurich.ibm.com.

Yasmin Merali is Director of the Information Systems Research Unit at Warwick
Business School. Her research publications focus on organizational transformation in dynamic contexts. Much of her work is of a transdisciplinary nature, and she has
developed the “information lens” to explore issues of transformation in organiza-
tional and strategic contexts. Her current research is concerned with the use of com-
plexity theory to study organizations as complex evolving systems in the information
space. Her experience in consultancy and training spans a range of UK-based and
multinational organizations, and she contributes to MBA and executive programs,
both in the U.K. and internationally. Email Yasmin.Merali@warwick.ac.uk.

Günter Schmidt, PhD in Theoretical Physics (Tübingen University, 1995), is the
Director of the Knowledge Management Division at DEFiNiENS AG in Munich. He
developed the object-oriented image analysis software eCOGNITION, which is based
on semantic knowledge networks and fuzzy classification, and a simulation tool for
complex multihierarchical and dynamically networked systems such as socio-
economic and ecological systems. His research interests are cognitive systems and
evolution in self-organizing semantic networks. Email guenter.schmidt@delphi2.de.

Ralph Stacey is Professor of Management and Director of the Complexity and
Management Centre at the Business School of the University of Hertfordshire and
a member of the Institute of Group Analysis. He is also a consultant to managers
at many levels across a range of organizations and the author of a number of books
and articles, including Managing the Unknowable (Jossey-Bass, 1992), Strategic
Management and Organisational Dynamics: The Challenge of Complexity
(Pearson Education, 2000), and Complexity and Creativity in Organisations
(Berrett-Koehler, 1996). Email r.d.stacey@herts.ac.uk.

Duska Rosenberg is Senior Lecturer in Management Information Systems and a
member of the Management Information and Communication Systems Group at
the School of Management, Royal Holloway Institute, University of London. Her
research interests include analysis and design of shared information environments,
computer-mediated communication, and natural language interaction in media
and virtual spaces, while her consulting activities focus on human communica-
tion, information sharing, and introducing intranets into geographically dispersed
organizations. Dr. Rosenberg is also a member of the Rotterdam School of
Management and a senior consultant with Magi Inc., a knowledge management
software and consulting firm. Email d.rosenberg@rhbnc.ac.uk.

David J. Snowden is European Director of the Institute for Knowledge
Management. He pioneered the use of anthropological and story techniques in
knowledge management and regularly acts as an adviser and educator in know-
ledge management for senior management in both commercial and governmental
sectors across the world. He is currently responsible for programs within the insti-
tute linking knowledge and learning through the application of complexity theory,
and is the author of two forthcoming books on the use and abuse of story and complex knowledge. He has an MBA from Middlesex University and a BA in Philosophy from Lancaster University. He is honorary fellow in knowledge man-
agement at the University of Surrey and Associate Fellow of the Information Systems
Research Unit at Warwick Business School. He teaches on the MBA programs at
Warwick, Sophia Antipolis, and Piacenza. Email SNOWDED@uk.ibm.com.

Haridimos Tsoukas is Professor of Organization Theory and Behaviour at the
University of Strathclyde and the Athens Laboratory of Business Administration. His
publications have appeared in several leading academic journals. He is currently
editing (with Christian Knudsen) The Oxford Handbook of Organization Theory:
Meta-theoretical Perspectives (to be published in 2002). He is a co-editor of
Organization Studies and associate editor of Organization. His research interests
include organizational knowledge and its management; the epistemology of orga-
nizational research; new science and organization theory; and organization theory
as a practically oriented science. Email htsoukas@alba.edu.gr.
