
RECONSTRUCTING A LEGACY:

ON OVERCOMING BIOLOGICAL PREFORMATIONISM, DUALISM, AND

THE INHERITANCE PARADIGM

by

Gregory A Mengel

A Dissertation Submitted to the Faculty of the California Institute of Integral

Studies

in partial fulfillment of the Requirements for the Degree of

Doctor of Philosophy in Philosophy and Religion

with a concentration in Philosophy, Cosmology, and Consciousness

California Institute of Integral Studies

San Francisco, CA

2009
CERTIFICATE OF APPROVAL

I certify that I have read Reconstructing a Legacy: On Overcoming Biological

Preformationism, Dualism, and the Inheritance Paradigm by Gregory A Mengel,

and that in my opinion this work meets the criteria for approving a dissertation

submitted in partial fulfillment of the requirements for the Doctor of Philosophy

in Philosophy and Religion with a concentration in Philosophy, Cosmology, and

Consciousness at the California Institute of Integral Studies.

________________________________________________

Brian Swimme, Doctor of Philosophy, Chair

Professor, Philosophy, Cosmology, and Consciousness

________________________________________________

Alfonso Montuori, Doctor of Philosophy

Professor, Transformative Studies

________________________________________________

Susan Oyama, Doctor of Philosophy

Professor Emeritus, City University of New York


© 2009 Gregory A Mengel

Gregory A Mengel
California Institute of Integral Studies, 2009
Brian Swimme, Doctor of Philosophy, Committee Chair

RECONSTRUCTING A LEGACY:

ON OVERCOMING BIOLOGICAL PREFORMATIONISM, DUALISM,

AND THE INHERITANCE PARADIGM

Abstract

Contemporary attempts to bring evolution and ontogeny into a productive

theoretical synthesis are constrained by a legacy of preformationist thinking. This

legacy, which is descended from the metaphysics of the Scientific Revolution, is

embodied in what I call the inheritance paradigm. In this dissertation, I document

the interacting material and conceptual factors responsible for the development of

the inheritance paradigm and critically examine its role in contemporary theory. I

begin by describing the transformation of the physical and social landscape of

early modern Europe that created the conditions for inheritance-based reasoning

to develop. I then follow the construction of biological inheritance from its early

appearance as an unstructured analogy, through its consolidation into a structured

medical concept, and finally to its integration into biology as a general

explanatory category.

Next, I examine a theoretical and philosophical counter-movement in the

life sciences, sometimes called constructionism (Gray, 1992). Proponents of the

constructionist movement emphasize the interdependence of phylogenetic and



ontogenetic (including behavioral) dynamics in the production of complex

organic form. Developmental systems theory (DST), in particular, seeks to

integrate theories of ontogeny and evolution by rejecting the preformationism

implicit in the genes-environment dichotomy and taking seriously the constructive

interactions that are actually responsible for the production of complex organic

form.

Finally, I evaluate a recent revision of DST that attempts, by way of an

extended model of inheritance, to resolve a supposed inconsistency between DST

and Darwinian explanation. I argue that this revision, with its focus on the causes

of intergenerational resemblance and stability, privileges the inheritance paradigm

over the systems thinking that is at the heart of DST. I suggest, therefore, that a

constructionist integration of evolutionary and developmental theories would be

more effectively advanced by replacing the inheritance paradigm with a network

paradigm that emphasizes the constructive, interactive dynamics responsible for

the formation and transformation of developmental systems across multiple

spatial and temporal scales.



Acknowledgements

The developmental system responsible for this work has, during its long

ontogeny, engaged countless interactions within a vast and ever-changing network

of influences. Despite the richness and complexity of the social milieu in which

this work has developed, it is a great pleasure to single out some of the individuals

whose participation was especially significant.

My expression of gratitude must begin with my dissertation committee.

Brian Swimme, in addition to acting as advisor on this dissertation, has been a

principal intellectual mentor throughout my years in the Philosophy, Cosmology,

and Consciousness program at CIIS. His extraordinary writing and teaching have

helped me to awaken to the unfathomable creativity of the universe and to

recognize the movements of cosmogenesis expressing themselves in my own life.

I am also profoundly indebted to Susan Oyama, whose insights form the

conceptual foundation of this work. I have benefited immeasurably not only from

her exhilarating writings, but also from our many long, fruitful, and exceedingly

enjoyable conversations. She has been consistently generous with her time and

energy, and there is no question that, if this dissertation has anything positive to

offer, she deserves a great deal of the credit. I am also grateful to Alfonso

Montuori, whose reading and thoughtful comments have reminded me of the

larger context in which I wish to situate my work.

In addition, I am deeply indebted to my dissertation support group, whose

members have included Linda Gibler, Rod O’Neil, Jacob Sherman, Marc Slavin,

John Taylor, and John Wilkinson. The weekly dinners, prepared with grace and

elegance by John Taylor, provided an incomparable combination of culinary and

intellectual nourishment. And each of the individuals mentioned, through their

reading of my work and my reading of theirs, has left an indelible mark on this

document and on my intellect.

Of course, this project would have been all but impossible without the

support of my beloved friends and family, whose confidence in my capacity to do

this work has consistently exceeded my own. Their encouragement, support,

patience, and love are woven through every sentence. I would like especially to

mention Galen Hamilton, Toni Holliday, Toni Nash, and Steve Ryan, each of

whom spent untold hours generously putting up with my self-indulgent rambling.

Finally, I would like to offer a special note of gratitude to Kerry Brady for her

wise counsel in connection with the personal dimension of my journey.


TABLE OF CONTENTS

Abstract .................................................................................................................. iv

Acknowledgements ................................................................................................ vi

Chapter 1: Introduction ........................................................................................... 1

Historical Background........................................................................................ 7

The Problem ..................................................................................................... 22

Overview .......................................................................................................... 26

Chapter 2: The Prehistory of the Life Sciences..................................................... 35

The Received View of Hereditary Resemblance.............................................. 35

The Scientific Revolution................................................................................. 42

Generation Theory............................................................................................ 47

Preexistence and its Discontents ...................................................................... 51

Conclusion........................................................................................................ 63

Chapter 3: The Ontogeny of Inheritance............................................................... 65

Natural History: New Practices, New Distinctions, New Problems ................ 67

Global exchange and botanical classification.............................................. 67

The natural history of man........................................................................... 72

Agriculture: The Contribution of Selective Breeders....................................... 76

Medicine: From Hereditary Malady to Human Heredity ................................. 81

Establishing a conceptual domain ............................................................... 84

Competing physiologies and conceptual refinements ................................. 90

Conclusion........................................................................................................ 97

Chapter 4: The Integration of Biology and Inheritance ...................................... 100



The Four Hereditary Syntheses ...................................................................... 100

Biology Develops (1800-1850) ...................................................................... 104

The First Hereditary Synthesis ....................................................................... 113

The Second Hereditary Synthesis................................................................... 117

The Third Hereditary Synthesis ..................................................................... 121

The Fourth Hereditary Synthesis.................................................................... 128

Conclusion...................................................................................................... 130

Chapter 5: Developmental Dualism and the Constructionist Challenge............. 132

Grappling with Information............................................................................ 133

Conceptualizing genetic information......................................................... 134

Disputing genetic determinism.................................................................. 140

Development as Construction: Developmental Systems Theory and the

Interactionist Consensus................................................................................. 150

Lehrman and developmental psychobiology............................................. 152

Lewontin and dialectical biology .............................................................. 156

Intentionality and preformationism ........................................................... 159

Development as semi-reliable happenstance ............................................. 164

Development as construction..................................................................... 169

Conclusion...................................................................................................... 174

Chapter 6: Evolutionary Dualism and the Constructionist Challenge ................ 176

Extending Inheritance..................................................................................... 177

Rethinking Inheritance and Evolution............................................................ 182



Causal parity and evolutionary explanation .............................................. 183

The extended replicator ............................................................................. 191

The evolutionary developmental system vs. the extended replicator ........ 201

Conclusion...................................................................................................... 209

Chapter 7: Re-rethinking Evolution and Inheritance .......................................... 210

On the Costs and Benefits of Extending Inheritance...................................... 212

Inheritance, replication, and ambiguity ..................................................... 213

Parts, wholes, and context-dependence ..................................................... 217

Evolution without Inheritance ........................................................................ 224

Taking construction seriously.................................................................... 226

Evolving interactive networks ................................................................... 235

Conclusion...................................................................................................... 248

Chapter 8: Recapitulation.................................................................................... 250

References ........................................................................................................... 257



Chapter 1: Introduction

Contemporary attempts to bring biological evolution and ontogeny into a

productive theoretical synthesis are constrained by a legacy of preformationist

reasoning. This legacy is embodied in what I am calling the inheritance paradigm

and descended from the metaphysical dualism of the Scientific Revolution. The

inheritance paradigm is a general approach to biological reproduction that begins

by classifying the various phenomena of intergenerational resemblance, such as

the stability of species, the idiosyncratic similarities between parents and

offspring, and the mixing of traits in hybrids, under the single principle of

heredity based on an analogy with wealth transmission. The phenomena

associated with heredity are then attributed to the transmission of formative

causes (usually genes) from parents to their offspring. The origins and enduring

prestige of the inheritance paradigm are due in part to the continued dominance of

the cosmology[1] ushered in by the Scientific Revolution. This cosmology, which is

characterized by the withdrawal of spontaneity and concrete temporality from the

material world, sustains a metaphysical dualism in which the generative dynamics

responsible for formation must remain outside the strictly material realm, in the

mind of God (or the human), in the laws of nature, or in a fetishized

macromolecule. The inheritance paradigm is therefore structurally preformationist

because inheritance-based models focus on the transmission and expression of

forms, which are assumed, in some sense, to preexist.

[1] I am using the term cosmology here to refer to an entire network of
epistemological and ontological assumptions about the natural world.

The term preformationism is typically associated with the preformation-

epigenesis debates that characterized eighteenth century generation theory and

resurfaced in late nineteenth century embryology. Although I am adopting the

word from this context, I conceive it quite broadly, as a fundamental mode of

reasoning in which form, also understood broadly, is tacitly assumed to exist prior

to, or apart from, its concrete materialization. In biology, preexisting forms are

presupposed by metaphors such as genetic information and the ecological niche

(considered apart from its occupant). This mode of reasoning allows the hard

problems of organic formation to be deferred, while strictly atomistic and

mechanistic explanations are granted more credence than they warrant.

Preformationism, in the sense I am using it here, is closely related to

atomism in classical physics. Indeed, in a very general sense, preformationism can

be seen as a consequence of the attempt to place the life sciences on an

epistemological par with the physical sciences. For classical physics, it is possible

to explain a system’s behavior in terms of the intrinsic capacities for mechanical

interaction possessed by the smallest elements constituting the system. These

irreducible elements, or atoms, are characterized by primary qualities, such as

mass and momentum, which determine their activity in relation to other atoms

moving through infinite Newtonian space. The behavior of such abstract units is,

by definition, uniform and fully predictable, both forward and backward in time,

and the behavior of a system, however complex, is always simply the sum of the

behavior of the atoms composing it. An additional aspect of this cosmology is that

the concept of time is transformed into what Bergson (1911) called abstract time.

Here, time is merely an index used to distinguish various instants, where the only

possible differences in the universe are in the spatial arrangement of its

unchanging atoms. This completes the picture of the world as machine. The

fundamental methodology for understanding such a world is, of course,

reductionism. In order to understand any particular system, we must simply

analyze it into its constituent elements and identify the intrinsic properties of

those elements. Once this is complete, we can, in principle, calculate every detail

about the past and future of the system.

Preformationism in biology, then, is essentially analogous to atomism in

physics. Living systems, qua living systems, are understood to be ultimately

composed of functional structures, and these forms (or the genes that represent

them) are treated as the basic units of biological analysis. They contribute to

differential fitness, are transmitted from parents to offspring, and are reliably

reproduced in development. As a consequence of this emphasis on the distribution

and transmission of preexisting forms, the complex and contingent processes by

which forms are actually generated are systematically marginalized, allowing both

evolutionary and developmental biology to be situated within a generally

mechanistic framework.

The tacit preformationism embodied in modern biology is responsible for



the familiar dichotomies, such as mind-body, nature-nurture, vitalism-mechanism,

holism-atomism, and rationalism-empiricism, which turn up in various life

science discourses. Efforts to pinpoint the ultimate sources of preexisting form

invariably give rise to questions that recapitulate these standard oppositions: Can

a living process, with its evident coherence and apparent purposiveness, be

explained mechanistically, or is some vital principle needed? Can the holistic

properties of a living system be reduced to the properties of its preexisting parts,

or must we refer to some unanalyzable whole that exists prior to the parts? Is an

individual influenced more by the qualities inherited from its ancestors or by

its own life experiences? For each of these questions, the tacit assumption that

form must somehow already exist results in an unresolvable dispute about its

ultimate origins. Preformationist reasoning dominated eighteenth century

generation theory; it reappeared in late nineteenth century embryology, and it

remains alive today as a tacit feature of the inheritance paradigm.

This dissertation is also about an alternative constructionist mode of

reasoning. The constructionist approach endorsed by this dissertation seeks to

overcome the preformationism that haunts biology by drawing attention to the

constructive interactions by which form is realized in concrete time. As has been

noted many times, living systems (as well as many complex physical systems) are

fundamentally different from the systems described by classical physics. The

parts of a living system exist as thoroughly interdependent, essentially historical

entities, bound up with the structure and function of the system as a whole.

Concrete time is an essential element of the irreversible transformative and

integrative processes that characterize living systems. As a consequence, when we

apply a physicalist epistemology to organic phenomena, the problems of

formation tend to become more or less intractable. Whereas the exclusion of

temporality robs matter of its formative activity and forces theories to rely on

preexisting forms, the constructionist alternative embraces the fundamentally

dynamic nature of material systems and directs our attention to the actual

processes through which organic structure and function are generated and

regenerated.

Although the constructionist attitude is actually quite old (as I discuss

below), the metaphor of construction is due to Richard Lewontin (1983b), who

introduced it in an effort to counter the tendency of extant biological metaphors to

divide causality into its so-called internal and external components, usually genes

and environments. His construction metaphor is intended to highlight the

dialectical relation between internal and external causes. A parallel argument has

been put forward as part of developmental systems theory (Griffiths & Gray,

1994; Oyama, 1985, 2000b; Oyama, Griffiths, & Gray, 2001). In critiquing the

persistence of the nature vs. nurture dichotomy, Susan Oyama (1985) also

identified the tendency of biologists to rely on internal and external causes while

neglecting their interdependence. Indeed, it was she who resurrected the term

preformationism to characterize the ways in which many so-called interactionists

treat internal and external causes as alternate sources of form.



The themes of preformationism and constructionism are considered in this

dissertation specifically in the context of the inheritance paradigm. I explore the

ascent of biological inheritance as a conceptual space in which general questions

related to the causal links between parents and offspring could be explicitly taken

up for the first time. As the conceptual domain was consolidated and its structure

elaborated, inheritance provided nineteenth century life scientists with a

fundamental biological principle on which to construct a thoroughly mechanistic

and deterministic approach to organic form. The concept of biological inheritance

emerged as a new source of formal and final cause that did not explain formation

so much as defer the question and replace it with two other questions: how is form

transmitted from parent to offspring, and how does this form then become

realized during embryogeny. To the extent that the essential causes of form are

inherited, they necessarily preexist their emergence in ontogeny. The illusion of a

perfectly mechanical world is thus sustained by an appeal to unanalyzable internal

causes and inaccessible historical ones.

I endorse a constructionist approach to biological theory, which does not

rely on the inheritance paradigm to explain the dynamic production and

reproduction of organic form. I draw support for this view particularly from

Oyama (1985, 2000b), who has consistently argued that intergenerational

resemblances need to be given fully developmental explanations. The terms

developmental and developmentalist, in this context, specifically refer to

approaches that attempt to explain formation in terms of the constructive



interactions through which it is generated rather than by appealing to the

inheritance of forms or special formative causes. The preformationist conception

of inheritance, as embodied in genetics, has, for too long, marginalized

developmental questions and impoverished both domains. From a constructionist

perspective, we have not accounted for an organic form, whether morphological

or behavioral, unique or lineage-typical, until we have explained its actual

formation.

Historical Background

In the seventeenth and eighteenth centuries, natural philosophers who

appealed to the self-organizing powers of matter to explain formation were liable

to be accused of atheism (and materialism) by those who regarded the attribution

of intrinsic dynamism to matter as threatening to the absolute sovereignty of God

(Grene & Depew, 2004). By the mid-seventeenth century, the connection between

form and matter that characterized Aristotle’s cosmos had been irrevocably

ruptured, and matter had been reconceived as formless and inert. This produced a

basic metaphysical duality between the realm of matter and the realm of mind.

The latter was closer to the divine, of course, and served as the implicit or explicit

source of form. As time went on, some of the sovereignty that had been reserved

for God was delegated to the universal laws of nature, which of course He had

fashioned. Although this move enabled theories to become more naturalistic, the

basic duality was preserved, and matter remained essentially formless. As a result

of this conception of nature, the biological theories that have gained acceptance

are those that leave the origins of form effectively unanalyzed and unanalyzable.

The reliance on dualistic and preformationist reasoning remains a

dominant feature of scientific rationality. Yet, I would suggest that, for most of

the modern period, beginning perhaps as early as William Harvey’s work on

animal generation, a constructionist perspective has coexisted with the dominant

preformationist mode of reasoning. I classify this counter-movement as

constructionist to underscore its concern with processes of formation and to

suggest a conceptual line of descent leading to the constructionist movement in

contemporary philosophy of biology (Gray, 1992). The actual representatives of

this lineage would be exceedingly difficult to identify definitively because the

history has been written, for the most part, from the dominant perspective. It

would be particularly difficult for mainstream historians to tell the constructionist

story because, from the conventional point of view, scientific conflicts have

typically been framed as conflicts between mechanists and vitalists or atomists

and holists. Given this narrative legacy, genuine constructionists tend to be

mischaracterized as unscientific vitalists or obscurantist holists. Therefore,

unfortunately, to do justice to the story of constructionist science would require a

detailed historical exegesis, which would exceed both the scope of this document

and my own competence.

Despite these historiographic difficulties, it is possible to identify at least

two individuals whose formative contributions to constructionist thought are both

unambiguous and formidable, and whose ideas have continued to have currency

for constructionist thinkers up to the present day. These figures are Immanuel

Kant and Johann Wolfgang von Goethe. Kant, as both a thoroughgoing

Enlightenment thinker and a peerless critic of its philosophical excesses, clearly

articulated the ontological and epistemological limitations inherent in Newton’s

mechanical philosophy. His epistemology, though dualistic, was at least

nominally interactionist. Knowledge about the world, for Kant, required an

interaction between unstructured sense data and the a priori categories of the

imagination. In addition, he also called into question the corpuscular theory of

matter and proposed an alternative ontology based on a dynamic interplay of

forces (Lenoir, 1981). In the Second Antinomy, in Critique of Pure Reason

(1781/1929), he deepened his challenge to standard ontology, demonstrating that

wholes and parts must be understood to co-exist in a dialectical relation because

neither can be accorded ontological priority by the faculty of reason. Finally, in

Critique of Judgment (1790/1951), Kant famously argued that blind mechanical

laws could never suffice for understanding natural purposes (organisms) because

the latter are organized and self-organizing beings (p. 220). These insights

effectively set the stage for the emergence of biology in the German-speaking

states, and have influenced, directly or indirectly, practically all subsequent

biological thought.

Goethe is one of the most significant proponents of a fully constructionist

alternative to the metaphysics of the Scientific Revolution. As surely as Kant

represented the rationalism of the Enlightenment, Goethe embodied the



unrestrained ambitions of Romanticism. He explicitly rejected the dualism of

Kant’s epistemology and argued that full knowledge of nature can only emerge by

way of a profound participatory engagement with natural forms (Tarnas, 1991, p.

433). While this epistemological stance calls to mind the subjectivism of the

Naturphilosophie movement, there is a crucial difference. According to Zajonc

(1998), while the Naturphilosophen followed this general principle into a full-

blown idealism, Goethe remained thoroughly committed to observation and to the

interdependence of subject and object (p. 18). Knowledge, for Goethe, was like

organic form itself, emerging within the intimate relation between observer and

observed. Furthermore, his conception of organic form emphasized

transformation over static structure. “We must,” he wrote, “take into

consideration not merely the spatial relations of the parts, but also their living

reciprocal influence, their dependence upon and action on one another” (quoted in

Russell, 1916/1982, p. 48). Goethe believed that organic form, whether in the

growing plant or in the imagination of the observer contemplating the plant,

emerges through dialectical interaction. This is the hallmark of the constructionist

perspective.

Although the insights of Kant and Goethe were, in many ways,

foundational for the development of biology early on, as I argue in Chapter 4,

constructionist thought was in retreat throughout the second half of the nineteenth

century and well into the twentieth century. Early nineteenth century biology was

nominally constructionist, inasmuch as it emphasized epigenesis and



morphology, but a new emphasis on reductionism and mechanism gradually came

to dominate the life sciences, bringing with it a new preformationism. By the mid-

twentieth century, classical and population genetics were firmly established, and

most branches of biology had been consolidated into the modern neo-Darwinian

synthesis. Biological evolution, for the synthesis, was defined as changes in the

gene frequencies of populations. The developmental mechanisms responsible for

transforming inherited genotypes into adult phenotypes were regarded as

sufficiently reliable and stable that they could be ignored, and heredity could be

fully explained in terms of genetic transmission. As a result, natural selection,

along with a few other population-level dynamics affecting the gene pool, were

considered sufficient to explain all evolutionary change.

The synthesis, of course, left unanswered some basic questions related to

embryology, but these matters remained marginal. A small cadre of embryologists

led by C. H. Waddington made a valiant effort to keep embryological questions

from being entirely ignored. Drawing on his interest in Whitehead and process

philosophy, Waddington (1975) developed his diachronic biology, which

emphasized the temporal dimension of the developmental system. Along with

other developmentalist theorists, such as Richard Goldschmidt and I. I.

Schmalhausen, Waddington maintained that evolution must depend on changes in

the epigenetic processes responsible for formation. This principle is exemplified

in the idea, developed in parallel with Schmalhausen, that some developmental

pathways are substantially buffered against genetic and environmental



fluctuations (Grene & Depew, 2004).

In the development of the disciplines of animal and human psychology,

the theoretical pendulum has swung a number of times between extremes of

internalism and externalism, recapitulating a tension between rationalism and

empiricism that can be traced to Descartes and Bacon in the modern era, and Plato

and Aristotle among the ancients. When psychology originated as a scientific

discipline in the late nineteenth century, it embodied a decidedly internalist

perspective, with introspection constituting one of its principal methodologies

(Wozniak, 1997). As explained by Johnston (2001), a number of the field’s early

authors proposed “evolutionary” theories that attempted to account for many of

the behaviors and mental capacities of humans and animals in terms of instinct.

These theories came under attack in the 1920s as part of a broader backlash

against what had come to be seen as an excessive focus on internal processes and

states. The counter-movement that emerged eventually gave rise to the so-called

“environmentalist” orientation of behaviorism and stimulus-response psychology.

In addition to the overt externalism of these approaches, however, there were a

few individuals whose calls for a rejection of the internal-external opposition

could be characterized as constructionist. Zing Yang Kuo, for example, though

generally associated with the new behaviorism (Griffiths, 2004), anticipated

modern constructionists in arguing that “the sharp distinction between inherited

and acquired responses should be abolished” (quoted in Johnston, 2001, p. 16).

He claimed that a reliance on heredity leads theorists to neglect the problems of



behavior formation.

Instinct theory reemerged a decade later in a more sophisticated form. In

the mid-1930s, Konrad Lorenz founded ethology as a Darwinian research

program that focused on the reliable appearance of species-typical animal

behaviors, such as maternal imprinting. Lorenz claimed that these behaviors are

innate, but, in contrast to previous instinct-based theories, he insisted on a sharp

discontinuity between innate and acquired behaviors, emphatically denying that

the former could arise from the latter (Griffiths, 2004). The development of

ethology is significant to the history of constructionism because the widespread

influence of Lorenz’s writings motivated Daniel Lehrman to produce a seminal

critique of the distinction between innate and acquired behaviors. Lehrman (1953)

showed in detail why explanations that resort either to instinct or learning in

explaining the acquisition of behavior fail to capture the developmental origins of

even relatively simple behaviors. The ethologists’ “preformistic” assumptions

about maturation, according to Lehrman, “short-circuit” the investigation into

how the behaviors actually develop.

There is some disagreement about Lehrman’s influence on his

contemporaries. While Timothy Johnston (2001) has suggested that Lehrman had

little direct impact on the discourse, Paul Griffiths (2004) has argued that

Lehrman’s paper helped to dislodge the Lorenzian instinct concept among British

ethologists. Nevertheless, what is certain is that his constructionist approach to the

development of behavior is a principal intellectual forerunner to the



developmental systems critique of the nature vs. nurture controversy.

Notwithstanding the perturbations of Waddington, Lehrman, and other

constructionist thinkers, the preformationist juggernaut of twentieth century neo-

Darwinism continued to consolidate itself around the gene. A new threshold in the

ascent of the gene was crossed with the publication of Richard Dawkins’ The

Selfish Gene (1976) and E. O. Wilson’s Sociobiology: A New Synthesis (1975).

With these two landmark works, the gene was granted an unprecedented level of

causal primacy in behavioral and social development. Wilson’s ambitions were

nothing less than the explanation of all aspects of human culture in terms of the

natural selection of genes. By attributing complex social interactions, such as

warfare, religion, and entrepreneurship to naturally selected, genetic

predispositions, he suggested that all the social sciences might eventually be

assimilated into the modern evolutionary synthesis. It is well known that

Dawkins’ anthropomorphized, omnipotent gene and Wilson’s emphasis on the

biological basis of human nature generated a healthy amount of controversy.

What I wish to emphasize here is that the preformationist ambition embodied in

Wilson’s and Dawkins’ works provided a key motivation for the emergence of the

constructionist approaches that inform the present work. One of the early and

influential responses to the gene-centrism represented by Dawkins and Wilson

was Stephen J. Gould and Richard Lewontin’s 1979 paper, “The Spandrels of San

Marco and the Panglossian Paradigm.” This work did not address Dawkins’ and

Wilson’s theories explicitly, but instead challenged what Gould and Lewontin

referred to as the atomism and adaptationism of orthodox evolutionary thought.

According to Gould and Lewontin, evolutionary theory was in the grip of a style

of reasoning (the panglossian paradigm) that assumes that every detail of

organismic structure and function can be explained in terms of its distinct

selective advantage. Although, in that paper, sociobiology only warrants a passing

mention, the relevance to it of their critique of adaptationist reasoning was

transparent enough.

The same point about the uncritical reliance on natural selection had been

made earlier by George Williams (1966). Williams had cautioned against seeking

adaptationist explanations for traits that might be explained without the onerous

appeal to natural selection. His argument was specifically intended to challenge

the reliance on group selection in explaining social behaviors that could be

explained in terms of individual advantage. This line of reasoning led him to

conclude that the most parsimonious explanations of all would be those that refer

to competition among individual genes. One of the points made by Gould and

Lewontin (1979) is that Williams’ (1966) readers had only gotten half the

message. They had followed Williams’ advice to avoid group-level explanations,

but, in their enthusiasm to focus on the lowest conceivable units of selection, they

had forgotten Williams’ caution that adaptation should only be invoked as a “last

resort,” when “less onerous principles” are not sufficient (1966, p. 11). According

to Gould and Lewontin, this is precisely what many theorists were doing; these

theorists were atomizing organisms into arbitrarily many independent traits and

seeking distinct adaptationist explanations, or “just-so stories,” for the

propagation of the individual genes associated with each trait. Worse yet, Gould

and Lewontin argued, when one adaptationist explanation failed, many theorists

would respond by simply posing a different one, rather than by questioning their

initial assumption that the trait is an adaptation. Recognizing that Williams’ call

for caution had been unheeded, Gould and Lewontin went a step further, and

suggested that the entire “adaptationist programme” needed to be rethought.

In addition, Gould and Lewontin (1979) proposed an alternative approach

to explaining phenotypic form. They suggested that some significant aspects of

organismic structure should not be considered candidates for adaptationist

analysis because they are actually integral to the organism’s overall

developmental architecture. Reaffirming Darwin’s (1859/1968) recognition of the

“unknown laws of growth,” Gould and Lewontin called for evolutionary biology

to pay greater attention to the ways in which evolution is influenced by

developmental constraints and other morphological principles. This advice was

not exactly welcomed. The modern synthesis had more or less institutionalized

the definition of organic evolution in terms of population-level genetic change. It

was (and is) far from obvious how considerations of individual development, of

the precise dynamics and processes by which structure and function are reliably

regenerated, can be integrated into the population-based approach of the modern

synthesis (Amundson, 2005).

In the years since Gould and Lewontin’s “Spandrels” paper (1979), the

reintegration of development into evolutionary theory has become an

increasingly important issue for philosophy of biology. The 1980s witnessed a

great deal of discussion surrounding the issues of evolution and development.

Historians noted the omission of embryology from the synthesis, and theorists on

all sides wondered how this might be reconciled. Although some considered this

challenge to constitute a full-blown crisis for the modern synthesis (e.g.,

Goodwin, 1994; Gould, 1980), advocates of the orthodoxy mounted a vigorous

defense. In the end, the synthesis incorporated the notion of developmental

constraints, without significantly revising its fundamental overall explanatory

framework (Amundson, 1994). Developmental dynamics were admitted as a

potential constraint that precludes certain adaptive possibilities; cheetahs will

never evolve a fifth leg even if it might make them faster. This modest concession

allowed natural selection to retain its theoretical preeminence as the main cause of

evolutionary change.

As Ron Amundson (1994) has explained, however, the notion of

developmental constraint accepted by synthesis theorists was only one of two

meanings of constraint invoked in Gould and Lewontin’s (1979) discussion. This

sense of developmental constraint, which Amundson called a constraint on

adaptation, can be used to explain why an organism’s functional design is less

than optimal. The primary explanandum is still, on this view, adaptation. What the

synthesis still has not taken into account, however, is the notion of developmental

constraints on form. Here, as Amundson has pointed out, constraint is not



intended to explain anything about the organism’s adaptive design; it is intended

to explain structural features that may be entirely independent of fitness. Indeed,

the integration of such morphological constraints requires the synthesis to take

seriously the evolutionary significance of individual ontogeny.

According to Amundson (2005), it is difficult for neo-Darwinian

evolutionary theory to integrate developmental morphology precisely because the

two domains have different explanatory aims. While the synthesis seeks to

explain changes in functional design, morphology seeks to explain the

ontogenetic production of complex structures. This is perhaps true. However,

although Amundson was content to accept each theory as more or less sufficient

within its own explanatory domain, it may be the case that the puzzle requires us

to question the fundamental assumptions underpinning the theoretical framework

of the modern synthesis. This is precisely the approach taken by constructionists.

During the developmentalist debates of the 1980s, Lewontin and Oyama

published what have become the foundational works for the contemporary

constructionist movement in philosophy of biology. Lewontin (1982, 1983b)

presented a penetrating analysis of the metaphors associated with evolution and

ontogeny and proposed that construction is a more appropriate and general

metaphor for biological processes because it expresses the role of the organism as

both subject and object of evolutionary and developmental processes. Oyama

(1985) exposed the preformationism and vitalism that lurk beneath the apparent

materialism of information metaphors. She documented the ways in which the



long-rejected opposition of nature and nurture is tacitly perpetuated by approaches

in which the outcomes of development are imagined to somehow preexist

ontogeny. In addition, she showed that the arguments typically cited to secure a

privileged informational role for the genome rely on a question-begging double

standard. Oyama’s deconstruction of semantic information in biology is discussed

in detail in Chapter 5.

A principal commonality between Lewontin’s (Levins & Lewontin, 1985)

dialectical biology and Oyama’s (1985) developmental systems approach is that

both reject the simplistic, linear conception of causation that characterizes much

conventional scientific reasoning. Both Lewontin and Oyama call attention to the

ways in which causation, whether framed in terms of the internal and the external,

as in the nature-nurture debates, or in terms of different scales of time and space,

as in the segregation of evolution and development, must be understood as

ultimately reciprocal and complexly interdependent. Constructionist thought,

whether based in systems thinking or dialectics, recognizes that causes do not

exist in complete independence from their effects, and the dynamics of systems

are constructed by systemic interactions in concrete time. The essential

characteristic of the constructionist conception of causality is that it reintegrates

concrete temporality into our conception of living systems. Whereas conventional

scientific reasoning admits only abstract time and must therefore neglect

formation, constructionist reasoning recognizes that living systems are

fundamentally dynamic processes of formation and transformation.



Constructionism provides biological theory with a way to overcome the

deeply entrenched assumptions that sustain the segregation of evolutionary and

developmental processes. At the heart of the segregation of evolution and

development is the inheritance paradigm. As I will explain in detail in the

chapters that follow, the conception of inheritance as a distinct category of

biological causation is not only a fundamental feature of evolutionary and

developmental explanation, but also to the tradition of giving each processes an

independent, though complementary explanation. For this reason, constructionist

approaches often must directly confront standard assumptions about heredity and

genetics. Laland, Odling-Smee, & Feldman (2001; Odling-Smee, Laland, &

Feldman, 2003), for example, have developed a model of evolution based on the

construction of organism-environment systems. Their niche construction theory is

based on a dual-channel system of inheritance, which relies on genetic

transmission to explain ordinary phenotypic inheritance, but adds an ecological

channel to explain the inheritance of functional organism-environment

relationships (ecological niches).

A number of theorists associated with the constructionist movement go

further, directly challenging the conventional understanding of genetic

inheritance. Eva Jablonka (2001) and Jablonka and Marion Lamb (2005) have

argued that genes are not the only cellular structures capable of transmitting

hereditary variation. They describe epigenetic mechanisms, such as stable

cytoplasmic states and DNA methylation, which are transmitted to daughter cells

during reproduction and may in some cases be transmitted between generations.

They claim, provocatively, that the existence of epigenetic channels of inheritance

might facilitate the inheritance of some acquired characteristics. Eva Neumann-

Held (2001), meanwhile, argues that the traditional gene concept, on which the

assumption of information transmission is based, does not hold up in light of

recent molecular biology. She describes how the causal roles of DNA and other

functional molecular sequences are literally constructed in real time, based on the

immediate functional requirements of the cellular system.

Finally, Paul Griffiths and Russell Gray (1994, 1997, 2001) have proposed

what they argue is a fully constructionist theory of evolution based on

developmental systems theory. Their approach also relies on an alternative

conception of inheritance, but it goes beyond the extended inheritance proposed,

for example, by Laland et al. (2001) and Jablonka and Lamb (2005). Griffiths and

Gray begin with the idea of the developmental system defined as the entire set of

developmental resources involved in the ontogeny of an individual. However, in

order to delineate a manageable unit of evolution, they redefine the developmental

system based on an extended model of inheritance. Here, the evolutionary

developmental system is defined as including all those developmental resources

responsible for the production of lineage-typical phenotypes.

Griffiths and Gray’s (1994, 1997, 2001) model is quite complex, and, as I

discuss it in detail in the final chapters of the dissertation, I will not attempt to

elaborate it further here. I will simply note that, although they have accomplished

something formidable and worthwhile, in their effort to achieve a rapprochement

with conventional evolutionary explanation, they have departed in crucial ways

from the original conception of developmental systems theory presented by

Oyama (1985). One of the principal aims of this dissertation is to make the

conceptual difference between Griffiths and Gray’s and Oyama’s versions of

developmental systems theory explicit in order to reassert the more radical

implications of the latter. I believe a genuine engagement with the problems of

formation is essential for a fully constructionist integration of individual

development into evolutionary biology.

The Problem

Since the Scientific Revolution, the laws of physics have served as the

ideal for positive knowledge about nature. Legitimate scientific explanation is

typically expected to follow a Cartesian method of decomposing systems into

simple physical constituents on the assumption that the properties and

potentialities of the component elements ultimately prefigure the characteristics of

the larger whole that they form. While this reductionist methodology is an

indispensable tool in many areas of inquiry, it is inadequate as a metaphysical

principle. While metaphysical atomism is relatively harmless for the sorts of

systems that interested Galileo and Newton, its biological cousin,

preformationism, has never been adequate for understanding the fundamental

nature of living systems.

In contrast to the causal factors associated with linear physical



interactions, the principles that govern living processes are constructed by the

systems themselves. As material systems become more highly organized, system-

specific ordering principles come into being. Higher integrative levels are, in a

sense, constrained by lower levels, including the physical, but they are not strictly

reducible to them, and as new levels emerge they generate structural and

functional possibilities absent from lower levels. These emergent properties must

not be confused with special vital principles, however; the point is to recognize

that, given concrete time, on evolutionary as well as ontogenetic scales, the

interactive dynamics of material systems are capable of producing complexly

organized bodies and constructing the conditions responsible for their reliable

reconstruction and their ongoing transformation. There is no need to posit either

an external source of design or an internal repository of information, for, as I shall

argue in the chapters that follow, the interactive dynamics and distributed

interdependencies of material networks are entirely sufficient to “remember” the

patterns of organization that constitute developmental systems.

The reductionist impulse is deeply ingrained in scientific practices and

institutions. And there is no question that the mechanistic reductionist strategy of

deferring the problems of formation has produced huge advances in our

understanding of living processes at ever smaller scales and with ever greater

precision. At the same time, there remains a lacuna at the heart of biology. After

almost four centuries of sustained inquiry, we still lack a general theory of living

systems that encompasses both ontogeny and evolution. I contend that this

difficulty endures, to a great extent, because development, both morphological

and behavioral, continues to be framed in terms of an inheritance paradigm that

relies on genes as a transmission medium for preexisting form (information),

which is then expressed in ontogeny. Practicing biologists do not actually believe

that DNA encodes every detail of the organism. That sort of crude genetic

determinism is widely disclaimed. Ontogeny is typically said to involve

interaction among genes and between genes and environments. Yet, theoretical

frameworks based on the inheritance paradigm still treat development as a

dichotomous process involving two essentially distinct kinds of causes. Genes, on

this view, carry information that represents the phenotype, while the environment

merely provides supporting or interfering conditions. While not exactly

determinist, this standard view of interaction remains substantially committed to a

preformationist conception of development that marginalizes the constructive

interactions that are ultimately responsible for the production of living bodies, at

all spatial and temporal scales.

This dissertation argues that the inheritance paradigm is incompatible with

the goal of integrating individual ontogeny into evolutionary theory because the

very dynamics of formation that unite the two domains in actual living systems

are neglected due to the preformationist structure of the paradigm. Indeed, as I will

explain in detail, the segregation of evolution and development is secured by the

logical structure of the paradigm itself. I show that the inheritance paradigm, far

from being merely the formalization of a natural category, was actually



constructed over a period of centuries. In the first half of the dissertation, I trace

the emergence of the inheritance paradigm as it was gradually constructed in the

separate but interdependent domains of natural philosophy, natural history,

agriculture, and medicine. In advance of this process, there was no general

category encompassing hereditary disease, hereditary resemblance, hybridism,

and species stability. Instances of biological inheritance first had to be perceived

as puzzles to be solved; then these diverse questions had to be consolidated into a

single structured concept; and finally this concept could gradually be integrated

into biology as a general explanatory category.

In the second half of the dissertation, I examine some recent philosophical

controversies concerning the role of inheritance in biological explanation,

focusing on challenges raised by Oyama (1985) and other proponents of DST.

Oyama shows that there is no stable justification for the double standard that

ascribes to genes and genes alone the capacity to inform development. Some

authors have argued that inheritance should therefore be extended to include all

the factors that are responsible for the reliable reconstruction of form (Griffiths &

Gray, 1994). I interpret Oyama’s critique of conventional inheritance differently.

It seems to me that DST, as originally formulated by Oyama, attributes

intergenerational resemblance to the stability of developmental systems, rather

than to a category of inherited factors, however extended. I argue that DST’s aims

would be better achieved by replacing the inheritance paradigm with an

interactive network paradigm that emphasizes the constructive interactions that



are actually responsible for the generation and transformation of organic form. By

making construction rather than transmission the touchstone for understanding

living processes, this reorientation of biological theory reveals the genuine unity

of evolutionary and developmental processes.

Overview

This dissertation includes a historical and a conceptual deconstruction of

the inheritance paradigm. In the first half of the work (Chapters 2-4), I present a

historiography of inheritance that tracks the evolution of the inheritance

paradigm, beginning at the dawn of modern science and culminating in the

consolidation of genetics in the early twentieth century. This study shows how the

concept of heredity was constructed, especially within the medical and

agricultural domains, and then integrated into biological thought in a series of

stages, which gradually transformed evolutionary biology into a mechanistic

science of variation and selection from which embryology was effectively

excluded. In the second half (Chapters 5-7), I engage some recent debates about

the role of inheritance in biological explanation. I defend the developmental

systems approach as articulated by Oyama (1985, 2000b), which calls into

question the orthodox partition of development into formative, genetic causes and

supporting or interfering, environmental ones. I evaluate Griffiths and Gray’s

(1994) attempt to reconcile DST with Darwinian evolutionary explanation, by

way of an extended model of inheritance. Although a worthy achievement, I

conclude that, for a genuinely constructionist integration of evolutionary and



developmental biology, the inheritance paradigm should be replaced by an

interactive network paradigm that emphasizes the constructive interactions

responsible for form at all levels of biological organization.

I employ slightly different methodologies in the first and second halves of

this dissertation. The first half consists of a historiography of the inheritance

paradigm. I examine the contingent historical-developmental origins of heredity

as a structured biological concept. Though the historiographic methodology I

employ is fairly standard, I wish to emphasize its specifically constructive aspect.

It is an accepted principle in the history of science that we must not evaluate

scientific ideas from previous eras in light of current knowledge, or imagine

(whiggishly) that science has always been advancing inexorably toward the

present. Still, as French (1994) points out, the temptation to discern the seeds of

current ideas in past eras often leads to what he calls genetic historiography. To

emphasize my rejection of such preformationist historical reasoning—not to claim

any originality for it—I describe the approach adopted here as constructionist

historiography. In the same way that a developmental system, at any given stage

of its ontogeny or evolution, must be treated in terms of the functional and

structural properties that characterize that stage, we must do our best to evaluate

ideas and concepts in terms of the historical context in which they are embedded.

The goal is not simply to explain their propagation, as if their meanings were

independent of the dynamics of history, but rather, to understand the network of

concepts in which they functioned and the sequences of interactions through



which they were formed, reformed, and transformed.

In the second half of the dissertation, I engage contemporary debates in

philosophy of biology with a more or less conventional analytical approach,

except that I attempt to maintain a thoroughly constructionist perspective. My

orientation toward scientific explanation is antipreformationist, and so is my view

of knowledge in general. This constructionist approach to knowledge, based as it

is on the reconciliation of form and matter in concrete time, is ultimately pluralist,

not only in its epistemology, but also in its ontology, since the two are, in the end,

fundamentally interdependent. Also, I am using the (perhaps unexpected) word

constructionist to distinguish the perspective of this work from the controversial

approach to scientific knowledge called social constructivism, as the latter tends

to be associated with various degrees of antirealism and radical skepticism about

science (see Kukla, 2000). The constructionist epistemology I endorse here is not

antirealist; it merely rejects the sort of naïve realism for which the mind is

construed as a “mirror of nature” (apologies to Rorty). It is crucial to

acknowledge that, as observers, we are implicated in our observations, and we are

literally constituted by our personal, cultural, and evolutionary history. This does

not necessarily lead to a problematic relativism, though certainly to caution and

humility.

I begin the story of heredity in Chapter 2, with an overview of how the

phenomena of intergenerational resemblance and stability were understood before

the concept of biological heredity had been formulated. I attempt to convey a



sense of the essentialist metaphysics that prevailed into the early modern period.

This was not the essentialism that is often associated with pre-Darwinian species

fixism, but rather a particular medieval worldview in which resemblances, as

aspects of a deep cosmological wholeness, needed no explanation. I then discuss

the Scientific Revolution, specifically highlighting its theological and

metaphysical implications. The Cartesian-Newtonian cosmology, which

envisioned inert corpuscles of pure matter moving through infinite space

according to the universal laws of mechanics, rendered formation all but

inexplicable. As a consequence, almost all the contenders in the seventeenth- and

eighteenth-century debates over animal generation involved some sort of

preformation, and the phenomena of intergenerational resemblance were treated

as marginal anomalies, and given ad-hoc explanations, if any at all.

In Chapter 3, I track the societal, theoretical, and practical developments

that contributed to the emergence of heredity as a definite problem. I show how

new practices and institutions, especially in natural history and agriculture, helped

to expose distinctions and regularities that had not been recognizable in pre-

modern times. In particular, the greater mobility of plants, animals, and people

created the opportunity to observe genealogical relationships in isolation from

influences related to place. In this context, the adjective hereditary began to be

applied metaphorically to the transmission of accidents from parents to offspring.

Meanwhile, toward the end of the eighteenth century, the ancient notion of

hereditary maladies was taken up by academic physicians in a deliberate, and



ultimately successful, effort to transform the adjective hereditary from a dubious

metaphor into a rigorous category of medical causation. This effort produced

some of the key conceptual features that continue to structure the discourse on

genetics.

In Chapter 4, I reflect on the role played by the concept of biological

inheritance in the transformation of biology. Over the course of the nineteenth

century, biology evolved from an epistemologically fuzzy science of formation

into a thoroughly mechanistic science that relies on the tacitly preformationist

inheritance paradigm to account for the evolution and development of organic

form. I begin with the early development of biology in the German-speaking

medical schools and the Paris Museum of Natural History. The physiologists and

naturalists associated with these institutions were among the first to treat life as a

distinct field of study and, as a consequence, were more interested in its unique

properties than in its material and physical basis. This situation would change,

however, as younger generations of biologists attempted to apply epistemological

standards derived from physics and chemistry. This impulse, I suggest, helped to

propel biology through a series of conceptual shifts, which transformed it from a

science of form and formation into one of inheritance and differential

reproduction. With the establishment of genetics, evolutionary biology achieved

the mechanistic atomism it had been seeking, while, simultaneously, securing its

incompleteness.

Chapter 5 begins the second half of the dissertation, in which I consider



heredity and inheritance in the context of contemporary biological theory. In

Chapter 5, I examine the role of the genetic information concept in modern

biology. The assumption that genes carry semantic information is used to justify

their special role in explaining both heredity and development and, I suggest, to

perpetuate a fundamental ambiguity that obscures the gap between molecular and

Mendelian genes. I review some of the recent discussions on the utility and

coherence of the semantic information concept in this context, and conclude that,

even within orthodox theory, it is difficult to justify the privileging of genes on

that basis. I then introduce the constructionist alternative, as endorsed by

dialectical biologists and especially developmental systems theorists. DST rejects

the semantic information concept, arguing that it serves as a substitute for

explaining development. Semantic information, according to DST, is simply a

new, more nuanced, preformationism, which continues to reproduce nature-

nurture and gene-environment dichotomies because it underpins a dichotomous

view of the developmental process. If information exists at all, according to DST,

it is not inherited but constructed along with the rest of the organism.

In Chapter 6, I take up Griffiths and Gray’s (1994) attempt to develop a

full-fledged alternative to genic selectionism based on a radically extended model

of inheritance. Drawing on Oyama’s (1985) observation that many developmental

resources besides genes are passed on in reproduction (e.g., p. 37), Griffiths and

Gray explicate a model of inheritance that includes, as units of inheritance, all the

reliably present developmental resources involved in the development of the



lineage-typical phenotype. In addition, they reject the replicator/interactor

distinction, long a centerpiece of the units of selection debates, arguing that it

relies on a dichotomous conception of development. I present their argument in

the context of its more conservative rival, the extended replicator theory (ERT),

because many of Griffiths and Gray’s key positions were formulated in this

debate. I offer some critical evaluation of both positions, but ultimately defend

Griffiths and Gray’s position in this dispute. Their approach is not only more

coherent; it also attempts to integrate developmental theory into evolutionary

explanation, whereas ERT merely extends inheritance without altering the

underlying dualism of the orthodox framework.

In Chapter 7, I take a more critical view of Griffiths and Gray’s (1994)

extended inheritance model. I suggest that their reliance on units and mechanisms

of inheritance places too much emphasis on what is inherited, and, given the

metaphysical tide that the constructionist movement is swimming against, I

believe this is unhelpful. It is my view that Oyama’s articulation of DST, with its

emphasis on systems thinking, is more faithful to the constructionist approach that

I am defending in this work. Furthermore, I believe that the particular worries that

have been raised against DST are based on a misunderstanding of the

constructionist perspective, and I offer (yet another) defense of its non-

preformationist reasoning. Finally, in the interest of making my understanding of

DST unambiguous, I suggest replacing the inheritance paradigm with an

interactive network paradigm. Although the importance of constructive



interaction has been emphasized many times, I suggest that the attempt to

preserve some notion of inherited resources, causes, or conditions, however

carefully circumscribed, provides a lifeline to the preformationist reasoning we

seek to overturn. It is not a question of denying either heredity or the causal

significance of particular developmental resources. It is simply a question of

adopting a metaphor that does more of the important work of directing attention

to the dynamic, constructive processes that are actually responsible for ontogeny

and evolution.

This dissertation seeks to make a very simple, though perhaps radical,

point. Our full understanding of living bodies as centers of development and self-

generated activity has consistently been undermined by the Cartesian impulse to

redescribe them as machines composed of inert matter. Because formation cannot

be explained in these terms, forms must be understood, in some sense, to preexist.

Moreover, because the metaphysical alienation of form from matter is so deeply

embedded in this worldview, those who attempt to provide a genuine materialist

account of formation are often perceived as mystical or confused. The

preformationist mindset, which we all share to some extent, simply cannot make

sense of constructionist explanation. Thus I hope to promote constructionist

explanation on its own terms, as a coherent alternative to the entire Cartesian

orientation. The genuine integration of development and evolution will then

require no less than a new worldview, which recognizes the constructive

capacities inherent in the material world, and the potential for life that is built into

the evolutionary nature of the universe.



Chapter 2: The Prehistory of the Life Sciences

This chapter examines the theories and approaches of the early modern

natural philosophers who studied and speculated about the formation of living

beings. I pay special attention to the attitudes that shaped considerations of

biological inheritance during the period from the early seventeenth century

through the late eighteenth century. It should become clear from this discussion

that, although species type, family resemblance, family disease, and hybridism

were, to varying degrees, recognized as relevant aspects of generation, only

species type truly captured the attention of natural philosophers. Furthermore,

species type was the only one of these that was not typically described as

hereditary. Meanwhile, although the use of the adjective hereditary implied a

metaphorical association with property succession, hereditary resemblances were

generally understood as degenerations, accidental deviations from the natural

course of generation. That is, they were instances of deformation rather than novel

formations. One final thing to bear in mind is that, although the thinkers of this

period held views that may seem odd from a modern perspective, their

preoccupation with the origins of form was not one of them.

The Received View of Hereditary Resemblance

Before I begin to discuss early modern generation theory, its inception in

the wake of the Scientific Revolution, and the effects of these transformations on

the way philosophers conceived of biological inheritance, it may be helpful to

recall how these things were viewed during the immediately preceding era. To

begin with, it is important to recognize that the problems to which the modern

concept of heredity is addressed had no clear counterpart in premodern thought. It

had been noticed since antiquity, of course, that certain diseases seem to run in

families and that children often resemble one or both parents. As Lopéz-Beltrán

(1994) notes, these facts were indeed discussed by analogy with the hereditary

succession of property and titles and described using the adjective hereditary (p.

214). However, the endurance of this analogy obscures very real differences

between the way hereditary resemblance was thought about in ancient times and

the way it came to be conceived in the late eighteenth and early nineteenth

centuries. As Müller-Wille and Rheinberger (2007a) write, “the problem [for

those interested in hereditary resemblances] was not to explain how properties

were transmitted, but rather to explain how the same causal agents that once had

been involved in the generation of ancestors apparently could remain active in the

generation of their remote descendants” (p. 5).

Any consideration of biological inheritance was inevitably bound up with

the way the process of generation was understood. Generation referred to the

production of a living being. A “lower” form of being could arise by spontaneous

generation, whereas a red-blooded animal was understood to be engendered by its

parents. In a very broad sense, the generation of an animal was conceived as a

single momentous happening, in which the material and spiritual contributions of

the parents interacted with natural and/or divine laws and powers (natural and divine

were not clearly distinguished) to produce a new living being. Jacob (1970/1976)

writes that “the generation of every plant and every animal was, to some degree, a

unique, isolated event, independent of any other creation. . . . The formation of a

being . . . had no roots in the past” (p. 20). This conception of generation calls to

mind the way we today think about the starting of a fire or, to borrow Jacob’s

metaphor, the production of a work of art. As Smith (2006) notes, early modern

generation theorists thought of the production of all individual traits in much the

way we think of the production of birth defects, namely, that they have their

origins in the course of development (p. 81).

As Müller-Wille (2007) explains, similarities and differences between

individuals were typically attributed to the local circumstances surrounding the

generation event (p. 178). Indeed, there was no intrinsic limit to the potential for

deviation from the parental form, so that a given germ was seen as capable of

developing into practically anything (see also Amundson, 2005, p. 36).2

Similarities between parents and offspring were attributed to similar

“constellations of climatic, economic, political, and social factors,” (Müller-Wille,

2007, p. 178) which could vary significantly from place to place. It was tacitly

assumed, moreover, that the causes of the different varieties of plants and races of

animals were to be found among the conditions particular to the places they

inhabited. Indeed, belief in the intricate connection between the forms of living

beings and their natural places was so deep that it could be said that "it is the place

2 Jacob (1970/1976) describes a sixteenth century report of a sheep

mating with a boar and giving birth to a lamb with the head of a pig (p. 19).

that ‘inherits’ its inhabitants and impresses its character on them” (Müller-Wille

& Rheinberger, 2007a, p. 18).

While the maxim “like begets like” is routinely used to claim a continuity

between ancient notions of resemblance and the modern concept of heredity, the

traditional conception of generation suggests a reconsideration. The Oxford

English Dictionary defines “to beget” as “to procreate, to generate: usually said of

the father, but sometimes of both parents.” This reflects the older view of

generation in which the parents were understood as having an active role in

engendering their offspring. The modern principle of heredity, by contrast, is

associated with the modern notion of reproduction,3 in which a natural replication

process ensures the transmission of structural information from parents to

offspring. Ancient thinkers were implying nothing of this sort when they

remarked on intergenerational resemblance.

3 This word reproduction came into use after Réaumur used it in

describing the capacity of the crayfish to regenerate a lost claw. Its meaning

was later expanded by Buffon (Jacob, 1970/1976, pp. 72-73).



Noting the difficulty of discerning any clear anticipations of the modern

notion of heredity in writings prior to the modern period, Müller-Wille

(2007) points out that the metaphors of premodern myth, science, and

philosophy depict nature as organized along lines of ancestral descent and

devolution (p. 177). This vertical logic, which emphasized the unilinear

descent of the clan or bloodline, was the key isomorphism underpinning the

metaphor between biological and social conceptions of intergenerational

relations (Müller-Wille, 2007; Sabean, 2007). Within this metaphysical

framework, resemblance was not really a problem. The writings of both

Aristotle and Galen express the principle that all things naturally seek to

endure eternally. Because, like everything in the sublunary realm, a living

being is subject to corruption, the closest it can come to realizing eternity is

by engendering copies of itself (Roger, 1963/1997, p. 63). William Harvey

(1847/1943) was expressing precisely this outlook when he wrote that

“every generative efficient [sic] engenders another like itself” (p. 363).

Resemblance, as Müller-Wille (2007) points out, is a trivial consequence of

this outlook (p. 180).

In addition, it is important to bear in mind the general conception of nature

that prevailed before modern natural history emerged in the eighteenth century.

As French (1994) points out, the nature that concerned ancient and medieval

natural historians was not the external realm governed by laws, which occupies

the modern mind, but the natures of particular beings (p. 23). Following Aristotle,

premodern natural historians inquired into the distinct essences of living beings,

documenting the intrinsic qualities that make a being what it is. As Foucault

(1973) explains, “every being bore a mark, and the species was measured by the

extent of a common emblem. So that each species identified itself by itself,

expressed its individuality independently of all the others” (p. 144). Another

important feature of this premodern natural history was the absolute continuity of

the natural world. In God’s creation, all that could exist, did exist; any gaps in

creation would suggest something missing, some deficiency in God’s creativity.

Thus, what came to be called the great chain of being entailed a continuous series

of forms beginning with brute matter, passing through infusoria, plants, animals,

man, and the hierarchy of divine beings leading up to God. Although there were

natures in common, such as fox nature, owl nature, etc., there was not yet any

explicit categorization into discrete species.4

We now turn briefly to the question of precisely how an animal achieves

or fails to achieve the production of offspring that resemble it. Galen’s schema

recognized three types of resemblance: species, sex, and structure, where structure

4 French (1994) notes that the older sense of nature derives from the
Greek word physis, while the modern concept comes from the Latin natura,

which, though it was used by the Romans in their translations of Aristotle,

originally signified something nearer to the modern sense.



referred to “accidents of the body and the mind’s habits” (Roger, 1963/1997, p.

66). Since resemblance to the father was considered the most natural outcome of

generation, exceptions to this outcome constituted malformations and therefore

attracted the attention of theorists. What causes a child to resemble its mother

rather than its father, including in its sex? How does a child end up resembling a

grandparent more than either parent? What if the child resembles neither parent

and none of its ancestors? (questions adapted from Smith, 2006). At the dawn of

the modern period, Aristotle’s generation theory provided the framework for

reflecting on these questions. The production of the form of both the species and

the individual was thought to derive from the male seed, which contributed both

formal and final causes. Drawing an analogy to carpentry, Aristotle (1910) wrote,

the shape and the form are imparted from [the carpenter] to the material by
means of the motion he sets up. It is his hands that move his tools, his tools
that move the material; it is his knowledge of his art, and his soul, in which is
the form, that moves his hands or any other part of him with a motion of some
definite kind, a motion varying with the varying nature of the object made. . . .
Such, then, is the way in which these males contribute to generation. (p. 23)

Notice that there was nothing explicitly supernatural in this account. The

formal/final cause was attributed to the “definite kind” of motion initiated in the

seed by the father. The motion itself was the efficient cause of formation and

associated with the power of movement possessed by the seed in virtue of its

innate heat. The maternal contribution, on the other hand, was material cause only,

though, for Aristotle, matter was never entirely without form. The menstrual

blood was considered to possess, in potentia, the form of the human and of the

mother’s personal traits (Roger, 1963/1997, p. 66).

Given this scheme, then, resemblances other than to the father needed to

be explained. The most common approach, according to Roger (1963/1997), was

to attribute a resemblance, either to the mother or to a remote ancestor, to a

deficiency of innate heat (p. 65). If a child was born female (an imperfect male for

Aristotle), this was because the heat of the father’s seed could not overcome the

cold of the menstrual blood and fully achieve perfection. Likewise, the child

might possess other resemblances to the mother due to insufficient animation by

the seed. An inadequately animated seed might also be compensated for by the

“spirit” of an ancestor (Roger, 1963/1997, p. 66). Finally, the process could be

influenced by the mother’s imagination, which was capable of producing arbitrary

resemblances or teratisms (deformities). This dynamic was sometimes seen as a

competition between opposing forces, in which the imagination, if activated by

strong desire, could overtake and disturb the natural course of generation. Besides

the rather transparent projection of patriarchal values, notice that the emphasis was

primarily on how the parents engender form in their offspring, not on how they

engender resemblance. Thus, there was no need for any sort of representation of

parental form to be transmitted from parent to offspring.

The Scientific Revolution

Sometime in the fourteenth century, according to most historians, Europe

began to undergo the major cultural transformations that led to the intellectual

and religious movements we know as the Renaissance, the Reformation, the



Enlightenment, and the Scientific Revolution. These transformations occurred

over centuries and affected all aspects of life at all levels of European society.

They produced new social arrangements, new political, academic, and economic

institutions and practices, and brought forth fundamental changes in the way

Europeans conceived of themselves and of the nature of reality. Roger

(1963/1997) documents the gradual transition from an era dominated by the

authority of the ancients and their scholastic interpreters to one where experience

and experiment were considered decisive, where the mysteries of nature were

available to be unraveled by anyone with sufficient patience and determination,

where the contemporary individual might dare to challenge the pronouncements

of The Philosopher (Aristotle). Well before the mechanical philosophy was put on

a firm foundation by Newton, natural philosophers were increasingly regarding

nature as directly knowable, and seeking ever more explicit causal explanations

for the phenomena they encountered.

The shifting ontological, epistemological, and theological foundations of

early modern Europe precipitated a major reorientation toward the ideas of the

ancient authorities. As with the other major cultural transformations, these shifts

were not the result of a purely intellectual movement, but coincided with

significant changes in the material and institutional underpinnings of the

conceptual landscape. For one thing, as Roger (1963/1997) explains, the

encompassing metaphysical framework of Aristotelianism had been sustained

throughout the middle ages and the Renaissance by the medieval university

system, and with its decline “a whole intellectual universe collapsed in ruins” (p.

125). In addition, during the same period, the mobility of individuals and of their

written works, thanks to the printing press, was giving rise to new patterns of

interaction among individuals and between individuals and the natural world. This flattening

of the epistemic landscape helped to fill the vacuum left by the collapse of the

university system and facilitated the emergence of a new, more democratic

attitude toward knowledge and truth.

As a result of the shifting metaphysical landscape, Roger (1963/1997)

suggests, the Aristotelian account of form and causation, with its subtle yet

profound interdependence between form and matter, began to lose its coherence

for thinkers of the late sixteenth and early seventeenth centuries. As theorists

began to adopt a more modern perspective on causation, those who remained

nominally committed to Aristotelianism were forced to explain the formation of

material bodies in terms of formative faculties. These so-called natural souls were

not the same as spiritual souls, in that the former were conceived as intrinsic to

the material world. Yet they were distinct from brute matter in that they possessed

the power to cause formation. Here, theorists were able to draw on another

ancient authority, Galen, who, centuries earlier, had appealed to formative

faculties in explaining the proper arrangement of an animal’s functional parts

(Roger, 1963/1997, p. 59). Because most thinkers were not in a position to

recognize the ways in which Galen had departed from Aristotle’s original

approach to causation, even those theorists who presumed themselves to be



defending Aristotle were often distorting his fundamental conceptions (Roger,

1963/1997, p. 125).

Another crucial factor in this epistemological transition was the attempt by

the late scholastics to reconcile traditional Aristotelian doctrines with contemporary

theological requirements. In particular, the theological demand for an increasingly

sovereign deity produced the concomitant need for matter to be conceived as

increasingly passive (Grene & Depew, 2004, pp. 39-40). As Grene and Depew

make clear, this produced what amounted to a reversal of the traditional Aristotelian

relationship between form and matter. For Aristotle, form preceded matter

(logically, not temporally) so that matter existed only potentially until it became

actual by providing for the individuality of some particular substance, such as this

chair or this man, etc. (p. 40). For the emerging view, however, matter was

imagined to exist as pure extension, somehow actual, yet absolutely inert and

without form. Form, in this way, became merely the shape or structure that is

impressed on some chunk of material by some external formative agency or

designer. In this way, Aristotle’s original conception of formal and final causation

was rendered practically unintelligible, and specific efficient causes had to be

posited to explain every sort of movement.

This new conception of matter, as unformed, yet fully actual, was

reinforced and rationalized by the revival of another ancient doctrine, atomism.

For the atomistic philosophy, which gained widespread support beginning in the

late sixteenth century, matter ultimately consisted of infinitesimal particles,



moving through infinite space, according to the laws of motion. This ontology

was welcomed by theologians and natural philosophers alike, for, on the one

hand, it guaranteed the absolute sovereignty of God and, on the other, it unleashed

the epistemic ambitions of the philosophers. Roger (1963/1997) explains:

Now corpuscles were bare matter. They had neither form, nor faculty, nor
soul. To make of them the elements of living matter was sooner or later to
doom Aristotelian as well as Galenist biology. If one did not believe that
matter was capable of organizing itself unaided, if one believed that the
laws of motion were too general to explain the formation of a living being,
if one judged, finally, that a soul was necessary in order to account for
vital phenomena, one had to conceive of this soul as radically foreign to
matter, as made of a substance other than matter. . . . (p. 128)

The ultimate logical consequence of this view (which was not immediately

apparent except perhaps to Descartes, whose epistemic ambitions surpassed those

of most of his contemporaries) was the absolute evacuation of the universe any

sort of order not imposed from without. From the mid-sixteenth to the mid-

seventeenth centuries, as matter was increasingly emptied of all formative power,

it became less and less acceptable to invoke natural souls, and only material and

efficient causes remained.

With the displacement of the premodern Aristotelian cosmology by a

modern Newtonian one, the old view of the world, as a sort of organic whole, was

replaced by one in which nature is essentially a collection of disconnected parts,

assembled into a multitude of clever machines by a providential Creator. This had

immediate consequences for the way living beings were conceived. Where the

nature of living beings had once seemed to express the fundamental nature of the

cosmos, they now had to be seen as exceptions to that fundamental nature. This

was especially problematic at the time because, as Foucault (1973) has argued,

living beings were not yet understood as fundamentally distinct from other natural

beings. The belief in spontaneous generation was commonplace, while fossils,

which were not yet associated with previously living beings, were considered to

occupy a gray area between living and nonliving bodies (p. 161). The great chain

of being was continuous from God all the way down to brute matter, permitting

no serious categorical gaps (Jacob, 1970/1976, pp. 33-34). Therefore, as

Newtonian physics achieved the status of metaphysical and methodological

touchstone for all of natural philosophy, the challenge posed by those beings that

grow and procreate took on an unprecedented urgency.

Generation Theory

With the collapse of Aristotle’s intrinsically ordered cosmos and the

rejection of the various formative faculties that had been proposed to fill the

resulting causal vacuum, the central problem was set for the early modern study

of living beings (beings that undergo generation). One of the most common

events in human experience, the generation of animals, had been rendered

incomprehensible by the new natural philosophy. The mechanical philosophy had

also undermined the metaphysical intelligibility of individual hereditary

resemblances, but this problem was far less urgent than the more basic problem of

how form is generated. The history of generation theory and the widespread and

long-lasting popularity of the doctrine of preexistence, in particular, leaves little



doubt about this. While preexistence settled the problem of formation by pushing

it back to the moment of creation, it rendered hereditary resemblance entirely

inexplicable.

Generation research as a modern scientific activity is typically considered

to have begun with William Harvey's (1578-1657) De Generatione

Animalium in 1651. There is some disagreement about how modern Harvey’s

views actually were, due to his ambiguous metaphysics (Gasking, 1967; Meyer,

1936; Needham, 1959; Roger, 1963/1997). Gasking (1967) emphasizes Harvey’s

modernity, claiming that he rejected the appeal to souls and spirits and merely

described his observations with care and precision (p. 33). Yet, as Meyer (1936)

notes, there are innumerable passages in De Generatione where Harvey refers

explicitly to vital principles, formative faculties, and vegetative souls (p. 32).

Regardless which of these perspectives better captures the authentic Harvey, his

importance as a transitional figure is beyond dispute. Practically everyone agrees

that Harvey was committed to observation and wary of metaphysical speculation

and deference to ancient authority. He considered himself a follower of Aristotle,

but, unlike some of his Aristotelian contemporaries, when he was forced to posit

formative causes, he regarded them as placeholders for causes that could

ultimately be investigated and understood (Roger, 1963/1997, p. 96). As Roger

observes, this stance was not substantially different from the one Newton adopted

toward gravity a few decades later.

As I said, it is undisputed that Harvey approached the problem of



generation in a thoroughly empirical manner. He described his observations

carefully and offered little in the way of new theory (Gasking, 1967, p. 33). His

primary theoretical contribution, according to Gasking (1967), was in refuting the

contemporary belief in preformation. It was widely believed, especially among

medical theorists, that the form of the animal is brought forth all at once from the

seminal material in what was sometimes called the first formation (Lopéz-Beltrán,

1992, p. 18) and sometimes called metamorphosis (Bowler, 1971). As evidence

against this sort of preformation, Harvey (1847/1943) pointed out that the mass of

semen and menstrual blood, the confluence of which was supposed to constitute

the material basis for the embryo, is nowhere to be found in the uterus after

intercourse. He countered, based on his own observations, that red-blooded

animals are made by epigenesis; their embryos develop by way of a

“superaddition” of parts, which are created in succession out of material that is

taken in as nutriment.

Like generation theorists since antiquity, Harvey (1847/1943) regarded the

overall resemblance of offspring to their parents as obvious, writing that “the

work of the father and mother is to be discerned in both the body and mental

character of the offspring” (p. 363). It was not a point that needed to be defended,

but an uncontested fact that an adequate theory of generation ought to explain. In

opposition to Aristotle, however, Harvey was convinced by the resemblance of

offspring to both parents and by the “mixed nature” of hybrids that male and

female parents contribute equally to generation. In addition, following Aristotle,



Harvey believed that only efficient causes can have direct physical effects on the

events of generation. That is, any causal influence on the formation of the

offspring would have to involve the transmission of movement to the embryo

through direct physical contact. This raised more of a problem with respect to

paternal resemblance, since Harvey (1847/1943) denied that the father contributes

materially to the embryo. He referred here to the mystery of contagion as a way to

account for the possibility of communication without contact (p. 364). Since

Aristotle’s cosmology had all but collapsed by the mid-seventeenth century,

Harvey was forced to deal with form and resemblance exclusively in terms of

local, efficient causes. Given the technical and conceptual resources available to

him, this was an impossible task.

Harvey’s willingness to privilege observation over both ancient authority

and theoretical speculation epitomized the empirical spirit of the emerging

scientific worldview. Had subsequent generation theorists shared his willingness

to trust experience, things might have turned out very differently. However, the

Scientific Revolution was also characterized by a powerful commitment to

rationalism and to the principle that true knowledge should be deducible from first

principles. It isn’t that observation was discounted in favor of metaphysical

conjecture. The natural philosophers of the seventeenth and eighteenth centuries

were strongly committed to observation, as is evident from their extensive

reliance on microscopy. Nevertheless, observations, in order to be taken seriously,

had to be reconciled with the new mechanical philosophy of Newton and Boyle.

At the same time, as Roger (1963/1997) explains, it seemed evident to

many natural philosophers, with the notable exception of Descartes, that

mechanical laws could not account for the creation of novel form or order (p.

269). Those committed to the new philosophy, therefore, were forced to conclude

that generation consists in the evolution (literally, unrolling) of preformed beings.

The most extreme species of preformationism, the doctrine of preexistence,

remained the central dogma of generation theory for over a century, despite being

inherently unobservable, precisely because thinkers of the period could not

imagine a reasonable alternative.

Preexistence and its Discontents

By the late seventeenth century, nature had, in the words of Roger

(1963/1997), “lost all spontaneity and become pure passivity in the hands of God”

(p. 262). The appearance of form thereby became a major scientific and

metaphysical conundrum, and even loyal followers of Descartes were unable to

follow him in imagining that epigenesis could be accounted for by the laws of

mechanics alone. Besides being difficult to believe, the theological consequences

of pushing mechanism that far were potentially dangerous (which is why

Descartes withheld publication of his views on the topic until his death). Thus,

within two decades of Harvey’s refutation of the old preformationism, a new

modern variety had begun to be formulated. Dutch microscopist Jan

Swammerdam (1637-1680) is usually credited with being the originator of

modern preformation theory based on his research on insect metamorphosis



(Needham, 1959, p. 170; Roger, 1963/1997, p. 267).5 Swammerdam’s

preformationist intimations of 1669 and 1672 were soon apparently confirmed by

Marcello Malpighi (1628-1694), who in 1672 claimed to have seen the essential

elements of the chick in a hen’s egg prior to incubation (though not prior to

fertilization). Swammerdam and Malpighi, whatever the actual content and intent

of their respective claims, were swept up in a rising intellectual tide, in which the

conclusions seemed almost to preexist the actual evidence. The idea of preexisting

germs clearly captured the scientific imagination of the age (Gasking, 1967, p. 43;

Roe, 1981, p. 5).

Drawing on the recent work of Swammerdam and Malpighi, among

others, and motivated by the unacceptable implications of Descartes’ mechanical

theory of generation, Nicolas Malebranche (1638-1715) explicated the doctrine of

the preexistence of germs, including its most radical element, emboîtement (Roe,

1981, p. 7). The essential tenet of this doctrine was that every individual being

that would ever live was formed by God at the beginning of time. The germs of

all these beings were then encased one within the next, so that every member of

every lineage was wholly contained, as a nested series, within the first of the kind.

5 Bowler (1971) questions this attribution, suggesting that it is partly

based on poor translation of the original Dutch (pp. 234-235). But,

according to Roger (1963/1997), in some of his writings, Swammerdam

referred explicitly to the preexistent germ and wrote that "all human eggs

had existed in Eve” (p. 267).



Accordingly, the generation of a human was understood to consist in the

evolution (unrolling) of the outermost germ, which was, more or less, a miniature

of the adult it would become. Although, in principle, particular observations could

be interpreted as evidence for preformation, no possible observations could have

supported preexistence over other varieties of preformationism. It was a purely

philosophical commitment, which gained widespread currency precisely because

it dispensed with the need to explain the origination of form.

The doctrine of preexistent germs did not leave much room for speculating

about hereditary resemblance. Particularly during the first seventy years after the

doctrine of preexistence came to the fore, the formerly uncontroversial facts of

resemblance were very much squeezed to the margins. For much of this period, a

very rigid version of preexistence theory, either animalculist or ovist, held sway.

The animalculist (also spermist) view held that tiny creatures, which had been

discovered in 1677 swimming in the male semen (spermatic animalcules), were

the loci of the preexistent homunculoid germ. This view is best exemplified by the

famous sketch by Nicolas Hartsoeker of a miniature human curled up within a

single animalcule. For the ovist view, on the other hand, the preexistent

miniatures were located within the female ovum (which, incidentally, had not

been identified, but merely hypothesized by Harvey). The reliability of species

type clearly needed no explanation for either version of preexistence. Other types

of hereditary resemblance, meanwhile, such as accidental similarities to parents

and grandparents, hereditary diseases, and hybridism, were simply not that

interesting to the theorists of this period. “The great problem in [their] eyes was

the formation of the living being, considered as an isolated individual, without

essential relationship to the individuals of the same species that had preceded and

begotten it” (Roger, 1963/1997, p. 311).

The comfortable authority enjoyed by the doctrine of preexistence was

confronted with its first serious challenge when, in 1741, Abraham Trembley

(1710-1784) discovered that a small piece of the body of a freshwater polyp (later

named hydra by Trembley) is capable of regenerating a whole new animal

(Lenhoff & Lenhoff, 1991, p. 53). This was not the first time the phenomenon of

regeneration had caused consternation for orthodox generation theory. Réaumur

had studied the regenerative capacity of crawfish in 1712, followed ten years later

by Nicolas Hartsoeker (Benson, 1991, p. 95). Indeed, Hartsoeker was convinced

by his findings to reject preexistence, writing that “the intelligence which can

produce the lost claw of a crawfish can reproduce the entire animal” (quoted in

Gasking, 1967, p. 86). Nevertheless, these early findings were not enough to

shake the convictions of others, including Réaumur, for whom, it seemed,

preexistence had no conceivable alternative (Roger, 1963/1997, p. 314). By the

1740s, however, the situation was different. Not only were Trembley’s findings

particularly remarkable, they were soon repeated with other animals. In 1744, for

example, Charles Bonnet (1720-1793) published work on the regenerative

capacity of worms, which helped to convince naturalists of the pervasiveness of

regeneration (Lenhoff & Lenhoff, 1991, p. 53).



Although the acceptance of regeneration was not sufficient to overturn the

doctrine of preexistence, it did, along with a renewed interest in teratisms

(deformed births), precipitate a major crisis and a significant reconfiguration in

the way the doctrine was conceived and defended. One of the casualties of this

crisis was the animalculist version of the theory, which was essentially falsified

when Charles Bonnet’s demonstrated that female aphids can reproduce by

parthenogenesis (Gasking, 1967, p. 64). Clearly, if a female can reproduce

without the contribution of a male, generation must not (always) depend on a

preexistent germ carried by the spermatic animalcule. Moreover, the crisis

permitted phenomena that had long been marginalized by the belief in strict

emboîtement to be once again taken seriously. These included the problems

associated with hybridity and hereditary resemblance. It became permissible to

point out facts that had been obvious to Harvey a century earlier, namely, that

hybrids, as well as normal offspring, bear the marks of both parents. It was in this

context that a number of high-profile opponents of preexistence began to appear.

Beginning in the mid-1740s, renowned French mathematician and

physicist Pierre Louis Maupertuis (1698-1759), inspired in part by Trembley’s

polyp, mounted a sustained attack on the doctrine of preexistence (Terrall, 2007,

p. 257). Along with the phenomena of regeneration, the facts of hereditary

resemblance, especially the apparent contribution of both parents, convinced

Maupertuis that preexistence could not be correct (Roe, 1981, p. 14). His most

pointed attack on preexistence took the form of a genealogical analysis of six-



digitism, which documented the distribution of polydactyly across several

generations of the Ruhe family of Berlin. He adopted a proto-statistical approach

that had been used by Jean-Jacques Dortous de Mairan to reject the claim that

teratisms originate from accidents of formation. For Mairan, the fact that extra

fingers always appear on the hands, while extra toes are always on the feet, rather

than in random places, indicated that their formation could not be purely

accidental (Loveland, 2001, p. 472; see also Roger, 1963/1997, pp. 334-335).

Maupertuis extended this probabilistic mode of reasoning to analyze the

recurrence of polydactyly in the Ruhe family (Hoffheimer, 1982, p. 132). Given

the rarity of this phenomenon in the general population, Maupertuis calculated the

probability of it occurring to multiple individuals across multiple generations

within a single family. He concluded that the high frequency of six-digitism in the

Ruhe family compared to the general population was so extremely improbable

that it could not be accounted for by chance alone. He suggested that its

occurrence along definite lines of descent indicated that it was propagated by

hereditary transmission. In addition, Maupertuis pointed out that the patterns of

polydactyly exhibited by the Ruhe family implicated both male and female

parents, thereby confirming the dual-seed theory of generation (Hoffheimer,

1982).
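To convey the flavor of this reasoning with purely illustrative figures (the numbers and the three-generation framing here are assumptions offered for the sake of the example, not Maupertuis's own values), suppose that polydactyly arises spontaneously in roughly 1 in 20,000 births. If each appearance were an independent accident of formation, the probability of the trait arising by chance in three successive generations of a single family would be on the order of

(1/20,000)^3 = 1.25 × 10^-13,

or about one chance in eight trillion. Transmission along a line of descent, by contrast, renders the observed recurrence unremarkable.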

Despite the doubts raised by Maupertuis and others, the doctrine of

preexistence remained popular throughout the eighteenth century. However, it

was never again able to claim the dominant status it had enjoyed during its first

seventy years. The doctrine was now open to dispute, and its supporters had to

take seriously the facts of regeneration, teratism, hybridism, and bi-parental

inheritance. To be sure, these issues remained marginal in comparison to the

momentous questions concerning how the living being is formed. Yet a loosening

had definitely appeared in the web of belief. There was now room for genuine

debate, and a new generation of generation theorists was finding itself in a

position to propose alternative models of generation and to confront, at least to a

limited degree, the problems posed by actual formation.

During the latter half of the eighteenth century, a number of theorists

opposed to the doctrine of preexistence proposed alternative models of

generation. Although generation theory during this period is often framed as a

dispute between preformationists and epigenesists, as Lopéz-Beltrán (2001)

points out, the question of whether animal generation involves two seeds or just

one probably constituted the main point of contention. Preexistence theory was

necessarily committed to the single-seed model, since the adult form obviously

needed to be encapsulated in one seed. Dual-seed theorists, whom Lopéz-

Beltrán calls successionists, were convinced by the phenomena of resemblance

and hybridism that generation involves seminal contributions from both

parents (p. 75n38). Although the successionists have often been grouped together

with the epigenesists (e.g., Grene & Depew, 2004; Terrall, 2007), perhaps

because they were characterized that way by their rivals, these theorists did not all

regard formation as gradual and epigenetic. For example, medical theorists



regarded generation as involving an instantaneous solidification of the seminal

fluids of both parents (despite Harvey’s explicit refutation of this belief) (Lopéz-

Beltrán, 1992). Views such as this are still preformationist in the sense that

formation itself is not explained.

The theory promoted by Maupertuis was both successionist and

epigenesist. In his 1745 work, Venus Physique, he attempted to explain generation

in terms of recently discovered properties of chemical affinity and spontaneous

organization. Maupertuis described the Tree of Diana, a remarkable treelike

formation produced by the mixing of silver, mercury, nitric acid, and water, and

suggested that the properties responsible for such phenomena might also account

for the accretion of form in epigenetic development (Gasking, 1967, p. 73). When

Réaumur objected that these simple laws of attraction are insufficient to account

for the complex organization of a living being, Maupertuis modified his position.

He allowed that, in addition to chemical affinities, living matter might also

possess psychological qualities such as desire, aversion, and memory (Roe, 1981).

Elaborating this view in his 1751 work, Système de la Nature, Maupertuis

proposed that living beings are ultimately composed of living particles and that

the production of seminal fluid involves an accumulation in the reproductive

organs of particles from throughout the body. When combined during procreation,

these particles would then organize themselves into a functioning organism due to

their capacity to remember their former position and function (Gasking, 1967; see

also Terrall, 2002, 2007). In order to form an organized body, he wrote, "it is

necessary to have recourse to some principle of intelligence, to something

resembling what we call desire, aversion, memory” (quoted in Roe, 1981, p. 15).

Here we see an example of how a commitment to epigenesis, when situated

within the dualist metaphysic of modernity, can drive toward a vitalist perspective

that also places formation beyond proper analysis.

The legendary French natural philosopher Georges-Louis LeClerc, Comte

de Buffon (1707-1788), was also inspired by Trembley’s polyp to propose a

successionist theory of generation. He was influenced by Maupertuis’ corpuscular

notion of living particles and based his own theory on what he called organic

molecules (Jacob, 1970/1976, p. 76). Buffon did not attribute memory or

intelligence to these entities, however. Rather, he proposed that they were formed

by what he called the moule intérieur or interior mold. He was never entirely clear

about the precise nature of this mold or the “penetrating forces” by which it was

able to act. It is clear, however, that he intended these forces to be understood as

analogous to other known, but little understood, forces and dynamics such as

gravity and chemical affinity. He wrote:

In the same way that we can make molds by which we give to the exterior
of bodies whatever shape we please, let us suppose that Nature can make
molds by which she gives not only the external shape, but also the internal
form, would this not be a means by which reproduction could be effected?
. . .Nature can have these internal molds, which we will never have, just as
she has the qualities of gravity, which in effect penetrate to the interior;
the supposition of these molds is therefore founded on good analogies.
(quoted in Terrall, 2002, p. 314)

Buffon seems to be struggling here to articulate something that was only



beginning to dawn on the scientific imagination of his day: the notion that living

beings exhibit not only visible form, but internal organization. In other words,

when he stressed the contrast with the exterior mold of the craftsman, he was not

merely juxtaposing physical outsides and insides; he was groping for a notion of

internal organization that was not yet available to him. According to Roger

(1963/1997), “Buffon was unable, despite his efforts, to free himself from a

mechanistic and spatial representation of organization” (p. 442). Indeed, Jacob

(1970/1976) argues that the efforts of Maupertuis and Buffon to imagine a

corpuscular alternative to preexistence might have played a role in shifting the

emphasis from visible structure to hidden order or organization, thereby helping

to create the conditions for the latter concept to develop (pp. 77-82). This shift

was pivotal in the emergence of biology as a distinct discipline.

The partisans of preexistence theory continued to defend the dominant

view, but were forced, in the latter half of the eighteenth century, to acknowledge

the difficulties it faced. Among the leading supporters of the doctrine of

preexistence, Albrecht von Haller (1708-1777) may be an exception in that he

simply refused to accept that resemblance, hybridism, and regeneration presented

serious challenges. Interestingly, Haller had initially been persuaded by

Trembley’s polyp to reject preexistence in its spermist version and had begun to

argue for the significance of hereditary resemblance (Roe, 1981, p. 25). However, he

later converted to the ovist version that arose after Bonnet’s discovery of

parthenogenesis in aphids, and, with the certainty of a convert, he rejected



evidence that contradicted his conviction. As Roe (1981) points out, he simply

denied that children truly resemble their parents and explained hybridism with an

ad-hoc theory based on the action of the male semen (p. 42).

Charles Bonnet, in contrast, took hereditary facts seriously. As Gould

(1977a) points out, he spent the second half of his life modifying the ovist model

of preexistence to cope with them (p. 19). Bonnet maintained the conceptual core

of preexistence without the simplistic conception of fully formed miniatures

entailed by traditional emboîtement. Bonnet expressed this new attitude as

follows:

The term emboîtement suggests an idea which is not altogether correct.
The germs are not enclosed like boxes within the other, but a germ forms
part of another germ as a seed is a part of the plant on which it develops. .
. . I understand by the word germ every preordination, every preformation
of parts capable by itself of determining the existence of a plant or animal.
(quoted in Benson, 1991, p. 98)

Thus, Bonnet’s version of ovist preexistence theory was less literal than its

predecessor, and did not promise observational consequences so much as

explanatory adequacy (Gasking, 1967, p. 108). This move was crucial, for it gave

Bonnet much more theoretical latitude, allowing him to cope with various

anomalies without sacrificing his metaphysical commitments.

Since this new ovism did not require that the germs contain fully formed

miniatures, it was more amenable to the evidence of hereditary resemblance.

Bonnet suggested, for example, that the germs might be more or less identical at

the level of the species, with individual variation being caused by the

“circumstances” of development (Gasking, 1967, p. 124). To account for the

apparent contribution of both parents to the particular characteristics of the

offspring, he elaborated on an idea proposed earlier in the century by Louis

Bourguet (Lopéz-Beltrán, 2001, p. 76). According to this model, the germ in its

early stages is nourished by the seminal fluid of the father and nutritive fluids

supplied by the mother, both of which could then affect the development of the

embryo. The capacity of these substances to cause hereditary resemblances was

attributed to the idea that they originate in all the organs of the parents’ bodies

(Gasking, 1967, p. 125). Thus, Bonnet ended up resorting to a sort of pangenesis

theory that was not so different from the position of rival theorists such as

Maupertuis.

As Lopéz-Beltrán (2001) notes, a number of commentators, including T.

H. Huxley, have pointed out that the rival models of generation described by

Bonnet and Buffon share a certain structure (pp. 75-78). Both models treat

reproduction in terms of two conceptually distinct phases associated with two

distinct aspects of living form. Both Buffon’s interior molds and Bonnet’s

preexistent germs were seen as the locus of the essential species form. The rest of

the developmental process was understood primarily as enlargement, with the two

models differing only with respect to the depth of influence of the nutritive

milieu. These pangenetic influences were seen, by both theorists, as responsible

for the individual accidents and degenerations that constituted hereditary

resemblances. According to Lopéz-Beltrán (2001), the main point of contention



between the models of Bonnet and Buffon was the boundary between these two

domains. Bonnet’s commitment to preexistence prescribed a strict limit to these

influences. Buffon’s scheme, though it reflected the same structural duality,

entailed no strict boundary between type and accident, meaning that hereditary

degenerations could, under certain circumstances, lead to permanent alterations of

the lineage or even the species.

One additional commonality between the two models is that both

displaced the hardest part of the problem of formation from the observable present

to the unobservable past. For Bonnet, it was pushed back to creation and down to

an invisible level. In a slightly less obvious way, Buffon also displaced formation.

He packed the principal formative events into the first moments after conception,

when the unobservable interior molds were supposed to produce the essential

form. Thus both Buffon’s theory and the doctrine of preexistence anticipated

modern heredity precisely in the sense that they appealed to the unobservable past

to account for the complex forms observed in the present.

Conclusion

In this chapter, I have discussed the treatment by early modern natural

philosophers of the phenomena of intergenerational resemblance that would

eventually be explained in terms of a unified concept of heredity. I reviewed the

way that the reconstruction of species form was conceived in traditional

Aristotelian metaphysics as an aspect of the natural inclination of all things to

imitate the eternity of the celestial realm. The male sought to engender its form by

sowing its seed in the female, imparting the innate heat and orderly movement

that would transform the matter provided by the female into a near copy of the

father. Eventually, the changing epistemological and metaphysical

presuppositions associated with the Scientific Revolution led to an evacuation of

formative power from the material universe, setting up the central problem for

early modern generation theory. William Harvey (1847/1943) attempted to found

a modern empirically-based embryology that privileged observational evidence

over first principles and took seriously the perennially difficult questions of

formation. Soon thereafter, however, the ascent of the mechanical philosophy

forced generation theorists to conceive of generation as a mechanical unfolding of

preexistent germs. The doctrine of preexistence, which remained the central

dogma of generation theory for over a century, rendered hereditary resemblance

practically unthinkable. Eventually, however, this extreme view encountered

insuperable difficulties. Inheritance was once again admitted into the discussion,

but its scope, as always, was limited to accidents and peculiarities. These

hereditary resemblances remained a decidedly marginal concern, and were not

considered to require a general explanation. For the natural philosophers of the

seventeenth and eighteenth centuries, the foremost problem was always to

understand how organized beings are engendered.



Chapter 3: The Ontogeny of Inheritance

In this chapter, I review some of the interacting factors that contributed to

the development of heredity as a mode of cultural and scientific reasoning. As

documented by a growing body of research highlighted in a recent volume edited

by Staffan Müller-Wille and Hans-Jörg Rheinberger (2007b),6 a sequence of

conceptual and societal transformations spanning three centuries helped to

produce what the editors call the knowledge regime of heredity. They employ this

knowledge regime framework to examine the epistemic and practical rudiments

that emerged before heredity had developed sufficient conceptual, linguistic, or

institutional coherence to be legitimately treated by conventional intellectual

history (p. 13). The development of heredity, they argue, involved a

reconfiguration of the entire conceptual space in which the basic questions about

nature and humanity are posed. To understand this transformation, therefore, we

need to look beyond the appearance of hereditarian ideas in the heads of

philosophers and scientists, and examine the fundamental restructuring that was

taking place in “the agricultural, technical, juridical, medical, and scientific

practices in which knowledge of inheritance was materially anchored”

(Rheinberger & Müller-Wille, 2003, p. 3).

6
This volume is the product of a series of workshops held as part of

a project called A Cultural History of Heredity, overseen by the Max

Planck Institute for the History of Science.



The cultural historiography of heredity is rich and complex and far

exceeds the scope of the present work. I will therefore concentrate, in this

treatment, on three areas that are directly implicated in the production of heredity

as a biological principle. I begin with natural history, which was gradually

becoming institutionalized and professionalized during the late seventeenth

century. This transformation of natural history was associated with, and shaped

by, the emergence of what Müller-Wille and Rheinberger (2007a) call a culture of

exchange, which produced new opportunities for the exchange of plant

specimens, as well as knowledge, across widely dispersed geographical regions.

This novel situation contributed materially to the way plant species and their

interrelations came to be conceived and reconceived. Next, I turn to early modern

agriculture and the ways the emerging culture of exchange affected animal

breeding practices. The relocation of livestock breeds as a consequence of long-

distance trade challenged assumptions about the connections between animals and

their natural places and forced breeders, not only to improve their breeding

practices, but also to make them fully explicit for the first time. Finally, I discuss

the deliberate effort undertaken by progressive medical authors to transform their

art into a true Enlightenment science by bringing rational scrutiny to bear on the

enigmatic question of hereditary disease. This final endeavor provided the

primary context in which biological inheritance was developed into what Lopéz-

Beltrán (2007) describes as “a structured, causal, mechanistic presence in the

biological realm” (p. 106).



Natural History: New Practices, New Distinctions, New Problems

In many respects, the premodern understanding of biological succession

was subject to the same inner logic by which the devolution of property and titles

was understood in premodern society. As Sabean (2007) explains, the traditional

European family group understood itself as a bloodline descended from a

common ancestor, in which access to power and resources was controlled

primarily through succession (p. 44). By a sort of implicit analogy to the

premodern clan, according to Müller-Wille (2007), the species was also viewed

primarily in terms of the genealogical relations of vertical descent and devolution

that connected the individual plant or animal to its ancestral place and bloodline

(p. 178). In addition, as was the case with the traditions and practices of social

inheritance, biological succession was not conceived in terms of explicit laws or

mechanisms. In particular, neither the persistence of species type nor the

transmission of individual accidents of generation was seen to require any sort of

internal principle connecting offspring to parents. Moreover, it was not yet

evident that anything about these phenomena needed to be explained. Before

explicit questions of biological inheritance could even arise, the uncharted

conceptual territory surrounding the nature of species and their interrelations had

to be explored and mapped.

Global exchange and botanical classification

Over a period spanning three centuries, the logic of genealogical

succession that characterized both the social and natural conceptions of



inheritance was altered by a number of major conceptual shifts. First, in both

cases the logic of succession itself came to be understood in more explicit terms.

For example, throughout the early modern period, as society became increasingly

mobile, traditional practices of property devolution were increasingly brought

within explicit legal frameworks (Sabean, 2007). Second, as McLaughlin (2007)

explains, there was a subtle shift in the way social and biological succession were

represented. In the context of social succession, where the primary emphasis had

been on the benefactor and his act of bequeathing the clan’s legacy, the

development of standardized legal procedures brought the mechanics of the

process itself into focus (p. 281). At the same time, the generation of plants and

animals was increasingly being represented as a natural occurrence governed by

constant laws, rather than as the work of parents. Third, the imagined

configuration of family and species groups shifted from a vertical to a horizontal

orientation. The family, for example, was conceived less as a vertical bloodline

through which power and prestige were preserved and more in terms of horizontal

kin relationships that could be developed in the interest of capital accumulation

(Sabean, 2007, p. 49). This third shift occurred in natural history through the

explication of new species concepts, in which a relational, ecological

understanding of species types gradually replaced the traditional emphasis on

unique essences (Müller-Wille, 2007).

To understand the conceptual developments that transformed early modern

natural history, we must consider the institutional framework that emerged as a



direct result of the expansion, in the seventeenth through nineteenth centuries, of

colonialism and long distance trade. Perhaps the most significant institution in the

transformation of natural history was the botanical garden. A global network of

botanical gardens was established during the early modern period for the

collection, exchange, and study of plant specimens. Müller-Wille (2007) argues

that the practices of specimen exchange and controlled cultivation facilitated by

the gardens were decisive in precipitating a deconstruction of the tacit relations

traditionally believed to bind living beings to their natural places. He suggests that

these new practices realized, spatially, what would become a novel conceptual

distinction, namely, that between the genealogical and environmental components

of generation. The mobility of individual specimens of various plant types

literally performed the difference between characters that are relatively dependent

on the conditions of cultivation and those that seem relatively independent of

those conditions.

As a consequence of this physical and conceptual uprooting of plants from

their natural places, naturalists began increasingly to conceive of species type in

terms of the qualities that reproduce across a range of conditions. As a result, the

tacit species essentialism of premodern natural history was replaced by an explicit

but ultimately unstable doctrine of species constancy. Seventeenth century

naturalist John Ray (1628-1705) articulated what is generally considered the first

modern species concept. He defined the biological species explicitly in terms of

traits that appear reliably, regardless of the conditions of generation. He described



the constancy of species type in terms of the potentiality represented by the seeds

of a single plant. Writing in 1686, he explained that “no matter what variations

occur in the individuals of the species, if they spring from the seed of one and the

same plant, they are accidental variations and not such as to distinguish a species .

. . one species never springs forth from the seed of another” (quoted in Mayr,

1982, p. 256). Carl Linnaeus (1707-1778), perhaps the leading promoter of

species constancy, made this idea even more explicit by arguing that all of the

seeds of a species are, by definition, exactly the same. As Müller-Wille (2007)

explains, Linnaeus claimed that all individuals of a species are “genetically”

identical cousins, descended from one pair of ancestors created in the beginning,

and all intraspecies variations are, therefore, due to accidents of generation (p.

184). Linnaeus’ principle provided a rational basis for what Ray had described. In

addition, given the emerging appreciation for the notion that generation is subject

to universal and unchanging natural laws, it followed from Linnaeus’ view that

“organisms under all circumstances reproduce within the bounds of species” (p.

178).

This breakthrough, which established species as a real biological (rather

than merely a logical or nominal) category, is often under-appreciated because of

the emphasis historians have placed on the eventual rejection of species constancy

by evolutionists (Amundson, 2005). The doctrine of species constancy constituted

a significant advance, however, because it replaced traditional essentialism with a

rational typology, which included explicit criteria for distinguishing species types

from each other and from mere varieties. It was still essentialist in that the species

was conceived as an objective type, but it was a relational typology, in that it

emphasized the place of each species within a larger system of interconnections

(Müller-Wille, 2007, p. 178). Müller-Wille cites Mayr’s observation that

naturalists gradually stopped using the species concept to refer to an “intrinsic

property” and began increasingly to conceive of species in terms of ecological

relationships (p. 178). From this point forward, as Foucault (1973) writes, “all

designation must be accomplished by means of a certain relation to all other

possible designations. . . . An animal or a plant . . . is what the others are not; it

exists in itself only in so far as it is bounded by what is distinguishable from it”

(p. 144).

One of the routine activities of naturalists in the botanical gardens was the

reduction of plant varieties to their species in order to place them within a

universal system of classification, a Natural System. The effort to “sow out” those

variations that are due to soil and climate was precisely what led botanists to

recognize the many exceptions to their simple framework. As Müller-Wille and

Orel (2007) point out, the naturalists encountered two problematic cases (pp. 179-

180). First, there were occasions when individual plants from the same lineage

would exhibit accidental variations that did not disappear in the next generation,

despite careful cultivation. If, as Linnaeus claimed, every seed was identical and

“genetically” immutable, accidents of generation would be very unlikely to breed

true. Second, there were plant varieties that exhibited supposedly species-typical

characters that seemed to be derived from two different plant species. These

hybrids could not be accounted for by the Linnaean doctrine of species constancy.

Linnaeus recognized these problematic cases and took them seriously. As Müller-

Wille and Orel explain, to deal with the first type of case, Linnaeus invented a

new category: the constant variety (p. 180). In addition to those characters that

were species typical, and thus supposedly invariant, and those characters that

were thought to be entirely dependent on conditions, this new category identified

characters that depended on the sub-specific lineage in which they appeared. The

case of hybrids, meanwhile, led Linnaeus to found a research tradition in

hybridism whose line of intellectual descent can be traced ultimately to Mendel

(Müller-Wille & Orel, 2007). Thus, as Müller-Wille (2007) emphasizes, taxonomic practices enacted in

the botanical gardens eventually revealed exceptions to the distinctions defined by

those very practices, and not only led botanists to recognize the reality of sub-

specific groups, but also forced them to pay ever closer attention to minute

differences in individual characters and their patterns of reproduction.

The natural history of man

Questions concerning the nature of stable, sub-specific lineages also arose

during the early modern period from a domain far outside the controlled

conditions of the botanical gardens. Colonial expansion not only created the

conditions for the elaboration of new botanical practices and concepts, but it also

produced a large scale, unintended experiment in human reproduction. The

physical differences among diverse human populations were, at one time, thought

to indicate the existence of multiple human species, possibly with independent

origins. The events of the eighteenth century, however, led some naturalists and

philosophers to begin to question this assumption. The increased contact among

diverse groups naturally resulted in an increased frequency of intermarriage. The

children engendered by these couplings exhibited a mix of their parents’

characteristics, which they, in turn, passed on to their own children. As Mazzolini

(2007) explains, Spanish and Portuguese colonial authorities, who were forthright

in their white supremacist attitudes, dealt with this proliferation of mixed

individuals by developing increasingly intricate systems of social stratification,

called Castas, by which they assigned legal and social status to individuals

principally based on skin color. In addition to highlighting patterns of hereditary

resemblance, the colonial authorities’ focus on skin color, in particular, led them

to place great emphasis on the fact of blending.7

The patterns exhibited in the inheritance of racial characteristics led a

number of eighteenth century authors to conclude that humanity consists of a

single species, which had split off into multiple sub-species, or races. Naturalists,

such as Buffon, Johann Blumenbach (1752-1840), and Leopoldo Caldani (1725-

1813), supported this so-called monogenist view of the human species by

analyzing the way individual human traits were mixed and combined in the

7
Indeed, the treatment of skin color as a paradigm hereditary trait

would return a century later in Fleeming Jenkin’s (1867) review of The

Origin of Species, for which the issue of blending proved central.



children of individuals from different racial groups. According to Mazzolini

(2007), because these authors were forced to rely largely on documents produced

by Spanish and Portuguese colonial authorities, what they actually ended up

analyzing was the social classification system used to govern colonial territories.

The authors, therefore, uncritically treated skin color as the defining characteristic

of race (p. 355). The easy blending of skin color, moreover, provided ready

evidence for the argument that racial differences are superficial and historically

constituted.

McLaughlin (2007) notes that Kant also wrote about the origin of human

racial groups in terms of the origin and persistence of various skin colors (p. 283).

Kant focused on the so-called black and white races, but unlike other theorists on

racial difference, he suggested that dark skin is an adaptation, which arose in

response to the climatic conditions of places like Africa. Although Buffon had

also attributed skin color differences to climate, as McLaughlin explains, his

explanation had not appealed to the idea of adaptation. In Buffon’s thoroughly

teleological worldview, the species type in its original wholeness was the apex of

adaptedness, designed by the Creator to fill a prescribed role in the economy of

nature. Individual variations could, therefore, only be understood as degenerations

from the original, ideal form (p. 279). Kant’s suggestion that individual variations

can have adaptive value independent of the whole organism, was a

notable departure (p. 284). Although, as McLaughlin’s discussion makes clear,

Kant’s view of adaptation was still a long way from Darwin’s, his reasoning

represented one of the first attempts to account for the inheritance of racial

differences in terms of the advantages they conferred.

Kant’s recourse to history (in its temporal sense) was part of a more

general development of historical consciousness taking place in the late

eighteenth century. Foucault (1973) writes that, “historians of the nineteenth

century were to undertake the creation of a history that could at last be 'true' . . . a

history restored to the irruptive violence of time” (p. 131). Moreover, Kant was

certainly not the only pre-Darwinian theorist to apply historical reasoning to

organic form. According to Müller-Wille (2007), the growing attention being paid

by naturalists to the ecological interdependence and diversity of species and

subspecies forced them to question the traditional assumption that organic forms

fill preordained ecological roles. They were increasingly noticing that species

differing both in form and perfection [sic] are able to perform equivalent

ecological functions, while species that are morphologically similar often thrive

under quite diverse ecological conditions. As Müller-Wille explains, this

conundrum was crucial in prompting naturalists to resort to history to account for

the apparent contingency of these relations (p. 194; see also Jacob, 1970/1976).

Moreover, this new attention to historical contingency and organic diversity

contributed substantially to the background against which naturalists began to find

inheritance metaphors useful in describing the reproduction of accidental

variations (Müller-Wille, 2007).

Agriculture: The Contribution of Selective Breeders

While natural philosophers were pondering the laws of generation, and

naturalists were identifying the traits that appear irrespective of the conditions of

generation, farmers and animal breeders were trying to minimize degeneration.

The economic transformations that were sweeping Europe during the early

modern period had created opportunities for the introduction of established

livestock breeds into new environments. In the short run, these events required

farmers and breeders to develop acclimatization practices to deal with the effects

of raising their animals in new environments. In the long run, these events had a

profound impact on the prevailing understanding of the relation between animals

and their natural places. The breeders’ efforts to meet the challenges posed by the

relocation of livestock breeds resulted in the development of new attitudes about

the connection between parents and offspring and the limitations and possibilities

presented by selective breeding.

As Wood (2003) explains, for farmers and animal breeders in early

modern Europe, the notion that like begets like “was associated in some

mysterious way with the animal’s blood and also with its traditional environment”

(p. 22). Because these notions were not explicitly theorized, it is impossible to

state precisely how the relationship between breed and place was conceived. In

particular, the causal distinction between differences in breed and accidental

differences is nowhere clearly articulated. “Differences between races (breeds),

and also between individuals within a race,” Wood says, “were explained

ultimately by reference to accidents of development caused by influences over

which the farmer had limited or no control” (p. 22). While differences between

individuals might be due to circumstances surrounding the moment of conception,

such as abnormal weather, the mother’s imagination, or astrological influences,

differences between breeds were typically attributed to stable aspects of the local

environment, such as air, water, food, and climate (p. 24). Yet, it does not seem

that breed differences were viewed as simply environmental as opposed to

hereditary. Blood and place, according to Wood (2007), were intimately

interconnected for the early modern farmer. The tacit belief that “the domestic

animal inherited its environment just as truly as it inherited its blood” expressed a

deep intuition about the mysterious interconnection between an animal’s distinct

nature and its traditional environment (p. 230).

Because of their assumptions about the connection between blood and

locality, farmers expected that breeds transported to new places would undergo

significant degeneration (Wood, 2007, p. 230). Paralleling the experience of the

botanists,8 however, this problem turned out to be relatively minor. Wood (2003)

describes one of the early “experiments” in which the relative effects of blood and

locality began to be distinguished. In 1723, Swedish farmer Jonas Alströmer

imported Spanish Merino sheep from their sunny home climate in southern Spain

8
The parallel is literal. As Wood (2003) notes, the efforts of breeder

Jonas Alströmer to cultivate foreign sheep in Sweden coincided with his

friend Linnaeus’ similar efforts with exotic plants (p. 26).



to cold, wet Sweden, in an effort to create a domestic wool industry. Alströmer

took great care to mitigate the dramatic change of environment, and the sheep

were soon producing fine wool in their new home. Alströmer thus demonstrated

that animals could be transplanted to new conditions without significant

degeneration, and his success inspired others, who continued to improve on the

techniques he developed. At first, husbandry practices were based primarily on

managing the conditions in which the young were reared. As time went on,

however, breeders discovered that, for the particular range of characteristics and

environments that concerned them, the careful control of breeding was much

more effective than the management of external influences.

Alströmer’s experiment inaugurated an extended period of debate and

discussion on the various problems associated with controlling the patterns of

inheritance between parents and offspring (Wood, 2003, p. 26). As Wood

explains, the breeding discourse was intensified further by the remarkable success

enjoyed by Robert Bakewell and the breeding association he formed with fellow

farmers in 1783, the Dishley Society. This association of breeders introduced

many innovations, including a trait-based approach, whereby they attempted to

breed sheep for particular, economically desirable characteristics. Their focus on

individual traits contrasted with the traditional grading technique, in which males

of high grade stock were brought in and crossed with local females in order to

increase the proportion of “superior blood” in the next generation. Grading was

based on a traditional blood fraction concept and was directed toward the

improvement of overall health and vigor, rather than the propagation of particular

traits. Although controversial at first, the trait-based approach eventually caught

on as a result of the indisputable economic advantages enjoyed by Dishley

Society members.

In addition to the shift of focus from whole individuals to isolated traits,

Wood (2003) points out that Bakewell introduced population thinking to the

breeding industry (p. 29). Based on straightforward economic considerations,

Bakewell emphasized the importance of establishing a desirable trait throughout

an entire flock. He was once quoted as saying that “the merit of a breed cannot be

supposed to depend on a few individuals of a singular beauty; it is the larger

number that must stamp their character on the whole mass” (quoted in Wood,

2007, p. 235). The key to the production of a high-value breed, for Bakewell, was

to create a population in which the most economically valuable characteristics

were reliably present and consistently passed on to subsequent generations.

Bakewell and the Dishley Society developed specific techniques to

improve the reliability with which accidental variations could be made to spread

through a population. The primary technique they pioneered, according to Wood

(2003), was to inbreed very close relatives, fathers with daughters, brothers with

sisters, etc. This technique, which they called breeding in and in, was said to

produce individuals with greater prepotency, defined as the capacity of an

individual to engender its own particular traits in its offspring. Although

prepotency was thought to have something to do with the blood, it remained



among what C. C. André once called “the innermost secrets of nature” (quoted in

Wood, 2003, p. 33).

Despite the successes experienced by Bakewell and his colleagues, as

Wood (2003) points out, the advisability of close inbreeding was widely

questioned due to the frequency of negative side-effects (p. 28). Breeders needed

to be constantly vigilant in order to avoid the problematic health consequences of

inbreeding, and few breeders possessed the highly developed judgment of Dishley

Society members. As Wood notes, most experts agreed that a greater

understanding of the physiology of generation was needed so that the subjective

element could be replaced with a law-based methodology that anyone could

implement (p. 34). This endeavor developed into a research tradition involving

the participation of professional breeders, academics, and gentlemen naturalists.

Interestingly, the influence of this legacy touched not only Darwin, but also

Mendel. Cyrill Napp (1792-1867), an active participant in the Moravian sheep

breeding community who published his own views on heredity, also happened to

be the Abbot at the Monastery of St Thomas in Brno when Mendel entered there

in 1843.

The disentanglement of blood and locality in agriculture mirrored the

conceptual shifts taking place in natural history. The various races and varieties of

living beings that had been tied to their natural places since time immemorial

were becoming mobile. There had been few occasions to consider the nature of

those ties until they were broken by the emerging culture of exchange. As Müller-

Wille (2007) writes, “inheritance became an issue when the intimate relationship

between property, place, and ancestry, which organisms seem to enjoy in their

natural places, was upset” (p. 194). The ascendance of physical mobility, material

exchange, and horizontal affinity provided the structural conditions for the

emergence of heredity as an epistemic possibility. Parnes (2007) suggests that,

given these societal realignments, it was all but inevitable that a knowledge

regime would emerge that treats capital, goods, knowledge, and hereditary

dispositions as independent elements, which circulate within a population and

may be transmitted to the next generation. At the same time, the integration of

heredity into the science of living beings required a conceptual framework in

which the various hereditary phenomena could be defined, interrelated, and

contested. As we shall see in the next section, this framework would come from

the unlikely halls of medicine.

Medicine: From Hereditary Malady to Human Heredity9

Throughout the seventeenth and eighteenth centuries, a crisis was building

for traditional medicine. Among the motley variety of healers populating

medieval European society, the rational and learned doctor had always

differentiated himself based on his university education and, in particular, on his

9
Unless otherwise noted, all the historical research in this section

derives from the work of Carlos Lopéz-Beltrán, whose exhaustive

exploration and analysis of the medical history of heredity is as useful as it

is impressive.

ability to provide a “good story” (French, 2003, p. 2). As I noted in the previous

chapter, when the medieval universities collapsed, the comprehensive

metaphysical system anchored by them also collapsed. Without a foundation in

Aristotelian natural philosophy, the early modern physician was deprived of his

primary source of authority (French, 2003). Medicine needed a new source of

authority and, in the eighteenth century, nothing was more authoritative than the

new mechanical philosophy of Newton and Boyle. This disciplinary and

theoretical crisis provided the background against which medical thinkers began

seriously to take up the problem of hereditary disease. As I shall show, medical

theorists of the late eighteenth and early nineteenth centuries were largely

responsible for working out the structure of the modern concept of heredity. And,

for obvious reasons, the contours of its logic grew out of its stated purpose, to

solve the problem of hereditary disease transmission. As Lopéz-Beltrán (2007)

writes, “the early modern notion of hereditary diseases was a main hinge by

which the metaphorical flow of structured meaning passed from the legal to the

biological sphere” (p. 107).

It had been noticed since antiquity that certain diseases such as epilepsy

seem to run in families. Along with ordinary physical similarities between parents

and children and the mixing of traits observed in hybrid animals, family diseases

were among the accidental resemblances often described using the adjective

hereditary (Lopéz-Beltrán, 1994, p. 214). It is important to recognize that this

usage was purely metaphorical; it did not imply any general hypothesis about

causation or any notion that these disparate phenomena would admit a single

explanation. As Lopéz-Beltrán (2004) argues, heredity, as a general biological

category, which was able to encompass all that had come under the metaphor of

inheritance, emerged out of a concerted theoretical effort led by medical thinkers.

The effort to domesticate hereditary disease took on greater urgency in the

latter half of the eighteenth century, and the French medical establishment, in

particular, played a pivotal role in raising the profile of the problem. Two essay

competitions sponsored by the Parisian Société Royale de Médecine, in 1788 and

1790, provided a vital forum for debating and clarifying the conceptual terrain

related to hereditary diseases (Lopéz-Beltrán, 2007). The French medics’ most

concrete achievement, according to Lopéz-Beltrán (2004), was the introduction

into biology of the French noun l’hérédité (heredity). The importance of this

linguistic innovation can hardly be overstated. As Lopéz-Beltrán argues, it was

the emergence of the noun that enabled the concept to move beyond the realm of

metaphor, and to become both more general and more real. With the noun in place

as “the carrier of a structured set of meanings,” a wide range of biological

phenomena could then be consolidated within a single causal framework, and the

details could be defended and contested (p. 41).

The emergence of the noun heredity was a crucial event, but its

significance may have derived less from the reification itself than from the

conceptual structure that provided its foundation. This conceptual structure was

developed largely through the efforts of medical theorists to establish precise



criteria for distinguishing truly hereditary diseases from those whose

intergenerational recurrence was attributable to other factors. Before discussing

the development of the concept in detail, let us preview its main features. First of

all, the designation hereditary was restricted, by definition, to maladies that were

considered to be congenital, as opposed to connate. That is, only a malady that is

irreversibly rooted in the constitution during the first formation (when the initial

rudiments of the embryo are consolidated) could qualify as hereditary. This

excluded any maternal influences that might affect the embryo during

development. Second, the manifestation of a disease at a predictable stage of life

was considered a key indicator of its congenital and thus hereditary origins. Third,

a new model of latent causation was developed to account for the unreliability

with which symptoms of hereditary diseases appear in individuals. Here, it was

not the disease itself that was understood to be inherited. Rather, a predisposition

for the disease was supposed to be transmitted, which would then require a

triggering cause to produce symptoms. This model could account for why a

purportedly hereditary disease might affect only a few of one’s children, or skip a

generation altogether.

Establishing a conceptual domain

Judging from its increasing frequency in the medical literature, the topic of

hereditary disease seems to have experienced a significant upsurge of interest as early as the

turn of the seventeenth century (see Lopéz-Beltrán, 1992, Appendix 1). This is

perhaps not all that surprising. The Scientific Revolution was just getting

underway, and a new breed of medical theorists was beginning to reject those

aspects of their predecessors’ theories and practices that conflicted with the

emerging worldview. Over the next two centuries, the holistic perspective that

had governed the traditional medical arts was gradually rejected in favor of the

mechanistic perspective that characterized Enlightenment natural philosophy. In

particular, a new solidist physiology, which attributed disease to the

malfunctioning of solid parts, emerged to challenge the traditional reliance on

humoral imbalances for describing (actually defining) disease. It was against this

background that eighteenth century medical theorists began their effort to define

the domain of hereditary disease and to map the territory it encompasses (Lopéz-

Beltrán, 2007).

The mapping of the conceptual domain of hereditary disease relied, to a

great extent, on the appropriation of an existing conceptual framework that had,

since antiquity, been used to describe chronic, incurable diseases (Lopéz-Beltrán,

2007, p. 110). This framework was based on the notion of the individual

temperament or constitution. Originally conceived in terms of the four cardinal

humors and their relative proportions, the temperament was understood to

constitute one’s essential psychological and physiological makeup (much the way

DNA is currently treated in popular discourse). However, as Lopéz-Beltrán

(2004), explains, despite its origins in the humoral tradition, the temperament was

gradually transformed into a more general notion, unconnected to any single

physiological theory (p. 49). This shift coincided with a linguistic drift, in which

the word constitution came to be preferred over temperament, particularly by

solidist theorists for whom the former connoted a certain concreteness and

structural stability, in contrast to the fluidity and imprecision associated with the

latter (Lopéz-Beltrán, 1992, p. 50).

The integration of hereditary phenomena into what Lopéz-Beltrán (2004)

calls the semantic niche occupied by the individual constitution was motivated, in

part, by the need to explain the chronic and incurable nature of hereditary

maladies. Indeed, Waller (2002) argues that the notion of a hereditary malady was

actually a “spin-off” from the notion of a constitutional malady, which physicians

had begun to rely on in order to rationalize their inability to treat certain ailments

(p. 414). Either way, the recruitment of the constitution by medical theorists

seems to have provided a general framework, within which a variety of theoretical

positions on hereditary disease could be posed and contested (Lopéz-Beltrán,

2004, p. 49).

First and foremost, it was argued that, to be considered hereditary, a

disease must be rooted in the individual constitution, which meant that it must

have been acquired at the moment of first formation, when the seminal

contributions of the parents stabilized and consolidated the basic organization of

the individual (Lopéz-Beltrán, 2007, p. 108). Thus the constitution framework

entailed a fundamental distinction between congenital maladies, acquired during

first formation, and connate ones, acquired sometime later. Only congenitally

acquired maladies could qualify as truly hereditary because, after the first

formation, the constitution was understood to be essentially fixed. Influences that

come later, along with the diseases they might cause, were understood as more

superficial and therefore contingent. This distinction clearly anticipates what

would eventually be understood as the hard view of heredity that has come to

characterize modern biology.

Second, the constitution framework suggested a model of causation that

proved invaluable for explaining patterns of hereditary transmission. According to

Ackerknecht (1982), one of the key corollaries of constitutional disease was the

notion of disease predisposition, or diathesis, which, though dating to antiquity,

did not acquire a “definite” meaning until around 1800. It was around this time, as

Hamlin (1992) points out, that the topic of diathesis was generating an

unprecedented level of interest, with references appearing throughout the

literature in practically every branch of medicine (p. 53). In order to understand

this new view of disease causation, it is helpful to keep in mind that the ontology

of disease, for early modern medicine, was subtly but significantly different from

the modern one. As Hamlin explains, rather than as something one can acquire,

via some sort of agent, a disease was understood as simply a state of imbalance.

The cause, therefore, was generally understood as the proximate, physiological

one (e.g., excess bile) (p. 50). As Cartron (2003) explains, a diathesis, on this

view, was understood not as a disease, per se, but as a general weakness or

vulnerability, which could produce any one of a number of different diseases (p.

161).

The irregularity exhibited by patterns of hereditary transmission was a

persistent problem for hereditarian medical theorists and a consistent target for

critics. William Cadogan, for example, had this to say about hereditary gout in

1771:

Those, who insist that the gout is hereditary, because they think they see it
sometimes, must argue very inconclusively; for if we compute the number
of children who have it not, and women who have it not, together with all
those active and temperate men who are free from it, though born of gouty
parents; the proportion will be found at least one hundred to one against
that opinion. . . . What is all this, but to pronounce a disease hereditary,
and prove it by saying that it is sometimes so, but oftener not so? (quoted
in P. K. Wilson, 2007, p. 139).

Medical theorists recognized that the notion of diathesis or constitutional

predisposition could help them deal with the irregularity problem, and set out to

give it a detailed treatment (Waller, 2002, p. 421). In order to make the notion of

predisposition precise, the medical theorists developed an explicit multilevel

model of causation (Lopéz-Beltrán, 2007). On the most basic level, there was the

immediate concrete physiological imbalance responsible for a patient’s

symptoms. This was the standard understanding of disease in terms of its

proximate cause. In addition, the medics distinguished two types of so-called

remote causes: predisposing causes and triggering causes (Hamlin, 1992, p. 51;

Lopéz-Beltrán, 2007, p. 119). The predisposing cause was the diathesis itself,

understood to entail a specific, though unobservable factor, which might remain

hidden indefinitely. If the offspring of a parent afflicted with some supposedly

hereditary disease remains asymptomatic, the inherited cause of the disease could

simply be considered latent. Should it manifest at some point, its appearance

would be attributed to the presence of a triggering cause.

By attributing hereditary diseases to a combination of predisposing and

triggering causes, hereditarians could explain why a hereditary disease often fails

to affect all the children of an afflicted parent. Since it is the predisposing cause

that is actually transmitted, the appearance (or not) of symptoms could be

explained in terms of the contingent presence (or not) of a triggering cause.

Erasmus Darwin, for example, appealed to hereditary predisposition to explain the

irregularity of the gout. It might remain latent, the gouty doctor claimed, unless it

is exposed to a triggering cause, such as excessive alcohol consumption (P. K.

Wilson, 2007, p. 137). The convenience of the scheme was not lost on the

skeptics, however. Waller (2002) notes British surgeon Benjamin Phillips’ wry

observation that counting diseases that afflict grandparents and grandchildren while

skipping the intervening generation drastically improves the statistics in favor of

hereditary transmission (p. 421).

The concept of predisposing causation also helped to make sense of the

fact that many of the diseases suspected of being hereditary, such as scrofula and

gout, were known to exhibit a predictable pattern of manifestation. These diseases

would reliably appear at the same stage of life and the symptoms would persist for

a consistent period of time (Lopéz-Beltrán, 2007, pp. 114-118). The regularity of

this pattern was naturally attractive to hereditarian theorists and therefore adopted

early on as an important distinguishing criterion. In the Encyclopédie, published



by Diderot and D’Alembert in the middle of the eighteenth century, Diderot

incorporated this phenomenon as part of his definition of héréditaire (hereditary).

Indeed, he cited, as his primary example of a hereditary disposition, the bodily

changes experienced during puberty (Lopéz-Beltrán, 1994, p. 227). Diderot

claimed that because this pattern (later named homochrony by Haeckel)

characterizes hereditary disease, it must also be transmitted to the child during the

original formation of the individual constitution.

Competing physiologies and conceptual refinements

The elaboration of the conceptual elements outlined in the preceding

section was crucial for the development of heredity, but it did not occur in

isolation from considerations of physiology. Indeed, there was a constant

interplay between the conceptual requirements arising from the phenomenology

of hereditary transmission and the requirement that any proposed model at least

not contradict the prevailing understandings of the physiology of reproduction.

Meanwhile, throughout the eighteenth and nineteenth centuries, these prevailing

understandings were evolving. In particular, the dominance of the traditional

Hippocratic-Galenic approach to medicine was being challenged, particularly in

France, by the ascent of solidism, a school of thought that, as discussed above,

attributed health and disease to the solid structures of the body rather than to an

imbalance of humoral fluids (Lopéz-Beltrán, 2001, p. 78). Motivated by the

intellectual prestige associated with the mechanical philosophy, proponents of

solidism pushed medicine to develop an explicitly mechanistic physiology.



Initially, solidist physiologists tended to be skeptical that hereditary diseases

actually exist because it was difficult to imagine a viable route of solid to solid

transmission. However, as the influence of solidist physiology spread and the

problem of hereditary transmission continued to gain cultural resonance, it

became all but inevitable that the two domains would meet. Indeed, the discourse

on hereditary transmission eventually came to be dominated by solidist

physiologists who were increasingly open to its reality. The effort to explicate the

conceptual domain of hereditary transmission within the French medical

establishment continued to gain momentum, while the physiology problem served

as a check on conceptual speculation. In the end, the principle of heredity was

formulated absent an explicit physiological mechanism, but the preoccupations of

the solidists were pivotal in determining its ultimate conceptual contours.

Although a growing number of eighteenth century medical theorists were

attempting to account for the transmission of hereditary diseases and

resemblances in terms of “a regular, stable, physiological source” (Lopéz-

Beltrán, 2001, p. 78), a humoralist conception of the body continued to dominate

most physicians’ attitudes toward health and disease. As Lopéz-Beltrán (2001)

points out, for traditional humoralism, the individual temperament was

understood as somewhat fluid and open. In contrast to the modern tendency to

sharply distinguish between internal and external causes (nature and nurture), for

early modern medicine, body and environment spanned a relatively permeable

and dynamic boundary. Health and disease were conceived in terms of a balance

maintained through physical and mental interactions involving the so-called six

non-natural things: air; food and drink; physical motion; sleep; evacuation; and

the passions (Lopéz-Beltrán, 2001). Lopéz-Beltrán argues that, because the individual

temperament or constitution was literally constituted and sustained through the

interpenetration of the naturals and the non-naturals, humoralism was all but

unable to conceive of a strict distinction between hereditary nature and

environmental nurture (pp. 80-81).

For these precise reasons, accounting for the physiology of

hereditary transmission was less problematic for the humoralist than for the

solidist, which is why the latter were initially more skeptical about it. As Lopéz-

Beltrán (2001) points out, since the constitution, for humoralism, is understood as

a blend of fluids, and new beings are engendered through the mixing of the

seminal contributions of the parents, it is easy to imagine how the humoral

qualities of the parents might be passed on to their offspring (See also Cartron,

2007, p. 160; Lopéz-Beltrán, 1994, p. 225 n41). Many humoralists imagined that

the hereditary transmission of a disease resulted from a specific poisonous humor

or taint that could be passed in the seminal fluid of one of the parents. The idea of

a physical substance of this sort being communicated to one’s offspring could be

used to explain a number of facts about hereditary disease. A toxin could damage

a specific system, producing particular symptoms, or it could attack different

systems at different times, potentially producing a variety of maladies. Indeed,

some physicians suggested that a single such protean virus might persist in an

individual’s system for a lifetime, causing scrofula, phthisis, rickets, gout, dropsy,

scurvy, epilepsy, mania, and hysteria. It might then make its way to the sperm or

mother’s milk to be passed to the next generation (Lopéz-Beltrán, 2007).

While intuitively plausible, the conception of hereditary disease as a

product of specific morbid humors did not accord well with the conceptual

demands emerging in the eighteenth and nineteenth centuries. For example,

although the passing of specific noxious substances did not strictly conflict with

the notion of predisposition, there was nothing in the theory to explain why such a

substance would act as a latent rather than a proximate cause. And there was

nothing to justify its special protean capacity to cause multiple diseases.

Particularly problematic for the humoral account of transmission, according to

Lopéz-Beltrán (2007), was the regularity associated with homochrony (p. 118).

Why would a poisonous humor cause symptoms at a particular stage of life and

persist for a predictable period? It isn’t that such a pattern was inconceivable, but

simply that the pattern was not addressed by the appeal to a poison. Of course,

there were many other humoralists for whom hereditary transmission was less

about the transmission of a specific poison than the passing of the overall

temperament. However, explaining disease in terms of a general temperamental

weakness is not much of an improvement if the goal is conceptual clarity and

mechanistic precision.

On the whole, the style of reasoning that characterized humoral medicine

ultimately ran against the tide of late eighteenth century medical thought. For

example, at the same time that solidist medical theorists were attempting to

further clarify the distinctions used to classify hereditary diseases, humoralist

physiologists were continuing to transgress them. As Lopéz-Beltrán (2001) points

out, for a growing number of theorists, being congenital, that is, rooted in the

constitution during first formation, was decisive for explaining the chronic nature

of hereditary diseases, as well as their patterns of latency and homochrony (p.

114). Yet humoralists found the congenital/connate distinction difficult to accept

(p. 115). For these theorists, the influence of humors in the mother’s blood or

milk was indistinguishable from the influence of humors in the seminal fluids.

Nor were such influences limited to material factors; immaterial factors, such as

the mother’s state of mind, were also considered capable of influencing the

developing child’s temperament (p. 115). As a rule, humoralists regarded

hereditary transmission as merely one more way that humoral influences could

have their effects (Lopéz-Beltrán, 2007, p. 81).

The solidist approach to medicine presents a stark contrast to the fluidity,

permeability, and holism of the humoral approach. Solidism held that the

functional unity of the body is located among its solid, structural features. In line

with their mechanistic stance, solidists insisted that disease results from a

structural or physical defect in the solid parts, rather than from an amorphous

imbalance of fluids. Moreover, as Lopéz-Beltrán (2001) notes, this difference was

not merely one of alternative substances (fluid vs. solid) but also one of levels of

explanation. While humoral explanations were holistic, involving multiple levels



of interactions and relationships, solidist explanations were strictly reductive, and

causation was conceived exclusively at the level of the local and the physical.

Initially, as Lopéz-Beltrán (2001) explains, the ontological commitments

of the solidist school led them to deny the possibility of hereditary transmission,

altogether (pp. 73-81). A highly influential articulation of these doubts, published

in 1748 by the young surgeon Antoine Louis (1723-1792), served as a touchstone

for generations of hereditarian theorists attempting to defend the possibility of

hereditary transmission. Basically, Louis insisted that, since any flaws in the solid

parts or their structural organization would have to be the result of an accidental

occurrence during the first formation of the individual, there would be no

conceivable way for such a flaw to be transmitted from parent to offspring

(Lopéz-Beltrán, 1994, p. 230).

As the eighteenth century drew to a close, however, hereditary phenomena

were becoming a major preoccupation of both the public and the intelligentsia,

particularly in post-Revolutionary France (Cartron, 2007). As Cartron explains,

throughout this period, concern for public health and hygiene was exploding, as

was anxiety over the hereditary degeneration of society (see also Pick, 1989).

Meanwhile, evidence of hereditary transmission was continuing to accumulate.

This convergence of events was enough to persuade solidist physiologists to

become seriously involved in attempting to bring clarity to the emerging domain

(Lopéz-Beltrán, 2001). Indeed, the solidist theorists were primarily responsible

for the distinctions that were definitive in giving the conceptual domain its

clearest articulation. It was solidists, as Lopéz-Beltrán (1994) explains, who

insisted that only congenital diseases should be considered potentially hereditary,

and argued that only those that affect the same organs at the same age be

considered congenital (pp. 230-231).

In addition to the theoretical advantage solidism held by way of its

simplicity and its mechanism, surgery and autopsies were beginning to reveal

inner dimensions of family resemblance, including defects, down to the level of

tissues and organs (Lopéz-Beltrán, 2007). The discovery that structural details

among the solid parts are sometimes shared by family members greatly supported

the solidist position. This conception of hereditary resemblance, however,

deepened the puzzle of transmission. The humoralist view, based as it was on

fluids, could rely on the mixing of the seminal fluids of the parents to provide a

physiological process for the hereditary transmission of poisonous humors or the

entire temperament. But the solidist view, which asserted that hereditary

resemblance must involve a deep and pervasive similarity of bodily structure,

could not rely on any such straightforward mechanism.

In order to speak coherently about hereditary resemblance, therefore, the

solidists resorted to an epistemological strategy which has often proved useful in such circumstances; they appealed to a metaphor and relegated the details to a

black box. According to Lopéz-Beltrán (2007), medical theorist Alexis Pujol

(1739-1804) described the process by which the structural features of the parents

are impressed on the child during the first formation in terms of an analogy to a

hand carefully copying the details of the parents' form onto and into the child's

body (p. 121). This metaphor provided precisely the sort of abstract, lawful

conceptual space that was needed for the development of the theory to advance.

Indeed, as Lopéz-Beltrán points out, this idea of a universal mechanism of

hereditary transmission was a pivotal step on the way to the unification of various

hereditary phenomena from medicine, breeding, and ultimately biology (p. 121).

The solidist conceptual frame was uniquely equipped to accommodate both

normal and pathological instances of hereditary transmission, a crucial

precondition for the development of heredity as a general category of biological

explanation. Thus, despite the fact that solidists had deferred the transmission

problem, they gained the upper hand in the debate over the nature of hereditary

phenomena largely on the strength of theoretical clarity. As Lopéz-Beltrán (2007)

explains, solidist theorists had established a clear framework for heredity that

allowed for a coherent and consistent explanation for a number of the observed

phenomena. All that they lacked was a viable account of the physiology of

transmission.

Conclusion

I have attempted, in this chapter, to provide a sense of the sweeping social,

academic, economic, and medical transformations that informed the construction

of heredity as a knowledge regime and a biological principle. The uprooting of

plants, animals, and people from the unfathomably deep and ancient ties that held

them in place set in motion a chain of events that radically transformed both the

physical and conceptual landscape of early modern Europe. Among the countless

consequences of this cultural earthquake was the emergence of a new sense of

living beings as portable types, whose identities were conceived collectively and

relationally rather than individually and genealogically. Yet, it was precisely this

detachment of beings from the ties binding them, temporally, to their ancestors,

and, spatially, to their ancestral places, that forced the explicit recognition of their

contingent historical existence. When the family estate was transformed from an

immovable place into a collection of commodities that were interchangeable with

capital, inheritance had to be made an explicit legal process. And so, like the

bourgeois individual vis-à-vis the medieval clan, living beings were torn loose

from their physical and metaphysical moorings, and the nature of their

interrelations needed to be made explicit. In this context, the notion of heredity

can be seen, not as a new explanation for these previously untheorized relations,

but as an expression of the lacuna itself.

Meanwhile, in medicine, the question of intergenerational transmission

came into focus precisely as the ontology of disease was being transformed by the

emergence of a new mechanistic conception of the body. The permeable and

holistic balance that had determined health and disease since antiquity was

replaced by a solid and, more or less, invariable frame, the malfunctions of which

could be identified with faulty parts. Though, at first, the solidists found heredity

incomprehensible, their insistence on clear-cut definitions and precise (if

hypothetical) local causes played a critical role in the mapping of the concept’s

internal contours. Solidist medical theorists, relying on the framework provided

by the individual constitution, managed to explicate a precise conceptual structure

for heredity, which could then be applied beyond medicine to describe

intergenerational stability and resemblance throughout the living world.

As I shall show in the next chapter, it was at around this same time that

biology began to emerge as an independent discipline. Early biology was

concerned primarily with problems of structure and function, and their ontogeny.

As a consequence of their preoccupation with whole organisms and the laws

governing their organization, intergenerational similarity remained a peripheral

concern for this first generation of biologists. Yet, by the early twentieth century

biology had become thoroughly engaged with transmission, and the problem of

formation had been pushed to the margins. This change in emphasis was

accompanied by a shift toward reductive and mechanistic thinking, similar to the

one described in the medical context. Perhaps predictably, biology had arrived at

a similar impasse regarding the precise processes responsible for producing

intergenerational similarity and, for the very same reasons, was forced to defer

the hardest part of the problem.



Chapter 4: The Integration of Biology and Inheritance

In the previous chapter, I described a number of shifts in the intellectual

landscape of early modern Europe that, taken together, began to make it possible

for the metaphor of biological inheritance to be transformed into an objective

natural category. Changing representations of the natural order, of selective

breeding, and of hereditary disease, interacted with changing ideas about family

structure and the inheritance of property and titles to give rise to a new conceptual

space in which inheritance could be imagined as a general mechanism responsible

for intergenerational resemblance. At the same time that the conceptual structure

of heredity was being elaborated, life itself was emerging as a distinct area of

scientific study. Yet, surprisingly given its importance today, it required the better

part of the century for heredity to become integrated into biology as a central

scientific problem. It is illuminating, I suggest, to analyze the integration of

heredity into biology in terms of four distinct stages, each of which can be seen as

constituting a relatively coherent framework that nevertheless produced the

conditions for the synthesis that would succeed it.

The Four Hereditary Syntheses

I begin with the emergence of biology itself. As Foucault (1973) has

emphasized, biology was not recognized as a definite area of study before the turn

of the nineteenth century. It was at this time that life itself was beginning to be

seen as a special province of nature, deserving to be studied separately from

physics and, to some extent, even medicine. In biology’s formative period, the

principle of life did not yet include hereditary transmission as a significant

problem. Indeed it was not until the end of this initial phase that the first of four

hereditary syntheses brought the issue of heredity to the attention of biologists.

For the first hereditary synthesis, heredity was understood by analogy to

momentum, or to the weight of accumulated tradition (Lopéz-Beltrán, 2004).

Heredity was the force that acted to preserve species and racial continuity against

the forces of change. Interestingly, the emphasis on stability of type that

characterized this first synthesis represented a significant shift from the concern

with individual peculiarities and accidents that had first occasioned the appeal to

inheritance metaphors (Lopéz-Beltrán, 1992, p. 136). Nevertheless, this energetic

conception of heredity as a counterforce to variation continued to have

widespread influence for the remainder of the century (Gayon, 2000).

The second synthesis of heredity and biology can be found in the

hereditarian speculations of Charles Darwin (1883). Darwin’s theory of natural

selection required an understanding of heredity that was somewhat different than

the view embodied by the first synthesis. Darwin’s conception of inheritance was

based on domestic breeding practices, which had traditionally emphasized the

transmission of individual differences. Although Darwin’s hypothesis about the

causes of character transmission was never accepted, it introduced into biology

the possibility of explaining heredity in terms of the movement of material

particles. This shift in emphasis was decisive for the way questions about heredity

were framed in the decades following Darwin, particularly by those interested in



evolution.

The third synthesis grew out of German research in cytology and

reproductive fertilization. This research program discovered the trans-generational

continuity of the germ line and the physiological processes through which this

continuity was achieved (Coleman, 1965). This work established the nuclear

paradigm, which defined heredity as the transmission and combination of the

parents’ nuclear material. The observations of van Beneden, Strasburger, and

others, along with the theoretical speculations of Roux and Weismann, convinced

many late nineteenth century biologists that the hereditary determinants were

somehow embodied in the structured nuclear material.

The fourth hereditary synthesis culminated in the founding of classical and

population genetics in the second and third decades of the twentieth century. At

the end of the nineteenth century, a small group of botanists independently

investigating the patterns of variation and heredity in various plant varieties had

simultaneously come upon Mendel’s paper. This event initiated a period of

intensive research by breeders seeking to further explicate and extend Mendel’s

findings. This effort culminated in the Mendelian chromosome theory of heredity,

which immediately established transmission genetics as a research program.

Meanwhile, a fierce academic controversy regarding how to reconcile Mendelian

and Darwinian approaches to evolution was resolved through the development of

new statistical tools that enabled the integration of Mendelian genetics into

existing biometric approaches to Darwinism.



A number of authors have suggested that the modern understanding of

heredity became possible only after biologists were able properly to distinguish

between transmission and development (e.g., Bowler, 1989; Sandler & Sandler,

1985). Central to this premise is the assumption that nineteenth century theorists

were unable to recognize the theoretical independence of hereditary transmission

because they thought of it as merely a consequence of the developmental process.

As the story goes, statistical and physiological approaches to the study of

heredity/development in the latter decades of the nineteenth century prepared the

ground for the “rediscovery” of Gregor Mendel’s paper at the end of the century,

by which time theorists were able to recognize its significance, and a new

research tradition was born. Once transmission was recognized as an independent,

statistical problem, the dynamics of embryological development could be treated

as substantially irrelevant to evolution, since the latter could be studied

exclusively in terms of the differential transmission of genes.

I take a slightly different approach to this conceptual evolution. Where the

standard historiography highlights the liberation of heredity from a conceptual

entanglement with development, I examine how the staged integration of heredity

into nineteenth century biology transformed the meanings of both heredity and

development. With each stage, more and more of the causal responsibility for the

ontogeny of form was transferred to the conceptual space being mapped by

hereditarian theorists. Rather than isolating the problem of inheritance from the

more complex and intractable problems of embryology, the identification of



hereditary transmission as a distinct causal phenomenon allowed embryogeny to

be treated as a mechanical unfolding, directed by preexisting formative causes. As

heredity came increasingly to refer to the transmission of characters,

determinants, or dispositions, the role left for the embryologist was simply to

account for the programmatic expression of those preformed elements. The really

interesting questions, meanwhile, those concerning the origin of form and

function, were then taken over by evolutionary theory to be explained in terms of

natural selection and a few other population level dynamics.

Biology Develops (1800-1850)

During the final decades of the eighteenth century and the early decades of

the nineteenth century, the philosophy of living beings was undergoing a

fundamental transformation. Throughout Europe, during this period, a new

awareness was emerging within natural history and natural philosophy that living

beings possess a special quality that makes them fundamentally distinct from non-

living beings (Foucault, 1973; Nyhart, 1995). Life began to be represented, during

these years, as a distinct mode of existence characterized by distinct laws and, as a

result, came to be seen as a valid object of investigation in its own right. Jacob

(1970/1976) describes how the emphasis that eighteenth century thinkers placed

on visible structure was replaced by a greater concern for (non-visible)

organization. These thinkers’ new focus on inner organization can be seen in the

growing recognition among natural philosophers and naturalists that living beings

possess a functional integration and intrinsic wholeness that is not found among

inanimate beings. As Foucault (1973) argues, these changes represented not

merely a new perspective, but a major reconfiguration of the conceptual

landscape. The beginnings of this new outlook can be seen in Cuvier’s emphasis

on functional integration and in the sweeping metaphysical and epistemological

concerns of the German Naturphilosophen. What these disparate approaches

shared was an acceptance of life itself as a special province, characterized by, and

productive of, new forms of knowledge.

This new field of knowledge, this science of life, was given the name

biology (actually the German and French equivalents) independently by a number

of authors around the turn of the century (Coleman, 1977). A decade earlier, Kant

(1790/1951), in his third critique, had cautioned that it would be futile to try to

understand living things according to purely mechanical laws. “It is absurd” he

wrote, “. . . to hope that another Newton will arise in the future who shall make

comprehensible by us the production of a blade of grass according to natural laws

which no design has ordered” (p. 248). As the nineteenth century began, the

challenge was to discover the laws that apply to organized beings in virtue of their

existence as functional, end-directed wholes. Well before the term biology gained

widespread acceptance, research in physiology, comparative anatomy, natural

history, zoology, and morphology had begun to reflect this new attitude.

One of the first institutional settings for the development of a science of

living beings was the Parisian Muséum d’Histoire Naturelle, which had been

established as a center of French natural science in 1793. Because this was a



natural history museum (complete with a celebrated botanical garden, of course),

a great deal of emphasis was placed on the traditional activity of classification.

Yet, the very meaning of natural history was being transformed by the changing

attitude toward living beings. Under the direction of Georges Cuvier (1769-1832),

comparative anatomy became a principal methodology for understanding living

beings in terms of their internal and external relations (Appel, 1987).

Cuvier followed Kant in affirming that living beings are characterized by a

functional integration that can only be analyzed in terms of purpose or teleology

(Appel, 1987). According to Appel, Cuvier’s teleological perspective was

exemplified in his two fundamental laws of biological organization: the principle

of the correlation of parts and the principle of the conditions of existence. The

principle of the correlation of parts held that the functional integration of an

animal’s parts entails a corresponding structural integration. The structure of each

part of an animal reveals its functional relations and implies the structure of the

parts related to it (p. 43). The principle of the conditions of existence stated that

the precise and fixed interdependence of parts is a fixed law that pertains to

natural history without exception. Because of the enormous influence of post-

Darwinian biology it has often been assumed that this principle referred only to

the necessity that an organism be functionally adapted to its environment (see

e.g., Bowler, 1984, p. 106). Cuvier’s principle, however, primarily referred to the

functional interdependence of the organs of an individual animal and only

secondarily to the animal’s ecological relations (Russell, 1916/1982, p. 34). For



Cuvier, teleology and holism were the foundation for explaining biological form

and organization.

As Appel (1987) explains, Cuvier’s chief academic rival in the Academy

of Sciences, E. Geoffroy Saint-Hilaire (1772-1844), took a distinctly different

view on the relation between structure and function. For Geoffroy, structure was

the most salient fact of animal organization, and indeed its primary cause (see also

Russell, 1916/1982, p. 77). Geoffroy founded a morphological research program

called philosophical anatomy, which sought to classify species based on structural

rather than functional considerations. The primary research activity of Geoffroy’s

program, the identification of structurally equivalent organs or parts in different

animal species, was based on his premise that the bodies of all animals are

variations on a single basic plan (see also Gould, 1977b, p. 47).10 Geoffroy’s

conviction that all animals can be referred to a single abstract type was expressed in

two principles. The principle of the unity of organic composition held that every

animal is constituted entirely of the same parts (Russell, 1916/1982, p. 54).

Though the parts may vary in shape and in proportion to one another, according to

the unity of composition, every animal is completely homologous with respect to

every bone and organ. The principle of connections held that these homologous parts can be identified by reference to their spatial and physical interrelations. In other words, homologous parts were identifiable not only by their shape or function, but also by the way they are arranged relative to one another (Appel, 1987, p. 85). For Geoffroy, then, the explanation for biological form and organization was rooted in an understanding of abstract structural types and their formal relations.

10 Goethe made basically the same claim regarding the morphology of plants, coining the term morphology to designate this area of study. He argued that the various parts of the plant are all transformations of the structure of the leaf. This is now called serial homology (Amundson, 2005).

The third significant figure from this period in French natural history is

Jean-Baptiste Lamarck (1744-1829), who, though a generation older than Cuvier

and Geoffroy, made his most substantial contributions in the context of the new

attitude about life that was emerging at the turn of the nineteenth century. In

addition to being among the first to use the term biology, Lamarck was, of course,

the first to publish a theory of the transmutation of species. He suggested that the

hierarchy of complexity found in nature is a result of a process of progression in

which the higher forms arise through the gradual transformation of lower forms.

According to Lamarck’s theory, life has a tendency to increase the volume and

extent of living bodies, to produce new types of organs to meet the needs of these

bodies, and to remember these achievements so they can be reproduced in the

next generation (Russell, 1916/1982). As a result, he argued, individuals in a

lineage will become more complex and more adapted to their environment over

time. Although it is tempting to see a natural affinity between the ideas of

Lamarck and Geoffroy’s unity of type, as Russell points out, Lamarck had no

particular interest in, or knowledge of, morphology. This prevented him from

tapping the rich vein of morphological evidence, and left him that much more

vulnerable to the withering criticism of Cuvier, who understood that the

functional integration that defines complex organisms precludes the sort of

structural changes on which Lamarck’s theory depended. Moreover, it is

important to realize that, notwithstanding the typical association of the name

Lamarck with “the inheritance of acquired characters,” Lamarck actually offered

no theory of heredity. He merely incorporated what was, at that time, a vague and

unsystematic intuition about intergenerational similarity (on the history of so-

called Lamarckism, see Zirkle, 1946). As Lopéz-Beltrán (2004) makes clear, no

true theory of biological heredity existed until after the concept had been worked

out by the French medical theorists.

Another important source of early biological thinking was the German

university system. As Nyhart (1995) explains, the German-speaking universities

were transformed, in the early nineteenth century by the institutionalization of

research as a key faculty activity. The purpose of this innovation was to support

the pursuit of pure knowledge, or Wissenschaft, and to promote the maximum

development of the individual, by inculcating Bildung (Nyhart, p. 14). Research

in human physiology was extended to encompass comparative anatomy and

zoology, establishing a program dedicated to describing the general laws of life.

Because this Wissenschaft of life had its roots in medicine, it was, for the first few

decades, merely an extension of medical physiology. However, according to

Nyhart, as physiology came to be dominated by physicalists who were exclusively



interested in materialistic and mechanistic explanations, those interested in

morphology moved into separately established research programs in anatomy,

zoology, and embryology.

It has been noted by a number of authors that the worldview of nineteenth

century Europe was characterized by a new awareness of historical development,

whether cosmological, geological, cultural, or individual (see e.g., Foucault, 1973;

Gasking, 1967; Sandler, 2000). Nowhere was this perspective more evident than

in the German-speaking states where, as I said, individual development was

incorporated into the university system’s mission. Nature as a whole was

understood, from this perspective, to embody an inner purposefulness, and

individual and cultural development were merely special cases of this more

fundamental reality. The basic paradigm or root metaphor for this attitude was the

organism itself. Naturphilosophen, such as Kielmeyer, Oken, Schelling, and

Hegel took this metaphor the furthest, articulating an explicit, if not monolithic,

organicist metaphysics (Lenoir, 1981). Well after that movement had faded,

though, the developmental perspective continued to play a significant role in the

German and European intellectual milieu.

Not surprisingly, perhaps, the developmentalist perspective was

particularly salient for researchers participating in the emerging Wissenschaft of

life. As a result, there was a virtual consensus among nineteenth century German

embryologists that form develops by gradual epigenesis (Coleman, 1977, p. 43).

Taking development and morphology as their theoretical touchstones,



comparative embryologists set out to identify the laws governing the orderly and

teleological production of form. It was in this spirit that early Naturphilosoph Carl

Friedrich Kielmeyer (1765-1844) first proposed a law of developmental

parallelism, which was later made fully explicit in the Meckel-Serres law of

recapitulation. The Meckel-Serres law stated that “the development of an

individual organism obeys the same laws as the development of the whole animal

series; . . . the higher animal, in its gradual evolution [i.e., development],

essentially passes through the permanent organic stages which lie below it”

(quoted in Coleman, 1977, p. 50).

Karl Ernst von Baer (1792-1876), the acknowledged founder of scientific

embryology, proposed laws of embryological development that rejected explicit

recapitulationism. He argued that the appearance of a parallel between the

embryological stages of higher animals and the adult forms of lower animals is

merely an artifact of the developmental movement from the general to the

particular (Russell, 1916/1982, p. 124). According to von Baer, "there is gradually

taking place a transition from something homogenous and general to something

heterogeneous and special" (quoted in Mayr, 1982, p. 473). For von Baer, every

embryo begins its existence in its most undifferentiated state, based on its

membership in one of four main classes of developmental types, and then

gradually undergoes specialization within its type (Russell, 1916/1982, p. 124).

Characters common to all the members of a type emerge first, followed by less

general characters, culminating with those that are unique to the species and

ultimately the individual.

Concern with the principles of form was not limited to German

morphologists and embryologists, of course. Naturalists, zoologists, and

comparative anatomists throughout Europe were seeking to understand life

morphologically. As Amundson (2005) explains, some of this work, such as

Richard Owen’s research on homology and the vertebrate archetype identified

many of the structural relationships that were later reinterpreted by Darwin in

terms of common descent. In addition, as Russell (1916/1982) and Bowler (1996)

have shown, the evolutionary research program that dominated the immediate

decades after Darwin was primarily concerned with morphology. Evolutionary

morphologists such as Gegenbaur and Haeckel used morphological data to

reconstruct evolutionary lines of descent, and Haeckel, in particular, reformulated

recapitulation in explicitly evolutionary terms, famously stating that phylogeny is

the literal cause of ontogeny (Bowler, 1996).

Obviously, a great deal more could be said about the development of

biology in the first half of the nineteenth century, but the fundamental point I wish

to emphasize with this selective overview is that early biological thought did not

include, and had no particular need for, a concept of heredity. Indeed, many of the

questions that were eventually answered in terms of heredity would have had little

meaning for the biologists of this period. What concerned these early theorists of

life were the basic questions of life itself: What is life? What special laws govern

its domain? How is it organized? How is organic form produced in development?



This last question was particularly salient and continued to interest morphological

thinkers from von Baer to Haeckel and beyond. The notion that discrete

characters (or variations) are transmitted from parents to offspring would likely

have struck these men as uninteresting, if not meaningless.

The First Hereditary Synthesis

During the decades in which early biologists were attempting to explicate

life’s particular explanatory laws, another set of laws was taking shape in the

medical literature. As detailed in the previous chapter, French medical authors

deliberately transformed the notion of hereditary transmission from a descriptive

metaphor into a structured category of biological causation, culminating in its

explicit integration into the semantic field defined by the individual constitution.

This, in turn, enabled normal and pathological modes of hereditary transmission

to be consolidated under the single noun l’hérédité (heredity). From the 1820s

onward, the explanatory scope of the heredity concept was gradually expanded

until it was taken up in a more general discourse among life scientists. This

interaction between medical thinkers, breeders, naturalists, and other biologists

found a culmination of sorts in the 1848 publication of Prosper Lucas’ Traité

philosophique et physiologique de l'hérédité naturelle, an ambitious attempt to

synthesize all that was then known about heredity (Churchill, 1987).

Heredity during this period was beginning to be seen as a useful

explanatory approach that psychiatrists, naturalists, animal breeders, and

anthropologists could apply to existing disciplinary questions. Although theorists



from various disciplines adopted the basic conceptual framework established by

medical authors, their contributions also helped to reshape the concept to fit their

specific concerns (Lopéz-Beltrán, 2004, p. 50). For psychiatrists, heredity

represented an inexorable natural force responsible for the pathological

degeneration of society (Cartron, 2007). The new discipline of anthropology

(called the natural history of man) found heredity useful for analyzing the nature

of human racial groups (Mazzolini, 2007). The distinct interests of naturalists and

animal breeders led them to different perspectives on the level at which heredity

operates. Naturalists tended to see heredity as an explanation for the stability of

types, while animal breeders tended to be interested in the power of heredity to

preserve accidental variations.

Naturalists and breeders negotiated their opposing perspectives on the

contested middle ground occupied by races, breeds, and stable varieties (Lopéz-

Beltrán, 1992, p. 138). In the consensus that eventually emerged, heredity was understood as a force exhibiting different degrees of intensity at different levels.

First articulated by Scottish breeder James Anderson, the idea was that each level,

from the species down to the individual, constituted a distinct domain in which

heredity would set the boundaries for variation on the next lower level (Lopéz-

Beltrán, 1994, pp. 54-55). Individual characters could vary within the limits set by

the characters common to the family; family characters could vary within the

limits set by the racial type; racial characters could vary within the limits set by

the species; and species characters were of course fixed.
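Although nothing resembling a formal model existed for these theorists, the logic of this nested scheme can be made explicit with a brief, admittedly anachronistic sketch in Python. Every name and number in it is an illustrative assumption of mine rather than anything found in Anderson or his contemporaries; the point is only that each level bounds, without explaining, the variation permitted at the level below it.

import random

# Illustrative toy model of the hierarchical scheme described above: each
# level of heredity fixes a band within which the next lower level may vary;
# the species-level value is treated as fixed. All names and numbers are
# hypothetical.
LEVELS = [("race", 0.20), ("family", 0.10), ("individual", 0.05)]

def generate_character(species_value=1.0):
    # Walk down the hierarchy, letting each level vary only within the
    # band permitted by the level above it.
    value = species_value
    for _level, half_width in LEVELS:
        value += random.uniform(-half_width, half_width)
    return value

print([round(generate_character(), 3) for _ in range(5)])

Nothing in the sketch corresponds to a physiological process, which is precisely the point: like the scheme it illustrates, it classifies and constrains variation without saying how any of it is produced.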



A significant consequence of this hierarchical solution to the problem of

sub-specific variation and stability was that heredity was reframed as a stabilizing

force, which effectively reversed the original association of heredity with the

idiosyncratic and the accidental (Lopéz-Beltrán, 1994, p. 56). Indeed, accidental

variation was understood, by mid-century theorists, as a tendency that runs

directly counter to heredity. The most well-known, if overstated, articulation of

this view was the one elaborated by Prosper Lucas. His encyclopedic Traité

philosophique et physiologique de l'hérédité naturelle summarized and

synthesized much of the contemporary thinking on heredity, making it available,

for the first time, to a wide audience. Lucas described the opposition between

heredity and variation in terms of a polarity between two cosmic principles,

hérédité (heredity) and innéité (inneity). The principle of inneity, according to

Lucas, causes individuals to vary freely in all directions; the principle of heredity

promotes stability and prevents variation altogether at the level of species. This

cosmic polarity was supposed to play itself out during the first formation of each

being (Lopéz-Beltrán, 1992, pp. 148-153). At that formative moment, the

characters associated with each level would come into being through a struggle

between these two principles. While species-level characters were determined

completely by the hereditary species type, the formation of individual characters

came under the influence of a hierarchy of ancestral types, which constrained,

without fully determining, the resulting characters.

The publication of Lucas’ work completed the first hereditary synthesis. In



this first pass, heredity was integrated into biology as a version of the type-

accident scheme already operative in traditional natural history. Not surprisingly,

Lucas’ framing expressed contemporary French society’s preoccupation with the

tension between the forces of change and progress and the forces of conservatism

and tradition (Lopéz-Beltrán, 1992, p. 149). Two things need to be emphasized

about the significance that this first hereditary synthesis had for biology. First, as I

said, the concept of heredity entailed by this synthesis had been turned virtually

upside down from the way it was originally formulated in the medical literature. As

a result of its assimilation into the broader life sciences, heredity had been

transformed from an explanation of individual pathology and resemblance into an

explanation of biological, cultural, and cosmic stability. This association of

heredity with the momentum of tradition, holding in check the freewheeling

innovation of variation, would persist for the better part of the nineteenth century,

until finally being displaced by the materialist view embodied in the second and

third syntheses (Gayon, 2000). Second, this framing of heredity complemented

the existing conceptual landscape of biological thought and therefore prompted no

new avenues of research. Although the principles of heredity and variation were

framed in terms of causation, they did little more than name observed patterns of

similarity and difference across generations. Therefore, embryological

development continued to be conceived in terms of embryological laws with no

concrete physiological connection to the past, and the actual mechanisms of

hereditary transmission remained primarily the concern of medical theorists and



professional breeders.

The Second Hereditary Synthesis

Darwin’s theory of natural selection could not rely on the framework

provided by the first hereditary synthesis, since, among other things, the principle

of heredity was understood to preclude the transmutation of species. This was not

an obstacle for Darwin’s thinking, however, because, despite relying on Lucas’

treatise to substantiate the facts of inheritance, he had developed his own

understanding of how it works. Darwin’s view of inheritance was largely

informed by the accumulated knowledge of domestic breeders along with his own

experience breeding pigeons. For this reason, he was especially concerned with

the preservation of individual variations (Bartley, 1992; Müller-Wille, 2007).

Darwin’s approach simply bypassed the conception of heredity as an energetic

force in favor of a structural conception involving the transmission of material

elements (Gayon, 2000). This helped to recast reproduction as a twofold process,

consisting of a transmission phase and a development phase. Although he did not,

in the end, draw a clear distinction between hereditary transmission and

development, he did single out inheritance as an explicit problem (Winther,

2001b). I contend that Darwin’s work on heredity qualifies as a second hereditary

synthesis, not only because he identified transmission as an important biological

problem, but also because his structural, materialist approach emphasized a

distinction between heredity and development that transgressed the prevailing

opposition between heredity and variation. Darwin’s move, though fairly subtle at

the time, helped to realign the relationship between heredity and development in a

way that allowed the ultimate responsibility for form to be gradually relocated

from the epigenetic processes of embryogeny to the preformed units of

inheritance.

Darwin (1883) devoted several chapters in his two-volume work, The

Variation of Animals and Plants Under Domestication to the problem of

inheritance and his solution to it, the provisional hypothesis of Pangenesis. Three

elements of Darwin’s approach to inheritance form the basis of the second

hereditary synthesis. First of all, Darwin was concerned to explain the

transmission of discrete peculiarities from parents to offspring. Second, he

ignored the prevailing energetic conception of heredity and proposed a model

based on the physical transmission of material particles (Gayon, 2000). Third, his

focus on inherited particles opened the door to a conceptual distinction between

transmission and development, even if Darwin did not quite walk through it

himself. Although Darwin’s hypothesis was a scientific failure, these three

elements of his approach fundamentally reformulated the problem of biological

inheritance and created the conditions that informed its further development.

Darwin’s hypothesis of Pangenesis began with the idea that every cell in

the body would release gemmules during all stages of an organism’s development,

including all stages of cellular differentiation. Gemmules were sub-cellular units

that could, under the proper conditions, develop into cells like those from which

they originated. They would gather in the reproductive organs and organize

themselves by “mutual affinity” into buds or sexual elements. The process of

embryological development (as opposed to cell development), Darwin (1883)

wrote, “depends on [the gemmules’] union with other partially developed cells or

gemmules which precede them in the regular course of growth” (p. 370). Darwin

thus accounted for transmission and development by way of a twofold process.

First, there was the gemmule itself, which was essentially preformed, in that it

was destined to grow into a cell identical to its parent. Second, the development of

the whole organism was understood to proceed as a result of the power possessed

by the gemmules to identify each other and come into proper union.

The process of cellular differentiation illustrates Darwin’s curious

combination of preformationist and epigenesist elements. Recall that gemmules

were supposed to be released by cells at all stages of differentiation. Gemmules

from later stages of differentiation would seed cells from earlier stages, and

imbue them with the formative impetus to advance to the next stage. As Darwin

(1883) explained, “as soon as any particular cell or unit becomes partially

developed, it unites with (or, to speak metaphorically, is fertilized by) the

gemmule of the next succeeding cell, and so onwards” (p. 384). So if a cell type

needed to pass through several stages as it progressed from an undifferentiated to

a fully specialized state, there would be gemmules that represented (so to speak)

each intervening stage, and prompted the preceding stage to advance toward

further differentiation. Each moment in epigenesis was supposed to be literally

preformed in the parent. Pangenesis thus defined generation in a way that



privileged transmission as the ultimate source of form in the sense that all

developmental outcomes, even transitional ones, were understood to originate as

gemmules transmitted from the parents.
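The bookkeeping implied by this scheme can be made concrete with a short, deliberately anachronistic sketch in Python. The stage names and the functions collect_gemmules and develop are hypothetical conveniences of mine, not Darwin's; the sketch is offered only to display the logic just described.

# Toy sketch of the Pangenesis bookkeeping described above (an illustration,
# not Darwin's own formulation). Parental cells at every stage of
# differentiation throw off gemmules preformed for that stage; in the
# offspring, the gemmule representing the next stage "fertilizes" a cell at
# the current stage and advances it one step.
STAGES = ["undifferentiated", "partially_differentiated", "fully_specialized"]

def collect_gemmules(parent_cells):
    # Every parental cell contributes a gemmule for its own stage.
    return list(parent_cells)

def develop(cell_stage, gemmule_pool):
    # Advance the cell stepwise; each step consumes the gemmule that
    # represents the next stage in the series.
    idx = STAGES.index(cell_stage)
    while idx + 1 < len(STAGES) and STAGES[idx + 1] in gemmule_pool:
        gemmule_pool.remove(STAGES[idx + 1])
        idx += 1
    return STAGES[idx]

pool = collect_gemmules(STAGES)  # a parent with cells at every stage
print(develop("undifferentiated", pool))  # -> "fully_specialized"

Even in this caricature, the preformationist tilt is visible: every developmental step is driven by an element transmitted from the parent, while the epigenetic side of the process is reduced to matching and consuming those elements.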

I believe Darwin’s formulation of inheritance qualifies as a second

hereditary synthesis primarily because, after Darwin, hereditary transmission was

no longer simply a metaphysical principle; it was a scientific problem. He shifted

the ontology of inheritance from a vaguely energetic one to a concrete material one.

This move returned attention to the propagation of idiosyncratic characters, and

insinuated a logical distinction between transmission and development. It is true,

as previous authors have stressed (Hodge, 1985; Winther, 2001b), that

transmission, for Darwin, was thoroughly dependent on the mechanics of

development. However, by introducing the idea that characters are transmitted as

preformed material particles, Darwin achieved a subtle but fundamental

realignment of the discursive field. In place of a tug-of-war between heredity and

variation, Darwin hypothesized physical entities bearing forms that could then be

caused to vary by environmental perturbations. As Winther (2001b) rightly notes,

this is still a long way from the modern interactionist idea that characters develop

through the mutual influence of heredity and environment. However, by bringing

the heredity-variation opposition down to earth and situating it within a material

process of transmission and development, Darwin posed a problem that would

occupy (at least marginally) a generation of cellular physiologists. The next major

threshold to be crossed in this process was to demonstrate the continuity of



nuclear structure as the physical basis of hereditary transmission.

The Third Hereditary Synthesis

During the first decades of the nineteenth century, most anatomists and

physiologists considered the various sorts of vital tissue to be an irreducibly living

substance. The emergence of cell theory in the 1830s and 1840s largely displaced

this vitalist physiology and helped to usher in a materialist perspective. The

observation, by the German micro-anatomists and cytologists, that the structure of

the nuclear material is maintained during reproduction, extended this materialist

outlook to the problem of hereditary transmission. The narrowing of heredity to

refer exclusively to the local transmission of physiological structure across

generations constitutes what I am calling the third hereditary synthesis. Many

historians have emphasized that heredity and development remained entangled

until the Mendelian Revolution (Bowler, 1989; Sandler & Sandler, 1985;

Winther, 2001a). This is true in the sense that heredity research continued to

attend to the physiological mechanisms through which hereditary structures

participate in embryological development. However, as Churchill (1987) shows,

the demonstration of a continuity of nuclear structure across generations and the

identification of that continuity with heredity gave a respectable empirical

foundation to Darwin’s atomistic conceptualization of heredity.

In the years after Darwin made the scientific world safe for evolutionary

speculation, neither natural selection nor heredity claimed the attention of most

evolutionary theorists. As Bowler (1996) has documented, the dominant



evolutionary research program during the last decades of the century was

evolutionary morphology. The fulfillment of Darwin’s project of

reconceptualizing heredity in materialist terms was eventually achieved,

appropriately enough, by cell theorists. In the late 1830s, botanist Matthias

Schleiden (1804-1881) and anatomist Theodor Schwann (1810-1882) articulated

the first general cell theory, which identified the cell as the fundamental structural

component of all plant and animal life. Schwann went further, and suggested that

the cell is the basic unit of organic function, as well as structure, though without

much in the way of evidence (Coleman, 1977, p. 29). By 1860, cell division had

been repeatedly observed, and it was widely accepted that, in the words of Rudolf

Virchow (1821-1902) “omnis cellula a cellula” (all cells arise from other cells).

By 1875, the basic structures of the cell had been identified, including the

nucleus, the chromosomes, and the cytoplasm, and a vigorous research program

was underway to explicate its functional and structural properties (Coleman,

1977, p. 30).

Given the monumental set of challenges facing nineteenth century

cytologists, it is perhaps not surprising that the problem of heredity did not

immediately attract much attention (Coleman, 1965, p. 127). Even in the late

1870s, when Oscar Hertwig and Hermann Fol hypothesized that the egg and

sperm unite in a way that affords a continuity of nuclear material across

generations, they did not venture an opinion on the significance of this fact for

heredity (pp. 139-149). This may have been simple caution. They were certainly

aware of the implications, given that Haeckel had speculated a full decade earlier

regarding the role of the nucleus in inheritance (p. 146).11

Two events, according to Coleman (1965), set the stage for a surge of

interest in the implications of nuclear continuity for heredity (p. 140). First, in

1883, Edouard van Beneden (1846-1912) observed the union of the chromosomes

during fertilization followed by their mitotic division during the first embryonic

cell division. This finally settled the long-running dispute surrounding the

mechanism of fertilization, confirming the prediction of Hertwig and Fol. In the

same year, Wilhelm Roux (1850-1924) published a discussion of nuclear division

that concurred with van Beneden’s observations, but gave them a novel

interpretation (Coleman, 1965, p. 141). Van Beneden had noted the equal

distribution of paternal and maternal chromosomes to the first daughter cells, but

had assumed that this division is quantitative. Roux suggested that the complexity

and relative inefficiency of nuclear division might be better explained if the

process were understood qualitatively (Sturtevant, 2001, p. 19). He suggested that

the chromosomal elements must be distributed qualitatively because they

represent hereditary characters. Yet, even with these insights, heredity remained a

secondary concern for Roux (Coleman, 1965, p. 142).

11 As Churchill (1987) points out, Haeckel did not believe the nucleus

to be continuous across cell division, so he cannot be regarded as

anticipating the theories of transmission that developed in the 1880s (p.

350n43).

A second event that, together with van Beneden’s observations, seems to

have finally ignited the interest of cytologists in heredity was the 1884 publication

by Carl Wilhelm von Nägeli (1817-1891) of Mechanisch-physiologische Theorie

der Abstammungslehre (Coleman, 1965). In an attempt to provide a strictly

mechanical account of evolution, Nägeli offered an elaborate speculation on the

physiology of heredity. Drawing on Johannes Müller’s (1801-1858) idea that the

germinal cells contain a morphological plan for the whole organism, Nägeli based

his notion of heredity on a similar assumption of totipotency (Duchesneau, 2007).

His most influential innovation was the hypothetical separation of the hereditary

material, which he called the idioplasm, from the rest of the protoplasm (Mayr,

1982, p. 671). The protoplasm was conceived as essentially passive and

unstructured, and was supposed to acquire its form during development through

the action of the complexly structured idioplasm (Coleman, 1965, p. 145). The

highly speculative nature of Nägeli’s ideas has been noted often, and it is true that

he had no empirical evidence for the physical and physiological processes he

posited. Nevertheless, his conceptual framework proved appealing, and his

dualism between active idioplasm and passive protoplasm became a touchstone

for conceptualizing heredity in light of new cytological discoveries (Coleman,

1965, p. 145).

Hertwig, Strasburger, and especially Weismann relied on Nägeli’s

framework to interpret their findings regarding nuclear continuity (Coleman,

1965, p. 145). For example, in order to explain their observation of the physical

isolation of the nuclear material, they appealed to Nägeli’s suggestion that the

hereditary material is functionally isolated. The functions of Nägeli’s idioplasm,

they inferred, must be carried out by the complexly structured chromosomes.

Weismann even adopted Nägeli’s terminology, though in place of Nägeli’s

idioplasm-protoplasm distinction, Weismann distinguished between two types

of idioplasm, the germ-plasm, which was supposed to be the hereditary material,

and the somatoplasm, which ambiguously referred to everything else (Winther,

2001a, p. 526). Hertwig took up Nägeli’s conjecture that the male and female

contributions to heredity must be equal and argued that this conjecture was

supported by van Beneden’s findings on the union of nuclear ingredients during

fertilization (Churchill, 1987, p. 350; Coleman, 1965, p. 147). Furthermore,

Hertwig insisted, again following Nägeli, that the effectiveness of the nuclear

material must derive from its complex organization. Therefore, in order to

preserve the organized structure of the hereditary material, fertilization must be a

morphological, rather than a physiological process (Coleman, 1965, p. 145).

By the late 1880s, there was a basic consensus regarding the structural

continuity between generations (Churchill, 1987, p. 355). The principle of nuclear

heredity, however, required that the continuity of nuclear structure encompass the

entire cycle from fertilization to fertilization. In order for hereditary qualities to be

incorporated in the gametes, the organizational properties of the hereditary

material would either have to survive cellular differentiation or be reconstituted

after development. As Churchill (1987) explains, there were a range of theories



for how such a continuity might be accomplished, physiologically. At one end of

the spectrum was Strasburger’s contention that, since the nuclei of the germ cells

must be altered during development, gamete production must involve a sort of

dedifferentiation by which the original nuclear organization might be restored. On

the other extreme was Weismann’s strict separation between soma and germ-line.

According to Churchill, for the Weismann-Roux Mosaic Theory,

embryological development entailed an unpacking and parceling out of hereditary

determinants. Somatic cells, according to this theory, would receive only that

portion of the nuclear material that corresponded to the particular cell-type it was

to become. This disaggregation of the hereditary structure effectively precluded

the sort of reversal that would be required to reconstitute the original nuclear

structure. Therefore, in order to guarantee its integrity through the life cycle,

Weismann was forced to insist on the sequestration of the germ-plasm. This

concern, rather than the non-inheritance of acquired characters, was Weismann’s

actual motivation for insisting that the germ-plasm was sequestered (Winther,

2001a).

Regardless of how the various theorists conceived the physiology of

continuity, the conceptual consequences were essentially equivalent. Churchill

(1987) identifies the recognition of nuclear continuity as a major turning point in

the history of heredity theory. The definition of heredity as the reliable

transmission of complex microscopic organization, which is replicated during cell

division and capable of persisting across multiple generations, created the



prospect of a more fundamental distinction between transmission and

development. With the introduction of the term Vererbung, Churchill argues, the

meaning of heredity was finally narrowed so that it referred solely to the

transmission of organized nuclear material across generations (cf. McLaughlin,

2007). As Churchill explains, it was the closing of the circle leading from

fertilization through gamete production and back to fertilization that at last

brought the transmission problem clearly into view. To be sure, there were

multiple solutions proposed for the physiological problems raised by this

framework. But all these solutions shared a common orientation to heredity that

stood in contrast to the developmental conception that had dominated biology for

the better part of the century. Indeed, according to Churchill, the emergence of the

transmission problem among German cytologists and embryologists constituted

the crucial transition in the all-important partitioning of transmission and

development into independent problems (p. 363).

This formulation of heredity as reliable transmission and continuity of

organized nuclear structure is a key feature of what I am calling the third

hereditary synthesis. There is another side to this conceptual reconfiguration,

however, which is the corresponding transformation in how embryological

development was understood. These conceptual changes coincided with a return

to biology of a thoroughgoing mechanism, as exemplified by Nägeli’s insistence

on a mechanical account of evolution and by Wilhelm Roux’s

Entwicklungsmechanik (developmental mechanics), which called for a strictly



causal embryology (Russell, 1916/1982, p. 317). A key consequence of this sort

of strict mechanism is the tendency to separate the source of form from the

process through which it is realized. We see this expressed in Nägeli’s conceptual

duality of idioplasm and protoplasm and in Weismann and Roux’s embryological

dualism, in which the hereditary germ-plasm was accorded the formative

responsibility for the development of diverse cells and organs (Coleman, 1965).

Perhaps not surprisingly, these conceptual developments engendered a renewed

debate about the explanatory applicability of preformation and epigenesis

(Maienschein, 2006). The key difference between the new preformationism and

its eighteenth century predecessor was that, for eighteenth century preexistence

theory, hereditary resemblance was an exception to the explanation of form,

whereas, for the new preformationism, heredity was the efficient cause of form.

The Fourth Hereditary Synthesis

From the 1880s onward, biological inheritance was increasingly regarded

as an important biological problem. The profile of heredity was raised by the

cytological advances discussed above, as well as by the statistical work of Francis

Galton and his followers. The revolutionary hereditary synthesis that followed the

rediscovery of Mendel’s work is one of the most well-documented events in the

history of science. There is no need to retell the now familiar events leading up to

the rediscovery of Mendel’s work, the subsequent controversies between

Mendelian mutationists and Darwinian biometricians, and their resolution through



the work of the Morgan school and the founders of population genetics.12 Thus, I

will limit myself to reviewing the main consequences of this synthesis.

The emergence of transmission genetics is typically heralded as the

moment, in the second decade of the twentieth century, when, thanks to the work

of the Morgan group, biologists were finally able clearly to distinguish between

transmission and development (Bowler, 1989; Sandler & Sandler, 1985). What

this distinction means, in the context of genetics, is not what it meant to Darwin

or to Weismann, both of whom were trying to understand heredity as a

phenomenon of individual reproduction. With the advent of transmission genetics,

the object of study was no longer the individual mechanisms responsible for

inheritance. This is clear enough for population genetics, where the object of

study is a statistical model of an evolving population. However, methodologies of

classical genetics also bypass the question of how heredity occurs. Classical

geneticists breed animals, especially Drosophila melanogaster, and study patterns

of character transmission. It is not the physiology of transmission that interests

them, however; it is the reliable correspondence, to the extent it can be

established, between genes and characters, or more precisely, between genetic

differences and character differences. The physiological basis for that

correspondence, which is a developmental question, is systematically excluded

12
A few excellent treatments of this history include but are by no

means limited to (Bowler, 1989; Gayon, 1998; Olby, 1985; Provine, 2001;

Sturtevant, 2001).

from the inquiry, except to the extent that phenomena such as linkage and

pleiotropy can be inferred.

Notwithstanding the black-boxing of all questions developmental, it is not

the case that transmission genetics is totally agnostic about the causes of heredity.

Indeed, the methodological focus on genetic differences has always entailed an

assumption that genes play a special causal role in heredity. This is not the place

to embark on a critique of this assumption. That will be taken up in the next

chapter. Here, I merely wish to note that, at its inception, transmission genetics

already involved an unresolved ambiguity concerning the ontology of its central

object.

Conclusion

This chapter has traced the interplay, throughout the nineteenth century,

between the development of biology and the concept of heredity, which, having

acquired its conceptual structure at the crossroads of natural history, agriculture,

and medicine, was finally given formal articulation early in the century. Biology

began the nineteenth century with a focus on form, organization, and

development. These are the questions that led natural philosophers to recognize

that living beings constitute a special category of natural objects. Although

morphology and embryology continued to have a profound influence on

biological research throughout the century, the epistemological clarity of

Cartesian categories ultimately proved irresistible. The impulse toward precisely

specifiable, mechanical-atomistic explanations created a conceptual need that the



heredity concept was able to fill. By emphasizing the transmission of form-

bearing particles, biological inheritance, beginning with the second hereditary

synthesis, offered what amounted to a new, more sophisticated, preformationism,

which, like its predecessor, permitted the difficult problems of form to be ignored

or deferred. The cytological discoveries of Weismann and his associates, followed

by Morgan et al.’s chromosome theory, seemed at last to provide a solution

(however incomplete) to the transmission problem. As I will argue in the next

chapter, however, the assumptions underlying this tidy framework are far from

settled. The dualism introduced by Nägeli and Weismann (Coleman, 1965), and

enacted by Morgan et al.'s definitional fiat (Amundson, 2005), is now being

challenged in a radical way on both empirical and conceptual grounds.



Chapter 5: Developmental Dualism and the Constructionist Challenge

In the first half of this dissertation, I examined the historical events

through which heredity emerged as both a general style of reasoning and a

definitive feature of biological explanation. As an explanation for the regeneration

of biological form, heredity helped to displace the vague epigenetic laws of early

nineteenth century morphology and reinstate the strict mechanism that

characterized eighteenth century preexistence theory. Gould (1977a) suggests that

the modern science of heredity (genetics) occupies a middle ground between the

extremes of preformation and epigenesis (p. 18). Oyama (1985), on the other

hand, points out that genetics shares with the older theories a basic conviction that

“form, whether miniaturized and encapsulated, recreated by a vitalistic force or

inscribed on a molecule,” preexists its realization in the developing organism (p.

25). Modern theories inherit from their discredited antecedents a reliance on

special formative causes that are not themselves explained. The information-

oriented metaphors that populate contemporary biology, for all their technical

sophistication, operate within a discursive field that is tacitly dualistic and

preformationist. The rhetorical force of the information discourse in biology rests

on the metaphysical distinction between matter and form and the related notion

that form can and must exist independent of and prior to its instantiation in

material systems.

According to the standard model of development, information is

transmitted from parents to offspring as encoded instructions that direct the



ontogeny of the organism. In its most common formulation, the DNA is said to

contain a genetic program for development (and behavior). In the first part of this

chapter, I evaluate the conventional, gene-centric model of development and

describe some of the prevailing arguments for and against it. After reviewing

these controversies, I present a more radical challenge to the preformationism that

is implicit in much of the conventional reasoning about development. Drawing on

the developmental systems approach, exemplified in the work of Susan Oyama

(1985, 2000b), I argue that, as long as explanations for ontogeny presuppose some

sort of information that preexists the actual transformations that make up a life

cycle, these transformations will be misunderstood. As Oyama (2000b) points out,

the genetic program notion not only adds nothing of value to developmental

explanations, “it usually imports extraneous and misleading implications” (p. 52).

From a developmental systems perspective, if we desire genuine answers to

developmental questions, we must not resort to codes, representations, or

directing agencies. Rather, we must make a serious attempt to understand how,

through cascades of contingent interactions, complex systems construct and

reconstruct organic form at all scales and levels of organization.

Grappling with Information

The historical study undertaken in the preceding three chapters provides a

broad background against which to evaluate the place of biological inheritance in

current theories of ontogenetic development. As I have explained, heredity came

to be defined exclusively in terms of genetic transmission in the early twentieth



century, as other possible mechanisms of inheritance were systematically

marginalized (Sapp, 1987). In the second half of the century, the emphasis shifted

from genes, as physical objects, to genes as bearers of information, instructions

chemically encoded in DNA. This shift from a material to an informational

conception of heredity was the key move in the development of a fully modern

preformationism, which marginalizes the problems of formation without positing

a literal miniature or an unsupportable determinism.

Conceptualizing genetic information

Information serves as the conceptual thread that weaves together three

different levels of biological discourse: molecular, developmental, and

evolutionary. Appeals to biological information, however, conceal a basic

ambiguity at the core of biological theory. Although the Mendelian chromosome

theory of heredity contributed to the integration of Darwinism and Mendelism, the

eventual elaboration of the details of genetic structure by Watson, et al. led to an

unexpected conceptual tension between evolutionary and molecular genetics.

With the description of DNA's structure, the term gene came to refer both to the

Mendelian units of inheritance, modeled by population genetics, and to the

molecular sequences that form the genetic code, at which point the two entities

began to diverge.13 The conceptual tension resulting from this divergence is given

13
On the co-existence of multiple gene concepts, some molecular

and others evolutionary, see Kitcher (1992), Moss (2001) and Griffiths and

Neumann-Held (1999).

an apparent resolution by the rhetorical appeal to the everyday concept of

information. The genetic program concept, in particular, provides an apparent

bridge between the semantic metaphors of molecular biology and the transmission

metaphors of evolutionary biology.

John Maynard Smith (1986) traces the idea of genetic information back to

Weismann’s analogy between hereditary determinants and telegraph messages

(see also 1997, 2000a). Notwithstanding the precocity of his metaphor, however,

Weismann had no direct influence on the information concept that has come to

dominate biology (Amundson, 2005). Indeed, the information concept did not

gain a purchase in biological thought until it was introduced into molecular

biology in the 1950s (Kay, 1993). Kay recounts how information entered the

discourse in molecular biology after the structure of DNA was described in 1953.

The idea of a genetic code, which had been provocatively suggested by Erwin

Schrödinger in the 1940s, came in handy for explaining the complex relationship

between the base sequences in the DNA molecule and the amino acid sequences

that make up polypeptides. This turned out to be a magnificently fruitful language

for describing the intricate processes of protein synthesis, and everyone was

rightfully impressed by its apparent universality. The entire technical vocabulary

of molecular biology was soon saturated with references drawn from natural

language, such as coding, transcription, translation, messages, editing,

proofreading, etc. For many, there has never been a need to question whether the

genetic code is real (whatever that might mean).



The metaphorical vocabulary of molecular biology, though compelling, is

not based on the technical conception of information provided by mathematical

information theory. Mathematical information theory was developed by

communication theorists and cyberneticists during the 1940s, and the classic

account is due to Claude Shannon (1948) and Shannon and Warren Weaver

(1949). According to Shannon and Weaver’s definition, information has been

transmitted when a change in the state of one system, the sender, results in a

signal that causes a second system, the receiver, to adopt one of several

alternative states. Another architect of the technical information concept was

cyberneticist Norbert Wiener. According to the definition offered by Wiener

(1950/1967), the quantity of information, as opposed to noise, transmitted by a

signal can be measured in units, each of which permits “a single decision between

equally probable alternatives” (p. 10). The crucial thing to recognize about these

technical definitions is that they tell us how to measure information quantitatively,

but they can tell us nothing whatsoever about meaning. As Wiener notes, in terms

of statistical mechanics (the parent discipline of information theory), information

can be understood simply as the negative of entropy. Whereas entropy is a measure

of the disorganization of a system, with maximum entropy constituting its most

statistically probable state, information is a way to quantify the system’s degree of

organization, or the improbability of its current state (p. 11). The amount of

information communicated by a signal, then, is a function of its unlikelihood. Last

but not least, Gregory Bateson (2000) offers a succinct definition of information

as “any difference which makes a difference in some later event” (p. 381).
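To make the quantitative content of these definitions explicit, they can be restated in modern notation (this formalization is my own gloss, not a quotation from Shannon, Weaver, or Wiener). The information associated with the selection of an alternative that occurs with probability p is

    I = -\log_2 p \quad \text{(in bits)},

so that Wiener's unit, a single decision between two equally probable alternatives (p = 1/2), carries exactly one bit. The average information of a source whose alternatives occur with probabilities p_1, \ldots, p_n is the entropy

    H = -\sum_{i=1}^{n} p_i \log_2 p_i ,

which is greatest when every alternative is equally probable. Both expressions quantify improbability, and hence degree of organization; neither says anything about what a signal means.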

If genetic information were merely understood in terms of information

theory, the notion would be relatively uncontroversial, though far less rhetorically

protean. However, as I mentioned, the discourse of molecular genetics is based on

an analogy with natural language that far exceeds what would be warranted by a

purely statistical conception of information. This situation has been further

complicated by the amalgamation of molecular, developmental, and evolutionary

notions of information under the robust conceptual umbrella of the genetic

program for development. As Keller (2001) points out, the metaphor of the

genetic program was coined in 1961, independently by molecular biologists Jacob

and Monod (1961) and by evolutionary biologist Ernst Mayr (1961). As Oyama

(1985) observes, this dual lineage has provided the genetic program metaphor

with a multifaceted ambiguity, an ambiguity which may be the key to its enduring

popularity. Geneticists, in the early part of the century, had ostensibly relied on

the differential gene concept, introduced by E. B. Wilson, which was based on the

correlation of genetic differences with character differences (Schwartz, 2000).

Even so, there was always a tacit presumption that some sort of substantial causal

link must obtain between genotype and phenotype. Indeed, it was assumed from

the early days of genetics that the proper role of developmental biology was to

elucidate the mechanisms through which genes produce characters (Sapp, 1987).

As molecular biology began to reveal the dynamic complexity of epigenetic

regulation, the genetic program metaphor came to the rescue to reaffirm the

principle that genes are the primary causes of characters.14

Molecular biology inherited classical genetics’ preoccupation with

heredity and the assumption that genes somehow cause characters. As Watson and

Crick (1953) famously wrote, in one of their earliest papers on the structure of

DNA, “it follows that in a long molecule many different permutations are

possible, and it therefore seems likely that the precise sequence of the bases is the

code which carries the genetic information” (p. 965). Of course, suggesting that

heredity can be attributed to the complex organization of DNA was still a long

way from explaining how that information is able to produce such widely

disparate outcomes in the ontogeny of a multicellular organism. It did, however,

reinforce the basic presumption of genetic control embodied in both classical and

molecular genetics. Jacob and Monod’s discovery that genes do not simply act,

but are differentially activated, might have constituted a challenge to this

framework. However, as Keller (2000a) points out, the soon-to-be Nobel laureates

chose to frame their discovery in a way that reinforced the dominant outlook.

Rather than mechanisms of gene regulation, Jacob and Monod chose to call them

genetic regulatory mechanisms and to refer to the effector and promoter regions

as regulatory genes. Furthermore, they interpreted this as indicating the existence

14
Keller (2000b) notes that a rival notion of a developmental

program was being proposed by various authors in the mid-1960s, which

located the program beyond the nucleus, in the regulatory mechanisms

residing in the cytoplasm.



of a “coordinated program of protein synthesis and the means of controlling its

execution” (quoted in Keller, 2000a, p. 80).

In The Logic of Life, Jacob fully elaborated the genetic program paradigm,

suggesting that the program idea resolves a key paradox of biology, namely, how

it is that an individual organism can, without vital forces or final causes, develop

according to a seemingly pre-established design and behave with apparent

purpose. The answer is a familiar one. Development and behavior are supervised

by a chemical system capable of self-replication. As a consequence of

reproductive competition between individuals over many thousands of

generations, this system came to have an exceedingly precise structure, which

serves as both memory and formative faculty for the contemporary organism. In

this way, all the apparent teleology of living systems is cashed out in a single

currency – the telos of reproductive success. It is an apparently elegant and

comprehensive solution, which ties together molecular, evolutionary, and

developmental biology into one neat package. According to Mayr (1982), the

genetic program also provides a criterion that distinguishes between living

organisms and inanimate objects (p. 629). Claims like this should always arouse

suspicion that there are hidden assumptions at work. In this case, it is the vitalism

and preformationism implicit in the genetic program metaphor.

The key principle connecting the telos of reproduction with the design

encoded in the genetic program is, of course, information. In elaborating the

genetic program metaphor, Jacob (1970/1976) relies explicitly on Wiener’s



(1950/1967) description of information. He wrote, for example, that “information

measures freedom of choice, and thus the improbability of the message; but it is

unaware of the semantic content” (p. 251). Also following Wiener, Jacob wrote

that “the isomorphism of entropy and information establishes a link between the

two forms of power: the power to do and the power to direct what is done” (p.

251).15 At the same time, however, Jacob seems to have been aware that,

understood in this way, information theory provides no justification for singling

out the genes as uniquely causal. He wrote, for example, that “any material

structure can be compared to a message” (p. 251). By the time he got to

comparing the developmental influences of the genes and the environment,

however, he seems to have forgotten this nuance, writing that “environment does

not give instructions,” though it does give “specific influences” (p. 293).

Disputing genetic determinism

Debates about the applicability and appropriateness of the information

concept for biology have taken on greater urgency over the last decade, due to

advances in developmental genetics and the emergence of evolutionary

developmental biology (Godfrey-Smith, 2000a, 2007; Griffiths, 2001; Jablonka,

2002; Johnston, 1987; Keller, 2000b; Maynard Smith, 2000a; Oyama, 2000b;

15
Notice how this framing preserves the principle of gene action.

Since the early days of classical genetics, the language of gene action had

been used to attribute causal agency to genes vis-à-vis phenotypes (see

Keller, 2000a).

Sarkar, 2000; Sterelny, 2000). But the issue is by no means a new one. Although

the language of genetic programs and blueprints was greeted with enthusiasm

when it arose in the 1960s, simplistic genetic determinism was soon tempered by

the recognition that genes alone cannot explain ontogeny. A consensus was

eventually reached that the phenotype is a product of interaction between

genotype and environment (Sterelny & Griffiths, 1999). On the other hand, as

Lewontin (1982, 1983b, 2000) has long argued, in many cases, this interactionist

consensus has simply resulted in a more sophisticated form of determinism.

Though environmental influences are factored in, primary responsibility for

development is still attributed to the genetic program, which is seen as controlling

or directing the process. And, as Oyama (1985) emphasizes, a mistaken belief still

prevails that genes and environment can be analyzed in terms of their independent

causal contributions.

According to Oyama (1985), the privileged status of genetic causation is

sustained by a double standard that attributes the function of informing

developmental outcomes to the genes and allows a merely supporting or

interfering role for the environment. To counter this double standard, Oyama

insists on a parity of reasoning, which simply requires that the same standards of

reasoning be consistently and rigorously applied to all the factors influencing

development. If a particular standard is used to justify a special role for genes, it

is only reasonable to hold non-genetic developmental influences to the same

standard. When this principle is applied scrupulously, it becomes far more



difficult to defend the privileged status for genetic influences. Indeed, according

to Oyama (2000b), it often turns out that “what is presented as a justification for

giving the gene special status is a consequence of already having done so” (p. 3).

Oyama (1985) adapts Gregory Bateson’s definition of information, writing

that “information is a difference that makes a difference, and what it ‘does’ and

what it means is thus dependent on what is already in place and what alternatives

are distinguished" (p. 3). Applying this standard uniformly clarifies the

inconsistency evident in Jacob’s (1970/1976) attribution of information to genetic

but not environmental organization. A DNA sequence is informational, according

to this framing, because a change in the sequence can be causally correlated with

a change in development. This is essentially the differential gene concept of

classical genetics, except that the emphasis is on process rather than outcome.

Stated slightly differently, genetic differences can be considered informational

only with respect to specific ontogenetic events in specific spatiotemporal

contexts. In precisely the same sense, a contingent product of a previous

interaction, a change in ambient temperature, or a change in the concentration of

an intrauterine chemical, all qualify as potential sources of information.

Moreover, all sources of differences that affect ontogeny at any point in the life

cycle count as information in precisely this sense. These include conditions that

are traditionally attributed to the external environment. According to Oyama

(2000b), differences, genetic and non-genetic, are “‘informational’ not by

‘carrying’ context-independent messages about phenotypes, but by having an



impact on ontogenetic processes—by making a difference” (p. 67).

As a consequence of this line of argument, which has been reiterated by

various authors (Godfrey-Smith, 2000a; Griffiths & Gray, 1994; Johnston, 1987;

Sterelny, Smith, & Dickison, 1996), it is widely acknowledged that the technical

definition of information does not justify the asymmetry between genetic and

non-genetic causes that underpins the genetic program concept. Rather than

abandon the genetic program paradigm, however, a number of theorists simply

reject the restrictions entailed by the information theory and insist that scientific

practices depend on attributing intentionality to genetic information (Gray, 2001).

Maynard Smith (2000a), for example, in a review and defense of information

usage in biology, affirms that “the notion of information as it is used in biology . .

. implies intentionality” (p. 193). This, he notes, goes beyond even the semantic

account due to Dretske (1982). Dretske actually sticks close to traditional

information theory and attributes semantic content, or meaning, on the basis of the

capacity of a message to reduce uncertainty for some receiver. This is inadequate

to ground the typical usage of genetic information because, in Dretske’s account,

meaning depends crucially on the context of the receiver.16 Intentional meaning,

on the other hand, is a function of the sender’s context. The attribution of

16
Dretske is not always explicit about this, but it is an unavoidable

consequence of the fact that the knowledge gain of a receiver only makes sense

given the whole communication situation and, especially, the receiver's

prior knowledge.

intentionality is crucial, therefore, if one wishes to claim that genetic information

specifies what is meant to happen in development.

According to Maynard Smith (2000a), the standard account of genetic

information entails the principle of intentionality in that it is understood to

represent the phenotype in a way that environmental influences do not. As

Sterelny and Griffiths (1999) explain, the test of whether an information concept

is intentional is to ask whether it makes sense to conceive of the information

being misread (p 104). On one hand, though thick, dark clouds may be typically

associated with rain, they cannot be said to represent rain; they are not dark and

dense because they convey information about rain. On the other hand, the

information contained in ordinary human communication, such as a weather

forecast, is assumed to be intentional. A prediction of rain is treated as having a

meaning that is independent of how it is interpreted or misinterpreted by a

receiver. A non-native language speaker may fail to understand the

weatherperson’s prediction of rain, but the information conveyed still means that

rain is expected. The discourse on genetic information does indeed seem to rely

on this sort of intentional conception of information. It is an entirely different

matter whether this position is coherent or justifiable.

Maynard Smith (2000a) defends this understanding of genetic information

on conceptual and scientific grounds. First, drawing on principles adopted from

semiotics, he suggests that genetic information should be understood as symbolic,

rather than indexical or iconic. Where an indexical relation is based on a causal



correspondence, such as the relation between clouds and rain discussed above,

and an iconic relation is based on resemblance between a sign and what it

signifies, a symbol, such as a word, represents its object purely by convention.

According to Maynard Smith, the symbolic nature of the genetic code is

exemplified by Monod’s principle of gratuity. Because the function of a protein is

underdetermined by the particular chemical interactions that produce it, Monod

describes the relation between codon and protein function as gratuitous. For

Monod (1970/1971), this means that “everything is possible” (p. 77). Maynard

Smith extends this principle to the relation between genes and phenotypes,

generally, arguing that “it is the symbolic nature of molecular biology that makes

possible an indefinitely large number of biological forms” (p. 185) (see also

Maynard Smith & Szathmáry, 1995). More to the point, he claims that the

symbolic nature of genetic information shows that it is genuinely meaningful (see

also Stegmann, 2004).

Second, Maynard Smith (2000a) argues that the latest discoveries in

developmental biology only make sense in light of a semantic conception of

genetic signaling. He notes how explanations in developmental biology are

increasingly being framed in terms of program concepts such as signaling. “The

picture that is emerging," he writes, "is one of a complex hierarchy of genes

regulating the activity of other genes” by “sending signals” (p. 187). He discusses

the example of the eyeless gene. In 1915, researchers observed for the first time

that a mutation in a particular gene would interfere with normal eye development

in Drosophila. They named this gene eyeless or simply ey, and it is now known as

Pax-6. More recently, researchers have discovered that Pax-6 is actually a

regulatory gene, and that this same gene regulates eye development throughout

the animal kingdom (Halder, Callaerts, & Gehring, 1995). Indeed, with their

knowledge of precisely how this gene functions, researchers were able, by

“targeted expression of the ey complementary DNA,” to induce eye development

in unusual locations on the fruit fly’s body, such as the leg or antenna.17 These

and similar findings are demonstrating how certain sets of regulatory genes,

which play a central role in the development of general morphological features,

have been conserved over evolutionary time (Carroll, 2005). For Maynard Smith,

these new findings demonstrate that developmental biology needs to understand

these signals in terms of their meaning, as symbolic instructions.

Finally, for Maynard Smith (2000a), the notion of intentional genetic

information is ultimately justified by the appeal to natural selection. The meaning

of a particular genotype, according to this approach, is simply the adapted

phenotype, which it has been programmed by natural selection to produce. This

notion is adopted from the teleological or teleosemantic account of intentionality

developed by philosophers of mind such as Millikan (1984) and Papineau (1987).

17
Maynard Smith claims that the mouse gene was “transferred” to

the fruit fly. The wording of the study's authors, "targeted expression of

complementary DNA,” suggests that the report of the gene being

“transferred” is slightly misleading.



On the teleosemantic view of genetic information, because the genome is

understood to be adapted for its causal role in development, it can be said to

represent normal developmental outcomes. Because it is acceptable from a neo-

Darwinian standpoint to refer to the proper function of an adaptation, it is

considered justifiable to say that the proper function of genetic information is to

represent the phenotype, and thus to carry intentional information about it.

Moreover, Maynard Smith maintains that this intentionality criterion is met only

by DNA because only DNA acquires its structure and function as a result of

natural selection.

Each of the justifications offered by Maynard Smith (2000a) for the

efficacy and legitimacy of treating the genome as a unique source of intentional

information has been challenged, beginning with the argument that the genetic

code is uniquely arbitrary and therefore symbolic. As Sterelny (2000) argues,

Maynard Smith’s own discussion seems to acknowledge, in the case of the lac

gene, that cytoplasmic factors can also meet the arbitrariness criterion. In

addition, arbitrariness alone does not warrant the inference to symbolic

representation, since it seems sufficient simply to note that these relations are

historically contingent. As Godfrey-Smith (2000a) emphasizes, in applying this

arbitrariness criterion to other biochemical and ecological interactions (hormones

and their receptors, patterns of environmental differences), it becomes evident that

the distinction between the arbitrary and the necessary is perhaps a relative one,

which, in many instances, may simply depend on the distance between cause and

effect (see also Stegmann, 2004).

Maynard Smith’s (2000b) claim about the widespread reliance on semiotic

terminology in developmental biology is certainly accurate. Contemporary

developmental biology relies heavily on molecular techniques and technologies,

so it comes as no surprise that it also still makes use of molecular biology’s

semiotics-laden vocabulary. Indeed, Godfrey-Smith (2000a) notes that biology

textbooks treat the presence of coded instructions in DNA as established scientific

fact. However, Maynard Smith’s discussion of the language of developmental

genetics suggests a different moral. Besides references to coding and instruction,

which are unquestionably ubiquitous, non-semiotic concepts, such as signaling

and regulation, are also ubiquitous. Yet, Maynard Smith does not apply the same

standard of ubiquity and heuristic usefulness to the language of cybernetic

regulation that he uses to justify the language of semiotics. In a reply to

commentaries on his original paper, he writes that the term regulatory gene is

unfortunate because it implies a cybernetic rather than a semantic sense of signal,

asserting that the former is “wrong” (Maynard Smith, 2000b, p. 216). On the

contrary, the difference between semiotic and cybernetic vocabularies, in this

context, is that the latter can be justified by more than mere heuristic utility. The

reason the cybernetic language of regulation has become so widespread in

developmental biology is that it actually describes the interactive networks that

characterize development at the molecular level. The effort to understand the

dynamics of these signaling networks is now one of the primary activities of



developmental genetics researchers (Gilbert, 2000; Morange, 2000). While the

value of treating genetic signals as meaningful symbols is unclear, what is clear,

as Maynard Smith concedes, is that the paradigm of regulation is driving

important research. Unfortunately, for those who wish to justify the treatment of

DNA as a unique repository of intentional genetic information, the cybernetic

framework is of little help.

Finally, Maynard Smith’s (2000a) appeal to natural selection to justify

treating the genome as a unique bearer of intentional information turns out to be

his weakest argument. Sterelny (2000) easily shows that the criterion of

adaptation, which in his view is the correct one, does not single out the genome.

According to the extended replicator theory developed by Sterelny, Smith, and

Dickison (1996), anything that plays a causal role in development and is a product

of natural selection can qualify as a bearer of intentional information.

Cytoplasmic factors as well as obligatory symbionts are bearers of intentional

information, he suggests, since they have the precise form they do as a

consequence of being adapted for their role in development. As has been pointed

out by a number of authors, the reasoning relied upon by Maynard Smith to

justify the unique role of the genes is actually circular. Rather than explaining

why the causes of biological form must be encoded in genes in order to be

inherited, it simply presupposes that only genes are inherited (Keller, 2000b;

Lehrman, 1953; Oyama, 1985; Sterelny, 2000).

There is substantial reason to doubt commonplace assumptions about



genetic programs and representations. Indeed, there appears to be no principled

justification for the causal asymmetry between genetic and non-genetic

influences. Information, understood in the Shannon-Weaver (1949) sense, does

not pick out genes because this concept only requires systematic covariation,

which applies to any number of non-genetic factors. Indeed, there is a growing

appreciation among developmental biologists of the significance of epigenetic and

even ecological factors in specifying developmental outcomes (Gilbert, 2001).

Meanwhile, the appeal to intentionality also fails to justify the privileging of

genetic information, since the teleofunctional conception of information can

apparently be applied to a range of behavioral and ecological factors (Jablonka &

Lamb, 2005; Laland et al., 2001; Sterelny et al., 1996).18

Development as Construction: Developmental Systems Theory and the

Interactionist Consensus

As the previous section shows, the attempt to relieve the tension between

evolutionary and molecular genetics by way of a genetic information concept that

assumes that genotypes encode representations of phenotypes has been less than

18
One interesting response to the teleofunctional extension of

inheritance is due to Stegmann (2005). Stegmann acknowledges that

intentional properties cannot be limited to DNA, but suggests that some sort

of reasonable limits are appropriate. Stegmann is correct that this is a

slippery slope. However, following DST, it would be preferable simply to

abandon the notion of intentional information.



satisfactory. The genetic information concept has been called into question by

theorists who doubt that it can be defined in a way that supports the notion of

developmental instructions, uniquely stored in DNA. The mathematical

information concept fails because it can be applied to any signal in which there is

a systematic covariation between source and receiver. The intentional information

concept fails because, defined in terms of natural selection, it can be applied to

any adapted developmental influence. In addition to these considerations,

however, it is important to recognize that there are further conceptual problems

with the appeal to intentional information, which are not solved simply by

decoupling the concept from the genome.

Susan Oyama’s (1985, 2000b) challenge to the developmental double

standard is not motivated simply by a grudge against genetic causation. The real

target of her critique, rather, is the fundamental dualism embodied in the

assumption that there are two kinds of causes operating in ontogeny. It is the

metaphysical dichotomy between formative and contingent causation that must be

overcome if we are to take developmental questions seriously. From this

perspective, it seems evident that the teleofunctional conception of developmental

information replicates the dualism and preformationism of the genetic program

concept. The problem with the teleosemantic approach is that it tends to imply

that information has some sort of real existence, independent of the organized

material systems of which it is a global property. As long as we maintain the

notion that development is directed by representational information, we remain



trapped within a discourse in which biological characteristics are conceived, in

some sense, as preformed, and developmental explanation remains caught in a

dualistic framework that opposes essential, formative causes to contingent,

supporting, or interfering ones. The only way out of this conceptual quagmire

may be a radical reformulation of the conceptual landscape, one which goes

beyond the dualism and preformationism of the interactionist consensus toward a

truly constructive interactionism. In the remainder of this chapter, I discuss the

effort of developmental systems theory to articulate such an approach.

Lehrman and developmental psychobiology

While the dichotomy between innate and acquired characteristics goes

back to the late nineteenth century, the nature vs. nurture opposition goes back at

least to the connate-congenital distinction developed by French physicians early

in that century. A coherent alternative to the reliance on these classic dichotomies

for the study of behavior development was articulated by Daniel Lehrman (1953),

one of the pivotal intellectual forebears of developmental systems theory. While

Lehrman was certainly not the first (non-Lamarckian) theorist to question the

innate-acquired dichotomy, he was one of the first to express a distinctively

systems view of development (even if he did not explicitly use systems language)

(Johnston, 2001). Of prime significance is Lehrman’s (1953) critique of Konrad

Lorenz’s theory of instinctive behavior. Lorenz had argued that certain species-

typical animal behaviors must be considered instinctive because they develop

reliably under conditions in which they could not have been learned. In his

famous “isolation experiments,” Lorenz monitored the development of individual

animals, which were not permitted any type of interaction with conspecifics. He

observed that under these restrictive conditions, certain stereotypical behaviors

would nevertheless emerge at predictable stages of maturation. Based on these

findings, he concluded that, since he had ruled out the possibility of these

behaviors being learned, they must be understood as innate.19 While Lorenz’s

defense of innateness should be appreciated in light of the excessive

environmentalism of many of his behaviorist contemporaries, as Lehrman points

out, the recourse to internal causes to counter an over-reliance on external causes

remains within the same dichotomized conceptual space.

Unsatisfied with Lorenz’s nativism as well as with standard behaviorism,

Lehrman (1953) argued that explaining a behavior by identifying it as either

innate or acquired fails because this passes directly over the question of how that

behavior actually develops. To demonstrate the inadequacy of Lorenz’s

explanatory approach, Lehrman presented examples of the sort of research that

attempts to explain behavioral development, rather than simply labeling it as

innate. For instance, he discussed some experiments that examine how certain

species-typical maternal behaviors develop in the female rat. Mother rats typically

shred and pile available material to build nests to which they later carry their

newborn young. That these nest-building and offspring-retrieving behaviors occur

very reliably, even for mother rats raised in isolation, suggests, according to

19
I have oversimplified Lorenz’s view for brevity.

Lorenz’s criteria, that they are innate. It turns out, however, that the actual

development of these behaviors is complex and contingent in a way that

controverts the assumption that they simply undergo maturation. For one thing, it

seems that there are other, apparently unrelated, behavioral routines that are

essential to the development of the focal behaviors. For example, mother rats who

were given powdered food and otherwise deprived of the opportunity to carry

things during their maturation often failed to develop normal nest-building and

offspring-retrieval behaviors. Mother rats who were prevented from licking their

own genitalia during pregnancy not only failed to retrieve their young, but they

were also more likely to eat them. In addition, Lehrman pointed to experiments

that show a correlation between the development of nest-building behavior and

environmental temperature. Rats kept in warmer conditions appear less likely to

develop a range of normal maternal behaviors, presumably because they are

simply less active. These normal activities, such as carrying and manipulating

food, are evidently essential to the ontogeny of maternal behaviors.

Lehrman’s (1953) critique of Lorenz marked the beginning of a tradition

in developmental psychobiology that refuses to rely on dichotomous and

preformationist conceptions to account for the development of animal behaviors.

Gilbert Gottlieb’s research in behavioral embryology exemplifies this tradition.

His work on the development of species-typical behavior in ducklings revealed

the complex interdependence between structural and functional aspects of

embryogeny through which structural maturation is influenced by feedback from



the functional activities of still developing structures. In a classic series of

experiments, Gottlieb (1975a, 1975b, 1975c, 1978, 1979) demonstrated the

importance of prenatal experience and self-stimulation in newly hatched

ducklings’ ability to recognize species-specific maternal calls. It turns out that the

development of this apparently innate capacity depends on the ducklings’ prenatal

exposure to the vocalizations of conspecifics, and, if these are lacking, on their

own vocalizations. These vocalizations facilitate the formation of the auditory

structures that enable the ducklings to recognize the maternal assembly call

immediately upon hatching.

It seems that whenever developmental questions are actually asked, the

answers reveal complex interdependencies that belie any presumption of linear

maturation. The old conception of behavioral development as genes-->structure-->function

is replaced by an understanding that these relations are reciprocal and

complexly interdependent.20 Some of these observations had previously been used

by behaviorists to counter nativist claims regarding the innateness or inheritance

of certain behaviors (Johnston, 2001; Richards, 1987). Lehrman (1953) and

Gottlieb (1979), however, exemplify the developmental systems approach in their

20
Just to be clear, this is not a suggestion that these interactions change

the genes. Yet, they do alter the genes' effects, which, as far as

developmental outcomes are concerned, is equivalent. As Oyama (1985)

writes, “if information could not flow inward [alter the effects of genes] . . .

development would be impossible” (p. 87).



explicit rejection of both behaviorist and nativist standpoints. Both Lehrman and

Gottlieb insisted that behaviors are neither innate nor acquired, but must actually

develop through cascades of complexly contingent interactions.

Lewontin and dialectical biology

Another important influence on developmental systems theory and a

crucial resource for the constructionist reformulation of developmental concepts

is the dialectical approach articulated by evolutionary geneticist Richard

Lewontin and his comrades. In a set of now classic papers, Lewontin (1982,

1983b) presented a radical conceptual critique of the standard framing of

evolutionary and developmental theories. He argued that biologists are misled by

the metaphors and reified elements of evolutionary theory to reach unwarranted

conclusions about the relationship between development and evolution. Lewontin

singled out, as a key moment in the history of biology, the reconciliation of

biological thought with the “epistemological meta-structure” of nineteenth

century science. This was a great step forward for biology, but, like all epigenetic

transformations, it contained, at its inception, the seeds of its own eventual

obsolescence. For the dialectical perspective, this is a general principle: “the

conditions which make possible the coming into being of a state of the system are

abolished by that state” (Lewontin, 2000, p. 60).

The epochal reframing of biology was achieved by Mendelism and

Darwinism, according to Lewontin (1983b), due to the clear severance of internal

and external causes entailed by these theories. Darwin’s explanation of evolution,



for example, relied (more or less) exclusively on external factors as the causes for

organic form. For classic Darwinism, the organism is essentially a passive object,

while Nature, as the source of the formative forces of evolution, takes on the role

of subject. In contradistinction to this, Mendelism (I would say heredity, including

Pangenesis; see Chapter 4) explains development in terms of the formative

influence of internal factors. Once again the organism is a passive object, while,

in this case, internal, heritable causes play the role of subject, directing ontogeny.

As Lewontin explained, by framing the organism as a mere product of internal

and external causes, the vague holism of early nineteenth century organicism was

overcome, and a new era of mechanistic research was inaugurated. Although this

dichotomous framework has enabled enormous theoretical and empirical

progress, Lewontin suggested that it has now outlived its usefulness and become

an obstacle for those studying ontogeny and cognition (biology’s hard problems),

where mechanistic approaches are less effective.

Lewontin (1982, 1983b) identified two key metaphors that perpetuate

misunderstanding in biology. The first metaphor, development, dates from the

earliest consideration of the problem of individual ontogeny (or generation). The

word develop is literally the opposite of envelop; it refers to the gradual unfolding

or revealing of a form that is enclosed or hidden, but already present. This

metaphor encourages researchers to think about developmental outcomes as

somehow existing prior to their ontogeny (whether as spermatic homunculi or

encoded programs). The second metaphor, adaptation, is due to Darwin (and the

British natural theology tradition on which he drew). According to the adaptation

metaphor, evolution is a design process in which the functional characteristics of

organisms are created by natural selection as it gradually optimizes the fitness of

populations to preexisting environments. The organism is treated as a set of

solutions to the problems posed by the environment. These two metaphors

complement one another and help to sustain the dichotomy between internal and

external causes that continues to confound thinking about enduring biological

problems.

As an antidote to this habit of viewing the organism as a purely passive

object, determined by internal and external causes, Lewontin (1983b) suggested

replacing the metaphors of ontogenetic development and phylogenetic adaptation

with the metaphor of construction. The construction metaphor dispenses with the

view of the organism as a mere object, but does not replace it with an equally

erroneous view of the organism as autonomous subject. This would merely be to

leap from one side of the preformationist dichotomy to the other: from atomism

and mechanism to holism and vitalism. Rather, the construction metaphor, for

Lewontin, entails the crucial insight that the organism is simultaneously subject

and object of biological processes. Drawing on the Marxist intellectual tradition,

he emphasized the dialectical nature of biological organization. Organism and

environment must be seen, according to this view, as inseparable, reciprocally

codetermined entities. In the final chapter, I return to the implications of the

dialectical approach for a constructionist integration of development and



evolution.

Intentionality and preformationism

Oyama (1985) has drawn on the insights and findings of earlier

psychobiologists, dialectical biologists and others to formulate a comprehensive

theoretical perspective called developmental systems theory (DST). According to

Oyama, empirical studies of development have consistently supported a view of

ontogeny as a cascade of constructive interactions occurring among complexly

interdependent and contingent influences. Yet, despite the fact that conceptual

dichotomies such as nature vs. nurture and biology vs. culture are routinely

pronounced dead, they keep returning as theorists grope for acceptable ways to

express their abiding conviction that form somehow preexists its appearance in

ontogeny (and phylogeny and cognition). The irrepressibility of this dualism is, in

turn, rendered all but inevitable by the particular species of mechanistic thought

that has dominated biology since the nineteenth century. DST is an attempt to

develop a theoretical framework that purges this crude Cartesianism, and replaces

the dichotomous reasoning that relies on independent internal and external causes,

with a rich and robust materialism that is able to conceptualize the constructive

dynamics of biological systems without resorting to preexisting representations or

quasi-cognitive agents.

One of the principal problems with the dualism implicit in contemporary

biology’s approach to developmental problems is the way in which the privileging

of genetic causes reinforces the association of the genetic with the inevitable.

Taken literally, this would constitute crude genetic determinism, but of course no

one is a genetic determinist in this sense. Nevertheless, as we saw in the first part

of the chapter, even supposedly enlightened interactionists attribute intentionality

to genetic information, implying that the genes carry a definite message, and

deviations from the “intended” outcome are said to indicate a misreading of the

message. Indeed, the rarely questioned assumption that development has a correct

outcome is one of the main justifications for claiming that genetic information is

intentional. In other words, even though the phenotype is considered a product of

gene-environment interactions, the genes are thought to contain literal instructions

for phenotypes. These instructions may require certain conditions in order to be

correctly executed, but there is only one proper meaning. Dawkins’ (1986)

rendering is characteristically colorful, but reflects the status quo perspective:

It is raining DNA outside. . . . DNA whose coded characters spell out
specific instructions for building willow trees that will shed a new
generation of downy seeds. Those fluffy specks are, literally, spreading
instructions for making themselves. It is raining instructions out there; it's
raining programs; it's raining tree-growing, fluff-spreading, algorithms.
That is not a metaphor, it is the plain truth. [emphasis added] It couldn't
be any plainer if it were raining floppy discs. (p. 111)

We should not be surprised that critics cry determinist. However, the more fitting

epithet is probably preformationist since, though developmental outcomes are

supposed to preexist as coded information, their expression is recognized as

contingent.

It is important to recognize that the arguments presented in the first part of

the chapter, challenging the privileging of the genes, are of limited help here.

Those arguments merely call into question the simplistic conception of genetic

information, leaving intact the appeal to intentionality. Although extended

replicator theory accepts some of the implications of explanatory parity, it still

divides developmental influences into those that are adapted for their

developmental role and those that are not. In this way, the basic dualism is

preserved in the pattern of developmental explanation; there are still two different

kinds of developmental causes, one formative, and the other supporting or

interfering. Whether the species-typical phenotype is represented by genes, by

cytoplasm, or by obligatory symbionts, the appeal to intentional information still

implies the preexistence of developmental outcomes.

When gene-centric biologists are directly challenged on this issue, they

typically take refuge in the differential gene concept (Gray, 2001). Dawkins

(1976), for example, points out that the "gene for" locution is merely shorthand for

the idea that a difference in that particular gene will make a difference in the

phenotype, all else being equal. In this sense, genes are to be understood as a

source, not of form, but of variation. And the frequency of a trait in a population

provides the justification for concluding that it is the normal, and thus intended,

outcome of development. Notwithstanding the metaphysical leap from the

statistically normal to the intentional, Oyama (2000b) suggests that this tendency

to associate genetic with developmentally normal may be due to a confusion

between population and individual level analyses (pp. 38-39). For example, the

norm of reaction, which is a population-level concept, if transposed onto the



individual, becomes genetic potential or predisposition. It is perfectly legitimate,

of course, to move from population to individual with a well-defined statistical

model. If a group of people is betting on the outcome of a coin flip, there is no

problem with assigning to each individual a 50% probability of guessing

correctly. The norm of reaction, however, is based on the relative stability, in a

given population, of developmental outcomes that entail many complex

interactions at the individual level; it is a statistical description, not an individual

prescription. As Oyama points out, the actual ontogenetic possibilities of any

individual organism are not defined or inherently constrained by the reaction

norm; they are a product of the developing organism in its ecological context, a

context which it participates in creating and specifying through its activities.

There is no meaningful way to specify an individual’s range of genetic potential

independent of, or in advance of, the developmental process itself.

It is still possible for the defender of intended developmental outcomes to

defend the appeal to intentionality exclusively in terms of population-level

analysis. Here one might claim that the criterion for designating a developmental

outcome as genetically intended is its heritability. Heritability is a statistical

concept employed in population genetics to measure the amount of phenotypic

variance, in a particular population at a particular time, which is due to genetic

variance among individuals. In other words, it measures the amount to which

differences in a character can be attributed to differences in genotypes. If a

character is found to be substantially heritable, it is said to be to this degree



genetically determined (in the source-of-variance sense), or if one is an extended

replicator theorist, then perhaps epigenetically determined. Either way,

heritability is supposed to establish that a developmental outcome is the result of

an inherited representation.
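For clarity, the definition can be written out in the standard notation of quantitative genetics (my gloss, not a formula drawn from the authors discussed here). Broad-sense heritability is the ratio

    H^2 = \frac{V_G}{V_P} = \frac{V_G}{V_G + V_E},

where V_P is the total phenotypic variance of the character in a particular population in a particular range of environments, V_G is the portion of that variance associated with genotypic differences, and V_E is the remainder (interaction and covariance terms are ignored in this simplest form). Written this way, the limitations discussed below are easy to see: the ratio is indexed to one population and one distribution of environments, and it partitions variation, not development.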

The problem with this tactic, as Lewontin (1974) has pointed out, is that a

heritability measurement for a given environment tells us nothing about a

character’s heritability in a different environment. Furthermore, it tells us nothing

about the relative importance of genetic and environmental causes in the

development of the character. It is therefore a mistake to conclude from a

heritability score that a character is to that extent under genetic control,

represented by intentional information, or in any way the result of something that

is literally inherited. This latter qualification is a consequence of the fact that

heritability itself may vary significantly across different populations and

environments. In addition, because the methodology is based on variance,

characters that are essentially uniform within a species (such as mammals having

four limbs) show a heritability of zero because there is no phenotypic variance for

this trait (for a lucid and easy to follow discussion, see Moore, 2002, pp. 41-47).
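A toy calculation (the numbers are invented purely for illustration) makes Lewontin's point concrete. Suppose that in one environment the variance components for a character are V_G = 3 and V_E = 1, giving H^2 = 3/4; if the same genotypes are reared across a more variable set of environments in which V_E = 9, then H^2 = 3/12 = 1/4. The heritability has changed dramatically while nothing about the developmental pathway of any individual has changed, and nothing in either figure indicates how the character could or could not be altered by intervention.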

We should not expect the association between heritability and

developmental inevitability to fade away any time soon, regardless of the cogency

of the arguments against it. Assumptions regarding the intransigence of inherited

traits are as old as the inheritance concept itself. Recall from Chapter 3 that the

eighteenth century medical theorists who appealed to hereditary transmission to



explain patterns of disease in families often used their inability to treat a disease

as a criterion for identifying it as hereditary. Although the origin of this criterion

reminds us of its contingency, it has nevertheless become part of the very

definition of heredity. Recall as well that the principle of hereditary predisposition

was also developed at that time, partly as a way to impose some regularity on the

substantial irregularity exhibited by hereditary diseases. Finally it is important to

recognize that the assumption that so-called normal developmental outcomes are,

to some degree, inevitable or impervious to intervention has profound social and

political consequences. Oyama (1985) sums up this situation eloquently:

On the basis of such causal assumptions we may forego action because


failure seems assured before the fact, we may convey our expectations of
the inevitable to beings exquisitely ready to incorporate those expectations
into their conceptions of themselves, thus increasing the probability that
the prediction will be accurate, or we may conduct ourselves individually
and collectively in ways that we believe universal and decreed by the very
nature of life, and in so doing, fail to seek alternative ways of living
together. On the other hand, we may, by seeing persons simply as products
of conditions, undermine the very capacities of self-determination that are
necessary to alter those conditions. By accepting a simple-minded notion
of learning, we may assume that because someone does not respond to
some fairly short-lived, trivial or gross aspect of the environment, he or
she is beyond influence. Or we may believe that whatever is learned can
be changed. We do equal violence to the complexity of human life, that is,
by overestimating its rigidity and by underestimating the subtlety and
structured conservatism of its modes of change. We do most violence of
all by seeing persons as mere effects of internal and environmental causes,
rather than as active beings that to some extent define their own
possibilities. (p. 80)

Development as semi-reliable happenstance

The discussion, up to this point, has focused on the appeal to intentional

information to justify the conventional view of development as, in some sense,



instructed. Extended replicator theory notwithstanding, for defenders of the

received view, genetic information is typically assumed to take the form of

instructions encoded in DNA. The notion of genetic instructions, in turn, supports

an image of ontogeny, central to developmental thought at least since Aristotle, as

an orderly sequence of events directed toward a particular outcome. As I

mentioned, this is the literal meaning of the word development. Thus, the

inference from regularity of outcome to regularity of process is rarely questioned,

and this presumed regularity tacitly supports the assumption that the outcome is

instructed (even though, as Oyama [personal communication, April 25, 2009]

notes, “we wouldn’t follow the same inferential path for a row of dominos”). As it

turns out, even at the molecular level, where one might expect to find relatively

simple and straightforward mechanisms, things are far from orderly and

programmatic. Indeed, a close examination of cellular dynamics at the molecular

level reveals processes that are almost comical in their Rube Goldberg-like

convoluted complicatedness (Oyama, 2000b, pp. 120-123).

It is not unusual for discussions of molecular genetics to begin with a

simplified explanation of protein synthesis that exemplifies the central dogma,

according to which, as Crick put it, “DNA makes RNA, RNA makes proteins, and

proteins make us” (quoted in Keller, 2000a, p. 54). Though plenty of nuances

have been added, the straightforward account of transcription and translation that

follows remains the paradigm for the role of DNA in the cell.21 To begin with, the

DNA molecule consists of two strands, each made up of a long sequence of four

types of nucleotide bases. The four varieties of nucleotide bases, A, C, G, and T,

line up in base pairs, due to chemical affinity, so that nucleotide base A is always

bonded to T and C to G. The two strands that make up the double helix

complement each other in this way along its entire length. This complementary

base pair bonding is not only essential to the structure of the double stranded

DNA molecule, but also to the process of transcription, which is the first phase of

protein synthesis. During the transcription phase, the DNA double helix unwinds

at the site of a gene (defined as a sequence of nucleotide triplets on the DNA

strand). Free individual nucleotides, which are complements of the DNA

nucleotides (except U is substituted for T), attach to the DNA strand at the

location of the gene sequence, forming a messenger (mRNA) strand. At this point,

the mRNA strand (the primary transcript), which is the chemical complement of

the gene, consists of some number of nucleotide triplets called codons. It is

transported out of the nucleus to the cytoplasm where it is adopted by a ribosome

that will handle the translation process.

The cytoplasm, meanwhile, is populated by a variety of transfer RNAs

(tRNAs). Each tRNA consists of two conjoined parts: a nucleotide triplet and a specific amino acid. The various types of amino acids are always joined with the same triplets to form various tRNAs. [Footnote 21: The following explanation is drawn mostly from Keller (2000a, pp. 59-66).] The translation process translates the

mRNA strand into a chain of amino acids by way of the affinity between the

codons in the mRNA and the triplets constituting one part of the tRNA. As a

result of the tRNAs becoming attached to the mRNA strand, the amino acids

joined with the tRNAs are chained together to form a polypeptide. Once the

polypeptide is complete, it is released and then folds itself into a particular three-

dimensional shape to become a functioning protein molecule. The linchpin of the

translation process is the apparently arbitrary but unique and invariant association

between specific nucleotide triplets and specific amino acids joined together in

the tRNA. This association constitutes the so-called genetic code, which

ultimately links the structure of the DNA molecule to the specific amino acid

chains that form the various proteins.
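The tidiness of this textbook account can be conveyed in a few lines of code. The sketch below is a deliberately naive rendering, using a made-up template sequence and only four entries from the genetic code; it treats transcription and translation as nothing more than mechanical symbol processing, which is exactly how the standard story invites us to imagine them.

    # A naive rendering of the textbook scheme; the sequence and codon table
    # entries are illustrative, not drawn from any particular gene.
    DNA_TO_RNA = {"A": "U", "T": "A", "C": "G", "G": "C"}      # template strand -> mRNA
    CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

    def transcribe(template_strand):
        """Build the mRNA complementary to a DNA template strand."""
        return "".join(DNA_TO_RNA[base] for base in template_strand)

    def translate(mrna):
        """Read the mRNA three bases at a time and chain the amino acids."""
        peptide = []
        for i in range(0, len(mrna) - 2, 3):
            amino_acid = CODON_TABLE.get(mrna[i:i + 3], "?")
            if amino_acid == "STOP":
                break
            peptide.append(amino_acid)
        return "-".join(peptide)

    mrna = transcribe("TACAAACCGATT")     # -> "AUGUUUGGCUAA"
    print(mrna, translate(mrna))          # -> Met-Phe-Gly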

This story is so simple and elegant. If only things were really this

straightforward. But they are not. First, there is all that junk DNA, the non-coding

sequences that are spread all through the coding sequences. Imagine hgsd trying

njut to asd read a sentence shd with random junk dju characters v inserted

between qqq the real words. According to Beurton (2000), this genes-in-pieces

phenomenon is typical for eukaryotic organisms (organisms with membrane-

bound nuclei). To deal with this situation, the primary transcript mRNA, which is

a sequence of nucleotides directly transcribed from the DNA, must be edited to

produce a mature transcript. The unusable portions (the introns) have to be

removed and the usable portions (the exons) spliced back together. But that’s only

the beginning. The precise way the exons are spliced together is not fixed, but

depends on many factors from the type and history of the cell to the organism’s

overall developmental and ecological context (see Gilbert, 2001). Furthermore, it

turns out there is no absolute distinction between an exon and an intron. This

depends on the reading frame, which can be shifted by one or more nucleotides to

produce an entirely different sequence of triplets. Besides alternative splicing and

shifting reading frames, there are other ways for a mature transcript to be

produced, including the splicing together of exons from different primary

transcripts or the insertion of bases not even coded for by the DNA. Finally, once

the protein synthesis is complete, the protein’s function can still be altered by

allostery, a process in which effector molecules bind to proteins, changing their

three-dimensional structure and, consequently, their function.
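The same toy format can be used to indicate why the simple picture fails. In the sketch below (again with invented sequences), a single primary transcript yields different mature transcripts depending on which exons are retained, and shifting the reading frame by one nucleotide turns the very same mature transcript into a different series of codons; which of these actually occurs is settled by cellular context, not by the DNA sequence alone.

    # Invented sequences; the point is only that one primary transcript
    # underdetermines the mature transcript and the codon series.
    exons = {"e1": "AUGGCU", "e2": "UUUGAA", "e3": "GGCCAU"}
    introns = ["GUAAGUAG", "GUGCGUAG"]                    # spliced out, never translated

    primary_transcript = exons["e1"] + introns[0] + exons["e2"] + introns[1] + exons["e3"]
    print(primary_transcript)

    def splice(exon_names):
        """Join a context-dependent selection of exons into a mature transcript."""
        return "".join(exons[name] for name in exon_names)

    full = splice(["e1", "e2", "e3"])                     # one possible mature transcript
    alternative = splice(["e1", "e3"])                    # alternative splicing: another product

    codons_frame_0 = [full[i:i + 3] for i in range(0, len(full) - 2, 3)]
    codons_frame_1 = [full[i:i + 3] for i in range(1, len(full) - 2, 3)]   # shifted frame
    print(full, alternative)
    print(codons_frame_0)
    print(codons_frame_1)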

It seems clear enough that we are not going to make sense of this

complexity by appealing to programs and encoded information. Indeed, it is

precisely the attempt to understand these dynamics as simply additional

complications and anomalies, which can somehow be incorporated into the

standard DNA->RNA->Protein model of gene expression, that makes them seem

so disorderly. As Griffiths and Stotz (2007) point out, “the relationship between

DNA and gene product is indirect and mediated to an extent that was never

anticipated when the basic mechanisms of transcription, RNA processing and

translation were clarified in the 1960s” (p. 97). The metaphors of coding

sequences, transcripts, messengers, editing, etc. have been stretched to the point

where the primary concept of coded information has become all but unintelligible.

Rarely is the need for an alternative conceptual framework so starkly apparent.

Fortunately, a superior framework already exists and is in widespread use.

It is the cybernetic notion of regulation that began with the operon model of Jacob

and Monod (1961). A half century after the discovery of the humble lac operon,

the dynamic regulation model stands poised to displace the standard model of

gene action. Moreover, this emphasis on dynamic regulation and distributed

control is already entailed by the developmental systems approach. For DST, as

for the cybernetic regulation model, cellular and organismic structure and

function are not controlled by instructions encoded in a master molecule, but

emerge dynamically as global properties of hierarchically organized, complex

systems. Genes do not play a privileged informational role in the cell; they do not

cause particular proteins to be synthesized, and they do not contain

representations of gene products. The dynamic molecular networks responsible

for cellular regulation and ontogeny can be more comprehensively explained in

terms of the constructive interactions that characterize developmental systems.
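A two-switch caricature of the operon logic that began this line of thinking (a drastic simplification, offered only to display the basic point) shows why this is a different kind of explanation: whether the structural genes are read at all is settled by the regulatory state of the cell, not by anything resident in the coding sequence itself.

    # A coarse caricature of lac operon regulation: expression is a joint
    # outcome of regulatory context, not a property of the genes alone.
    def lac_genes_expressed(lactose_present, glucose_present):
        repressor_bound = not lactose_present          # lactose releases the repressor
        activation_strong = not glucose_present        # scarce glucose strengthens activation (cAMP-CAP)
        return (not repressor_bound) and activation_strong

    for lactose in (False, True):
        for glucose in (False, True):
            print(lactose, glucose, lac_genes_expressed(lactose, glucose))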

Development as construction

It follows from the developmental systems critique that if we want to ask

and answer questions about the construction and reconstruction of organic form,

the appeal to intentional information is unjustifiable, unhelpful, and pernicious. It

cannot be justified because it relies on an asymmetry between formative and

contingent causes, which is founded on dubious methodological and metaphysical



premises. It is unhelpful because when we attribute formation to formative causes,

whether vitalistic forces, preformed homunculi, or transmitted representations, we

fail to address the developmental questions that motivated our inquiry. It is

pernicious because it endorses a dichotomy between the inherited and the

environmental, the biological and the social, which is false.

By rejecting the conception of developmental information as coded

instructions or programs, the developmental systems approach unequivocally

abandons the premise that the developmental process is somehow directed to

produce outcomes that, in any sense, preexist the process itself. This is a crucial

move because, as Oyama (2000b) reminds us, “whenever a program is invoked, a

developmental question is being ignored, or worse, being given a spurious

answer” (pp. 62-63). The rejection of informational preformationism, the

systematic commitment to explanatory symmetry and causal interdependence, and

the insistence on developmental answers for developmental questions together

lead to an unraveling of the genetic programming paradigm, laying bare the core

ambiguity that stitches together different levels of biological discourse. What

begins to emerge, according to Oyama (1985), is:

a conception of a developmental system, not as the reading off of a


preexisting code, but as a complex of interacting influences, some inside
the organism’s skin, some external to it, and including its ecological niche
in all its spatial and temporal aspects, many of which are typically passed
on in reproduction either because they are in some way tied to the
organism’s (or its conspecifics’) activities or characteristics or because
they are stable features of the general environment. (p. 34)

The developmental systems approach thus entails a radical reconceptualization of



biological inheritance. In place of the hereditary transmission of traits, or

programs/blueprints/recipes for traits, the inheritance of a developmental system

is simply the sum of material conditions available to it.

The constructionist approach outlined here fundamentally undermines the

traditional opposition between inherited traits, which are the result of encoded

information, and acquired traits, which result from local developmental

contingencies. For one thing, all the developmental interactants that are available

to successive life cycles are, on this view, inherited. Some of these resources may

be more structurally stable or reliably present than others, but this is an empirical,

rather than a metaphysical question. The reader may well object, in the spirit of

Dawkins (1976) and Hull (1988), that only genetic information is actively

replicated, while the availability of other resources is contingent at best. But this

is demonstrably false. As Griffiths and Gray (1994, 1997) show, there are many

extra-genetic developmental resources whose availability to succeeding

generations is due to the species-typical activities of prior generations.

Developmental systems theory fundamentally challenges not only the

traditional conception of inheritance as genetic transmission, but also the entire

notion that development should be explained in terms of two distinct kinds of

developmental causation. As long as some causes are active, while others are

reactive, as long as some provide information and others simply read or interpret

it, the basic dichotomy between internal and external causes is sustained, and

developmental dualism is preserved. The essential (essentialist) error is the belief



that particular elements within a complex system have a special role of

transmitting or storing information. Indeed, one of the defining tendencies of the

larger preformationist worldview I am critiquing here is the persistent and

problematic habit of attempting to explain the functional organization of a

complex system by locating an organizing principle or power within some

elementary component (single or multiple, atomic or holistic), which is logically

or temporally prior to the system. This tactic does not explain anything; it merely

places the real causes of form beyond explanation, which is indeed the essence of

the preformationist impulse. If systems thinking teaches us anything, it is that we

must treat processes and relationships as primary. To the extent that a system has

functional components, these must be seen to exist, as such, only with respect to

the system as a whole.

Developmental systems theory shares with dialectical biology an

attentiveness to the interpenetration of organism and environment. Indeed, the

developmental system is defined to encompass all of the interacting influences

that come into play in the construction of living processes. The environment,

when invoked, refers to the organism-relevant environment, defined as all and

only those differences that make a difference to the organism. Specific questions

may be asked about what counts as a developmental interactant and what aspects

of the system are reliably present in successive life cycles, but these are empirical

questions. Moreover, answers to these questions do not automatically count as

answers to questions about the production of transgenerational stability and



change. As Oyama (2000b) argues, “heredity is not an explanation of this process,

but a statement of that which must be explained” (p. 71). Development can

produce stable outcomes despite significant genetic or other changes. This is the

core insight behind Waddington’s (1957) epigenetic landscape. As Oyama

(2000b) reminds us, “ontogenetic means are inherited; phenotypes are

constructed” (p. 71). Thus, rather than the expression of transmitted form, for

DST, ontogeny is understood in terms of particular cascades of highly

contingent, yet constructive interactions. And instead of effectively disappearing

at the intersection of deterministic causes, the organism becomes the key point of

reference, the focus of analysis as both cause and effect of ontogenetic

construction.

It is perhaps needless to say that the developmental system must be

understood in terms of constant transformation and ontogeny as nothing other

than the process by which a system transforms the structure it finds distributed in

space and time into a more or less functionally integrated unity. The state and

activities of the system are changing from moment to moment, partly due to

autonomous changes affecting the interactants and partly as a result of the

system’s own particular history. Even the influence of the genome depends on the

contingent history of the system, a fact which is essential to cellular

differentiation. This is why it is important to emphasize the dynamic life cycle,

and to reject the opposition of genotype and environment as independent sources

of form. Organisms are active and their activities help to determine the conditions

of their ongoing ontogeny. Moreover, an emphasis on developmental systems and

the life cycles they construct helps to counter the traditional preoccupation with

the “mature” phenotype as the end product of ontogeny. For the developmental

systems approach, ontogeny ends at death.

Conclusion

This chapter has attempted to show that, with respect to a genuine

understanding of developmental processes, concepts such as genetic programs,

developmental instructions, and intentional information are both unhelpful and

misleading. They fail to capture the interactive, constructive, and contingent

aspects of ontogeny because they treat development as an expression of

intentional information or an execution of encoded instructions. Developmental

systems are self-organized, interactive networks. They are complex systems, the

structures and functions of which are not an end goal, but simply the dynamic

organization that defines, at a moment in time, a system in constant

transformation. Development is not controlled by a central agency embodied in a

master molecule. Control is reciprocal and distributed, emerging at particular

moments within systems that consist of dynamic hierarchies of regulatory

interactions. Developmental processes derive their celebrated robustness not from

programs, but from the multiple ways in which complex processes are constrained

and entrained as a consequence of their embeddedness in complex systems of

feedback and regulation. Moreover, these hierarchies of regulatory networks are

not arbitrarily restricted to the organism proper, but extend beyond it, such that

the developmental system can be seen, for certain purposes, to include

interactions at the levels of ecology, geology, and even cosmology.



Chapter 6: Evolutionary Dualism and the Constructionist Challenge

The previous chapter makes the case that the interactionist consensus is

inadequate because it preserves the logic of preformationism in the form of

genetic instructions and intentional genetic information. The assumption that

ontogeny is in some sense controlled or constrained by genetic information

prejudges developmental questions in a way that engenders facile and misleading

answers (Oyama, 1985, 2000b). DST’s parity of reasoning calls into question the

double standard embodied in conventional interactionism and, as a result, rejects

the arguments used to privilege particular causes (most often genetic ones) in

developmental explanation. The rejection of causal dualism may seem reasonable

enough with respect to the study of development, but it presents special problems

for neo-Darwinian evolutionary theory. The assumption that evolution can be

explained without reference to the details of development is built into the

theoretical structure of neo-Darwinism, which traditionally defines evolution in

terms of changes in the frequencies of genes in a population’s gene pool. An

evolutionary dualism is explicitly embodied in Dawkins’ (1976, 1982) attempt to

settle the unit of selection problem by distinguishing between replicators and

vehicles and in Hull’s (1980, 1988) reformulation of that distinction in terms of

replicators and interactors. In both cases, the definition of the replicator as the

unit of selection ignores its precise role in development while tacitly presupposing

its causal preeminence.

This chapter examines the attempt by Paul Griffiths and Russell Gray

(1994) to formulate a developmental systems conception of the unit of evolution.

To clarify the issues at stake in Griffiths and Gray’s model, I contrast it with the

“less radical” alternative promoted by Kim Sterelny, Kelly Smith, and Michael

Dickison (1996). Griffiths and Gray argue that the replicator/interactor framework

depends on and reinforces a dichotomous view of development that assumes a

unique causal role for genes as units of inheritance. They pursue the implications

of causal parity and extended inheritance for evolutionary explanation, ultimately

claiming that, to overcome the dichotomies that pervade development and

evolution, we must abandon the replicator/interactor distinction and adopt a

radically extended model of inheritance, which they claim is implied by DST.

Sterelny et al. counter that the privileging of genes that worries DST proponents

can be corrected by way of an extended replicator theory that maintains the

replicator/interactor distinction, while extending the selfish gene (or gene’s-eye-

view) conception of evolution to include a broader class of potentially selfish

replicators. I ultimately agree with the position taken by Griffiths and Gray that

the designation of a privileged class of replicators cannot be supported given our

current understanding of developmental causality. In this chapter, I present

Griffiths and Gray’s defense of their DST-based unit of evolution. In the next

chapter, I provide a critical examination of their model.

Extending Inheritance

Evolutionary dualism and developmental dualism are two sides of a single

coin. The distinction between nature and nurture, while ostensibly rejected in

theorizing development, is tacitly preserved at the heart of the orthodox

evolutionary theory. Darwin framed the theory of natural selection in terms of the

advantages conferred by various inherited characters. To account for inheritance,

he postulated the transmission of pangenetic particles carrying accidental

character variations from parents to offspring (1883). Although Darwin was not

an absolutist regarding the distinction between inherited and acquired characters,

the logic of the Darwinian revolution and the source of its power was its implicit

separation of internal and external causes (Lewontin, 1983a). Hidden internal

causes accounted for variations and their inheritance, while the selection of fitter

varieties accounted for the transmutation of species. This separation was only

strengthened by the introduction of transmission genetics. In the theoretical

framework of the modern synthesis, the nature-nurture opposition is sufficiently

uncontroversial that, despite the interactionist consensus, Maynard Smith (2000a)

can still state, unequivocally, that “biologists draw a distinction between two

sorts of causal chain, genetic and environmental, or ‘nature’ and ‘nurture’...

evolutionary changes are changes in nature not nurture” (p. 189).

The framing of evolutionary theory exclusively in terms of changes in the

genetic constitution of populations is now being routinely questioned in

evolutionary as well as developmental theory. In order to encompass more than

just genes in evolution, some theorists propose extended models of inheritance

that recognize a wider assortment of inherited factors. Odling-Smee, Laland, and

Feldman (Laland et al., 2001; Odling-Smee et al., 2003) have developed a model of evolution that they call niche construction. They have been influenced by

Lewontin (1982, 1983b) to develop a constructionist approach to evolution that

does not rely on a strict separation of internal and external causes. In order to

address organism-environment interpenetration, as Lewontin recommends,

Odling-Smee et al. identify two channels for hereditary information: genetic and

ecological. The genetic channel is conceived, conventionally, in terms of genes

transmitting hereditary information from parents to offspring. The ecological

channel, meanwhile, permits the transmission of a whole complement of

ecological factors arising from the niche constructing activities of ancestral

organisms. These include habitats chosen by ancestors, alterations to the local

environment such as nests, dams, and burrows, and larger effects such as the

alteration of the Earth’s atmospheric balance, first, and most drastically, by the

photosynthesis of cyanobacteria (but lately by homo industrialus). Because niche

construction processes alter the functional relationships between organisms and

their surroundings, they contribute substantially to the selective pressures

encountered by descendant populations, thereby affecting evolution. Although

provocative, the authors’ account of ecological inheritance has been criticized as

overly broad (Sterelny, 2005), while their account of genetic inheritance has been criticized as overly

narrow (Griffiths, 2005). Finding themselves unexpectedly in the crossfire of an

ongoing debate about the mechanisms of inheritance (Griffiths & Gray, 1994,

1997, 2001; Sterelny, 2001; Sterelny et al., 1996), the authors essentially

capitulate. “To be honest,” Laland et al. (2005) write, “we have given insufficient

consideration to ‘inheritance mechanisms’ . . . Our goal was primarily to make the

case that different forms of ecological inheritance are evolutionarily

consequential” (p. 43).
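The basic feedback that makes ecological inheritance evolutionarily consequential can nevertheless be caricatured in a few lines. The sketch below is not one of the authors' models; it simply iterates a standard one-locus selection update alongside a resource variable that is enriched by the very organisms whose fitness depends on it, so that part of the selection pressure on each generation is constructed by the activities of the one before.

    # A stripped-down caricature (invented parameters, not the authors' models):
    # allele A both benefits from and helps build up an ecological resource.
    def next_generation(p, resource, build_rate=0.3, decay=0.1):
        w_a = 1.0 + resource                       # fitness of the niche-constructing allele
        w_b = 1.0                                  # fitness of the alternative allele
        mean_fitness = p * w_a + (1 - p) * w_b
        p_next = p * w_a / mean_fitness            # standard selection update
        resource_next = (1 - decay) * resource + build_rate * p_next   # ecologically "inherited" state
        return p_next, resource_next

    p, resource = 0.05, 0.0
    for _ in range(25):
        p, resource = next_generation(p, resource)
    print(round(p, 3), round(resource, 3))         # both have risen together from their starting values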

A more technically detailed effort to extend inheritance is found in the

work of Jablonka and Lamb (2001, 2005). Jablonka and Lamb identify four

dimensions for the transmission of hereditary information. In addition to the

conventional inheritance of information encoded in DNA, they suggest, we must

also recognize the evolutionary significance of epigenetic, behavioral, and

symbolic inheritance systems. Epigenetic inheritance systems transmit the non-

DNA structural variations in cell lineages. These mechanisms, such as chromatin

marking and RNA interference appear capable of transmitting gene silencing

patterns not only between cells, but also between generations. Behavioral

inheritance systems include a variety of mechanisms for transmitting behavior

patterns in animal lineages. For example, the provision of behavior-influencing substances, such as food traces received through mother’s milk, influences the later

development of food preferences. Behavioral inheritance also, of course, includes

imitative learning, where the offspring directly copy the behaviors of

conspecifics, as well as non-imitative, socially mediated learning, in which

offspring learn indirectly by observing the experiences of conspecifics in social

situations. Behavioral inheritance is obviously exclusive to animals and it remains

unclear to what extent these systems have evolutionary significance. Finally, there

is the symbolic inheritance system. Symbolic inheritance is unique to humans,



since it entails the transmission of culture by way of language and other symbolic

means of communication.

The four-dimensional model suggested by Jablonka and Lamb (2001,

2005) is valuable for explicating the complexities of so-called cytoplasmic

inheritance. However, the dimensions beyond the first two, while perhaps

important for the behavioral evolution of more complex species, are clearly not

biologically universal. Gray (1992) and Griffiths and Gray (1994, 1997, 2001)

argue for a more general, and, at the same time, more radical model of extended

inheritance. Developing parity-of-reasoning arguments originated by Oyama

(1985), Griffiths and Gray reiterate that the developmental role of the genes is not

different in kind from other developmental influences. Set forth more formally,

the parity thesis (Griffiths & Knight, 1998) states that, although the specific

contributions of all developmental causes are in some sense unique, “the

empirical differences between the role of DNA and that of other developmental

resources does not justify the metaphysical [italics added] distinctions currently

built upon them” (Griffiths & Gray, 2001, p. 195). Griffiths and Gray attempt to

make the ultimate consequences of parity explicit, arguing that we must radically

expand inheritance to include all the developmental influences that are reliably

present in each generation. Gray (1992) cites, for example, cytoplasmic factors,

gut microbes, maternal chemical traces, social traditions, and ecological

associations (p. 179).

Sterelny et al. (1996) respond to Griffiths and Gray (1994) with an



alternative model of extended inheritance they call extended replicator theory

(ERT). They acknowledge DST’s parity-of-reasoning arguments and offer a less

“radical” revision of the unit of selection, which they claim addresses the

substantive problems associated with genetic determinism and genic selectionism,

while preserving the traditional replicator/interactor distinction. The essence of

the ERT position is that the genome is not functionally unique, but it is

functionally special. Like conventional theory, ERT considers the genome to be

special in light of its capacity to transmit intentional information. The difference

is that ERT theorists do not consider this capacity to be exclusive to the genes. As

I mentioned in Chapter 5, for ERT, anything that plays a causal role in

development and has that role as a result of natural selection can qualify as a

bearer of intentional information. The remainder of this chapter presents Griffiths

and Gray’s reformulation of the unit of selection and unit of inheritance problems,

framed in terms of their long-running scholarly exchange with the proponents of

ERT.

Rethinking Inheritance and Evolution

It is one thing to insist that a genuine understanding of development

requires a rejection of dichotomous and preformationist thinking. It is quite

another matter to argue that evolutionary theory also must reject its hard-won

distinction between inherited and acquired characters. The latter is generally

treated as essential to the basic Darwinian pattern of explanation, which, as I

mentioned, traditionally depends on a fundamental distinction between internal



and external causes. For the variational approach (see Lewontin, 1983a)

embodied in Darwinism, natural selection acts like an external force, the effects of

which are strictly independent of whatever internal causes are responsible for

producing variation. And transgressions of this independence have traditionally

raised the specter of Lamarckism. Bridging these two causal domains, meanwhile,

is the gene, which is simultaneously the effect of external causes and the source of

internal ones. Natural selection (along with inheritance) causes some genes to

increase in frequency, and their presence in individuals causes fitter phenotypes to

be reconstructed. The parity thesis, according to Griffiths and Gray (1994), forces

us not only to reject developmental dualism, but also to reject the evolutionary

dualism embodied in the replicator/interactor distinction. They argue that, as a

consequence of accepting explanatory symmetry in development, the gene should

be replaced as the unit of selection by “total developmental processes or life

cycles” (p. 278).

Causal parity and evolutionary explanation

Griffiths and Gray (1994) frame their discussion of DST and evolution in

terms of the radical extension of inheritance entailed by the parity thesis.

Orthodox approaches account for intergenerational resemblances with respect to

characters and life histories by appealing to an inherited genetic program that

interacts with the environment to direct or constrain morphological and behavioral

development. From the developmental systems perspective, on the other hand, the

appeal to a transmitted program or plan explains nothing about how parent-



offspring similarity is produced. As Griffiths and Gray (1997) put it,

“explanations of ontogeny in terms of a program or organizing center are

promissory notes redeemable against developmental biology” (p. 474). In place of

a unitary program, therefore, Griffiths and Gray insist that we must consider a

much larger set of developmental interactants as inherited. This set includes,

according to Griffiths and Gray (1994), all the “suitably structured resources” that

interact to reconstruct the life cycle (p. 285). Only in this way, they insist, will it

become possible to take development seriously within an evolutionary

framework.

Griffiths and Gray’s (1994) radical extension of inheritance has drawn

objections from theorists with a more orthodox perspective on the unit of

selection question. One influential objection, due to Kim Sterelny, is sometimes

called the ‘Elvis Presley’ problem, or simply the boundary problem. According to

Griffiths and Gray, Sterelny objects that the boundary of the developmental

system is too ill-defined and diffuse for it to function as a workable unit of

selection. Sterelny comments that, although Elvis may have influenced the

development of his musical sensibilities, “surely there is no system, no sequence,

no biologically meaningful unit that includes [him] and Elvis” (quoted in Griffiths

& Gray, 1994, p. 286). Consequently, he claims, there is no principled way to

identify which particular causal influences on development can be unambiguously

counted as part of the developmental system.

In response to the claim that the developmental system is too diffuse and

open-ended to serve as a proper unit of selection, Griffiths and Gray (1994)

concede that this is a weakness in the standard representation of the

developmental system. They suggest therefore that it is necessary to individuate

the developmental system in a way that departs from the prior formulations of

DST (e.g., Gray, 1992; Oyama, 1985), which called for the inclusion of more,

rather than fewer, developmental influences (pp. 285-286). Specifically, Griffiths

and Gray argue that, with respect to evolutionary questions, the developmental

system should be understood to include all and only “those developmental

resources whose presence in each generation is responsible for the characteristics

that are stably replicated in that lineage” (p. 286). To illustrate this point, they

distinguish between a unique scar and the general propensity to scar. Although the

causes of each are part of the developmental system, as it was originally

conceived, only the propensity to scar could reasonably be ascribed to the evolved

phenotype, and only the developmental resources involved in its development are

part of the unit of evolution. For the purposes of explaining evolution, therefore,

an alternative formulation of the developmental system is needed, which includes

only those developmental causes whose effects have evolutionary explanations,

i.e. are typical in the specified lineage. The intent of articulating this restricted

conception of the developmental system, they write, is simply “to point to the

explanatory connection between the trans-generational stability of these resources

and the trans-generation stability of certain developmental outcomes” (p. 287).

The explanandum is thus shifted from the real organism, warts and all, to the

lineage-typical organism, an abstract entity consisting exclusively of

developmental outcomes that are stable between generations. In order to

distinguish this new, more restricted entity from the original developmental

system, Griffiths and Gray (sometimes) refer to the new entity as the evolutionary

developmental system. In order to be clear, I will refer to Griffiths and Gray’s

approach as EDST.
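Stated schematically, the restriction amounts to a filter over the original, inclusive inventory of developmental interactants. The sketch below, with hypothetical entries, is meant only to display the shape of the move: every listed factor belongs to the developmental system in the original sense, but only those that recur reliably across generations and help account for lineage-typical outcomes are counted in the evolutionary developmental system.

    # Hypothetical inventory; the filter, not the entries, is the point.
    developmental_interactants = [
        {"factor": "nuclear DNA",             "recurs_each_generation": True,  "explains_lineage_typical_outcome": True},
        {"factor": "wound-healing machinery", "recurs_each_generation": True,  "explains_lineage_typical_outcome": True},
        {"factor": "one particular scar",     "recurs_each_generation": False, "explains_lineage_typical_outcome": False},
        {"factor": "symbiotic gut microbes",  "recurs_each_generation": True,  "explains_lineage_typical_outcome": True},
        {"factor": "a passing hailstorm",     "recurs_each_generation": False, "explains_lineage_typical_outcome": False},
    ]

    evolutionary_developmental_system = [
        item["factor"] for item in developmental_interactants
        if item["recurs_each_generation"] and item["explains_lineage_typical_outcome"]
    ]
    print(evolutionary_developmental_system)

The filter is heuristic rather than metaphysical: the excluded items remain developmental interactants in the original, inclusive sense.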

Lest it appear that this distinction between evolved and individual

developmental outcomes is a recapitulation of the old distinction between innate

and acquired characters, Griffiths and Gray (1994) are quick to reject this

interpretation. They are not, they insist, implying that the distinction between

evolved and individual aspects of development is intrinsic to the developmental

system. Developmental outcomes that are lineage typical are not essentially

different from those that are the result of transient causes. That would entail two

classes of developmental process, which they explicitly reject (p. 287). Rather, the

distinction is purely heuristic. Evolutionary explanations, they argue, require us to

abstract away from the hurly-burly of the individual life cycle so that we can

focus on the dynamics that characterize much longer time scales. It follows from

this that distinguishing particular characters as having evolutionary explanations

must not be understood as implying that they are somehow more fundamental,

biological, or natural, or that they are more difficult to change than an individual

character. As they explain, “the fact that a trait has an evolutionary history has no

implications about the role of environmental factors in the process by which it



develops, except that the process is sufficiently reliable to produce similar

outcomes in each generation” (p. 280).

Griffiths and Gray (1994) note a second potential objection to their model

of extended inheritance, which is that even the evolutionary developmental

system, restricted to those factors needed to explain evolved phenotypes, still

includes factors whose state and existence is entirely independent of the history of

the relevant lineage. Thus, stable aspects of the physical world, such as sunlight

and gravity would seem to qualify as units of inheritance, since they are reliably

present and contribute to the construction of evolved developmental outcomes. In

addition, some developmental resources exist as a consequence of the life cycles

of other species, such as the discarded shells subsequently appropriated by hermit

crabs. Griffiths and Gray admit that one might plausibly object by claiming that

such independently existing factors have no interesting significance for

evolutionary explanations because their availability is unaffected by the activities

of the lineages that depend on them. Doesn’t evolution by natural selection, after

all, rely on the differential capacity of individuals to exploit scarce resources in

reproducing themselves? It is particularly difficult to see where sunlight and

gravity might fit into this picture.

Griffiths and Gray (1994) respond to this skepticism by suggesting a shift

of focus from developmental systems to developmental processes. They define the

developmental process as:

a series of events which initiates new cycles of itself. . . . The events which
make up the developmental process are developmental interactions—events in

which something causally impinges on the current state of the organism in


such a way as to assist production of evolved developmental outcomes. (p.
291)

The intention of this revision, they say, is to clarify a crucial point about the

ontology of the developmental system. Namely, the shift of focus to processes

eliminates the misimpression that persistent features of the environment are being

counted as part of the unit of evolution. What evolves is not the persistent feature

or independent object, but the pattern of developmental interactions, involving

persistent features and having an evolutionary history with respect to the

particular lineage. For example, while the bare presence of sunlight does not

depend on the evolutionary history of a lineage, a lineage-typical relationship with

the sun clearly does. The developmental role of the sun for a lineage of bats or

deep sea anemones is obviously quite distinct from what it is for rattlesnakes or

blue-green algae. Strictly speaking, then, what counts as part of the developmental

process is not the sun itself, but those patterns of interaction involving the sun that

explain lineage-typical developmental outcomes. The evolutionary individual,

then, encompasses all and only that which is replicated in development because it

is these resources and relationships that can be given evolutionary explanations

with respect to the lineage in question.22 [Footnote 22: Griffiths and Gray call the constituents of this group replicators (see esp. Griffiths & Gray, 1997), although they reject the replicator/interactor distinction.] Persistent environmental features are still part of the inherited developmental system, which is now defined as the “sum of objects” involved in developmental interactions. But they are not considered to be

replicators, as the latter are limited to the subset of developmental resources that

are replicated, and therefore subject to evolutionary explanation.

Although, according to Griffiths and Gray (1994), a genic selectionist

might counter that the replication of ecological interactions can be redescribed at

the level of behavioral genetics without losing meaning (e.g., Sterelny & Kitcher,

1988), they are able to show that, in some cases, differential replication of

ecological relationships can take place independent of genetic change. They cite

the example of habitat imprinting. Ecologists have identified a phenomenon they

call natal habitat preference induction, which is a technical name for the apparent

tendency of individuals to acquire habitat preferences during early development

(Mabry & Stamps, 2008). Based on this mechanism, an entire population can

develop a habitat preference, which is reconstructed in each generation. The

upshot is that habitat imprinting represents an example of evolutionary

differentiation occurring without changes in genes. Griffiths and Gray describe a

study of European mistle thrushes in which a habitat preference emerged

involving a single population of thrushes (p. 288). This population became

imprinted on parkland rather than forest, meaning that their evolutionary fate, in

relation to other populations, now depended on the availability of this particular

habitat. “The fate of different thrush lineages,” Griffiths and Gray explain, “will

depend on their interaction with the particular habitat with which they are reliably

associated, and the fate of that habitat” (p. 288). Ecological associations can, in

this way, confer lineage-specific fitness advantages that are relatively

irreversible.23
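A minimal numerical caricature (invented numbers, loosely modeled on the thrush case) makes the structure of the argument visible: each offspring simply imprints on its natal habitat, there is no genetic variable in the model at all, and yet the two lineages differentiate and their subsequent fates track the fates of the habitats with which they have become associated.

    # Invented numbers: habitat association is reconstructed by imprinting
    # alone, and lineage fates diverge with no genetic change in the model.
    def next_generation(counts, capacity, fecundity=2):
        new_counts = {}
        for habitat, number_of_breeders in counts.items():
            offspring = number_of_breeders * fecundity                 # offspring imprint on the natal habitat
            new_counts[habitat] = min(offspring, capacity[habitat])    # survival limited by that habitat
        return new_counts

    counts = {"forest": 50, "parkland": 50}
    capacity = {"forest": 20, "parkland": 200}                         # suppose forest is being lost
    for _ in range(10):
        counts = next_generation(counts, capacity)
    print(counts)    # {'forest': 20, 'parkland': 200}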

An additional advantage for the shift in focus from the developmental

system to the developmental process, according to Griffiths and Gray (1994), is

that it helps to clarify the temporal delineation of the evolutionary individual.

DST has been criticized, they point out, because the developmental system does

not seem to lend itself to the formation of clear-cut generations. Without a

cyclical succession of generations, each of which must reassemble the functional

structures of the lineage, there is no opportunity to evolve functional complexity

(p. 293). But with the emphasis on developmental processes, Griffiths and Gray

help to bring the periodicity of developmental events into focus. As a

consequence of this reasoning, Griffiths and Gray are able to identify the temporal

boundaries of the unit of selection, arriving at the notion of the life cycle. The

repeated sequence of developmental events that define the life cycle are, they say,

those that are “substantially repeated throughout the lineage” (p. 293). The life

cycle is not exactly identical to the cycle of birth and death associated with a

conventional organism, but the principle is the same, and in most cases they

coincide. The development of an individual leaf is not a life cycle because, if a

new life cycle is initiated from a leaf, it will not reproduce just the leaf but the

entire sequence of developmental processes of the plant (p. 295). [Footnote 23: As Gray (1992) points out, the conventional assumption that all and only genetic changes are irreversible is unjustified.] Besides the organism, there are additional atomic units of repetition, which meet Griffiths and

Gray’s definition of a life cycle, occurring at nested scales above and below the

organism level. This multi-level view of the unit of selection, they say, constitutes

a form of pluralism (p. 295).24

The extended replicator

As I said, there is a growing dissatisfaction among some evolutionary

theorists with gene-centered accounts of development and evolution. Ironically,

perhaps no author has done more to incite this discontent than Richard Dawkins,

whose lucid and compelling defense of gene-centered evolution has made his

selfish gene theory (1976) a lightning rod for opposition. At the same time, while

aware of the difficulties arising from the privileging of DNA as a unique source of

formative cause in development and evolution, some authors are still strongly

attracted to the conceptual elegance of the gene’s-eye-view representation of

natural selection. Just this sort of enthusiasm informs Sterelny, Smith, and

Dickison’s (1996) extended replicator theory. This model endeavors to construct a

principled account of the unit of selection that acknowledges the criticisms of

gene-centrism while preserving what they consider to be the main advantages of

the gene’s-eye-view perspective. [Footnote 24: For Sterelny and Kitcher (1988) this would not be pluralism but rather a form of hierarchical monism because it seems to suppose that life cycles at different scales will have unique adequate representations of the selection processes proper to them. See Chapter 7.] ERT theorists concede the force of parity-of-reasoning arguments, but at the same time, they insist that the replicator class

must be limited to those entities that can facilitate cumulative evolution by natural

selection. Thus they maintain that the set of replicators is larger than the genes,

but smaller than the superabundance of developmental resources entailed by

Griffiths and Gray’s (1994) model.

ERT theorists (Sterelny et al., 1996) accept that the ordinary rationale

relied upon to grant genes a privileged causal role in developmental explanations

is based on an untenable double standard. Furthermore, they concede that the

decentering of the developmental gene forces a corresponding decentering of the

evolutionary gene. Sterelny and Kitcher (1988) had attempted to demonstrate that,

even without a simplistic conception of genetic causality, one can defend the

genic selectionist representation of natural selection based on the existence of

reliable covariation between genes and phenotypes. But as Gray (1992) and

Griffiths and Gray (1994) show, the gene-for locution cannot be given a meaning

that distinguishes gene-trait covariation from environmental-trait covariation. If

no account of genes exists that justifies treating them as unique bearers of

hereditary information, then a description of evolution exclusively in terms of

genetic change will not suffice. Sterelny et al. (1996), however, do not accept the

notion of explanatory symmetry among developmental influences. They counter

that the recognition that the gene’s role is not unique does not justify Griffiths and

Gray’s radical extension of inheritance or their rejection of the

replicator/interactor distinction. There is still, Sterelny et al. claim, a crucial



distinction to be made between developmental resources that have an evolved

function in the production of the phenotype, and those that do not. Replicators,

genetic or otherwise, are distinctive in that they have the evolved function of

representing the phenotype.

A bit of background on the replicator concept is perhaps in order here. The

term replicator is due to Dawkins (1976) who, as part of his legendary selfish

gene theory, distinguished between replicators and vehicles. The basic idea is that

the evolutionary gene, as originally defined by Williams (1966), is a selfish

replicator whose only function is to make as many copies of itself as possible.

Vehicles are simply survival machines built by replicators as part of the

replicators’ strategy to propagate themselves into the future. According to

Dawkins, in order to function as a unit of inheritance and selection, a replicator

must exhibit fecundity, fidelity, and longevity. That is, it must be prolific in its

self-replication; it must replicate with a high degree of reliability; and it must

maintain its structural integrity over many generations. Only so-called germ-line

replicators satisfy these three criteria and make possible cumulative evolutionary

change. In addition, only active germ-line replicators can be units of selection,

according to Dawkins (1984), because they are able to influence the likelihood

that they will be copied. For this reason, “adaptations ‘for’ their preservation are

expected to fill the world and to characterize living organisms. Automatically,

those active germ-line replicators whose phenotypic effects happen to enhance

their own survival and propagation will be the ones that survive” (p. 163).
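The weight carried by the fidelity requirement is easy to underestimate. A line of arithmetic (with arbitrary round numbers, on the simplest possible assumption that each copying event independently preserves a structure with probability f) indicates why it matters: the chance that a structure is still intact after g generations falls off as f to the power g, so even modest per-copy error rates dissolve a structure long before cumulative selection could build anything on it.

    # Arbitrary round numbers: survival of an intact structure over many
    # copying events falls off as fidelity ** generations.
    for fidelity in (0.999, 0.99, 0.9):
        print(fidelity, fidelity ** 1000)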

Dawkins (1976) offered his analysis of replicators and vehicles to defend

his assertion that the evolutionary gene is the only viable unit of selection. In an

effort to bring greater clarity to the unit of selection debates, Hull (1980, 1988)

presented an alternative, replicator/interactor framework. Hull transformed

Dawkins’ distinction from a metaphorical one to an explicitly metaphysical one.25

Where a given entity, on Dawkins’ account, just is either a replicator or an interactor, for Hull, there is no meaningful sense in which something can be

strictly identified as one or the other. For example, according to Hull (1988),

“there are no general processes in which genes and only genes function” (p. 21).

Hull’s terminology does not identify types of things at all; it names the theory-

relative functional kinds involved in selection processes. Replicators are those

entities whose structure is semi-reliably copied during reproduction. Interactors

are the cohesive wholes whose interaction with the environment results in

differential copying of replicators. According to this account, it is ambiguous

simply to identify the primary unit of selection; one must identify either the

primary unit of replication or the primary unit of interaction (p. 23) (see also

Godfrey-Smith, 2000b). It is important to recognize that Hull’s framework does

not map as directly onto Dawkins’ as many authors imply. [Footnote 25: This distinction between the metaphorical and the metaphysical is an inexact one. The point is that Hull (1984) has identified his project as metaphysical to emphasize that he is not making an empirical distinction.] For example, the entities identified by Dawkins as replicators can function, within Hull’s

framework, as replicators or interactors, depending on the context.

According to Sterelny, et al. (1996), ERT is grounded explicitly in Hull’s

conceptual framework. For one thing, they reject Dawkins’ specific claim that the

replicator is not itself adapted for its role. Dawkins (1984) claimed that genes just

exist and benefit from the adaptations of their vehicles, but that they are not for

anything. Sterelny et al., however, “line up with Hull” (p. 388), in claiming that

replicators can also function as interactors. This is exemplified, for them, by the

fact that genes exhibit adaptations that increase the likelihood of their own

replication. Replicators, they argue, are selected for their effects on the fitness of

interactors, and, as a result, the causal role they play in producing these

characteristics must be understood as an evolved function (pp. 388-389). Indeed,

it is by treating DNA and its replication mechanisms as both replicators and

interactors that ERT theorists are able to maintain a privileged (though not

necessarily unique) role for genes in evolution and development.26 Genes are

special, for Sterelny et al., precisely in virtue of the fact that they have been

selected for their causal contributions to reproduction and development.

[Footnote 26: By making interaction an essential characteristic of the replicator, this approach seems to blur the distinction between interactors and replicators. I believe this contradiction is already present in Hull’s treatment, but ERT turns it into a strength.]

To describe the special role of genes and their associated copying

mechanisms, ERT makes extensive use of the concept of intentionality,

introduced in Chapter 5. As I have explained, intentionality can be understood in

terms of representation or aboutness. Intentional information is understood as

information that is about some state of affairs. Like Maynard Smith (and many

others), Sterelny et al. (1996) rely on the teleosemantic notion of

intentionality developed by Ruth Millikan (1984; see also Papineau, 1987). In the

context of philosophy of mind, where the insight originated, the meaning of

intentional mental content is roughly understood to be derived from the function

of that mental content in the evolutionary history of the lineage. The proper

function of a belief, in other words, is to provide its possessor with an effective

representation of the world. And since the mind was selected for its

representational function, we are justified in saying that such mental content may

also misrepresent the world; the mind can malfunction in the same way as the

heart can fail to pump blood. Basically, the teleosemantic account of meaning

appeals to the trial and error reasoning of Darwinism to explain perception and

knowledge as we intuit them, without recourse to old-fashioned dualism (for a

contrary take on Darwinism and intentionality see Dennett, 1987). The

reappropriation of teleosemantic reasoning by philosophers of biology,

meanwhile, is intended to justify the shift from functional to representational

language in explaining the role of genes in development. Parenthetically,

Godfrey-Smith (1999) argues that this shift may not be as direct as Sterelny et al.

claim because, although representation implies function, the reverse is not true.

“Legs are for walking, but they do not represent walking” (p. 320).

The primary aim of the application of teleosemantic reasoning to ontogeny

is to escape the constraints imposed by mathematical information theory. As I

explained in the previous chapter, information theory only applies to statistical

correlations between the structure of the genome and the structure of particular

developmental outcomes. From this standpoint, one systematic covariation is as

good as any other, and radical causal symmetry obtains. By explicitly attributing

intentionality to genetic (and certain other) information, ERT hopes to avoid this

consequence and sustain the dichotomous view of development entailed by the

replicator/interactor framework. According to Sterelny et al. (1996), the appeal to

intentionality entirely bypasses the issue of structural correlation (p. 387). For one

thing, if genetic information is intentional, a complete absence of correlation may

only indicate that something went wrong. Since the genome represents the proper

outcome of development, deviations from that course can be attributed to errors

reading the genetic information. And, since genetic copying mechanisms are

designed for making accurate copies of genes, inaccurate copies or mutations can

also be described as mistakes. This sort of intentionality, they note, is basic to the

way we explain DNA proofreading and repair mechanisms (p. 387).

In addition, since genetic information represents function rather than

structure, reliable correlation is beside the point. To elucidate this idea, Sterelny et

al. (1996) draw an analogy between the information encoded in the genes and the

information contained in the plans for a building. Like the phenotype, the final

building is a product of a complex interplay of causes. And the plans themselves

may be correlated with a variety of events, such as kickbacks or jobsite injuries,

which are in no way intended. Nevertheless, the plans contribute to the final

outcome in a way that a bag of cement does not (p. 387). By this logic,

developmental causes, such as the genes, are similar to blueprints, in that they

have the precise form they do in virtue of their intended influence on the end

result. To illustrate this basic asymmetry with a biological example, Sterelny et al.

imagine a particular shrub with a facultative pattern of development in which a

water-conserving leaf structure emerges only when the shrub is grown in an arid

climate (p. 388). While the adaptive phenotype depends on the interaction

between the plant’s genome and the arid climate, only the genome has its

structure as a consequence of a history of adaptive interaction. The arid climate,

though necessary for that particular phenotype to be expressed, has its form

independent of the plant’s evolutionary history. Thus, for ERT, since the genome

has the form it does due to its role in the reproduction of the adapted phenotype,

its biological function must be understood as representing the phenotype.

Sterelny et al.’s (1996) analysis provides them with a justification for the

privileging of genetic causes, but the ultimate consequence of this reasoning, they

claim, is that it identifies a biological function that can be fulfilled by an entire

class of entities. These entities are the extended replicators. ERT, therefore, seeks

to replace the classic view of evolution, for which the genes are the only units of

selection and inheritance, with one that recognizes all and only those

developmental influences that have been selected for their role in reproducing the

lineage. Sterelny et al. offer the following general definition of replication:

If B is a copy of A:
(i) A plays a causal role in the production of B.
(ii) B carries information about A in virtue of being relevantly similar to
A. This similarity is often functional: B has the same or similar
functional capacities as A. . . .
(iii) B respects the xerox condition: B is a potential input to a process of
the same type that produced it.
(iv) Copying is a teleological notion. For B to be a copy of A it must be the
output of a process whose biofunction is to conserve function. . . . B
must be meant to be similar to A; that similarity is why those
mechanisms exist. (p. 396)

This account shows how a gene can be a replicator without the onerous

requirement of self-replication. The hypothetical gene representing the arid

climate morph of the shrub discussed above meets these requirements; it is copied

in virtue of its role in the production of the arid climate phenotype. That covers

the first three criteria. The gene satisfies the fourth requirement based on the assumption

that perfect copying is the proper function of the machinery of DNA replication.

It follows from the conception of the replicator as any device selected for

its role in maintaining functional similarity between generations that this

designation cannot be limited to genes. Sterelny et al. (1996) cite additional

examples:

Kakapo track and bowl systems, nest site imprinting and other mechanisms of
habitat stability; song learning, food preferences and other traditional
examples of cultural transmission in animals; gut micro-organism
transmission in food and other micro-organism symbionts which parents are
adapted to transplant to offspring; and centrioles and the other causally active
non-genetic structures that accompany genetic material in the gamete. (p. 389)

ERT treats it as an open empirical question whether particular lineages of nests or

burrows can be counted as evolutionary replicators. They allow, for example, that

the Kapiti penguin burrow is a replicator. It meets their basic criterion of bearing

information about biological structure, in that its form is a consequence of a

history of burrow-burrower interactions. That is, the form of the burrow

represents future burrows because it influences the burrowers in a way that makes

a difference in the next generation of burrows. Moreover, since differences in

burrow structure directly influence penguin fitness, these differences can affect

the burrow’s chance of being replicated.

At the same time, Sterelny et al. (1996) contend that their extension of

inheritance is not “promiscuous” in that, unlike Griffiths and Gray (1994), they do

not extend inheritance to “every reliably recurring [developmental] factor,” such

as, for example, “the human hand” (p. 389). In addition, they explicitly reject

Griffiths and Gray’s claim that the hermit crab-shell relationship is a replicator,

arguing that neither the specific form nor the reliable availability of discarded

shells depends on the evolutionary history of the hermit crab species. The crab-

shell relationship contrasts with both the gene responsible for the arid-adapted

shrub and the Kapiti penguin burrow. Both the traditional and the extended

replicator have the evolved biofunction of reproducing adaptive phenotypes, and

both are replicated by evolved copying mechanisms. The relationship between a

hermit crab and its shell, on the other hand, cannot be a replicator, according to

Sterelny et al., because the parent crabs cannot influence the availability of shells

for the next generation (p. 397). In addition, discarded shells are not adapted for

their role in the crab life cycle (pp. 392-393). They are more like the garbage cans

invaded by suburban possums. The cans are relevant to the possums’ life cycle,

but the availability of the cans is not explained by the history of possum-can

interaction.

The evolutionary developmental system vs. the extended replicator

Extended replicator theory is a response to the general set of problems

associated with genic selectionism, but it takes, as its touchstone, the particular

critical response to these problems articulated by Griffiths and Gray (1994). ERT

theorists accept the conclusion that genes are not causally unique agents in

development, but deny that the only alternative is to redefine inheritance so that it

applies to every factor that is causally relevant to intergenerational similarity.

Griffiths and Gray (1997) counter that once the ERT definition of the extended

replicator is cleansed of its unsupportable elements and the remaining elements

are properly understood, it fails to exclude anything that they are not already

excluding. Griffiths and Gray’s response is only partially successful, however. As

I will point out below and discuss in detail in the following chapter, the two

camps are relying on distinctly different assumptions and, consequently, they are

to some extent talking past each other. In examining Griffiths and Gray’s

response to ERT, I hope not only to shed more light on what issues are at stake in

this debate, but also to expose the conceptual incongruities that characterize this

exchange.

In their response to Sterelny et al. (1996), Griffiths and Gray (1997)

reiterate their position regarding what constitutes a unit of evolution, and what the

replicators are that make it up. In addition, they express doubts about whether the

supposedly strict definition of replicators and replication set forth by the

proponents of ERT can produce a more disciplined account of inheritance without

ruling out its own favorite candidates. For example, the ERT requirement that a

replicator must be copied by a mechanism that has evolved for the purpose of

functional replication (criterion [iv] above) may be unworkable. Griffiths and

Gray argue that this requirement runs the risk of excluding the modes of

replication that form the foundation of the replicator paradigm. As they point out,

a number of the key mechanisms responsible for cell replication, including the

genetic code itself, seem to have been frozen in place since evolutionarily ancient

times (p. 483). This question ultimately turns on the degree to which a phenotype

must be independently selectable in order to be considered an adaptation. This is, of

course, a live question in evolutionary theory, and one that is central to debates

over the relative efficacy of natural selection.

Godfrey-Smith (2000b) also expresses skepticism about the ERT

requirement that copying mechanisms be products of adaptive design. To begin

with, he notes that, if taken literally, this principle would preclude us from asking

how highly specialized copying mechanisms evolved from more haphazard ones

(p. 10). In addition, he continues, if evolution is defined in terms of differential

replication, and replication itself is defined as an evolved function, an infinite



regress results. Sterelny’s response to the point about infinite regress, according to

Godfrey-Smith, is to deny that early evolution required replicators (cited in

Godfrey-Smith, 2000b, p. 10). This is puzzling, though, since Sterelny (2001)

argues that the mechanisms of DNA replication are “one complex product of

evolution” and that DNA replication was not the first replication system (p. 338). Here, he draws

support from Maynard Smith and Szathmáry (1999), who speculate that the

evolution of DNA relied on earlier phases of evolution involving replicators that

exhibited multiplication, variation, and heredity (pp. 16-17). For them, like most

conventional evolutionary theorists, replication is part of the very definition of life

(see also Maynard Smith & Szathmáry, 1995, pp. 17-18). I can only conclude that

Sterelny is drawing some sort of distinction between evolutionarily modern

replicators, those of the gene epoch, and the ancient replicators that preceded

genes. But this seems contrary to the spirit of the extended replicator approach.

Griffiths and Gray (1997) also object to Sterelny et al.’s (1996) assertion

that the developmental system is excessively holistic.27 The gist of Sterelny et

al.’s complaint, as discussed above, is that Griffiths and Gray are unable to

demarcate an evolutionary individual because the developmental system is

defined in a way that includes factors with doubtful causal relevance for

evolution. Sterelny et al. write, “some causal influences are going to be part of the

[evolutionary] developmental system . . . and others are not” (p. 383). As noted,

27 In the next chapter, I discuss the more general allegation of holism raised against DST.

Griffiths and Gray (1994) address this problem by reconsidering the ontology of the

developmental system and focusing on developmental processes (p. 291). Yet, for

Sterelny et al., the problem remains.

To understand Sterelny et al.’s (1996) worry and why it persists, it is

necessary to consider the problem from the particular theoretical perspective

adopted by ERT. Recall that ERT is concerned to identify entities whose

theoretical role as units of selection is equivalent to that played by Dawkins’

(1982) active germ-line replicators. For ERT the unit of evolution is identical to

the unit of inheritance, and meets their supposedly precise definition of the

replicator. Sterelny et al., therefore, seem to be looking to Griffiths and Gray for a

structurally analogous account of the unit of selection/inheritance/replication.

For Griffiths and Gray (2004), however, the distinctions and categories of

inheritance establish a different explanatory terrain. To begin with, they define the

units of inheritance as “developmental resources that reliably reoccur in each

generation and interact with the other resources to reproduce the life cycle” (p.

417). The unit of evolution,28 meanwhile, is defined as the developmental process

or life cycle, which is comprised of developmental interactions among different

sorts of developmental resources. Some of these developmental resources, such as

the vocalizations of Gottlieb’s ducklings, are self-generated. Others, such as soil

28 I use the phrase unit of evolution rather than unit of selection in this context simply because Griffiths and Gray do not prejudge the relative significance of adaptation vs. other types of evolutionary change.

conditions, are structured by ancestral populations, while still others, like sunlight,

merely persist (1997, p. 484). Finally, there are those that are actively replicated

by the developmental process of either the parents or the offspring. These

developmental resources, the replicators, constitute a third category, which, for

Griffiths and Gray (1994), includes “anything that is reliably replicated in

development” (p. 300).

Thus, in order to account for the evolution of developmental systems,

Griffiths and Gray (1994) elaborate three partially overlapping categories: (a)

developmental resources, (b) developmental processes, and (c) replicators. The

replicator category encompasses all inherited resources, other than persistent

environmental features, such as sunlight, and independently produced factors,

such as discarded shells. Although, for Griffiths and Gray’s model of extended

inheritance, sunlight and discarded shells are inherited developmental resources,

they are not replicators, and therefore they are not part of the unit of evolution,

because they are not reconstructed by the lineage; rather, their evolutionary

significance depends on the replication of the interactions that involve them. It is

also important to recognize that, although replicators are discussed in the context

of inheritance, many of the developmental influences that Griffiths and Gray

include in the replicator category would ordinarily be considered interactors

because they are products of development.

The complexity of this model is no doubt the main reason that

Sterelny et al. (1996) claim that Griffiths and Gray’s (1994) conception of

evolution is “hard to characterize precisely” (p. 379). Matters are certainly not

helped by the latter’s use of the term replicator as a synonym for the unit of

evolution. Having abandoned the replicator/interactor distinction, Griffiths and

Gray evidently mean something very different by this word than Sterelny et al.

Yet they insist that the two concepts cover the same entities (1997, p. 484). The

ERT response seems to be simply to ignore the basic distinctions that form the

substance of Griffiths and Gray’s (1994) argument, including the one between

developmental processes and the developmental systems. Sterelny et al. write

that, because the replicator for Griffiths and Gray is comprised of “all the entities

and their relations that go into constructing an organism—we do not see that this

is a distinction that makes a difference” (p. 379). Therefore, instead of the

developmental process, ERT proponents take Griffiths and Gray’s unit of

evolution to be the entire developmental system. It should not be surprising that

Sterelny et al. find this to be an unmanageable replicator, since, according to their

interpretation, it includes everything in the universe.

It is tempting simply to note Sterelny et al.’s (1996) failure to understand

Griffiths and Gray’s (1994, 1997) model, and perhaps to attempt to make it more

clear, as I have done here. However, I think a basic theoretical

incommensurability precludes an easy resolution to this debate. As I have

indicated, the point of the extended replicator is to provide a revised account of

the unit of inheritance and selection in order to escape the limitations of genic

selectionism without sacrificing the elegance of Dawkins’ gene’s-eye-view



conception of evolution. Griffiths and Gray’s aims are clearly quite different. As

constructionists, their stated aim is to develop a general account of evolution and

development (1994, p. 278). Despite this divergence of intent, however, Griffiths

and Gray (1997) seem to be convinced that the replicators described by them,

which include everything actively replicated in development (category [c] above),

would also be replicators for ERT (pp. 482-483). Here, I think it is Griffiths and

Gray who misconstrue ERT.

Consider the case of the human hand. Based on the ERT principle that a

replicator must have its particular form in virtue of its specific, naturally selected

effect on intergenerational similarity, Sterelny et al. (1996) exclude

developmental resources that do not appear to have explicit developmental

functions. They cite the human hand as something that, although it meets

Griffiths and Gray’s (1994) replicator criteria, would not be a replicator for ERT

because its biofunction is economic rather than developmental (p. 389). Griffiths

and Gray (1997) counter that the functional criterion used by Sterelny et al. to

exclude the hand is unsupportable. However, the details of their argument reveal

that they are missing the point. Griffiths and Gray (1997) point out, for example,

that biological structures have indefinitely many functions as a consequence of

how they are deployed in the ecological life of an organism (p. 483). The human

hand is a case in point in that, among other things, it plays an essential role in

child care, which is clearly a developmental biofunction. Therefore, they claim,

the hand does meet ERT’s replicator criteria (p. 484).



Although Griffiths and Gray’s (1997) point about ERT’s general approach

to assigning biological function is valid, their criticism does not address the actual

rationale Sterelny et al. (1996) seem to have in mind when excluding factors such

as hands. Whereas Griffiths and Gray interpret ERT’s replicator definition as

requiring merely that a replicator have an evolved developmental function, ERT

actually requires it to have a specific developmental function. That is, the reason

the hand is not a replicator for ERT is that it does not have the specific function of

producing hands in the next generation (criterion [ii]). It does not influence the

hand variations of the next generation by getting copies of itself to be propagated.

Rather, the hand is simply an interactor. If it is a well-adapted hand, it benefits the

replicators (genes in this case) that will have the function of representing the hand

in the development of future generations. ERT’s extended replicators, like genes,

represent specific developmental outcomes, either directly or indirectly. Again,

the point of the ERT framework is not to identify the constituents of a larger unit

of evolution; it is to describe natural selection in terms of selfish units competing

to get themselves copied into future generations. Therefore, it would seem that

Griffiths and Gray’s claim that ERT is just as holistic as their own approach

doesn’t quite apply, given that ERT’s aims are fundamentally atomistic. Sterelny

et al. are explicitly concerned to represent evolution in terms of the selfish

replication of more or less independent elements of the developmental system.

Moreover, setting aside the nuanced disagreements about what is an adaptation

for what, ERT’s claim of admitting a more restricted group of replicators is



substantiated simply by the fact that it embraces the replicator/interactor

distinction, while Griffiths and Gray reject it.

Conclusion

Griffiths and Gray (1994, 1997, 2001) claim to offer a framework in

which developmental questions can be taken seriously without sacrificing the

power of traditional evolutionary explanation. The unit of evolution is construed

to be the life cycle as a whole, rather than the traditional adult organism, or the

Dawkinsian gene. Their parity thesis leads to the decentering of the

developmental as well as the evolutionary gene and justifies an expanded

conception of inheritance that has the potential to purge heredity of its

preformationist tendencies. Their model, they suggest, transcends the dichotomies

that plague traditional representations of evolution and development, such as

nature vs. nurture, inherited vs. acquired, and internal vs. external sources of

form. As a result, the questions elided by developmental dualism and gene

selectionism can at last be admitted into the research agenda. Yet, the refinements

intended to accommodate philosophical debates about units of selection and units

of inheritance have led them to transform the developmental system in a way that

departs from the original emphasis on systems thinking. In particular, the

evolutionary developmental system understood as a set of independent elements

threatens the original DST emphasis on radical context dependence. In the next

chapter I critically examine Griffiths and Gray’s revision of DST with particular

attention to the question of integrating developmental and evolutionary theory.



Chapter 7: Re-rethinking Evolution and Inheritance

The previous chapter examines Griffiths and Gray’s (1994, 1997, 2001)

impressive attempt to formulate evolutionary theory in explicitly developmental

systems terms. Drawing on the principle of metaphysical parity among

developmental influences, they argue for an extended inheritance model that

radically expands what can be counted as a unit of inheritance, and suggest that

evolution is better represented as the differential replication of developmental

processes. In this chapter, I question Griffiths and Gray’s preservation of the unit

of inheritance concept, and argue that the consequences of DST’s constructionist

reasoning are more radical than Griffiths and Gray’s model allows. I agree that

parity of reasoning about inheritance is a necessary first step. However, as long as

units of inheritance, however extended, are understood to constitute a distinct

causal or explanatory category, the genuine integration of development and

evolution will be impeded. This is because the inheritance paradigm, both

historically and conceptually, is based on their separation. On the other hand, I

believe a formulation of the developmental systems approach that explicitly

eschews inheritance-based explanations can provide the basis for a genuine

integration of development and evolution.

This chapter presents a critique of Griffiths and Gray’s (1994, 1997, 2001)

revision of the developmental system and a defense of developmental systems theory

as originally articulated and consistently defended by Oyama (1985, 2000b).

While I begin with a critical assessment of Griffiths and Gray’s extended



inheritance model, most of the chapter is a defense of what I see as the most

important insights of the developmental systems approach. Where Griffiths and

Gray take Oyama’s parity-of-reasoning arguments about inheritance to imply the

need for a more inclusive catalog of units of inheritance, I conclude that the

catalog itself is a problem. Obviously, intergenerational resemblance and

morphological stability rely on various material and structural elements being

shared across generations. Nevertheless, to suppose that the former can be

explained in terms of the latter, it seems to me, risks undercutting the demand that

developmental outcomes be given fully developmental explanations. Moreover, I

suggest that some of the alleged problems with DST that motivated Griffiths and

Gray’s revisions actually derive from a basic misunderstanding of the

developmental systems approach.

Finally, I outline an alternative framing of developmental systems and

evolution that I believe is more faithful to the foundational insights of DST. In

place of the appeal to generic units and mechanisms of inheritance to explain

intergenerational stability, I emphasize the stabilizing, regulatory, integrative, and

constructive capacities of interactive networks. Developmental systems, as nested

and overlapping interactive networks, construct, and are constructed by,

contingent, interdependent influences. I suggest that an interactive network

paradigm reveals the natural overlap of ontogenetic and phylogenetic dynamics

and strengthens the developmental systems case for a constructionist

reformulation of biological theory.



On the Costs and Benefits of Extending Inheritance

There is no question that Griffiths and Gray (1994, 1997, 2001) have made

a compelling and important contribution to philosophy of biology by

demonstrating that evolution can be represented in a way that does not rely on a

distinction between replicators and interactors. My concern, however, is that their

articulation of the evolutionary developmental system departs in substantial and

important ways from the original conception of the developmental system, and

thereby undermines the force of the constructionist challenge and their own stated

intent of integrating development and evolution. In particular, to the extent that

Griffiths and Gray maintain the structure of the inheritance paradigm, by relying

on units of inheritance and mechanisms of replication, I worry that their model

makes the continued marginalization of developmental questions all too easy.

There are two distinct manifestations of my general worry. First, through

their active engagement in debates about units of inheritance and replicators,

Griffiths and Gray (1994) obscure the distinction between causes and products of

heredity. As I will explain below, this ambiguity is typical of gene-centered

approaches, but it is antithetical to the constructionist commitment to take

seriously the causal processes responsible for formation. Second, by revising the

ontology of the developmental system in a way that redefines the system as a

“sum of objects” (p. 291), Griffiths and Gray undermine the constructionist

emphasis on the causal interdependence of developmental influences. Their

framing is apparently intended to answer charges of excessive holism, but, as I



argue in the second part of this section, the holism problem can be (and has been)

answered without compromising the constructionist insistence on the context-

dependence of developmental and evolutionary causes.

Inheritance, replication, and ambiguity

Orthodox treatments of inheritance are equivocal. The word inheritance is

used interchangeably to refer to the causes of intergenerational resemblances and

to the resemblances themselves. Offspring may be said simply to inherit traits, or

they may be said to inherit the factors that cause traits to be reproduced. The word

replication is equally equivocal. It came into use in biology to describe the

process by which the DNA molecule is copied, but the word can also be used to

refer to any situation where like produces like (Hull & Wilkins, 2008).

Replication, in the latter sense, corresponds to the inheritance of traits.

Phenotypes such as thickness of fur or relative speediness are said to be inherited

or replicated in this everyday sense. This is consistent with the literal meaning of

inherit with respect to the bequeathing of property or titles (or bodily maladies)

from generation to generation. With genetics, the technical meaning of biological

heredity was refined to indicate the transmission of genes. In this sense, offspring

can be said to inherit a recessive allele with or without phenotypic effects. Within

the gene-centered explanatory framework, the ambiguity between the replication

of a phenotype and the replication of the “gene for” that phenotype is not so

crucial because of the robust informational properties attributed to the genes.

While genes replicate literally, the replication of phenotypes is considered an



inevitable consequence of special formative powers of the genes. In other words,

since genes encode traits, the inheritance or replication of traits is explained by

the inheritance or replication of genes.29 Although there is a technical distinction

between gene transmission and trait transmission, it is not a distinction that makes

a difference to most gene-centric theorists, most of the time (or to the ERTers).

For a constructionist approach, on the other hand, a clear distinction

between the replication of traits and the replication of developmental resources

responsible for recurring traits is crucial precisely because the latter do not have a

privileged causal role. The conflation of these two senses of replication is a

principal article in the developmentalist indictment of transmission genetics, since

it is the presumption of trait transmission that elides developmental questions. As

I have said, heredity, from a constructionist standpoint, is a question (see e.g.,

Oyama, 2000b, p. 71). Griffiths and Gray (1994) understand this perfectly well,

but their constructionist intentions are undermined by their extended inheritance

model because of the emphasis the latter places on units and mechanisms of

inheritance. Like genetics, extended inheritance posits a class of factors, which

explain inherited developmental outcomes and are themselves inherited.

29 For ERT, this framework is simply expanded to include structures other than genes. Nests replicate by being good nests for their builders, allowing their builders to produce offspring who inherit the same nest-building disposition, so that the same nest structures reappear (see Chapter 6).

I have no quarrel with Griffiths and Gray’s (1994) suggestion that we must

include more than genes in our explanations of intergenerational stability. And

their identification of particular examples, such as cytoplasmic factors, DNA

methylation, etc. is also reasonable. After all, the identification of particular

causal factors would presumably be a part of any constructionist effort to explain

particular intergenerational similarities. The problem is that extended inheritance

implies that the factors responsible for inheritance constitute a general class of

causes. Moreover, Griffiths and Gray do not shy away from this implication but

actively embrace it, adopting the term “units of inheritance” for this class of

factors (see e.g., 2004). By preserving the conventional units of inheritance

category, the extended inheritance model, at best, leaves us stranded in familiar

rhetorical territory, trying to identify resources whose reliable presence is

responsible for lineage-typical traits, even as we deny that these resources play a

privileged causal role in development. At worst, the vital point that these units are

not causally special is simply too subtle for anyone but the constructionist “choir”

to integrate in any meaningful way.

The ambiguity between the causes and products of inheritance, between

the causal factors that are supposedly inherited and the intergenerational

resemblances that these factors are meant to help explain, is further exacerbated

by Griffiths and Gray’s (1994, 1997) engagement in the rhetoric of replication.

As I said, while the distinction is a footnote for gene-centric theories because of

the special causal powers granted to genes, for a constructionist view, the

distinction is crucial. Developmental outcomes require developmental

explanations, not just historical ones. As I have also indicated, some of what

Griffiths and Gray classify as replicators, such as cytoplasmic factors and

intrauterine conditions, are relatively unproblematic, since they are immediate

products of the parental reproductive process. Host imprinting and other

relationships with nominally independent features of the world, on the other hand,

are among the aspects of the life cycle that must actually be constructed. Griffiths

and Gray, nevertheless, explicitly reject the distinction. When they say they

“allow the status of ‘replicator’ to anything that is reliably replicated in

development” (1994, p. 300), they effectively obliterate the distinction between

developmental inputs and developmental outcomes.

I imagine that Griffiths and Gray would respond to this criticism by

pointing out that the distinction between developmental inputs and developmental

outcomes is precisely the distinction between replicators and interactors, which

they insist DST has no use for (1994, p. 298). I have to agree, and, moreover, I

accept that there may be no principled way to count cytoplasmic factors as

inherited while ruling out reliably replicated outcomes such as hands and beaver

dams. However, I also agree with Godfrey-Smith (2000b) when he suggests that

Griffiths and Gray push the replicator concept to the brink of collapse (p. 9). The

bottom line, I think, is that the elimination of the replicator/interactor distinction

is the right place to start, but we should end up not with an all-replicator

representation of evolution but with an all-interactor one. An all-interactor view, by



deemphasizing inheritance, would avoid giving the impression of prejudging the

causes of intergenerational resemblance.

Parts, wholes, and context-dependence

As I explain in the previous chapter and at the start of this section,

Griffiths and Gray (1994) propose a two-level ontology that recasts the

developmental system as a “sum of objects” (p. 291). I believe that this move

conflicts with the original definition of the developmental system, as a mobile set

of dynamic and reciprocally interdependent interactants that are specifiable only

with reference to a particular system at a particular moment. In Oyama’s (2000a)

words, “what counts as a developmental interactant, and what aspects of it are

relevant, depends on others; the constitution of the system is defined in

interaction” (p. 341). Indeed, Gray (1992) has made essentially the same point,

writing that “internal and external factors are co-defining and co-constructing” (p.

176). Griffiths and Gray’s ontological reframing of the developmental system in

terms of processes and objects is ostensibly intended to clarify the role of

persistent environmental features in evolutionary explanations by distinguishing

the objective feature (e.g., sunlight) from the interactions into which it enters.

This is also supposed to help resolve the boundary problem by focusing attention

on developmental processes as units of evolution. In addition, according to

Hendrikse (2006), Griffiths and Gray’s revised ontology inoculates them against

allegations of unmanageable holism.

In discussing the debate between Griffiths and Gray and Sterelny et al. in

the previous chapter, I mentioned the latter group’s complaints about DST’s

apparent holism and noted the existence of an older and more far-reaching

critique with which the ERT worry overlaps. This more general allegation of

holism is related to DST’s emphasis on causal interdependence, or what Sterelny

et al. (1996) call the “intermeshing of causal connections” (p. 382). The worry

that there is something fundamentally obscure about DST’s approach to causation

has also been expressed by Kitcher (2001) and Schaffner (1998). The upshot of

these worries, according to Hendrikse (2006), is that DST may be an unworkable

approach to scientific research unless it can be shown to support an atomistic

conception of causality.

Ultimately, Hendrikse (2006) claims that DST is able to cope with the

holism problem, but he relies, for this conclusion, on Griffiths and Gray’s two-

level ontology. In other words, it is not DST, in general, that Hendrikse ends up

defending, but the conception resulting from Griffiths and Gray’s ontological

revision. As should be evident by now, I am not satisfied with the Griffiths and

Gray’s ontological revision because it redefines the developmental system as a

collection of objective resources rather than system-dependent interactants.

Moreover, while it is certainly important for DST proponents to take their critics

seriously, I do not agree with Hendrikse that the critics’ worries about holism are

cause for genuine concern about the fundamental conceptual structure of DST. I

argue, rather, that what is actually needed is a proper appreciation for systems

thinking (see esp. Oyama, 2001). Therefore, I also disagree that DST poses a

threat to scientific explanation. I concede that if science is equated with the idea

that explanation just is the decomposition of complex systems into constituent

entities, which are assumed to prefigure the whole from which they are derived,

then systems thinking is a threat. However, it has always been one of the principal

aims of the constructionist challenge to overcome precisely this sort of naïve

Cartesian atomism.

Notwithstanding my disagreement with his conclusions, Hendrikse (2006)

provides a lucid discussion of the fundamental issues at stake in the holism

controversy. To begin with, the real crux of DST’s alleged holism problem,

according to Hendrikse, is what Schaffner (1998) calls the problem of separability

of causes. DST, says Hendrikse, “appears to be motivated by a general thesis

about causation, one that insists that no factor can be attributed a separate

influence on [the] phenotype” (p. 94). DST’s emphasis on the radical context-

dependence of developmental influences is perceived as dangerous because,

Hendrikse says, without the ability to disentangle the causal web into distinct,

atomistic causal factors, ordinary scientific explanation is rendered impossible.

An additional epistemological problem raised by radical context-dependence,

according to Hendrikse, is that it undercuts what he calls “knowledge

exportability.” He writes, “the ability to export knowledge from one context of

inquiry to another is at the heart of scientific prediction and explanation. What is

destructive about holism is that it is incompatible with the exportation of

knowledge” (p. 97).



After considering the matter carefully, Hendrikse (2006) concludes that

DST is innocent of holism, but, apparently, only because of the way context-

dependence has been dealt with by Griffiths and Gray. According to Hendrikse,

there are two possible descriptions of context-dependence: “a) the contribution or

influence of a factor is determined by context and b) the outcome associated with

a factor is determined by context” (p. 101). While option (a) definitely amounts to

holism, with option (b), it is possible to recognize the context-dependence of

certain outcomes while maintaining a “commitment to an underlying stability that

justifies our practices of explanation and prediction” (2006, p. 101). Fortunately,

according to Hendrikse, it turns out that DST’s notion of context-dependence is

consistent with (b). As Hendrikse observes, in their response to Schaffner (1998),

Griffiths and Knight (1998) explicitly align DST with option (b) when they write,

“the point of indivisibility [of causes, as it is espoused by DST] is that the effects

[italics added] of all causal factors are context dependent” (p. 257). This subtle

shift, according to Hendrikse, is what steers DST back from the brink of obscurity

and shows that, indeed, “DST is atomistic in the sense that it takes developmental

interactants to have stable separable influences” (p. 90).

The relocation of context-dependence from the causes to the effects of

developmental interactions still leaves the other problem identified by Hendrikse

(2006), which is that highly context-dependent knowledge cannot be exported to

other contexts. An additional step is needed, he says, that will allow causal power

to be ascribed to individual entities in a way that makes context-independent explanation

and prediction possible. I will forgo the details of Hendrikse’s technical

treatment of this issue. The basic idea is fairly straightforward. If the influence of

a developmental factor is supposed to be contingent on developmental context,

how can we make any meaningful scientific generalization about it? Put slightly

more concretely, if the discovery of a causal connection between gene G and

phenotype X in system S is never sufficient to justify ascribing some X-ish

quality to G, then the possibility of making general scientific predictions based on

what we know about G is deeply compromised.

I agree with Hendrikse (2006) that the primary issue underlying the

allegations of holism directed at DST is the belief that DST holds causality to be

indivisible. And it seems indisputable that Griffiths and Knight’s (1998) statement

quoted above is intended to deny that causal factors are themselves context-

dependent. I disagree, however, that this is the DST position. First of all, notice

that the principle of context-independent causes depends on the ontological

objectification of developmental resources, which I have already suggested

departs from earlier articulations of DST. Indeed, the claim that context-

dependence pertains to effects rather than causes seems to contradict Oyama’s

description of the developmental system as a reciprocally interdependent, mobile

assemblage of interactants, which both materially affect and define relevant

aspects of one another. In her seminal work (1985) and throughout her later

writings, Oyama has consistently described the developmental system in a way

that emphasizes the context-dependence of its constituent entities in terms, not of



their effects, but of their causal contributions. As she writes, “these interactants

define, constrain, and influence each other as interactants, for any factor’s role in

the system depends on its relations with the others” (Oyama, 2006, p. 55). This

may seem like a fairly subtle point, but reciprocal causality is a crucial

consequence of systems thinking. As I said, I disagree that systems thinking, by

calling atomistic reductionism into question, threatens scientific explanation.

To put it bluntly, I believe that critics misunderstand, on a very basic level,

what DST is actually claiming about causality. This confusion is clearly expressed

in Kitcher’s (2001) attribution to DST of the belief that “any kind of separation

out of causal factors does violence to the causal complexities of development” (p.

404). The only violence being done here is to the DST position. What DST

actually claims is simply that it is illegitimate to divide up the causal contributions

to development a priori. This is why Griffiths and Gray (2001) are explicit in

rejecting a metaphysical distinction between DNA and other developmental

resources (p. 195). No DST proponent seriously suggests that causal contributions

to development cannot be attributed to actual events in real material systems. The

whole point of taking development seriously is to attend to actual events rather

than invoking a transcendental allocation of causal responsibility. Nor does DST

deny the efficacy of framing research questions in a way that makes distinctions

about the causal roles of developmental factors, including genetic ones (Griffiths

& Gray, 2005, p. 420). As Oyama (2001) writes, “it is the neglect of mutual

influence, and the unprincipled separating out of genes as controlling, instructive



agents, that DST resists, not the conventional analyses of contributing factors or

the possibility of causal influence” (p. 182). Indivisibility is therefore not a

problem for DST because the only causal divisions that DST objects to are

metaphysical ones.

There is still the issue that Hendrikse (2006) identifies as the problem of

exporting context-dependent knowledge. Perhaps I’m missing something here, but

I am not quite able to see what this problem has to do with DST. It seems to me

that this concern with exportability is simply another way of talking about the

classic problem of induction. When and to what extent is generalization

warranted? This is the underlying issue at stake in Griffiths and Gray’s (1994)

response to Sterelny and Kitcher’s (1988) defense of the gene for locution.

Sterelny and Kitcher claim that a gene G can be said to be “for” trait X if a change

in G is associated, statistically, with a change in the appearance of X in the

phenotype, given a set of “relevant environments” (p. 348). Griffiths and Gray

counter that this will not work because, on a purely statistical analysis of acorn

genomes in relevant environments, we are forced to conclude that all acorn genes

are “for” rotting because that will be the fate of the vast majority of genetic

changes (p. 283). As Griffiths and Gray’s reductio ad absurdum demonstrates,

Sterelny and Kitcher err in presupposing a very particular subset of relevant

environments—namely, those in which the acorns germinate—and then use that

unacknowledged assumption to justify attributing special context-independent

properties to the acorn’s genes, thereby generalizing beyond what is warranted.



Griffiths and Gray’s response does not hold gene selectionism to some

unattainable epistemological standard; it simply applies to the gene the same

standard that should be applied to any other factor alleged to play a general causal

role in development.

Evolution without Inheritance

With due awareness of the provocative nature of this section’s title, I remind

the reader that I am using the word inheritance specifically in the sense entailed

by the phrase extended inheritance, not as a general term for intergenerational

stability and resemblance. I take DST’s parity-of-reasoning arguments about

inheritance seriously, but I am led to reach conclusions that are the precise

opposite (at least rhetorically) of those drawn by advocates of extended

inheritance. As I have shown, the discourse on extended inheritance has produced

much wrangling about what should and should not count as a unit or mechanism

of inheritance. Yet, it seems to me that one of the principal points of applying

parity of reasoning to inheritance is to direct attention away from what is inherited

and toward how the reconstruction of form actually takes place in particular cases.

I definitely agree that intergenerational stability typically depends on the

availability of a wide range of interactants, all of which must be invoked to

explain particular developmental outcomes. What I do not agree with is that we

can abstract a subset of them into a special category on the assumption that this

subset, being reliably present, can therefore be considered to have a special role in

explaining reliable developmental outcomes. This isn’t to deny the potential for

individual factors to play a role in individual developmental explanations of

resemblance, but simply to suggest that the strategy of lumping these factors

together as a class is antithetical to the basic constructionist goal of taking

developmental questions seriously.

It should be clear that the central aim of the architects of the inheritance

paradigm, to identify the single mechanism responsible for all intergenerational

similarity, has been quietly laid to rest.30 As the early chapters of this work show,

the consolidation and reification of heredity as a structured causal category was to

some extent a historically contingent development. At any rate, it is now

becoming evident that no single causal mechanism is adequate to cover the whole

range of phenomena traditionally identified with heredity. Extended inheritance

theorists take this to justify the rejection of a single privileged unit of inheritance.

I am simply suggesting that inheritance, as a distinct category, should be rejected

because it perpetuates the impression that “inherited” phenotypes must be

explained in terms of inherited factors.

I advocate a slightly different view, which I believe is more consistent

with the spirit of the developmental systems approach, especially as articulated by

Oyama (1985, 2000b, 2001). In order to take development seriously, I suggest, we

must develop models of biological stability and change that do not assume that

the reliability of particular developmental outcomes can be explained by the

30 I say quietly because it seems that this news has reached very few non-specialists, thus far.

reliability of particular developmental influences. In place of the explicit appeal to

inheritance, I argue that the history and diversity of life should be conceived in

terms of constructive interactions, and the integrated, interactive networks of

regulation that are both cause and consequence of those interactions, across all

scales of biological phenomena.

Taking construction seriously

Central to both Lewontin’s construction metaphor and Oyama’s

constructive interactionism is a commitment to a view of causality as thoroughly

reciprocal and interdependent. The causes involved in development and evolution,

on this view, do not preexist their effects, but must be treated as themselves

products of constructive or dialectical interactions. This recognition is crucial for

the genuine integration of development and evolution because their continued

segregation relies on a rhetorical strategy in which the explanations in each

domain rely on preexisting causes provided by the other. As I said, ontogeny

relies on the formative causes “designed” by evolution, while evolution relies on

the stable variants produced by ontogeny. Yet, the constructive interactions

through which developmental systems assemble themselves are, necessarily, the

same dynamics that produce the long-term stability and change responsible for

evolution. In this section, I attempt to affirm the constructionist consequences of

the developmental systems perspective by shifting the focus away from extended

inheritance and its units. I reemphasize and explicate what I take to be an

authentically systems-theoretic or dialectical ontology of the developmental



system, which contrasts markedly with Griffiths and Gray’s (1994) description of

the system as a “sum of objects.”

Let me begin with a discussion of Lewontin’s redescription of evolution as

the mutual construction of organism and environment. First of all, as I discussed

in Chapter 5, Lewontin (1982, 1983b) calls attention to the way in which

orthodox evolutionary theory relies on the metaphor of adaptation, which

construes the functional characteristics of individual organisms as products of the

trial and error processes of a population in a preexisting environment. This view

construes the environment more or less objectively, as a general background of

problems and opportunities. There are reliable food and water sources, prey to be

sought, predators to be avoided, and climatic patterns with which the population

must cope. Challenges arise due to factors that are independent of the population.

Because some members of any local population will generally be more successful

than others and therefore able to leave more offspring, and because their offspring

will also tend to be more successful, gradually, a population will tend to become

better adapted to the environment.

As I have attempted to convey throughout the last few chapters, the

assumption of an independent, objective environment, specifiable without

reference to a situated organism, with a unique and contingent history, is

profoundly inadequate. Yet, as I will show, it is precisely this preformationist

assumption of preexisting environmental causes that sustains both the adaptation

metaphor and the orthodox conception of inheritance. In contrast to the



conventional treatment of the environment as a set of preexisting physical

conditions, the constructionist approach emphasizes the functional environment,

which is specified by each species, through its typical life activities (over

ontogenetic and phylogenetic time). Lewontin (1983b) can tell that stones in his

garden are part of the environment of a thrush because the thrushes use stones to

break open snails. For a woodpecker, meanwhile, stones do not seem to be

particularly significant, while trees obviously loom large. The concept of an

ecological niche was originally developed to capture this notion of a functional

environment. However, as Lewontin (2000) notes, the niche metaphor is not ideal

because it suggests a preexisting slot waiting to be filled by a properly designed

species.

It is tempting to take, from the simplistic example of the thrush and the

woodpecker, the rather superficial moral that we should consider the distinct

interests of the various animal species, as if the woodpecker is simply ignoring

stones in favor of trees. Although this is true at the level of physical interactions,

the level of analysis appropriate to ecological interactions is functional. In this

sense, rocks, trees, and other organisms derive their existence from functionally

significant relationships, and the patterns of interaction through which those

relationships are realized. As Lewontin (1982) describes this functional ontology-

epistemology, organisms “transduce” physical inputs into signals that have

meaning for them (p. 161). These signals correspond to organism-specific,

cognitive-perceptual features, which both constitute, and are constituted by, the

organisms’ constructive interactions with the world. In this way, the organism-

referent or functional environment effectively exhausts reality for an organism. It

is, therefore, not the case that organisms simply ignore what is unimportant, but

that what is not important does not exist, in that it has no functional significance,

developmentally, behaviorally, or cognitively.

This functional ontology-epistemology may seem very strange because we

tend to assume an unproblematic correspondence between what we humans

observe and what exists “out there.” A rock is a rock; a tree is a tree. It is

important to keep in mind, however, that we, as humans, also inhabit a functional

environment. The objects that we take for granted, whether rocks and trees or

atoms and supernovas, are products of how and what we perceive. And the

perceptual and cognitive capacities that we take for granted are actually

constructed products of our ontogenetic and phylogenetic histories. In order to

make this point fully explicit, it may be helpful briefly to consider the biological

ontology-epistemology elaborated by Maturana and Varela (1980, 1987). I think

their theory of autopoiesis may help to clarify (or at least provide a fruitful

perspective on) this aspect of the constructionist approach. I am not endorsing their

entire theoretical framework or making any claims about how their theory is

received by other constructionist thinkers. What I am saying is that Maturana and

Varela offer a penetrating analysis, which is worth our attention because it

attempts to work out the ultimate epistemological and ontological consequences

of constructionist biology.

Autopoiesis is a theory of living systems that actually implements Kant’s

(1790/1951) conception of self-organizing beings as natural purposes. Although

Maturana and Varela make no overt appeals to teleology, the autopoietic system

clearly satisfies the conditions set forth by Kant: the parts exist by means of each

other, for the sake of each other and the whole, and they reciprocally produce

each other (1790/1951, p. 220). The paradigm of autopoietic organization is the

living cell, which, as an autopoietic system, is defined as a dynamic network of

molecular interactions. It is called autopoietic because it is literally self-creating;

a cell is a series of transformations that continually produce the components that

constitute it, including the boundary by which its continued coherence is realized.

It is also autonomous, in that it specifies its own organization through its ongoing

self-production. Although the cell provides a relatively accessible example of

autopoietic organization, autopoiesis is intended as a model for living processes at

all scales.

An essential corollary of autopoietic theory is that all the interactions

between an autopoietic system and its surroundings are understood as instances of

cognition. As Maturana and Varela (1987) pithily express it, “all doing is

knowing and all knowing is doing” (p. 27). Their more formal definition of

cognition emphasizes its twofold character; an act of cognition, they write, is “an

effective action, an action that will enable a living being to continue its existence

in a definite environment as it brings forth a world” (pp. 29-30). In addition, in

sharp contrast to conventional descriptions of cognition as information



processing, for autopoietic theory, cognition is essentially identical to ontogeny.

As both entail a history of moment by moment structural change as a result of

interaction between external perturbations and internal dynamics, the same

principles of constructive interaction apply (Maturana & Varela, 1980, p. 74, p.

175).31 For example, with respect to perception, autopoietic theory does not

presuppose a pregiven external world to serve as a source of information for an

organism’s ostensible representations. Perception is not simply the detection and

processing of sense data. Rather, perception entails the production of a coherent,

sensorimotor pattern of experience based on one’s entire history of perceptually-

guided action in lived situations. To use Maturana and Varela’s phrase, perception

brings forth a world. Since this phrase may be misinterpreted in a way that

recapitulates the form-matter, mind-body dualities of conventional thought, allow

me to emphasize that the word world here does not imply some sort of

internalized mental representation. Autopoietic ontology-epistemology entails the

inseparability of mind and body, of thought and action, of appearance and reality,

of situated organism and lived world. The richness and complexity of the world

that is brought forth by an organism is inseparable from the richness and

complexity of the interactions by which it produces and sustains that world. The

environment is not a province of preformed, determinate perceptual qualities,

waiting simply to be read off. Perceptible features are constructed perceptual

31 Note that internal and external should be understood as relative to a boundary distinguished by an observer.



events; they are formed through and as patterns of embodied sensorimotor

interaction.

Regarding color vision, for example, Maturana and Varela (1987) write

that "we do not see the 'colors' of the world; we live our chromatic space" (p. 23). It is well known that the spectrum of light that is visible to an individual is the

product of a functional relationship, and, as such, varies substantially between

animal species. Relative to what humans perceive, for example, the range of light

visible to bees is shifted toward the ultraviolet portion of the spectrum. More

bizarrely, according to Varela, Thompson, and Rosch (1991), the dimensionality

of color space also varies. Squirrels and rabbits perceive color in only two

dimensions, while goldfish and diurnal birds inhabit a four-dimensional chromatic

space (p. 181). While dichromatic vision might be imagined as analogous to black

and white, tetrachromatic vision, like a fourth dimension of physical space, is

perhaps impossible for humans even to imagine. Thus, phylogenetic and

ontogenetic histories of sensorimotor interaction produce diverse morphologies

and modes of perception, which enable organisms to inhabit diverse worlds, even

as they share physical space.

The kinship between the developmental systems approach and autopoiesis

theory should be evident. The particular affinities I wish to emphasize, at the risk

of putting too fine a point on it, are that both reject the appeal to preexisting

forms, causes, or “unconditioned antecedents”32 in explaining the construction of

form, and both recognize a strong parallel between ontogeny and cognition.

Indeed, ontogeny and cognition are understood by both approaches as complex,

systemic phenomena that reciprocally determine both their domains of causal

significance and the precise nature of their sensitivities, enabling them to produce

coherent, semi-stable outcomes despite unstable conditions. This theme is

apparent in the parallel between the perceptual phenomenon of approximate color

constancy and the ontogenetic phenomenon of developmental canalization. In

approximate color constancy, the visual system is able to produce a stable color

experience despite major fluctuations in the spectral composition of the light reaching the eye from a given object.

This is a routine experience. When you carry an orange from an artificially lit

supermarket out into the sunlit parking lot, the frequency of the light waves

reaching your retina changes dramatically, but the orange appears the same hue of

orange. The interactive dynamics of the visual system are able to orchestrate a

coherent color experience, which is only partly dependent on the physical

properties of the surround. Vision is thus not a window onto a preexistent

external reality; it is an emergent phenomenon, in which the perceptual features of

the world are reliably produced by a network of interactive processes.
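To make the relational character of color constancy concrete, here is a minimal sketch, in Python, of von Kries-style chromatic adaptation, a deliberately simplified textbook model. It is offered only as an illustration and is not drawn from Maturana and Varela; the function names and numerical values are hypothetical. Its narrow point is that a stable "perceived" value can be a product of how the system relates a local signal to its global context, rather than a direct readout of the stimulus.

# Illustrative sketch only: von Kries-style chromatic adaptation, a
# simplified model of approximate color constancy (not Maturana and
# Varela's account). The "adapted" value is a relation between the local
# signal and the prevailing light, not a readout of the raw input.
import numpy as np

def cone_responses(reflectance, illuminant):
    # The raw sensory signal is the product of surface reflectance and
    # the light falling on the surface; it changes whenever the light does.
    return reflectance * illuminant

def von_kries_adapt(responses, illuminant_estimate):
    # Scale each channel by the system's (here, given) estimate of the
    # illuminant; stability is produced by this interaction.
    return responses / illuminant_estimate

surface = np.array([0.9, 0.6, 0.2])      # a hypothetical "orange" reflectance (L, M, S)
store_light = np.array([1.0, 1.0, 1.4])  # bluish artificial light
sunlight = np.array([1.3, 1.1, 0.9])     # warmer daylight

for light in (store_light, sunlight):
    raw = cone_responses(surface, light)
    adapted = von_kries_adapt(raw, light)
    print("raw:", np.round(raw, 2), "adapted:", np.round(adapted, 2))
# The raw responses differ sharply under the two lights, but the adapted
# values coincide: constancy here is an achievement of the processing,
# not a property waiting in the stimulus.

Real visual systems, of course, estimate the illuminant from the scene itself and do far more than channel-wise scaling; the sketch merely dramatizes the dependence of the stable outcome on relational processing.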

32 Merleau-Ponty (1942/1963) uses this picturesque phrase in describing the inadequacy of the stimulus concept of classical S-R behaviorism (p. 9).



Similarly, with developmental canalization, a developmental system is

able to generate reliable ontogenetic outcomes despite significant variation in

genetic and epigenetic factors. Although some differences in developmental

inputs can and do result in phenotypic differences, many developmental pathways

are canalized, meaning that the system is sufficiently robust to buffer significant

fluctuations in developmental inputs. In both scenarios, the relatively stable

outcomes are produced despite variable inputs because the effective inputs and

their causal significance are determined by the systemic interactions that

constitute the processes.

Autopoiesis theory, DST, and dialectical biology all emphasize the

interdependent existence of systemic interactants, and all, in my view, have

radical ontological and epistemological consequences. Although it is convenient,

and often necessary, to treat the elements involved in ontogeny and cognition as

determinate objects,33 it is all too easy to forget that we are not referring to things

in themselves, but to the parts of an integrated system. Parts, as Levins and

Lewontin (1985) explain, “have no prior independent existence as parts” (p. 273).

The features that define the parts as parts are not intrinsic to them, but come into

being within the pattern of relationships from which we subsequently abstract

them. Meanwhile, the inheritance paradigm, across all the life sciences, from

biology, medicine, and psychiatry, to sociology, anthropology, and eugenics, is

33 Maturana attempts to use system-referent language in discussing autopoietic systems, but most people find it frustratingly abstruse.



defined precisely by the conviction that such context-independent sources of

preexistent form simply must exist in order to explain intergenerational

resemblance. The only remedy to this legacy, I believe, is a thoroughgoing

commitment to constructionist explanations that reject the appeal to preexisting,

context-independent causes and remain exceedingly suspicious of a priori causal

categories.

Evolving interactive networks

I would like to conclude this chapter by stating, in positive terms, how I

think the problems of development and evolution might be integrated without the

appeal to inheritance as a causal or explanatory principle. I do not attempt to

present detailed hypotheses with identifiable empirical predictions. Nor do I claim

that this approach is able to solve every problem that might be raised against

constructionist approaches. This discussion is offered, rather, as a general

provocation and an attempt merely to make explicit what I take to be the most

radical implications of replacing the paradigm of inheritance with a paradigm

based on interactive networks.

A genuinely constructionist biology that brings the focus back to the

general problem of how complex organic form is produced and reproduced

requires a conceptual reintegration of development and evolution. The inheritance

paradigm consistently frustrates this reintegration for the simple reason that it

presupposes their segregation. Indeed, as I mentioned, the segregation of

development and evolution into distinct conceptual and disciplinary domains was

fully realized with the founding of transmission genetics in the early twentieth

century. It continues to be reinforced by the metaphors of development and

adaptation, which enable each domain to evade the difficult problems of

formation by resorting to concepts imported from the other. The metaphor of

development implies the expression of a pregiven internal source of form and

posits genetic instructions to play this role. The adaptation metaphor, meanwhile,

not only helps itself to the forms produced by internal developmental causes, but

it also presupposes structured external causes. The phenotypic forms given by

development constitute variants that are tried out to solve the problems posed by

the pregiven environment. The internal causes responsible for building the

successful variants are then transmitted to the next generation, and the cycle is

complete. These metaphors work together with the inheritance paradigm to

sustain the segregation of development and evolution and thereby to defer the

very problems of formation that would undermine this preformationist

framework. In place of the paradigm of inheritance, therefore, I suggest that the

developmental systems approach to evolution can be more clearly understood if

placed within a framework based on an interactive network paradigm. Complex

form in living systems is always generated by hierarchically structured interactive

networks, not only in the developing embryo, but also at the more encompassing

spatial and temporal scales associated with ecological succession and evolution.

In the discussion that follows, I reconsider some of the biological

phenomena that are, for the inheritance paradigm, explained in terms of special

transmitted causal factors and suggest how these phenomena might be reframed in

terms of an interactive network paradigm. I suggest, to begin with, that

intergenerational stability of form might be better understood in terms of the

tendency of complex networks to sustain and reliably regenerate complex patterns

of organization. No repository of information or central point of control is

required because “knowledge” of a system’s global structure is embodied in the

patterns of relationships and interaction realized by the system over concrete time.

This leads to my second point, which is that a network paradigm entails a different

conception of control and regulation. Regulation is no longer a property of

preexisting entities, inherited or environmental, but a global property arising as a

consequence of the interdependencies that characterize the network. Third, I argue

that biological integration is handled more parsimoniously by a network

paradigm. The mutual participation of diverse entities, say, in multicellular

organisms, does not require special mechanisms to block cheating because

reciprocity is already assured by the contingently irreversible interdependencies

inherent in the system. Cheating is simply not an option. Fourth, I explain how the

network model handles the traditional problem of adaptation. I argue that the standard

account of cumulative selection can be reframed as cumulative interdependence,

which permits the issue of formation to remain central. This sets the stage for my

principal claim, which is that the network paradigm facilitates the integration of

development and evolution by directing our attention to the dynamic processes of

formation and transformation through which structure and function are



reciprocally constructed across multiple spatial and temporal scales.

A good place to begin this shift away from the inheritance paradigm is

perhaps with an alternative account of intergenerational stability. Although the

relative constancy of species is not, strictly speaking, explained by transmission

genetics, which only deals with intraspecies differences, it is generally assumed

that species characteristics must be genetic. Nevertheless, it turns out that species

stability does not present a serious difficulty for the interactive network paradigm.

Recent research by Siegal and Bergman (2002) on canalized development

demonstrates how networks can produce reliable formation, simply as a

consequence of their interactive dynamics.

As explained above, canalization refers to the capability of a

developmental system to reliably generate certain developmental trajectories in

the face of environmental and genetic fluctuations. Canalization has traditionally

been explained in terms of stabilizing selection. The appeal to selection relies on

the inheritance paradigm by taking transmission for granted and deferring the

hard problem of explaining the developmental mechanisms through which

canalized outcomes are realized. According to Siegal and Bergman’s (2002)

findings, however, canalization may actually arise independently of stabilizing

selection, as a direct consequence of the complex interconnectedness of

developmental-genetic processes. In other words, responsibility for the stability

and reliability of canalized developmental processes is not to be found in DNA

sequences, but in the network structure of genetic regulatory interactions. The



more complex and highly interconnected the network, the more stable will be its

overall organization.
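For readers who want a concrete picture of this kind of result, the following is a minimal sketch, in Python, loosely in the spirit of the Wagner-style regulatory-network model employed by Siegal and Bergman (2002). It is not their implementation, and the parameter choices are hypothetical. Expression states are iterated toward a settled pattern, and that pattern is compared before and after perturbing the starting state, as a crude index of how strongly the network's own dynamics buffer variation in its inputs.

# Minimal sketch, loosely in the spirit of the Wagner-style network model
# used by Siegal and Bergman (2002); NOT their implementation, and the
# parameters are hypothetical. The question it poses: how much does the
# settled expression pattern shift when the starting conditions are perturbed?
import numpy as np

rng = np.random.default_rng(0)

def settle(W, s0, steps=200):
    # Iterate s(t+1) = tanh(W s(t)) and return the state after many steps.
    s = s0.copy()
    for _ in range(steps):
        s = np.tanh(W @ s)
    return s

n = 10
# A regulatory matrix with a given density of interconnection.
W = rng.normal(0.0, 1.0, (n, n)) * (rng.random((n, n)) < 0.75)
s0 = rng.choice([-1.0, 1.0], n)          # initial expression state

base = settle(W, s0)
noisy_start = s0 * rng.choice([1.0, -1.0], n, p=[0.9, 0.1])  # flip roughly 10% of inputs
shifted = settle(W, noisy_start)

print("mean shift in settled state:", np.abs(base - shifted).mean())
# A small shift indicates a buffered (canalized) trajectory. Whether and how
# strongly buffering occurs depends on the pattern of interconnection as a
# whole, not on any dedicated stabilizing element.

Comparing networks of different densities, or networks merely required to reach equilibrium, as in Siegal and Bergman's study, is where the substantive claims about canalization would be tested; the sketch only shows what "stability as a network property" means operationally.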

Siegal and Bergman’s research is restricted to genetic regulatory networks,

but their findings exemplify a general feature of networks. From the perspective

of the network paradigm, the genetic networks that produce developmental

stability are not qualitatively distinct from, or independent of, the more inclusive

networks in which they are embedded. Indeed, from this standpoint, the network

paradigm does not need to be seen as a rival to the explanation based on

stabilizing selection. Since stability is a naturally occurring property of highly

interconnected networks, in the same way that ontogenetic stability is maintained

by genetic networks, stability at more inclusive scales may be engendered by the

interactive dynamics of larger organismic or ecological networks, including those

that are ordinarily classified as natural selection. Incidentally, this is not to say

that instability does not also occur naturally and inevitably. Instability, after all, is

a precondition for both evolution and development.

This brings me to the second key area of biology that can benefit from a

reframing in terms of the interactive network paradigm: genetic regulation. Based

on the above description, the reader may expect that genetic regulation would

have always been understood in network terms. For most of its history, however,

genetic regulation has been framed primarily in terms of the inheritance paradigm.

As we saw in Chapter 5, although Jacob and Monod (1961) initially described

regulatory genes in terms of cybernetic mechanisms, they ultimately interpreted



these mechanisms according to the inheritance paradigm, as components in the

genetic program for development. This framing of genetic regulation is based on

the idea that DNA encodes hereditary information that controls the developmental

process. Regulatory genes, according to this control model, have the power to

direct the expression of other genes. With the additional discovery that regulatory

genes are also regulated, the metaphor of control attained a fully militaristic

interpretation, according to which control is hierarchical and culminates in a

master regulator (Gilbert, 2000, p. 186).

As Gilbert (2000) points out, however, these hierarchical models are now

breaking down and being replaced by network models, in which control is

understood as interactive and distributed. It turns out that even so-called master

regulators are regulated, but not by some yet higher-ranking controller. On the

contrary, the dynamics of control turn out to be reciprocal and circular.

“Regulators,” according to Gilbert, “must be regulated by factors that are

themselves both regulated and regulators” (p. 187). Moreover, lest this talk of

regulators be misunderstood, allow me to make the consequences of this revision

explicit. The network perspective has no need for specific genes that have

regulation as their prescribed function. Indeed, on this view, it is not necessarily

meaningful to identify controlling or regulating functions with particular DNA

sequences, since such functions depend essentially on the history and state of the

entire system. As Neumann-Held (2001) makes clear, the very idea of genes as

stable entities began to disintegrate with the discovery of phenomena such as



mRNA editing and alternative mRNA splicing (as discussed in Chapter 5). She

argues for a process definition that recognizes the gene as a transient product,

whose function, not to mention its structure, is inseparable from the complex

cellular dynamics that produce it (see also Griffiths & Neumann-Held, 1999).

This understanding of function is axiomatic for the network paradigm. The

existence of a functional entity in a network, be it a promoter gene in a genetic

regulatory network or a predator species in an ecosystem, always depends, both

causally and materially, on the entire system of interactions from which it is

abstracted.
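A toy example may make the circularity of regulation vivid. The following sketch, in Python, describes a hypothetical three-element ring in which each element represses the next, in the general style of well-known synthetic oscillator circuits; the names, equations, and numbers are illustrative and are not drawn from Gilbert or Neumann-Held. Each element is simultaneously regulator and regulated, so there is no point in the loop that could serve as a "master" controller.

# Toy illustration (hypothetical, not from the sources cited): three
# regulatory elements in a ring, each repressing the next. Control here is
# a property of the whole cycle of interactions rather than of any one part.
import numpy as np

def step(levels, strength=3.0, decay=1.0, dt=0.1):
    a, b, c = levels
    # Each element's production is repressed by the element "upstream" of it
    # in the ring; each is also the repressor of the element "downstream."
    da = 1.0 / (1.0 + (strength * c) ** 2) - decay * a
    db = 1.0 / (1.0 + (strength * a) ** 2) - decay * b
    dc = 1.0 / (1.0 + (strength * b) ** 2) - decay * c
    return np.array([a + dt * da, b + dt * db, c + dt * dc])

levels = np.array([0.8, 0.1, 0.1])
for _ in range(300):
    levels = step(levels)

print("levels after relaxation:", np.round(levels, 3))
# Asking which element "controls" the outcome has no stable answer: change
# any one interaction and the dynamics of all three elements change together.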

As a further consequence, the very meanings of control and regulation are

fundamentally altered by the network model. Control, on this view, is not some

special functional attribute that must be present in a system in order for it to

exhibit order. It is simply a description of that order. Regulation is a global

property that necessarily characterizes a relatively stable system. Error-correcting

feedback dynamics do not need to be guided or informed by special structures that

encode the proper state of the system; stable systems are simply those for which

the interacting dynamics are in balance. This perspective is both more general and

more metaphysically parsimonious than the appeal to teleological controllers.

A third key feature of living systems and processes that can be accounted

for elegantly by the interactive network paradigm is biological integration.

Sterelny and Griffiths (1999) point out how integration came to be framed as an

anomaly due to the gene’s-eye-view approach. This not only forced theorists to

doubt the existence of group-level adaptations, which was the original intent of

this perspective; it also helped them to recognize that, even at the level of the

organism, integration cannot simply be taken for granted (p. 75). I would not wish

to deny that the selfish gene perspective has spurred valuable thought and fruitful

research. Notwithstanding that approaches arguable heuristic benefits, the

interactive network paradigm has the contradistinct advantage that it easily

accommodates biological integration without the need to appeal to higher-level

selection or to morally loaded notions of cooperation and altruism. On the

contrary, emergent levels of biological organization can be understood simply as a

natural consequence of the complex interdependencies that accumulate as systems

co-evolve with other systems. There is no need to construe mutualisms as

somehow reducible to competition, or to identify mechanisms by which

replicators are blocked from subverting the interests of the larger system. From

the network perspective, integration, whether of organisms, colonies, or parasite-

host systems, does not necessarily require special adaptationist explanations. Of

course, functional systems can always be described in terms of selection, but the

tendency for integration can be seen as a robust and general phenomenon,

conceived at the same level of description as adaptation.

As Cuvier recognized two centuries ago (albeit with a different

interpretation), living systems at all scales and levels of organization are, by



necessity, always-already functionally integrated (Asma, 1996).34 Whether the

system in question is a developing multicellular embryo or an evolving

ecosystem, the principal alternative to integration is disintegration. It is not that

there are some intrinsic, a priori benefits enjoyed by independent organisms who

choose to enter into cooperative arrangements, an oddly anthropomorphic

conception of integration that, for some reason, often creeps into these

discussions. The real point is that, over ontogenetic and phylogenetic time spans,

recurrent interaction frequently leads to differentiation, which in turn engenders

interdependency. The differentiation of cell types in a multicellular animal and of

functional roles in a social insect colony each entail interdependencies that render

the entire issue of selfishness and altruism beside the point. Moreover, this is not

only a fact about cooperation; it is also a fact about the food chain. Predator-prey

interdependencies are no less instances of biological integration. This view does

not really conflict with explanations of integration that cite outlaw prevention

tactics or evolutionarily stable strategies. It merely reframes integration in such a

way that it is comprehensible and expected, rather than inherently exceptional and

contrary to some basic biological tendency toward self-interest on the smallest

possible scale.

A fourth place that the interactive network paradigm may offer a fresh

perspective on evolution is in the context of traditional adaptationism. To begin

34 Neo-Darwinian theory also conceives of contemporary species as already more or less optimally adapted (Lewontin, 1983b).



with, let me emphasize that the network perspective is consistent with the

principle of cumulative selection typically used to explain adaptation. However,

rather than relying on the transmission of units bearing encoded forms, the

network paradigm emphasizes the contingent irreversibility that results from

differentiation, and treats particular instances of intergenerational resemblance,

appropriately, as developmental puzzles. The other key difference is that, while

orthodox evolutionary theory requires variation to be heritable in a strictly vertical

dimension, the network perspective tolerates substantially more diffuse

connections between generations. Sterelny (2001) argues that if inheritance is

diffuse, meaning that traits can spread horizontally, rather than just being

transmitted from parents to offspring, it will be possible for replicators to defect,

to propagate themselves in a way that benefits their profusion at the expense of

the larger system. However, due to the emphasis on integration described above,

defection is less of a concern for the network paradigm. The integrated parts of a

complex evolving lineage are committed to their role, not by their altruism, but by

their structural, functional, and ontological dependence on the systems in which

they participate. Incidentally, I am not suggesting that outlaw replicators are less

frequent than selfish gene theorists would claim; the problem, after all, has always

been to explain the relative rarity of defection.

Rather than attempting to extrapolate all evolution from the differential

reproductive advantage conferred by discrete variants, with the network

paradigm, the causes of intergenerational stability and change can be conceived in



terms of systemic interaction at multiple scales in a nested hierarchy. On the

smallest scale, variations in physiological aspects of reproduction, such as DNA

and cytoplasm, may produce stable differences in one’s immediate offspring. On

a wider scale, a permanent change in the availability of a food source, through

extinction for example, may affect an entire population. Moving even further out,

global climate change is bound to transform both the forms of species and their

interrelations, throughout the biosphere, though in unpredictable ways. Between

these broad categories there are, of course, many more layers of stabilizing and

destabilizing patterns that contribute and have contributed to the evolution of life.

The orthodox theorist might counter, at this point, that higher-level

changes can be redescribed in terms of lower levels. It is indeed standard to

describe the effects of large-scale environmental changes in terms of natural

selection operating on the gene pools of local populations. For many purposes,

this approach may be adequate, and it is perhaps always descriptively adequate.

Hierarchical selectionists, such as Gould (2001), for example, concede that one

can keep the books of evolution in terms of genetic change. However, if

contingently irreversible climatic or ecological changes produce changes in the

ontogeny of an entire population, it would seem preferable to analyze such events

in terms of the scale at which the salient dynamics are occurring. Moreover, the

idea that ecological changes can affect evolutionary changes by altering

developmental pathways is not an idle speculation; it is a key theme in the new

discipline of evolutionary developmental biology (Hall, 2000).



From the perspective of orthodox theory, it has been objected that, for

cumulative selection to be effective, hereditary transmission must be strictly

vertical, meaning that, with respect to any specific heritable variation, offspring

fitness must be a function of parent fitness (Sterelny, 2001). This would argue

against the evolutionary significance of changes at more inclusive scales. Griffiths

and Gray (2001) respond that whether the inheritance of a variation satisfies the

verticality requirement actually depends on the scale under consideration. That is,

some instances of intergenerational stability and change must be considered in

terms that are more inclusive than parent-offspring transmission. Some changes in

developmental influences will have their effects on the level of families, demes

(p. 202), and, in the case of climate change, all of life (see also Oyama, 1985, p.

125). From a network perspective, of course, it makes no sense to privilege one

scale over others in explaining changes taking place on different scales.

Finally, the interactive network paradigm allows the domains of ontogeny

and phylogeny to be integrated seamlessly, without recourse to the dubious

concept of encoded information. With the conventional inheritance paradigm,

semantic hereditary information is required in order to bridge the independent

explanatory domains of evolution and development. As I discussed, the metaphor

of adaptation relies on preexisting external form, while the development metaphor

relies on preexisting internal form, and semantic information serves as the protean

bridge that links internal and external causes as it preserves the gulf between

them. The network perspective has no need for such conceptual craftiness because

there is no metaphysical distinction between the causal domains of ontogeny and

phylogeny.

The interactive network paradigm reveals a deep unity among the

processes underlying biological constancy, change, and variability. Evolutionary

and developmental processes differ in degree rather than in kind. There are not

two distinct phenomena, one guided by heritable variation and natural selection

and the other by the contingent expression of inherited information. We are

dealing, rather, with countless dynamic networks undergoing transformation

across widely disparate spatial and temporal scales. At one extreme, organisms

are constructed by processes that produce semi-reliable outcomes due primarily to

the high degree of interconnectedness among the interacting influences.35 At the

other extreme, the networks of constructive influences that regulate evolutionary

change are less tightly interconnected and are, to that degree, more buffeted by

contingency. The regulatory dynamics at the evolutionary end of the scale tend to

be treated under the umbrella category of natural selection, but we should not

forget that this category includes a variety of stabilizing and destabilizing

processes. As Oyama (1985) writes, “natural selection, or differential

reproduction, must be understood as an interactive process whose very constraints

and causes emerge as it functions, as they do in a developmental process” (p. 39).

35 The network of repair mechanisms responsible for the functional integrity of DNA would probably fall at the far extreme of stability, reliability and interconnectedness.



Conclusion

This chapter has explored the ambiguous role of inheritance in

evolutionary and developmental theory. I note how the emergence of genetics

provided a solution to the problem of heredity and for a time rendered the

ambiguity moot. It has returned, however, to the exact extent that the appeal to the

gene as the sole unit of inheritance has been called into question. This has reached

its most extreme form in the extended inheritance model of Griffiths and Gray

(1992, 1994, 1997). I argue that this model extends the unit of inheritance and the

replicator concepts so far that they no longer serve a general explanatory purpose.

The question of what is passed between generations essentially collapses.

Moreover, I argue that the pursuit of a single explanation for all intergenerational

stability and change in terms of units and mechanisms of inheritance is

historically contingent and ultimately misguided. Taken as a set of developmental

questions, distinct phenomena of biological stability and change must be given

distinct developmental explanations. Griffiths and Gray are clearly headed in this

direction with their recognition of different types of inheritance mechanisms. I

merely suggest that a thoroughly constructionist approach ought to abandon

inheritance as a class of objects and a causal category. Finally, I argue that we

ought to think about developmental systems as interactive networks. I contend

that the emphasis on networks more authentically captures the radical

implications of the developmental systems approach than does the emphasis on

extended inheritance. The conception of developmental systems as interactive



networks shows that biological stability and change can be accounted for in a way

that integrates evolution and development without recourse to inherited

information or general replication mechanisms, setting the stage for a genuinely

constructionist biology.

Chapter 8: Recapitulation

This dissertation argues that after a long and fruitful florescence, the

preformationist and dualist stage of biological thought is moving into senescence.

The Cartesian alienation of form from matter has facilitated detailed description

of many biological processes, but a new set of problems is now moving to the

forefront. The problems of formation that lie at the intersection of ontogeny and

phylogeny, such as biological integration, functional differentiation, and cognition,

remain essentially untouched, and this lacuna is made all the more apparent by the

successes enjoyed by mechanistic approaches. This work, therefore, is ultimately

less about particular scientific research programs than about the materially

grounded conceptual networks that underpin them. Moreover, I am suggesting

that the conceptual landscape is shifting, that the once firm metaphysical

foundations of mechanistic biology are being destabilized. Just as the Aristotelian

essentialism of the late medieval period gave way to the Cartesian cosmology of

the Scientific Revolution, this latter worldview is now yielding to something new,

the final shape of which we can only guess at. Although the true contours of this

coming conceptual landscape will only be discernible in hindsight, I would

suggest that, for biological theory, this shift is embodied in the constructionist

recognition of concrete temporality and of the dynamic, formative potentiality inherent in the material world.

I attempt to model, throughout this work, a constructionist approach both

to intellectual history and scientific reasoning. I reject the genetic historiography



that seeks the rudiments of the inheritance paradigm among the speculations of

ancient thinkers, asking, for example, whether Aristotle or Hippocrates was more

prescient, from the perspective of modern genetics. As I have attempted to show,

a nuanced reading of history reveals the meaninglessness of such questions. From

the standpoint of the (standard) historiographic approach adopted in this work, the

development of heredity as a recognizable puzzle and a conceptual possibility

depended on a variety of historical transformations that fundamentally

reorganized the physical and social patterns by which early modern Europeans

related to each other and to the natural world. The phenomena of parent-offspring

resemblance and the constancy of species and breeds moved into the foreground

only as real bodies began to be uprooted from the physical and social ties that had

held them in place since time immemorial. The ascent of inheritance as a general

biological phenomenon depended, in vital ways, on a contingent network of

interacting material and ideological influences.

Medieval Europe was characterized by relative stability, both physical and

social. The hierarchies that structured human life from the Vatican to the royal

court to the peasant village constituted their own justification. The cosmos was an

organic whole, where each being had its place and its inherent correspondence to

the whole. Within this milieu, the natural philosopher was led to ponder the true,

deep, inner nature of beings. The fox and the owl were not instances of abstract

classes or imperfect copies of ideal types; they were unique essences, whose

existence expressed something that was both intrinsic and transcendent. The role

of premodern natural history, then, was to describe the essential nature of various

beings, to understand as thoroughly as possible what makes each being what it is.

This essentialism was not based on an unquestioning fidelity to Platonic idealism,

but simply reflected a coherent contemporary cosmology, one which became

unintelligible once the medieval world gave way to the modern.

The transformation of the physical and conceptual landscape of early

modern Europe reinforced, and was reinforced by, the development of a new

metaphysical outlook. This modern worldview was made explicit in the

metaphysics of Descartes and clearly exemplified in the cosmology of Newton.

The world is a clock-like machine in which the motions of passive matter are

governed by fixed and universal laws. This universe described by Descartes and

Newton is not only blind, but essentially dead. Life, mind, and form, to the extent

that they can be considered real, must be imported from outside the universe. For

Descartes and his contemporaries, of course, the supernatural origins of life,

mind, and form were unquestionable. Indeed, throughout the Enlightenment,

natural philosophers continued to rely on a deistic prime mover to provide the

intricate designs needed to explain living beings. The modern worldview, it

should also be noted, did not eliminate essentialism, but merely transformed it.

The nature of each being, whether the soul of the individual, or the structure of

the animal, is no longer a microcosmic expression of the whole, but an

independent, preexisting form, which, for earlier thinkers, was derived from the

mind of God, but, more recently, is encoded in the DNA molecule.



By the late nineteenth century, the epistemological standards were

shifting, and it was becoming less permissible, in scientific circles, to rely on

supernatural sources for the design of living beings. Darwin provided the means

to negotiate this impasse. There was no need for a Designer because of the trial

and error logic expressed in the endless struggle for existence among living

beings. This was only half the solution, of course, since Darwin needed to assume

the availability of a mechanism of inheritance that would cause offspring to

resemble their parents more than other conspecifics. Beginning with Darwin

himself, a number of late nineteenth and early twentieth century biologists came

to see hereditary transmission as a central problem. It was through this

transformation of biology from a science of form to a science of the transmission

of form, that the overt dualism and preformationism of early modern generation

theory was replaced with the tacit dualism and preformationism of the inheritance

paradigm.

The consolidation of the inheritance paradigm around the gene established

the foundation on which much of modern developmental and evolutionary

biology is based. As I show in the second half of the dissertation, however, this

framework embodies significant contradictions and limitations, and these are

being exposed as a direct consequence of the advances made possible by the

framework itself. The original separation of internal and external causes had

opened an epistemological space in which scientists could elaborate mechanistic

explanations of biological processes on the molecular scale. But then they forgot

that the machine is a metaphor and the gene a methodological artifact.

We are now coming face to face with the limits of the mechanistic

approach, with its tacit dualism and preformationism, and find ourselves caught in

a conceptual double-bind of sorts. Many sense that the machine analogy, even in

its updated, information-age formulation, is inadequate, but the only apparent

alternatives are untenable. No one wants to be accused of vitalism or wooly

holism, so we continue to put our faith in the promise that more detailed

knowledge of molecular mechanisms will somehow resolve the hard problems of

formation. We know better than to attribute form exclusively to the genes or the

environment, but because those seem to be the only choices, we end up settling

for a superficial genes + environment interactionism that continues tacitly to rely

on preexisting form and leaves the original dichotomy untouched.

As has been argued convincingly by both developmental systems theorists

and dialectical biologists, the preformationism inherent in developmental dualism

entails a problematic biological determinism that no amount of hedging can

mitigate. As long as individual qualities, such as intelligence and criminality, or

species qualities, such as xenophobia, patriarchy, and aggressiveness, are labeled

as genetic (whatever genetic might mean in a particular context), these qualities

will be understood as somehow more fundamental or natural, and thus inevitable

or fated. These sorts of ideas often reinforce dominant ideologies and, as a result,

tend to translate easily into scientistic cartoons produced for popular

consumption. These problems are exemplified by the exaggerations of genetic and



genomic knowledge now being used to market personal genomics services, but

they have their true roots in the hereditarianism of the nineteenth century.

Eugenics, after all, was advocated by Francis Galton a full half century before the

Morgan group identified the classical gene.

There is an alternative to this dualist-preformationist conceptual system,

which has equally deep roots in modern thinking about life. Recapitulating and

further developing themes that go back to Kant and Goethe, developmental

systems theorists and dialectical biologists have formulated a constructionist

approach that overcomes the troublesome contradictions that characterize the

dominant framework. DST, in particular, challenges the privileging of the gene in

developmental and evolutionary explanations. It shows that the attribution of

special informational properties to genes is based on a double standard, pointing

out that the arguments used to justify the special developmental role attributed to

genes tacitly depend on already assuming their special status.

Oyama (1985) argues that an organism’s inheritance includes all the

developmental conditions that are passed to it, but that the actual construction of

the phenotype still must be explained. Griffiths and Gray (1994) draw on this

insight to develop an extended model of inheritance designed to achieve a

rapprochement between DST and Darwinian evolutionary explanation. This effort

has certainly enlivened the discourse on units of evolution, as well as introducing

DST to a wider audience. However, because Griffiths and Gray treat inheritance

as a more or less unified explanatory category, their model affirms the inheritance

paradigm, potentially making it more difficult to escape the preformationism

implicit in that framework.

I argue, therefore, that a constructionist integration of developmental and

evolutionary biology requires that we replace the inheritance paradigm with a

network paradigm. The inheritance paradigm, regardless of how the details are

nuanced, participates in the dualistic conceptual system that hinders the

integration of development and evolution. The network paradigm, on the other

hand, directs our attention to the constructive interactions that are responsible for

organic complexity at all spatial and temporal scales and reveals the underlying

unity of developmental and evolutionary dynamics. The novel forms at the

forefront of developmental and evolutionary change, from this perspective, are

neither structures nor functions. Structure and function are abstractions from, and

visible traces of, the most elusive of scientific objects: the relationships that

emerge and dissipate in concrete time. Relationships and patterns of interaction

are made up of unique events in time, which, although they may never be exactly

repeated, nevertheless constitute the world as we know it. The network paradigm

escapes the irresolvable dichotomies of Cartesian cosmology by rejecting its

dualistic and preformationist epistemology and ontology and reconceiving form

and matter in terms of a concretely temporal ontogeny of relationships. Radical as

this may sound, such a revision would merely bring biological theory into line

with the cosmology of twentieth century physics.



References

Ackerknecht, E. (1982). Diathesis: The word and the concept in medical history.
Bulletin of the History of Medicine, 56, 317-325.

Amundson, R. (1994). Two concepts of constraint: Adaptationism and the


challenge from developmental biology. Philosophy of Science, 61(4), 556-
578.

Amundson, R. (2005). The changing role of the embryo in evolutionary thought:


Roots of evo-devo. Cambridge: Cambridge University Press.

Appel, T. A. (1987). The Cuvier-Geoffroy debate: French biology in the decades


before Darwin. New York: Oxford University Press.

Aristotle. (1910). De generatione animalium [On the generation of animals] (A.


Platt, Trans.). Oxford: Clarendon Press.

Asma, S. T. (1996). Following form and function: A philosophical archaeology of


life science. Evanston, IL.: Northwestern University Press.

Bartley, M. M. (1992). Darwin and domestication: Studies on inheritance. Journal


of the History of Biology, 25(2), 307-333.

Bateson, G. (2000). Steps to an ecology of mind (University of Chicago Press ed.).


Chicago: University of Chicago Press.

Benson, K. R. (1991). Observation vs. theory. In C. E. Dinsmore (Ed.), A history


of regeneration research: Milestones in the evolution of a science (1st ed.,
pp. 91-100). Cambridge: Cambridge University Press.

Bergson, H. (1911). Creative evolution (A. Mitchell, Trans.). New York: The
Modern Library.

Beurton, P. J. (2000). A unified view of the gene or how to overcome


reductionism. In P. J. Beurton, R. Falk, & H.-J. Rheinberger (Eds.), The
concept of the gene in development and evolution: Historical and
epistemological perspectives (pp. 286-314). Cambridge: Cambridge
University Press.

Bowler, P. J. (1971). Preformation and pre-existence in the seventeenth century:


A brief analysis. Journal of the History of Biology, 43, 221-244.

Bowler, P. J. (1984). Evolution: The history of an idea. Berkeley: University of



California Press.

Bowler, P. J. (1989). The Mendelian revolution: The emergence of hereditarian


concepts in modern science and society. Baltimore: Johns Hopkins
University Press.

Bowler, P. J. (1996). Life's splendid drama: Evolutionary biology and the


reconstruction of life's ancestry, 1860-1940. Chicago: University of
Chicago Press.

Carroll, S. B. (2005). Endless forms most beautiful: The new science of evo devo
and the making of the animal kingdom (1st ed.). New York: Norton.

Cartron, L. (2003, January 10-12). Pathological heredity as a bid for greater


recognition of medical authority in France, 1800-1830. Paper presented at
the A cultural history of heredity II: 18th and 19th centuries, Max Planck
Institute for the History of Science, Berlin. Available at
http://www.mpiwg-berlin.mpg.de/Preprints/P247.pdf

Cartron, L. (2007). Degeneration and "alienism" in early nineteenth-century


France. In S. Müller-Wille & H.-J. Rheinberger (Eds.), Heredity
produced: At the crossroads of biology, politics, and culture, 1500-1870
(pp. 155-174). Cambridge, MA: MIT Press.

Churchill, F. B. (1987). From heredity to Vererbung: The transmission problem,


1850-1900. Isis, 78(3), 336-364.

Coleman, W. (1965). Cell, nucleus, and inheritance: An historical study.


Proceedings of the American Philosophical Society, 109(3), 124-158.

Coleman, W. (1977). Biology in the nineteenth century: Problems of form,


function, and transformation. Cambridge: Cambridge University Press.

Darwin, C. (1883). The variation of animals and plants under domestication,


second edition. New York: D. Appleton.

Darwin, C. & Burrow, J. W. (1968). The origin of species by means of natural


selection: Or, the preservation of favoured races in the struggle for life.
Harmondsworth: Penguin. (Original work published 1859)

Dawkins, R. (1976). The selfish gene. New York: Oxford University Press.

Dawkins, R. (1982). The extended phenotype: The gene as the unit of selection.
Oxford: Freeman.

Dawkins, R. (1984). Replicators and vehicles. In R. N. Brandon & R. M. Burian


(Eds.), Genes, organisms, populations: Controversies over the units of
selection (pp. 161-180). Cambridge, MA: MIT Press.

Dawkins, R. (1986). The blind watchmaker: Why the evidence of evolution


reveals a universe without design. New York: Norton.

Dennett, D. C. (1987). The intentional stance. Cambridge, MA: MIT Press.

Dretske, F. I. (1982). Knowledge and the flow of information. Cambridge, MA:


MIT Press.

Duchesneau, F. (2007). The delayed linkage of heredity with the cell theory. In S.
Müller-Wille & H.-J. Rheinberger (Eds.), Heredity produced: At the
crossroads of biology, politics, and culture, 1500-1870 (pp. 293-314).
Cambridge, MA: MIT Press.

Foucault, M. (1973). The order of things: An archaeology of the human sciences.


New York: Vintage Books.

French, R. K. (1994). Ancient natural history: Histories of nature. London:


Routledge.

French, R. K. (2003). Medicine before science: The business of medicine from the
middle ages to the Enlightenment. Cambridge: Cambridge University
Press.

Gasking, E. B. (1967). Investigations into generation, 1651-1828. Baltimore:


Johns Hopkins Press.

Gayon, J. (1998). Darwinism's struggle for survival: Heredity and the hypothesis
of natural selection. Cambridge: Cambridge University Press.

Gayon, J. (2000). From measurement to organization: A philosophical scheme for


the history of the concept of heredity. In P. J. Beurton, R. Falk, & H.-J.
Rheinberger (Eds.), The concept of the gene in development and evolution:
Historical and epistemological perspectives (pp. 69-90). Cambridge:
Cambridge University Press.

Gilbert, S. F. (2000). Genes classical and genes developmental: The different use
of genes in evolutionary syntheses. In P. J. Beurton, R. Falk, & H.-J.
Rheinberger (Eds.), The concept of the gene in development and evolution:
Historical and epistemological perspectives (pp. 178-192). Cambridge:
Cambridge University Press.

Gilbert, S. F. (2001). Ecological developmental biology: Developmental biology


meets the real world. Developmental Biology, 233, 1-22.

Godfrey-Smith, P. (1999). Genes and codes: Lessons from the philosophy of


mind? In V. G. Hardcastle (Ed.), Where biology meets psychology:
Philosophical essays (pp. 305-331). Cambridge, MA: MIT Press.

Godfrey-Smith, P. (2000a). Information, arbitrariness, and selection: Comments


on Maynard Smith. Philosophy of Science, 67, 202-207.

Godfrey-Smith, P. (2000b). The replicator in retrospect. Biology and Philosophy,


15, 403-423.

Godfrey-Smith, P. (2007). Information in biology. In D. L. Hull & M. Ruse


(Eds.), The Cambridge companion to the philosophy of biology (pp. 103-
119). Cambridge: Cambridge University Press.

Goodwin, B. C. (1994). How the leopard changed its spots: The evolution of
complexity. New York: C. Scribner's Sons.

Gottlieb, G. (1975a). Development of species identification in ducklings: I.


Nature of perceptual deficit caused by embryonic auditory deprivation.
Journal of Comparative and Physiological Psychology, 89(5), 387-399.

Gottlieb, G. (1975b). Development of species identification in ducklings: II.


Perceptual differentiation in the embryo. Journal of Comparative and
Physiological Psychology, 89(7), 675-584.

Gottlieb, G. (1975c). Development of species identification in ducklings: III.


Perceptual differentiation in the embryo. Journal of Comparative and
Physiological Psychology, 89(8), 899-912.

Gottlieb, G. (1978). Development of species identification in ducklings: IV.


Perceptual differentiation in the embryo. Journal of Comparative and
Physiological Psychology, 92(3), 375-387.

Gottlieb, G. (1979). Development of species identification in ducklings: V.


Perceptual differentiation in the embryo. Journal of Comparative and
Physiological Psychology, 93(5), 831-854.

Gould, S. J. (1977a). Ontogeny and phylogeny. Cambridge, MA: Belknap Press of


Harvard University Press.

Gould, S. J. (1977b, June). The return of hopeful monsters. Natural History, 36,

24-30.

Gould, S. J. (1980). Is a new and general theory of evolution emerging?


Paleobiology, 6(1), 119-130.

Gould, S. J. (2001). The evolutionary definition of selfish agency: Validation of


the theory of hierarchical selection, and fallacy of the selfish gene. In R. S.
Singh, C. B. Krimbas, D. B. Paul, & J. Beatty (Eds.), Thinking about
evolution: Historical, philosophical, and political perspectives: Festschrift
for Richard Lewontin (pp. 208-234). Cambridge: Cambridge University
Press.

Gould, S. J. & Lewontin, R. C. (1979). The spandrels of San Marco and the
Panglossian paradigm: A critique of the adaptationist programme.
Proceedings of the Royal Society of London, Series B, 205(1161), 581-
598.

Gray, R. D. (1992). Death of the gene: Developmental systems strike back. In P.


E. Griffiths (Ed.), Trees of life: Essays in philosophy of biology (pp. 165-
209). Dordrecht: Kluwer Academic.

Gray, R. D. (2001). Selfish genes or developmental systems. In R. S. Singh, C. B.


Krimbas, D. B. Paul, & J. Beatty (Eds.), Thinking about evolution:
Historical, philosophical, and political perspectives: Festschrift for
Richard Lewontin (pp. 184-207). Cambridge: Cambridge University Press.

Grene, M. G. & Depew, D. J. (2004). The philosophy of biology: An episodic


history. Cambridge: Cambridge University Press.

Griffiths, P. E. (2001). Genetic information: A metaphor in search of a theory.


Philosophy of Science, 68(3), 394-412.

Griffiths, P. E. (2004). Instinct in the ‘50s: The British reception of Konrad


Lorenz’s theory of instinctive behavior. Biology and Philosophy, 19(4),
609-631.

Griffiths, P. E. (2005). Review of ‘Niche construction'. Biology and Philosophy,


20, 11-20.

Griffiths, P. E. & Gray, R. D. (1994). Developmental systems and evolutionary


explanation. Journal of Philosophy, XCI(6), 277-304.

Griffiths, P. E. & Gray, R. D. (1997). Replicator II–judgment day. Biology and


Philosophy, 12, 471-492.

Griffiths, P. E. & Gray, R. D. (2001). Darwinism and developmental systems. In


S. Oyama, P. E. Griffiths, & R. D. Gray (Eds.), Cycles of contingency:
Developmental systems and evolution (pp. 195-218). Cambridge, MA:
MIT Press.

Griffiths, P. E. & Gray, R. D. (2004). The developmental systems perspective:


Organism-environment systems as units of development and evolution. In
M. Pigliucci & K. Preston (Eds.), Phenotypic integration: Studying the
ecology and evolution of complex phenotypes (pp. 409-431). Oxford:
Oxford University Press.

Griffiths, P. E. & Gray, R. D. (2005). Discussion: Three ways to misunderstand


developmental systems theory. Biology and Philosophy, 20, 417-425.

Griffiths, P. E. & Knight, R. D. (1998). What is the developmentalist challenge?


Philosophy of Science, 65(2), 253-258.

Griffiths, P. E. & Neumann-Held, E. M. (1999). The many faces of the gene.


BioScience, 49(8), 656-662.

Griffiths, P. E. & Stotz, K. (2007). Gene. In D. L. Hull & M. Ruse (Eds.), The
Cambridge companion to the philosophy of biology (pp. 85-102).
Cambridge: Cambridge University Press.

Halder, G. P., Callerts, P., & Gehring, W. J. (1995). Induction of ectopic eyes by
targeted expression of the eyeless gene in Drosophila. Science, 267, 1788-
1792.

Hall, B. K. (2000). Guest Editorial: Evo-devo or devo-evo–Does it matter?


Evolution and Development, 2(4), 177-178.

Hamlin, C. (1992). Predisposing causes and public health in early nineteenth-


century medical thought. The Society for the Social History of Medicine,
5(1), 43-70.

Harvey, W. (1943). On animal generation (R. Willis, Trans.). In The works of


William Harvey (pp. 169-518). London: Sydenham Society.

Hendrikse, J. L. (2006). Explanation and inheritance. Dissertation Abstracts


International, 67(04).

Hodge, M. J. S. (1985). Darwin as a lifelong generation theorist. In D. Kohn & M.


J. Kottler (Eds.), The Darwinian heritage: Including proceedings of the
Charles Darwin centenary conference, Florence Center for the History

and Philosophy of Science, June 1982 (pp. 202-243). Princeton, NJ:


Princeton University Press, in association with Nova Pacifica.

Hoffheimer, M. H. (1982). Maupertuis and the eighteenth-century critique of


preexistence. Journal of the History of Biology, 15(1), 119-144.

Hull, D. L. (1980). Individuality and selection. Annual Review of Ecology and


Systematics, 11, 311-332.

Hull, D. L. (1984). Units of evolution: A metaphysical essay. In R. N. Brandon &


R. M. Burian (Eds.), Genes, organisms, populations: Controversies over
the units of selection (pp. 142-160). Cambridge, MA: MIT Press.

Hull, D. L. (1988). Science as a process: An evolutionary account of the social


and conceptual development of science. Chicago: University of Chicago
Press.

Hull, D. L. & Wilkins, J. S. (2008). Replication. In E. N. Zalta (Ed.), The Stanford


encyclopedia of philosophy (Fall 2008 edition). Retrieved November,
2008, from
http://www.science.uva.nl/~seop/archives/fall2008/entries/replication/.

Jablonka, E. (2001). The systems of inheritance. In S. Oyama, P. E. Griffiths, &


R. D. Gray (Eds.), Cycles of contingency: Developmental systems and
evolution (pp. 99-116). Cambridge, MA: MIT Press.

Jablonka, E. (2002). Information: Its interpretation, its inheritance, and its sharing.
Philosophy of Science, 69, 578-605.

Jablonka, E. & Lamb, M. J. (2005). Evolution in four dimensions: Genetic,


epigenetic, behavioral, and symbolic variation in the history of life.
Cambridge, MA: MIT Press.

Jacob, F. (1976). The logic of life: A history of heredity (B. E. Spillmann, Trans.).
New York: Vintage. (Original work published 1970)

Jacob, F. & Monod, J. (1961). Genetic regulatory mechanisms in the synthesis of


proteins. Journal of Molecular Biology, 3, 318-356.

Jenkin, F. (1867). Review of The origin of species. The North British Review, 44,
277-318.

Johnston, T. D. (1987). The persistence of dichotomies in the study of behavioural


development. Developmental Review, 7, 149-182.

Johnston, T. D. (2001). Toward a systems view of development. In S. Oyama, P.


E. Griffiths, & R. D. Gray (Eds.), Cycles of contingency: Developmental
systems and evolution (pp. 14-23). Cambridge, MA: MIT Press.

Kant, I. (1929). Critique of pure reason (N. K. Smith, Trans.). London:


Macmillan. (Original work published 1781)

Kant, I. (1951). Critique of judgment (J. H. Bernard, Trans.). New York: Hafner
Press. (Original work published 1790)

Kay, L. E. (1993). The molecular vision of life: Caltech, the Rockefeller


Foundation, and the rise of the new biology. New York: Oxford
University Press.

Keller, E. F. (2000a). The century of the gene. Cambridge, MA: Harvard


University Press.

Keller, E. F. (2000b). Decoding the genetic program: Or, some circular logic in
the logic of circularity. In P. J. Beurton, R. Falk, & H.-J. Rheinberger
(Eds.), The concept of the gene in development and evolution: Historical
and epistemological perspectives (pp. 159-177). Cambridge: Cambridge
University Press.

Keller, E. F. (2001). Beyond the gene but beneath the skin. In S. Oyama, P. E.
Griffiths, & R. D. Gray (Eds.), Cycles of contingency: Developmental
systems and evolution (pp. 299-312). Cambridge, MA: MIT Press.

Kitcher, P. (1992). Gene: Current usages. In E. F. Keller & E. A. Lloyd (Eds.),


Keywords in evolutionary biology (pp. 128-131). Cambridge, MA:
Harvard University Press.

Kitcher, P. (2001). Battling the undead: How (and how not) to resist genetic
determinism. In R. S. Singh, C. B. Krimbas, D. B. Paul, & J. Beatty
(Eds.), Thinking about evolution: Historical, philosophical, and political
perspectives: Festschrift for Richard Lewontin (pp. 396-414). Cambridge:
Cambridge University Press.

Kukla, A. (2000). Social constructivism and the philosophy of science. London:


Routledge.

Laland, K. N., Odling-Smee, J., & Feldman, M. W. (2005). On the breadth
and significance of niche construction: A reply to Griffiths, Okasha and
Sterelny. Biology and Philosophy, 20, 37-55.

Laland, K. N., Odling-Smee, J., & Feldman, M. W. (2001). Niche construction,


ecological inheritance, and cycles of contingency in evolution. In S.
Oyama, P. E. Griffiths, & R. D. Gray (Eds.), Cycles of contingency:
Developmental systems and evolution (pp. 117-126). Cambridge, MA:
MIT Press.

Lehrman, D. S. (1953). A critique of Konrad Lorenz's theory of instinctive


behavior. Quarterly Review of Biology, 28(4), 337-363.

Lenhoff, H. M. & Lenhoff, S. G. (1991). Abraham Trembley and the origins of


research on regeneration in animals. In C. E. Dinsmore (Ed.), A history of
regeneration research: Milestones in the evolution of a science (1st ed.,
pp. 47-66). Cambridge: Cambridge University Press.

Lenoir, T. (1981). The Göttingen School and the development of transcendental


Naturphilosophie in the Romantic era. In W. Coleman (Ed.), Studies in the
history of biology, Vol. 5 (pp. 111-205). Baltimore: Johns Hopkins
University Press.

Levins, R. & Lewontin, R. C. (1985). The dialectical biologist. Cambridge, MA:


Harvard University Press.

Lewontin, R. C. (1974). The analysis of variance and the analysis of causes.


American Journal of Human Genetics, 26, 400-411.

Lewontin, R. C. (1982). Organism and environment. In H. C. Plotkin (Ed.),


Learning, development, and culture (pp. 151-170). New York: Wiley.

Lewontin, R. C. (1983a). Darwin's revolution. New York Review of Books, 30, 21-
27.

Lewontin, R. C. (1983b). Gene, organism, and environment. In D. S. Bendall


(Ed.), Evolution: From molecules to men (pp. 273-285). Cambridge:
Cambridge University Press.

Lewontin, R. C. (2000). The triple helix: Gene, organism, and environment.


Cambridge, MA: Harvard University Press.

López-Beltrán, C. (1992). Human heredity 1750-1870: The construction of a


scientific domain. Unpublished doctoral dissertation, King's College London.

López-Beltrán, C. (1994). Forging heredity: From metaphor to cause, a reification


story. Studies in the History and Philosophy of Science, 25(2), 211-235.

López-Beltrán, C. (2001, May 24-26). Natural things and non-natural things.


Paper presented at the A cultural history of heredity I: 17th and 18th
centuries, Max Planck Institute for the History of Science, Berlin.
Available at http://www.mpiwg-berlin.mpg.de/Preprints/P222.pdf

López-Beltrán, C. (2004). In the cradle of heredity: French physicians and


l’hérédité naturelle in the early 19th century. Journal of the History of
Biology, 37, 39-72.

López-Beltrán, C. (2007). The medical origins of heredity. In S. Müller-Wille &


H.-J. Rheinberger (Eds.), Heredity produced: At the crossroads of biology,
politics, and culture, 1500-1870 (pp. 105-132). Cambridge, MA: MIT
Press.

Loveland, J. (2001). Buffon, the certainty of sunrise, and the probabilistic reductio
ad absurdum. Archive for History of Exact Sciences, 55, 465–477.

Mabry, K. E. & Stamps, J. A. (2008). Dispersing brush mice prefer habitat like
home. Proceedings of the Royal Society B: Biological Sciences, 275, 543–
548.

Maienschein, J. (2006). Epigenesis and preformationism. In E. N. Zalta (Ed.), The
Stanford encyclopedia of philosophy (Fall 2006 edition). Retrieved June 2008,
from http://plato.stanford.edu/archives/fall2006/entries/epigenesis

Maturana, H. R. & Varela, F. J. (1980). Autopoiesis and cognition: The


realization of the living. Dordrecht, Holland: D. Reidel.

Maturana, H. R. & Varela, F. J. (1987). The tree of knowledge: The biological


roots of human understanding (1st ed.). Boston: New Science Library:
Distributed in the United States by Random House.

Maynard Smith, J. (1986). The problems of biology. Oxford: Oxford University


Press.

Maynard Smith, J. (1997). Weismann and modern biology. In M. Ridley (Ed.),


Evolution (pp. 17-22). Oxford: Oxford University Press.

Maynard Smith, J. (2000a). The concept of information in biology. Philosophy of


Science, 67, 177-194.

Maynard Smith, J. (2000b). Reply to commentaries. Philosophy of Science, 67,


214-218.

Maynard Smith, J. & Szathmáry, E. (1995). The major transitions in evolution.


Oxford: W.H. Freeman Spektrum.

Maynard Smith, J. & Szathmáry, E. (1999). The origins of life: From the birth of
life to the origin of language. Oxford: Oxford University Press.

Mayr, E. (1961). Cause and effect in biology. Science, 134, 1501-1506.

Mayr, E. (1982). The growth of biological thought: Diversity, evolution, and


inheritance. Cambridge, MA: Belknap Press.

Mazzolini, R. G. (2007). Las castas: Interracial crossing and social structure,


1770-1835. In S. Müller-Wille & H.-J. Rheinberger (Eds.), Heredity
produced: At the crossroads of biology, politics, and culture, 1500-1870
(pp. 349-373). Cambridge, MA: MIT Press.

McLaughlin, P. (2007). Kant on heredity and adaptation. In S. Müller-Wille & H.-


J. Rheinberger (Eds.), Heredity produced: At the crossroads of biology,
politics, and culture, 1500-1870 (pp. 277-291). Cambridge, MA: MIT
Press.

Merleau-Ponty, M. (1963). The structure of behavior (A. L. Fisher, Trans.).


Boston: Beacon Press. (Original work published 1942)

Meyer, A. W. (1936). An analysis of the De generatione animalium of William


Harvey. Stanford, CA: Stanford University Press.

Millikan, R. G. (1984). Language, thought, and other biological categories: New


foundations for realism. Cambridge, MA: MIT Press.

Monod, J. (1971). Chance and necessity: An essay on the natural philosophy of


modern biology (A. Wainhouse, Trans. 1st American ed.). New York:
Knopf. (Original work published 1970)

Moore, D. S. (2002). The dependent gene: The fallacy of nature/nurture (1st ed.).
New York: Times Books.

Morange, M. (2000). The developmental gene concept. In P. J. Beurton, R. Falk,


& H.-J. Rheinberger (Eds.), The concept of the gene in development and
evolution: Historical and epistemological perspectives (pp. 193-215).
Cambridge: Cambridge University Press.

Moss, L. (2001). Deconstructing the gene and reconstructing molecular


developmental systems. In S. Oyama, P. E. Griffiths, & R. D. Gray (Eds.),
Cycles of contingency: Developmental systems and evolution (pp. 85-97).
Cambridge, MA: MIT Press.

Müller-Wille, S. (2007). Figures of inheritance, 1650-1850. In S. Müller-Wille &


H.-J. Rheinberger (Eds.), Heredity produced: At the crossroads of biology,
politics, and culture, 1500-1870 (pp. 177-204). Cambridge, MA: MIT
Press.

Müller-Wille, S. & Orel, V. (2007). From Linnaean species to Mendelian factors:


Elements of hybridism, 1751-1870. Annals of Science, 64(2), 171-215.

Müller-Wille, S. & Rheinberger, H.-J. (2007a). Heredity: The formation of an


epistemic space. In S. Müller-Wille & H.-J. Rheinberger (Eds.), Heredity
produced: At the crossroads of biology, politics, and culture, 1500-1870
(pp. 3-34). Cambridge, MA: MIT Press.

Müller-Wille, S. & Rheinberger, H.-J. (Eds.). (2007b). Heredity produced: At the


crossroads of biology, politics, and culture, 1500-1870. Cambridge, MA:
MIT Press.

Needham, J. (1959). A history of embryology (2nd ed.). New York: Abelard-


Schuman.

Neumann-Held, E. M. (2001). Let's talk about genes: The process molecular gene
concept and its context. In S. Oyama, P. E. Griffiths, & R. D. Gray (Eds.),
Cycles of contingency: Developmental systems and evolution (pp. 71-84).
Cambridge, MA: MIT Press.

Nyhart, L. K. (1995). Biology takes form: Animal morphology and the German
universities, 1800-1900. Chicago: University of Chicago Press.

Odling-Smee, F. J., Laland, K. N., & Feldman, M. W. (2003). Niche construction:


The neglected process in evolution. Princeton, NJ: Princeton University
Press.

Olby, R. C. (1985). Origins of Mendelism (2nd ed.). Chicago: University of


Chicago Press.

Oyama, S. (1985). The ontogeny of information: Developmental systems and


evolution. Cambridge: Cambridge University Press.

Oyama, S. (2000a). Causal democracy and causal contributions in developmental


systems theory. Philosophy of Science (proceedings), 67, S332-347.

Oyama, S. (2000b). Evolution's eye: A systems view of the biology-culture divide.


Durham, NC: Duke University Press.

Oyama, S. (2001). Terms in tension. In S. Oyama, P. E. Griffiths, & R. D. Gray


(Eds.), Cycles of contingency: Developmental systems and evolution (pp.
177-193). Cambridge, MA: MIT Press.

Oyama, S. (2006). Speaking of nature. In Y. Haila & C. Dyke (Eds.), How nature
speaks: The dynamics of the human ecological condition (pp. 49-65).
Durham: Duke University Press.

Oyama, S., Griffiths, P. E., & Gray, R. D. (Eds.). (2001). Cycles of contingency:
Developmental systems and evolution. Cambridge, MA: MIT Press.

Papineau, D. (1987). Reality and representation. New York: B. Blackwell.

Parnes, O. S. (2007). On the shoulders of generations: The new epistemology of


heredity in the nineteenth century. In S. Müller-Wille & H.-J. Rheinberger
(Eds.), Heredity produced: At the crossroads of biology, politics, and
culture, 1500-1870 (pp. 315-346). Cambridge, MA: MIT Press.

Pick, D. (1989). Faces of degeneration: A European disorder, c.1848-c.1918.


Cambridge: Cambridge University Press.

Provine, W. B. (2001). The origins of theoretical population genetics (2nd ed.).


Chicago: University of Chicago Press.

Rheinberger, H.-J. & Müller-Wille, S. (2003, January 10-12). Introduction. Paper


presented at the A cultural history of heredity II: 18th and 19th centuries,
Max Planck Institute for the History of Science, Berlin. Available at
http://www.mpiwg-berlin.mpg.de/Preprints/P247.pdf

Richards, R. J. (1987). Darwin and the emergence of evolutionary theories of


mind and behavior. Chicago: University of Chicago Press.

Roe, S. A. (1981). Matter, life, and generation: Eighteenth-century embryology


and the Haller-Wolff debate. Cambridge: Cambridge University Press.

Roger, J. (1997). The life sciences in eighteenth-century French thought (K. R.


Benson, Trans.). Stanford: Stanford University Press. (Original work
published 1963)

Russell, E. S. (1982). Form and function: A contribution to the history of animal


morphology. Chicago: University of Chicago Press. (Original work
published 1916)

Sabean, D. W. (2007). From clan to kindred: Kinship and the circulation of


property in premodern and modern Europe. In S. Müller-Wille & H.-J.
Rheinberger (Eds.), Heredity produced: At the crossroads of biology,
politics, and culture, 1500-1870 (pp. 37-59). Cambridge, MA: MIT Press.

Sandler, I. (2000). Development: Mendel's legacy to genetics. Genetics, 154, 7-


11.

Sandler, I. & Sandler, L. (1985). A conceptual ambiguity that contributed to the


neglect of Mendel's paper. History & Philosophy of the Life Sciences, 7, 3-
70.

Sapp, J. (1987). Beyond the gene: Cytoplasmic inheritance and the struggle for
authority in genetics. New York: Oxford University Press.

Sarkar, S. (2000). Information in genetics and developmental biology: Comments


on Maynard Smith. Philosophy of Science, 67, 208-213.

Schaffner, K. F. (1998). Genes, behavior, and developmental emergentism: One


process, indivisible? Philosophy of Science, 65, 209-252.

Schwartz, S. (2000). The differential concept of the gene: Past and present. In P.
J. Beurton, R. Falk, & H.-J. Rheinberger (Eds.), The concept of the gene in
development and evolution: Historical and epistemological perspectives
(pp. 26-39). Cambridge: Cambridge University Press.

Shannon, C. E. (1948). A mathematical theory of communication. The Bell


System Technical Journal, 27, 379-423, 623-656.

Shannon, C. E. & Weaver, W. (1949). The mathematical theory of


communication. Urbana: University of Illinois Press.

Siegal, M. L. & Bergman, A. (2002). Waddington’s canalization revisited:


Developmental stability and evolution. Proceedings of the National
Academy of Sciences, 99(16), 10528-10532.

Smith, J. E. H. (2006). The problem of animal generation in early modern


philosophy. Cambridge: Cambridge University Press.

Stegmann, U. E. (2004). The arbitrariness of the genetic code. Biology and


Philosophy, 19(3), 205-222.

Stegmann, U. E. (2005). Genetic information as instructional content. Philosophy


of Science, 72(3), 425-443.

Sterelny, K. (2000). The "genetic" program: A commentary on Maynard Smith on


information in biology. Philosophy of Science, 67, 195-201.

Sterelny, K. (2001). Niche construction and the extended replicator. In S. Oyama,


P. E. Griffiths, & R. D. Gray (Eds.), Cycles of contingency:
Developmental systems and evolution (pp. 333-349). Cambridge, MA:
MIT Press.

Sterelny, K. (2005). Made by each other: Organisms and their environment.


Biology and Philosophy, 20, 21-36.

Sterelny, K. & Griffiths, P. E. (1999). Sex and death: An introduction to


philosophy of biology. Chicago: University of Chicago Press.

Sterelny, K. & Kitcher, P. (1988). The return of the gene. Journal of Philosophy,
85(7), 339-361.

Sterelny, K., Smith, K. C., & Dickison, M. (1996). The extended replicator.
Biology and Philosophy, 11, 377–403.

Sturtevant, A. H. (2001). A history of genetics. Cold Spring Harbor, NY: Cold


Spring Harbor Laboratory Press.

Tarnas, R. (1991). The passion of the western mind: Understanding the ideas that
have shaped our world view (1st ed.). New York: Harmony.

Terrall, M. (2002). The man who flattened the earth: Maupertuis and the sciences
in the Enlightenment. Chicago: The University of Chicago Press.

Terrall, M. (2007). Speculation and experiment in Enlightenment life sciences. In


S. Müller-Wille & H.-J. Rheinberger (Eds.), Heredity produced: At the
crossroads of biology, politics, and culture, 1500-1870 (pp. 253-275).
Cambridge, MA: MIT Press.

Varela, F. J., Thompson, E., & Rosch, E. (1991). The embodied mind: Cognitive
science and human experience. Cambridge, MA: MIT Press.

Waddington, C. H. (1957). The strategy of the genes: A discussion of some


aspects of theoretical biology. London: Allen & Unwin.

Waddington, C. H. (1975). The evolution of an evolutionist. Ithaca, NY: Cornell


University Press.

Waller, J. C. (2002). 'The illusion of an explanation': The concept of hereditary
disease, 1770-1870. Journal of the History of Medicine and Allied
Sciences, 57(4), 410-448.

Watson, J. D. & Crick, F. H. C. (1953). Genetical implications of the structure of


deoxyribonucleic acid. Nature, 171(4361), 964-967.

Wiener, N. (1967). The human use of human beings: Cybernetics and society.
New York: Avon. (Original work published 1950)

Williams, G. C. (1966). Adaptation and natural selection: A critique of some


current evolutionary thought. Princeton, NJ: Princeton University Press.

Wilson, E. O. (1975). Sociobiology: The new synthesis. Cambridge, MA: Belknap


Press of Harvard University Press.

Wilson, P. K. (2007). Erasmus Darwin and the "noble" disease (gout):


Conceptualizing heredity and disease in Enlightenment England. In S.
Müller-Wille & H.-J. Rheinberger (Eds.), Heredity produced: At the
crossroads of biology, politics, and culture, 1500-1870 (pp. 133-154).
Cambridge, MA: MIT Press.

Winther, R. G. (2001a). August Weismann on germ-plasm variation. Journal of


the History of Biology, 34, 517-555.

Winther, R. G. (2001b). Darwin on variation and heredity. Journal of the History


of Biology, 33, 425-455.

Wood, R. J. (2003, January 10-12). The sheep breeders’ view of heredity (1723-
1843). Paper presented at the A cultural history of heredity II: 18th and
19th centuries, Max Planck Institute for the History of Science, Berlin.
Available at http://www.mpiwg-berlin.mpg.de/Preprints/P247.pdf

Wood, R. J. (2007). The sheep breeders' view of heredity before and after 1800.
In S. Müller-Wille & H.-J. Rheinberger (Eds.), Heredity produced: At the
crossroads of biology, politics, and culture, 1500-1870 (pp. 229-250).
Cambridge, MA: MIT Press.

Wozniak, R. H. (1997). Theoretical roots of early behaviorism: Functionalism,


the critique of introspection, and the nature and evolution of
consciousness. Retrieved May 2009, from
http://www.brynmawr.edu/Acads/Psych/rwozniak/theory.html

Zajonc, A. (1998). Goethe and the science of his time: A historical introduction.
In D. Seamon & A. Zajonc (Eds.), Goethe's way of science: A
phenomenology of nature (pp. 15-30). Albany: State University of New


York Press.

Zirkle, C. (1946). The early history of the idea of the inheritance of acquired
characteristics and of pangenesis. Transactions of the American
Philosophical Society, 35(2), 91-151.
