
Enaction and Consistency? Non-Representationalism in Cognitive Science and in Philosophy of Science.

Isabelle Peschard
San Francisco State University, CA.

Abstract: The enactive theory of cognition aims to model and interpret cognitive structures with no reference to a perceiver-independent world. This paper analyses the threat of inconsistency coming from the tension between the application of this theory to itself, as product of cognitive activity, and the philosophical interpretation of scientific modelling of natural phenomena. Inconsistency is shown to arise in a representationalist philosophical framework structured around the idea of external normativity, unable however to account adequately for the practice of modelling. An alternative framework is then considered where normativity is understood in terms of what is at stake in epistemic practice.

Since it was first formulated by Varela, Thompson, and Rosch (1993), the enactive theory of cognition, having benefited from an increasing interest in the embodiment of cognition, has become a serious alternative to mainstream representationalist, computationalist or connectionist, approaches (Torrance 2006; Noë 2004). Non-representationalism in cognitive science, however, remains in crucial need of attention from philosophy of science. This paper aims to start a reflection on the question of the consistency of cognitive non-representationalism. A central claim of the theory of enaction (TE henceforth) is that cognitive structures emerge from sensorimotor activity, with no reference to a perceiver-independent world (Varela et al. 1993, 173). However, in cognitive science, "we cannot avoid as a matter of consistency [... that] any such scientific description, either of biological or mental phenomena, must itself be a product of the structure of our own cognitive system" (Varela et al. 1993, 11).

TE should then apply to itself, as a product of cognitive activity. But the demand of self-applicability of the theory and the demand that it be a scientific theory, investigating objectively a natural phenomenon, that of enaction, may seem to pull in opposite directions. And if these demands cannot both be satisfied, TE is epistemologically inconsistent. It is clear that if there is a tension between these two demands, this tension will not be purely internal to the theory but will depend both on what TE is and on what it is for a theory to provide scientific models of natural phenomena, that is, on a certain interpretation of science. After a brief account of TE, I will show why, on the basis of certain assumptions largely accepted in philosophy of science, assumptions appealing to an external conception of normativity, TE may look inconsistent. But I will also show that in this context the practice of modelling, in general, cannot be accounted for adequately either. As a remedy, I will indicate the way towards a philosophy of science attentive to scientific practice, where the notion of normativity is understood in terms of what is at stake in epistemic, cognitive or scientific, practice. This new perspective will cast a very different light on the epistemological status of TE. It is tempting to see this move towards philosophy of science as an inquiry into the epistemological legitimacy of TE. That view would not survive the journey. Our task is one of elucidation rather than legitimation or justification. And as we will see, it goes both ways.



TE presents itself as a non-representationalist scientific theory of cognitive activity. Perception is viewed as emerging from sensorimotor activity, with the idea that the cognitive system and the world perceived are dynamically co-constituted. More precisely, a cognitive system is conceived as a dynamical system coupled to its environment and subjected, through this sensorimotor coupling, to perturbations coming from this environment. The environment is not a structure imposed from the outside but is responsive to what cognitive beings are and do, and cognitive activity is the process through which the dynamical system compensates for these perturbations and stabilizes its activity. This stabilization is realized with the emergence of new structural patterns of neuronal activity, cognitive structures, which enable the system to maintain its organizational identity (Maturana & Varela 1988). These cognitive structures are physical, observable phenomena interpreted with no reference to a perceiver-independent world. TE proposes to model a neuronal system as a system of coupled oscillators and identifies cognitive structures in terms of local or long-range synchronization (Rodriguez et al. 1999; Thompson & Varela 2001; Varela et al. 2001).

If this theory falls within its own domain of application, the process through which the theory develops should itself be modelled as emerging from a sensorimotor process with no reference to a perceiver-independent world, where the perceiver is, in this case, the enactivist scientist. It is not difficult to see why this may bring the theory into tension with philosophy of science. When the enactivist proposes this theory of cognition, he is also, at the same time, formulating some constraints on a possible epistemological interpretation of theories, his own in particular. But scientific theories are also objects of epistemological interpretation on the part of philosophy of science, and there may be a tension between this interpretation and the constraints implied by TE. For on the one hand, the patterns of synchronization that the enactivist identifies as cognitive structures are understood to be objective phenomena, given that TE is a scientific theory; but on the other hand, the enactivist's cognitive activity, as he constructs this theory, is to be understood, on the enactivist model, without reference to a perceiver-independent world. As I will show, the tension which here threatens the epistemological consistency of TE is serious, but it comes from a commitment to a philosophical framework which is not realistic, in that it cannot do justice to scientific activity, whereas, and this is the source of the tension, the concern of the enactive conception is cognitive activity.
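As a toy illustration of this coupled-oscillator picture, the following sketch simulates a minimal Kuramoto-type model and measures phase synchrony with the standard order parameter. This is my own illustrative construction, not the models used in the cited studies, and all parameter values are arbitrary:

```python
import numpy as np

# Illustrative sketch only: a minimal Kuramoto model of coupled phase
# oscillators, a common toy picture of neuronal synchronization.
# Parameter values are arbitrary, chosen for the example.

rng = np.random.default_rng(0)
n = 50                                  # number of oscillators
omega = rng.normal(0.0, 0.5, n)         # natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, n)  # initial phases
K, dt = 2.0, 0.05                       # coupling strength, time step

def order_parameter(phases):
    """Kuramoto order parameter r in [0, 1]; r = 1 means full phase synchrony."""
    return float(np.abs(np.mean(np.exp(1j * phases))))

r_start = order_parameter(theta)
for _ in range(2000):
    # each oscillator is pulled toward the phases of all the others
    pull = (K / n) * np.sum(np.sin(theta[None, :] - theta[:, None]), axis=1)
    theta = theta + dt * (omega + pull)
r_end = order_parameter(theta)
# with sufficiently strong coupling, synchrony emerges: r_end well above r_start
```

The point of the sketch is only that "local or long-range synchronization" is a measurable, emergent property of the coupled system rather than a structure imposed on it from outside.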

The focus of philosophy of science was for a long time on theories. But times change, and models are now on the front stage. The question "what are models?" is far from having a unanimously accepted answer, though. It seems fair to say that a model is generally conceived of as a set of objects with relations defined on these objects and/or on their properties, that is, as instantiating a structure. This is how models are conceived within the philosophical framework that I specifically want to discuss and challenge, and this conception is certainly appropriate in cognitive science. However, even when we agree on how to conceive of models, the relation between models and the phenomena that are modelled can still be understood in very different ways. Specifically, how it is understood will depend on whether one looks at models retrospectively, as candidates for the representation of phenomena, or looks at models from within the practice of modelling. And how the theory of enaction is to be understood will, in turn, depend on how the relation of models to phenomena is interpreted.

Semantic view: isomorphism

I will start with the first option, the retrospective view on models, also known as the semantic view (van Fraassen 1980; Suppe 1989). It assumes that to be a model of a phenomenon is to represent this phenomenon, and conceives of how well a model represents a phenomenon as a function of the degree of isomorphism or similarity between the structure instantiated by the model and the structure of the phenomenon:
"Theoretical models are the means by which the scientists represent the world" (Giere 1988, 80). [...] "If what is going on in the real world is similar in structure to the model of the world then the data and the prediction from the model should agree." (Giere 2006, 30)

This view takes for granted that the phenomenon has a certain structure which is independent of whether there will ever be a model to represent it. The job of the scientist, on this conception, is to see whether there exists among the models of the theory, whose domain of application includes the phenomenon in question, one that is isomorphic or similar to the structure of the phenomenon. The theory provides a set of models as candidates for the representation of the phenomenon; the question is whether one of them passes the test of isomorphism or similarity. This view of models and their relation to the phenomena already makes TE seem problematic. The theory has models, dynamical models, which aim to model phenomena falling under the concept "cognitive activity". But the structure of these phenomena cannot, within TE, be conceived of as independent of the interactive process through which they come to be determined. What cognition as a natural phenomenon is, how it is structured, according to TE, can no more than anything else, from within TE, be conceived of as perceiver-independent; it must depend on the way in which it is investigated. So it seems that if we take seriously what TE says about cognition, then as far as the semantic view is concerned, we cannot take it seriously as a scientific theory. It is a bit like a Cretan asserting that all Cretans are liars. If what he says is true, then he is not lying, and, since he is himself a Cretan, what he says is not true.

But let's have a closer look at the source of this apparent dilemma, that is, at the idea of models as representations of structures.

Models as representations

Scientific activity, and more generally cognitive activity, is a normative activity: a perceptual experience or a model purports to say or to be about something and is subject to norms that determine how it can be right or wrong. So a crucial question for philosophy of science is: what are these norms, and where do they come from? As we saw, in the retrospective view, a theoretical model is regarded as a candidate for the representation of a phenomenon, and whether it is a good representation depends on whether the structure of the model is isomorphic or similar to that of the phenomenon. But in practice we assess a model against the results of measurements, which must then also have the form of structures: they are also models, but this time models of data, data-models. The assessment of a theoretical model is then thought of as a measure of the isomorphism or similarity to the data-model. But how do we go from isomorphism or similarity to a data-model to isomorphism or similarity to the phenomenon? Part of the answer is that not just any data-model will do; what we need here is the data-model of the phenomenon. But what makes a data-model the right one?
"The manner in which D [the data-model] is obtained from [the domain of the phenomenon] is, of course, complex. Likewise, the manner in which the various elements of D are related to the objects of [that domain] is also problematic; it may be that the nature of this relation lies beyond linguistic expression." (da Costa & French 2003, 17)

Is there a better way to say that one just doesn't know what to say about X than to say that X must lie beyond linguistic expression? The problem here is that X, the relation between the data-model and the phenomenon it is meant to be a model of, is crucial to the representationalist view. This is the relation that should justify equating isomorphism to the data-model with representation of the phenomenon. The reason why the representationalist gets into trouble here is not difficult to understand, though. We start with models as candidates for representing a certain phenomenon. So, one way or another, the phenomenon must be conceived as a source of normativity: it is what makes the model right or wrong. It is an external source of normativity, the sort of ultimate normative ground that the theory of enaction says doesn't exist, and to which it therefore cannot appeal for itself. This is one of the reasons, perhaps the main one, why this theory is, in this framework, epistemologically problematic. The phenomenon is meant to exert a normative constraint on the selection of the good data-model; and the right data-model will, in turn, exert an external normative constraint on the selection of the good theoretical model. But when one looks at scientific activity, this conception of normativity appears very problematic.

Relation data-model/phenomenon

The phenomenon is meant to exert a structural constraint on the selection of the data-model. But what does it mean, in this context, that a phenomenon has a structure? A structure is a mathematical object: when we say that a physical object has a structure, we mean that it instantiates a mathematical structure, that is, that there exists a structural mathematical description of this object. To say that the structure of the phenomenon is independent of scientific activity seems then to make the endorsement of some sort of mathematical realism necessary to make sense of scientific knowledge. But even that would not much help the representationalist. That the theoretical model is a representation of the phenomenon would mean that this mathematical structure is nothing else than the structure instantiated by the theoretical model. At the same time, however, for the theoretical model to be a representation of the phenomenon, the structure of this theoretical model has to be isomorphic to that of the data-model. But the constituents of the data-model are functions and variables correlated to specific experimental procedures that are historical, partly contingent, and highly contextual. For instance, they require the construction of an experimental system which involves the historical background constituted by previous studies with respect to the design of the experiment and the results that have already been obtained, or the contingent decisions that have to be made with respect to the conditions of measurement of the experimental correlate of certain quantities (Chang 2004). In addition, experimental procedures involve crucial contextual considerations of relevance regarding the different parameters (Bailer-Jones 2003). In their empirical study of the neural activity correlated to a particular perceptual task, Lutz et al. (2002) criticize previous studies for having made invisible, by means of averaging techniques, some individual differences in neural system activity before the perceptual task. These differences are, from their theoretical perspective, deeply significant with respect to the interpretation of the brain activity during the task. Then, of course, what counts as a good data-model for them differs in a decisive way from what it was in those previous studies. In fact, measurement and data-analysis of the results of measurement are necessarily dependent upon particular assumptions and methods, theoretical, instrumental and statistical, implying specific commitments regarding what is at stake in the investigation and what matters. How then could the structure of the theoretical model be at the same time sensitive to the contingencies that bear on the data-model and be identical to a mathematical object determined independently of any experimental activity? If one prefers not to appeal to some pre-established harmony, one retort may be that we can only have an approximate or partial structure of the phenomenon. But in the absence of a clear idea of how to measure or even compare degrees of approximation, the mention of approximation is not very helpful for clarifying the conditions under which a data-model can be recognized as the good data-model.

Relation theoretical model/data-model

If the phenomenon is meant to be a source of external normative constraint on the selection of the theoretical model, and can only exert this constraint through the data-model, then the data-model should, in turn, exert an external constraint on the selection of the theoretical model. But this idea of an external constraint from the data-model on the theoretical model is no more realistic than the idea of an external constraint from the phenomenon on the data-model. What has become clear from studies that paid close attention to experimental scientific activity is that obtaining a data-model is in fact the most difficult part of the process of modelling. Not only is the data-model not given independently of a particular and very carefully constructed experimental system, with historical, contingent, and contextual features, but it is not independent of the construction of the theoretical model itself.
In fact, adapting Pickering's interpretation of the dynamics of scientific activity (Pickering 1995), the practice of modelling is best understood as a dynamical process of co-construction and co-adjustment of the different constituents of the practice: the theoretical model, the data-model, and the instrumental procedures involved in the measurement. In that perspective, the data-model cannot exert an external constraint on the construction of the theoretical model because, as we saw with the experimental study of Lutz et al., what should count as a good data-model is as much at issue in the practice as what should count as a good theoretical model, and because we cannot answer one question independently of the other. A consequence of this dynamical conception of the practice of modelling is that there is no necessary end-point. Where to stop, even if temporarily, is not arbitrary. But new results immediately open new questions, and an experimental system and its model can always be investigated further. The question is whether it is worth going further, that is, again, what are the stakes? Any practice, and especially scientific practice, is accountable to certain norms (Rouse 2002), and that goes for endings as well. This dynamical character and open-endedness of the process of modelling makes the notion of external normative constraint, natural or rational, inappropriate. The normativity of the practice of modelling should, in fact, rather be understood as coming from within the practice (Peschard 2007), in the same way as has been argued regarding linguistic norms (Risjord 2007). The epistemological inability of the enactivist to appeal to an external normative constraint, which was previously seen as an epistemological problem, will then turn out to be the epistemological condition of any scientific practice of modelling.
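The worry about averaging raised by Lutz et al. can be made concrete with a toy computation. The sketch below is schematic, my own construction rather than their actual analysis: each simulated trial contains a strong ongoing oscillation, but its phase varies from trial to trial, so averaging across trials makes the activity all but disappear.

```python
import numpy as np

# Schematic sketch (not Lutz et al.'s analysis): trial averaging can hide
# activity that is present in every individual trial, if its phase varies.

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 500)          # one second, 500 samples
trials = np.array([
    np.sin(2 * np.pi * 10 * t + rng.uniform(0.0, 2 * np.pi))
    for _ in range(200)                 # 200 trials of a 10 Hz oscillation
])

# mean power per individual trial: close to 0.5, the oscillation is clearly there
single_trial_power = float(np.mean(trials ** 2, axis=1).mean())
# power of the trial average: near zero, the oscillation has been averaged out
averaged_power = float(np.mean(trials.mean(axis=0) ** 2))
```

Which of the two numbers belongs in "the" data-model is not dictated by the data; it depends on what the analysis takes to be at stake, which is exactly the point made above.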

Non-representationalism in philosophy of modelling

Norms and stakes

The shift from the idea of external normative constraints bearing, at different levels, on scientific practice to the idea that the norms that constrain the practice come from within the practice itself takes us to a non-representationalist philosophical framework. I will introduce this idea of norms coming from within the practice by drawing on the normative conception of practice proposed by Rouse (2002, 2007). A practice is materially situated, and the norms bind together the constituents of the practice through their mutual accountability. Practitioners are normatively accountable for the way in which they perform their experiments and carry out the experimental measurements, for the instruments they use, for the approximations they make, for the assumptions they endorse regarding measurements or data-analysis, and so on. But the objects that are involved in the construction of the experimental system, the measurements, and the data-analysis also have to satisfy certain norms: if you are measuring the velocity of a flow whose temperature may vary during the measurement, do not use a hot-wire probe! If you use averaging techniques for your data-analysis, do not use those that make significant differences invisible! Following Rouse, the normativity of the practice is seen as responsive to what is at stake in the practice. What counts as a satisfactory way to proceed, or to identify what is identical, similar, or different, what is relevant or negligible, what counts as evidence, depends on what is recognized as being at stake in the practice: in taking or not taking certain features into account, in taking them into account in one way or in another, in using certain methods of approximation or a certain degree of precision, etc. For instance, in an experiment in fluid mechanics, when one is modelling a system of wakes with coupled oscillators, one may opt to use a linear coupling instead of a non-linear one. What is at stake in taking one rather than the other? What sort of "real possibilities", to use Rouse's terms, are allowed, and which are excluded? Similarly, we could ask of a theory of cognition: what should it account for primarily? Should one take into account large-scale synchronizations of neural activity? Should phenomenological reports be integrated in the study of the neural dynamics? What is at stake in doing it one way or the other? The elucidation of what is at stake may depend on certain normative commitments with respect to how to conceive of the epistemic role of science and of how scientific knowledge should relate to ordinary experience, or abstract from it. For example, I take the authors of The Embodied Mind very seriously when they announce in their introduction that a science of cognition must reconcile science and experience, because "it is only by having a sense of common ground between cognitive science and human experience that our understanding of cognition can be more complete and reach a satisfying level" (Varela et al. 1993, 14). The norms are not external, because they are responsive to what is at stake, and that is always discussable, from within the practice, in response to what is going on in the practice, in the investigation of the phenomenon. The norms are constraining; they are, as Rouse points out, "authoritative over and constitutive of human agency and meaning", but these constraints do not come from independent objective natures of things, but "from the emergent configuration of a situation as having something at stake in its outcome" (Rouse 2002, 257).
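The fluid-mechanics example above, choosing between a linear and a non-linear coupling when modelling a system of wakes with coupled oscillators, can be sketched with two phase oscillators. The model and all numbers here are hypothetical, invented purely for illustration; the point is only that the two modelling choices yield measurably different locked states:

```python
import math

# Hypothetical two-oscillator sketch: same system, two coupling choices.
# Frequencies and coupling strength are invented for the illustration.

def locked_phase_difference(coupling, steps=4000, dt=0.01):
    """Integrate two coupled phase oscillators; return the final phase difference."""
    w1, w2, K = 1.0, 1.4, 0.3           # natural frequencies, coupling strength
    th1, th2 = 0.0, 1.0                 # initial phases
    for _ in range(steps):
        d = th2 - th1
        th1 += dt * (w1 + K * coupling(d))
        th2 += dt * (w2 - K * coupling(d))
    return th2 - th1

diff_linear = locked_phase_difference(lambda d: d)   # linear coupling
diff_nonlinear = locked_phase_difference(math.sin)   # non-linear coupling
# the two choices lock the pair at different phase differences
```

Either choice produces a phase-locked pair, but locked at different phase differences, so which "real possibilities" the model admits is settled by the modelling decision, not by the data alone.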

Representation?

The practice of modelling, we saw, takes the form of a dynamical process of co-constitution of the theoretical model and the data-model. It is true, as the representationalist would certainly insist, that the practice of modelling is normative; in fact, normative accountability spares none of the constituents of the practice. The representationalist, however, conceives of the source of this normativity as external to the practice and as constraining the practice of modelling from outside: external, or from outside, in the sense that the normative constraints are meant to be independent of what is going on in the practice. This is the non-realistic commitment that was mentioned in the introduction.

To have a scientific model with some explanatory power or predictive accuracy still falls short if it doesn't explain what is worth explaining, or explains by neglecting factors that are deemed important, or accurately predicts quantities that do not matter. The status of experience in first-person and phenomenological accounts in the scientific study of cognitive activity is an obvious illustration. Things do not wear the labels "important" or "matters" on their sleeves; arguments have to be made, and they appeal, first of all, to what is at stake in making this judgement and in doing things accordingly. What is the right way to proceed, what has to be accounted for, and what has to be taken into account are normative constraints responsive to the dynamics of the practice, to the interpretation of what happens and of the stakes it reveals. The process of modelling is not only the co-constitution of theoretical model and data-model. It is also, at the same time, the realization of the experimental system which exhibits the phenomenon that is modelled. Hence, the relation between a model and the phenomenon it models is one of co-emergence rather than representation. One can still speak of similarity or isomorphism of structures, but this relation of isomorphism is not an external, accidental relation; it is an intrinsic relation (Suárez 1999). In this perspective, what the theory of enaction had to recognize for itself, namely this co-emergence, does not make for an epistemological quandary. It is characteristic of the practice of modelling itself.


The account I proposed of the practice of modelling seems, then, to provide the theory of enaction with the epistemological legitimacy or ground that looked problematic from the perspective of a retrospective, representationalist view of models. Epistemological ground or legitimacy, however, is not the right way to speak. To see why, we have to go back to the question of the relation between stakes and norms. Beyond experimental procedure and the process of modelling, it is a more general and more fundamental conception of bodies and of their relations with their surroundings that is involved in Rouse's normative conception of practice. What is at stake is always what is at stake for somebody, or some bodies, which are not only situated but oriented in the world in a particular way. Different ways of being situated or oriented constitute different perspectives on what is going on, which allow for different appreciations of what is at stake. A body is conceived as a practical unity, a capacity for a coordinated responsiveness to what thereby becomes distinguished as its surroundings. At the same time as a body's surroundings are shaped by the body's practical activity and capacities, in return, the configuration of the surroundings, as a field of possible activity, shapes the activities and capacities of this body. Hence, bodies and their surroundings are dynamically co-constituted through their co-responsiveness. What is normatively authoritative over practices depends to a large extent on this ongoing practical configuration of the world, but it also transcends it. First, because of the material situation of the practice, that is, the material presence of the configurations emerging from different practices and the resistance they offer to practitioners' performances. Second, because of the temporal situation of the practice. Practitioners' activities are always already embedded in certain projects, always caught between what has been done already and what has to be done now, and both directions contribute to the configuration of the actual surroundings. Finally, what is at stake is beyond one's immediate control in that the surroundings of beings, and especially of discursive beings, incorporate other beings that also actively configure their more or less shared surroundings. One's activity is responsive not only to the surroundings as practically configured by one's own activities but also as configured by others' activities. What is at stake in the development of a certain research program is at stake not only for those who are developing the program but also for those who did not develop it and will nevertheless have to be responsive to the transformation of the world it would involve. This conception of the relation between bodies and their surroundings takes us beyond philosophy of science.
What underlies this conception of scientific modelling is a conception of cognitive beings as coupled to their environment, and as dynamically co-constituted with it through their practical capacities and activities, a conception which is in fact the core of the theory of enaction. This is why it would be inadequate, not to say presumptuous, to think that this conception of scientific modelling can provide that theory with an epistemological ground or legitimacy.

It is crucial to the theory of enaction, as proposed by Francisco Varela, that cognitive structures have to be understood, and modelled, with no reference to a perceiver-independent world. When read in the context of a representationalist framework, this claim seems incompatible with the claim that this theory provides scientific models of cognitive phenomena. This problem arises when the relation between models and the phenomena that are modelled is understood in terms of external normative constraints exerted by the structure of the phenomena on the practice of modelling. I argued that this conception of the relation between models and the phenomena that are modelled is unable to account for essential aspects of the practice of modelling. It has to be replaced with a conception which does justice to the dynamics and material situation of the practice of modelling. According to such a conception, the norms to which the constituents of the practice are accountable are generated from within the practice and are sensitive to what is at stake in the practice of modelling. Only a remaining vestige of the representationalist perspective could make us think that what is at stake between science and philosophy is a matter of external legitimacy or absolute ground. In a non-representationalist perspective, where normativity is not a matter of external, causal or rational, necessities, the boundaries between scientific and philosophical practices are no longer clearly and necessarily drawn. Like the boundaries between objects or between bodies and their surroundings, they are emergent and shaped by co-responsiveness. What is at stake in the philosophical understanding of a non-representationalist conception of cognition is what is at stake in a certain practical configuration of the world, one expressing a non-representationalist conception of cognition and knowledge in general.

Bailer-Jones, Daniela (2003), "When Scientific Models Represent", International Studies in the Philosophy of Science 17: 59-74.

Chang, Hasok (2004), Inventing Temperature: Measurement and Scientific Progress. Oxford: Oxford University Press.

da Costa, N. C. A. and Steven French (2003), Science and Partial Truth: A Unitary Approach to Models and Scientific Reasoning. New York: Oxford University Press.

Giere, Ronald (1988), Explaining Science: A Cognitive Approach. Chicago: The University of Chicago Press.

Giere, Ronald, John Bickle and Robert F. Maudlin (2006), Understanding Scientific Reasoning. Wadsworth Publishing.

Lutz, Antoine, J.-P. Lachaux, J. Martinerie, and F. Varela (2002), "Guiding the study of brain dynamics by using first-person data", PNAS 99 (3): 1586-1591.

Maturana, Humberto and Francisco Varela (1988), The Tree of Knowledge: The Biological Roots of Human Understanding. Boston, MA: Shambhala.

Noë, Alva (2004), Action in Perception. Cambridge, MA: The MIT Press.

Peschard, Isabelle (2007), "Participation of the Public in Science: Towards a New Kind of Scientific Practice", Human Affairs 17 (2): 138-153.

Pickering, Andrew (1995), "Beyond Constraint: The Temporality of Practice and the Historicity of Knowledge", in Jed Z. Buchwald (ed.), Scientific Practice: Theories and Stories of Doing Physics. Chicago: University of Chicago Press, 42-55.

Risjord, Mark (forthcoming), "Who are We? Dissolving the Problem of Cultural Boundaries", The Modern Schoolman.

Rodriguez, E., N. George, J.-P. Lachaux, J. Martinerie, B. Renault and F. Varela (1999), "Perception's shadow: long-distance synchronization in the human brain", Nature 397: 340-343.

Rouse, Joseph (2002), How Scientific Practices Matter: Reclaiming Philosophical Naturalism. Chicago: The University of Chicago Press.

Rouse, Joseph (2007), "Social Practices and Normativity", Philosophy of the Social Sciences 37: 46-56.

Suárez, Mauricio (1999), "Theories, Models and Representation", in L. Magnani and N. J. Nersessian (eds.), Model-Based Reasoning in Scientific Discovery. New York: Kluwer Academic Publishers, 75-83.

Suppe, Frederick (1989), The Semantic Conception of Theories and Scientific Realism. Urbana: University of Illinois Press.

Thompson, Evan and F. Varela (2001), "Radical Embodiment: Neural Dynamics and Consciousness", Trends in Cognitive Sciences 5 (10): 418-425.

Torrance, Steve (ed.) (2006), Phenomenology and the Cognitive Sciences, Special Issue on Enactive Experience, 4 (4).

van Fraassen, Bas C. (1980), The Scientific Image. New York: Oxford University Press.

Varela, Francisco, Evan Thompson and Eleanor Rosch (1993), The Embodied Mind. Cambridge, MA: MIT Press.

Varela, F., J.-P. Lachaux, E. Rodriguez and J. Martinerie (2001), "The Brainweb: Phase Synchronization and Large-Scale Integration", Nature Reviews Neuroscience 2 (4): 229-239.