Analysis, Modeling,
and Simulation of Human
Operator’s Mental Activities
Thierry Bellet
human beings. Nevertheless, the paradox of the experimental method applied to
the study of cognition is that, when applied, it partially destroys its own object of analysis.
It is like trying to study life in a completely sterile environment, devoid of oxygen:
soon, the researcher no longer observes living organisms, but dead things. The same is
true of natural cognition under experimental conditions: artificial laboratory tasks
partially kill the "living cognition." Certainly, medicine has substantially improved
scientific knowledge of life through the dissection of corpses. But no medical
scientist has ever expected to find life itself in a corpse: s/he knows that life is
already gone, and s/he precisely seeks the reasons for its disappearance.
Scientists interested in studying human mental activities must be aware of this
methodological risk: the more an experiment is based on artificial tasks, the more
the scientist investigates humans "stripped" of their own knowledge, and only a
very limited part of their natural cognition survives this treatment. This is one
of the major contributions of ergonomics—or naturalistic psychology—to cognitive
science: protecting the study of mental activities from the destructive consequences of
the reductionism of the experimental paradigm. Focused on the activities of the human
"Operator" (that is, a human able to act), the ergonomist does not study human
cognition "in vitro," as a laboratory specimen, but "in vivo": the "Living Cognition" in
its dynamic interactions with the surrounding environment, that is, cognition as
it is immersed in a situation and implemented through activity. From there,
the new challenge for cognitive science is to dynamically simulate human
mental activities on a computer, thus opening the door to the "in silico" study of the
living cognition.
In accordance with this general background, this chapter is organized into three
main sections. The first one (that is, section 2) will focus on the Analysis of mental
activities by jointly considering three complementary theoretical fields: (i) Cybernetics
and Human Information Processing theory, (ii) the Russian theories of activity,
and (iii) the ecological approach to human perception. Then we will attempt to
propose an integrative approach combining these three theoretical fields,
based on a Perception-Cognition-Action regulation loop, and we will illustrate the
functioning of this iterative regulation loop in the particular frame of the car-driving
activity.
The third section of the chapter will be devoted to the challenge of Cognitive
Modeling. In contrast with the preceding section, the modeling step aims not
only to provide a descriptive analysis of mental activities based on empirical
observations, but also to propose formal models of human thinking that may
be subject to scientific debate. To be comprehensive, a model of the human
operator's mental activities should provide an in-depth description of the cognitive
system at three levels: (i) its cognitive architecture, (ii) the cognitive
processes implemented for information processing and reasoning, and (iii) the
data structures used for modeling knowledge and mental representations. Then, we
will present an integrative model of the human cognitive system, which distinguishes
implicit, explicit, and reflexive cognition.
Section 4 will be centered on Cognitive Simulation, which requires developing
computational programs able to virtually implement and then dynamically simulate
the operator's mental activities on a computer. Cognitive simulation requires us
to define models of the human cognitive system jointly based on human sciences
theories, artificial intelligence formalisms, and computer science techniques.
CHAPTER 1 • ANALYSIS, MODELING, AND SIMULATION • 25
Designed in continuity with the Analysis and Modeling levels, Simulation is
the necessary core step for studying the living cognition: it is only when mental
activities are simulated on a computer that it becomes possible to analyze them in their
dynamic interactions and to apprehend them in their natural complexity.
To concretely illustrate these different sections, we will borrow some
examples of mental activities implemented in the specific context of the car-driving
activity. This applied field has the main advantage of being both a familiar task,
probably practiced by a large share of the readers of this book, and a complex
activity involving all levels of human cognition. Moreover, because of recent
developments in driving assistance, in terms of on-board information systems as
well as vehicle automation, some critical issues in ergonomics have emerged
during the last two decades. Human-centered design methods based on an in-depth
understanding of drivers' cognition are therefore required in order to develop future
advanced assistance systems—like intelligent copilots—sharing a part of their
cognitive abilities with the human driver. In this framework, the modeling of mental
activities is not only relevant for theoretical research in cognitive science; it has also
become one of the central topics of cognitive engineering research.
When we are engaged in our everyday activities, like reading a book, moving around a familiar
place, or driving a car, a large share of the mental processes that support these activities
escapes our own field of consciousness. This reflects the fact that these activities
were learnt beforehand and are currently based on integrated abilities,
a sequence of procedural skills, which trigger themselves successively without
systematically requiring an explicit and conscious monitoring of our thoughts by our
cognitive system. Only some decision-making steps emerge in our awareness,
like the intention we pursue or the goal we try to reach. However, if the situation
suddenly becomes abnormal, or critical, or if one drifts too far from the limits of
validity framing the routine regulation process, then the "waterline of the iceberg of
awareness" lowers: our attention focuses on the problem to be explicitly solved and
our activity then becomes intentionally monitored. To all intents and purposes, it is
clear that implicit awareness and explicit awareness cannot be seen as two separate
entities. These two levels of awareness support each other and are embedded in
each other. In reality, they are two sides of the same coin: one cannot exist without
the other. If a coin is placed on the table so that only one side is visible, this does not
mean that the other side has disappeared. It is still there, at least in a latent state, as a
base for the visible side. As a function of the human operator's experience and skill,
or according to the familiarity of the situation, by contrast with a new or critical
situation, their awareness will alternate between the more dominantly explicit,
decisional, and attentional level of cognition and implicit awareness, mainly based
on skills and automatic cognitive processes.
Understanding the relation between implicit and explicit awareness of the
situation, as well as between automatic and attentional control of the activity, is the
key issue of the living cognition paradigm, which constitutes the central topic of this chapter.
26 THE HANDBOOK OF HUMAN-MACHINE INTERACTION
The last step (that is, "Game Over") is not an explicit phase of the experimental
method itself, but rather an implicit requirement of this scientific approach.
Indeed, repeated independent measures of similar responses, collected for a similar
artificial task under similar conditions, are the basic foundation of the experimental
paradigm in the natural sciences, whether or not it is applied to the study of cognition. Therefore, the "story
must restart" after each stimulus, as if it were a totally "new story," in order to allow
scientists to rigorously control their experiment. After each "Stimulus—Response"
sequence, the experimental task is thus completed, without any expected feedback
effect (except in the specific frame of experimental tasks that are explicitly focused
on the study of learning or inter-stimulus effects, like the "priming" effect). Consequently,
by using the experimental method, cognitive science ended up losing the notion of
"cycle," however important it was in the initial cybernetics theory through the feedback
process, in favor of a more or less complex sequential string of information-processing
functions (Figure 1.1).
Even if the circular nature of these cognitive processes still seems to hold,
since it would only require adding an arrow between a Response and a new Stimulus,
the "Game Over" strategy implicitly contained in the experimental method,
used to scientifically control any risk of learning effects, definitively breaks the cybernetic
regulation loop and thus the foundation of the living cognition. This is one of the
major criticisms that can be addressed to the experimental paradigm applied to
cognition, which dominated the Human Information Processing approach during
the 1970s and the 1980s. In order to study human operators' mental activities in their
natural dynamics, which is the central aim of the living cognition approach as defined
in this chapter's introduction, it is therefore necessary to consider other theoretical
alternatives, like the Russian theory of activity or Neisser's (1976) ecological theory
of human perception.
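The contrast between the two paradigms can be sketched as a toy computation: an open-loop "Stimulus → Response" function, where every trial is an independent "new story," versus a closed regulation loop, where each response transforms the state that the next perception reads. The function names, the doubling rule, and the proportional correction gain are illustrative assumptions, not formulas from the chapter.

```python
def process_stimulus(stimulus: float) -> float:
    """Open-loop 'Stimulus -> Response' processing: every trial is a
    fresh 'new story', with no memory of previous responses."""
    return stimulus * 2.0  # some fixed information-processing function


def regulate(target: float, state: float, steps: int) -> float:
    """Closed-loop cybernetic regulation: each response feeds back into
    the next perception, so the 'story' never restarts."""
    for _ in range(steps):
        error = target - state       # perception of the discrepancy
        state += 0.5 * error         # the response modifies the world ...
        # ... and the modified world is the next stimulus (feedback)
    return state


# Open loop: identical stimuli always yield the identical response.
assert process_stimulus(1.0) == process_stimulus(1.0) == 2.0
# Closed loop: the state converges on the target across iterations.
assert abs(regulate(10.0, 0.0, 20) - 10.0) < 1e-3
```

The point of the sketch is structural: remove the `for` loop and `regulate` collapses back into a one-shot `process_stimulus`, which is exactly what the "Game Over" strategy does to the cybernetic cycle.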
Like Cybernetics, the Russian "Theory of Activity" school considers human
operators through their dynamic interactions with the external environment. But
in this approach, Activity is the starting point and the core topic of the scientific
investigation of human cognition, because it is argued that activity directly structures
the operator’s cognitive functions. The fundamental postulate of the Theory of
Activity is well summarized by Smirnov (1966): human becomes aware of the surrounding
world, by acting on it, and by transforming it. Following the works of Vygotsky (1962),
who explored activity as "mediated" by tools (that is, artifacts) and concluded that
humans are constantly enabled or constrained by cultural contexts, Leontiev (1977)
described human activity (p. 87) as a circular structure: initial afferentation → effector
processes regulating contacts with the objective environment → correction and enrichment
by means of reverse connections of the original afferent image. The circular character of the
processes that realize the interaction of the organism with the environment appears to be
universally recognized and sufficiently well described in the literature. However, the most
important point for Leontiev (p. 4) is not only this circular structure as such, but the
fact that the mental reflection of the objective world is not directly generated by the external
influences themselves, but by the processes through which the subject comes into practical
contact with the objective world. Through activity, continued Leontiev (p. 87), a double
transfer is realized: the transfer object → process of activity, and the transfer activity → its
subjective product. But the transfer of the process into the form of the product does not take
place only at the pole of the subject (…) it also takes place at the pole of the object transformed
by human activity.
The process of the mental “introjection”1 of the objective world through the activity
has been more particularly investigated by Ochanine (1977) within the concept of
Operative Images, corresponding to functional mental models of the external object
properties, as discovered through the practical activity of the operator. From this
standpoint, the human operator is not a passive cognitive system that merely undergoes the
stimuli given by the external environment and reacts in order to adapt
under the pressure of events. S/he is an active observer, with inner intentions, able to
voluntarily act on the world and to modify the current situation through their own activity, in
accordance with their own needs. Indeed, behind activity there is always a need, which
directs and regulates concrete activity of the subject in the objective environment (Leontiev,
1977; p. 88). As a consequence (p. 99), the concept of activity is necessarily connected with
the concept of motive. Activity does not exist without a motive; “non-motivated” activity is
not activity without a motive but activity with a subjectively and objectively hidden motive.
Basic and “formulating” appear to be the actions that realize separate human activities. We
call a process an action if it is subordinated to the representation of the result that must be
attained, that is, if it is subordinated to a conscious purpose. Similarly, just as the concept of
motive is related to the concept of activity, the concept of purpose is related to the concept of
action … Activity usually is accomplished by a certain complex of actions subordinated to
particular goals that may be isolated from the general goal; under these circumstances, what
happens that is characteristic for a higher degree of development is that the role of the general
purpose is fulfilled by a perceived motive, which is transformed owing to its being perceived
as a motive-goal.
These considerations, so essential in ergonomics and so evident in our everyday
lives as psychological subjects with needs, intents, and will, have nevertheless been
progressively forgotten by modern cognitive science when based on the
experimental paradigm. Under laboratory conditions, wrote Leontiev (1977, p. 101), we
always place before the subject a "ready" goal. For this reason the process of goal formation itself
usually escapes investigation. However, all human activities in everyday life require
1 In order for a human being to control a phenomenon, the brain must be able to form a reflection of
it. The subjective reflection of phenomena in the form of feelings, perception and thoughts provides the
brain with the information essential for acting efficiently on this phenomenon in order to achieve a specific
aim (Leontiev, Lerner, Ochanine, 1961).
two conditions to exist: the existence of a need, and the possibility for the surrounding
environment to satisfy it (Ouznadzé, 1966, p. 249). In laboratory experiments,
inner needs and spontaneous motives disappear, as does the dynamic "life cycle"
of natural cognition.
The same criticism of the destructive effect of the experimental method applied to
the study of human cognition was formulated by Neisser (1976), through his ecological
approach to human perception. Neisser's work was initially based on the direct perception
theory of Gibson (1950, 1979), who postulated that some invariants or affordances,2
corresponding to properties of the objects available in the current environment, are
directly perceived by the organism. The organism then "resonates" with, or is tuned in
to, the environment, so that its behavior becomes adapted to it. Nevertheless,
by contrast with Gibson's "non-cognitive" theory of perception, Neisser admits the
existence of subjective mental functions, even if he seriously criticizes the sequential
vision of cognitive processes that dominated Human Information Processing
theory during the 1970s (cf. Figure 1.1 above).
2 In Gibson’s Theory, “affordance” means the possibilities for action that are “offered” by an object (for
example, a tree affords possibilities for climbing, or hiding behind).
in and process all the information available in the road environment. In accordance
with Neisser's theory of perception, this information is not selected haphazardly, but
depends on the aims the drivers pursue, their short-term intentions (that is, tactical
goals, such as "turn left" at a crossroads) and long-term objectives (that is, strategic
goals, such as reaching their final destination within a given time), the knowledge
they possess (stemming from their previous driving experience), and the attentional
resources available at that instant. Information selection is the result of a perceptive
cycle whose keystone is the driver's mental representation of the driving situation.
This mental representation, based on operative knowledge stored and then activated
(that is, the memory cycle in Figure 1.3) in Long-Term Memory (LTM), is also the central
component of a cognitive cycle involving decision-making and anticipation functions
(that is, via mental simulation based on the current state of the world). When an
action appropriate to the current driving context has been identified, selected, or
cognitively planned, it is implemented by the driver on the vehicle (that is, executive
functions) in order to progress through the dynamic road environment.
In accordance with Cybernetics theory, the car-driving activity is defined in Figure
1.3 as a continuous dynamic regulation loop between (i) inputs, coming from
the road environment, and (ii) outputs, corresponding to the driver's behaviors
implemented in the real world via the car, which generate (iii) feedback, in the form
of new inputs requiring new adaptations (that is, new outputs) from the driver, and
so on. From this general point of view, the first iteration of the Perception-Decision-
Action regulation loop corresponds to the moment when the driver starts up the
engine, and the last iteration comes when the driver reaches the final trip destination
and stops the car.
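As a rough sketch (not COSMODRIVE itself), the regulation loop just described can be rendered as a lane-keeping toy model in which each output (a steering correction) transforms the world and thereby produces the next input. The proportional correction gain, the one-dimensional "world," and all names are illustrative assumptions.

```python
def perceive(world: dict) -> float:
    """Input: the driver extracts the lateral offset from the road scene."""
    return world["lane_center"] - world["car_position"]


def decide(offset: float) -> float:
    """Cognition: choose a steering correction from the representation."""
    return 0.4 * offset


def act(world: dict, steering: float) -> None:
    """Output: the action transforms the world, producing new feedback."""
    world["car_position"] += steering


def drive(world: dict, iterations: int) -> dict:
    # Each output becomes a new input: the loop only ends when the trip
    # ends (here, after a fixed number of iterations).
    for _ in range(iterations):
        act(world, decide(perceive(world)))
    return world


world = drive({"lane_center": 0.0, "car_position": 1.5}, 25)
assert abs(world["car_position"]) < 0.01  # the car settles near the lane center
```

Note that `perceive`, `decide`, and `act` in isolation are the sequential string of functions criticized above; it is the loop around them that restores the cybernetic cycle.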
In accordance with the Human Information Processing theory, the human operator
is not described here as a closed black box, but as a set of perceptive, cognitive, and
behavioral functions allowing the driver to dynamically regulate their interactions with
the surrounding environment. In terms of cognitive activities, the mental representation
of the driving situation plays a key role in the functioning of the cognitive system. This
mental model, based on perceptive information extracted from the road environment,
corresponds to the driver's awareness of the driving situation, and therefore directly
determines all their decision-making concerning the relevant adaptive behaviors to be
carried out in the current driving context.
In accordance with the Russian theory of activity, this mental representation is
also based on pre-existing operative knowledge, practically learnt "in situation," and
corresponding to a subjective "introjection" of previous implementations of the activity in
similar driving contexts. Moreover, the driving task is performed by using an artifact
(that is, the vehicle), and the driving situation is directly transformed by the human
operator's activity (for example, the car's position on the road depends on the driver's
actions on the vehicle controls), just as the situation modifies the driver's cognitive
states (in terms of updating the mental representation, for example, or learning new
operative knowledge).
In accordance with the ecological theory of Neisser (1976), the driver's perception in
Figure 1.3 is based on a dynamic cycle in which (i) an active schema directs the
information-gathering activity (that is, top-down processes) and (ii) focuses the driver's
attention on a sample of the pieces of information currently available in the environment. Then
(iii), this active schema is continuously modified as a function of the new pieces of
information thus collected.
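A toy rendering of this three-step cycle, under the assumption that the schema can be reduced to a set of currently relevant categories and the environment to links between observed objects and the new categories they make relevant (for example, spotting a crossroads makes traffic lights worth sampling). All category names are invented for illustration.

```python
# Environment: attending to a category reveals further objects.
ENVIRONMENT = {
    "road": ["crossroads"],           # attending the road reveals a crossroads
    "crossroads": ["traffic_light"],  # attending it reveals a traffic light
    "traffic_light": ["red_signal"],
    "billboard": ["advertisement"],   # present, but never schema-relevant here
}


def perceptive_cycle(schema: set, cycles: int) -> set:
    for _ in range(cycles):
        # (i)-(ii) top-down: the active schema directs exploration, so only
        # schema-relevant information is sampled from all that is available.
        sampled = [obj for cat in schema for obj in ENVIRONMENT.get(cat, [])]
        # (iii) bottom-up: the collected information modifies the schema,
        # which will redirect attention on the next cycle.
        schema = schema | set(sampled)
    return schema


final = perceptive_cycle({"road"}, 3)
assert "red_signal" in final          # reached through schema-guided sampling
assert "advertisement" not in final   # available in the scene, but never attended
```

The second assertion is the ecological point: information selection is not haphazard but driven by what the current schema makes relevant.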
Last but not least, in accordance with the living cognition paradigm as defined
in this chapter's introduction, this Perception-Cognition-Action regulation loop,
integrating three internal underlying cycles (that is, the perceptive, memory,
and cognitive cycles of Figure 1.3) and explicitly based on pre-existing
experiences, is "by nature" a naturalistic model of everyday cognition. And we can
be pleased about that, because a car driver driving without a living cognition is generally a
near-dead driver.
In order to follow up the in-depth analysis of the human operator's mental activities,
it is, however, necessary to propose a more formal description of them, in the frame
of a theoretical model of the Human Cognitive System. To be comprehensive, a
model of mental activities should provide a formal description of the human cognitive
system at three main levels: (a) its cognitive architecture, (b) the cognitive
structures, like mental representations or operative knowledge, encompassed in
this cognitive architecture, and (c) the cognitive processes implemented on these mental
data structures for information processing and the deployment of reasoning. The
first two components will be discussed in this section, while the topic of modeling
cognitive processes, being intimately linked with the cognitive simulation purpose, will
be more particularly investigated in section 4.
Figure 1.4 provides a synthetic overview of the architecture of the human
cognitive system, distinguishing two main types of functional
memory structures: Long-Term Memory (LTM), which holds humans' permanent
knowledge, and Working Memory (WM), which contains mental representations of
the objective world.
Four main dynamic cycles regulate the internal functioning of the human cognitive
system. The perceptive cycle supports the human perception functions, allowing the
operator to actively explore the surrounding environment according to their current
needs and objectives (top-down perceptive exploration processes) and/or to integrate
new information into their mental representations (bottom-up cognitive integration
processes). The memory cycle plays a central role in terms of knowledge activation
(supporting categorization and matching processes that permit fitting pre-existing
knowledge to objective reality) as well as in terms of new knowledge acquisition
(to be stored in LTM). The cognitive cycle corresponds to a set of cognitive processes
that collectively handle the internal mental representations in order to make
appropriate decisions and then act in or on the current environment (like mental
representation elaboration, understanding, anticipation, decision-making, or action
planning). Lastly, the cognitive resources allocation cycle is in charge of dynamically
regulating and controlling the life cycle of the cognitive system, in accordance with the
attentional resources that are available, on the one hand, and those required by the
cognitive and perceptive functions at a given time, on the other.
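A minimal data-structure sketch of this architecture, assuming LTM and WM can be caricatured as dictionaries and the attentional resources as a single numeric budget that the resource-allocation cycle spends; all method and field names are our own, not the model's.

```python
from dataclasses import dataclass, field


@dataclass
class CognitiveSystem:
    ltm: dict = field(default_factory=dict)  # permanent operative knowledge
    wm: dict = field(default_factory=dict)   # current mental representation
    resources: float = 1.0                   # available attentional resources

    def memory_cycle(self, category: str) -> None:
        # Knowledge activation: pre-existing knowledge matching the
        # current situation is retrieved from LTM into WM.
        if category in self.ltm:
            self.wm[category] = self.ltm[category]

    def perceptive_cycle(self, category: str, observation: str, cost: float) -> bool:
        # The resource-allocation cycle gates perception: bottom-up
        # integration only happens if enough resources remain.
        if cost > self.resources:
            return False
        self.resources -= cost
        self.wm[category] = observation
        return True

    def learn(self, category: str) -> None:
        # New knowledge acquisition: WM content is consolidated into LTM.
        if category in self.wm:
            self.ltm[category] = self.wm[category]


cs = CognitiveSystem(ltm={"crossroads": "slow down, check left and right"})
cs.memory_cycle("crossroads")               # activate prior knowledge
assert cs.perceptive_cycle("light", "red", cost=0.3)
cs.learn("light")                           # store the new observation in LTM
```

The cognitive cycle proper (decision-making on the WM content) is deliberately omitted here; the sketch only shows how the four cycles share the two memory structures.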
The central structure supporting the living cognition in this model is the
Working Memory. However, the WM described here stems as much from the operational
memory concept formulated by Zintchenko (1966) as from the French school's notion
of representations constructed "for" action. These representations provide interiorized models of the task (Leplat, 2005),
constructed for the current activity, but which can be stored in LTM and reactivated
later in new situations, for future performances of the same task. Thus the role of
practical experience is decisive in the genesis of these mental representations: it is
in interaction with reality that the subject forms and tests his representations, in the same
way as the latter govern the way in which he acts and regulates his action (Vergnaud,
1985). For this author, the main function of mental representations is precisely to
conceptualize reality in order to act efficiently. In the next section we will propose a
specific formalism, called “driving schemas” (Bellet et al., 1999), able to provide a
computational model of drivers’ mental representations or, to be more precise, to
represent the operative knowledge underlying these mental representations when
activated (Bellet et al., 2007). However, the predominant idea we wish to emphasize
here is that these internal representations are not mental copies of objective reality.
They correspond to an explicit and implicit “awareness” one has of the “situation”
(Endsley, 1995; Smith and Hancock, 1995; Stanton et al., 2001; Bailly et al., 2003).
On the one hand, they contain only a tiny amount of the information available
in the environment: they focus primarily on information useful for acting
efficiently, as a function of the situational constraints. On the other hand, they can
also convey much more information than is available in perceptible reality (for
example, keeping in memory information perceived previously but henceforth
hidden, formulating inferences about potential future events based on precursory
clues, or anticipating the expected effects of an action in progress). What is more, as
illustrated by the work of Ochanine (1977) on Operative Images, these representations
may undergo functional deformations (due to expertise effects), via the accentuation
of certain characteristics (salient and significant) relevant to the activity, while ignoring
secondary details with respect to the driver's current goal. As dynamic mental models
of the surrounding environment, operative representations are useful for guiding
human performance in interaction with the world. A core function of mental
representations is to support cognitive simulations (implicit or explicit) providing
expectations about future situational states. Human operators continually update
their occurrent representations as and when they carry out the activity, thereby
ensuring the permanence of the activity through time, with respect to the history of past
events on the one hand, and future changes in the current situation on the other.
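A small sketch of this point: an operative representation can hold more than the currently perceptible scene, for example by retaining a pedestrian who has passed behind a truck and anticipating their position by mental simulation. The constant-speed extrapolation and all names are illustrative assumptions.

```python
class OperativeRepresentation:
    """A mental model that outlives the perceptible scene."""

    def __init__(self):
        self.objects = {}  # name -> (last_position, speed, visible)

    def observe(self, name: str, position: float, speed: float) -> None:
        self.objects[name] = (position, speed, True)

    def occlude(self, name: str) -> None:
        # The object leaves perceptible reality, but not the representation.
        position, speed, _ = self.objects[name]
        self.objects[name] = (position, speed, False)

    def anticipate(self, name: str, dt: float) -> float:
        # Mental simulation: expected future position, visible or not.
        position, speed, _ = self.objects[name]
        return position + speed * dt


rep = OperativeRepresentation()
rep.observe("pedestrian", position=2.0, speed=1.0)
rep.occlude("pedestrian")                       # hidden behind a truck
assert rep.anticipate("pedestrian", dt=3.0) == 5.0
```

The occluded pedestrian is exactly the kind of content that makes the representation richer than perceptible reality, while everything the driver never attended to is simply absent from it.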
Lastly, one characteristic of humans is their limited attentional capacity.
They therefore have to manage their cognitive resources according to their short-term
goals (for example, reacting within a given time to deal with the current situation)
and their long-term goals, while taking their experience into account to achieve the
task successfully (for example, what they know of their own capacities).
We have all undergone the experience of a highly automated activity, almost outside the
field of one's own awareness, which can be performed under a low level of control and
awareness. In this case we can talk of implicit awareness of both the external situation
and our own activity. Conversely, we can also all remember situations in which
real-time decisions were judged very complex, difficult, cognitively costly, and even
sometimes critical, requiring all our attention in order to carry them out successfully.
Activity is then based on explicit and voluntary decisions, even if the reasons underlying
them may be only partly conscious. In this case we speak of explicit awareness of the
situation and of one's intentions, decisions, and actions. These two levels of awareness,
implicit and explicit, co-exist in all our everyday activities. This dichotomy is,
moreover, well established in the scientific literature, e.g., in the Psychological Review,
with the distinction put forward by Schneider and Shiffrin (1977) between controlled
processes, which require attention and can only be performed sequentially, and
automatic processes, which can be performed in parallel without the least attentional
effort. It is also highlighted by Piaget (1974), who distinguished pre-reflected acts
on the one hand and reflected thought on the other. This distinction has also been classical in
Ergonomics since the work of Rasmussen (1986), who distinguishes different levels
of activity control according to whether the behaviors implemented rely on (i) highly
integrated sensorimotor abilities (Skill-based behaviors), (ii) well-mastered decision
rules for managing familiar situations (Rule-based behaviors), or (iii) more abstract and
generic knowledge that is activated in new situations, for which the operator cannot
call on sufficient prior experience (Knowledge-based behaviors). The same holds in the field of
Neuroscience, with Norman and Shallice's (1986) model of the control of executive
functions or, more recently, with Koch's zombie agents theory (2004), bringing to mind
somnambulism and animal cognition (Proust, 2003, speaks of proto-representations to
qualify this level of implicit awareness), in contrast with decisional and volitional
acts oriented towards reaching the subject's explicit aim. Lastly, the field of Artificial
Intelligence can also be mentioned with, for example, Smolensky's (1989) distinction
between levels of sub-symbolic (that is, neuro-mimetic) versus symbolic computation.
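Rasmussen's three levels of activity control can be caricatured as a simple dispatch, under the (simplifying) assumption that a situation is either covered by an integrated skill, matched by a known rule, or else handled by generic knowledge. The example situations are invented, not taken from Rasmussen (1986).

```python
def control_level(situation: str, known_skills: set, known_rules: set) -> str:
    """Select the level of activity control for a given situation."""
    if situation in known_skills:
        return "skill-based"       # highly integrated sensorimotor ability
    if situation in known_rules:
        return "rule-based"        # well-mastered rule for a familiar case
    return "knowledge-based"       # novel case: generic knowledge, costly reasoning


skills = {"lane keeping"}
rules = {"crossroads with traffic light"}

assert control_level("lane keeping", skills, rules) == "skill-based"
assert control_level("crossroads with traffic light", skills, rules) == "rule-based"
assert control_level("black ice on motorway", skills, rules) == "knowledge-based"
```

The ordering of the tests mirrors the cost gradient discussed above: the cheapest, most automatic level is tried first, and the attentionally costly knowledge-based level is the fallback for situations outside prior experience.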
Clearly, over the last 30 years, all sub-specialties of cognitive science have tended to
converge towards the idea that different modes of cognitive control of activity exist,
and consequently several levels of awareness. Figure 1.5 proposes an integrative
model of the human cognitive system combining all these different approaches to
awareness in a unified way, while also being fully compatible with the regulation-loop
approach to the living cognition, as explored above in Figure 1.4.
In a very familiar context, human activity mainly relies on an automatic control mode
requiring a low quantity of attentional resources, and is thus essentially based on an
implicit awareness of the situation. Implicit awareness does not mean "unconscious."
Although it is not possible for humans to explicitly explain what they feel or what
they think at this level, they are nonetheless aware of being involved in the situation
they are living. It is a kind of sensorial and proprioceptive awareness, in tune with the
environment, as if the subject were "immersed in the world." Dulany (1991) defines
this level of awareness as a "feeling of experience." In this mode there is no representation of the
self as an independent entity (Pinku and Tzelgov, 2006). According to this last definition,
implicit awareness is consistent with the ecological theory of Gibson (1979), who
postulates that perception of the self (for example, its spatial location and activity) is
embedded within the representation of the environment. This automatic control loop
of the executive functions (that is, the lower right circle within the automatic level in
Figure 1.4) is inaccessible to the operator's explicit awareness, except for the emergent
part that can be subject to decisional, intentional, and reflexive cognitive processing.
However, if the situation suddenly becomes abnormal or critical, or if we drift too
far from the limits of validity framing the routine regulation process (for example,
when an action does not bring about the expected effects), then the explicit level of
awareness is activated. Indeed, regarding the operational functioning of the human
cognitive system, the explicit and implicit levels are intimately embedded. On the one
side, even when taking a deliberate decision, a large part of implementing this decision
will be done by the way of skill-based behaviors, escaping to conscious control, but
nonetheless relying on an implicit form of awareness and activity monitoring to
guarantee that the goal defined explicitly is reached. From the other side, when mental
activity relies on heavily integrated and automated empirical know-how, this activity
as a whole still generally under the control of the subject’s “explicitable” intentions (that
is, likely to become explicit; even if they are not totally currently), that constitutes
one part of the explicit cognition in our model. From this point of view, implicit and
explicit awareness cannot be seen as two separate entities. As previously said, they are
two sides of the same coin. More than as alternative ways, these two cognitive levels
are intimately linked through a dialectic relation. One of the central processes at work
in this dialogue is the “emergence.” Emergence can occur on two time scales. In the
short-term, that is, the temporality of the current situation itself, it corresponds to the
irruption (more or less spontaneous) in the field of explicit awareness of information
previously not perceived or only taken into account implicitly. In the long-term, that
is, in the temporality of permanent knowledge acquisition, emergence is involved in
the human operator’s elicitation strategies, whether for learning by introspection or
self-evaluation of their own abilities (that is, Piaget’s reflected thought), transmission
of one’s competences to another operator, or justify one’s acts a er decision-making.
In symmetry with the notion of emergence of conscious thought and conceptual
representations, we wish to add the notion of “immergence” of this explicit thought
to operative know-how and implicit awareness. This is to take into account the
embedding process of decisional cognition in operational activity, and to describe
the sedimentation phenomenon of explicit thought in implicit knowledge, that is, its
operative embodiment in deep-cognition (we could also speak, with Anderson (1993), of knowledge 'compilation').
38 THE HANDBOOK OF HUMAN-MACHINE INTERACTION
Figure 1.6 describes the driver’s mental activities as simulated into the COSMODRIVE
(COgnitive Simulation MOdel of the DRIVEr; Bellet, 1998; Bellet et al., 2007) functional
architecture of the tactical level.3 The whole activity of the tactical module is dedicated
to the elaboration and the handling of mental representations of the driving situation.
Computationally, this module is implemented as a multi-agent system based on a
blackboard architecture: several Cognitive Agents process the information in parallel and
exchange data (read/write) via common storage structures: the blackboards. Cognition
is therefore here distributed, resulting from cooperation between several cognitive
agents. Each agent has only a part of the required expertise to find the solution. As
the solving procedure progresses, the product of each agent's reasoning is written in
the blackboard. It then becomes available to the other agents sharing this blackboard,
3 In accordance with Michon's (1985) typology, which distinguishes three levels in the driving task. The
strategic level corresponds to the itinerary planning and managing, the tactical level corresponds to the
current driving situation management, and the operational level concerns the driving actions carried out
to control the car.
and they can then use these results to further their own reasoning. They can also
modify the content of the blackboard, and thus contribute to the collective solving of
the problem. Communication between the cognitive agents of the tactical module can
also be done via message sending. Lastly, the complexity and speed of the processes
performed are dependent on the cognitive resources allocated to the solving agent by
the Control and Management Module. The quantity of resources allocated can change
according to the number of active processes, and their respective needs. A lower
quantity of resources can slow down the agent computation, change the solving
strategy, or momentarily interrupt the process underway.
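As an illustration of the mechanism described above, the blackboard architecture can be sketched as follows. This is a minimal toy example, not the actual COSMODRIVE implementation: the agent names, the percept strings, and the way the resource budget throttles computation are assumptions made for the illustration.

```python
# Toy sketch of a blackboard architecture: cognitive agents read partial
# results from a shared store, write their own contributions, and are
# throttled by a resource budget granted by a control/management module.

class Blackboard:
    """Shared storage that cognitive agents read from and write to."""
    def __init__(self):
        self.entries = {}

    def write(self, key, value):
        self.entries[key] = value

    def read(self, key):
        return self.entries.get(key)

class CognitiveAgent:
    """An agent holding only one part of the overall expertise."""
    def __init__(self, name, resources=1.0):
        self.name = name
        self.resources = resources  # allocated by a control/management module

    def step(self, blackboard):
        raise NotImplementedError

class PerceptionAgent(CognitiveAgent):
    def step(self, blackboard):
        # With fewer resources, fewer percepts are processed per step
        # (a crude stand-in for slower or interrupted computation).
        percepts = ["traffic_light=red", "lead_vehicle=40m"]
        n = max(1, int(len(percepts) * self.resources))
        blackboard.write("percepts", percepts[:n])

class AnticipationAgent(CognitiveAgent):
    def step(self, blackboard):
        # Reuse another agent's product to further this agent's reasoning.
        percepts = blackboard.read("percepts") or []
        if "traffic_light=red" in percepts:
            blackboard.write("anticipation", "stop_at_light")

bb = Blackboard()
agents = [PerceptionAgent("perception"), AnticipationAgent("anticipation")]
for agent in agents:            # a scheduler would normally interleave these
    agent.step(bb)
print(bb.read("anticipation"))  # stop_at_light
```

In a fuller sketch, the control module would reassign each agent's `resources` value between steps according to the number of active processes, which is how a lowered budget can slow or suspend an agent's solving.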
actions, and events, learnt from practical experience. Like Piagetian schemes,
driving schemas are capable of assimilation and accommodation, giving them the plasticity
to fit reality through a process of instantiation (Bellet et al., 2007). Once activated
and instantiated with reality, the active driving schema thus becomes the Current
Tactical Representation (CTR) of the driver, which will be continually updated as and
when s/he progresses into the current road environment.
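The activation-and-instantiation step can be sketched as follows; this is a hypothetical illustration under the definitions above, where the schema name, the slot names, and the percept values are all invented for the example rather than taken from COSMODRIVE.

```python
# Sketch of schema instantiation: a generic driving schema (long-term
# knowledge) is matched with the perceived situation to yield the Current
# Tactical Representation (CTR), which is then continually updated.

from dataclasses import dataclass, field

@dataclass
class DrivingSchema:
    """Generic knowledge: a class of situations with slots to be filled."""
    name: str
    slots: tuple = ("intersection_type", "traffic_light_state", "lead_vehicle")

@dataclass
class CurrentTacticalRepresentation:
    schema: DrivingSchema
    bindings: dict = field(default_factory=dict)

    def update(self, percepts):
        """Refresh slot values as the driver progresses in the situation."""
        for slot in self.schema.slots:
            if slot in percepts:
                self.bindings[slot] = percepts[slot]

schema = DrivingSchema("turn_left_at_signalized_crossroads")
ctr = CurrentTacticalRepresentation(schema)
ctr.update({"traffic_light_state": "red", "lead_vehicle": "40m"})
ctr.update({"traffic_light_state": "green"})  # updated while progressing
print(ctr.bindings["traffic_light_state"])    # green
```

The design point mirrored here is that the schema itself stays generic; only its instantiated copy in working memory (the CTR) carries situation-specific values.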
is, mentally) implementing the current active driving schema constituting the CTR. Such
a mental “deployment” of the driving schema at the tactical level can be costly in
cognitive resources, and only a limited part of the possible ARs can be examined at
this explicit level of awareness. Nevertheless, implicit anticipation abilities also exist
in the Operational Module, which are based on automatic parallel computations able
to identify the ARs most relevant in the current context, and to induce their
emergence into the field of tactical explicit awareness, thereby focusing the
attentional resources.
Then, the Decision agent plays a key-role in making a selection from among these
ARs and in instantiating the CTR of the Current State Blackboard as a consequence.
In detail, the decision-making process is supposed: (i) to examine the Anticipated
Blackboard for determining what branch of the AR tree is most adapted to the
present context, (ii) to select the next AR that immediately succeeds the driving zone
currently implemented by the Operational module, (iii) to instantiate accordingly
the corresponding driving zone of the CTR (which will then be transmitted
Operational module by the GTR agent), (iv) to restart the same procedure for
the following zones until reaching the tactical goal of the active driving schema.
Furthermore, the Decision agent also plays an active role in the AR derivation procedure.
This agent permanently supervises the work done by Anticipation: each new derived
AR is examined with respect to decision-making criteria (based on risk-taking values
and time cost/saving ratios) and, according to whether these criteria are satisfied or
not, Decision determines the best derivation strategy to be adopted by Anticipation
(for a more detailed presentation of the cognitive agents, see Bellet et al., 2007).
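The decision loop described in steps (i) to (iv) can be sketched as a walk over an anticipated-representation (AR) tree. The tree shape, the zone names, and the risk/time-cost scoring function below are assumptions for the illustration, not COSMODRIVE's actual criteria.

```python
# Illustrative sketch of the decision procedure (steps i-iv): examine the
# children of the current AR node, select the best next AR by a risk vs.
# time-cost criterion, instantiate its zone in the CTR, and repeat until
# the tactical goal (a leaf of the tree) is reached.

def select_path(ar_tree, score):
    """Greedily instantiate CTR zones along the best-scoring AR branch."""
    zones = []
    node = ar_tree
    while node.get("children"):                  # (iv) repeat until the goal
        best = min(node["children"], key=score)  # (i)-(ii) examine and select
        zones.append(best["zone"])               # (iii) instantiate the zone
        node = best
    return zones

def score(ar):
    # Hypothetical decision-making criterion: risk weighted against time cost.
    return ar["risk"] + 0.5 * ar["time_cost"]

ar_tree = {"children": [
    {"zone": "approach", "risk": 0.1, "time_cost": 1.0, "children": [
        {"zone": "crossing_fast", "risk": 0.8, "time_cost": 0.5, "children": []},
        {"zone": "crossing_safe", "risk": 0.2, "time_cost": 1.0, "children": []},
    ]},
]}

print(select_path(ar_tree, score))  # ['approach', 'crossing_safe']
```

A supervising Decision agent, as in the text, would additionally re-score each newly derived AR and steer the Anticipation agent's derivation strategy accordingly.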
accordance with the decisions taken at the tactical level. Then, driving behavior will
be practically carried out on the vehicle commands (pedals, steering wheel, and so
on), in order to progress along the driving path of the active driving schema, as
instantiated in the Current Tactical Representation. Moreover, another central function
of the operational module is to assess the risk of collision with other road users. A
specific reasoning is in charge of identifying a danger and, if necessary, activating an
emergency reaction. This operational regulation process of car control is based in
COSMODRIVE on two main types of hidden driving abilities (Mayenobe et al., 2002,
2004): the “Pursuit Point” strategy, related to the lateral and longitudinal control of
the vehicle, and the “Envelope Zones” used to regulate the interaction distances with
other road users and to assess the risks of collision with obstacles.
In short, the kinematic models of the vehicles in COSMODRIVE are based
on a "bicycle model" approach and on a dynamic regulation law of movement related
to the distance between the vehicle itself and its "Pure-Pursuit Point" (Figure 1.8).
The pure-pursuit point method was initially introduced for modeling in a simplified
way the lateral and the longitudinal controls of an automatic car along a trajectory
(Amidi, 1990), and then has been adapted by Sukthankar (1997) for driver’s
situational awareness modeling. Mathematically, the pure-pursuit point is defined
as the intersection of the desired vehicle path and a circle of radius l centered at the
vehicle’s rear axle midpoint (assuming front wheel steer). Intuitively, this point
describes the steering curvature that would bring the vehicle to the desired lateral
offset after traveling a distance of approximately l. Thus the position of the pure-pursuit
point maps directly onto a recommended steering curvature: k = –2x/l², where
k is the curvature (reciprocal of steering radius), x is the relative lateral offset to the
pure-pursuit point in vehicle coordinates, and l is a parameter known as the look-ahead
distance. According to this definition, the operational control of the car by
COSMODRIVE can be seen as a process of permanently maintaining the Pursuit
Point on the tactical driving path, at the speed assigned to each segment of the
current tactical schema, as instantiated in working memory (that is, the CTR).
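Under the definitions just given, the pursuit-point law can be sketched numerically. The curvature formula is the one quoted above; the wheelbase value and the bicycle-model conversion from curvature to a steering angle are illustrative assumptions.

```python
import math

# Sketch of the pure-pursuit steering law: the lateral offset x to the
# pursuit point, observed at look-ahead distance l, maps onto a recommended
# curvature k = -2x / l**2 (curvature = reciprocal of the steering radius).

def pure_pursuit_curvature(x, l):
    """Recommended curvature for lateral offset x (vehicle coordinates)
    and look-ahead distance l."""
    return -2.0 * x / (l * l)

def steering_angle(curvature, wheelbase):
    """Bicycle-model conversion from curvature to a front-wheel steer angle."""
    return math.atan(wheelbase * curvature)

# Pursuit point offset 0.5 m from the vehicle axis, observed 10 m ahead:
k = pure_pursuit_curvature(x=-0.5, l=10.0)
print(k)                                          # 0.01 -> 100 m steering radius
print(round(steering_angle(k, wheelbase=2.7), 4)) # 0.027 (radians)
```

Note how the look-ahead distance acts as a gain: the same lateral offset seen further ahead yields a gentler corrective curvature, which is what makes the regulation loop stable at speed.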
distant safety zone, corresponding to the regulation distance that the driver judges as
safe. These zones are called “relative,” in contrast to driving or perceptive exploration
zones of schemas (which are qualified as absolute zones), in so far as their dimensions
are not determined in relation to fixed components of the road infrastructure, but in
relation to the dynamics of the driven vehicle. They are effectively based on Time-to-
Target values (from 0.5 to 1.8 seconds) and therefore directly depend on the vehicle’s
speed and driving path. The size of these zones may also vary according to the driver’s
profile (for example, experience, personality traits, aggressiveness), the situational
context, the driving strategies currently carried out, and the traffic density.
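Because the zones are defined by Time-to-Target values, their metric size follows directly from the vehicle's speed, as a short sketch shows. The per-zone values below (0.5 s, 1.0 s, 1.8 s) span the range quoted in the text, but their assignment to named zones is an assumption made for the illustration.

```python
# Toy sketch of speed-dependent envelope zones: zone lengths derive from
# Time-to-Target (TTT) values, so they scale with the vehicle's speed.

def envelope_zones(speed_mps, time_to_target=None):
    """Return envelope-zone lengths (meters) for a given speed (m/s)."""
    if time_to_target is None:
        # Hypothetical zone names; TTT values within the 0.5-1.8 s range.
        time_to_target = {"close": 0.5, "intermediate": 1.0, "distant_safety": 1.8}
    return {zone: t * speed_mps for zone, t in time_to_target.items()}

print(round(envelope_zones(50 / 3.6)["distant_safety"], 1))   # 25.0 m at 50 km/h
print(round(envelope_zones(130 / 3.6)["distant_safety"], 1))  # 65.0 m at 130 km/h
```

A driver-profile or traffic-density effect, as mentioned in the text, could be modeled by passing a different `time_to_target` mapping for, say, a more aggressive driver.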
This “virtual skin” is permanently active while driving, as an implicit awareness
of our expected allocated space for moving. As with the body schema, it belongs to
a highly integrated cognitive level (that is, implicit regulation loop), but at the same
time favors the emergence of critical events in the driver’s explicit awareness (that
is, in the CTR). Therefore, the envelope zones play a central role in the regulation
of social as well as physical interactions with other road users under normal
driving conditions (for example, maintaining inter-vehicle distances), and in the
risk assessment of path conflicts and their management if a critical situation occurs
(triggering of emergency reactions).
By using (i) the functional architecture of COSMODRIVE described in Figure 1.6, (ii)
the cognitive agents that model the human cognitive processes, (iii) the driving
schemas as pre-existing operative knowledge activated and dynamically updated in
the frame of a functional mental representation matched with the road environment,
and (iv) the operational skills corresponding to the pure-pursuit point and the
lights. At first (that is, the first view on the left, corresponding to the driver's
mental representation at a distance of 30 meters from the traffic lights), the driver's
situational awareness is mainly centered on the near traffic and on the color of the
traffic lights, that directly determines the short-term activity to be implemented.
Then, as s/he progresses towards the crossroads, the driver's attention will be
gradually focused on the area ahead on the road, and the traffic flow occurring in
the intersection center will be progressively considered and partly integrated into
the driver's mental representation (that is, the second left view, at a distance of
10 meters from the traffic lights).
The advantage of the driving schema formalism—as defined in COSMODRIVE
model—is to combine declarative and procedural knowledge in a unified
computational structure. When combined with the operational regulation processes
linked with the envelope zones and the pursuit point, it is then possible to use such
driving schemas as a control structure, both for monitoring the operative activity
and for supervising the mental derivation of the "schema deployment," as
this process is implemented by the human cognitive system in order to anticipate
future situational states, or to mentally explore the potential effects of an action before
applying it. In accordance with the operative activity theories presented in the previous
sections, these cognitive structures guarantee a continuum between the different
levels of (i) awareness (implicit versus explicit) and (ii) activity control (tactical versus
operational), thereby taking full account of the embedding of operative know-how
(that is, the level of implementation) in the explicit and decisional regulation loop
of the activity.
After having discussed (in this chapter's Introduction and in the first section, centered
on Mental Activity Analysis) the limits of the "in vitro" approach for studying
human cognition through natural-science methods based on artificial laboratory
tasks (that is, founded on C. Bernard's experimental paradigm), and having argued in the third
section (dedicated to Cognitive Modeling) for the necessity of an ecological
"in vivo" approach for studying human operators' mental activities as they are
implemented in everyday life, the last section of this chapter (that is, Cognitive
Simulation) was dedicated to illustrating, through a cognitive simulation model
of the car driver, both the possibility of designing and the necessity of using "in silico"
simulations for in-depth investigation of natural human cognition.
Indeed, the challenge of studying living cognition requires apprehending
the dynamic functioning of the human cognitive system in interaction
with the surrounding environment in which the operator is currently immersed. Thus,
computational models able to virtually simulate the human mental activities on
computer are required. Among the key issues of living cognition are the mental
representations—here considered as pre-existing schemas instantiated with the
objective reality—that are dynamically elaborated and continually updated in the
working memory of the human operator before (that is, action planning phase)
and during their activity (that is, when it is practically carried out). Indeed, mental
representations and operative activity are actually intimately connected. In the
same way as the human activity fuels itself directly with mental representations, the
operator’s mental representations are also fuelled “by” the activity, and “for” the
activity, according to a double deployment process: cognitive and representational,
on the one hand, and sensorial-motor and executive, on the other.
The key mental structures supporting both human mental representations and
their operative activity are operative schemas. From a metaphorical standpoint, such
operative schemas can be compared to a strand of DNA. They “genetically” contain
all the potential behavioral alternatives that allow the human to act within a more
or less generic class of situations. Nonetheless, only a tiny part of these “genotypic
potentialities” will finally express themselves in the current situation—with respect
to the constraints and specific characteristics of reality—during the cognitive (that
is, instantiation and deployment of the mental representations), and then executive
implementation of this schema (via the effective activity carried out to drive the
car). And it is only through this dynamic process of deployment of operative mental
representations, involving the collective effort of several cognitive processes, that
certain intrinsic properties of the living cognition will emerge. From this point
of view, the scientific investigation of the living cognition cannot forego the use of
computer simulation of the human mental activities, without taking the risk of being
largely incomplete.
Therefore, to provide "in silico" simulations of the mental activities, considered in
their dynamic implementation and respecting their natural complexity as observed
in everyday life, is the central, innovative, and necessary contribution of
the modern cognitive science to the multidisciplinary scientific investigation of the
human cognition.
Although the intoxication of the truth contained in silicon is not always as
pleasant as the mystery discovered in a good bottle of old wine, it is nevertheless the
exciting fate of the researcher in cognitive science, for the coming centuries, to
investigate the "in silico veritas".
REFERENCES
Amidi, O, 1990. Integrated Mobile Robot Control, Technical Report CMU-RI-TR-90–17, Carnegie
Mellon University Robotics Institute.
Anderson, J.R., 1993. Rules of the Mind. Hillsdale, NJ: Lawrence Erlbaum Associates.
Atkinson, R.C., Shiffrin, R.M., 1968. Human memory: a proposed system and its control
processes. In: K.W. Spence, J.T. Spence (Eds.), The Psychology of Learning and Motivation, 2,
89–195, New York: Academic Press.
Baddeley, A.D., 1986. Working Memory. Oxford, Clarendon Press.
Bailly, B., Bellet, T., Goupil, C., 2003. Driver’s Mental Representations: experimental study and
training perspectives. In: L. Dorn (Ed.), Driver Behaviour and Training. Ashgate, 359–369.
Banet, A., Bellet, T., 2008. Risk awareness and criticality assessment of driving situations: a
comparative study between motorcyclists and car drivers. IET Intelligent Transport Systems,
2 (4), 241–248.
Bellet, T., 1998. Modélisation et simulation cognitive de l’opérateur humain: une application à la
conduite automobile, Thèse de Doctorat, Université René Descartes Paris V.
Bellet, T., Bailly-Asuni, B., Mayenobe, P., Banet, A., 2009. A theoretical and methodological
framework for studying and modeling drivers’ mental representations. Safety Science, 47,
1205–1221.
Bellet, T., Bailly, B., Mayenobe, P., Georgeon, O., 2007. Cognitive modeling and computational
simulation of drivers’ mental activities. In: P. Cacciabue (Ed.), Modeling Driver Behaviour
in Automotive Environment: Critical Issues in Driver Interactions with Intelligent Transport
Systems. Springer Verlag, 315–343.
Bellet, T., Tattegrain-Veste, H., 1999. A framework for Representing Driving Knowledge.
International Journal of Cognitive Ergonomics, 3 (1), 37–49.
Bisseret, A., 1970. Mémoire opérationnelle et structure du travail. Bulletin de Psychologie, 24
(5–6), 280–294.
Dulany, D.E., 1991. Conscious representation and thought systems. In: R.S. Wyer and T.K. Srull
(Eds.), Advances in Social Cognition. Hilldale, NJ: Lawrence Erlbaum Associates, 91–120.
Endsley, M.R., 1995. Toward a theory of situation awareness in dynamic systems. Human
Factors, 37 (1), 32–64.
Ericsson, K.A., Kintsch, W., 1995. Long-term working memory. Psychological Review, 102, 211–245.
Fodor, J.A., 1975. The Language of Thought, Thomas Y. Crowell (Ed.). New York.
Gibson, J.J., 1950. The Perception of the Visual World. Cambridge, MA: Houghton Mifflin.
Gibson, J.J., 1979. The Ecological Approach to Visual Perception. Boston, MA: Houghton Mifflin.
Gibson, J.J., Crooks, L.E., 1938. A theoretical field-analysis of automobile driving. American
Journal of Psychology, 51, 453–471.
Graf P., Schacter, D.L., 1985. Implicit and explicit memory for new associations in normal
and amnesic subjects. Journal of Experimental Psychology: Learning, Memory, and Cognition,
11, 501–518.
Hall, E.T., 1966. The Hidden Dimension. Doubleday, New York.
Johnson-Laird, P.N., 1983. Mental Models: Towards a cognitive science of language, inference, and
consciousness. Cambridge University Press.
Kontaratos, N.A., 1974. A system analysis of the problem of road casualties in the United
States. Accident Analysis and Prevention, 6, 223–241.
Koch, C., 2004. The Quest for Consciousness: A neurobiological approach. Englewood: Roberts and
Co. Publishers.
Leplat, J., 1985. Les représentations fonctionnelles dans le travail. Psychologie Française, 30
(3/4), 269–275.
Leontiev, A., 1977. Activity and Consciousness, available online at: http://marxists.org/archive/leontev/works/1977/leon1977.htm.
Leontiev, K., Lerner, A., Ochanine, D., 1961. Quelques objectifs de l’étude des systèmes
Homme-Automate. Question de Psychologie, 1, 1–13.
Mayenobe, P., 2004. Perception de l’environnement pour une gestion contextualisée de la coopération
Homme-Machine. PhD thesis, University Blaise Pascal de Clermont-Ferrand.
Mayenobe, P., Trassoudaine, L., Bellet, T., Tattegrain-Veste, H., 2002. Cognitive Simulation
of the Driver and Cooperative Driving Assistances. In: Proceedings of the IEEE Intelligent
Vehicles Symposium: IV-2002, Versailles, June 17–21, 265–271.
Michon, J.A., 1985. A critical view of driver behavior models: what do we know, what should
we do? In: L. Evans, R.C. Schwing (Eds.), Human Behavior and Traffic Safety. New York:
Plenum Press, 485–520.
Minsky, M., 1975. A Framework for Representing Knowledge. In: P.H. Winston (Ed.), The
Psychology of Computer Vision. New York: McGraw-Hill, pp. 211–277.
Neisser, U., 1976. Cognition and Reality: principles and implications of cognitive psychology. San
Francisco: W.H. Freeman.
Norman, D.A., 1983. Some observations on mental models. In: D. Gentner, A.L. Stevens (Eds.).
Mental Models. Lawrence Erlbaum, pp. 7–14.
Norman, D.A., Shallice, T., 1986. Attention to action: willed and automatic control of behavior.
In: R.J. Davidson, G.E. Schwartz, D. Shapiro (Eds.), Consciousness and Self-regulation. Advances
in research and theory. New York: Plenum Press, 4, pp. 1–18.
Ochanine, V.A., 1977. Concept of operative image in engineering and general psychology. In:
B.F. Lomov, V.F. Rubakhin, and V.F. Venda (Eds.), Engineering Psychology. Science Publisher,
Moscow.
Ohta, H. 1993. Individual differences in driving distance headway. In: A.G. Gale (Ed.), Vision
in Vehicles IV. Netherlands: Elsevier Science Publishers, 91–100.
Ouznadzé, D. 1966. Principes essentiels de la théorie d'attitude. In: A. Léontiev, A. Luria, and
A. Smirnov (Eds.), Recherches Psychologiques en URSS. Moscou: Editions du Progrès, 244–
283.
Piaget, J., 1974. La prise de conscience, Presses Universitaires de France, Paris.
Piaget, J., 1936. La Naissance de l’intelligence chez l’enfant. Delachaux and Niestlé.
Pinku, G., Tzelgov, J., 2006. Consciousness of the self (COS) and explicit knowledge.
Consciousness and Cognition, 15, 655–661.
Proust, J., 2003. Les animaux pensent-ils? Paris: Bayard.
Rasmussen, J., 1986. Information Processing and Human–machine Interaction: An approach to
cognitive engineering. Amsterdam, North Holland.
Richard, J.F., 1990. Les activités mentales: comprendre, raisonner, trouver des solutions. Paris:
Armand Colin.
Schilder, P., 1950. The Image and Appearance of the Human Body. New York: International
Universities Press.
Schneider, W., Shiffrin, R.M., 1977. Controlled and automatic Human Information Processing
I: Detection, search and attention. Psychological Review, 84, 1–88.
Smirnov, A., 1966. La mémoire et l’activité. In: A. Léontiev, A. Luria, and A. Smirnov (Eds.),
Recherches Psychologiques en URSS. Moscou: Editions du Progrès, 47–89.
Smith, K., Hancock, P.A., 1995. Situation Awareness is adaptive, externally directed
consciousness. Human Factors, 37 (1), 137–148.
Smolensky, P., 1989. Connectionist modeling: Neural computation/Mental connections.
In: L. Nadel, L.A. Cooper, P. Culicover, R.M. Harnish (Eds.), Neural Connections, Mental
Computation. Cambridge MA: The MIT Press, pp. 49–67.
Stanton, N.A., Chambers, P.R.G., Piggott, J., 2001. Situational awareness and safety. Safety
Science, 39, 189–204.
Sukthankar, R., 1997. Situation Awareness for Tactical Driving. PhD thesis, Carnegie Mellon
University, Pittsburgh, PA, United States of America.
Thom, R., 1991. Prédire n’est pas expliquer. Paris: Eshel.
Vergnaud, G., 1985. Concepts et schèmes dans la théorie opératoire de la representation.
Psychologie Française 30, (3), 245–252.
Vermersch, P., 1994. L’entretien d’explicitation (en formation initiale et en formation continue). Paris:
ESF Editeur.
Vygotsky, L.S., 1962. Thought and Language. Boston: MIT Press.
Wiener, N. 1948. Cybernetics or Control and Communication in the Animal and the Machine.
Cambridge, MA: The MIT Press.
Zintchenko P., 1966. Quelques problèmes de psychologie de la mémoire. In: A. Léontiev, A.
Luria, and A. Smirnov (Eds.), Recherches Psychologiques en URSS. Moscou: Editions du
Progrès, 7–46.