
PLEASE NOTE: This is an unrefereed draft. The final version of this paper was published in June 2016, in a special issue on Language processing in translation of the journal Poznań Studies in Contemporary Linguistics, edited by Dr Bogusława Whyatt (DOI 10.1515/psicl-2016-0013). A shorter version was presented at the TRA&CO symposium (Germersheim, April 27-29, 2016). Comments more than welcome: ricardo.munoz@ulpgc.es. This draft is made available for academic purposes. Please do not quote from this version.

Of minds and men
Computers and translators

Ricardo Muñoz Martín
PETRA Research Group
Universidad de Las Palmas de Gran Canaria

Abstract
Translation process research (TPR) efforts seem at times unconcerned with the
theoretical foundations they need to interpret their results. A pervasive
theoretical approach within TPR has been the mind-as-computer view. This
approach has fostered both mechanistic and functional explanations of the
translation process, including semantic notions of meaning, unrealistic
constructs of the mental lexicon, and reified notions of equivalence. Some
consequences of the approach are illustrated with discussions in the realm of
translation quality assessment (automated and combined metrics, rubrics based
on error categorization, and the impact of human variables and factors) and the
monitor model hypothesis and its recent developments. Alternative approaches
that draw from 4EA cognition are sketched that suggest that meaning is
encyclopedic; that it is a process that cannot be measured; that the mental
lexicon is only an abstraction of a part of (world-)knowledge; and that the
tendency to choose default translations follows from the very structure of the
brain/mind and the minimax principle.
Keywords
cognitive translatology; mind-as-computer metaphor; equivalence; translation
quality assessment; monitor model

Translation studies and linguistics have both become much more empirical in the last decades. For
translation researchers, computerizing the profession has opened up many possibilities to record
relevant aspects of behavior while on task. This is a very welcome development, but results from
observational and experimental studies are meaningless by themselves. In order to ascribe
meaning to data and fit empirical results together, researchers also need a theory. Theories
ground perspectives on phenomena, support structuring data in certain ways, and foster specific
interpretations to make sense out of test results. Theories also rule explanations in or out, because they usually stand on a set of assumptions that may logically filter some of them away. In
contrast, atheoretical research will only yield a collection of unrelated data sets where no
possibility can be rejected as a likely description or as a potential explanation, precisely because it
is not anchored to any particular view. In brief, empirical scholars unmindful of theory may simply
be running in circles, with no prospect of knowledge advancement in sight.
The word atheoretical, as used above, needs to be taken with a grain of salt. In the realm of
science, the word theory may mean something like 'a plausible or scientifically acceptable set of
general principles and theorems presenting a concise systematic view of a subject that are offered
to explain phenomena.' Yet in everyday use theory may also mean 'abstract thought, belief,
unproven assumption or speculation that works as the basis for action' (I'm paraphrasing
Webster's definitions here). That is, theories in this everyday sense are just our implicit, naive ways
to understand the world as laypersons, often through direct experience. Any and all humans have
such 'everyday theories' for virtually everything. For instance, we have a theory for doors, for the
ways to open doors, and also for correct behavior in certain situations, such as opening doors at
somebody else's house. Let us call these commonsense explanations folk theories, and thus
distinguish them from scientific theories.
From a different perspective, both folk and scientific theories often include (or even adopt the overall shape of) analogies or metaphors, whereby a difficult concept is understood or thought of in terms of another, usually simpler one (Lakoff and Johnson 1980; for metaphors in translation theory, see Martín 2008, 2010, 2011). In doing so, the two concepts are equated through the mapping or projection of a set of constituents of one concept onto those of the other, as if establishing virtual links of correspondence. Now, such mappings are nearly never exhaustive, let alone totally coherent, so they highlight certain aspects and obscure or ignore some others.1 Fortunately, we often have several alternative metaphors to understand some difficult concepts (e.g., a national economy can be thought of as an ecosystem, or a plant, or a patient) and vice versa: one and the same metaphor may be applied to different domains too (e.g., life is a journey, a relationship is a journey, a career is a journey). Metaphorical folk theories work grosso modo;
1 For example, the genetic "code" refers to a set of four DNA nucleotides or nitrogenous bases that, combined in groups of three and in certain orders, determine the amino acid sequences used in the production of proteins within a cell. This is probably as foggy for you as it is for me. But when we think of the genetic code as a method to store and transmit biological information, we can easily understand some more things. The four nucleotides, adenine (A), cytosine (C), guanine (G) and thymine (T), may be thought of as letters, and their combinations in triplets or codons may be thought of as syllables. There are 64 triplets, but only 61 syllables, because the other three are punctuation marks that establish where a word starts or ends. There are only 20 standard amino acids, or possible words, so different sequences of syllables are actually synonyms. Words or amino acids may be combined into genes or sentences. DNA transcribes these sentences into messenger RNA, which translates them from the cell nucleus to its ribosome. Let us stop here to consider that we probably understood most if not all of the explanation based on metaphorical thought, even though we still don't quite know what a 'nucleotide' is. However, this metaphorical way of thinking about genetics hides or obscures that there are some nonsense words, or introns, which do not codify any amino acid (gibberish, you will guess). But the genetic code of humans has some 3 billion nucleotides or letters versus only 23,000 pairs of genes or sentences. This means that each meaningful sentence is buried in pages and pages of meaningless words. This does not sound right any longer, because we would not expect any book to be like that. Also, most (but not always all) introns or nonsense words are very appropriately left behind before the sentences get translated, but erasing them all may result in the RNA staying in the nucleus, i.e., not fulfilling its mission or becoming untranslatable, which seems to contradict what we know about translating. Also, mutations (typos?) may lead to interrupted sentences or genes. Real broken sentences will often have no serious impact on global text understanding, whereas interrupted genes may lead to genetic diseases, such as cystic fibrosis, which, in turn, may affect several vital organs and hence the whole body. Again, these are anomalies in the analogical or metaphorical mapping of what we know of language codes onto what we know of genetics.

they are usually incomplete, and they may easily become confusing when several get entangled.
Under this light, and compared to folk theories, scientific theories (even metaphorical ones) can
be characterized as explicit, systematic, logical, empirically supported, full-blown knowledge
structures that are socially constructed and can be proved false with counterevidence.
Scientific theories may start out from assumptions present in folk theories, or simply build upon them. One such folk theory that would become a scientific theory is the notion that human
minds are like computers and that they work like computers do. When, in 1943, neuroscientists
McCulloch and Pitts suggested that cognitive capacities could be explained by computations
performed by the brain, the mind-as-computer analogy had actually been around for more than a
century (cf. Gigerenzer and Goldstein 1996).2 Then the cognitive revolution merged Turing's (1950)
metaphor of machines capable of thinking with John von Neumann's (1958) analogy between
computers and the nervous system, and identified brain with hardware and mind with software.
The first modern, explicit theory of mind-as-computer was laid out by philosopher Hilary Putnam
(1961) and developed in many works by his PhD student Jerry Fodor. Once Atkinson and Schiffrin
(1968) applied the mind-as-computer analogy to model human memory, the transit from folk to
scientific theory could be said to be complete.
Promoting a folk theory to scientific theory does not mean that the original one will fade out. At
least in the case of mind-as-computer, it is quite alive and well. For instance, Rodríguez (2006)
shows that it is productive in an emerging folk neuropsychology. Indeed, diverse mixtures of folk
and scientific views of the mind-as-computer are today mainstream in several disciplines, including
the philosophy of mind, cognitive psychology, artificial intelligence, and neuroscience; and they
often inspire further research efforts in many other areas, such as translation studies. In the case
of translation and interpreting, these views are also rooted in popular culture, with laypeople
often thinking that translating is just a matter of mastering the sets of symbols (languages, in this
case) and automatically applying an ample but finite set of fixed rules.3 Along the same lines, people
often think that the main problems of simultaneous interpreting are processing speed and
memory capacity, i.e., "how good the machine is". In order to advance towards a solid cognitive
translatology, we need to dispel some darkness around concepts related to the mind that should
be clear beforehand: "An understanding of cognition is a prerequisite for explaining many of the
practical tasks relevant in translation, since these tasks are based on thinking, learning, and
understanding" (Risku 2013: 2). We may not know what is correct, but we already know some
things that are not. We will now tackle a necessarily brief description of the mind-as-computer
view.
1. Family resemblances
2 The idea that thinking is computing can actually be traced back to Thomas Hobbes (1655 [1839: 3]): "By RATIOCINATION
[reasoning], I mean computation. Now to compute, is either to collect the sum of many things that are added together,
or to know what remains when one thing is taken out of another. Ratiocination, therefore, is the same with addition and
subtraction; and if any man add multiplication and division, I will not be against it, seeing multiplication is nothing but
the addition of equals one to another, and division nothing but a subtraction of equals one to another as often as is
possible. So that all ratiocination is comprehended in these two operations of the minde, addition and subtraction"
(Small caps and italics in original).
3 Perhaps nobody expressed it more straightforwardly than Warren Weaver in a 1947 letter to Norbert Wiener, as self-quoted in his memorandum on translation: "[…] one naturally wonders if the problem of translation could conceivably be treated as a problem in cryptography. When I look at an article in Russian, I say 'This is really written in English, but it has been coded in some strange symbols. I will now proceed to decode'" (Weaver 1949: 4).

We often think metaphorically of minds as computers. We are used to hearing people liken a computer's CPU to the central executive; the hard drive, to long-term memory; RAM memory, to working memory; the bus, to the spinal cord; keyboards, USB ports, scanners and microphones,
to senses; etc. Such assumed correspondences have eased mechanistic explanations of the mind,
that is, descriptions of the way it works in terms of its parts and processes, which are causally
interrelated. This perspective is rooted in many cultures. For example, current colloquial usage in
my Spanish dialect may use cambiar el chip ('replace the chip') to express "change one's
mindset/frame of mind."
Also, both digital computers and human brains and/or minds process information, perform
calculations, and draw inferences. These parallels have prompted functional accounts of the mind
in terms of computing, i.e., explanations of mental states as to their function and the ways they
work. Thus, if mechanistic explanations fail, we can always resort to watching inputs and outputs of the black box, and assume that some form of computation goes on inside that we just need to figure out. These accounts are pretty much the essence of mainstream folk theories of mind-as-computer. But there are important differences between a PC and a mind. For example, PCs have a fixed internal speed, but mental processes may display remarkable variation in time, as in reading,
remembering, problem solving or in tip-of-the-tongue phenomena. Such temporal variations are
one of the cornerstones of current quantitative research into translators' and interpreters'
cognition. The inputs and outputs of a PC are symbols, whereas human minds receive sensory
inputs and may produce motor outputs. We have no correspondence for a mouse. As I already
argued, these folk theories are incomplete at best.
In contrast, the computer in scientific computational theories of mind is not quite the same as in
the folk mind-as-computer analogy or metaphor. To begin with, in the scientific versions there is
no analogy or metaphor: the mind is a computer (Rescorla 2015). Yet this theoretical computer is
an abstract entity, not a regular PC. Some important differences between this abstract entity and
the human mind are that this theoretical computer works only serially (one step at a time;
contrast with Balling et al. 2014). No contextual or random effects alter its way of working or its
results. That is, whenever it departs from a given starting condition or initial state, it always yields
the same output. Also, and again unlike our brains, this theoretical computer has neatly separate devices to store and process information and (unlike PCs too) it boasts unlimited memory
capacity. Our brains, on the other hand, do store and process information; they do so both serially and in parallel; and our memory capacity is impressive, but limited.
The scientific computational theory of mind is often considered a representational theory,
because computing cannot be performed on real entities, but on their representations. Within this
scope, the mind would thus be an information-processing device that would work on symbols. The
nature of these symbols or representations is a subject of heated debate (conceptual overview in
Berkeley 2008). Are symbols elements of a language of thought or mentalese, different from
natural languages? Do symbols have a meaning by themselves or can meaning be attached and
detached from them? Can emotions be formalized in the same terms? Can perceptions and motor
processes be accounted for along the same lines? These questions have no easy answers, because
neuroscientific research can so far only tell us where in the brain something is happening, but not
what that something is (Poeppel 2012). These questions and their answers remain mostly in the
realm of the philosophy of mind, and entail a priori choices about the nature of thought.

Classical computational approaches focus on abstract thinking, problem solving and (logical)
language processing. Connectionist approaches, for their part, claim that representations are not symbolic, but rather codified at lower levels and in partial aspects, spread throughout the whole cognitive system. Connectionist computational theories do a better job at describing
perceptual and motor processes, although they are not necessarily anti-representationalist. There
is no room here to even try to list all mind-as-computer views, with their minute variants, their
immediate neighbors and their usual associates, such as the mathematical theory of
communication (Shannon and Weaver 1949).4 From a translation theoretical perspective, folk and
scientific computational theories of mind, of both classical and connectionist persuasions, can be
seen as a set of approaches linked by the common trait of using computationalism to explain the
mechanics of cognition and by various family resemblances.
Once we acknowledge folk theories and their relationships with, and role within, scientific
endeavors, we may revisit the notion of atheoretical research and tone it down. Strictly speaking,
under this light purely atheoretical research is not possible. Instead, what we may have is research driven by implicit and/or folk theories to various degrees. That is, sometimes
translation researchers may implicitly or explicitly adhere to a certain scientific framework, such as
the computational theory of mind, or consciously or inadvertently go with the wave and depart
from folk theories along the same lines, or a combination thereof (cf. Vygotsky 1986: 190-194 on
the parallel development of spontaneous and scientific concepts). This and a wide array of
variations make it very difficult to spell out a systematic critique here. In what follows, I will just
address some points common to many computationalist views that I deem important to redress in
cognitive theories of translation and interpreting.
2. Equivalence is not for humans
We cannot see a word without reading it. As soon as we listen to or read anything, meaning springs to our minds like a reflex. However, symbols are meaningless by themselves; they are just shapes that get their meanings from and within the thoughts of people who interpret them. Meaning never leaves the brain that creates and experiences it. That is precisely why our species developed natural languages and that is why we create symbol systems time and again. No "content" ever gets carried over from one language to another, and consequently there is no transfer whatsoever in translation (see Martín 2008, 2010). People systematically link meanings to symbols through experience, by grounding them in reality, i.e., by associating them with what they know and perceive through means other than language.
In contrast, computers can manipulate symbols, but they do not know what they mean. Meaning
in computers can only be pre-assigned. That is, for computers to be able to yield an output of TL
symbols when offered an input of SL symbols that may be judged equivalent, somebody needs to
enter them beforehand. This is achieved by linking symbols to other symbols (e.g., words to synonyms, definitions, translations), so it leads to an infinite regress. This is known as 'the symbol
grounding problem' (Harnad 1990).5 True, abstract concepts cannot be perceptually anchored to
anything. In their case, there is no important difference between humans and machines.
4 For an overview, see the entry on the Computational Theory of Mind by Rescorla (2015), in the online Stanford Encyclopedia of Philosophy. More detailed reviews and accounts from different perspectives in Horst (1996/2011), Miłkowski (2013) and Piccinini (2015).
5 Some recent developments claim to have solved it for AI (see, e.g., Steels 2008 vs. Bringsjord 2015), but here we are not so interested in the problem itself as in its consequences for translation theory.

Nevertheless, we manage to bootstrap abstract thought up from both body and environment,
precisely with tricks such as metaphors and analogies (the way you just understood this sentence).
The symbol grounding problem lies at the heart of important translation-theoretical obstacles
since the very beginnings of modern translation studies. Back in the 1960s, we borrowed views
and concepts from machine translation, where researchers had been unsuccessfully dealing with
how to link meaningless symbols in one language with meaningless symbols in another, i.e., they
faced the problem of establishing equivalence between linguistic units, so that the machine could
be left alone to translate on its own. The initial linguistic approaches to human translation took
meaning to be referential, logical, and propositional. Thus, it made sense to still focus on
equivalence: "A central task of translation theory is that of defining the nature and conditions of
translation equivalence" (Catford 1965: 21; see also Krein-Kühle 2014).
Still, even modest bilinguals know that, as soon as we process a language segment, candidates for equivalence in a different language often pop up in mind for very natural reasons (cf. Christoffels et al. 2006; Muñoz 2011). As early as 1969 it was obvious that "formal correspondence" was the
exception, and that humans were needed in order to determine whether in a certain situation a
given text segment in one language was to be considered equivalent to a text segment in a
different language. Nida and Taber (1982: 24) called it 'dynamic equivalence.' Once we get rid of
the computationalist approach to meaning, the question of equivalence becomes a very different one, namely, how it is that two or more people come to construct close meanings for language segments in two languages (cf. Halverson 1997: 222–226). In order to try to answer it, we first need to settle on a notion of meaning (sections 3 and 4).
3. Translation quality is in the eyes of the beholder
When extended to apply to human translators' mental processes, mind-as-computer views entail
some important assumptions about the nature of meaning. Because a practical limit needs to be set to infinite symbol regression, meaning is usually conceived of as a discrete, reified commodity associated with language units (usually lexical items, sometimes also morphemes and plurilexical sets), whereas syntax tends to be envisioned as only tenuously linked to meaning, if at all.6 We will
come back to syntax in sections 5 and 6; let us now add that in such views meaning also needs to
be assigned to discrete units so that they can be successively processed. Computationalist
approaches to translation tend to suggest, albeit often implicitly, that denotative and logical
meanings are language-bound, rather than human-dependent, and that we process denotative
and connotative meanings differently or separately, or that we first go for the logical or literal
meaning and only then construct figurative interpretations.
Nowhere are computationalist views on meaning more patent than in translation quality assessment. Analytical, quantitative approaches to information and, ultimately, to thought are
6 This is not always wrong from a cognitive translatological perspective. Linking meaning to separate words allows psychologists to call 'translation' what their informants do when they automatically utter full words in one language when primed with full words in another one, with some meaning correspondence (historical review in García 2015).
Priming refers to changes in the ability to identify or produce (mostly, language) items as a result of a specific prior
experience with an item (Tulving and Schacter 1990). It is believed to occur outside of conscious awareness. Research
based on translating lists of isolated words is often scorned by translation scholars because of an alleged lack of
ecological validity due to decontextualization, but such criticism does not seem to be empirically sustained (e.g., De
Groot 2000; Prior et al. 2009). Pairing words from different languages may not be all that is going on when translating,
but it is definitely part of it.

inherent in mind-as-computer views. There are plenty of evaluation frameworks for both human
and machine translation, but most of them share some basic flaws.7 Many of those frameworks
work at sentence level at best, and assign fixed values to errors across the text (and independently
of its length). As if an obvious error in the first paragraph had the same impact on readers as the same error on the last page! Automated metrics, such as BLEU and METEOR, compare MT output with human-made reference translations, so that text bits deemed somehow closer to the gold standard get higher scores. That is, they behave like a bad translation teacher of yesteryear, imposing a pre-established single solution over any possible variations, including correct alternatives, and offering an arbitrary score with no further explanation.
Furthermore, categorizing and weighing errors is not always a straightforward affair. Most
evaluation frameworks boast sets of rubrics as error categories that may be ambiguous, such as
"mistranslation" (cf., e.g., Koby 2015). Using error categories often needs some training or at least
some getting used to, and always entails imposing a certain perspective on the task. This should be
enough to raise concerns as to the validity of the approach; after all, readers are usually able to
make up their minds as to the quality of a text on their own.8 Evaluators (often language specialists) may become confused even with top-level categories (Popović and Burchardt 2011: 265–266) or they may have different approaches to how errors should be minimally corrected (Stymne and Ahrenberg 2012: 1789).
There seem to be plenty of strange variables affecting human evaluators. Mellinger (2014) found
that post-editors tend to over-edit and alter sentence structure and lexical choices in both exact
and fuzzy matches even when explicitly instructed not to enter any unnecessary changes.
Mellinger and Shreve (in press) concede that this may be partially explained as an effect of
segmentation (cf. Dragsted 2008) but also find evidence that informants might feel prompted by
the very situation of focusing on the evaluation of pre-existing copy. The observer's paradox might
be at work too: beyond TT correctness, informants may be motivated by their wish to show they
can do a good or better job than that of an MT system or other translators or post-editors, or that
they simply live up to the potential expectations of researchers. The experimental setting may also
take its toll. Muoz and Conde (2007) found several distorting effects on judgments when
evaluators successively worked on many TL versions from the same original. Graham et al. (2013)
found that interval-level scales commonly used to express estimations of quality levels in
experiments may actually foster inconsistencies and that continuous rating scales may improve
intra-rater agreement.
So why is it that judgments on translation quality are so sensitive to personal and environmental
circumstances? In one way or another, evaluation frameworks that adhere to computationalist
views use adequacy as criterion, and define it as "how much of the meaning expressed in the
reference translation is also expressed in a hypothesis translation" (Callison-Burch et al. 2007:
140). I would like to argue that in trying to measure meaning in the texts, researchers are just
7 Select reviews in Secară (2005), Callison-Burch et al. (2007) and O'Brien (2012) for (mainly machine) translation, and Pöchhacker (2005) and De Gregoris (2015) for simultaneous interpreting.
8 Waddington (2001) compared analytical and holistic approaches to the assessment of human-produced
translations and found they all had moderately strong, positive significant correlations. Conde (2008, 2012) studied the
behavior of evaluators using a very open approach with minimal instructions and found large differences between and
within groups (translator trainers and trainees, professional translators and potential addressees), as well as consistent
group tendencies and personal styles.

looking in the wrong place, because meaning is not there. Small wonder that "inter-annotator
agreement for fluency and adequacy can be called fair at best" (Callison-Burch et al. 2007: 148)
and that "There has been a worrying trend in recent MT shared tasks [...] of agreement between
annotators decreasing" (Graham et al. 2013: 33). This, in turn, suggests that combined human and automated metrics (where people correct MT outputs minimally to produce a new version on which standard metrics are computed), such as HBLEU, HMETEOR and HTER (Snover et al. 2006), are not going to be the solution.9 If there is a solution, I think it has to come from a different take
on the nature of meaning (within a different approach to the mind). Let us remember why.
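As an aside, the edit-rate idea behind TER and HTER can be illustrated with a plain word-level edit distance. The sketch below ignores TER's block shifts and is only meant to show the principle of counting minimal edits against a reference (the post-edited version, in HTER's case):

```python
def edit_rate(hypothesis, reference):
    """Word-level Levenshtein distance (insertions, deletions,
    substitutions) divided by reference length: a simplified,
    shift-free stand-in for TER/HTER."""
    h, r = hypothesis.split(), reference.split()
    d = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i in range(len(h) + 1):
        d[i][0] = i
    for j in range(len(r) + 1):
        d[0][j] = j
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            cost = 0 if h[i - 1] == r[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # delete a hypothesis word
                          d[i][j - 1] + 1,         # insert a reference word
                          d[i - 1][j - 1] + cost)  # substitute or match
    return d[len(h)][len(r)] / max(len(r), 1)

print(edit_rate("the cat is on the mat", "the cat sat on the mat"))  # 1 edit / 6 words
```

One substitution costs the same whether it garbles the main verb or a function word, which is exactly the equal-weight assumption that Snover et al. themselves concede is problematic (see footnote 9).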
4. A single, encyclopedic, underspecified meaning
The way meaning works is more complex and definitely more fascinating than the usual
computationalist accounts would have it. On the one hand, as noted before, meaning is not out in
language, but within our heads. We do develop some basic prelinguistic categories but, once we
are exposed to language, we start accommodating to it and channel our thoughts through
language, in ways that Slobin (1987, 1996, 2003) described as 'thinking for speaking,' i.e., language
has an impact on thought but only when one is thinking with the intention to use language. We
assign meanings to discrete linguistic units by carving them out of their social use (cf. Wittgenstein
1958: par. 43). The experience of communicative events where any given linguistic unit is used is
different for every speaker. Nevertheless, personal meanings overlap to a large degree, precisely
because language is a social tool and linguistic meanings get streamlined as they are used. Thus,
we are constantly updating mainly the fuzzy "boundaries" but even the "cores" of the meanings
we assign to words. This happens in very short terms (the second successive reading you perform on a text is not identical to the first one), in mid-range periods (we tend to enter corrections in translations that last week we judged perfect), and also in longer ones: that is why translations get old.
On the other hand, meaning is an individual experience, and even isolated lexical meaning is much
richer than what language can codify. When we read, e.g., the word bicycle we tend to think of a
specific one, probably one we had or recently saw, with attributes such as colors and noises that
are inextricably part of our mental experience, even though we know that by uttering bicycle
usually no particular color or bell sound is meant.10 Thus, language underspecifies meaning, i.e.,
the meanings that people have in their minds when communicating exceed by far what can be
hinted at with language. All in all, meaning is encyclopedic in nature. In an encyclopedic vision of
meaning, distinctions such as meaning vs sense, connotation vs denotation, literal vs figurative
meaning and the like are illusions (cf. Haiman 1980; Gibbs 1984, 1989) and point to another
mismatch in the mapping of computers onto minds.
Furthermore, computers perform digital computations (i.e., on discrete elements), whereas no
human mental state, whether related to perception, cognition, or action, ever stands still long
enough to become clearly distinct from other contiguous mental states: human mind processes
are continuous (Spivey 2006) and what they process is either analog or else both digital and analog
9 In fact, Snover et al. (2009: 259–260) admit that HTER does not discriminate between serious errors and minor edits, and also that it is difficult and taxing for human annotators, and present a new version of their metrics, TER-Plus or TERp.
10 Some other times we may just (re)build a prototype, or rather the relevant aspects of it, but we do not build and use
such prototypes in isolation, that is, we never think of a bicycle and nothing else. What is important here is that more
often than not we have a far richer mental experience than what we (aim to) communicate.

(Piccinini and Bahar 2013).11 From this perspective, there can be no fixed tertium comparationis
for originals and translations and no exact equivalence, because meaning cannot be quantified.
Meaning is not a thing; it is a process: a continuous mental, constructive flow that depends on each person's dynamic mental experience. Driving the point home, there are no (there cannot be) clear-cut differences between meaning and sense, and no part of meaning can be considered a cognitive "complement" of another, including emotions.12
Nevertheless, there may be ways to approach meaning indirectly and quantitatively, by measuring readers' reactions to texts. Doherty and O'Brien (2009) found, and Doherty et al. (2010) confirmed, that eye tracking may be used to evaluate MT output, because informants' average gaze time and fixation count are significantly higher when reading bad sentences.13 Koponen
(2012) and Stymne et al. (2012) found correlations between certain kinds of errors and cognitive
effort, as evidenced in eye-tracking data. This opens the interesting possibility of comparing data
for both good and bad originals and translations. In fact, taking advantage of the "unreasonable effectiveness" of data (Halevy et al. 2009), which in this case may adopt the form of statistical analyses of massive eye-tracking and keylogging logs, may help us get rid of rubrics and
classifications that are only obscuring the problem. Of course, you will probably think that
measuring cognitive effort when reading is not the same as measuring text meaning. We will
address that in the next section.
5. A mental lexicon no more?
Pavlenko (1999) distinguishes three kinds of word-related information in the mind: (a) oral and
written word form representations; (b) their essential meaning, or semantic information, which
may include some minimal syntactic information, such as word class; and (c) non-linguistic
information, or conceptual information, drawn from world-knowledge. In the previous section, I
argued for an encyclopedic notion of meaning; in other words, that there is no actual separation
between representations, semantic and conceptual information. Here, I would like to focus on
some other important consequences of computationalist notions of the architecture of language
in mind. Usually, computationalist accounts suggest or assume a specific system for language in the mind, composed of a mental lexicon and an autonomous syntax parser. The mental lexicon is
11 Here analog means 'continuous'. For the mind as an analogical device, see the discussion on the mental lexicon below.
12 Unattested partitions of meaning can only lead to confusion. For example, Lederer (1994/2014) implies that meaning is linguistic and lexical, as opposed to cognitive (2014: 4, 8–9); then she writes that sense is a conscious non-verbal state of mind, both cognitive and affective, the product of fusing language meaning with cognitive inputs (compléments cognitifs, p. 228). Then she describes such cognitive inputs as notional and emotional elements from world knowledge and contextual knowledge (pp. 223–224), and finally she states that cognitive reactions are always affective, and that therefore she uses cognitive to refer to both (p. 223).
13 Moorkens et al. (2015) found little inter-rater agreement for predicted post-editing effort and only moderate correlations between predicted and actual post-editing effort. This may be due to the extreme sensitivity of quality assessment to personal and environmental variables, as argued above, and also to the inherent vagueness of constructs such as 'cognitive effort,' 'temporal effort,' and 'technical effort' (Krings 2001). In many ongoing research lines, both time and technical effort are or should be used as indirect indicators of cognitive effort, which cannot be directly measured, except as operationalized, e.g., through eye tracking. In contrast, using them as direct indicators of human post-editing effort may lead to platitudes such as that text-segment or sentence length has an impact on time effort (cf. Popović et al. 2014). We experience time as something that happens, and only metaphorically can we think of it as subject to investment or effort. In other words, effort may be inferred when these dimensions of time and text manipulation, meaningless per se, are combined. By analogy with speed, so-called technical effort divided by temporal effort may be a good way to operationalize both text quality and post-editors' efficiency.
every person's linguistic repertoire, a long-term memory module working as a repository to store
Pavlenko's information types (a) and (b). The mental lexicon is thus thought of as separate and
different from a general world-knowledge repository.
Earlier computationalist models differed as to whether lexicons were a list of morphemes, of only
irregular words, of all lemmas, or of all word forms. They also varied as to whether there were
separate lexicons for oral and written word forms, for language reception and production, and for
different languages. The currently dominant Revised Hierarchical Model (RHM, Kroll and Stewart
1994) suggests that there are separate repositories for the representations of word forms from
different languages but a single repository for their meanings. In the RHM, translation equivalents
in the two languages are directly connected. L2 learners would start by accessing concepts
through their L1 until their language skills are good enough for them to get rid of that detour.
There is little evidence for the existence of different lexicons for different languages as entailed by
the RHM (Brysbaert and Duyck 2010). Words from different languages seem to belong together in
a single repository and they appear to be equally accessed, even with languages with very
different scripts (e.g., Moon and Jiang 2012). From a translation-theoretical perspective, our direct concern should probably not be whether models of lexical access such as the RHM or its competitor, the bilingual interactive activation plus model, or BIA+ (Dijkstra and van Heuven 2002), offer a better fit to some partial sets of data, but whether the big picture of language and cognition they help to build coheres with what we know about translation and interpreting activities.14 It does not. For a start, lexical lookup, access, or retrieval do not seem the right words
to describe mental lexical processes. We certainly cannot believe that somewhere in the brain
there is a fenced host of words with ready-made meanings waiting to be called into the battlefield.
What happens seems closer to activation and perhaps combination and even construction. In any
case, Pavlenko's information types (a) and (b), word form representations and their meanings, do not seem as stable as she would have them.
On the one hand, phenomena observed in language-impaired individuals, malapropisms,
spoonerisms, and results from priming experiments suggest that lexical information does not
come in closed, stable sets but that it rather consists of discrete parts (Foss and Hakes 1978: 156).
Meaning seems to be the transient result of the interaction of at least four complementary neural
processes that link, activate, or inhibit referential, combinatorial, emotional-affective, and abstract
information (Pulvermüller 2013). Also, language units may establish literally hundreds of connections with other language units, often based on similarities and contrasts of all kinds, but also simply through the interpreter's unique experience, thanks to our basically analogical minds (cf. Hofstadter 2001; Itkonen 2005). Such empirical evidence as we have points to words
14 Halverson (2015) offers an interesting discussion of several models of bilingual lexical representation. A good candidate model of language comprehension for cognitive translatologies (those based on (aspects of) 4EA cognition, cf. Muñoz 2010a, 2010b) is Zwaan's (2004) Immersed Experiencer Framework (IEF). This model includes notions suggested in previous models that are crucial for translation theory, such as frequency effects that will modify the access, use, and preference for language forms (e.g., Schmidt 2014); spreading activation, or changes in the activation levels of language elements due to a previous change in the activation of another language unit connected to them (Shastri 2003; Samani and Sharifian 1997); and inhibition or control of different language element competitors (in order to keep register homogeneous, different languages apart, etc.; e.g., Price et al. 1999; Linck et al. 2008; Humphreys and Gennari 2014). The IEF emphasizes the relationship between language and sensory and motor-driven mental experiences, and formulates three processes of activation, construal, and integration of information through functional brain networks.
as prompts or "stimuli, whose 'meaning' lies in the causal effects they have on mental states"
(Elman 2004: 306).
On the other hand, the nature and role of the lexical items, the pointers or cues we use to
dynamically activate, construe, and integrate meaning may also be called into question. McCarthy
(2006) finds that many word clusters, such as but at the end of the day, know what I mean, etc.,
display "semantic and pragmatic integrity" (they work as units) and that some are used more
frequently than single words accepted as belonging to the core vocabulary of English. Elman
(2009) shows that, when interpreting syntactic structures, word-specific information plays a
crucial role early on. The fuzzy boundaries between current English syntax and the lexicon are also apparent in the evolution of the language: Broccias (2012) finds developmental similarities between lexical items and constructions and, more generally, an interplay between the two. These fuzzy boundaries are not exclusive to English, and can be noted in languages that are typologically quite different, such as Finnish (Nenonen and Penttilä 2014).
All these data suggest that the distinction between syntax and the lexicon should be done away with, as proposed by several linguists, such as Croft (2001: 14–25) in his Radical Construction Grammar. In Langacker's (2008: 14) words: "lexicon, morphology, and syntax form a continuum fully reducible to assemblies of symbolic structures." In fact, Elman (2011) argues that a mental lexicon, in the traditional dictionary/list view, may be totally unnecessary, for contextual information will impact both meaning and grammatical structure early on. That is, neither syntax nor the lexicon can be separated from world knowledge. Syntax and the mental lexicon are convenient constructs to talk about some aspects of language, but it is doubtful that they have a separate mental existence.
The picture that emerges from the changes in perspective summarized above is that of a brain
that may have just one store for information and pointers of all kinds that are used in different
ways to subserve behavior through dynamic knowledge structures (Delaney and Austin 1998;
Campitelli 2015). If we agree that we holistically interact with language-in-situation to build the
interpretation of a text, then our measurable interactive behavior (e.g., through eye tracking) may
be worked out as an index of meaning construction (see also Dragsted 2012) because it is all there
is.15 This new picture is far removed from the syntax/lexicon divide and the isolated, stable form/meaning pairings assumed in a modular, dedicated semantic mental lexicon, all of them necessary components of most mind-as-computer analogies. Thus, it has important consequences
for the ways several translation research topics are being addressed. The notion of literal translation, loosely understood as a rendering that "follows closely the form of the source language" (Larson 1984: 10), is a case in point.
6. Literal translation
In recent years, there has been a revival of the concept of literality in empirical research on the cognitive processes of translating, in the wake of Tirkkonen-Condit's (1995) monitor model
15 Another area where meaning is more cogently being put to the test is the translation of surveys and questionnaires in psychology, medicine, education, and social studies. TT adequacy to purpose is tested through cognitive testing (usually, semi-structured interviews with pilot test takers and focus groups to determine whether questions are correctly understood and also whether questions capture researchers' scientific intent), statistical tests, and cross-referencing with results from other tests and surveys. There is a rich literature on the subject that only seldom draws from translation studies works. See, e.g., Hambleton and Zenisky (2010), Harkness et al. (2010), International Test Commission (2010).
hypothesis, a computationalist model (in my view) that she traces back to Ivir (1981: 58), who
suggested that:16
The translator begins his search for translation equivalence from formal correspondence, and
it is only when the identical-meaning formal correspondent is either not available or not able
to ensure equivalence that he resorts to formal correspondents with not-quite-identical
meanings or to structural and semantic shifts which destroy formal correspondence
altogether.
According to Tirkkonen-Condit's monitor model hypothesis, literal translating would be "[...] a default rendering procedure, which goes on until it is interrupted by a monitor that alerts about a problem in the outcome" (1995: 408). Her functional/mechanistic approach becomes clearer later on in the same text, when the procedure (1995: 408, 409) becomes a tendency (1995: 407, 409), and then an automaton (1995: 409–412). If tendency does not necessarily entail willingness but only an inclination or a natural or prevailing (pre)disposition to proceed in certain ways, the automaton does not even entail conscious awareness, because it is 'a machine or control mechanism designed to follow automatically a predetermined sequence of operations or respond to encoded instructions' (Webster's dictionary). Tirkkonen-Condit suggests that this automaton
operates at the lexical and syntactic levels to generate translated language [excerpts] which are
linguistically acceptable and contextually appropriate as translation equivalents. The literal
translation automaton is also said to have a monitoring mechanism, whose aim is to prevent
incorrect literal renderings by triggering off conscious decision-making to solve translation
problems (pp. 407–412). Automation would also affect this monitor, so that traces of its operation
would not be as frequently observable in the processes and products of experts as in those of
novices and non-experts.
Carl and Dragsted (2012) extend Tirkkonen-Condit's monitor model hypothesis to state that the
literal default rendering procedure implies parallel, tightly interconnected text production and
comprehension processes. Following Ruiz et al. (2008), they distinguish between shallow/parallel vs. deep/sequential processes in translation, with the translators mostly engaging in the first type (through lexical and syntactic code-to-code links) and switching to the second one when the monitor prompts them to do so. In other words, they apparently add multitasking, attentional, and motor control(s) to Tirkkonen-Condit's automaton. Carl and Dragsted observe that re-reading and other problem-solving indicators occur right before or during the typing of TT
segments, and they view them as production problems because "translations of a phrase are
already typed before the translator exactly knows how to render the remaining part of that
phrase" (2012: 143). Thus, they assume that meaning hypotheses are only constructed to the
extent and at the moment they are needed to continue the task at hand, and surprisingly conclude
that the ST meaning is often only elaborated and tested in the writingthat is, "that
comprehension does not precede, but follows text production" (Carl and Dragsted 2012: 144).
Schaeffer and Carl (2014) build on the works summarized above and set out to develop a metric
for measuring literality of translations and assess the effort incurred by translators who deviate
from literal translation. They define the ideal literal translation as the one that satisfies the
16 Precedents of computationalist views in Tirkkonen-Condit's own work may be found at least in a 1991 article, where she states that texts are composed of lexical and relational propositions and that lexical propositions are in the text, in that they are lexically and grammatically signalled, apparently unambiguously and with no need of inferential work by the reader.
following three criteria (2014: 29–30), which the authors admit ignore a wide range of phenomena and kinds of equivalence:
(a) Word order is identical in the source and target languages.
(b) Source and target text items correspond one-to-one.
(c) Each source word has only one possible translated form in the given context.
In order to estimate the translation effort for lexical selection (criterion c), Schaeffer and Carl
count the number of different translation realizations for each word in a given text segment that
are present in the CRITT translation process research database (Carl 2012). Criteria (a) and (b) are
operationalized through a local metric that quantifies the similarity of the ST and TT segments by
mapping both texts and adding or subtracting points to each ST or TT word, depending on its
contrasted order in both texts. The results are numerical cross values, which indicate the minimum
length of the progressions and regressions on the reference text required to generate the output
string. According to Schaeffer and Carl (2014: 36), this strategy provides a quantifiable definition of literal translation: the more syntactic reordering takes place between source and target text, the less literal the translation becomes. They also assess translation effort and find that gaze activity and production time are inversely proportional to the literality of the produced translations.
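As an illustration only (this is not Schaeffer and Carl's actual implementation; the function name and toy alignments are invented for the example), the intuition behind the reordering component of such a literality metric can be sketched in a few lines of Python. Assuming a one-to-one word alignment is already given, a fully monotonic, word-for-word rendering scores zero, and every displacement of a source word in the target order adds to the score:

```python
def reordering_score(alignment):
    """Toy literality score for a sentence pair.

    `alignment` maps each target-word position to the position of its
    aligned source word (a one-to-one alignment is assumed to exist).
    A fully monotonic alignment scores 0; the more the source words are
    reordered in the target, the higher the score.
    """
    return sum(abs(tgt_i - src_i) for tgt_i, src_i in enumerate(alignment))

# Monotonic, word-for-word rendering: maximally literal by this measure.
print(reordering_score([0, 1, 2, 3]))  # 0

# Source words 1 and 2 swapped in the target: some reordering penalty.
print(reordering_score([0, 2, 1, 3]))  # 2
```

Real metrics of this kind must additionally handle one-to-many and null alignments, sentence length, and lexical variation (criterion c), which this sketch deliberately ignores.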
So we have a purported automaton in our minds (or at least a mechanism, in a weaker view) that generates translations by linking stored lexical and syntactic forms of two languages while we are unaware of the meaning of the segments we are translating until we start typing them, when we may spot some kind of mismatch, realize the meaning of what we are dealing with, and move to a different (problem-solving) processing mode.17 And literal translations would take more or less cognitive effort, which would be quantified with relative accuracy by ascribing arbitrary values to lexical items, depending on the differences between their positions in the linear word orders of ST and TT. I am not going to argue against the scarce psychological reality that this automaton seems to display, since I am not sure what kind of reality psychological reality is and how it can be tested. There are, however, many questions to be raised about the details of this automaton, in view of what we have laid out as alternatives to computationalist approaches. Let us lay down some of these questions as to the relationships between the automaton and the rest of (1) the mind, (2) the body, and (3) the environment it interacts with, as represented by texts.
First, how plausible is this automaton as a part of the human mind? How could we account, within the literalist automaton, for the counter-intuitive fact that students and experts alike tend to avoid cognates in translation, even when they are correct (Jakobsen, Jensen and Mees 2007; Malkiel 2009)? Also, how autonomous would this automaton be from the rest of mental activities and faculties? Lehr (2014) and Rojo and Ramos (in press) found evidence that changes in translators' emotional states may lead to translations becoming more or less literal (see also note 4).
Also, how would people be able to read but manage to avoid dealing with meaning, precisely
when they are reading to translate? Reading for translating focuses on meaning, and that goal
seems to lead to deeper reading, in terms of mental processing (Shreve et al. 1993; Castro 2008;
17 In what follows, I will be using automaton, as defined above, because it is simpler to use just one term. Automaton is the term used by Tirkkonen-Condit (1995), from which the other works depart to add more elements or to measure its effects. Note, however, that Carl, Dragsted, and Schaeffer never use this term, but rather the weaker one, mechanism.
Jakobsen and Jensen 2008). It does not seem possible to understand something without thinking
of it. In the case of language, is the opposite true? Can we think of words without understanding
them, even if superficially or incorrectly? In particular, if "[...] comprehension and production are
constantly present in translation as well [...], and cannot easily be separated as two distinct
activities"; and if "The translator [...], always has production in mind during comprehension"
(Dragsted 2010: 43), how can people start typing without realizing the meaning of what they are
doing? If meaning is built only when typing, how can Carl and Dragsted explain that they find approximately 3.5 times more ST fixations when translating than when reading for comprehension, an important increase that Dragsted assumes is likely to be the result of "the planning of TT production and the effort of transforming ST expressions into meaningful TT" (Carl and Dragsted 2012: 131)?
Second, in principle, an automaton should always yield identical results for the same input, but
this is rare in humans. Positing an automaton or a mechanism to translate a phrase is one thing;
putting it at work for a full day may be a totally different matter. An automaton should not get
tired, but people do. After a visual attention task for 3 h without rest (a time span many
professional translators often devote to working on a text), Boksem, Meijman and Lorist (2005)
found that mental fatigue results in a reduction in goal-directed attention, leaving subjects
performing in a more stimulus-driven fashion (in our case, it would be word-for-word translating).
That is, translating may become more literal when you are tired. This could in fact be considered proof of the existence of a devoted mechanism, one that would become more apparent when other cognitive processes decay.
However, mental fatigue affects executive control (van der Linden et al. 2003) and therefore
planning, motor control, task-switching, and assessment processes that the automaton and its
monitor would need to perform anyway. Moser-Mercer et al. (1998: 60–61) found that prolonged
interpreting turns increased the number of meaning errors and also fostered interpreters' lack of
awareness of drastic decreases in quality: "It appears that interpreters, at least on prolonged
turns, cannot be trusted to assess their performance realistically". Even if we suggest that the
automatons for translating and for interpreting are different, it is common knowledge among
professionals that mental fatigue will indeed lower the quality of translations, among other things,
by making them too literal.18
Third, how can we reconcile the workings of the automaton with the fact that experienced readers, such as professional communicators, tend to fix their eyes only on the full (content) words in a text, practically ignoring empty (function) words? Also, how would the automaton deal with texts? How would it
handle plurilexical language units and figurative language that people seem to treat and translate
as wholes (cf. Grauer 2009)? If we need to weed out all sentences with no figurative language, no
plurilexical units, and no wrong literal renderings, could it be the case that sometimes there may
just be a few sentences left in a text? Would we have a devoted automaton to deal with such
meager leftovers? Would it compensate for ignoring suprasentential (e.g., textual) features? In
trying to compute degrees of literalness, how can dialectal differences in both word and grammar
usage be accommodated? Can we posit that a translation is more literal in one dialect than in
another one, when the solutions are identical? Would the automaton be register-blind, or would it choose a gold standard, as researchers did to develop the BLEU and METEOR metrics?
18 In any case, a purported separate automaton for (simultaneous) interpreting would indeed be more "literalistic" than the one for translation (cf. Shlesinger and Malkiel 2005; Jakobsen, Jensen and Mees 2007).
All in all, this computationalist account of the way translation unfolds is unconvincing, especially because it does not explain why and how we should have such an automaton, how we acquire it, or how it evolves.
I suspect that the literalist automaton can only be a metaphorical construct, and a faulty one, for
that matter, to describe the workings of the translating mind. Of course, we would not be able to
ask these questions if this research line had not been tried out, but I think this remarkable effort
was based on a wrong notion of meaning within a view of mind-as-computer. There must be
easier explanations for the phenomena it tries to account for.
An alternative, non-computationalist account of so-called literal translation would depart from Levý's (1967) minimax strategy: it is simply natural that translators and interpreters will seek the maximum effect with the minimum effort when performing a task that may often demand more cognitive resources than those available. We only have one vast memory store, where language form representations work as pointers to recurring patterns of neural processing that are facilitated through entrenched cognitive routines (Langacker 2008: 216). In order to navigate through this store and perform different operations (what we call access and retrieval, working memory, etc.) we need literally thousands of ways to organize it, many of which are sub-symbolic, i.e., they link details of language units, such as aspects of shape, register, grammatical features, and language membership (Paradis 1998: 50–51).
Language input will promote the activation of all possible patterns related to such input, and all elements that do not cohere with the perceived situation and are not reinforced by mutual interaction will be deactivated in about 200 ms (Rayner and Clifton 2002: 281–282). Representations in several languages will also be activated (de Groot and Christoffels 2007: 18–21). Keeping languages apart entails an inhibition effort (e.g., Yeigh 2012), and phenomena like false friends, cross-language contrasts, and extralinguistic factors make (professional) translating and interpreting tasks that need to be learned and trained. Within this approach, we
are not talking about literal translation any longer, but rather about default translation as
described by Halverson (2015: 135): formulations that are chosen faster, more easily and (most
likely) more often than others.
Default translations do not necessarily imply word-by-word processing or translating. Quite the contrary: the most usual default translations might be those of plurilexical chunks. Language proficiency and expertise have a direct impact on bilingual language control (Christoffels et al. 2006) and, together with mental fatigue, they help explain phenomena such as calques and the avoidance of appropriate cognates. Re-reading may tend to occur right before TT segment typing simply because translators want to refresh mental activations. Other behaviors taken as problem-solving indicators during the typing of TT segments may be related to translators' strategic management of cognitive resources, i.e., to their wish to offload provisional solutions onto the page in order to free mental resources they may then use to evaluate the quality of such provisional solutions. This is a simpler, more plausible explanation than the computationalist view we saw before.
7. Concluding remarks
Meaning is a process that we experience while we try to make sense of what surrounds us by
making use of all we know. Some of it can be codified into language, and repeated use will smooth
off the personal, rough edges of what we can communicate. Restrictions only apply when we
prepare to use language, when we think in order to translate. Our mental experience is holistic,
and far richer. That is how we come to posit hypotheses of meaning correspondence between text
segments in one or more languages. That is how we can venture that, in certain situations, from
certain perspectives, they may be judged equivalent by and for some people. A millennium-old
tradition of the written word as carrier of truth, rooted in several religions, would become in the
mid-20th century part of the mind-as-computer metaphor and distort our very perception of the
basic realities of translating and interpreting in several ways (Muñoz, in press a).
I could only superficially address some aspects of computationalist views on translation and
interpreting quality assessment, and the very process of translating as represented by the literal
translation automaton. In the first case, I argued that quality is much more complex than just
slicing off a little piece of experience and attaching it to a word for everyone to see and agree on
its numerical value in a language chain, for all purposes and across the board. In the second one, I
suggested that default translating is actually much simpler (and simply human), with no modules
or mechanisms that will make us partial cyborgs who do not even know what they are processing
until something goes wrong. In the words of Lamb (1999: 2), "[...] information being represented
as symbols, put in places from which they can be later retrieved, moved from one place to
another. It is the theory of the five-year-old expressed in only slightly more sophisticated terms." I
would like to underscore, however, that this is not a general rejection of computationalism,
because it also has some merits I could not focus on.
Much of what I have discussed in this article is known to many people who deal with the interface between cognition, language, and communication; not so much, I think, among researchers of the cognitive aspects of translation and interpreting. Practitioners will perhaps have glimpsed here and there some confirmation of the intuitions they have when they look inwards to find out about what they do. Paraphrasing Tabakowska's (1993: 20) reflection about cognitive linguistics, the merit of cognitive translatologies will not consist in making new discoveries, but in offering a theoretical framework for a systematic and coherent description of old and well-grounded intuitions. In any case, in spite of a tradition that can be traced back to the birth of translation studies (Muñoz, in press b), a tradition that has seen the School of Leipzig passing the computationalist torch to the Paris Interpretive School, and then to translation process research, I hope to have made the point that translators' minds are not computers, and that it may be time to move on.

References
Atkinson, R.C. and R.M. Shiffrin. 1968. "Human memory: A proposed system and its control processes". In: Spence, K.W. and J.T. Spence (eds.), The Psychology of Learning and Motivation, vol. 2. New York: Academic Press. 89–195.
Balling, L.W., K.T. Hvelplund and A.C. Sjørup. 2014. "Evidence of parallel processing during translation". Meta 59(2). 234–259.
Berkeley, I.S.N. 2008. "What the <0.70, 1.17, 0.99, 1.07> is a symbol?" Minds and Machines 18(1). 93–105.
Boksem, M.A.S., T.F. Meijman and M.M. Lorist. 2005. "Effects of mental fatigue on attention: An ERP study". Cognitive Brain Research 25. 107–116.
Bringsjord, S. 2015. "The symbol grounding problem remains unsolved". Journal of Experimental and Theoretical Artificial Intelligence 27(1). 63–72.
Broccias, C. 2012. "The syntax-lexicon continuum". In: Nevalainen, T. and E.C. Traugott (eds.), The Oxford Handbook of the History of English. New York: Oxford University Press. 735–747.
Brysbaert, M. and W. Duyck. 2010. "Is it time to leave behind the revised hierarchical model of bilingual language processing after 15 years of service?" Bilingualism: Language and Cognition 13. 359–371.
Callison-Burch, C., C. Fordyce, P. Koehn, C. Monz and J. Schroeder. 2007. "(Meta-) evaluation of machine translation". In: Proceedings of the Second Workshop on Statistical Machine Translation. Prague: Association for Computational Linguistics. 136–158. Available at http://www.statmt.org/wmt07/WMT-2007.pdf.

Campitelli, G. 2015. "Memory behavior requires knowledge structures, not memory stores". Frontiers in Psychology 6. 1696.
URL <http://journal.frontiersin.org/article/10.3389/fpsyg.2015.01696/full>.
Carl, M. 2012. "The CRITT TPR-DB 1.0: A database for empirical human translation process research". In: O'Brien, S., M. Simard and L. Specia (eds.), Proceedings of the AMTA 2012 Workshop on Post-Editing Technology and Practice (WPTP 2012). Stroudsburg, PA: AMTA. 9–18.
Carl, M. and B. Dragsted. 2012. "Inside the monitor model: Processes of default and challenged translation production". Translation: Corpora, Computation, Cognition 2(1). 127–145.
Castro Arce, M. 2008. "Procesos de lectura y traducción al traducir" [Reading and comprehension processes in translating]. In: Fernández, M.M. and R. Muñoz (eds.), Aproximaciones cognitivas al estudio de la traducción y la interpretación. Granada: Comares. 31–54.
Catford, J. C. 1965. A Linguistic Theory of Translation: An Essay in Applied Linguistics. London: Oxford University Press.
Christoffels, I.K., A.M.B. de Groot and J.F. Kroll. 2006. "Memory and language skills in simultaneous interpreters: The role of expertise and language proficiency". Journal of Memory and Language 54(3). 324–45.
Conde Ruano, T. 2008. Proceso y resultado de la evaluación de traducciones [Evaluating translations: Processes and results]. Unpublished PhD dissertation. University of Granada.
Conde Ruano, T. 2012. "The good guys and the bad guys: The behavior of lenient and demanding translation evaluators". Meta 57(3). 763–786.
De Gregoris, G. 2014. "The limits of expectations vs. assessment questionnaire-based surveys on simultaneous interpreting quality: The need for a gestaltic model of perception". Rivista internazionale di tecnica della traduzione 16. 57–87.
De Groot, A.M.B. 2000. "A complex-skill approach to translation and interpreting". In: Tirkkonen-Condit, S. and R. Jääskeläinen (eds.), Tapping and Mapping the Processes of Translation and Interpreting. Amsterdam: John Benjamins. 53–68.
De Groot, A.M.B. and I.K. Christoffels. 2007. "Processes and mechanisms of bilingual control: Insights from monolingual task performance extended to simultaneous interpretation". Journal of Translation Studies 10(1). 17–42.
Delaney, P.F. and J. Austin. 1998. "Memory as behavior: The importance of acquisition and remembering strategies". The Analysis of Verbal Behavior 15. 75–91.
Dijkstra, T. and W.J.B. van Heuven. 2002. "The architecture of the bilingual word recognition system: From identification to decision". Bilingualism: Language and Cognition 5. 175–197.
Doherty, S. and S. O'Brien. 2009. "Can MT output be evaluated through eye tracking?" In: Proceedings of MT Summit XII. 214–221.
Doherty, S., S. O'Brien and M. Carl. 2010. "Eye tracking as an MT evaluation technique". Machine Translation 24(1). 1–13.
Dragsted, B. 2008. "Computer-aided translation as a distributed cognitive task". In: Dror, I.E. and S. Harnad (eds.), Cognition Distributed: How Cognitive Technology Extends our Minds. Philadelphia: John Benjamins. 237–256.
Dragsted, B. 2010. "Coordination of reading and writing processes in translation: An eye on uncharted territory". In: Shreve, G.M. and E. Angelone (eds.), Translation and Cognition. Amsterdam: John Benjamins. 41–62.
Dragsted, B. 2012. "Indicators of difficulty in translation: Correlating product and process data". Across Languages and Cultures 13(1). 81–98.
Elman, J.L. 2004. "An alternative view of the mental lexicon". Trends in Cognitive Sciences 8(7). 301–306.
Elman, J.L. 2009. "On the meaning of words and dinosaur bones: Lexical knowledge without a lexicon". Cognitive Science 33. 1–36.
Elman, J.L. 2011. "Lexical knowledge without a lexicon?" The Mental Lexicon 6(1). 1–33.
Foss, D.J. and D.T. Hakes. 1978. Psycholinguistics: An introduction to the psychology of language. Englewood Cliffs, NJ:
Prentice-Hall.
García, A.M. 2015. "Psycholinguistic explorations of lexical translation equivalents: Thirty years of research and their implications for cognitive translatology". Translation Spaces 4(1). 9–28.
Gibbs Jr., R.W. 1984. "Literal meaning and psychological theory". Cognitive Science 8. 275–304.
Gibbs Jr., R.W. 1989. "Understanding and literal meaning". Cognitive Science 13. 243–251.
Gigerenzer, G. and D.G. Goldstein. 1996. "Mind as computer: Birth of a metaphor". Creativity Research Journal 9(2/3). 131–144.
Graham, Y., T. Baldwin, A. Harwood, A. Moffat and J. Zobel. 2012. "Measurement of progress in machine translation". In: Proceedings of the Australasian Language Technology Association Workshop. 70–78. Available at http://www.aclweb.org/anthology/U12-1010.
Graham, Y., T. Baldwin, A. Moffat and J. Zobel. 2013. "Continuous measurement scales in human evaluation of machine translation". In: Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse. 33–41. Available at http://www.aclweb.org/anthology/W13-2305.
Grauer, C. 2009. Lesen, Verstehen und Übersetzen: Kollokationen als Handlungseinheiten der Übersetzungspraxis [Reading, understanding and translating: Collocations as units of action in translation practice]. Trier: WVT.
Haiman, J. 1980. "Dictionaries and encyclopedias". Lingua 50(4). 329–357.

Halevy, A., P. Norvig and F. Pereira. 2009. "The unreasonable effectiveness of data". IEEE Intelligent Systems 24(2). 8–12.
Halverson, S.L. 1997. "The concept of equivalence in translation studies: Much ado about something". Target 9(2). 207–233.
Halverson, S.L. 2015. "Cognitive translation studies and the merging of empirical paradigms: The case of literal translation". Translation Spaces 4(2). 310–340.
Hambleton, R.K. and A.L. Zenisky. 2010. "Translating and adapting tests for cross-cultural assessments". In: Matsumoto, D. and F.J.R. van de Vijver (eds.), Cross-Cultural Research Methods in Psychology. New York: Cambridge University Press. 46–74.
Harkness, J.A., M. Braun, B. Edwards, T.P. Johnson, L. Lyberg, P.Ph. Mohler, B.E. Pennell and T.W. Smith. 2010. Survey Methods in Multinational, Multiregional, and Multicultural Contexts. Hoboken, NJ: John Wiley.
Harnad, S. 1990. "The symbol grounding problem". Physica D 42. 335–346.
Hobbes, T. [1655]. Elementorum philosophiae sectio prima De corpore. Reprinted by W. Molesworth (ed.) in 1839, The
English Works of Thomas Hobbes of Malmesbury, vol. 1. London: John Bohn.
Hofstadter, D. 2001. "Epilogue: Analogy as the core of cognition". In: Gentner, D., K.J. Holyoak and B.N. Kokinov (eds.), The Analogical Mind: Perspectives from Cognitive Science. Cambridge, MA: MIT Press. 499–538.
Horst, S.W. [1996] 2011. Symbols, Computation, and Intentionality: A Critique of the Computational Theory of Mind. Charleston, SC: CreateSpace.
Humphreys, G.F. and S.P. Gennari. 2014. "Competitive mechanisms in sentence processing: Common and distinct production and reading comprehension networks linked to the prefrontal cortex". NeuroImage 84. 354–366.
International Test Commission. 2010. International Test Commission Guidelines for Translating and Adapting Tests. Available at http://www.intestcom.org.
Itkonen, E. 2005. Analogy as Structure and Process. Amsterdam: John Benjamins.
Jakobsen, A.L. and K.T.H. Jensen. 2008. "Eye movement behaviour across four different types of reading task". In: Göpferich, S., A.L. Jakobsen and I.M. Mees (eds.), Looking at Eyes: Eye Tracking Studies of Reading and Translation Processing. Copenhagen: Samfundslitteratur. 103–124.
Jakobsen, A.L., K.T.H. Jensen and I.M. Mees. 2007. "Comparing modalities: Idioms as a case in point". In: Pöchhacker, F., A.L. Jakobsen and I.M. Mees (eds.), Interpreting Studies and Beyond: A Tribute to Miriam Shlesinger. Copenhagen: Samfundslitteratur. 217–249.
Koby, G.S. 2015. "The ATA flowchart and framework as a differentiated error-marking scale in translation teaching". In: Cui, Y. and W. Zhao (eds.), Handbook of Research on Teaching Methods in Language Translation and Interpretation. Hershey, PA: IGI Global. 220–253.
Koponen, M. 2012. "Comparing human perceptions of post-editing effort with post-editing operations". In: Proceedings of the 7th Workshop on Statistical Machine Translation. Montréal: Association for Computational Linguistics. 181–190. Available at http://www.statmt.org/wmt12/WMT-2012.pdf.
Krein-Kühle, M. 2014. "Translation and equivalence". In: House, J. (ed.), Translation: A Multidisciplinary Approach. Basingstoke: Palgrave Macmillan. 15–35.
Krings, H.P. 2001. Repairing Texts: Empirical Investigations of Machine Translation Post-Editing Processes. Kent, OH: Kent
State University Press.
Kroll, J.F. and E. Stewart. 1994. "Category interference in translation and picture naming: Evidence for asymmetric connections between bilingual memory representations". Journal of Memory and Language 33. 149–174.
Lakoff, G. and M. Johnson. 1980. Metaphors We Live By. Chicago: University of Chicago Press.
Lamb, S.M. 1999. Pathways of the Brain. The neurocognitive basis of language. Amsterdam: John Benjamins.
Langacker, R.W. 2008. Cognitive Grammar. A Basic Introduction. New York: Oxford University Press.
Lederer, M. [1993] 2014. Translation: The Interpretive Model. 2nd ed. Trans. by N. Larché. Abingdon: Routledge.
Lehr, C. 2014. The influence of emotion on language performance: Study of a neglected determinant of decision-making in professional translators. PhD dissertation. University of Geneva.
Levý, J. 1967. "Translation as a decision process". In: To Honor Roman Jakobson, vol. 2. The Hague: Mouton. 1171–1182.
Linck, J.A., N. Hoshino and J.F. Kroll. 2008. "Cross-language lexical processes and inhibitory control". The Mental Lexicon 3(3). 349–374.
Malkiel, B. 2009. "When idioti (idiotic) becomes fluffy: Translation students and the avoidance of target language cognates". Meta 54(2). 309–325.
Martín de León, C. 2008. "Skopos and beyond. A critical study of functionalism". Target 20(1). 1–28.
Martín de León, C. 2010. "Metaphorical models of translation. Transfer vs. imitation and action". In: St. André, J. (ed.), Thinking through Translation with Metaphors. Manchester: St. Jerome. 75–108.
Martín de León, C. 2011. "Translationsmetaphern: Versuch einer methodologischen Brücke zwischen Glaubens- und Handlungsuntersuchung" [Translation metaphors: An attempt at a methodological bridge between belief and action research]. In: Schmitt, P.A., S. Herold and A. Weilandt (eds.), Translationsforschung. Frankfurt: Peter Lang. 541–554.
McCarthy, M. 2006. Explorations in Corpus Linguistics. Cambridge: Cambridge University Press.
McCulloch, W.S. and W.H. Pitts. 1943. "A logical calculus of the ideas immanent in nervous activity". Bulletin of Mathematical Biophysics 5(4). 115–133.

Mellinger, C.D. 2014. Computer-Assisted Translation: An Empirical Investigation of Cognitive Effort. Unpublished PhD dissertation. Available at http://bit.ly/1ybBY7W.
Mellinger, C.D. and G.M. Shreve. (in press). "Match evaluation and over-editing in a translation memory environment". To appear in: Muñoz, R. (ed.), Reembedding Translation Process Research.
Miłkowski, M. 2013. Explaining the Computational Mind. Cambridge, MA: MIT Press.
Moon, J. and N. Jiang. 2012. "Non-selective lexical access in different-script bilinguals". Bilingualism: Language and Cognition 15(1). 173–180.
Moorkens, J., S. O'Brien, I.A.L. Silva, N. Fonseca and F. Alves. 2015. "Correlations of perceived post-editing effort with measurements of actual effort". Machine Translation 29(3–4). 267–284.
Moser-Mercer, B., E. Künzli and M. Korac. 1998. "Prolonged turns in interpreting: Effects on quality, physiological and psychological stress (Pilot study)". Interpreting 3(1). 47–64.
Muñoz Martín, R. 2010a. "On paradigms and cognitive translatology". In: Shreve, G.M. and E. Angelone (eds.), Translation and Cognition. Amsterdam: John Benjamins. 169–187.
Muñoz Martín, R. 2010b. "Leave no stone unturned: On the development of cognitive translatology". Translation and Interpreting Studies 5(2). 145–162.
Muñoz Martín, R. 2011. "Nomen mihi Legio est: A cognitive approach to Natural Translation". In: Blasco, M.J. and A. Jiménez (eds.), Interpreting Naturally. Frankfurt: Peter Lang. 35–66.
Muñoz Martín, R. (in press, a). "Looking towards the future of cognitive translation studies". To appear in: Schwieter, J.H. and A. Ferreira (eds.), Wiley's Handbook of Translation and Cognition.
Muñoz Martín, R. (in press, b). "Reembedding translation process research. An introduction". In: Muñoz, R. (ed.), Reembedding Translation Process Research.
Muñoz Martín, R. and T. Conde Ruano. 2007. "Effects of serial translation evaluation". In: Schmitt, P.A. and H.E. Jüngst (eds.), Translationsqualität. Frankfurt: Peter Lang. 428–444.
Nenonen, M. and E. Penttilä. 2014. "Constructional continuity: (where) does lexicon turn into syntax?" The Mental Lexicon 9(2). 316–337.
Neumann, J. von. [1958] 2012. The Computer and the Brain. New Haven: Yale University Press.
Nida, E.A. and C.R. Taber. [1969] 1982. The Theory and Practice of Translation. Leiden: E. J. Brill.
O'Brien, S. 2012. "Towards a dynamic quality evaluation model for translation". JoSTrans 17. 55–77.
Paradis, M. 1998. "Aphasia in bilinguals: How atypical is it?" In: Coppens, P. and Y. Lebrun (eds.), Aphasia in Atypical Populations. Mahwah, NJ: Erlbaum. 35–66.
Pavlenko, A. 1999. "New approaches to concepts in bilingual memory". Bilingualism: Language and Cognition 2(3). 209–230.
Piccinini, G. 2015. Physical Computation: A Mechanistic Account. Oxford: Oxford University Press.
Piccinini, G. and S. Bahar. 2013. "Neural computation and the computational theory of cognition". Cognitive Science 37(3). 453–488.
Pöchhacker, F. 2005. "Quality research revisited". The Interpreters' Newsletter 13. 143–166.
Poeppel, D. 2012. "The maps problem and the mapping problem: Two challenges for a cognitive neuroscience of speech and language". Cognitive Neuropsychology 29(1–2). 34–55.
Popović, M. and A. Burchardt. 2011. "From human to automatic error classification for machine translation output". In: Forcada, M.L., H. Depraetere and V. Vandeghinste (eds.), Proceedings of the 15th Annual Conference of the European Association for Machine Translation. Leuven: EAMT. 265–272. Available at http://www.ccl.kuleuven.be/EAMT2011/proceedings/pdf/eamt2011proceedings.pdf.
Popović, M., A. Lommel, A. Burchardt, E. Avramidis and H. Uszkoreit. 2014. "Relations between different types of post-editing operations, cognitive effort and temporal effort". In: Tadić, M., P. Koehn, J. Roturier and A. Way (eds.), Proceedings of the 17th Annual Conference of the European Association for Machine Translation. Dubrovnik: EAMT. 191–198. Available at http://hnk.ffzg.hr/eamt2014/EAMT2014_proceedings.pdf.
Price, C.J., D.W. Green and R. von Studnitz. 1999. "A functional imaging study of translation and language switching". Brain 122(12). 2221–2235.
Prior, A., S. Wintner, B. MacWhinney and A. Lavie. 2009. "Translation ambiguity in and out of context". Applied Psycholinguistics 32. 93–111.
Pulvermüller, F. 2013. "How neurons make meaning: Brain mechanisms for embodied and abstract-symbolic semantics". Trends in Cognitive Sciences 17(9). 458–470.
Putnam, H. [1961] 1983. "Brains and behavior". Reprinted in: Block, N. (ed.), Readings in Philosophy of Psychology, vol. 1. Cambridge, MA: Harvard University Press. 24–36.
Quian Quiroga, R., L. Reddy, G. Kreiman, C. Koch and I. Fried. 2005. "Invariant visual representation by single neurons in the human brain". Nature 435. 1102–1107.
Rayner, K. and C. Clifton Jr. 2002. "Language processing". In: Medin, D. (ed.), Stevens' Handbook of Experimental Psychology, 3rd ed., vol. 2: Memory and Cognitive Processes. New York: John Wiley. 261–316.
Rescorla, M. 2015. "The computational theory of mind". In: Zalta, E.N. (ed.), The Stanford Encyclopedia of Philosophy.
URL <http://plato.stanford.edu/archives/win2015/entries/computational-mind/>.

Risku, H. 2013. "Cognitive approaches to translation". In: Chapelle, C. (ed.), The Encyclopedia of Applied Linguistics. Oxford: Wiley-Blackwell. 1–10.
Rodriguez, P. 2006. "Talking brains: A cognitive semantic analysis of an emerging folk neuropsychology". Public Understanding of Science 15(3). 301–330.
Rojo López, A.M. and M. Ramos Caro. (in press). "Can emotion stir translation skill? Defining the impact of positive and negative emotions on translation performance". To appear in: Muñoz, R. (ed.), Reembedding Translation Process Research.
Ruiz, C., N. Paredes, P. Macizo and M.T. Bajo. 2008. "Activation of lexical and syntactic target language properties in translation". Acta Psychologica 128(3). 490–500.
Samani, R. and F. Sharifian. 1997. "Cross-language hierarchical spreading of activation". In: Sharifian, F. (ed.), Proceedings of the Conference on Language, Cognition, and Interpretation. Isfahan: IAU Press. 11–23.
Schaeffer, M. and M. Carl. 2014. "Measuring the cognitive effort of literal translation processes". In: Germann, U., M. Carl, P. Koehn, G. Sanchís, F. Casacuberta, R. Hill and S. O'Brien (eds.), Proceedings of the Workshop on Humans and Computer-assisted Translation. Stroudsburg, PA: ACL. 29–37.
Schmid, H.J. [2014]. "A framework for understanding linguistic entrenchment and its psychological foundations in memory and automatization". To appear in: Schmid, H.J. (ed.), Entrenchment, Memory and Automaticity: The Psychology of Linguistic Knowledge and Language Learning. Boston: APA and Walter de Gruyter. Draft available at http://www.anglistik.uni-muenchen.de/personen/professoren/schmid/schmid_publ/introduction_entrenchment.pdf.
Secară, A. 2005. "Translation evaluation: A state of the art survey". In: Proceedings of the eCoLoRe/MeLLANGE Workshop, Leeds. 39–44. Available at http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.126.3654&rep=rep1&type=pdf.
Shannon, C.E. and W. Weaver. 1949. The Mathematical Theory of Communication. Urbana: University of Illinois Press.
Shastri, L. 2003. "Spreading-activation networks". In: Nadel, L. (ed.), Encyclopedia of Cognitive Science, vol. 4. London: Nature Publishing. 211–218.
Shlesinger, M. and B. Malkiel. 2005. "Comparing modalities: Cognates as a case in point". Across Languages and Cultures 6(2). 173–193.
Shreve, G.M., C. Schäffner, J.H. Danks and J. Griffin. 1993. "Is there a special kind of 'reading' for translation? An empirical investigation of reading in the translation process". Target 5(1). 21–41.
Slobin, D.I. 1987. "Thinking for speaking". In: Proceedings of the 13th Annual Meeting of the Berkeley Linguistics Society. Berkeley, CA: The Berkeley Linguistics Society. 435–444.
Slobin, D.I. 1996. "From 'thought and language' to 'thinking for speaking'". In: Gumperz, J.J. and S.C. Levinson (eds.), Rethinking Linguistic Relativity. Cambridge: Cambridge University Press. 70–96.
Slobin, D.I. 2003. "Language and thought online: Cognitive consequences of linguistic relativity". In: Gentner, D. and S. Goldin-Meadow (eds.), Language in Mind: Advances in the Study of Language and Thought. Cambridge, MA: MIT Press. 157–192.
Snover, M., B. Dorr, R. Schwartz, L. Micciulla and J. Makhoul. 2006. "A study of translation edit rate with targeted human annotation". In: Proceedings of the 7th Conference of AMTA. Cambridge, MA: AMTA. 223–231. Available at https://www.cs.umd.edu/~snover/pub/amta06/ter_amta.pdf.
Snover, M., N. Madnani, B.J. Dorr and R. Schwartz. 2009. "Fluency, adequacy, or HTER? Exploring different human judgments with a tunable MT metric". In: Proceedings of the Fourth Workshop on Statistical Machine Translation at the 12th Meeting of the EACL. Athens: Association for Computational Linguistics. 259–268. Available at http://www.statmt.org/wmt09/WMT-09-2009.pdf.
Spivey, M. 2007. The Continuity of Mind. Oxford: Oxford University Press.
Steels, L. 2008. "The symbol grounding problem has been solved. So what's next?" In: de Vega, M., A.M. Glenberg and A.C. Graesser (eds.), Symbols and Embodiment: Debates on Meaning and Cognition. Oxford: Oxford University Press. 223–244.
Stymne, S., H. Danielsson, S. Bremin, H. Hu, J. Karlsson, A. Prytz Lillkull and M. Wester. 2012. "Eye tracking as a tool for machine translation error analysis". In: Calzolari, N., et al. (eds.), Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC'12). 1121–1126. Available at http://www.lrec-conf.org/proceedings/lrec2012/pdf/192_Paper.pdf.
Stymne, S. and L. Ahrenberg. 2012. "On the practice of error analysis for machine translation evaluation". In: Calzolari, N., et al. (eds.), Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC'12). 1786–1790. Available at http://www.lrec-conf.org/proceedings/lrec2012/pdf/717_Paper.pdf.
Tabakowska, E. 1993. Cognitive Linguistics and Poetics of Translation. Tübingen: Narr.
Tirkkonen-Condit, S. 1991. "Relational propositions in text comprehension processes". In: Sajavaara, K., D. Marsh and T. Keto (eds.), Communication and Discourse across Cultures and Languages. Jyväskylä: AFinLA. 239–246. Available at https://www.jyu.fi/hum/laitokset/solki/afinla/julkaisut/arkisto/49/tirkkonen-condit.
Tirkkonen-Condit, S. 2005. "The monitor model revisited: Evidence from process research". Meta 50(2). 405–414.
Tulving, E. and D.L. Schacter. 1990. "Priming and human memory systems". Science 247(4940). 301–306.

Turing, A.M. 1950. "Computing machinery and intelligence". Mind 59(236). 433–460.
van der Linden, D., M. Frese and T.F. Meijman. 2003. "Mental fatigue and the control of cognitive processes: Effects on perseveration and planning". Acta Psychologica 113. 45–65.
Vygotsky, L.S. 1986. Thought and Language. Trans. by A. Kozulin. Cambridge, MA: The MIT Press.
Waddington, C. 2001. "Different methods of evaluating student translations: The question of validity". Meta 46(2). 311–325.
Weaver, W. 1949. Translation. Typescript. Rockefeller Foundation Archives. Copy available at http://www.mt-archive.info/Weaver-1949.pdf.
Westermann, G. and D. Mareschal. 2014. "From perceptual to language-mediated categorization". Philosophical
Transactions of the Royal Society of London. Series B, Biological Sciences 369. 20120391.
Wittgenstein, L. [1953] 1958. Philosophical Investigations. Trans. by G.E.M. Anscombe. Oxford: Blackwell.
Yeigh, D.A. 2012. Inhibition and Mental Effort: A Moderation Hypothesis. Unpublished PhD dissertation. Southern Cross University, Australia.
Zwaan, R.A. 2004. "The immersed experiencer: Toward an embodied theory of language comprehension". In: Ross, B.H. (ed.), The Psychology of Learning and Motivation: Advances in Research and Theory, vol. 44. New York: Elsevier. 35–62.
