
Masaryk University

Faculty of Arts

Department of English
and American Studies

English-language Translation

Bc. Marek Bednář

Translation Universals:
The Good, the Bad, and the Necessary
Master’s Diploma Thesis

Supervisor: Mgr. Renata Kamenická, Ph.D.

2015
I declare that I have worked on this thesis independently,
using only the primary and secondary sources listed in the bibliography.

……………………………………………..

Author’s signature
Acknowledgements

I would like to thank my supervisor, Mgr. Renata Kamenická, Ph.D., for her patience and tolerance for my peculiar methods and working tendencies, as well as for her valuable insights and guidance regarding both the practical and theoretical aspects of my thesis.
Table of Contents

1. Introduction...........................................................................................................................1

2. What are Translation Universals?......................................................................................3

2.1. Examples of Translation Universals.............................................................................4

3. Translation Universals in Non-Translations......................................................................8

4. Communication Universals: A Compound Issue.............................................................14

4.1. Studying Universality...................................................................................................15

4.2. Studying the Communication Process........................................................................20

5. The Implications for the Study of Translation Universals..............................................27

6. Conclusion...........................................................................................................................32

Works Cited.............................................................................................................................34

Résumé (English)....................................................................................................................39

Resumé (Česky).......................................................................................................................40
1. Introduction

For better or worse, translation is not an exact science. On one hand, we are unable to
accurately and definitively measure translation processes, reliably predict what a translation
from a given source language to a given target language will look like, or even replicate the
exact results of a translation experiment among different subject groups. There are no
numerical values to be assigned, no graphs to be charted, no equations to point to when a
mistake has been made. On the other hand, this allows us to unleash our creativity. There is
rarely one correct answer, one way to solve a problem (unless we are dealing with
terminology), one tried and tested way to approach a certain formulation or phenomenon
regardless of circumstances. We can adopt a wide variety of strategies depending on the
source text, the target language, the target audience, the genre, the text function (skopos) and
even the time period. It is a blessing. It is a curse.
It is, however, in human nature to seek order in what appears chaotic and erratic,
and to try and discover regularities in seemingly widely differing products which share
common processes and methods by which they are compiled. It comes as no surprise that
translation scholars set out to discover what is common to all translations, describe the nature
of these phenomena which we now term ‘translation universals’, and strive to verify and
validate their occurrence and their purpose through empirical studies of translations from a myriad
of fields and languages.
It was through a stray thought, which, to me, almost came as a revelation, along with a
few subtle but vital hints scattered across a number of works on translation universals,
that I began to doubt whether what we today study as translation universals has indeed much
to do with translation at all. As I delved deeper and deeper into the world of translation
universals and their study, it became more and more evident that today’s empirical research,
whether consciously or unconsciously, seems to ignore a number of aspects of translation universals
that take the scope of these features far beyond the field of translation studies itself. The
occasional, preciously rare study on paraphrase and re-translation points to the existence of
translation universals in places where translation, as we define it today, does not take place
(see Chapter 3). An introspective look into my own sentence formulation process was enough
to convince me that there was more to translation universals than translation studies would
lead one to believe. For do we not make use of features nigh on identical to translation
universals when we give our inner thoughts a lexical form? Do we not, in fact, make use of
“translation” universals on a daily basis without realizing it? And if so, what does this mean
for the field of translation studies? These are the ideas I shall explore in detail in the coming
chapters to reveal the thought process that went into this rather personal theoretical research
of mine, to shed light on the barriers I encountered on this journey, and to discuss the whys
and the wherefores of translation universals and the fundamentals of our communication
which, to me, seem intricately linked together. And while my research took on a mostly
theoretical character, and required me to delve, among other fields, into the areas of psychology,
linguistics, cognitive linguistics, psycholinguistics and neurolinguistics, I shall strive to bring
my findings to more tangible conclusions, and propose directions for further research into
translation universals and human communication across the above-mentioned disciplines, as
well as describe, critically, some of the problems facing today’s empirical research into
translation universals, and discuss the validity and possible applications of its findings. It is
my hope that, perhaps, even as a newcomer to the field of translation studies and translation
universals, I might be able to provide some food for thought and contribute with a few ideas
which, though perhaps trivial to some of those more experienced than I, might in the end
spark a new discussion and bring a fresh perspective into the study of translation universals.

2. What are Translation Universals?

Universals of translation are linguistic features which typically occur in translated rather
than original texts and are thought to be independent of the influence of the specific
language pairs involved in the process of translation. (Laviosa-Braithwaite 1998: 288)

This definition captures the concept of translation universals as proposed by Mona Baker in 1993 in her seminal
paper ‘Corpus Linguistics and Translation Studies – Implications and Applications’, where
she first introduced the term ‘translation universals’ along with a list of potential members of
this category as taken from previous hypotheses on the subject of translation-specific
linguistic features and strategies proposed by scholars such as Blum-Kulka, Toury,
Vanderauwera, Shlesinger and others. While various aspects of the term ‘universal’ and its
definition have been criticized by a number of scholars, this definition is nonetheless
generally held as the standard, and included in the Routledge Encyclopedia of Translation
Studies (Laviosa-Braithwaite 1998).
The idea of searching for generalizations and regularities in the complex and intricate
process of translation, however, predates Baker’s 1993 paper. The famous Israeli translation
scholar Gideon Toury has been a proponent of the search for what he terms ‘laws of
translation’ since the early 1980s and has proposed this endeavour as the main goal of
descriptive translation studies. Some have even gone on to propose that translation represents
a special and unique code of its own, separate and different from both the source and target
languages – a third code or a hybrid language (as defined by Frawley (1984)) – which needs
to be studied in detail in comparison with the source and target textual production to find and
describe its specific features and regularities.
The true advent of the study of translation universals, however, came with the
emergence of electronic corpora in the nineties. While Blum-Kulka and Toury painstakingly
performed exhaustive (and surely exhausting), richly contextualized pen and paper analyses to
document the processes taking place in translation and deduce the universals (or laws/norms,
depending on the author) hidden within, access to extensive electronic corpora allowed
researchers such as Baker to sweep through vast collections of both translated and non-
translated texts from a variety of genres, contrast and compare the phenomena found, and use
this data to describe and support their hypothesized translation universals. This shift in
methodology, promoted heavily by Baker’s 1993 paper, brought about a new wave of
enthusiastic research into translation universals.

The transition, however, was not without its problems. Because researchers were no longer required to leaf through books one by one in a tedious search for translation universals, they were now able to take on a more substantial and representative sample of literature produced in the given languages to perform contrastive analysis. The attention to detail and sensitivity to context in these studies across extensive corpora have, naturally, suffered as a result, and while sample sizes could now become much more representative than before, the newly emerging research using comparable and parallel corpora also introduced a new issue: the proper selection of corpora for these studies. Whereas manual research as
performed by Toury and Blum-Kulka dealt exclusively with source texts and their
translations, research comparing translated and non-translated texts in the same language
works with texts which need not necessarily be comparable (due to the ambiguity of the term
‘original text’, as used for example by Baker in her definition of translation universals, this
thesis will operate with the term ‘non-translated text’ for original textual production created in
the target language, and ‘source text’ for the text upon which a translation is based, as
proposed by Chesterman (2001)).
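To illustrate the basic logic of such comparisons between translated and non-translated textual production, the following minimal sketch (my own illustration, using invented directory names and a deliberately crude indicator rather than any established operationalization of a universal) contrasts the frequency of a handful of explicitating connectives in a corpus of translated texts with their frequency in a comparable corpus of non-translated texts:

import re
from pathlib import Path

# A handful of connectives that often signal spelled-out relations; a crude
# stand-in for an explicitation indicator, not an established operationalization.
MARKERS = ("because", "therefore", "in other words", "that is to say", "namely")

def marker_rate(directory):
    """Return marker occurrences per 1,000 words across all .txt files in a directory."""
    words, hits = 0, 0
    for path in Path(directory).glob("*.txt"):
        text = path.read_text(encoding="utf-8").lower()
        words += len(re.findall(r"\w+", text))
        hits += sum(text.count(marker) for marker in MARKERS)
    return 1000 * hits / words if words else 0.0

# Hypothetical directories holding a comparable corpus of the same genre and period.
print(f"translated texts:     {marker_rate('corpus/translated'):.2f} markers per 1,000 words")
print(f"non-translated texts: {marker_rate('corpus/non_translated'):.2f} markers per 1,000 words")

A real study would, of course, work with carefully matched corpora, lemmatized data and statistical testing; the sketch only captures the underlying principle of contrasting translated with non-translated texts.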
The concept of translation universals and the methodology employed in their research
remain a controversial and widely discussed topic in translation studies. Some claim to have found clear evidence supporting some of the hypothesized universal features of
translation (such as Laviosa-Braithwaite 1996), others have decried the endeavour altogether
as futile and impossible (House 2008, Tymoczko 1998), others still have attempted to redefine
the term ‘translation universals’ (Toury 2001), proposed new subtypes of universals
(Chesterman 2001), denied certain aspects of proposed universal features or their subtypes
(Becher 2010a), or pointed to issues with the methodology employed (Bernardini and Zanettin
2004). Some of these issues shall be elaborated on further in later chapters.

2.1. Examples of Translation Universals

It is important to note that there has been no unified classification of translation universals. Each researcher may choose to define some translation universal types differently
or propose their own systemization based on certain common features. Chesterman, for
instance, differentiates between S-universals (those that can be identified by comparing the
translation to the source text) and T-universals (those that can be found by comparing the
translation to non-translations produced in the target language), while some others may
differentiate them based on the level of text they operate with – semantic, syntactic, lexical, pragmatic etc. Based on these classifications, the lists of hypothesized universals will differ from author to author, though they do generally describe similar phenomena, with a small
collection of universals described in one scholar’s work being combined into just a single one
in studies published by a different author and so on. Thus, Chesterman’s S-universals include
lengthening (the fact that translations tend to be longer than their source texts), which, based
on the specific circumstances, might be explained by explicitation or a combination of other
universals in another author’s work.
The extent to which certain universals are confirmed or still only hypothetical can again
differ greatly based on the author in question. Some believe they can never be proven
definitively (Tymoczko 1998) or do not exist in the way we define them today (House 2008),
while others believe they have found conclusive evidence of the existence of at least some of
them (Laviosa-Braithwaite 1996). There are those who conduct their research in order to
support or refute hypotheses proposed previously by other scholars while others strive to
define their own universals, based on specific textual situations.
This thesis shall operate with a combined selection of some of the most prominent
universals found in the Routledge Encyclopedia of Translation Studies (Laviosa-Braithwaite
1998), Chesterman’s list of S- and T-universals (2001), Sarah Lind’s newsletter in TIC Talk
on translation universals (2007) and the introduction to Translation Universals: Do They
Exist? (Kujamäki and Mauranen 2004), and shall focus mostly on the following phenomena:
explicitation, implicitation, simplification, normalization/standardization, levelling out and
avoidance of repetition.
First proposed by Blum-Kulka in 1986, explicitation involves the addition of semantic,
syntactic or lexical elements to elucidate information and relations which are more implicit in
the source text. Thus, we can translate the sentence “Ich wohne in Köln” (I live in Cologne)
into English as “I live in Cologne, one of the largest cities in western Germany” for the
benefit of a target audience which is not likely to be very knowledgeable of German
geography. Similarly, instead of using a sentence “I went there”, and assuming the reader will
grasp from the context what “there” refers to, we might choose the more explicit “I went to
my father’s shop”, if we feel that, for instance, the anaphoric reference points to a section of
the text too far back for the reader to remember what “there” means without rereading the
text.
The opposite of explicitation, implicitation refers to the process of making implicit information which was clearly and explicitly stated in the source text. Implicitation and explicitation are not mutually exclusive, however, and can operate in unison in the same segment of text. This can be illustrated by the sentence “Po několika týdnech příprav jsem konečně vyrazil do Bratislavy” (After weeks of planning, I have finally departed for Bratislava), which, depending on the context, might be translated into English as “After weeks of planning, I have finally departed west, towards our beautiful capital”. On the one hand, this rendering implicitates the name of the city and assumes that the reader will understand from the context that the speaker is in Slovakia and knows what the country’s capital is; on the other hand, it explicitates the fact that the speaker had to travel west, and not, for example, south, that he finds the city to be beautiful (a piece of information missing entirely from the source, though perhaps present somewhere later in the text) and that Bratislava is in fact the capital (a piece of information implicit in the source). This move, while seemingly unnecessary, might, depending on the context, be motivated by a number of factors, such as a cultural gap (in case Slovakia’s geography is not widely known in the target culture), a compensation for omissions (the speaker’s positive attitude towards Bratislava might be expressed in a different passage in the source text but, due to high information load, was left out in that part of the translation; the translator then moves the expression “beautiful” to this section of the text instead), or an attempt to avoid repetition (the word Bratislava was used somewhere recently in the text, and its repeated use seems clumsy to the translator).
Simplification is, as the name suggests, the act of making the translated text simpler in
some fashion than its source text. Similarly to explicitation, this can take place on three levels:
syntactic, lexical and stylistic. Syntactic simplifications will tend to break up complex
sentences with a number of relative clauses into a series of smaller, shorter sentences, or
spread out the information load of a single sentence into a number of shorter phrases, while
lexical simplification will often resort to strategies such as replacing specific terms with their
superordinates (hypernyms) in cases where a suitable hyponym is not available in the target
language, or using more familiar synonyms. As simplification illustrates, universals are rarely
black or white, either-or, yes or no. They are highly dependent on the perspective from which
the given feature is being studied. The very same sentence that might be used as an example
of explicitation might also serve as an instance of simplification, as simplification can utilize
strategies which might be categorized as explicitations or implicitations to achieve the goal of
simplifying the text in some way, and simplification itself might in turn be employed in order
to avoid repetition.
Normalization/standardization pertains to the effort of making the target text conform to
the standards of the target language and making it more conventional. It is important to note that, just like most other universals, the employment of this strategy does not necessarily have to be positive; on the contrary, normalization can often take place despite the fact that the source text might have been intentionally unconventional. While Vanderauwera (1985) seems to view this feature in a more positive light – as shifts in punctuation, lexical choice, style, sentence structure and textual organization towards textual conventionality, apparently approved by the target audience and achieved, for example, by standardizing unconventional punctuation or finishing incomplete sentences – Baker (1996: 183) takes a more pejorative stance, defining normalization as the tendency to exaggerate features of the target language and to conform to its typical patterns to the extent that the translated text’s conventionality exceeds both that of the source text and that of non-translated texts.
‘Levelling out’ is a term used by Shlesinger to describe shifts that push the text towards
the centre of the oral-literate continuum (oral here meaning having characteristics of a spoken
text, while literate meaning having characteristics of a written text, i.e. more formal, with a
more complex and elaborate sentence structure, high-brow lexis etc.). Oral texts will strive to
be more like written texts while written texts will lean towards a more spoken style, sentence
structure and/or lexis. This applies to both interpreting, which was Shlesinger’s point of
departure in her thesis (1989), and translation.
Finally, the avoidance of repetition is the act of reducing or omitting repetitions present
in the source text by the means of ellipsis, use of pronouns, synonyms, hypernyms and other
strategies. As mentioned earlier, various universals will often cooperate; avoidance of
repetition can, for example, be achieved via explicitation (a noun is instead replaced by its
paraphrase: Bratislava – the capital of Slovakia) or implicitation (omitting the word entirely,
where possible, for example in function words, or replacing it with an anaphoric/cataphoric
reference).

3. Translation Universals in Non-Translations

It is at this point that we arrive in the realm of my own personal research, thought
processes and hypotheses, which shall require a short introduction. I have always been
intrigued by human language and the way a few letters on a piece of paper can be organized
into a cohesive message which, in turn, is then stored in our mind as abstract concepts and
relations to be retrieved at a moment’s notice. It was when compiling a final paper for a
course which also provided an introduction into the issue of translation universals that the
thought first occurred to me. When I looked upon the universals before me, I began noticing
patterns all too familiar to me. It was then that I first formulated the idea that translation
universals may not, in fact, be specific to translation only, but are inherent in the nature of
human communication, be it written or oral, bilingual or monolingual, sophisticated or
ordinary. Features which we today term ‘translation universals’ are, I argue, encoded in our
language to such an extent that they are inseparable from human communication, and though
they do, without question, occur in translation, they are a feature which reaches far above the
scope of translation studies. To illustrate this point, I shall provide a number of examples from
ordinary monolingual situations we might encounter in our daily life.
Let us say that we have two friends, Adam and Benjamin, who know each other. One
day, we meet Adam in town and, among other things, he asks us to pass a message to Benjamin when we next meet him (the contents are not important for the purposes of the
example). The very next day, we meet Benjamin, and we want to relay the message.
Assuming we understood the contents of the message fully and properly, we are likely to
reconstruct the message in our own way. We might omit details of the original message which
we either do not find important or which might, in fact, interfere with Benjamin’s ability to
understand what was meant (1). We might choose to add some details which were not present
in the message if they are relevant, and were perhaps implicit from the encounter, which
Benjamin was not a witness of (2). Assuming that Adam was speaking of a matter which we
have little knowledge of or was using technical terms we had to look up later or ask him
about, we might choose to replace the technical term with either its definition or its hypernym
for the benefit of Benjamin who, as we presume, might encounter the same problem we have
and not understand what was said (3). From a stylistic perspective, the issue can be quite the
same. Let us say that Adam, a slightly older man, tends to use a more formal, old school
language which we either find strange or difficult to reproduce. In that case, we might either
choose to attempt to replicate it for Benjamin’s entertainment, or, more likely, try to
compensate for it by giving the message a more natural style (4). If Adam was at a loss for
words when formulating his message for us, he might have, for example, used the word
‘holiday’ in the message three times. We may again reproduce the message word for word for
more or less comedic effect, or we might instead replace the word ‘holiday’ with ‘vacation’ in
one case, and omit it completely in another (5). The situation presented thus demonstrates the
use of explicitation (2, 3), implicitation (1), normalization (3, 4) and avoidance of repetition
(5) in a monolingual environment.
The presence of these features in the realm of paraphrase, reported speech and relayed
messages has, though scarcely, been acknowledged by a number of authors. A report on the
findings of the ParaTrans Project (Kajzer-Wietrzny et al. 2014), an ongoing study of the
parallels between translation and paraphrase conducted at the Adam Mickiewicz University in
Poznan, notes that features identical to translation universals appear in paraphrase as well.
Similarly, Zhetsen’s (2009) study on intralingual translation, namely Danish retranslations
and modernizations of a section of the Bible, points out that modernizing the biblical text and
tailoring it to a specific target audience (children, readers familiar with the Bible, readers new
to the Bible etc.) operates with processes identical to translation, with differences being more
in the degree than in the kind of processes taking place. It is noteworthy that Zhetsen is an
author with years of experience in researching expert-layman communication in the medical
field, where communication of a kind similar to that of Bible translation takes place on a
regular basis: namely the relaying of obscure information (obscure either in form or content) to a wider audience; a process not dissimilar to translation between different languages, where the necessity to adapt the information for the target audience stems from the language of the original message, rather than from its content or style.
Technically, these similarities between translation and paraphrase/intralingual
communication should hardly come as a surprise. As Zhetsen (2009) points out, Jakobson
himself argued for a rather broad definition of translation as a phenomenon “fundamental to
all language transactions” (Jakobson 1959: 114). This includes intralingual translation
(rewording of verbal signs by means of other signs in the same language), interlingual
translation (translation proper, i.e. between languages) and intersemiotic translation
(transmutation of verbal signs by means of signs of nonverbal sign systems). This idea was
further expanded on by Steiner (1975) and Even-Zohar (1990), who viewed translation proper
as a heightened case of the process of communication, and claimed that the linguistic
problems implicit in interlingual translation are already implicit in all intralingual discourse.

These views, some of them expressed over 50 years ago, however, seem to be mostly ignored by today’s
translation studies. While translation studies does not explicitly exclude intralingual
translation, the truth is that empirical studies of this subject are very scarce indeed. This, in
practice, means that empirical studies of translation universals are entirely disregarding a vast
section of translation itself, and obtain their data from a specifically defined subsection of
translation only (namely, translation proper). The results of this can be seen, for example, in
the categorization of explicitation as proposed in Klaudy’s summary on explicitation in the
Routledge Encyclopedia of Translation Studies (Baker and Malmkjær 1998: 83), which lists
translation-inherent explicitation as one of the four categories alongside pragmatic, obligatory
and optional. The section lists no reasons for the existence of this category beyond the vague
statement that translation-inherent explicitation is motivated by “the necessity to formulate
ideas in the target language that were originally conceived in the source language”. No
examples are listed. This category was, rightly, criticized by Becher (2010a: 22), who states
that, since translation is not fundamentally different from monolingual discourse as far as the
balance between explicitness and implicitness is concerned (though as previously stated, the
similarities go far beyond the limits of explicitness and implicitness), the tendency of
translators to compensate for cultural distance by explicitating is not translation-inherent, as
translators do nothing which is not present in non-translated texts as well.
It is my belief, however, that even the broad view of translation proposed by Jakobson
(1959) cannot fully encompass what we term ‘translation universals’, as these features reach
even beyond the realm of paraphrase, reported speech and relayed messages, which are
already, as mentioned before, severely underrepresented in the field of translation studies. I argue that “translation” universals figure in the very way we speak due to the transition we
make between our thoughts – generally abstract, non-verbal concepts – and the way we
formulate these into verbal messages, governed by grammatical rules which are not inherent
to our way of thinking. Just like a message from the source text must be processed into the
target language based on the way the two different codes operate as well as the possible gaps
in knowledge between the two cultures, so does human communication have to adapt to the
process of verbalizing inherently non-verbal ideas, and expressing notions contained in our
minds in such a way as to be understandable to a person who may not have all the information
necessary. It could be said that our communication is in a way Jakobson’s intersemiotic
translation taken to the extreme: our very speech is a process of transmuting non-verbal ideas
and concepts into verbal messages.

The fact that thought does not have a verbal form is supported, though sometimes
indirectly, by a number of linguistic, psycholinguistic or cognitive linguistic studies. The
Würzburg School of psychology, founded over a century ago, which set out to study thought experimentally, already took the fact that thought consists of images and sensations (and not words) as a starting point for its research (Börsch 1986: 196). Psycholinguistic and
neurolinguistic studies on the process of speech production virtually universally work with the
process of sentence production as first described by M. F. Garrett (1975). As Costa et al.
(2009) elaborate, the process of speech production consists of conceptual activation, retrieval
of lexico-semantic information, retrieval of phonological information, and the retrieval of
syntactic information, in this order, meaning that concepts we intend to talk about are
retrieved from the mind even before the selection of words takes place, and words, in turn, are
selected before their phonemic realizations are retrieved, which are then formed into a
sentence. Levelt (1992: 5) in his paper on accessing words in speech production states: “It is
widely held that a message is a conceptual structure, cast in a propositional language of
thought. It forms the input to the so-called formulator, whose task it is to map the message
onto linguistic form.” Though the formulator itself has rarely been described outside Levelt’s
work, many scholars before and after have pointed to various aspects of the process of
transition between conceptual thought and linguistic expression.
Some of these scholars, such as the above mentioned M. F. Garrett (1975), have even
used the word ‘translation’ to describe this transition. Dell (1986: 283), for instance, when
remarking on the temporal problem of sentence production, states: “thought, which generally
lacks temporal structure, must be translated via syntactic rules into temporally ordered
speech” (my emphasis). Possibly the most significant mention of this parallel, however, is
made by Langacker:

Let us consider abstractly the various factors involved in a particular instance of language use. It is prompted when a speaker, assessing the total context, perceives the
need to find linguistic expression for a conceptualization. The need for such expression
constitutes a problem to be solved, and the overall situation places a variety of
constraints on what counts as an acceptable solution. We need not dwell on these
constraints, which include such factors as the following: how much detail the speaker
considers relevant; which aspects of the conceptualization he wishes to emphasize; his
social relationship to the hearer; his assessment of how much the hearer already knows
about the context and the notion to be conveyed; how the expression is to be integrated
with previous and anticipated discourse; the effect he wants to have on the hearer; his
estimation of the hearer’s linguistic ability; and how far he is willing to deviate from
linguistic convention. (Langacker 1987: 65)

I believe we do in fact need to dwell on these constraints, and very much so, as these
constraints are almost word-for-word identical to those imposed by the process of translation,
where they are the motivating force behind the basic considerations a translator must
work with, and give rise to translation universals due to the way the translator overcomes
these constraints. How much detail the speaker considers relevant will determine the degree
of explicitation or implicitation between the thought and the message. The social relationship
will determine the style the message should have, much like a translator will craft his text to
suit the target audience. The assessment of how much the hearer knows of the subject and the
estimation of the hearer’s linguistic ability will again determine how explicit or implicit (s)he
can afford to be and whether there exists a knowledge or cultural gap between the source (the
speaker’s thoughts) and the target audience. The integration of the message with previous and
anticipated discourse, or, simply, the relation of the message to the context, differs very little
from what a translator does (except, perhaps, that the translator sees the anticipated discourse
on the paper, while the speaker does not). The focus on the skopos of the message (desired
effect, function) is exactly the same in intra- and interlingual discourse. How far the speaker is
willing to deviate from linguistic convention is what motivates normalizations in translation
(if the translator is not willing to deviate as much as the author).
In studies of translation universals, a great amount of attention has been paid to the term
‘universal’ and the way we define it, but it is in fact the word ‘translation’ which is, in my
eyes, by far the greater offender, as these features do not originate in translation. The very
same processes that take place within translation also take place within our very minds when
we are formulating a message, be it spoken or written, and the “translation” universals which
emerge from these processes are, inevitably, inherent to human linguistic expression.
Translation studies is, therefore, in my opinion, describing and studying linguistic features
which in fact reach far beyond the scope of what translation studies should be concerned with.
This stance is shared by Juliane House (2008), who, to my knowledge, was the first to discuss
the parallel between general language use and translation universals. Her view is that
translation universals are motivated by the differences between the surface structure of a
sentence as defined by Halliday, and its deep structure (“underlying abstract stratum which
determines the meaning of sentences and is represented in the human mind” (House 2008: 6)).

Due to these differences, House claims that the study of translation universals is in essence
futile, as there are no and can be no true translation universals, i.e. universals that are specific
to translation and no other form of linguistic expression. As a result, House has apparently
abandoned the study of translation universals altogether and has not published any papers on
this topic since.
This is where House’s path and mine diverge. While there is no doubt in my mind that the
term ‘translation universals’ is misleading and poorly defined, and there is a myriad of
problems with the way these features are studied, there can also be no question that these
features do figure heavily in the process of translation, and there is much to expand on and
study further. However, since these features, which I shall henceforth term ‘communication
universals’, do not truly originate in translation, the basics of these features must first be
established and described at a higher, more general level in an empirical manner, before they
can be properly studied within translation. This would not only help clarify what the term
‘universal’ truly means, as is often discussed in translation studies, but would allow for
comparisons and parallels to be drawn between general communication and translation to
reveal how the two processes differ, though perhaps, as mentioned by Zhetsen, not in the kind
of features appearing in them, but in the degree.

4. Communication Universals: A Compound Issue

It was at this point that my research reached the first major obstacle. While the notion
that translation universals are in fact inherent to all communication has been, though very
much indirectly, supported by studies outside translation and has even found some support
within translation studies thanks to House’s 2008 paper, further search for details regarding
the communication process which would allow for the description of communication
universals was required. However, while the manner in which we perceive language, acquire
language, and even, to an extent, produce language, has been described and studied both
theoretically and empirically within neuro- and psycholinguistics, the answers I sought eluded
me. How exactly, step by step, do we transition from the thought to the word? What are the
processes that take place in our mind before we formulate a sentence? How exactly does
Levelt’s formulator operate with conceptual information? A single answer to some of these
questions would have been sufficient to point my research in a specific direction, but it was
nowhere to be found.
It was then, after many weeks of searching, that I was asked the question directly: “What
answer are you looking for specifically? How would you imagine such a statement?” It took a
few moments for me to realize that I had no response whatsoever. Could such a statement
even exist? I thus had to embark on quite a different journey. Instead of looking for an
answer, a direct description, I began to look for the methods by which such an answer could
be obtained, determine whether it could indeed be obtained, and draw the appropriate conclusions either way. But
much like translation universals, communication universals are a compound concept. Before I
could truly begin to delve into the process of communication and attempt to derive
universality from it, I had to first resolve the controversial issue of universality itself. For
what use is there in attempting to study communication universals if the concept of universals is still
unclear? Since what we term ‘translation universals’ are in fact universals of communication
applied to translation, it seemed natural to attempt to resolve the issue of universals in
translation first, where a vast amount of material is readily available to work with, and apply
these conclusions to the study of the newfound concept of communication universals. I
therefore decided to search for fields which deal with the problem of universals and contrast
their approaches and methods to those employed in translation studies to find a possible
solution to the problem of universality.

4.1. Studying Universality

Universality, as a concept, is not unique to translation. It finds its origin in philosophy from the days of Socrates and likely even earlier, and remains a much-discussed issue in
metaphysics to this day (Armstrong 1989). In a very simplified way, universals in
metaphysics can be described as properties which groups of particulars (individual things, be
they objects, animals (including humans), or abstract concepts) have in common. If we have
two rocks and both rocks are hard, then we can say that they share a universal; in this case the
property of being hard. We need not necessarily define a class based on these similarities,
however. If we find a sandstone and notice its relative softness, we will likely not
automatically exclude it from the class of rocks. Different members of a given class (which in
itself has some inherent properties which define it as a class) can be categorized and
subcategorized together based on the properties they share. Thus, while we have a class of
mammals, which share certain biological properties (universals), we can then have apes in a
subclass of their own since they have certain anatomical and evolutionary similarities which
tie them together. Universals are thus established based on certain conditions or levels of
abstraction we choose to impose on the given particular.
A step closer to translation is the study of language universals, i.e. features or
properties shared by all languages, or by all language (Hockett 1966: 1). However, the
definition, though simple, can be quite deceiving. A statement that all languages have nouns
and verbs, though undoubtedly true and constituting a language universal, brings very little
new and useful information to the discussion of universal properties of languages. In
Greenberg’s list of language universals (1966a: 110), a foundation upon which the study of
language universals has built since its publication, very few, if any, universals take on such
an unrestricted form (Greenberg 1966b: xix). Instead, we encounter statements such as “if the
nominal object always precedes the verb, then verb forms subordinate to the main verb also
precede it” (Greenberg 1966a: 111) or “in languages with prepositions, the genitive almost
always follows the governing noun, while in languages with postpositions it almost always
precedes it” (Greenberg 1966a: 110). In other words, Greenberg works with universals which
are, in some way, contingent on circumstances. If a certain language has a certain property,
then X is true or very likely true. If it does not have that property, then Y. This provides ideal
conditions for classification of languages based on their shared properties (Jakobson 1966:
268), though in Greenberg’s case, these would be limited to the grammatical level only, and
would focus on the order of meaningful elements within a sentence. An entirely new
classification would be required in terms of morphology, semantics, stylistics etc., though
such classifications would, too, be possible, as Greenberg demonstrated with his impressive
list of 45 universals (though the list of exceptionless universals has grown shorter since then).
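Rendered schematically (the notation is my own illustration, not Greenberg’s), an implicational universal of this kind states that, for every language $L$, the presence of one property entails, or makes highly probable, the presence of another:

$$\forall L:\ P(L) \Rightarrow Q(L) \qquad \text{or, for statistical universals,} \qquad \Pr\big(Q(L) \mid P(L)\big) \approx 1,$$

where $P$ might stand for ‘has prepositions’ and $Q$ for ‘the genitive follows the governing noun’.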
Transitioning now to the field of translation universals, we will immediately notice a
rather startling difference: the majority of prominent universals investigated in translation are
unconditional statements about what translations do or do not include. For instance, when we
take a look at the explicitation hypothesis by Blum-Kulka, it proposes that translations will
generally tend to be more explicit than their respective source texts, and that explicitation is a
universal strategy inherent in the process of language mediation (Blum-Kulka 1986: 21).
There are no concrete conditions stated which would have to be met for this statement to be
true, which can lead to problematic situations. If, hypothetically, we study translations
between Czech and Slovak, two very similar languages and cultures which have shared much
of their history, and discover that explicitations appear rarely, if at all, does that, then, mean
we can no longer consider explicitation to be a universal? And what if we define, quite
specifically, the conditions in which explicitation shall appear in translation (cultural gap, for
instance) based on extensive evidence, and then discover translations where these conditions
are met and yet no explicitation takes place?
The problem lies in the inherent variability of translation. While language systems have
a certain order which can be studied and systematized and explained via conditional
statements, the process of translation, much like human communication, offers no such rules.
Though there are certain relations that exist between what is thought and what is said or what
is in the source text and how the message is reorganized into the target text, it is the human
factor which, in the end, determines the final version of the message. If the conditions for
explicitation are met (which would need to be defined first), the translator may still very well
choose to not explicitate. If the conditions are not met, the translator may explicitate anyway.
A communication or translation universal must therefore account not only for the conditions
present in the source, the conditions presented by the languages and cultures (in translation) or
persons (in communication) involved, but also for the choices, tendencies and idiosyncrasies
of the translator/speaker. Thus, as promoted by Toury (2001: 21), a proper translation universal should not simply amount to “translators tend to avoid repetition” or “translations tend to be longer than their source texts”, but should be a conditional and probabilistic statement which accounts not only for the requirements which need to be present in the source text for the universal to appear, but also for the variability presented by the translator, which can never be predicted with 100% accuracy and must inevitably be expressed in terms of probability only. A universal of such a kind will thus not be as easily
refuted by the existence of a text which does not show any evidence of it, since, as long as the
universal is represented strongly enough in a larger sample of texts, exceptions can be
investigated and explained by either the absence of the required conditions or by the stylistic
choice (correct or incorrect) of the translator without endangering an otherwise sound and
valid hypothesis.
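To give such a conditional and probabilistic statement a concrete shape (the notation below is my own illustrative sketch, not a formula proposed by Toury), a universal might be expressed as

$$\Pr\big(\text{feature } F \text{ appears in a translated segment} \mid C_1, \dots, C_n\big) = p, \qquad p \gg \Pr\big(F \mid \text{conditions absent}\big),$$

where $C_1, \dots, C_n$ are the conditions hypothesized to trigger the feature (a cultural gap between the source and target audiences, high information load, genre conventions etc.) and $p$ is a probability estimated from a sample of texts rather than assumed to equal 1. A counter-example then merely lowers the estimate of $p$ instead of falsifying the universal outright.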
As self-evident as this may seem, examples of empirical and theoretical studies on
universals which do not account for these criteria (as evidenced by the explicitation
hypothesis, or any entry in the list of translation universals in the Routledge Encyclopedia of
Translation Studies (Baker and Malmkjær 1998)) are far too numerous to list here, though
two examples might perhaps suffice to illustrate my point.
Firstly, there is the case of Tymoczko’s article ‘Computerized Corpora and the Future of
Translation Studies’ (1998). In it, Tymoczko implies that the investigation of norms (here
standing for Toury’s laws and today’s universals) is futile unless we are able to gather as
many materials from as many cultures and time periods as possible in order to investigate
them. Otherwise, what we are doing is merely investigating the current practices set in the
current culture and not delving deep enough to uncover regularities and common features in
translation as a whole. As Kujamäki and Mauranen (2004: 1) summarize, Tymoczko basically
declares investigation of translation universals pointless. Attempting to amass such a broad
range of texts along with their translations from the various layers of society and historical
periods, not to mention languages, in order to present a sample which Tymoczko indicates
would be sufficient to start investigating translation universals could be likened to counting
the wagons of a passing train which not only moves faster than we can count, but also
spontaneously creates additional wagons on both ends. Collecting today’s literary production
plus the literature produced throughout mankind’s history along with the corresponding
translations, and organizing these into a corpus, if even possible, would render any attempt at
qualitative analysis hopeless. Today’s empirical studies already commonly suffer from a lack
of sensitivity to context and other translation-related factors (as, for example, documented by
Becher (2010a, 2010b)). Analysing such an amount of text in terms of occurrences of any
universal whatsoever would turn the shortest of papers into a decade-long investigation. On
the contrary, it is my belief that investigations into translation universals must deliberately
limit their scope and field of interest in order to achieve valid results, since combining
together investigations across various fields, genres and time periods will tend to return
results of questionable usefulness, as each field and genre is regulated by its own norms and
routines within a given culture. In order for these to be identified and contrasted with the
target language and culture, they must first be isolated. Since, as mentioned earlier, a proper translation universal is not a mere black-and-white statement of “translation includes X” and
should take a form similar to Greenberg’s language universals, and thus be more of a
predictive statement regulated by conditions and (ideally) supported by empirical evidence, it
is not necessary to investigate the majority of human literary production in order to be able to
state with some confidence that X is a universal of translation (i.e. a feature/strategy –
depending on how deliberate its use is – often occurring in translation, not a feature present in
all translations as the definition of ‘universal’ would suggest).
A second example of an ill-defined concept of a translation universal can be found in
Pym’s paper (2007) on levelling out (equalizing universal) originally proposed by Miriam
Shlesinger. In it, Pym discusses Shlesinger’s findings from her research performed within
interpreting, where she identifies a tendency for texts to move towards the centre of the oral-
literate continuum. Thus, spoken texts take on characteristics of written texts (false starts are
eliminated etc.) and written texts take on characteristics of spoken ones. Since explicitation
has been categorized as a feature typical of literate texts (as discussed in Pym’s article), however, and since, assuming Blum-Kulka’s hypothesis were sound (which, as proposed
earlier, it is not), all texts should be moving towards more literate features only, Shlesinger’s
equalizing universal would effectively falsify explicitation, even though it is, for example,
listed right alongside it by Lind (2007) or Baker (1996). Pym then goes on to propose that the
universals defined by Baker are at best only universal for written translation or not universal
at all. In the former case, Pym states that interpreting would have every right to declare itself a
field separate from translation studies, since it operates on different principles.
This, though perhaps an intentional hyperbole on Pym’s part, is a claim I find difficult
to support. As shown earlier, translation universals can generally only speak of tendencies and
probabilities, rather than certainties. Thus, while Blum-Kulka’s explicitation hypothesis is
most certainly flawed, explicitation as a feature common in translated texts (that is, appearing
commonly, not one necessarily common to all of them) is not falsified by the finding that in
interpreting the tendencies are different (explicitation still appears, just not as often as was
expected). Suggesting that interpreting should be viewed as a separate field outside translation
is thus, in my eyes, an exaggeration, as the differing tendencies in interpreting still operate on
the same grounds and with the same features as translation (source text, target text,
explicitations, implicitations etc.), with differences only in the circumstances in which certain
features appear. If a universal were taken as a conditional statement rather than a simple
declaration of “translation has a tendency to X”, as promoted by Toury, then it would be
specifically these conditions that would allow researchers to differentiate between what
occurs in translation and what occurs in interpreting without splitting the field of translation
studies in half. Furthermore, the fact that certain proposed universals seemingly negate each
other (as levelling out and explicitation do in Pym’s view) is more of a reflection on the poor
definition of what a universal is as such, rather than an issue which demands that one of the
two be eliminated. Translation universals tend to operate under certain specific conditions
much like Greenberg’s language universals do. Two opposing tendencies can coexist,
provided there are clear-cut boundaries and probabilistic statements predicting when such a
universal should occur. They cannot, naturally, both appear in the same segment of text, but
they most certainly can coexist within a single translation or even a single sentence,
depending on the translator’s choices and judgement.
It is thus, in my eyes, vital that any research into universals, be they the more general communication universals or their occurrence within translated texts only, takes these facts into account and either a) proposes a universal and then sets out to determine the specific circumstances in which this universal has a tendency to appear, or b) investigates texts for recurring tendencies and, based on these, determines which universals frequently operate within them and under what circumstances. That is, research should either begin with a hypothesis (a
common occurrence in translation studies) which is then further specified in terms of its
conditionality and probability through empirical research, or the work begins from the bottom
up, as in Greenberg’s case, from empirical research to conclusions and universals based on these. This, I believe, would help scholars avoid some of the issues discussed in this chapter and would surely lead to more verifiable results, thanks to a more consistent and comparable handling of the term ‘universal’ across different studies, as well as a clearer definition of various universal features, which would provide a good basis for further supporting research.
With the establishment of a basic definition of what universality is and what a proposed
universal should look like, we can now move on to the investigation of the process of
communication within the human mind to complete the picture of communication universals
with the empirical methods by which these features could be researched.

4.2. Studying the Communication Process

In order to study the process by which our inner conceptualizations are transformed into a lexical expression, and attempt to observe universality operating within this process, it is first necessary to understand how language is organized in the human mind, both systematically and spatially, so that we can discover the best methods of studying the communication process empirically. With linguistic expression being an activity we perform
on a daily basis, it would be a natural first choice to simply observe ourselves as we produce
speech and describe what goes on in our mind. Unfortunately, as Levelt aptly states:

Are we aware of how we do it? As for most other high-speed skilled behaviour, the
answer is “no”. We can muse about the meanings of lexical items. We can even reject a
word that jumps to mind and go for a more appropriate one. But we cannot trace the
process by which we retrieve a word to start with. Introspection is largely useless in the
study of lexical access. (Levelt 1992: 2)

Though introspection serves as a valuable tool in a number of psycho- and neurolinguistic enquiries, it was necessary for researchers to obtain information on how our mental network
works in a different way. Psycholinguists thus turned to less direct methods which utilized the
unconscious nature of the speech process. They began investigating slips of the tongue and
speech errors which would hint at the underlying processes, and created lexical decision tasks
which required subjects to respond as quickly as possible to various enquiries regarding not
only similar words (in appearance or meaning) but also words which we usually associate
with each other (bread and butter, meat and potatoes etc.). The tasks also incorporated various
interferences between the words and the things they represent (for instance, the word ‘red’
written in yellow colour, with the task being to name the colour of the letters). It was precisely
through these methods that we have obtained a clearer idea of how language is organized in
our mind.
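To illustrate the logic of such interference tasks, the following toy sketch (my own, not a replication of any published experiment; a real experiment would display coloured text on screen rather than describe the ink verbally, and would use many more trials) times responses in congruent and incongruent trials, the difference between the two averages being the interference effect these tasks exploit:

import random
import time

COLOURS = ["red", "yellow", "green", "blue"]

def run_trials(n_trials=8):
    """Present colour words with a congruent or incongruent 'ink' and time each answer."""
    results = []
    for _ in range(n_trials):
        word = random.choice(COLOURS)
        congruent = random.random() < 0.5
        ink = word if congruent else random.choice([c for c in COLOURS if c != word])
        start = time.monotonic()
        answer = input(f"The word '{word.upper()}' is printed in {ink} ink. Name the INK colour: ")
        elapsed = time.monotonic() - start
        results.append((congruent, answer.strip().lower() == ink, elapsed))
    return results

def summarize(results):
    """Compare mean response times for correct answers in congruent and incongruent trials."""
    for label, flag in (("congruent", True), ("incongruent", False)):
        times = [t for c, correct, t in results if c == flag and correct]
        if times:
            print(f"{label:12s}: {sum(times) / len(times):.2f} s on average")

if __name__ == "__main__":
    summarize(run_trials())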
By studying slips of the tongue and speech errors during the above-mentioned tasks,
psycholinguists have been able to observe the associations we make between words, colours,
sounds and other perceptions, and describe the organization of language in our mind; a system
we now call the ‘mental lexicon’. It is a mental database of past experiences with individual
words including semantic, syntactic, morphological, phonological and orthographic
information, context in which these words appear etc., all of which is linked together into a
vast network. It is as if each of us bore in our brain a corpus of all words in all the languages
we know; indeed, the term ‘mental corpus’ has also been used to describe this intricate system
(Taylor 2012). Due to the experience-based functioning of this system, each of our own
personal corpora will be different, as we all experience and acquire language differently, have different vocabularies, know different languages which influence one another more or less (depending on their closeness, regularity of use, our own ability to prevent interference etc.), and have encountered different literature in our lives to further expand our corpora.
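Purely as a schematic picture of what a single node in such a network might hold (a toy data structure of my own devising, not a model drawn from Taylor or the psycholinguistic literature), a lexical entry and its experience-based links can be imagined roughly as follows:

from dataclasses import dataclass, field

@dataclass
class LexicalEntry:
    """A toy sketch of a single node in a 'mental lexicon' network."""
    word: str
    meaning: str                                    # semantic information
    part_of_speech: str                             # syntactic information
    phonemes: str                                   # phonological information
    associations: set = field(default_factory=set)  # experience-based links to other entries

bread = LexicalEntry("bread", "baked staple food", "noun", "/brɛd/")
butter = LexicalEntry("butter", "spread made from cream", "noun", "/ˈbʌtə/")

# Repeated co-occurrence in experience strengthens the link between the two entries,
# which is what lexical decision tasks with associated word pairs exploit.
bread.associations.add(butter.word)
butter.associations.add(bread.word)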
Unfortunately, the knowledge of the existence of such a system, or even a detailed description of the mental links contained within our mind, is not quite sufficient to properly
understand the process of how the words we choose from our mental corpus relate to the
concepts within our mind, as the process of verbalizing our conceptualizations begins with the
conceptualization first, i.e. is of non-verbal origin. Thus, though research of the mental
corpus/lexicon has certainly brought us a leap forward when it comes to understanding the
psychological layout of language within our minds, it is the process of speech production
itself which we need to study to understand the relations between the thought and the
message.
This was a task neurolinguistics only too happily took up. Beginning in the 1960s,
neurolinguists began researching language empirically by working with patients with aphasia
(a language disorder caused by damage to sections of the brain), a method used to this day.
With the developments in technology over the last few decades, researchers were gradually
able to also begin monitoring the process of speech through a combination of a wide variety
of methods, from eye-tracking and introspection, to functional magnetic resonance imaging
(fMRI), electroencephalography (EEG), electromyography (EMG) etc. in both aphasiacs and
healthy subjects. The tasks given to the subjects could become more and more specific and
focus on different aspects of speech production and reception. Thus, for instance, as
mentioned in Chapter 4, researchers have been able to determine that the process of speech
production consists of conceptual activation, retrieval of lexico-semantic information,
retrieval of phonological information, and the retrieval of syntactic information (Costa et al.
2009). They have also been able to determine the approximate onset of each stage of speech
production as well as the areas of the brain responsible for each, and have even, to some extent, managed
to describe the processes taking place. Despite these discoveries, as stated previously, I have
not been able to find any direct answers to the key questions: How does the process of
transforming a non-verbal message into the verbal code operate? What are the steps that take
place in our mind before we select the lexical expression of our message? I have thus
dedicated myself to the study of available empirical research on the issue of brain activity,
commitment to memory, imagined speech, reading and assignment of meaning in an attempt
to determine how research of communication universals could be performed at this general
level and how the findings could be applied to the research of the translation process. I
focused not only on the methods which were utilized in these studies, hoping to find one
suitable for the research of communication universals, but also on the areas which were being
studied in neuro- and psycholinguistics and the accuracy of data researchers were able to
obtain with the available technology.
One of the main achievements of the study of patients with aphasia has been the ability
to determine approximately which sections of the brain are involved in our ability to
understand and produce language. Before the era of MRI and EEG, researchers had to rely on
rather inaccurate information on where exactly the injury was located in the brain to
determine which section was responsible for the lost linguistic abilities seen in the subject
(Menn 2012). Often, accurate information was only obtained much later (during autopsy), or
not at all, though there were also cases where patients had to undergo cranial surgery due to
severe epilepsy or brain tumours. This required the brain to be directly stimulated to
determine the areas vital for the use of language in order to guide the surgeon away from
them, providing useful information to the field of neurolinguistics in the process. Though the
studies with aphasiac patients could identify the approximate larger cerebral regions in which
certain processes take place, more detailed information was not available, and the study of the
process of speech production was still out of reach.
With the emergence of functional MRI (fMRI) – a method which allows the observation
of processes taking place in the brain in near-real time – researching the functions of specific areas
of the brain could begin in earnest. fMRI allows scientists to measure brain activity by
detecting the changes in blood flow to various sections of the brain, which, naturally,
correspond with the places where increased brain activity takes place at the moment. While
fMRI does not allow the measurement of the processes themselves, it does enable us to locate
the specific section which needs to be monitored. The data obtained by research performed
using fMRI has revealed that there is no single place where language is located:

We can’t say that language is ‘in’ a particular part of the brain. It’s not even true that a
particular word is ‘in’ one place in one person’s brain; the information that comes
together when we understand or say a word arrives from many places, depending on
what the word means. (Menn 2012).

Thus, for instance, the left inferior frontal lobe seems to be responsible for semantic
processing (Bookheimer 2002: 151), while the organization of categories of objects and
concepts takes place in the temporal lobe. As Small and Tremblay (2011) describe, it is
becoming increasingly accepted that language relies (to an unknown extent) on an overlap
with other functional systems. Several experiments have, for example,
demonstrated a connection between speech and hand gestures, and studies of imagined
speech (speech produced in our head rather than overtly, out loud) have shown that
motor response (the movements we make when, for example, speaking) plays a role even
when we are thinking to ourselves (Parnin 2011). The cerebral regions responsible for these
associated activities are thus also involved in speech to a lesser or greater extent.
These facts make studying the process of speech production rather difficult, as it
requires a synchronized measurement of a multitude of areas of the brain in order to be able to
accurately study the processes involved in the production of even a single word, let alone a
sentence or sustained speech. There still seem to be many questions left to answer by the use
of fMRI, as evidenced by the study by Small and Tremblay (2011) which attempts to clarify
whether motor response selection in overt sentence production (that is, the selection of the
movements to be made in order to phonologically produce the sentence we intend to utter
aloud) takes place in the inferior frontal gyrus or the lateral and medial parts of the premotor
cortex. In other words, we are still uncertain to this day about where certain speech-related
processes take place. To further complicate the problem, fMRI carries with it a number of
technical disadvantages. First and foremost, it is difficult to use fMRI in synchronization with
another method of measurement (such as EEG), since fMRI measures the flow of blood to the
brain, and blood flows rather slowly in comparison to the milliseconds it takes neurons
to react to stimuli. Events in the brain are thus detected by fMRI with a delay of 2–7 seconds
after they take place. The equipment needed for fMRI also requires the subject to be
almost motionless and lying down, which eliminates any possibility of using fMRI in more
natural conditions for the subject and may negatively influence the results of the experiment
by placing the subject in an unfamiliar environment.
A rather more promising method of studying brain activity during thought or speech is
the use of EEG. While fMRI allows researchers to locate the regions of the brain where this
brain activity takes place and, based on the knowledge we have about certain regions, infer
what the given activity is, EEG enables them to measure the brain activity itself. Thus, for
instance, Costa et al. (2009) were able to determine, based on an experiment requiring
Spanish-Catalan bilingual subjects to name pictures in their L1 (Spanish) and Catalan-Spanish
bilinguals to name pictures in their L2 (Spanish), that lexical access (the
selection of words) happens around 180 ms after picture presentation, with low-frequency words
adding up to 200 ms to the reaction time of the subject. Deng et al. (2009), on the other hand,
investigated the methods of sensing imagined speech by assigning subjects the task of
imagining two syllables (with no meaning) in a number of prescribed rhythms. While the
sample size was rather small (four subjects), the experiment has demonstrated that scientists
are not only able to detect the rhythm in which the subject thinks of the syllable, but also
differentiate which syllable is being thought. While isolated syllables without semantic
content are far removed from sustained spontaneous thought, and the experiment required
training the subjects to produce brain waves more easily discernible by the EEG, the results
would indicate that research into the process of language production still holds much
untapped potential.
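The averaging logic behind such ERP-based timing claims can be illustrated with a deliberately simplified, synthetic sketch. What follows is my own toy example, not a reconstruction of the pipelines used by Costa et al. (2009) or Deng et al. (2009); the sampling rate, effect size and noise level are invented, and only the 180 ms onset mentioned above is borrowed from the discussion.

```python
import numpy as np

# Synthetic sketch: average many EEG epochs time-locked to picture onset,
# then read off when a condition effect emerges from the averaged waveform.
rng = np.random.default_rng(0)
fs = 500                                    # sampling rate in Hz (assumed)
t = np.arange(0.0, 0.6, 1.0 / fs)           # 0-600 ms after picture onset

def simulate_epochs(effect_onset_s: float, n_trials: int = 100) -> np.ndarray:
    """Single-trial voltages (µV): a 4 µV step effect buried in trial noise."""
    effect = np.where(t >= effect_onset_s, 4.0, 0.0)
    return effect + rng.normal(0.0, 4.0, size=(n_trials, t.size))

def estimate_onset_ms(erp: np.ndarray, threshold_uv: float = 2.0) -> float:
    """First latency (ms) at which the averaged waveform exceeds the threshold."""
    return float(t[np.argmax(erp > threshold_uv)] * 1000)

# Lexical-access-related activity assumed to emerge ~180 ms post-stimulus,
# mirroring the figure reported above for the Costa et al. experiment.
epochs = simulate_epochs(effect_onset_s=0.18)
erp = epochs.mean(axis=0)                   # averaging cancels random trial noise
print(f"estimated effect onset: {estimate_onset_ms(erp):.0f} ms after picture onset")
```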
Much like fMRI, however, EEG is not without its problems. While fMRI
research cannot obtain accurate temporal information due to the delay caused by the slow
reaction of blood flow, EEG faces a timing problem of its own. Since overt speech
production generally involves, among other things, movement of the eyes, lips, eyelids or
head (Christoffels et al. 2011), these muscle activations cause artefacts (distortions,
disturbances) in the EEG signal, producing a signal several times
stronger than brain activity does. For this reason, researchers either operate with passive
reading and listening tasks when measuring language-related activity in the brain, or, when
studying overt speech production, require the subject to produce speech after a short delay
(when, supposedly, processes such as lexical access and motor-response selection have
already taken place and been measured, and artefacts no longer play a role). In the case of
imagined speech, another issue is presented by the uncertainty whether the subject is truly
following the instructions. A stray thought can distort the recorded data. Also, as Christoffels
et al. (2011) point out, the need for overt naming and speech can influence the processes
taking place in the brain during the preparation stage to an unknown extent in comparison to
covert (imagined) speech, which we perform on a daily basis when thinking or reading.
Another interesting method utilizes the connection between speech and other associated
activities (such as hand gestures or other forms of motor response) to detect what linguistic
operations take place in the subject’s mind. Parnin (2011), for instance, in his study of the inner
thoughts of software developers, utilized electromyography (EMG), which
detects electrical signals produced by muscle nerves. As Parnin describes,
subvocalization (our inner voice; covert speech taking place in our mind) is linked to the
process of overt speech to such an extent that our brain transmits signals to our
articulatory muscles on how to produce the given word or sentence even though we are only
producing it in our mind. Thus, with a proper understanding of the phonemic
production of words and sufficiently accurate technology, researchers could almost literally
read words from our mind as we are thinking them – though so far, researchers have only been able
to detect a limited set of words in subjects trained to subvocalize them clearly enough for the
EMG to detect. Even so, EMG signal detection has achieved around 92% accuracy
(Parnin 2011).
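Why a small, explicitly trained vocabulary makes this kind of detection tractable – and why it does not scale to spontaneous thought – can be shown with a purely synthetic toy sketch of my own; the vocabulary, the number of features and the noise level are invented, and no claim is made about Parnin's actual signal processing. Each rehearsed word is assumed to leave a characteristic pattern of muscle activity, and a new recording is simply matched to the nearest stored template.

```python
import numpy as np

# Toy illustration of template matching over a small, trained vocabulary.
rng = np.random.default_rng(1)
WORDS = ["yes", "no", "stop", "go"]
N_FEATURES = 8                                   # e.g. per-channel EMG energies (assumed)

# Patterns learned during a training phase, one per rehearsed word.
templates = {w: rng.normal(0.0, 1.0, N_FEATURES) for w in WORDS}

def simulate_trial(word: str, noise: float = 0.4) -> np.ndarray:
    """A noisy repetition of the trained subvocalization pattern for `word`."""
    return templates[word] + rng.normal(0.0, noise, N_FEATURES)

def classify(sample: np.ndarray) -> str:
    """Nearest-template matching: only works because the vocabulary is small and known."""
    return min(WORDS, key=lambda w: np.linalg.norm(sample - templates[w]))

trials = [(w, simulate_trial(w)) for w in WORDS for _ in range(50)]
accuracy = float(np.mean([classify(x) == w for w, x in trials]))
print(f"recognition accuracy on the trained vocabulary: {accuracy:.0%}")
```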
While at first glance these achievements sound extremely promising for the possibility
of researching the process of human communication in the field of neuro- and psycholinguistics,
a number of factors indicate that this goal is still far out of reach. Firstly, the currently
available methods of measuring brain activity do not seem sufficient for the needs of the study
of communication universals. fMRI, while providing spatially accurate information during
language tasks, does not offer much insight into the process of language production itself.
EMG, on the other hand, allows researchers to directly access the words being thought at the
moment. So far, however, this method has only been successfully applied when the subjects
were specifically trained to subvocalize specific words so that the EMG could detect the
signals properly. Detecting sustained thought without the subject having been trained in every
minute aspect of it is, so far, not possible. The method also relies heavily on
subvocalizing, which does not necessarily occur every time we think of something (Akhter
and Hurlburt 2008) and is not required for the processing of information (as evidenced by
speed reading, which suppresses subvocalizing altogether with no significant loss in text
comprehension); this would render EMG largely ineffective for spontaneous thought. Most
importantly, however, EMG only allows the detection of words themselves as they are being
thought and does not allow access to the conceptual level, before lexical access takes place.
This would leave EEG, with its ability to detect brain activity directly, as the prime
candidate for research in communication and its universals. When we observe the practical
methods used in empirical research with EEG, however, we will notice a recurring trend:
researchers work from the known to the unknown. Specifically, subjects are generally given a
stimulus (a word, a sentence, a recording) which they passively process, or are given a
stimulus (picture, pair of syllables and a rhythm etc.) from which they are then asked to
actively produce a lexical output, with the researcher recording the brain activity resulting
from the stimulus. In all these cases, the stimulus is known, limited. The output is likely to
correspond with a certain pattern: a picture of a dog will likely elicit the word ‘dog’ from the
subject, and the researcher observes the process of lexical access for the word. However, the
stimuli our sentences work with generally come from anywhere around us or from within our
mind, from the past or the present, from the unspoken context or from what has just been said.
In other words, our stimuli are generally unlimited. Since, naturally, taking a produced
sentence and tracking it back to its original conceptualization is an incredibly complex task,
researchers generally have to work with limited stimuli to obtain data. Researching our
communication in the natural environment would thus pose a major problem.
The extent of the analysable stimuli is not the only obstacle. While our understanding of
the processes taking place in the brain while producing or processing language has advanced
greatly over the last century, our understanding of the pre-lexical, non-verbal phase – the
conceptualization – is rather limited. Without exception, all the studies from the fields of
cognitive linguistics, psycholinguistics and neurolinguistics encountered during my research
have dealt either with the retrieval of lexico-semantic information, retrieval of phonological
information or the retrieval of syntactic information. Conceptual activation which precedes
these steps remains largely unexplored. As long as this remains the case, any advancements in
our ability to investigate lexical access will bring us no closer to the study of communication
universals. The very basic prerequisite of studying translation universals has been access
to the source text (for S-universals) or to comparable texts in the target language (for T-universals),
as these universals are observed only in contrast to another piece of text. We can detect that
the translator explicitated something only because we know from the source text that the
message was more implicit. We can claim that translations underrepresent unique items
(phrases and terms specific only to the given language) only when we compare translations to
non-translated texts produced in the target language. We cannot, however, attempt to study
universals, be they in translation or communication, if the source itself is unavailable. This,
unfortunately, is currently the case in communication, and it is doubtful that the situation will
change. Indeed, even if we were able to access the process of conceptual activation, how
would we evaluate such a source? Assuming our technology advanced to a stage
where we could reliably, almost routinely, detect the process of lexical access and all
the subsequent stages of spontaneous and sustained language production, how would we
relate these data to a source which is by its very nature non-verbal? How would we determine that a
given brain wave was subsequently explicitated before the final message was encoded, because
the speaker decided the context was insufficient for the audience to understand, when all we
have access to are amplitudes of brain waves? These are questions an experienced
neurolinguist might be able to answer. I, however, cannot.

5. The Implications for the Study of Translation Universals

It is due to our inability to obtain detailed and analysable data from the stage of
conceptual activation (an inability caused both by the limits of current technology and, I believe,
by the non-verbal nature of conceptualization itself), and thus our inability to relate our speech to its “source text”,
that the study of communication universals on the general level of human communication is,
in my opinion, not possible, and will remain so for the foreseeable future. This, however,
places an even greater emphasis on the empirical study of universals in translation, since
translation studies seems to be the only field where communication universals can be studied
with direct access to the source text. And while the process of translation does differ from the
process of intralingual communication in certain respects and does create its own unique
tensions which influence the occurrence of communication universals (such as interference
and language pair specific problems which do not appear in general communication), there is
still much to be gained from studying communication universals in this environment. Since a
general background for communication universals cannot be established on a higher level of
linguistics, however, certain issues within the study of universals in translation must be
resolved before translation studies can truly begin laying down the foundations for the study
of communication universals on a larger scale than what has been achieved so far.
Firstly, as discussed in Chapter 3, translation studies should embrace a wider
perspective of translation following Jakobson’s model (1959) in order to study
communication universals. Communication universals occur on all levels of human
communication and must therefore be studied in as wide a context as possible. A number of
studies from the field of paraphrase and intralingual translation (such as Zethsen (2009)) have
documented the occurrence of universals beyond the region of interlingual translation, and
these should be accepted and incorporated into the field of translation studies. As Zethsen
(2009) emphasizes, empirical studies of intralingual translation are few and far between, and
while the number of written sources where intralingual translation can be studied is limited
(modernizations and different takes on the translation of the same text generally do not occur
as often as regular translations), and paraphrase is mostly restricted to spoken language and
thus difficult to compile into a corpus, these areas must nonetheless be integrated into
translation studies to as great an extent as possible if we are to obtain a wider, more
representative sample of the occurrence of communication universals. This integration would
then also allow for the differentiation between general communication universals (those
occurring in specific frequencies and under specific conditions regardless of language pair
specific tensions, as seen, for instance, in paraphrase and re-translation) and those specific to
translation only (specific either to a given language pair or specific to translation in kind,
frequency or motivating factors).
Secondly, due to the variables involved in human communication, as well as problems
inherent in some of the definitions of proposed universals (see Chapter 4.1), communication
universals should take on the form of conditional and probabilistic statements in accordance
with Toury’s definition of translation laws (2001) or Greenberg’s list of language universals
(1966a) to achieve more realistic and verifiable proposals which can, as Greenberg’s list
demonstrates, serve as a proper foundation for the study of universality and provide tangible
data. Otherwise, some potential universal features might be refuted in empirical studies due to
a poorly proposed definition, as illustrated, for instance, by Becher’s paper (2010b) on Blum-
Kulka’s explicitation hypothesis. Indeed, the tendency to see universals as black-and-white,
yes-or-no statements instead of conditional and probabilistic ones can lead
scholars to abandon the field entirely, as evidenced by House (2008). If, however, an approach
similar to Greenberg’s (1966a) is accepted, universals can be a very effective way to study
language and its use within a given context (such as in translation) without the term
‘universal’ becoming problematic and discouraging empirical research.
Thirdly, there is the issue of methodology. The study of universals with the use of
electronic corpora, and the resulting desensitization to context as well as a certain degree of
opaqueness for the reader, can lead to issues which may render certain studies invalid upon
closer inspection. Thus, as evidenced by Becher (2010a, 2010b), studies which claim to have
found evidence of a certain feature (in Becher’s case, translation-inherent explicitation, whose
existence both Becher and I deny) in fact ignore certain factors which motivated
the author to explicitate and which would place the explicitations into a different sub-
category. Similarly, as discussed by Bernardini and Zanettin (2004), some studies, while
perhaps avoiding the issue of contextual factors, may fail to obtain a representative and
comparable sample for their analysis either due to incomparable differences in frequency of
translation and genres to and from a given language, or due to the nature of texts selected for
the corpus. In the CEXI corpus example given by Bernardini and Zanettin, translation from
English into Italian was much more frequent than vice versa, with translation of non-fiction
into English being from the areas of art, sport, games and religion, while translation into
Italian was less frequent in these areas and more frequent in applied science texts. More
alarmingly, the texts selected for the fiction portion of the corpus included generally low-
brow and popular English literature, while the Italian texts consisted of high-brow classics.
Naturally, any analysis resulting from the study of such a corpus, however otherwise
meticulous and methodologically sound, would fail to provide valid results, which the reader
would likely only realize after studying the corpus itself for an extended period of time. Some
studies merely provide the quantitative results of the analysis and the name of the corpus analysed,
leaving the reader with very little information on the contents of the corpus, the
comparability of the samples, and the extent to which context was taken into account. In this
respect, while attention to detail on the part of the researcher should be considered a
prerequisite for the empirical study of universals, easier access to the data analysed (such
as more detailed information on the corpus or some context for the analysed phenomena) is
paramount if the reader, or any potential follow-up study, is to be able to verify the results.
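By way of illustration, a transparent comparable-corpus check of the kind called for above might look like the following sketch. The folder names, the small set of connectives used here as a crude explicitation proxy, and the corpora themselves are hypothetical; the point is only that corpus sizes and normalized frequencies are reported together, so that a reader or a follow-up study can verify the comparison.

```python
import re
from collections import Counter
from pathlib import Path

# Hypothetical comparable-corpus check: compare the normalized frequency of a
# candidate feature (a few explicitating connectives) in translated vs.
# non-translated English, reporting corpus sizes alongside the rates.
CONNECTIVES = {"therefore", "moreover", "however", "thus", "consequently"}

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z]+", text.lower())

def feature_rate(folder: str) -> tuple[int, float]:
    """Return (corpus size in tokens, connective hits per 1,000 tokens)."""
    tokens: Counter[str] = Counter()
    for path in Path(folder).glob("*.txt"):
        tokens.update(tokenize(path.read_text(encoding="utf-8")))
    total = sum(tokens.values())
    hits = sum(tokens[c] for c in CONNECTIVES)
    return total, 1000 * hits / total if total else 0.0

for label, folder in [("translated", "corpus/translated_en"),
                      ("non-translated", "corpus/original_en")]:
    size, rate = feature_rate(folder)
    # Reporting the corpus size with the rate keeps the comparison verifiable.
    print(f"{label:>14}: {size:>9} tokens, {rate:.2f} connectives per 1,000 tokens")
```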
Finally, and in my opinion most importantly, we arrive at the issue of the practical
application of the study of universals within the field of translation. Much work has been done
in studying translation universals, and the field of study has produced a number of interesting
hypotheses as well as empirical evidence of recurring tendencies in translation and
communication. How, however, can these findings be applied to our own process of
communication and translation? If we were able to determine what communication universals
operate in language based on the study of translation (in the Jakobsonian, broader sense of the
word), how could we use this knowledge in practice?
There have been, generally speaking, three ways of approaching the study of translation:
the prescriptive route, which attempts to predefine what translation should and should not be,
the pejorative route, which views translations in terms of their deficiencies and failures in
comparison to the qualities of the source text, and the descriptive route, which focuses on
analysing the way the process of translation is actually being performed by translators.
Descriptive translation studies have taken this third approach to heart and set out to describe
the phenomena within translation by studying, among other things, large electronic corpora of
translations, their source texts, and non-translated texts in the target language. As Baker put it
(1993: 243): “The practical question of how to improve our translations will find more
reliable and realistic answers once the phenomenon of translation is explained in its own
terms.” It is, however, questionable whether simply describing translation will lead us to the
goal of understanding the good and the bad of our own translation process. As Øverås (1998)
states:

While the non-evaluative stance of Descriptive Translation Studies has indeed been a
valuable move within translation research, there is no reason why textual investigations
should not be of use to practising translators, as long as the latter also realize that there
is as much to be learnt from insights into the many possibilities and constraints that
operate on translation as there is from rigid prescription and definite answers.

This, I believe, is the key to moving forward in translation studies: the empirical study of
translation needs to adopt a more systematic approach to the description of translation
tendencies and universals, not only in translation in general but in specific language pairs as
well, and to apply this systemized knowledge to translator education. While universals are a
feature well known to translation scholars, regardless of the stance they take on the issue, a
translation trainee is unlikely to keep up to date with the latest and greatest in the world of
universals of translation. Sometimes (as in my case) a translation trainee is fortunate to even
learn what translation universals are, let alone what tendencies are prevalent in the given
language pair. Indeed, even for the purposes of teaching, the world of translation universals is
so diverse, chaotic almost, that summarizing the findings of empirical translation studies for
the students of translation in any given language pair would likely prove unfeasible due to a
lack of systemization – not to mention the fact that some of the studies directly contradict the
findings of others (Jantunen 2004: 101) based on what aspects they focus on or how they
define the word ‘universal’.
The work being done in translation studies is, however, perfectly suited to resolve this
problem of accessibility and bring universals into the translation classroom not only by
describing the process of translation, but also by evaluating and criticizing it (where necessary),
and even by contrasting the two language systems in contact – an activity performed on
a regular basis by translation scholars, but one that seldom travels beyond the pages of
translation journals. Language-specific features (and unique items) as well as the way they
relate to one’s own native language (and the universals arising from this relation during the
translation process) will likely never become an issue that will be adopted into conventional
language learning simply due to the fact that a native speaker will adopt the unique features
through regular usage of his mother tongue, and a second language learner will likely not be
able to adopt these features until (s)he becomes extremely proficient in the language in the
first place. The knowledge of these features would, however, be of much use to translation
trainees, who would thus become more aware of the two language systems involved and would be
offered a useful comparison between them. They would finally be given tangible and
empirical data in a field of non-binary relations and solutions, which translation doubtlessly
is. This approach could not only incorporate the results of empirical research on universals, but
also integrate other aspects of the two language systems which often serve as motivating
factors for the occurrence of universals or various other shifts. Thus, for instance, the two
languages' tendencies towards nominality or verbality can be compared (English tends to use
noun phrases where Czech more often uses verb phrases), the way relative clauses are
introduced can be discussed in detail (for a Czech speaker, having a relative clause not be
introduced with ‘that’ will likely make them feel like something is missing, since these
introductory words cannot be omitted in Czech, while an English speaker will be quite fine
with omitting ‘that’ without fear of not being understood), and even the specific use of
diminutives can be analysed (German, for instance, generally uses fewer diminutives than
Czech, especially when it comes to designations of kinship) with additional information
available on how certain universals such as explicitation or simplification figure in the
process. All of this is data obtained by translation scholars on a regular basis through their
empirical studies, but it is rarely put to any practical use beyond summarizing the results in a
numerical table and providing a few noteworthy examples in the surrounding text for other
translation scholars to ponder. It would, however, serve as the ideal basis for comparative
studies of various language pairs, and would allow for greater integration of universals as well
as language contact research into translator training.
Greater systemization and complementarity of empirical studies of translation, and
collection and analysis of results obtained in published studies for individual language pairs
could give the study of universals within translation a new direction and allow translation
teachers and trainees easier access to data on the issues specific to given language pairs as
well as large-scale tendencies in translation and communication, and would provide a useful
tool to achieve the goal of improving one’s translations or obtaining new perspectives on
translation strategies. Thus, the discipline would shift from purely descriptive studies to a mixture
of prescriptivism, descriptivism and contrastive linguistics, and move on from
individual, isolated findings such as “explicitation seems to be a strong tendency in the
English – Hungarian translation direction” (Pápai 2004: 159), which are difficult for a translation
trainee to place into a greater context, to a more immediately applicable collection of
statements which unify not only large-scale translational tendencies (universals) but also
language pair specific obstacles and the ways they are generally resolved with respect to a
given genre or context based on empirical evidence gathered from studies published on the
issue so far.

6. Conclusion

It has been my goal to investigate the nature of translation universals at a theoretical
level, to unveil some of the underlying problems in the current methods of their study in relation
to the study of universals in other fields, such as philosophy and linguistics, and to propose steps
to be taken in order not only to resolve these issues within translation but also to focus the study
of translation universals on the next logical goal – namely the systemization of empirical
studies of translation universals and of the phenomena accompanying the process of translation,
so as to allow the inclusion of the results of empirical studies of translation into translator training.
Evidence has been provided to support the notion that translation universals are in fact
not specific to translation, but take place on a regular basis in communication, as they are
inherent to the process of expressing our inner thoughts – hence I have coined the term
‘communication universals’ to describe these features. However, due to technical (and, I
believe, cognitive) limitations discussed in Chapter 4.2, we are unable to study these
phenomena on a more general level in neuro- and psycholinguistics, and must therefore place
an even greater emphasis on the field of translation studies to properly investigate
communication universals in practice, as translation studies remains the only suitable field to
do so. This means resolving prevalent and long-standing issues in the theory,
methodology and applicability of the results of empirical studies of translation and universality.
Translation studies should cease largely disregarding the area of intralingual translation,
and should incorporate this field of study into empirical research to allow for the description
of communication universals in a wider context and perhaps allow us to distinguish
tendencies which are truly specific only to translation (and thus could be called ‘translation
universals’) from those which are common to human communication. The issue of the term
‘universal’ should be resolved by incorporating probabilistic and conditional statements into
the very definition of communication universals similarly to the proposals made by Toury
(2001), which would help improve the validity and verifiability of hypotheses and subsequent
empirical studies. In addition, empirical studies of universals should strive for greater
transparency in terms of the contents of the corpora analysed and the extent to which context
was taken into account when working with the text to allow for easier access to and
verification of the data obtained. Finally, translation studies should attempt to break away
from the purely descriptive, non-evaluative approach to the study of universals and instead
focus its efforts on building a systemized and complementary range of empirical studies in as
wide a variety of language pairs as possible to create a coherent database of information on
tendencies and strategies prevalent in translation within these pairs, which would allow for the
application of the study of universals to translator training.

Works Cited

Akhter, Sara A. and Hurlburt, Russell T. (2008) ‘Unsymbolized Thinking’. In: Bridgeman,
Bruce (ed.) Consciousness and Cognition 17. Amsterdam: Elsevier. 1364–1374.

Armstrong, D. M. (1989) Universals. An Opinionated Introduction. London: Westview Press.

Baker, Mona (1993) ‘Corpus Linguistics and Translation Studies – Implications and
Applications’. In: Baker, Mona, Francis, Gill and Tognini-Bonelli, Elena (eds) Text and
Technology. In Honour of John Sinclair. Amsterdam: John Benjamins. 233–250.

Baker, Mona (1996) ‘Corpus-based Translation Studies: The Challenges That Lie Ahead’. In:
Somers, Harold (ed.) Terminology, LSP and Translation. Studies in Language Engineering in
Honour of Juan C. Sager. Amsterdam: John Benjamins. 175–186.

Becher, Viktor (2010a) ‘Abandoning the Notion of “Translation-inherent” Explicitation:
Against a Dogma of Translation Studies’. In: Klaudy, Kinga (ed.) Across Languages and
Cultures 11 (1). Budapest: Akadémiai Kiadó. 1–28.

Becher, Viktor (2010b) ‘Towards a More Rigorous Treatment of the Explicitation Hypothesis
in Translation Studies’. In: Schubert, Klaus and Van Vaerenbergh, Leona (eds) trans-kom 3
(1). Berlin: Frank & Timme. 1–25.

Bernardini, Silvia and Zanettin, Federico (2004) ‘When is Universal not a Universal? Some
Limits of Current Corpus-based Methodologies for the Investigation of Translation
Universals’. In: Kujamäki, Pekka and Mauranen, Anna (eds) Translation Universals: Do
They Exist? Amsterdam: John Benjamins. 51–62.

Blum-Kulka, Shoshana (1986) ‘Shifts of Cohesion and Coherence in Translation’. In: House,
Juliane and Blum-Kulka, Shoshana (eds) Interlingual and Intercultural Communication:
Discourse and Cognition in Translation and Second Language Acquisition Studies. Tübingen:
Gunter Narr. 17–35.

Bookheimer, Susan (2002) ‘Functional MRI of Language: New Approaches to Understanding
Cortical Organization of Semantic Processing’. In: Cowan, Maxwell W. (ed.) Annual Review
of Neuroscience 25. Palo Alto: Annual Reviews. 151–188.

Börsch, Sabine (1986) ‘Introspective Methods in Research on Interlingual and Intercultural
Communication’. In: House, Juliane and Blum-Kulka, Shoshana (eds) Interlingual and
Intercultural Communication: Discourse and Cognition in Translation and Second Language
Acquisition Studies. Tübingen: Gunter Narr. 195–209.

Chesterman, Andrew (2001) ‘Hypotheses about Translation Universals’. In: Gile, Daniel,
Hansen, Gyde and Malmkjær, Kirsten (eds) Claims, Changes and Challenges in Translation
Studies. Amsterdam: John Benjamins. 1–14.

Christoffels, Ingrid K., Ganushchak, Lesya Y. and Schiller, Niels O. (2011) ‘The Use of
Electroencephalography in Language Production Research: A Review’. In: Costa, Albert (ed.)
Frontiers in Psychology 2: 208.
(http://journal.frontiersin.org/article/10.3389/fpsyg.2011.00208/full) 19 April 2015

Costa, Albert, Strijkers, Kristof and Thierry, Guillaume (2009) ‘Tracking Lexical Access in
Speech Production: Electrophysiological Correlates of Word Frequency and Cognate Effects’.
In: Cerebral Cortex (2009). Online 13 August 2009. <http://cercor.oxfordjournals.org/>.

Dell, Gary S. (1986) ‘A Spreading-activation Theory of Retrieval in Sentence Production’. In:
Anderson, John (ed.) Psychological Review 93 (3). Washington: American Psychological
Association. 283–321.

Deng, Siyi, D’Zmura, Michael, Lappas, Tom, Srinivashan, Ramesh and Thorpe, Samuel
(2009) ‘Toward EEG Sensing of Imagined Speech’. In: Jacko, Julie A. (ed.) Human-
Computer Interaction. New Trends. Berlin: Springer. 40–48.

Even-Zohar, Itamar (1990) Polysystem Studies. Durham: Duke University Press.

Frawley, William (1984) ‘Prolegomenon to a Theory of Translation’. In: Frawley, William
(ed.) Translation: Literary, Linguistic and Philosophical Perspectives. London: Associated
University Press. 159–175.

Garrett, M. F. (1975) ‘The Analysis of Sentence Production’. In: Bower, Gordon H. (ed.)
Psychology of Learning and Motivation 9. Amsterdam: Elsevier. 133–177.

Greenberg, Joseph H. (1966a) ‘Some Universals of Grammar with Particular Reference to the
Order of Meaningful Elements’. In: Greenberg, Joseph H. (ed.) Universals of Language.
Cambridge, Mass.: MIT Press. 73–113.

Greenberg, Joseph H. (ed.) (1966b) Universals of Language. Cambridge, Mass.: MIT Press.

Hockett, Charles F. (1966) ‘The Problem of Universals in Language’. In: Greenberg, Joseph
H. (ed.) Universals of Language. Cambridge, Mass.: MIT Press. 1–29.

House, Juliane (2008) ‘Beyond Intervention: Universals in Translation?’. In: Schubert, Klaus
and Van Vaerenbergh, Leona (eds) trans-kom 1 (1). Berlin: Frank & Timme. 6–19.

Jakobson, Roman (1959) ‘On Linguistic Aspects of Translation’. In: Venuti, Lawrence (ed.)
The Translation Studies Reader. London: Routledge. 113–118.

Jakobson, Roman (1966) ‘Implications of Language Universals for Linguistics’. In:
Greenberg, Joseph H. (ed.) Universals of Language. Cambridge, Mass.: MIT Press. 263–278.

Jantunen, Jarmo Harri (2004) ‘Untypical Patterns in Translations: Issues on Corpus
Methodology and Synonymity’. In: Kujamäki, Pekka and Mauranen, Anna (eds) Translation
Universals: Do They Exist? Amsterdam: John Benjamins. 101–126.

Kajzer-Wietrzny, Marta, Stachowiak, Katarzyna and Whyatt, Bohuslawa (2014) Exploring
Decision-making Processes in Translation and Paraphrase: A Report on the ParaTrans
Project. (http://bridge.cbs.dk/events/presentations/panel_4/CRITT_WCRE_Whyatt.pdf) 19
April 2015

Kujamäki, Pekka and Mauranen, Anna (eds) (2004) Translation Universals: Do They Exist?
Amsterdam: John Benjamins.

Langacker, Ronald W. (1987) Foundations of Cognitive Grammar: Theoretical Prerequisites,
Volume 1. Stanford: Stanford University Press.

Laviosa-Braithwaite, Sara (1996) The English Comparable Corpus (ECC): A Resource and a
Methodology for the Empirical Study of Translation. Unpublished PhD thesis.

Laviosa-Braithwaite, Sara (1998) ‘Universals of Translation’. In: Baker, Mona and
Malmkjær, Kirsten (eds) Routledge Encyclopedia of Translation Studies. London: Routledge.
288–291.

Levelt, Willem J. M. (1992) ‘Accessing Words in Speech Production: Stages, Processes and
Representations’. In: Sloman, Steven (ed.) Cognition 42 (1). Amsterdam: Elsevier. 1–22.

Lind, Sarah (2007) ‘Translation Universals (or Laws, or Tendencies, or…?)’. In: TIC Talk 63
2007. (http://www.ubs-translations.org/tt/past_issues/tic_talk_63_2007/). 3 April 2015.

Menn, Lise (2012) Neurolinguistics.
(http://www.linguisticsociety.org/resource/neurolinguistics). 19 April 2015

Øverås, Linn (1998) ‘In Search of the Third Code: An Investigation of Norms in Literary
Translation’. In: Clas, André (ed.) Meta, 43(4). Montreal: Les Presses de l’Université de
Montréal. 571–588.

Pápai, Vilma (2004) ‘Explicitation: A Universal of Translated Text?’. In: Kujamäki, Pekka
and Mauranen, Anna (eds) Translation Universals: Do They Exist? Amsterdam: John
Benjamins. 143–164.

Parnin, Chris (2011) ‘Subvocalization – Toward Hearing the Inner Thoughts of Developers’.
In: Guerrero, Juan E. (ed.) 2011 IEEE 19th International Conference on Program
Comprehension (ICPC) Proceedings. Conference Publishing Services. 197–200.

Pym, Anthony (2007) ‘On Shlesinger’s Proposed Equalizing Universal for Interpreting’. In:
Jakobsen, Arnt Lykke, Mees, Inger M. and Pöchhacker, Franz (eds) Interpreting Studies and
Beyond: A Tribute to Miriam Shlesinger. Copenhagen: Samfundslitteratur Press. 175–190.

Shlesinger, Miriam (1989) Simultaneous Interpretation as a Factor in Effecting Shifts in the
Position of Texts on the Oral-Literate Continuum. MA thesis. Tel Aviv University.

Small, Steven L. and Tremblay, Pascale (2011) ‘Motor Response Selection in Overt Sentence
Production: A Functional MRI Study’. In: Costa, Albert (ed.) Frontiers in Psychology 2: 253.
doi: 10.3389/fpsyg.2011.00253 (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3183829/) 19
April 2015

Steiner, George (1975) After Babel: Aspects of Language and Translation. London: Oxford
University Press.

Taylor, John R. (2012) The Mental Corpus. How Language is Represented in the Mind.
Oxford: Oxford University Press.

Toury, Gideon (2001) ‘Probabilistic Explanations in Translation Studies: Universals – Or a
Challenge to the Very Concept?’. In: Gile, Daniel, Hansen, Gyde and Malmkjær, Kirsten
(eds) Claims, Changes and Challenges in Translation Studies. Amsterdam: John Benjamins.
15–26.

Tymoczko, Maria (1998) ‘Computerized Corpora and the Future of Translation Studies’. In:
Clas, André (ed.) Meta, 43(4). Montreal: Les Presses de l’Université de Montréal. 652–659.

Vanderauwera, Ria (1985) Dutch Novels Translated into English: The Transformation of a
Minority Literature. Amsterdam: Rodopi.

Zethsen, Karen Korning (2009) ‘Intralingual Translation: An Attempt at Description’. In:
Bastin, Georges (ed.) Meta: Translators’ Journal 54(4). Montreal: Les Presses de l’Université
de Montréal. 795–812.

Résumé (English)

The aim of the thesis is to discuss in depth the issue of translation universals and their
research in translation studies, to address some of the underlying problems with the term
‘universal’ as well as the extent to which these phenomena are specific to translation only, and to
propose a new goal towards which empirical studies of translation should strive.
First, the concept of translation universals is introduced in Chapter 2 along with a brief
history of approaches and methodologies used in its study. Then, a number of examples taken
from some of the seminal works of descriptive translation studies are provided in Chapter 2.1.
Afterwards, in Chapter 3, the origin of translation universals is discussed in detail, with
evidence given of the occurrence of identical phenomena outside the realm of translation.
Based on this evidence, the idea of communication universals is introduced by the author and
the need to study these features on a more general level is expressed. The author proposes that
the general foundations for the study of communication universals must be established in
neuro- and psycholinguistics which offer themselves as the most suitable fields for such an
endeavour.
To this end, Chapter 4 and its subchapters deal with the issue of establishing a more
suitable definition of universals based on precedent set by translation studies, philosophy and
linguistics, as well as the methodological aspects of how communication universals can be
studied. An overview of the methods of studying communication in neurolinguistics is
provided and the pros and cons of each method are introduced. The chapter concludes with
the finding that current technology and the nature of communication and human thought itself
do not lend themselves to empirical study of communication universals at this general level,
and thus reinforces the need for the study of communication universals within translation as
the only suitable field.
Chapter 5 then discusses the practical implications of this discovery, and stresses the
issues which need to be resolved in order to set a standard and allow for verifiability and
comparability in empirical study of universals in translation. A deviation from the descriptive,
non-evaluative approach to translation studies is proposed, with the new focus being translator
training instead.

Resumé (Česky)

Hlavním cílem této práce je zaměřit se na problematiku univerzálií překladu a jejich
výzkumu, upozornit na problémy skrývající se v termínu „univerzálie“, a zhodnotit, zda jsou
tyto jevy skutečně vlastní pouze překladu. Na základě navržených řešení těchto problémů a
odpovědi na tuto otázku je pak navržen nový cíl pro praktické studium překladu.
V práci je nejprve představen pojem překladových univerzálií spolu s krátkým
přehledem metodologie a historie studia tohoto jevu. Poté je uvedeno několik příkladů
univerzálií, převzatých z některých z nejvýznamnějších studií v oboru translatologie.
Kapitola 3 se pak zabývá pravým původem těchto univerzálií a poskytuje důkazy o
výskytu těchto jevů mimo oblast přeložených textů. Na základě této skutečnosti je pak
definován nový pojem komunikační univerzálie, který je nutno studovat na obecnější úrovni
než jakou poskytuje translatologie. Autor práce navrhuje, že tyto obecné základy studie
komunikačních univerzálií je nejlépe možné poodhalit v oboru psycho- a neurolingvistiky.
S tímto cílem se pak kapitola 4 zaměřuje na stanovení nové definice univerzálií po
vzoru filosofie a lingvistiky, a rozebírá detailně možné metody studia komunikačních
univerzálií na základě poznatků z empirických studií z oblasti neurolingvistiky. Tyto metody
jsou zhodnoceny také s ohledem na jejich přednosti a nedostatky. V závěru kapitoly jsou pak
představena omezení, která studium univerzálií na této obecnější úrovni znemožňují. Důraz je
tak kladen především na potřebu studovat tyto jevy v translatologii, jakožto jediné k tomuto
účelu vhodné disciplíně.
Kapitola 5 pak rozebírá praktický dopad těchto zjištění a zdůrazňuje problémy, které je
nutné vyřešit, aby bylo možné vytvořit určitý základní model a vytvořit podmínky pro
ověřitelnost a srovnatelnost výsledků pro empirické studie univerzálií v překladu. Práce je
uzavřena návrhem odklonit se od čistě popisného přístupu ke studiu univerzálií a zaměřit se
na výuku studentů překladu.
