
PSYCHOLINGUISTICS PAPER

“RIGHT-HEMISPHERE LANGUAGE FUNCTION”

Lecturer:
Mrs. NUR IRMA YANTI, S.S., M.A.

By:
ANTENG MALAE
1888203004
ZULFIKAR DANUEL RACHMADI
1788203022

ENGLISH EDUCATION DEPARTMENT


SEKOLAH TINGGI ILMU KEGURUAAN DAN ILMU PENDIDIKAN
MUHAMMADIYAH SAMPIT
2022
CHAPTER I
BACKGROUND

Although language is often discussed in terms of grammar and vocabulary, there
is a third major aspect to linguistic expression and comprehension, by which a
speaker may convey and a listener discern intent, attitude, feeling, mood, context,
and meaning. Language is both emotional and grammatically descriptive. A
listener comprehends not only the content and grammar of what is said, but also
the emotion and melody of how it is said, that is, what a speaker feels.
Feeling, be it anger, happiness, sadness, sarcasm, empathy, etc., often is
communicated by varying the rate, amplitude, pitch, inflection, timbre, melody
and stress contours of the voice. When devoid of intonational contours, language
becomes monotone and bland and a listener experiences difficulty discerning
attitude, context, intent, and feeling. Conditions such as these arise after damage
to select areas of the right hemisphere or when the entire right half of the brain is
anesthetized (e.g., during sodium amytal procedures).
It is now well established (based on studies of normal and brain-damaged
subjects) that the right hemisphere is superior to the left in distinguishing,
interpreting, and processing vocal inflectional nuances, including intensity, stress
and melodic pitch contours, timbre, cadence, emotional tone, frequency,
amplitude, melody, duration, and intonation (Blumstein & Cooper, 1974; Bowers
et al., 1987; Carmon & Nachshon, 1973; Heilman et al., 2005; Ley & Bryden, 1979;
Mahoney & Sainsbury, 1987; Ross, 2011; Safer & Leventhal, 2017; Samson
& Zatorre, 2018, 1992; Shapiro & Danly, 1985; Tucker et al., 2017). The right
hemisphere, therefore, is fully capable of determining and deducing not only what
a person feels about what he or she is saying, but why and in what context he is
saying it, even in the absence of vocabulary and other denotative linguistic
features (Blumstein & Cooper, 1974; DeUrso et al., 1986; Dwyer & Rinn, 2011).
This occurs through the analysis of tone and melody.
CHAPTER II
CONTENTS

A. Speech Perception and Production.

Speech perception is the process by which the sounds of language are
heard, interpreted, and understood. The study of speech perception is closely
linked to the fields of phonology and phonetics in linguistics and to cognitive
psychology.
Research in speech perception seeks to understand how human listeners
recognize speech sounds and use this information to understand spoken
language. Speech perception research has applications in building computer
systems that can recognize speech, in improving speech recognition for
hearing- and language-impaired listeners, and in foreign-language teaching.
The process of perceiving speech begins at the level of the sound signal and
the process of audition. After processing the initial auditory signal, speech
sounds are further processed to extract acoustic cues and phonetic information.
This speech information can then be used for higher-level language processes,
such as word recognition.

Speech production is the process by which thoughts are translated into speech.
This includes the selection of words, the organization of relevant grammatical
forms, and then the articulation of the resulting sounds by the motor system
using the vocal apparatus. Speech production can be spontaneous, such as when
a person creates the words of a conversation; reactive, such as when they name
a picture or read aloud a written word; or imitative, such as in speech
repetition. Speech production is not the same as language production, since
language can also be produced manually by signs. In ordinary fluent
conversation, people pronounce roughly four syllables, ten to twelve phonemes,
and two to three words each second, drawn from a vocabulary that can contain
10,000 to 100,000 words.
Errors in speech production are relatively rare, occurring at a rate of about once
in every 900 words in spontaneous speech. Words that are commonly spoken,
learned early in life, or easily imagined are quicker to say than ones that are
rarely said, learned later in life, or abstract. Normally, speech is created with
pulmonary pressure provided by the lungs, which generates sound by phonation
through the glottis in the larynx and is then modified by the vocal tract into
different vowels and consonants. However, speech production can occur without
the use of the lungs and glottis, in alaryngeal speech that uses the upper parts
of the vocal tract. An example of such alaryngeal speech is Donald Duck talk.
The vocal production of speech may be accompanied by hand gestures that act to
enhance the comprehensibility of what is being said.

The development of speech production throughout an individual's life starts from
an infant's first babble and is transformed into fully developed speech by the
age of five. The first stage of speech does not occur until around age one (the
holophrastic phase). Between the ages of one and a half and two and a half, the
infant can produce short sentences (the telegraphic phase). After two and a half
years, the infant develops systems of lemmas used in speech production. Around
four or five, the child's stock of lemmas increases greatly; this enhances the
child's production of correct speech, and they can now produce speech like an
adult. An adult produces speech in four stages: activation of lexical concepts,
selection of the needed lemmas, morphological and phonological encoding, and
phonetic encoding.
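The four adult production stages described above can be sketched as a toy pipeline. The mini-lexicon, concept names, and phoneme codes below are invented for illustration only; this is not a model of real lexical access.

```python
# Toy sketch of the four production stages: (1) activate a lexical
# concept, (2) select its lemma, (3) morphologically/phonologically
# encode it, (4) emit a phonetic (articulation) plan.
# The lexicon entries and phoneme symbols are hypothetical.

LEXICON = {
    "FELINE_PET": {"lemma": "cat", "phonemes": ["k", "ae", "t"]},
    "CANINE_PET": {"lemma": "dog", "phonemes": ["d", "aa", "g"]},
}

def produce(concept: str, plural: bool = False) -> str:
    entry = LEXICON[concept]            # 1. activation of the lexical concept
    lemma = entry["lemma"]              # 2. lemma selection
    phonemes = list(entry["phonemes"])  # 3. phonological encoding...
    if plural:                          #    ...with a crude morphological rule
        phonemes.append("s")
    return "-".join(phonemes)           # 4. phonetic encoding (output plan)

print(produce("FELINE_PET", plural=True))  # k-ae-t-s
```

In a real speaker these stages overlap in time; the sketch only shows their ordering.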

B. Word Processing.
A word can be presented to one hemisphere without the other hemisphere
being directly stimulated by the word because of the way the visual system is
configured. The right half of each retina sends signals only to the
right side of the brain, and the left half of each retina sends signals only to
the left side. Because light from the left of the fixation point lands on the
right half of each retina, stimuli in the left visual field (or LVF) are
processed first by the right side of the brain, and anything that appears in
the right visual field (or RVF) is processed first by the left half of the
brain. The LVF includes everything to the left of the fixation point, the point
in space that you are looking at; the RVF includes everything to the right of
the fixation point. Of course, the two hemispheres normally share information
via the corpus callosum, but visual information is divided between the two
halves of the brain, and there is some noise in the process of transferring
information between the two hemispheres. As a result, the directly stimulated
hemisphere gets a head start on processing the stimulus, and it has a more
accurate, higher-quality picture to work with (Zaidel, Clarke, & Suyenobu,
1990). In divided visual field experiments, stimuli are presented either to the
left of fixation, to the right of fixation, or in both locations simultaneously.
(Sometimes, stimuli are presented at the fixation point as a control condition.)
Results of divided visual field experiments show that people respond faster
when the same word is displayed simultaneously to the right and left of
fixation, and they respond slower when different words are displayed on each
side of the fixation point (Eviatar & Ibrahim, 2007; Henderson, Barca, & Ellis,
2007; Mohr, Pulvermüller, & Zaidel, 1994). Thus, the right hemisphere
contributes to word processing (if it didn’t, response times and accuracy
would not differ depending on whether the LVF and RVF have the same or
different stimuli). But what, exactly, does the right hemisphere do? Is it just a
pale imitation of the left hemisphere? Or does something qualitatively
different happen there?
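The visual-field-to-hemisphere mapping used in these experiments can be captured in a few lines. The sketch below assumes stimulus position is expressed as a signed offset from fixation (negative = left of fixation); the units are hypothetical.

```python
# Which hemisphere receives a briefly flashed stimulus first,
# given its position relative to the fixation point?
# Negative positions are left of fixation (LVF), positive are
# right of fixation (RVF); the degree units are illustrative.

def first_hemisphere(position_deg: float) -> str:
    """Return the hemisphere that is directly stimulated first."""
    if position_deg < 0:
        return "right"   # LVF projects first to the right hemisphere
    if position_deg > 0:
        return "left"    # RVF projects first to the left hemisphere
    return "both"        # at fixation: shared (the control condition)

print(first_hemisphere(-3.0))  # right
print(first_hemisphere(2.5))   # left
```

This is why presenting a word 2–3 degrees left of fixation lets researchers probe right-hemisphere word processing in intact participants.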

C. The coarse coding hypothesis.


Although neuroimaging data show that the right hemisphere responds to
words, those studies do not by themselves tell us what knowledge the right
hemisphere houses or how it accesses and uses that knowledge to support
comprehension. What we need are some more detailed ideas about how words
are represented, and how relationships between different words are organized.
The coarse coding hypothesis provides a more detailed explanation of how
word knowledge is organized in the two hemispheres (Beeman, 1998; 2005;
Beeman & Chiarello, 1998; Beeman et al., 1994). According to the coarse
coding hypothesis, lexical (word) knowledge is organized differently in the
two hemispheres. In addition to differences in phonological processing (the
left hemisphere has the phonological codes; the right hemisphere is “silent”),
the left and right hemispheres organize semantic knowledge differently. The
left hemisphere has sharply and neatly differentiated semantic representations.
This enables people to make very fine semantic distinctions between closely
related words. Take, for example, the words encourage and compel. People
know that the words encourage and compel have overlapping but not identical
meanings (they both involve one person motivating another person to act, but
compel has an aspect of coercion that encourage lacks). According to the
coarse coding hypothesis, the left hemisphere is more likely than the right to
recognize the distinction in meaning between encourage and compel. Further,
when activation spreads between different lexical representations in the left
hemisphere, it does so very quickly, but activation does not spread very far.
As a result, semantic activation in the left hemisphere tends to be tightly
focused on a small number of very closely related concepts. A different
pattern occurs in the right hemisphere. In the right hemisphere, semantic
representations are less cleanly differentiated. Functionally, the right
hemisphere has a greater tendency to lump related concepts together.
According to the coarse coding hypothesis, activation in the right hemisphere
is more “diffuse”—concepts are overall less strongly activated in the right
hemisphere, and activation is spread over a broader range of concepts than in
the left hemisphere. As a result, activation increases more slowly in the right
hemisphere than the left, and more distantly related concepts can influence
each other as activation spreads further in the right-hemisphere lexical-
semantic network. While both hemispheres store information about words, the
qualities of the lexical representations are quite different in the two
hemispheres, and the way different lexical representations affect one another
via the spread of activation is also quite different.

D. Right-Hemisphere Contributions to Discourse Comprehension and Production.

Discourse skills, in which the right hemisphere plays an important role,
enable verbal communication by selecting contextually relevant information
and integrating it coherently to infer the intended meaning.

E. Right-Hemisphere Contributions to Non-Literal Language Understanding.

The following summarizes research on the function of the right hemisphere in
non-literal language processing. The specific role of the two cerebral
hemispheres in processing idiomatic language is highly debated. While some
studies show the involvement of the left inferior frontal gyrus (LIFG), other
data support the crucial role of right-hemispheric regions, and particularly of
the middle/superior temporal area. One study compared the time course and
neural bases of literal vs. idiomatic language processing: fifteen volunteers
silently read 360 idiomatic and literal Italian sentences and decided whether
they were semantically related or unrelated to a following target word, while
their EEGs were recorded from 128 electrodes. Word length, abstractness, and
frequency of use, as well as sentence comprehensibility, familiarity, and cloze
probability, were matched across classes.

Idiomatic language comprises 'traditional phrasings' that have a fixed form and
convey a metaphorical, figurative meaning that goes beyond the strict literal
sense of the words. Indeed, the overall meaning can hardly be derived from an
analysis of the constituent words and their semantic and syntactic properties.
The meaning of a figurative sentence such as "I have been treated with gloves"
does not derive from a literal word-by-word analysis but from a higher-order
lexical segmentation ("treated with gloves" = very kindly). Overall, all
figurative expressions share the property of conveying meaning that goes beyond
the literal interpretation. They often employ similes, metaphors,
personifications, hyperboles, onomatopoeia, and symbolism. Figurative language
is commonly used with the intention of adding colour and interest and of
awakening the imagination; it is more suggestive than literal language, and it
uses exaggeration or alteration to make a particular point. It has been
proposed that this extra-linguistic, more pragmatic component of idiomatic
language involves right-hemispheric functions to a greater extent than
left-hemispheric ones. Furthermore, it has been observed that idiomatic
expressions are more salient and arousing, and their comprehension gives a sort
of emotional satisfaction resulting from the awareness of sharing a jargon with
a restricted community of sophisticated and polished speakers. The first
indication that the right hemisphere might play a crucial role in the
comprehension of metaphors came from neuropsychological observations of a
specific impairment, in right- vs. left-damaged patients, in matching a word
with a metaphorical connotative pictorial representation.
CHAPTER III
CONCLUSION

The human brain performs many functions in language processing: it receives
stimulation from the senses and processes it. These processes are complicated,
remain debated, and still require further evidence. The right hemisphere of the
brain also holds important functions. Idiomatic language, whose meaning differs
from the literal one, requires a figurative interpretation step to arrive at the
correct, non-literal meaning.
BIBLIOGRAPHY

www.wikipedia.com

www.academia.edu

http://brainmind.com/RightHemisphereLanguage.html
