
Neuropsychology
1998, Vol. 12, No. 3, 446-458

Copyright 1998 by the American Psychological Association, Inc. 0894-4105/98/$3.00

Right Hemisphere Emotional Perception: Evidence Across Multiple Channels


Joan C. Borod
Queens College and Mount Sinai School of Medicine, The City University of New York

Barbara A. Cicero
Jamaica Hospital

Loraine K. Obler
The City University of New York Graduate School

Hulya M. Erhan
Beth Israel Medical Center

Joan Welkowitz
New York University

Cornelia Santschi
Hospital for Joint Diseases, New York University Medical Center

Ilana S. Grunwald
Rusk Institute of Rehabilitation Medicine, New York University Medical Center

John R. Whalen
Bellevue Hospital Center

Reto M. Agosti
Braintree Rehabilitation Hospital, Boston University School of Medicine

Emotional perception was examined in stroke patients across 3 communication channels: facial, prosodic, and lexical. Hemispheric specialization for emotion was tested via right-hemisphere (RH) and valence hypotheses, and relationships among channels were determined. Participants were 11 right-brain-damaged (RBD), 10 left-brain-damaged (LBD), and 15 demographically matched normal control (NC) adults. Experimental measures, with analogous psychometric properties, were identification and discrimination tasks, including a range of positive and negative emotions. Nonemotional control tasks were used for each channel. For identification, RBDs were significantly impaired relative to LBDs and NCs across channels and valences, supporting the RH hypothesis. No group differences emerged for discrimination. Findings were not influenced by demographic, clinical, or control variables. Correlations among the channels were more prominent for normal than for brain-damaged groups.

There have been many speculations in the literature regarding neuroanatomical mechanisms underlying emotional processing. One of the earliest discussions about emotion and the brain was in the 1800's by Hughlings Jackson (1874, 1880), who observed that emotional words (i.e., curses) could be selectively spared in aphasics with

left-hemisphere lesions. One of the first investigators (Mills, 1912a, 1912b) to suggest a direct link between emotional processing and the right hemisphere observed that the presence of a unilateral right-sided lesion was associated with a decrease in or paralysis of emotional expression, suggesting that emotional processing was localized within the right hemisphere. Later, Papez (1937) described an

Joan C. Borod, Department of Psychology, Queens College, and Department of Neurology, Mount Sinai School of Medicine, The City University of New York; Barbara A. Cicero, Department of Physical Medicine and Rehabilitation, Jamaica Hospital, Jamaica, New York; Loraine K. Obler, Department of Speech and Hearing Sciences, The City University of New York Graduate School; Joan Welkowitz, Department of Psychology, New York University; Hulya M. Erhan, Department of Psychiatry, Beth Israel Medical Center, New York; Cornelia Santschi, Department of Neurology, Hospital for Joint Diseases, New York University Medical Center; Ilana S. Grunwald, Department of Psychology, Rusk Institute of Rehabilitation Medicine, New York University Medical Center; Reto M. Agosti, Department of Neurology, Braintree Rehabilitation Hospital, Boston University School of Medicine; John R. Whalen, Department of Nursing, Bellevue Hospital Center, New York.

Portions of this article were presented at the 102nd Annual Convention of the American Psychological Association in Los Angeles, August 1994, and at meetings of the American Psychological Society in New York, June 1995, and the International Neuropsychological Society in Orlando, Florida, February 1997. Parts of this article were based on a doctoral thesis conducted by Barbara A. Cicero at Queens College and the Graduate School of The City University of New York. This research was supported by NIMH Grant MH42172. We are grateful to Martin Sliwinski for his statistical input and to the anonymous reviewers for their helpful suggestions. Correspondence concerning this article should be addressed to Joan C. Borod, Department of Psychology, NSB-E318, Queens College, 65-30 Kissena Boulevard, Flushing, New York 11367.


anatomical pathway within the subcortical limbic system that was intimately involved in emotion. Over the past 25 years, there has been increasing interest in the role of the neocortex in emotional processing. As a result of this research, two major hypotheses have been proposed to explain the relationship between the right cerebral hemisphere and emotion. These theories address the expression and experience of emotion, as well as emotional perception, which is the focus of the current study. (For a detailed description of these theories and underlying mechanisms, see Borod, 1992, and Liotti & Tucker, 1995.) The right-hemisphere hypothesis postulates that the right hemisphere is dominant for the expression and perception of emotion, regardless of valence (i.e., pleasantness level). The valence hypothesis states that the right hemisphere is dominant for negative/unpleasant emotions and the left hemisphere is dominant for positive/pleasant emotions. There is substantial literature pertaining to hemispheric specialization for emotional perception in unilaterally brain-damaged participants, focusing, for the most part, on three channels of emotional communication: facial, prosodic/intonational, and lexical.1 (For reviews of this literature, see Borod, 1992, 1996; Borod, Bloom, & Santschi-Haywood, 1998; Gainotti, Caltagirone, & Zoccolotti, 1993; Heilman, Bowers, & Valenstein, 1993; Kolb & Taylor, 1990; Ross, 1997.) In these studies, brain-damaged participants typically are required to identify the emotion expressed in a stimulus or to discriminate between two emotionally toned stimuli. For the prosodic channel, in most studies, right-brain-damaged participants (RBDs), relative to left-brain-damaged participants (LBDs) and normal controls (NCs), have been shown to be impaired in making judgments about the emotional tone of spoken sentences.
For the facial channel, deficits in perceiving facial emotions are associated more frequently with pathology of the right rather than the left hemisphere. For the lexical channel, RBDs have shown impairment compared to LBDs and NCs in processing, via visual reading, emotional versus nonemotional words, sentences, and discourse. For the most part, these findings are consistent across individual emotions (e.g., disgust) and specific dimensions (e.g., unpleasantness). Although studies involving normal participants have typically examined multiple communication channels in parallel (e.g., Ekman, Friesen, O'Sullivan, & Scherer, 1980; LoCastro, 1972; Mehrabian & Wiener, 1967), the vast majority of studies with unilateral brain-damaged participants have focused on a single channel of emotional communication. The unique contribution of the current study was to simultaneously examine facial, prosodic, and lexical emotional perception in RBD and LBD stroke patients and matched normal controls via tasks with analogous psychometric properties (e.g., structure, item difficulty, and administration procedures; Borod, Welkowitz, & Obler, 1992). The inclusion of both positive and negative emotional stimuli permitted us to test the right-hemisphere and valence hypotheses across channels. Although the majority of perception studies with brain-damaged participants support the right-hemisphere hypothesis, there have been some instances of selective deficits as a function of valence for the facial (Borod, Koff, Lorch, & Nicholas, 1986; Borod, Martin, Alpert, Brozgold, & Welkowitz, 1993; Mandal, Tandon, & Asthana, 1991) and lexical (Cicone, Wapner, & Gardner, 1980) channels. Based on the current literature, it was expected that the right-hemisphere hypothesis would be operative for all three channels. Given the design of this study, we were also in a position to examine the relationship among communication channels. Communication of emotion is a multidetermined behavior, using a number of different channels (e.g., facial, vocal, verbal, gestural, or postural). Clinically, disorders of emotional processing may be observed in any or all communication channels. Studies of normal participants typically have suggested an association between facial and vocal channels (e.g., LoCastro, 1972; Mehrabian & Wiener, 1967). In a similar vein, studies involving groups of brain-damaged participants have reported associations between facial and vocal channels for both emotional expression (Borod, Koff, Lorch, & Nicholas, 1985; Borod et al., 1990; Ross & Mesulam, 1979) and perception (Borod et al., 1990; Cimino & Bowers, 1988; cf. Benowitz et al., 1983). In fact, in the normal human literature, an underlying emotional processor has been posited for facial and prosodic communication (Malatesta, Davis, & Culver, 1984). In the present study, by examining associations among facial, prosodic, and lexical channels, we were able to address the issue of whether there is a central mechanism or separate neural systems for the perception of emotion (Borod, 1993; Bowers, Bauer, & Heilman, 1993; Gainotti, 1984; Semenza, Pasini, Zettin, Tonin, & Portolan, 1986). Another purpose of this study was to examine methodological issues in the neuropsychological study of emotional perception. One issue pertains to the nature of processing required on perception tasks.
Although one assumes that such tasks directly measure the ability to extract emotional connotation from facial, prosodic, or lexical cues (Walker, McGuire, & Bettes, 1984), it is also possible that these tasks index nonemotional aspects of perceptual processing. A special feature of the current study was its inclusion of nonemotional control tasks (e.g., visuospatial stimuli, intonation contours, and neutral printed words) for each of the three communication channels. A second methodological issue is the paradigm in which emotional stimuli are displayed. Whereas clinical studies typically use identification paradigms, this study used both identification and discrimination tasks. The identification paradigm assesses an individual's ability to recognize and to categorize an emotion (Bowers, Bauer, Coslett, & Heilman, 1985), whereas the discrimination paradigm assesses an individual's basic ability to perceive stimulus characteristics (Borod et al., 1993).

1 By lexical, we refer in this article to the words used to convey emotional meaning in written expression. In arriving at the term lexical, we also gave serious consideration to the term verbal. Though both terms, by definition, pertain to words, verbal more commonly refers to spoken or oral expression, whereas lexical typically includes written expression, the method used in this study.

Table 1
Demographic Variables for Each Participant Group

                                     RBDs (n = 11)     LBDs (n = 10)     NCs (n = 15)
Variable     Measure                 M       SD        M       SD        M       SD       F(2, 33)   p
Age          Years                   67.1    12.3      63.2    11.1      64.8    9.4      0.35       .707
Education    Years                   14.0    3.1       15.6    2.7       15.3    2.4      1.06       .358
Occupation   9-point scale^a         6.0     2.1       6.9     1.9       7.5     1.1      2.38       .108
Gender^b     M:F ratio               7:4               7:3               10:5

Note. RBDs = right-brain-damaged participants; LBDs = left-brain-damaged participants; NCs = normal control participants; M = male; F = female.
^a Hollingshead Occupational Rating Scale (1977). ^b Fisher exact probability tests were conducted for all possible group comparisons, and none were significant.

Method
Participants
Participants were 11 RBD, 10 LBD, and 15 NC adults between 34 and 81 years of age. All participants, except for 1 RBD,2 were right-handed by self-report and by the Coren, Porac, and Duncan (1979) lateral preference inventory, without history of being converted from left-handedness. All participants were native English speakers or fluent by age 7. Participants had no history of mental retardation, learning disability, dementia, psychiatric disorder, psychotropic drug treatment, or substance abuse. Psychiatric history was screened via the Schedule for Affective Disorders and Schizophrenia-Lifetime Version (SADS-L; Endicott & Spitzer, 1978) to exclude normals with any psychiatric history and brain-damaged participants with a premorbid psychiatric history. Further, the three groups did not differ significantly, F(2, 33) = 0.71, p = .498, on the Beck Depression Inventory (Beck, 1967): RBDs, M = 2.6 (2.2); LBDs, M = 2.5 (2.7); NCs, M = 1.7 (2.0). NCs had no history of neurological disease, and brain-damaged participants had no premorbid history of neurological disease (e.g., epilepsy). To ensure that the participant groups were comparable with regard to age, education, and occupational status (Hollingshead, 1977), we conducted one-way analyses of variance (ANOVAs). As can be seen in Table 1, no significant group differences in demographics were found. In addition, the ratios of male to female participants in each group did not differ significantly (chi-square test). Patient groups were tested at least 2 months poststroke onset (MPO). Median MPO was 21.0 months for the RBDs and 12.5 months for the LBDs, and the mean MPO was 54.1 (SD = 75.4) and 21.9 (SD = 30.2) months for the RBDs and LBDs, respectively. Statistical analyses did not reveal significant differences between the groups for either the median MPO (Mann-Whitney U; p = .458) or the mean MPO, t(19) = 1.26, p = .223.
RBDs and LBDs were included for study if they suffered brain damage as a result of a unilateral cerebrovascular accident (CVA). Lesion location was confirmed by CT or MRI scan, neuroradiological report, or both, except for 1 RBD, where side of lesion was confirmed by clinical neurological examination. For the RBDs, lesion locations were as follows: two frontal, one parietal, one occipital, one corona radiata, one frontal plus amygdala and hippocampus, one frontal plus periventricular white matter and basal ganglia, one frontoparietal plus putamen, one corona radiata and internal capsule, and one globus pallidus and putamen. For the LBDs, lesion locations were as follows: one anterior cortical, one occipital, one frontoparietal, one frontotemporoparietal, one frontotemporoparietal plus external capsule, one occipitotemporal plus posterior internal capsule, two corona radiata, one basal ganglia, and one thalamus and posterior internal capsule. Although the

number of participants in each region is small, the groups studied were relatively well matched in terms of the presence of cortical, subcortical, or mixed lesions (i.e., lesions affecting both cortical and subcortical structures; see Table 2). Because the current study is part of a larger project testing the expression, as well as perception, of emotion, moderate or more severe aphasics of any sort were not tested. The LBDs were evaluated for language status via the Boston Diagnostic Aphasia Examination (BDAE; Goodglass & Kaplan, 1983) and the 15-item version of the Boston Naming Test (Mack, Freed, Williams, & Henderson, 1992). Additional language information was obtained from clinical observation, medical records (i.e., a speech-language pathologist's report or neurologist's evaluation), or both. In five cases, there were no language deficits. In one case, there were mild word-finding difficulties at the time of testing but no history of aphasia. In three cases, there was minor residual aphasia at testing, as well as history of aphasia. Finally, 1 LBD was aphasic (i.e., Broca's aphasia) at the time of testing. Brain-damaged participants were recruited primarily from the Neurology Service at Mount Sinai Medical Center, New York, New York, as well as from area hospitals and via local ads and flyers. All testing was conducted at Mount Sinai Medical Center. Informed consent was obtained before the start of testing. Participants were reimbursed for participation time and transportation costs.

Procedures
Potential participants were screened for participation in the actual study. Participants meeting criteria were assessed on nonemotional control measures. Participants were then tested on identification and discrimination tasks of facial, prosodic, and lexical emotion (Borod, Welkowitz, & Obler, 1992). Screening and control measures were presented in a fixed order to all participants. Experimental tasks were presented in one of four random orders, equally distributed across the three participant groups.

Screening Measures
All participants were initially screened for demographic and medical background, using a comprehensive questionnaire (Borod, Welkowitz, & Obler, 1992). The Block Design (for LBDs and NCs) and Information (for RBDs and NCs) subtests of the Wechsler Adult Intelligence Scale-Revised (WAIS-R; Wechsler, 1981) were administered to ensure absence of dementia or mental retardation. Two subtests (Attention and Memory) from the Mattis Dementia Rating Scale (MDRS; Mattis, 1988) and the Complex Ideational Material subtest from the BDAE (Goodglass & Kaplan, 1983) were administered to ensure that participants were able to handle the basic cognitive requirements of the experimental tasks. Finally, to ensure adequate perceptual functioning across the three communication channels, we screened participants visually on the Benton Visual Form Discrimination Test (BVFD; Benton, Hamsher, Varney, & Spreen, 1983), auditorily on a measure of pure tone threshold (Beltone Special Instruments Division, 1987; Borod, Obler, Albert, & Stiefel, 1983) and on the Benton Phoneme Discrimination Test (BPD; Benton et al., 1983), and lexically on the Reading Sentences and Paragraphs subtest from the BDAE. The cutoff scores utilized in this study were based on previous research in our laboratory and were generally 1-2 SDs below the normal mean (see Table 3).

Table 2
Number of Cortical, Subcortical, and Mixed Lesions

Participants    Cortical    Subcortical    Mixed^a    Unknown
RBDs            4           3              3          1
LBDs            4           4              2          0

Note. RBDs = right-brain-damaged participants; LBDs = left-brain-damaged participants.
^a Mixed = cortical and subcortical lesions present.

2 To ensure that the left-handed RBD did not differ from the right-handed RBDs in this study, we compared this participant's performance to that of the other RBDs on all emotion variables. Her performance was found to be consistent. If her brain laterality were crossed for emotion, then one would have expected her data to be more similar to those of the LBDs rather than to those of the other RBDs.

Experimental Procedures
All participants were tested on perceptual emotional tasks, examining facial, prosodic, and lexical channels, with analogous paradigms (discrimination, identification) and similar difficulty levels across channels (Borod, Welkowitz, & Obler, 1992). Mean accuracy levels of 80% (based on ratings by normal participants) were used across channels, tasks, valences, and emotions. Each

perception task included three positive (happiness, interest, and pleasant surprise) and five negative (sadness, fear, anger, disgust, and unpleasant surprise) emotions to test the emotion laterality hypotheses. These emotions were derived from those considered primary by Ekman and Friesen (1975) and Izard (1977); pleasant and unpleasant surprise were included to further test the valence hypothesis. There were two equivalent forms for the discrimination tasks (28 pairs each: 14 different trials and 14 same trials) and two random orders for the identification tasks. To develop the stimulus items for the facial and prosodic emotional tasks, eight actors and actresses were photographed and audiotaped while they were posing each of the eight emotional expressions. To obtain the facial and prosodic poses, instructions were developed using a number of sources (Ekman & Friesen, 1975, 1976; Izard, 1971, 1977, 1983; Scherer, 1979, 1981, 1982, 1989). For each of the eight emotions, the following information was provided to posers: (a) descriptions of the experience of the emotion, (b) circumstances that may elicit the emotion, (c) features of the facial expression of the emotion, (d) prototypical photographs of facial expressions of the emotion, and (e) vocal characteristics of the emotion. After extensive practice, posers were photographed while producing four poses of each facial expression and were audiotaped while producing each prosodic emotion to four different neutral-content sentences. The facial (n = 256) and prosodic (n = 256) poses (8 [posers] X 8 [emotions] X 4 [trials]) were randomized and were presented to raters for category accuracy ratings. Raters were right-handed, native English-speaking or fluent by age 7, unmedicated, normal adults, who had no history of neurological disease, psychiatric disorder, learning disability, or substance abuse. The rating data were used to select the final facial and prosodic stimuli for the emotion tasks.
For the lexical emotional tasks, acceptable stimuli (i.e., words and sentences) from our previous research (Andelman, 1990; Borod, Andelman, Obler, Tweedy, & Welkowitz, 1992) were used as a base. In addition, new words and sentences were generated. The pool of emotion words was derived from numerous sources, including dictionaries, thesauri, word lists, and monographs (e.g., Brown & Ure, 1969; Davitz, 1969; Paivio, 1991; Thorndike & Lorge, 1944; Toglia & Battig, 1978; Webster, 1962). For the sentence task, stimuli were constructed based on these words. As

Table 3
Performance of the Participant Groups on the Screening Measures

                                                                                                       Possible   RBDs           LBDs           NCs
Variable                  Measure                                       Cutoff score                   range      M (SD)         M (SD)         M (SD)
Auditory comprehension    BDAE Commands subtest                         3 of 6 points on Items 1-3     0-15       15.0 (0.0)     14.9 (0.3)     14.9 (0.3)
                          BDAE Complex Ideational Material subtest      4 of 6 points on Items 1-6     0-12       11.1 (0.9)     10.3 (1.7)     11.1 (1.1)
Reading comprehension     BDAE Reading Sentences & Paragraphs subtest   5 of 10 points                 0-10       9.6 (0.7)      9.3 (1.0)      9.9 (0.3)
General intelligence      Information or Block Design^a                 7 ACSS                         0-19       10.9^b (2.4)   9.5^c (3.3)    12.4^d (3.2)
Basic attention           MDRS Attention subtest                        34 of 37 points                0-37       35.4 (1.0)     35.7 (2.0)     36.4 (0.9)
Basic memory              MDRS Memory subtest                           22 of 25 points                0-25       24.1 (1.0)     23.3 (3.1)     24.1 (1.0)
Basic visual perception   BVFD                                          26 of 32 points                0-32       27.6 (4.2)     28.9 (3.5)     29.9 (2.1)
Auditory perception       BPD                                           19 of 30 points                0-30       26.5 (2.1)     27.2 (1.1)     27.1 (2.5)
Pure tone threshold       500 Hz                                        ≤40 dB^e                       0-40       22.7 (11.3)    17.8 (4.6)     24.5 (7.2)
                          1000 Hz                                       ≤40 dB^e                       0-40       20.5 (11.8)    13.8 (6.5)     18.5 (8.8)
                          2000 Hz                                       ≤40 dB^e                       0-40       25.5 (18.0)    15.3 (9.5)     16.5 (10.8)

Note. RBDs = right-brain-damaged participants; LBDs = left-brain-damaged participants; NCs = normal control participants; BDAE = Boston Diagnostic Aphasia Examination; MDRS = Mattis Dementia Rating Scale; BVFD = Benton Visual Form Discrimination Test; BPD = Benton Phoneme Discrimination Test.
^a Age-Corrected Scaled Score (ACSS) for Wechsler Adult Intelligence Scale-Revised (WAIS-R) subtests. ^b WAIS-R Information subtest. ^c WAIS-R Block Design subtest. ^d Mean of Information and Block Design ACSSs. ^e Mean of the right and left ears is less than or equal to 40 dB for each frequency.

for the other channels, the words (n = 230) and sentences (n = 100) were randomized and were presented to healthy adult raters for category accuracy, as well as emotionality, ratings. These data were used to select the final lexical stimuli for the emotion tasks.

Facial perception. Facial stimuli consisted of Ekman and Friesen (1976) slides depicting happiness, sadness, fear, anger, and disgust, and new slides created for pleasant surprise, unpleasant surprise, and interest (Borod, Welkowitz, & Obler, 1992). For discrimination, two slides of different posers with the same or different emotional expressions were presented on a Caramate projector, and participants indicated the same or different emotion orally or by pointing to a printed card. For each pair, each slide was presented for 5 s, with a 1-s interstimulus interval (ISI). There were two equivalent stimulus sets (A, B), each consisting of 28 pairs (14 male and 14 female) and three practice trials. The 56 pairs comprising the stimulus set contained 28 different trials (each of the eight emotions paired with every other emotion) and 28 same trials (each emotion paired with the same emotion three or four times). For identification, slides of different emotional expressions were presented by Caramate, and participants identified the emotion portrayed. Maximum slide exposure was 20 s, and participants named or pointed to the correct response on an 8½ X 11 in. (21.6 X 27.9 cm) card, which was centrally placed and which vertically listed all eight emotions (to control for unilateral neglect). Practice was given with emotion labels to ensure the participants' familiarity with their meaning. Four response cards were randomized within and across participants. For the first trial, the examiner read choices aloud as the participant scanned down the card. There were 32 trials (16 male and 16 female posers, balanced across emotions), each emotion appearing four times, and 2 practice trials. There were two randomized stimulus sets for the identification task.

Prosodic perception. Prosodic stimuli were sentences spoken by actors or actresses in the eight emotional tones. Four neutral sentences were used, chosen for similar grammar, rhythm, and length; comprehensibility; and low emotionality ratings (e.g., "They found it in the room."). For discrimination, two sentences intoned by the same poser with the same or different emotional tone were presented on recorded tape, and participants indicated "same" or "different" (see earlier description). For each pair, sentences were presented in normal cadence (about 3 s), with a 1-s ISI. There were two equivalent stimulus sets (A, B), each consisting of 28 gender-balanced pairs. The 56 pairs comprising the entire set contained 28 different trials and 28 same trials, using pairing procedures described previously; there were 3 practice trials in each set. For identification, recordings of emotionally intoned sentences were presented in normal cadence. Participants identified the emotion portrayed from choices on the 8½ X 11 in. (21.6 X 27.9 cm) response cards described earlier. There were 24 gender-balanced trials, each emotion appearing three times, and 2 practice trials. There were two randomized stimulus sets.

Lexical perception. To develop lexical tasks, 200 words and 100 sentences were generated and then rated by normal participants for category accuracy (eight choices) and emotionality (6-point scale from 0 [not at all] to 5 [extremely]). Item selection was based on accuracy >50% (overall M = 80%), emotionality >2.5, nonredundancy in content and syntax, and clarity/comprehensibility. For discrimination, two printed words, one above the other, representing the same or different emotion, were presented in the center of a white sheet of 8½ X 11 in. (21.6 X 27.9 cm) paper for no more than 20 s. Participants indicated the same or different emotion. There were two stimulus sets (A, B), each consisting of 28 pairs, and three practice trials. An example of a same trial is "grief, regret" pertaining to "sadness," and an example of a different trial is "loving, stench" pertaining to "happiness" and "disgust," respectively. The 56 pairs comprising the entire set contained 28 different trials and 28 same trials, using the pairing procedures described for face. (A sentence discrimination task was not used because it was found to be too difficult for brain-damaged patients in pilot work.) There were two identification tasks. For word identification, three-word vertical clusters (each word representing the same emotion) were presented in the center of an 8½ X 11 in. (21.6 X 27.9 cm) sheet of paper for a maximum of 20 s. Participants indicated (from the eight-option card) the emotion best represented by each cluster. There were 24 test trials (8 [emotions] X 3 [clusters]) and 2 practice trials. There were two randomized stimulus sets. An example of a three-word vertical cluster is "putrid, slime, stench" for "disgust." For sentence identification, 24 different seven-word sentences, displayed in the center of an 8½ X 11 in. (21.6 X 27.9 cm) piece of paper, were presented, and participants indicated the emotion represented. An example of a sentence is "He felt the urge to hit someone." for the anger category. There were 24 test trials (8 [emotions] X 3 [sentences]) and 2 practice trials. There were two randomized stimulus sets.

Control Tasks
All participants were tested on nonemotional tasks that controlled for cognitive factors that could potentially confound performance on the various experimental tasks. Within the facial channel, the Facial Recognition Test (Benton et al., 1983) and the Visual Matrices Test (Borod et al., 1993) were used to control for nonemotional facial recognition and visuospatial perception, respectively. Within the prosodic channel, the Intonation Contours Perception task (Borod, Welkowitz, & Obler, 1992) was used. This task comprises four nonsense-syllable strings (e.g., pa-da-ka), with three types of intonational stress (i.e., declarative, interrogative, and emphatic; Blumstein & Cooper, 1974); each of the 12 items is presented twice. Participants were required to identify 24 tape-recorded items via a multiple-choice response card that contained a drawing, a verbal label, and a punctuational symbol for each contour. Within the lexical channel, nonemotional lexical control tasks (Borod, Welkowitz, Obler, Whalen, et al., 1992) were used; these tasks were analogous to the emotional lexical experimental tasks in terms of task structure, instructions, and degree of difficulty. The eight nonemotional categories, classified as characteristics of people, were body type, complexion, hair type, intelligence, personality, teeth, vision, and voice type. These nonemotional tasks included a word identification task (e.g., "yellow, crooked, crown" for the teeth category), a sentence identification task (e.g., "He watched the concert until the end" for the vision category), and a word discrimination task (e.g., "scarred, light" for a same item pertaining to "complexion," and "tall, wise" for a different item pertaining to "body type" and "intelligence," respectively). There were two randomized stimulus sets for each of the nonemotional tasks.
The overall mean category accuracy rating for these nonemotional tasks was 80.5% (compared to 79.9% for the emotional tasks), and the overall emotionality rating (on a 6-point scale from 0 [not at all] to 5 [extremely]) was 1.25 (compared to 3.32 for the emotional tasks). Ratings were made by healthy adults.

Results

The statistical analyses included three major components. First, repeated-measures ANOVAs were used to test the right-hemisphere and the valence hypotheses. The Newman-Keuls multiple comparison post hoc procedure was used to evaluate all significant main effects and interactions. The


Greenhouse-Geisser correction was applied to significance tests involving more than 1 df on within-subjects factors; all p values were appropriately adjusted. Second, the performance of the three participant groups on the nonemotional control tasks was compared using one-way ANOVAs. When significant group differences were found, correlations were then computed between the nonemotional control task and its appropriate experimental emotional measure(s). If the number of significant correlations between control and experimental tasks was more than would be expected by chance, the control measure was used as a covariate in an analysis of covariance (ANCOVA) for the relevant experimental task. Third, correlation coefficients were computed to evaluate the interrelationships among the facial, prosodic, and lexical channels of communication. The dependent measures used in the analyses consisted of percent correct total scores for each of the seven experimental tasks. For valence, separate percent correct scores were calculated for positive and negative items for both identification and discrimination tasks. In addition, for the discrimination tasks, percent correct scores were calculated for mixed-valence items (where one item in a pair was positive and the other item negative).
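The core of this pipeline (percent-correct dependent measures and one-way between-groups ANOVAs on the control tasks) can be sketched in a few lines. This is an illustrative reconstruction rather than the authors' code: the helper names and all scores below are hypothetical.

```python
# Sketch of the dependent-measure scoring and a one-way ANOVA F statistic,
# as used to compare the three participant groups on a control task.
# All data are invented for illustration.

def percent_correct(responses, targets):
    """Total score as the percentage of items answered correctly."""
    hits = sum(r == t for r, t in zip(responses, targets))
    return 100.0 * hits / len(targets)

def one_way_anova_f(*groups):
    """F statistic for a one-way between-groups ANOVA."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    # Between-groups sum of squares (df = k - 1)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-groups sum of squares (df = N - k)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical percent-correct scores for RBD, LBD, and NC groups on one task
rbd = [55.0, 60.0, 50.0]
lbd = [70.0, 75.0, 65.0]
nc = [80.0, 85.0, 75.0]
f = one_way_anova_f(rbd, lbd, nc)
```

With real data, the obtained F would be referred to the F distribution with (k - 1, N - k) degrees of freedom, as in the F(2, 33) tests reported for the demographic comparisons.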

Figure 1. Participant group mean scores, across all three channels, for the emotional identification and discrimination tasks. RBD = right-brain-damaged; LBD = left-brain-damaged; NC = normal control; EWID = emotional word identification; ESID = emotional sentence identification.

Test of the Right-Hemisphere and Valence Hypotheses


To compare the right-hemisphere hypothesis to the valence hypothesis, three Group (RBD, LBD, NC) X Channel (facial, prosodic, lexical) X Valence (2 or 3) repeated-measures ANOVAs were conducted, two with identification tasks and one with discrimination tasks. If the right-hemisphere hypothesis were operative, there would be a significant main effect of group; if the valence hypothesis were operative, there would be a significant interaction between group and valence. For valence, two levels (positive, negative) were included for identification tasks, and three levels (positive, negative, mixed-valence) were included for discrimination tasks. The facial identification (FID) and the prosodic identification (PID) tasks were used in both identification ANOVAs. In one of these ANOVAs, word identification (WID) was used for the lexical channel task while, in the other ANOVA, sentence identification (SID) was used for the lexical channel task. (This was done for statistical purposes so that the three channels had equal weight.) For the discrimination ANOVA, the facial discrimination (FDIS), prosodic discrimination (PDIS), and lexical word discrimination (WDIS) tasks were used.

Identification Tasks, Using WID for the Lexical Channel

For the identification ANOVA (3 X 3 X 2) in which the lexical task used was WID, there was a significant main effect of group, F(2, 33) = 7.53, p = .002 (see Figure 1). Newman-Keuls post hoc comparisons revealed that, overall, RBDs (M = 54.6, SD = 18.3) were significantly impaired relative to LBDs (M = 68.2, SD = 15.8) and NCs (M = 72.0, SD = 20.4). LBDs and NCs did not differ significantly from each other. There was also a main effect of channel, F(2, 66) = 66.67, p < .001, such that performance on PID (M = 49.1, SD = 17.1) was significantly worse than on FID (M = 65.1, SD = 13.3) or WID (M = 79.8, SD = 17.2); FID and WID were not significantly different from each other. However, the main effect of channel was modified by a significant interaction between channel and valence, F(2, 66) = 5.02, p = .01. For negative emotions, performance in both facial and prosodic channels was significantly (p < .05) lower than in the lexical channel. For positive emotions, performance in the prosodic channel was significantly lower than in the facial or lexical channel (see Figure 2). There were no significant interactions involving group (Group X Channel, p = .768; Group X Valence, p = .556; Group X Channel X Valence, p = .359).

Identification Tasks, Using SID for the Lexical Channel

For the identification ANOVA (3 X 3 X 2) in which the lexical task used was SID, again there was a significant main effect of group, F(2, 33) = 7.77, p = .002 (see Figure 1). Using post hoc tests, overall, we found RBDs (M = 51.4, SD = 16.3) were significantly impaired relative to LBDs (M = 65.5, SD = 13.6) and NCs (M = 68.8, SD = 19.0), who did not differ significantly from each other. In addition, there was a significant main effect of channel, F(2, 66) = 40.70, p < .001, such that overall performance in the prosodic channel (M = 49.1, SD = 17.1) was significantly poorer than in the facial (M = 65.9, SD = 13.4) or lexical (M = 70.7, SD = 17.8) channel. However, the main effect of channel was modified by a significant interaction between channel and valence, F(2, 66) = 3.90, p = .026. For both positive and negative emotions, performance in the prosodic channel was significantly lower than in the facial or lexical channels. Performance in the lexical and facial channels did not differ significantly for either valence (see Figure 2). There were no significant interactions involving group (Group X Channel, p = .664; Group X Valence, p = .784; Group X Channel X Valence, p = .345).
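The significant group main effects above were followed up with Newman-Keuls comparisons. That procedure is rarely implemented in modern statistical libraries, so the sketch below uses plain pairwise Welch t-tests as a conceptual stand-in; the scores are invented (chosen only to roughly approximate the reported group means), not actual data:

```python
# Sketch of the post hoc logic: pairwise group comparisons following a
# significant main effect of group. Newman-Keuls itself is not in scipy;
# pairwise Welch t-tests are shown as a conceptual stand-in. All scores
# below are invented for illustration.
from itertools import combinations
from scipy import stats

groups = {
    "RBD": [41, 55, 48, 60, 52, 44, 57, 49, 63, 50, 46],
    "LBD": [62, 70, 58, 66, 72, 61, 68, 74, 59, 65],
    "NC":  [71, 66, 75, 80, 62, 73, 69, 77, 64, 70, 74, 68, 72, 76, 67],
}

for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
    t, p = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
    print(f"{name_a} vs {name_b}: t = {t:.2f}, p = {p:.4f}")
```

In practice a multiple-comparison correction (e.g., Bonferroni or Tukey's HSD) would be applied to the pairwise p values.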

Figure 2. Channel X Valence interactions, across all three participant groups, for the emotional identification and discrimination tasks. EWID = emotional word identification; ESID = emotional sentence identification.

Discrimination Tasks

For the discrimination task ANOVA (3 X 3 X 3), there was no significant main effect of group, F(2, 33) = 1.13, p = .334 (RBD M = 82.0, SD = 11.8; LBD M = 85.1, SD = 10.0; NC M = 86.5, SD = 10.7; see Figure 1). There was, however, a significant main effect of channel, F(2, 66) = 12.09, p < .001. Even though participant performances on both facial (M = 82.7, SD = 8.7) and lexical word (M = 80.9, SD = 11.7) discrimination tasks were lower than on the prosodic discrimination task (M = 90.0, SD = 9.9), these differences did not reach significance when tested on a post hoc basis. The effect of channel, however, was modified by a significant Channel X Valence interaction, F(4, 132) = 8.60, p < .001 (see Figure 2). Post hoc analyses revealed that participants were significantly more accurate in discriminating positive and negative prosodic items than any other items. Positive and negative prosodic items were not significantly different from each other. There were no significant interactions involving the group variable (Group X Channel, p = .968; Group X Valence, p = .328; Group X Channel X Valence, p = .078).

Effect of the Nonemotional Control Tasks on the Experimental Emotional Tasks

Group Comparisons

To compare the performance of the three participant groups on the two facial, one prosodic, and three lexical nonemotional control tasks, we conducted one-way ANOVAs for Group (3). As can be seen in Table 4, there were significant group differences on three of these control tasks, one per channel. For facial recognition, post hoc Newman-Keuls tests revealed that both RBDs and LBDs, although not significantly different from each other, were significantly impaired relative to NCs. For intonation contours, post hoc tests revealed that RBDs performed significantly more poorly than LBDs or NCs, who did not differ from each other. For nonemotional word identification (NEWID), post hoc tests indicated that RBDs were significantly impaired relative to NCs and that LBDs did not differ significantly from the other two groups.

Correlations

In light of these significant findings, the three control tasks indicated previously were correlated with positive, negative, and mixed-valence percent correct scores for their respective tasks, separately for each participant group: facial recognition with FID and FDIS, intonation contours with

Table 4
Performance of the Participant Groups on the Nonemotional Control Tasks

Category controlled for | Variable            | Measure                    | Possible range | RBDs M (SD)  | LBDs M (SD)  | NCs M (SD)   | F(a)  | p
Facial emotion          | Face perception     | BFRT                       | 0-54           | 44.5 (3.8)   | 44.0 (4.6)   | 49.3 (4.7)   | 5.74  | .007**
Facial emotion          | Visual perception   | Visual matrices(b)         | 0-24           | 22.6 (1.2)   | 23.2 (0.8)   | 23.1 (1.5)   | 0.59  | .561
Prosodic emotion        | Auditory perception | Intonation contours(c)     | 0-24           | 18.0 (4.0)   | 21.3 (2.8)   | 21.3 (3.1)   | 3.70  | .035*
Lexical emotion         | Lexical perception  | Word identification(c)     | 0-24           | 84.3 (11.9)  | 87.9 (14.0)  | 96.4 (4.7)   | 4.82  | .015*
Lexical emotion         | Lexical perception  | Sentence identification(c) | 0-24           | 73.9 (9.3)   | 75.4 (16.4)  | 73.6 (14.9)  | 0.06  | .947
Lexical emotion         | Lexical perception  | Word discrimination(c)     | 0-28           | 70.8 (12.6)  | 77.5 (10.4)  | 80.2 (13.4)  | 1.89  | .166

Note. RBDs = right-brain-damaged participants; LBDs = left-brain-damaged participants; NCs = normal control participants; BFRT = Benton Facial Recognition Test. (a) For all Fs, the degrees of freedom equaled 2, 33, except for intonation contours (df = 2, 31). (b) Borod, Martin, Alpert, Brozgold, & Welkowitz (1993). (c) Borod, Welkowitz, & Obler (1992).
*p < .05. **p < .01.
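Each row of Table 4 corresponds to a one-way ANOVA across the three groups. A minimal sketch of that comparison is shown below; the scores are invented, and only the structure mirrors the analysis:

```python
# Sketch of one control-task group comparison: a one-way ANOVA via scipy.
# All scores are invented for illustration (facial-recognition-like values
# on a hypothetical 0-54 scale).
from scipy import stats

rbd = [44, 41, 47, 45, 43, 49, 42, 46, 44, 45, 43]   # n = 11
lbd = [45, 40, 48, 42, 44, 47, 41, 43, 46, 44]        # n = 10
nc  = [50, 47, 52, 49, 51, 48, 53, 46, 50, 49, 51, 48, 52, 47, 50]  # n = 15

f, p = stats.f_oneway(rbd, lbd, nc)
df_error = len(rbd) + len(lbd) + len(nc) - 3  # 3 groups
print(f"F(2, {df_error}) = {f:.2f}, p = {p:.4f}")
```

With group sizes of 11, 10, and 15, the error degrees of freedom come to 33, matching the F(2, 33) tests reported for most control tasks.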

PID and PDIS, and NEWID with emotional word identification (EWID). For the facial tasks, two out of six correlations were significant for FID and none out of nine for FDIS. There were no significant correlations out of six for PID and two out of nine for PDIS. For EWID, two out of six correlations were significant.


ANCOVAs
Because there were more significant correlations than would have been expected by chance for FID, PDIS, and EWID, two-way repeated-measures ANCOVAs, Group (3) X Valence (2, 3), were conducted for each of these experimental tasks with the relevant control task as the covariate for each analysis. In controlling for facial recognition ability on group differences in facial emotional identification, the FID ANCOVA yielded a significant main effect of group, F(2, 32) = 6.10, p = .006. Post hoc tests revealed that RBDs were significantly less accurate than LBDs and NCs in facial emotional identification; the latter groups did not differ significantly. In controlling for NEWID on group differences in EWID, the EWID ANCOVA produced a trend for group, F(2, 32) = 2.92, p = .068. When post hoc tests were conducted, RBDs were less accurate than NCs and LBDs in identifying emotional words, but only the comparison to NCs was significant. Finally, in controlling for intonation contour perception on group differences in prosodic discrimination, the PDIS ANCOVA did not yield a significant group effect, F(2, 31) = 0.35, p = .710. Overall, the use of statistical controls did not alter the group findings reported earlier. More specifically, although performance between some control and experimental tasks was correlated, group differences for the emotional identification tasks were still evident after covarying for group differences in control tasks.
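As a rough sketch of the covariance logic, one can partial the control task out of the emotional task by linear regression and then compare groups on the residuals. This residualization is only a didactic approximation of a full ANCOVA (which also adjusts the error degrees of freedom), and all values below are invented:

```python
# Didactic approximation of ANCOVA: regress the emotional score on the
# nonemotional covariate, then run a group comparison on the residuals.
# All values are invented for illustration.
import numpy as np
from scipy import stats

covariate = np.array([44, 41, 47, 45, 43, 49, 42, 46, 50, 47, 52, 49, 51, 48])
emotion   = np.array([55, 50, 62, 58, 52, 66, 51, 60, 70, 65, 74, 68, 72, 66])
group     = np.array(["RBD"] * 8 + ["NC"] * 6)

# Fit emotion ~ covariate and keep the residuals (covariate-adjusted scores).
slope, intercept, *_ = stats.linregress(covariate, emotion)
residuals = emotion - (intercept + slope * covariate)

f, p = stats.f_oneway(residuals[group == "RBD"], residuals[group == "NC"])
print(f"Group effect on covariate-adjusted scores: F = {f:.2f}, p = {p:.3f}")
```

A full ANCOVA (e.g., via a linear model with group and covariate terms) tests the group factor against an error term with one additional degree of freedom removed for the covariate, which is why the reported group tests have df = 2, 32 rather than 2, 33.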

The Relationship Among Communication Channels

To investigate the relationship among facial, prosodic, and lexical channels of emotion, we computed separate intercorrelation matrices for the total scores, using Pearson product-moment correlations. Table 5 displays the correlation coefficients for identification and discrimination total scores by group. Significant findings are indicated. As can be seen, the NCs showed relatively high significant correlations among all of the identification tasks (five correlations). However, they exhibited only one significant correlation (out of three) for the discrimination tasks. The LBDs, on the other hand, showed the opposite pattern. There were three out of three significant correlations for discrimination but only one out of five for identification. For the RBDs, there were no significant correlations for either discrimination or identification.

Discussion

Right-Hemisphere and Valence Hypotheses

One of the goals of this study was to evaluate the right-hemisphere hypothesis and the valence hypothesis for emotional perception. Ours is the first perception study to include a range of positive and negative emotions across facial, prosodic, and lexical communication channels to test these hypotheses in brain-damaged patients. Overall, there was strong support for right-hemisphere dominance in the identification of emotional stimuli across channels. Evidence for the right-hemisphere hypothesis comes from the finding of significant main effects of group but no significant Group X Valence interactions. In these analyses, RBDs were significantly impaired (i.e., less accurate) relative to LBDs and NCs, who did not differ significantly from each other. Even when group performance was inspected on positive and negative items for each paradigm (see Table 6), there was no evidence for group differences as a function of valence. Further, when the trend for the Group X Channel X Valence interaction from the discrimination task ANOVA, F(8, 132) = 2.07, p = .078, was explored using post hoc tests, there were no significant participant-group differences for positive, negative, or mixed-valence items for any of the three channels. This study is consistent with a large number of studies in the emotion literature, focusing on a single channel, which have also found right-hemisphere dominance for emotional perception rather than differential specialization as a function of valence (e.g., facial, Bowers et al., 1985; prosodic, Heilman, Bowers, Speedie, & Coslett, 1984; lexical, Borod,

Table 5
Correlation Coefficients Among the Communication Channels for Emotional Identification and Discrimination Total Scores

Identification
              RBDs                 LBDs                  NCs
Task     PID    WID    SID  |  PID    WID     SID  |  PID    WID      SID
FID      .57    .47    .37  | -.21    .76**   .28  |  .57*   .82***   .68**
PID             .38    .59  |        -.29     .33  |         .73**    .78***

Discrimination
              RBDs                 LBDs                  NCs
Task     PDIS   WDIS        |  PDIS   WDIS         |  PDIS   WDIS
FDIS     -.05   .09         |  .63*   .62*         |  .52*   .31
PDIS            .28         |         .80**        |         .39

Note. RBDs = right-brain-damaged participants; LBDs = left-brain-damaged participants; NCs = normal control participants; FID = facial identification; PID = prosodic identification; WID = word identification; SID = sentence identification; FDIS = facial discrimination; PDIS = prosodic discrimination; WDIS = word discrimination.
*p < .05. **p < .01. ***p < .001.
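Per-group correlation matrices like those in Table 5 can be produced with a few lines of code. The task names follow the battery's abbreviations, but the scores below are invented for illustration:

```python
# Sketch of the interchannel correlation analysis: Pearson correlations
# among task total scores, computed within one participant group.
# All scores are invented for illustration.
from scipy import stats

fid = [60, 72, 55, 68, 75, 63, 70, 58, 66, 73]   # facial identification
pid = [45, 58, 40, 52, 60, 48, 55, 42, 50, 57]   # prosodic identification
wid = [70, 80, 66, 76, 84, 72, 79, 68, 75, 82]   # word identification

tasks = {"FID": fid, "PID": pid, "WID": wid}
names = list(tasks)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        r, p = stats.pearsonr(tasks[a], tasks[b])
        print(f"r({a}, {b}) = {r:.2f}  (p = {p:.3f})")
```

Running this once per participant group (RBD, LBD, NC) yields the group-specific triangular matrices of the kind reported in Table 5.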


Table 6
Valence Means Across the Three Channels by Participant Group

                                                       RBDs          LBDs          NCs
Paradigm        Lexical task used  Emotional valence   M     SD      M     SD      M     SD
Identification  Words              Positive            53.4  12.3    69.3  10.3    70.8  13.9
                                   Negative            55.9  13.7    67.1   8.4    73.1  15.0
Identification  Sentences          Positive            49.7  12.3    65.2   9.8    66.6  13.6
                                   Negative            53.0  15.3    65.8   7.6    70.9  16.3
Discrimination  Words              Positive            81.4   8.9    85.2   8.8    85.7   6.0
                                   Negative            79.3   7.0    84.8   8.2    87.7   9.0
                                   Mixed               85.4  11.6    85.2  10.8    86.1  11.5

Note. RBDs = right-brain-damaged participants; LBDs = left-brain-damaged participants; NCs = normal control participants.

Andelman, et al., 1992). In this study, by using the same participants and a carefully constructed emotion battery, including multiple modalities and control tasks, we increased the power of the statistical design. Our study is consistent with others in spite of the fact that our patient groups had relatively mild impairments with respect to cognitive and linguistic deficits commonly associated with right- or left-hemisphere pathology. Due to the experimental demands of our procedures, it was necessary to exclude moderate and severe aphasics. In a related study on emotional perception, Benowitz and colleagues (Benowitz et al., 1983), using three nonverbal channels (i.e., facial expression, vocal intonation, and body movement), also found emotional communication dependent upon right-hemisphere functioning. Blonder, Bowers, and Heilman (1991) likewise reported facial and prosodic emotional perception impairments for RBDs relative to LBDs and NCs. With regard to emotional sentences, RBDs were selectively impaired when content depicted nonverbal expressions but not when emotional situations were described. Unique to the current study are the facts that a range of positive and negative emotions was used to test the valence hypothesis across channels, controls were provided for relevant nonemotional variables for each channel, and the relationship among the communication channels was examined.3

For the emotional discrimination tasks under study, by contrast, there were no significant performance differences among the three participant groups. There are at least two possible explanations for the absence of significant findings on these tasks. First, discrimination tasks are typically less difficult than identification tasks.
Indeed, although our participants did not perform at ceiling, performance ranged, as a rule, between 80% and 90% for all groups on the discrimination tasks. Thus, it may be that the brain-damaged participants who were cognitively intact enough to participate in the current study were able to perform comparably to the NCs on measures of discrimination. In a similar vein, neurolinguistic studies suggest that some aphasics cannot identify but can discriminate phonemic stimuli (e.g., Blumstein, Cooper, Zurif, & Caramazza, 1977). To our knowledge, the reverse has not been found. Second, the discrimination process, in the case of face and prosody, could be activating perceptual rather than emotional systems, so that the tasks are performed as perceptual matching tasks rather than emotional discrimination tasks. According to Borod et al. (1993), discrimination tasks of this nature assess the basic ability to perceive the actual stimulus configuration, requiring processing that is predominantly perceptual in nature rather than associative or categorical. In fact, there were very few significant findings (only 12.5%) when emotional discrimination and identification scores were correlated with each other within each channel, separately by participant group. This paucity of significant correlations further underscores the separateness of the discrimination and identification processes indexed in this study. It is important to note that the overall participant group findings for identification and discrimination held even when statistical controls were implemented for relevant nonemotional variables. Further, it was the case that the RBDs under study demonstrated the typical deficits reported in the neuropsychological literature for neutral face recognition and intonation contour perception (e.g., Borod, Koff, & Caron, 1983; Weintraub, Mesulam, & Kramer, 1981). 
The finding for the facial channel in the presence of a control for neutral face recognition indicates that the results obtained are most probably due to emotional processing deficits rather than to a more general deficit in face recognition. This finding is in agreement with several studies in the literature (e.g., Borod et al., 1993; Bowers et al., 1985; Cicone et al., 1980) that also suggest that facial emotion deficits are independent of neutral face recognition (or visual perception) in brain-damaged patients.

Communication Channels

For both the identification and discrimination analyses, there were overall differences as a function of channel. Although the stimuli selected for all experimental tasks had a mean accuracy level of 80%, the actual tasks differed in terms of difficulty. Nevertheless, this inequality did not introduce ceiling or floor effects. For the identification tasks, participants performed significantly worse on prosodic than

3 While this article was in press, we came across a study by Schmitt, Hartje, and Willmes (1997) that examined facial, prosodic, and propositional speech perception for one positive (happiness) and one negative (fear) emotion in RBDs, LBDs, and NCs. Selective impairments were found among RBDs for the facial and prosodic stimuli and among LBDs for the speech stimuli.


on either facial, lexical word, or lexical sentence tasks. For the discrimination tasks, the opposite pattern was seen. That is, all participant groups performed significantly better on prosodic than on facial or lexical tasks. These findings may be the result of factors associated with the prosodic tasks themselves rather than the result of a difference in the manner in which prosodic emotion is perceived or processed. It appears as if the prosodic identification task may be inherently more difficult than the other two identification tasks. By contrast, prosodic discrimination appeared to be relatively simple for most participants. One explanation could be the fact that, for the prosodic items, each pair of emotionally intoned sentences is produced by the same actor or actress. For the facial discrimination items, by contrast, two different actors or actresses are used for each pair. It may be that, for prosodic discrimination, participants are performing a relatively simple perceptual matching task rather than discriminating discrete emotions. Alternatively, prosodic indicators of emotion may be processed via different brain circuitry that has less connection to brain areas (presumably neocortical) required for the labeling process. To more directly examine relationships among the communication channels, we computed correlation coefficients among facial, prosodic, and lexical measures of emotional perception. The design of this study provided a unique opportunity to examine these relationships in brain-damaged participants and to compare these relationships to those for normal participants. When we examined the data as a function of participant group, for the normal participants, 100% of the interchannel correlations were significant for identification tasks, whereas only 33% were significant for discrimination.
The findings for identification are consistent with earlier reports in the literature and suggest that there may be a central processor for emotion identification that is independent of channel (e.g., Borod, 1992; Semenza et al., 1986; cf. Bowers et al., 1993). This notion is further corroborated by the fact that correlations for the normal participants among emotional identification tasks (n = 5, median r = .73, range = .57-.82) were substantially higher than correlations between emotional identification tasks and nonemotional control tasks (n = 5, median r = .21, range = .11-.57), where only one finding was significant. For the brain-damaged participants, there were fewer significant interchannel correlations across paradigms (i.e., 25%) than for the normal participants (i.e., 75%). Even though the finding for the patient groups seems to suggest a single processor for each channel, perhaps, when part of the system breaks down, there may be a redundant backup system that takes over (e.g., Stein, Brailowsky, & Will, 1995). In other words, maybe there is a central processor that is operative in normal circumstances, but in brain damage, it is necessary to rely on more primary and less associative ways of processing. When considering the different findings for the normal versus the patient groups, one could hypothesize that deficits in perceptual processing limit the performance of the patient groups, regardless of emotion, but are not present and hence not delimiting for the normal group. However, this hypothesis would not be applicable in this study because all participants were carefully screened for basic auditory perception, visual perception, and visual reading ability (see Screening Measures section and Table 3). In any event, a larger number of participants would be essential to confirm this pattern of results. The battery of tests used in this study appears to have clinical utility because it not only differentiates individual groups with emotional perception deficits but also suggests where the breakdown in processing might occur. In quantifying the degree of impairment, the battery could have implications for rehabilitation with brain-damaged populations. More specifically, it would be possible to target specific areas for remediation and to establish a baseline from which to assess improvement in emotional functioning.

Exploratory Issues

Although our participant groups were comparable in demographic, cognitive, and clinical status, we examined the effect of a number of these factors on the emotional perception variables under study. The following participant factors have been known to have an impact on performance in neuropsychological studies of emotion: gender (Burton & Levy, 1989; Duda & Brown, 1984; Strauss & Moscovitch, 1981), depression (Jaeger, Borod, & Peselow, 1987; Starkstein & Robinson, 1991), lesion level (Adolphs, Tranel, Damasio, & Damasio, 1994; Cohen, Riccio, & Flannery, 1994; Tucker, 1993), MPO (Henry, 1984; Shewan & Cameron, 1984), language deficits (Benson, 1973; Borod et al., 1986), and unilateral neglect (Gainotti, 1972; Ruckdeschel-Hibbard, Gordon, & Diller, 1986). On a post hoc basis, to examine the relationship between these participant factors and our experimental variables, we computed correlations between scores on the Beck Depression Inventory, measures of neglect (letters and geometric forms; Diller, Ben-Yishay, Goodkin, Gordon, & Weinberg, 1974), and MPO and scores from the emotional identification tasks and from the discrimination tasks. In addition, Group X Channel X Valence ANOVAs were conducted on the experimental emotion variables, adding gender (2; male, female), language deficits (2; present, absent), or lesion level (3; cortical, subcortical, cortical plus subcortical) as between-subjects factors. Results of these analyses were examined with respect to significant correlations or to significant main effects and interactions. Overall, the participant factors did not affect the emotion tasks, except in one case. For the Beck Depression Inventory, 17% of the correlations were significant, all occurring for negative emotions. The fewer depressive symptoms reported, the more accurately emotional stimuli were processed.
Due to the number of significant correlations, all major analyses in this study were conducted again, controlling for performance on the depression inventory. The pattern of results remained the same. Although depression may not have had a major impact in the current study, it may be an important consideration in future work, especially if differences emerge as a function of valence. The fact that there were no effects of the lesion level

456

BOROD ET AL. Beck, A. (1967). Depression: Clinical, experimental and theoretical aspects. New York: Harper & Row. Beltone Special Instruments Division. (1987). Owner's manual. Chicago: Beltone Electronics Corp. Benowitz, L. 1., Bear, D. M., Rosenthal, R., Mesulam, M. M., Zaidel, E., & Speiry, R. W. (1983). Hemispheric specialization in nonverbal communication. Cortex, 19, 5-11. Benson, D. F. (1973). Psychiatric aspects of aphasia. British Journal of Psychiatry, 123, 555-566. Benton, A., Hamsher, K. deS, Varney, N., & Spreen, O. (1983). Contributions to clinical neuropsychological assessment. New York: Oxford University Press. Blonder, L. X., Bowers, D., & Heilman, K. M. (1991). The role of the right hemisphere in emotional communication. Brain, 114, 1115-1127. Blumstein, S., & Cooper, W. (1974). Hemisphere processing of intonation contours. Cortex, 10, 146-158. Blumstein, S., Cooper, W., Zurif, E., & Caramazza, A. (1977). The perception and production of voice-onset time in aphasia. Neuropsychologia, IS, 371-383. Borod, J. C. (1992). Intel-hemispheric and intrahemispheric control of emotion: A focus on unilateral brain damage. Journal of Consulting and Clinical Psychology, 60, 339-348. Borod, J. C. (1993). Emotion and the brainanatomy and theory: An introduction to the Special Section. Neuropsychology, 7, 427-432. Borod, J. C. (1996). Emotional disorders (emotion). In J. G. Beaumont, P. Kenealy, & M. Rogers (Eds.), The Blackwell dictionary of neuropsychology (pp. 312320). Oxford, England: Blackwell. Borod, J. C., Andelman, R, Obler, L. K., Tweedy, J. R., & Welkowitz, J. (1992). Right hemisphere specialization for the identification of emotional words and sentences: Evidence from stroke patients. Neuropsychologia, 30, 827-844. Borod, J. C., Bloom, R., & Santschi-Haywood, C. (1998). Lexical aspects of emotional communication. In M. Beeman & C. Chiarello (Eds.), Right hemisphere language comprehension: Perspectives from cognitive neuroscience (pp. 285-307). 
Hillsdale, NJ: Erlbaum. Borod, J. C., Koff, E., & Caron, H. S. (1983). Right hemispheric specialization for the expression and appreciation of emotion: A focus on the face. In E. Perecman (Ed.), Cognitive functions in the right hemisphere (pp. 83-110). New York: Academic Press. Borod, I. C., Koff, E., Lorch, M. P., & Nicholas, M. (1985). Channels of emotional expression in patients with unilateral brain damage. Archives of Neurology, 42, 345348. Borod, J. C., Koff, E., Lorch, M. P., & Nicholas, M. (1986). The expression and perception of facial emotion in brain-damaged patients. Neuropsychologia, 24, 169-180. Borod, J. C., Martin, C., Alpert, M., Brozgold, A. Z., & Welkowitz, J. (1993). Perception of facial emotion in schizophrenic and right brain-damaged patients. Journal of Nervous and Mental Disease, 181, 494-502. Borod, I. C., Obler, L. K., Albert, M. L., & Stiefel, S. (1983). Lateralization for pure tone perception: Age and sex effects. Cortex, 2, 165-175. Borod, I. C., Welkowitz, J., Alpert, M., Brozgold, A. Z., Martin, C., Peselow, E., & Diller, L. (1990). Parameters of emotional processing in neuropsychiatric disorders: Conceptual issues and a battery of tests. Journal of Communication Disorders, 23, 247-271. Borod, I. C., Welkowitz, I., & Obler, L. K. (1992). The New York Emotion Battery. Unpublished materials, Mount Sinai Medical Center, Department of Neurology, New York.

variable suggests that subcortical, as well as cortical, structures are involved in emotional perception. This is not surprising in light of basic and clinical data pertaining to the salience of structures such as the amygdala (e.g., Adolphs et al., 1994; Le Doux, 1989). Further, these findings are not inconsistent with notions of cortical/subcortical reciprocity in the regulation of emotion (Madigan, Borod, Ehrlichman, & Tweedy, 1995; Schneider, Gur, Jaggi, & Gur, 1994; Tucker, 1981). By including larger numbers of participants in future research of this nature, researchers can examine lesion site (e.g., anterior vs. posterior) and lesion size, as well as level.

Conclusions
In summary, we examined facial, prosodic, and lexical emotional perception in RBDs, LBDs, and NCs. The unique contribution of this study is that three communication channels and comparable tasks, with analogous psychometric properties, were used to test laterality hypotheses across a range of emotions in the same group of participants. Overall, the results support the right-hemisphere hypothesis for the identification of emotion. For discrimination, the groups did not differ from each other. These findings for emotion perception were not influenced by demographic or clinical variables. In addition, we conducted the first interchannel comparison for emotional perception in right- and left-braindamaged, as well as normal, participants. There is evidence from our normal data of a single processor across channels for emotional identification. Along with fewer betweenchannel correlations for the patients, it is possible that emotional processing in normals results from an interweaving of many abilities and channels, components of which can break down under conditions of brain damage. Our inclusion of numerous methodological features in a single research design (i.e., several channels, a range of positive and negative stimuli, nonemotional tasks for each channel, demographic and clinical controls, careful screening procedures, and neurological and normal populations) permitted a breadth of analysis that was not previously possible. Conceptualization of the problem in this way allowed us an opportunity to test hypotheses about hemispheric specialization for emotion in greater depth and to examine relationships among communication channels more comprehensively. Moreover, we have presented a theoretically grounded battery of emotional perception tasks that appears to have validity and implications for clinical assessment.

References Adolphs, R., Tranel, D., Damasio, A., & Damasio, H. (1994). Impaired recognition of emotion in facial expressions following bilateral damage to the human amygdala. Nature, 372, 669-672. Andelman, R (1990). Perception and expression of emotional words and sentences in patients with unilateral brain damage. Unpublished doctoral dissertation, Queens College of the City University of New York.

Borod, J. C., Welkowitz, J., Obler, L. K., Whalen, J., Erhan, H., & Grunwald, I. (1992). A test battery to assess lexical emotion in neurological disorders [Abstract]. The Clinical Neuropsychologist, 6, 351.
Bowers, D., Bauer, R. M., Coslett, H. B., & Heilman, K. M. (1985). Processing of faces by patients with unilateral hemisphere lesions. Brain and Cognition, 4, 258-272.
Bowers, D., Bauer, R. M., & Heilman, K. M. (1993). The nonverbal affect lexicon: Theoretical perspectives from neuropsychological studies of affect perception. Neuropsychology, 7, 433-444.
Brown, W. P., & Ure, D. M. J. (1969). Five rated characteristics of 650 word association stimuli. British Journal of Psychology, 60, 233-249.
Burton, L., & Levy, J. (1989). Sex differences in the lateralized processing of facial emotion. Brain and Cognition, 11, 210-228.
Cicone, M., Wapner, W., & Gardner, H. (1980). Sensitivity to emotional expressions and situations in organic patients. Cortex, 16, 145-158.
Cimino, C., & Bowers, D. (1988). [Relationship between judgments of facial and prosodic emotion among patients with unilateral hemispheric lesions]. Unpublished raw data. (Cited in Neuropsychology, 1993, 7, 433-444.)
Cohen, M., Riccio, C., & Flannery, A. M. (1994). Expressive aprosodia following stroke to the right basal ganglia. Neuropsychology, 8, 242-245.
Coren, S., Porac, C., & Duncan, P. (1979). A behaviorally validated self-report inventory to assess four types of lateral preferences. Journal of Clinical Neuropsychology, 1, 55-64.
Davitz, J. (1969). The language of emotion. New York: McGraw-Hill.
Diller, L., Ben-Yishay, Y., Goodkin, R., Gordon, W., & Weinberg, J. (1974). Studies in cognition and rehabilitation in hemiplegia (Rehabilitation Monograph No. 50). New York: New York University Medical Center, Institute of Rehabilitation Medicine.
Duda, P. D., & Brown, J. (1984). Lateral asymmetry of positive and negative emotions. Cortex, 20, 253-261.
Ekman, P., & Friesen, W. (1975). Unmasking the face. Englewood Cliffs, NJ: Prentice-Hall.
Ekman, P., & Friesen, W. (1976). Pictures of facial affect. Palo Alto, CA: Consulting Psychologists Press.
Ekman, P., Friesen, W., O'Sullivan, M., & Scherer, K. (1980). Relative importance of face, body, and speech in judgments of personality and affect. Journal of Personality and Social Psychology, 38, 270-277.
Endicott, J., & Spitzer, R. (1978). A diagnostic interview: The schedule for affective disorders and schizophrenia. Archives of General Psychiatry, 35, 837-844.
Gainotti, G. (1972). Emotional behavior and hemispheric side of lesion. Cortex, 8, 41-55.
Gainotti, G. (1984). Some methodological problems in the study of the relationships between emotions and cerebral dominance. Journal of Clinical Neuropsychology, 6, 111-121.
Gainotti, G., Caltagirone, C., & Zoccolotti, P. (1993). Left/right and cortical/subcortical dichotomies in the neuropsychological study of human emotions. Cognition and Emotion, 7, 71-93.
Goodglass, H., & Kaplan, E. (1983). The assessment of aphasia and related disorders. Philadelphia: Lea & Febiger.
Heilman, K. M., Bowers, D., Speedie, L., & Coslett, H. B. (1984). Comprehension of affective and nonaffective prosody. Neurology, 34, 917-921.
Heilman, K. M., Bowers, D., & Valenstein, E. (1993). Emotional disorders associated with neurological disease. In K. Heilman & E. Valenstein (Eds.), Clinical neuropsychology (pp. 461-497). New York: Oxford University Press.


Henry, G. (1984). Emotional behavior following stroke: Patient versus spouse reports. Clinical Gerontologist, 2, 70-72.
Hollingshead, A. (1977). A four factor index of social status. Unpublished manuscript, Yale University, New Haven, CT.
Izard, C. (1971). The face of emotion. New York: Appleton-Century-Crofts.
Izard, C. (1977). Human emotions. New York: Plenum Press.
Izard, C. (1983). The maximally discriminative facial movement coding system (Max) (Revised). Newark, DE: Instructional Resources Center, University of Delaware.
Jackson, J. H. (1874). On the nature of the duality of the brain. The Medical Press & Circular, 1, 41-44.
Jackson, J. H. (1880). On affections of speech from disease of the brain. Brain, 2, 203-222.
Jaeger, J., Borod, J., & Peselow, E. (1987). Depressed patients have atypical hemispace biases in the perception of emotional chimeric faces. Journal of Abnormal Psychology, 96, 321-324.
Kolb, B., & Taylor, L. (1990). Neocortical substrates of emotional behaviors. In N. Stein, B. Leventhal, & T. Trabasso (Eds.), Psychological and biological approaches to emotion (pp. 115-144). Hillsdale, NJ: Erlbaum.
Le Doux, J. E. (1989). Cognitive-emotional interactions in the brain. Cognition and Emotion, 3, 267-289.
Liotti, M., & Tucker, D. M. (1995). Emotion in asymmetric corticolimbic networks. In R. J. Davidson & K. Hugdahl (Eds.), Brain asymmetry (pp. 389-423). Cambridge, MA: MIT Press.
LoCastro, J. (1972). Judgment of emotional communication in the facial-vocal-verbal channels. Unpublished doctoral dissertation, University of Maryland, College Park.
Mack, W., Freed, D., Williams, B. W., & Henderson, V. W. (1992). Boston Naming Test: Shortened versions for use in Alzheimer's disease. Journal of Gerontology: Psychological Sciences, 47, 154-158.
Madigan, N., Borod, J., Ehrlichman, H., & Tweedy, J. (1995, July). Hedonic experience of odors in subjects with unilateral brain lesions: Preliminary findings. Paper presented at the meeting of the American Psychological Society, New York.
Malatesta, C., Davis, J., & Culver, C. (1984, April). Emotion in the infant voice: Its relation to facial expression. Paper presented at the International Conference on Infancy Studies, New York.
Mandal, M. K., Tandon, S. C., & Asthana, H. S. (1991). Right brain damage impairs recognition of negative emotions. Cortex, 27, 247-254.
Mattis, S. (1988). Dementia Rating Scale. Odessa, FL: Psychological Assessment Resources.
Mehrabian, A., & Wiener, M. (1967). Decoding of inconsistent communications. Journal of Personality and Social Psychology, 6, 109-114.
Mills, C. K. (1912a). The cerebral mechanism of emotional expression. Transactions of the College of Physicians of Philadelphia, 34, 381-390.
Mills, C. K. (1912b). The cortical representation of emotion, with a discussion of some points in the general nervous mechanism of expression in its relation to organic nervous disease and insanity. Proceedings of the American Medico-Psychological Association, 19, 297-300.
Paivio, A. (1991). Imagery and familiarity ratings for 2448 words. Unpublished raw data, University of Western Ontario, Department of Psychology, London, Ontario, Canada.
Papez, J. W. (1937). A proposed mechanism of emotion. Archives of Neurology and Psychiatry, 38, 725-743.
Ross, E. D. (1997). Right hemisphere syndromes and the neurology of emotion. In S. C. Schachter & O. Devinsky (Eds.), Behavioral neurology and the legacy of Norman Geschwind (pp. 183-191). Philadelphia, PA: Lippincott-Raven.

Ross, E. D., & Mesulam, M-M. (1979). Dominant language functions of the right hemisphere? Archives of Neurology, 36, 144-148.
Ruckdeschel-Hibbard, M., Gordon, W., & Diller, L. (1986). Affective disturbances associated with brain damage. In S. Filskov & T. Boll (Eds.), Handbook of clinical neuropsychology (Vol. 2, pp. 305-337). New York: Wiley.
Scherer, K. (1979). Nonlinguistic vocal indicators of emotion and psychopathology. In C. Izard (Ed.), Emotions in personality and psychopathology (pp. 495-529). New York: Plenum Press.
Scherer, K. (1981). Speech and emotional states. In J. Darby (Ed.), Speech evaluation in psychiatry (pp. 189-220). New York: Grune and Stratton.
Scherer, K. (1982). Methods of research on vocal communication: Paradigms and parameters. In K. Scherer & P. Ekman (Eds.), Handbook of methods in nonverbal research (pp. 136-198). Cambridge, England: Cambridge University Press.
Scherer, K. (1989). Vocal correlates of emotional arousal and affective disturbance. In H. Wagner & A. Manstead (Eds.), Handbook of social psychophysiology (pp. 165-197). New York: Wiley.
Schmitt, J. J., Hartje, W., & Willmes, K. (1997). Hemispheric asymmetry in the recognition of emotional attitude conveyed by facial expression, prosody, and propositional speech. Cortex, 33, 65-81.
Schneider, F., Gur, R. C., Jaggi, J. L., & Gur, R. E. (1994). Differential effects of mood on cortical cerebral blood flow: A 133-xenon clearance study. Psychiatry Research, 52, 215-236.
Semenza, C., Pasini, M., Zettin, M., Tonin, P., & Portolan, P. (1986). Right hemisphere patients' judgments on emotions. Acta Neurologica Scandinavica, 74, 43-50.
Shewan, C., & Cameron, H. (1984). Communication and related problems as perceived by aphasic individuals and their spouses. Journal of Communication Disorders, 17, 175-187.
Starkstein, S., & Robinson, R. G. (1991). Neuropsychiatric aspects of cerebral vascular disorders. In S. Yudofsky & R. Hales (Eds.), The textbook of neuropsychiatry (pp. 449-472). Washington, DC: American Psychiatric Press.
Stein, D., Brailowsky, S., & Will, B. (1995). Brain repair. New York: Oxford University Press.
Strauss, E., & Moscovitch, M. (1981). Perception of facial expressions. Brain and Language, 13, 308-332.
Thorndike, E., & Lorge, I. (1944). The teacher's word book of 30,000 words. New York: Teachers College, Columbia University, Bureau of Publications.
Toglia, M. P., & Battig, W. F. (1978). Handbook of semantic word norms. Hillsdale, NJ: Erlbaum.
Tucker, D. M. (1981). Lateral brain function, emotion, and conceptualization. Psychological Bulletin, 89, 19-46.
Tucker, D. M. (1993). Emotional experience and the problem of vertical integration: Discussion of the special section on emotion. Neuropsychology, 7, 500-509.
Walker, E., McGuire, M., & Bettes, B. (1984). Recognition and identification of facial stimuli by schizophrenics and patients with affective disorders. British Journal of Clinical Psychology, 23, 37-44.
Webster, D. (1962). Dictionary of synonyms, antonyms, and homonyms. New York: Avenel Books.
Wechsler, D. (1981). Wechsler Adult Intelligence Scale-Revised [Manual]. San Antonio, TX: Psychological Corp.
Weintraub, S., Mesulam, M-M., & Kramer, L. (1981). Disturbances in prosody: A right-hemisphere contribution to language. Archives of Neurology, 38, 742-744.

Received June 16, 1997
Revision received December 1, 1997
Accepted December 2, 1997
