
RELC Journal

http://rel.sagepub.com/

Why Not Non-native Varieties of English as Listening Comprehension Test Input?
Priyanvada Abeywickrama
RELC Journal 2013 44: 59
DOI: 10.1177/0033688212473270

The online version of this article can be found at:
http://rel.sagepub.com/content/44/1/59

Published by: SAGE (http://www.sagepublications.com)

>> Version of Record - Mar 11, 2013

Downloaded from rel.sagepub.com at Afyon Kocatepe Universitesi on April 23, 2014



Article

RELC Journal 44(1) 59–74
© The Author(s) 2013
Reprints and permission: sagepub.co.uk/journalsPermissions.nav
DOI: 10.1177/0033688212473270
rel.sagepub.com

Why Not Non-native Varieties of English as Listening Comprehension Test Input?

Priyanvada Abeywickrama
English Department, San Francisco State University, USA

Abstract
The existence of different varieties of English in target language use (TLU) domains calls into
question the usefulness of listening comprehension tests whose input is limited only to a native
speaker variety. This study investigated the impact of non-native varieties or accented English
speech on test takers from three different English use contexts: Korea, Sri Lanka and Brazil. The
findings showed that the variety of English or accented speech used had no impact on test takers'
performance on a listening test for academic purposes. Also, test takers from the three different
countries performed similarly even when the speakers shared the same native languages as the
test takers. Despite these findings, students still perceived that a native variety of English should be
used in listening comprehension tests. Though the study suggests the use of non-native varieties
as test input, it also raises questions of fairness in the use of such varieties.

Keywords
Listening comprehension tests, validity, accent, non-native varieties, language attitudes

Introduction
With the spread of English as an international language, English is currently used by
many more non-native speakers than native speakers (Crystal, 2000; 2004). As a result,
the topography of the English language has changed dramatically in recent years
(Seidlhofer, 2004; Jenkins, 2006); the English used in many target language use (TLU)
domains is not limited to the English spoken by native speakers but includes English
spoken by non-natives alike. As Canagarajah (2006) points out, in any language use
context, whether local or international, the English language is not limited to a single

Corresponding author:
Dr Priyanvada Abeywickrama, English Department, San Francisco State University, 1600 Holloway Ave, San
Francisco, CA 94132, USA.
Email: abeywick@sfsu.edu


dominant variety. Current testing practices, however, often do not reflect this language
use situation. In most cases, the variety of English used has been, and still is, a native
variety. For example, the stimulus material and oral questions on the listening section of
the Test of English as a Foreign Language (TOEFL) are in standard North American
English (ETS, 2005). Similarly, the International English Language Testing System
(IELTS) listening tests use UK, UK regional, Australian and American accents
(Cambridge ESOL, 2008). In assessing English as a Second Language (ESL) listening
comprehension, test developers, as in the above cases, use samples of language that test
takers would encounter in the TLU domain. In the case of academia, specifically in the
US, a closer look at this domain reveals a range of language varieties that students
come across. In recent years, the varieties of English that students encounter have been not
only those of fellow international students but also those of international teaching assistants
(PhD graduate students who teach and/or assist in introductory undergraduate courses)
and even faculty (Gorsuch, 2003; Major et al., 2005). For example, at the University of
California Los Angeles (UCLA), students encounter international teaching assistants in
introductory chemistry, mathematics and computer science courses as well as in ESL
courses. During the 2008-2009 academic year at UCLA, out of 2,980 teaching assistants
(TAs), 620 were international teaching assistants (ITAs), making up about 21% of
the total TA population. Among the ITAs, the largest groups were Chinese (22%), fol-
lowed by Korean (14%), Indian (7%), Japanese (4%) and others (53%) (Information
Services, Graduate Division, UCLA, 2009). Similarly, at Texas Tech University, ITAs
made up 30% of the total teaching assistant population, with Chinese students making up
the majority of ITAs followed by Indian students (Department of Institutional Research,
2007). This information shows a common scenario found in most universities in the US.
Therefore, limiting test input to a native speaker variety in listening comprehension tests
such as TOEFL and IELTS is a misrepresentation of English use in US academia.
From an assessment perspective, if a listening comprehension test is to be authentic,
then the existence of the different varieties of English in academic domains calls into
question the usefulness (Bachman and Palmer, 1996) of listening comprehension tests
whose input is limited only to a native speaker variety. This lack of
correspondence limits the generalizability of test performance and therefore calls into
question the validity of inferences made about those test takers' listening ability. The
question then arises: if tests are to be authentic in nature and correspond to students' actual
language use, shouldn't listening comprehension tests such as those in the TOEFL and
IELTS tests use non-native varieties of English or accented speech along with native
varieties of English as test input?

Literature Review
As Major et al. (2002) have pointed out, the relationship between accent and performance
on listening comprehension is not clear. This is partly because factors such as exposure,
familiarity and attitudes towards accents seem to contribute to performance and partly
because the construct of listening comprehension itself is complex. By and large, using
non-native varieties or, as commonly called, accented speech, in listening comprehension
has been picked out as the factor most contributing to unintelligibility or comprehension


difficulty (Goh, 1999). Flowerdew (1994) provides a summary of such studies, whose
findings show that the effects of accent on listening comprehension are both positive
and negative. While unfamiliar accents can cause difficulty in comprehension, some
accented English may actually aid listeners’ comprehension. Eisenstein and Berkowitz
(1981) found Standard American English was more intelligible than the English spoken
by working class New Yorkers or foreign accented English. Other research supports the
view of listeners having an advantage when they are familiar with or share the accent of
the speaker. For example, Gass and Varonis (1984) report that familiarity with accent, in
addition to being familiar with the topic and the speaker, aids listening comprehension.
Bent and Bradlow (2003) state that non-native listeners might find second language (L2)
speech more intelligible than native speech, whereas the opposite might be true for native
listeners. However, some research found the advantages of accent familiarity to be only
partially true. Smith and Bisazza (1982) show that while Japanese students found the
Japanese speaker easier to understand than the American speaker, Indian students under-
stood the American speaker better than a speaker of Indian English. A study by Major et
al. (2002) shows that native speakers of Spanish found Spanish accented English easy to
understand but native speakers of Chinese found the Chinese accented English difficult to
understand. Then there are other studies that show that accent familiarity has no advan-
tage in comprehensibility. Van Wijngaarden et al. (2002) claim that Dutch listeners found
native English speakers to be more intelligible than their ‘own’ non-native English accent
and, more recently, Tokumoto and Shibata (2011) found that Japanese and Korean listeners
preferred native English pronunciation over their own spoken English.
While testing research has looked at the effect of question preview (Sherman, 1997)
and the influence of reading (Friedman and Ansley, 1990) on listening comprehension
tests, the effect of accented speech (Ortmeyer and Boyle, 1985; Wilcox, 1978) has had
very limited focus. In one study, undergraduate comprehension of audio taped lectures
did not differ when ITAs spoke with an American accent or an intelligible, foreign
accent. Student comprehension only lessened with an unintelligible foreign accent
(Bresnahan et al., 2002). Most recently, Harding’s (2008) study found no significant
difference in test taker performance on a listening comprehension test when foreign
accented speech was used.
Another factor that confounds the use of non-native varieties in listening comprehen-
sion is the attitude or perception towards accented speech (Lippi-Green, 1997). Research
has shown that non-native speakers are often categorized as learners, uneducated or defi-
cient speakers and stereotyped solely on their accents (Brennan and Brennan, 1981;
Cargile, 1997). Gill (1991) showed that native speakers judge native speaker accents as
more acceptable than non-native accents. This negativity towards accented speech is not
restricted to native speakers alone; non-native speakers also show negative attitudes
towards non-native accents. For instance, Toro (1997) found American English was rated
higher by Puerto Rican students than English spoken by Greeks, Puerto Ricans or South
Americans. Similarly, English as a Foreign Language (EFL) learners in Austria found
native varieties to be more positive while non-native varieties (such as their own) were
ranked lowest (Dalton-Puffer et al., 1997). Recent research however shows that these
attitudes and perceptions may be more complex. Harding (2008) found that listeners’
views of using accented speech differed according to the purpose of the listening activity.


Test takers felt that using accented speech in tests was unfair because test takers do not
have equal access to accented speech, and because it advantages those who have been
exposed to the accent or who share the speaker's first language (L1). However, in terms
of their own language learning goals, test takers found the inclusion of accented speech
acceptable when it fitted the purpose of the test or was relevant to their communicative
needs (i.e. being in environments of non-native speakers).
As discussed above, previous research on the effect of accents on listening compre-
hension is still inconclusive and listener attitudes vary widely. However, given our cur-
rent language use contexts where non-native speakers with accented English exist
alongside native speakers of English, we should continue to examine the impact of using
both native and non-native accented English, especially in assessments where the valid-
ity of any test score inferences we make about a test taker’s listening ability is based on
listening comprehension tests that claim authenticity. The following research questions
guided this study:

1. Does the use of non-native varieties of English affect performance on listening tests?
2. Is there any interaction between test takers' native language and the variety of English used in test input?
3. What are test taker perceptions and attitudes towards using non-native varieties of speech in listening comprehension tests?

Method
Participants
Data for this study were obtained from 110 test takers in three countries spanning two
different language use contexts: Korea (N=39) and Brazil (N=33), where English is a
Foreign Language (EFL), and Sri Lanka (N=38), where English is a Second Language (ESL).
They represent typical international students who would attend US universities. The test
takers were college students who were taking courses in various disciplines and at the
same time were enrolled in English classes at universities in their respective countries.
They were identified as being of high intermediate proficiency (Korean and Brazilian test
takers) and low intermediate proficiency (Sri Lankan test takers) based on performance in their
respective placement tests and the syllabi of their respective ESL classes. Furthermore, a
background questionnaire was administered before the test and used primarily for obtain-
ing demographic information and information regarding exposure to different varieties
of English. This information was used to ensure that the test takers in the study had simi-
lar exposure to English language learning.

Instruments, Speakers and Procedure


The listening comprehension test consisted of eight listening texts, each followed by
three to four questions, for a total of 31 multiple-choice items. These listening passages
were retired versions of the TOEFL exam (TOEFL Institutional Testing Program), and


are representative of typical classroom lectures (see Appendix A for an example listening
test). The content was drawn from different academic areas such as business, art, psychol-
ogy, geology and biology. Reliability estimates (0.9) for the listening section were pro-
vided by the test publishers (TOEFL Test and Score Manual).
After test takers listened and answered questions for each listening test, they were
asked to evaluate speaker comprehensibility. This questionnaire consisted of six
items rated on a 7-point Likert scale from 'strongly disagree' to 'strongly agree' (see
Appendix B, speaker comprehensibility rating questionnaire). After the test takers
completed all eight listening comprehension tests, a final perception questionnaire
was given to learn about test takers’ attitudes and how they felt about the use of non-
native speaker input in listening comprehension testing (see Appendix C, attitude
questionnaire).
The speakers who provided the input for the listening test were teaching assistants
(TAs) at leading universities in the US. These speakers were chosen based on the ITA
information obtained from UCLA and Texas Tech University. Since a majority of ITAs
are Chinese, two Chinese TAs were included in the study. Then, to represent ITAs who
spoke the languages of two of the test taker groups, one Korean and one Sri Lankan TA
were included. A TA who shared the same language as the Brazilian students was not
included, in order to represent the many student groups taught by TAs who do not share
their language. The other four TAs were native speakers of American English for a total
of eight TAs. Except for the American TAs, all TAs had obtained a Test of Spoken
English (TSE) score of at least 50 in order to be eligible to be a TA. TSE scores range
from 20-60. The TSE band descriptor for a score of 50 says 'communication generally
effective; task performed competently', but a closer look at the linguistic description
indicates that with such a score, 'errors [are] not unusual; accent may be slightly distract-
ing; some range in vocabulary and grammatical structures which may be slightly awk-
ward or inaccurate; delivery generally smooth with some hesitancy and pauses' (ETS,
2001). Thus it can be concluded that these TAs' speech was not considered native-like.
Additionally, these TAs had also taken the Spoken Proficiency English Assessment
(SPEAK) at their respective universities and had scored at least 50 points, described as
‘communication generally effective’, which contrasts with the maximum score of 60
points, described as ‘communication always effective’ (ETS, 2001) thus further support-
ing the fact that these TAs were not native-like. At the time of the study, all eight TAs
were teaching undergraduate courses.
Table 1 describes the design of the study. Test takers were assigned to one of two groups
based on their availability to take the test and were administered the listening
comprehension test by an ESL instructor at their respective universities in their respec-
tive countries. Group 1 consisted of 16 Korean, 18 Brazilian and 21 Sri Lankan test tak-
ers and Group 2 consisted of 23 Korean, 15 Brazilian and 17 Sri Lankan test takers. Test
takers in both groups heard text 1, on American history, read by an American TA. This
was to help familiarize students with the test. A further concern was that introducing foreign
accented English at the beginning of the test might have a negative impact on the
test takers. For the other texts, the speakers (TAs) were counter-balanced. For example,
Group 1 listened to text 2 read by a Chinese TA (#1) while Group 2 listened to the same
text read by the Korean TA.


Table 1.  Listening Comprehension Test Speakers (TAs)

Text   Group 1 (N=55)   Group 2 (N=55)
1      American #1      American #1
2      Chinese #1       Korean
3      American #2      Chinese #2
4      Sri Lankan       American #4
5      American #3      Chinese #1
6      Korean           American #2
7      American #4      Sri Lankan
8      Chinese #2       American #3

Results
Research Question 1
The first research question asks whether the use of non-native input influences the test
takers’ performance on the test. As described in the methods section, the participants
were divided into two groups and the input language was counter-balanced across the
two groups. To answer this research question, using SPSS 10.0 for Windows, a one-way
analysis of variance (ANOVA) was conducted with the students’ performance as the
dependent variable. The results of the ANOVA are reported in Table 2.
As can be seen in Table 2, the difference in the mean performance across the two
groups was not significant, indicating that the use of non-native speaker input did not
make a difference in the participants’ performance on the listening tests. This means that
the variety of English used did not have any impact on performance. In addition, for each
group, the mean performance of test takers between American English speaking TAs and
non-native English speaking TAs was examined and found to be not significantly differ-
ent. Group 1: F (1, 53) = 5.325, p = .362 and Group 2: F (1, 53) = 4.837, p = .298.
Table 2.  Analysis of Variance Between Test Takers (Group 1 and Group 2).

Source           Type III Sum of Squares   df    Mean Square   F      Sig.
Between groups      .693                     1    .693          .565   .452
Within groups    777.475                   108   1.226
Total            778.168                   109

p < .05.
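The group comparison behind Table 2 can be sketched in a few lines. The function below computes a one-way ANOVA F statistic in pure Python (the study itself used SPSS 10.0); the score lists are invented for illustration and are not the study's data.

```python
def one_way_anova(groups):
    """One-way ANOVA F statistic for a list of score lists.

    A minimal sketch of the analysis reported in Table 2.
    """
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n

    # Between-groups SS: squared deviations of group means from the
    # grand mean, weighted by group size.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-groups SS: squared deviations of scores from their own
    # group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)

    df_between, df_within = k - 1, n - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

# Illustrative listening-test scores (out of 31) for two
# counter-balanced groups -- invented numbers, not the study's data.
group1 = [22, 25, 19, 27, 24, 21]
group2 = [23, 24, 20, 26, 25, 22]
f, df1, df2 = one_way_anova([group1, group2])
print(round(f, 3), df1, df2)   # a small F indicates no group difference
```

A non-significant F, as in Table 2 (F = .565, p = .452), indicates that the counter-balanced speaker assignment made no detectable difference to mean performance.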

Research Question 2
Research question 2 addresses whether there was an interaction between the input lan-
guage and the test takers’ first language. In other words, this question asks if the test
takers would perform better when the speaker shares their first language. If there were a
significant interaction then, for example, Korean test takers would perform


better than both Sri Lankan and Brazilian test takers when the input is Korean accented
English. If this is the case, the input language is then a source of test bias because the type
of input language favors test takers of certain language backgrounds. To answer this
research question, a two-way ANOVA was conducted. In this analysis, the students’ per-
formance was the dependent variable, and the speaker language and test takers’ first
language (L1) were the two independent variables. The results of the ANOVA are shown
in Table 3 and Figure 1 below. As can be seen in Table 3, the interaction between the
speaker’s L1 and the test takers’ L1 was not significant, suggesting that Sri Lankan stu-
dents did not perform significantly better than expected when the speaker (TA) was from
Sri Lanka, for example. Figure 1 also shows that mean differences in test taker groups
across input language are consistent, which also explains why there was no interaction
between the two independent variables. Specifically, even though Korean test takers
were favored when the input was from the Korean speaker (TA), the other two groups
also had the highest mean scores for the Korean input.
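The interaction test can be illustrated with a short sketch: in a balanced two-way design, the interaction sum of squares measures what remains in each cell mean after both main effects (speaker L1 and test-taker L1) have been removed. The cell means below are invented to mimic the parallel profiles in Figure 1; they are not the study's data.

```python
def interaction_ss(cells, n_per_cell):
    """Interaction sum of squares for a balanced two-way design.

    cells[i][j] is the mean score of test takers with L1 i
    listening to input variety j (illustrative values only).
    """
    rows, cols = len(cells), len(cells[0])
    grand = sum(sum(r) for r in cells) / (rows * cols)
    row_means = [sum(r) / cols for r in cells]
    col_means = [sum(cells[i][j] for i in range(rows)) / rows
                 for j in range(cols)]
    # Residual interaction effect per cell after removing both
    # main effects from the cell mean.
    return n_per_cell * sum(
        (cells[i][j] - row_means[i] - col_means[j] + grand) ** 2
        for i in range(rows) for j in range(cols))

# Parallel profiles (each row is a constant offset of the others):
# the groups rise and fall together across input varieties.
cells = [[3.0, 3.4, 2.2, 2.8],   # e.g. Korean test takers
         [2.7, 3.1, 1.9, 2.5],   # e.g. Brazilian test takers
         [2.4, 2.8, 1.6, 2.2]]   # e.g. Sri Lankan test takers
print(interaction_ss(cells, 10))   # effectively zero
```

Because every row is a constant offset of the others, the interaction term is (numerically) zero; this parallel-profiles pattern is what lies behind the non-significant Speaker L1 × Student L1 effect in Table 3.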
The two main effects, that is the speaker’s L1 and test takers’ L1, were significant. As
can also be seen in Figure 1, mean scores for speakers are different. For example, texts
read by Korean speakers were easiest while texts read by Sri Lankan speakers were most
difficult for all three groups. This explains why the first main effect, the input language,
was significant. This is not to say, however, that Korean accented English is easier to
understand than Sri Lankan accented English, for example. The mean score was primarily

Table 3.  Analysis of Variance Between Test Takers and Speakers (TAs).

Source                    Type III Sum of Squares   df   Mean Square   F         Sig.
Speaker L1                 54.646                    3   18.215        165.040   .000
Student L1                 56.308                    2   28.154        139.664   .000
Speaker L1 * Student L1      .647                    6     .108           .103   .996

p < .05.

[Figure 1. Mean Scores for Speakers (TAs): mean scores for Korean, Sri Lankan and Brazilian test takers plotted against input language (CN, KR, SL, AE).]


the product of text passage/question difficulty combined with accent difficulty; any effect
of accent may therefore be confounded with text passage/question difficulty. Figure 1 also
shows that mean scores are different across test taker groups; Korean test takers per-
formed better overall while Sri Lankan test takers had the lowest performance, which
explains why the second main effect, the test takers’ L1, was also significant.

Research Question 3
Research Question 3 attempted to get at test takers’ perceptions and attitudes towards using
non-native varieties as test input. Interestingly, test taker responses to the comprehensibil-
ity questionnaire did not provide any clear indication of test taker perceptions of non-native
speech as they were not consistent in their judgment of comprehensibility and were often
not able to identify the cause of the difficulty in comprehensibility. See Table 4 for the
mean comprehensibility ratings of the speakers.

Table 4.  Mean Comprehensibility Ratings for Speakers (TAs).

Speaker (TA)   No accent   Speaks clearly   Proper intonation   Difficult to understand   Native speaker   Overall impression
Sri Lankan     4           6                6                   4                         6                4
Korean         7           6                7                   6                         7                6
Chinese #1     4           5                6                   5                         6                4
Chinese #2     6           6                6                   6                         3                5
American #1    7           7                7                   6                         6                7
American #2    6           5                6                   5                         2                5
American #3    7           6                7                   6                         3                5
American #4    7           6                7                   7                         7                7

For example, some test takers tended to
‘strongly agree’ with most of the qualities describing the speaker input (e.g. no accent,
proper intonation) but did not have a favorable overall impression of the speaker; or some
test takers did not agree that the speaker had an accent but also did not agree that they were
native speakers. A person with an accent may not necessarily speak a standard variety but
could still be comprehensible. However, such a speaker could be judged negatively simply
because his/her voice may not be favored by a test taker. It could also be the case that other
factors, such as the passage content or the questions, were difficult to understand, leading
test takers to rate a speaker as difficult to comprehend or as non-native.
While the comprehensibility questionnaire did not provide any useful information,
the attitude questionnaire revealed some interesting findings. The main purpose was to
find out how test takers felt about the use of non-native varieties as test input. The first
question asked if they thought that it was good to use only American or British English
rather than other varieties in listening tests. For this question, 28% of test takers
responded positively because native varieties were easier to understand and 34%
responded positively because American and British English are considered Standard
English. However 32% gave a negative response to this question (because in real life


we hear both native and non-native varieties of English). Thus 62% still
responded that using a native variety was preferable. The next question asked test tak-
ers if it was important to use non-native varieties or accented English such as Indian English,
Singaporean English, etc in listening tests. The responses seem to favor non-native
varieties with 46% saying yes (because in real life we also hear non-native varieties),
14% saying no (because non-native varieties are difficult to understand) and 11% say-
ing no because listening tests should only have Standard English. With this question
the use of non-native varieties seems to have risen in importance in the perceptions of
test takers. However, when the next question asked test takers if using non-native vari-
eties in listening tests made a difference in their test performance, test takers seemed
certain that it did; 48% said performance is better with native English while 30% said
their performance is better with non-native English and 10% said there would be no
difference. A small percentage (12%) felt that the use of non-native English only mat-
tered when the test taker was not proficient in English, which points to a test taker's
language proficiency as the issue rather than native or non-native speech. When asked
if the type of speech input may cause test bias, almost 60% of the participants said that
they thought Japanese test takers would better understand Japanese accented English
than test takers from other countries (41% said they would not). Finally, test takers
were asked whether, even if a non-native variety was easy to understand, they would prefer
a native variety of English on a listening test, to which there was an overwhelming yes
(65%), followed by no (21%) and it does not matter (14%).

Discussion
To the question ‘Why not non-native varieties of English as listening comprehension test
input?’, this study suggests that non-native varieties of English can be used as listening
test input though test takers may suggest otherwise. Specifically the test data do not show
any significant differences between test takers’ performance when they listened to input
produced by a native speaker or a non-native speaker. Thus for the Korean, Sri Lankan
and Brazilian test takers in this study, the use of accented speech did not impede their
performance. Furthermore, the test results did not show any bias when test takers listened
to input by a speaker who shares the same language background as the test takers. In
other words, Korean test takers did not perform better than either Sri Lankan or Brazilian
test takers when the input was produced by a Korean speaker of English. The mean
scores show that overall the Korean test takers did better than the Brazilian test takers, who
in turn did better than the Sri Lankan test takers, irrespective of the speaker input. The mean
scores also match the language proficiency levels of the test takers (Korean and Brazilian
being high intermediate and Sri Lankan being low intermediate) and therefore support
the validity of the score based inferences. Finally, test taker perceptions and attitudes
towards using non-native varieties of speech in listening comprehension tests were found
to be ambiguous. Test takers were not able to clearly indicate if they thought the input
was produced by a non-native speaker or that the speaker had an accent. Some test takers
also identified a native speaker's English as being very difficult to understand, and often-
times two of the four American TAs were identified as being non-native! Evidently, test
takers were not able to perceive issues that made comprehensibility difficult and evaluate


accented or non-native speech as expected. It is possible that other factors such as pas-
sage and/or question difficulty may have confounded test taker perceptions.
The results of the attitude questionnaire, on the other hand, run counter to the findings
of the test performance data. Although some test takers acknowledged the presence of
non-native or accented speech in their language use contexts and said that they should be
included in listening tests, overwhelmingly test takers did not have a favorable attitude
towards non-native speech. These findings support most of the research on accent and
attitudes towards non-native speech.
Test takers’ preference still seems to be for American and British English as they
deem them to be easier to understand and/or because they are considered ‘the standard’.
Many test takers also felt that using a non-native variety of English would hamper their
performance. This raises the question of negative affect in test takers if they encounter
non-native speech in listening comprehension tests. Also an overwhelming majority felt
that test takers would have an advantage in listening comprehension and intelligibility
when the speaker shares the same L1 as the test taker. This closely relates to Kunnan’s
(2004) Test Fairness Framework in which he discusses qualities such as equal access and
absence of bias in tests. Test takers who have not been exposed or have had limited
access to certain accents may see the use of those accents as being unfair. Test takers
were also asked to give reasons for their preferences. Among them were the following
responses:
Preference for a native variety:

    Native variety is better because it is Standard English.
    There should be some uniformity.
    Yes, [because] native variety is the variety that we learn.
    Most parts of the world accept native English.

Preference for a non-native variety:

    Some native variety pronunciation is different from our pronunciation.
    We can't pronounce some words properly because our mother tongue is not English.
    Indian or SL English is familiar with our own language. So it will be easier to understand.

Test takers who said that the variety did not matter said the following:

    We should be able to understand any person talking in English.
    If we can understand the person it does not matter if they are native or non-native speakers.

These responses are not surprising given the social and political position of English as
an international language/lingua franca. In ESL contexts such as Sri Lanka, Singapore
and Kenya, the educational and sociocultural factors reflecting local culture and languages
provide for a more positive response for localized varieties of English. Yet, as Xu et al.
(2010) reveal, the growing use of English as a lingua franca presses L2 users to both


accept and adopt a standardized form. While there is evidence even in places such as
Korea that a Korean variety of English can be found (Honna, 2006) or that learners in
EFL contexts hold positive attitudes towards their variety of English in terms of marking
solidarity and legitimacy (Bian, 2009; McKenzie, 2008), the current standards of
American and British English continue to serve as the models for global usage in EFL contexts
(Yano, 2001). Also, research shows that second language learners in EFL contexts may
lack an awareness of other varieties of English simply because they do not have exposure
to such speakers (Friedrich, 2000; Tokumoto and Shibata, 2011), and the result is a lean-
ing in favor of the standard varieties.
To sum up, while test takers were not unanimous in their perceptions of, and attitudes towards, the use of non-native or accented speech in listening tests, the overall finding from the attitude questionnaire was that test takers felt non-native or accented English should not be used, and that with non-native input a test taker who shares the speaker’s L1 would be at an advantage over test takers from other L1 backgrounds.

Conclusions
To answer the question put forward in the title of this study, ‘Why not non-native varieties of English as listening comprehension test input?’, the quantitative evidence suggests the answer is ‘Yes, they can be used’, because test takers did not perform differently on listening comprehension tests whose input included both standard American English and other varieties of English. However, when test takers were asked about the use of other varieties as test input, the answer is ‘No’, because test takers’ perceptions and attitudes still indicate a preference for a standard variety of English.
Even though our findings suggest the possibility of using accented or non-native varieties of English as test input, further research is necessary, as this study has a number of limitations. First, the study was limited to only three non-native varieties of English. The ITA demographics in the introduction showed that more than 50% of ITAs are from countries other than China, Korea, India and Japan. Thus, there is always the possibility that if different speakers (i.e. other accented or non-native varieties of English) had been used, the findings might have been different. Second is the issue of passage/question difficulty mentioned in the results above. Although the questions in each listening test had almost the same average difficulty (item difficulties between .54 and .57), it was not possible to isolate the speaker input (non-native or accented English) as the only within-subjects factor. Another limitation of the study is the language proficiency of the test takers. In this study, all three groups of test takers (Korean, Brazilian and Sri Lankan) were fairly similar in that they were identified as intermediate; with test takers at different levels of language ability, performance might differ. A number of studies have raised language proficiency as a factor in research on the comprehension of not only oral skills but also literacy skills (see Grabe, 2004; Silva and Brice, 2004; Vandergrift, 2004 for an overview of relevant research).
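The item-difficulty figures cited above are classical p-values: the proportion of test takers answering an item correctly, averaged across the items of a test form. A minimal sketch of that computation follows; the item labels and the 0/1 response data are invented for illustration and are not the study’s data.

```python
# Classical item difficulty (p-value): the proportion of test takers
# who answered an item correctly. All data below are hypothetical.

def item_difficulty(responses):
    """responses: list of 0/1 scores for one item across test takers."""
    return sum(responses) / len(responses)

# Hypothetical scored responses for three items (1 = correct, 0 = incorrect)
items = {
    "item1": [1, 0, 1, 1, 0, 1, 1, 0],
    "item2": [1, 1, 0, 1, 0, 0, 1, 0],
    "item3": [0, 1, 1, 1, 1, 0, 0, 1],
}

difficulties = {name: item_difficulty(r) for name, r in items.items()}
test_average = sum(difficulties.values()) / len(difficulties)

print(difficulties)            # per-item p-values
print(round(test_average, 2))  # average difficulty for this test form
```

Two test forms with near-identical average p-values (as here, .54 to .57) can still differ item by item, which is why passage/question difficulty could not be ruled out as a confound.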
Another question that remains is whether it is appropriate to use non-native input when most students still prefer a native variety. This may be an issue because the use of a non-native variety may cause negative affect in test takers and, as Swain (1984) quite rightly pointed out, as test developers we should ‘bias for best’. Furthermore, Taylor (2006: 52) says, ‘Applied linguists may see things differently but we should not ignore or override the attitudes and perceptions of learners themselves’. Having said that, although attitude and perception research shows a negative response towards accented English, test developers should be wary of excluding accented or non-native varieties of English as test input. Lindemann’s (2006: 38) research shows that ‘listeners may react negatively to certain accents (and thus claim to find them unintelligible) even when we would expect that the features of those accents themselves do not directly impede intelligibility’. Similarly, Watson Todd and Pojanapunya (2009) examined both implicit and explicit attitudes towards native English-speaking teachers (NESTs) and non-native English-speaking teachers (NNESTs). While students expressed a desire for NESTs, their implicit attitudes did not show a statistically significant preference for NESTs or NNESTs. These studies seem to indicate that students express a certain preference because they think they should prefer it, basing the judgment on linguistic competence, or even race, rather than teaching competence. Thus, to make the case for using non-native varieties of English, not only do test developers need to provide evidence that such accented speech is not a source of test bias, but language users also need to be educated that, at a certain level, non-native varieties of English and accented speech are comprehensible.
Until research establishes that non-native or accented English input does not introduce construct-irrelevant variance into the measurement of listening, we may not be able to use such input in testing, especially in high-stakes tests such as TOEFL and IELTS. At the same time, we cannot ignore the reality of English as it is used today. To address some of the issues of test usefulness, validity and authenticity in particular, we should perhaps use non-native varieties or accented English in low-stakes tests such as placement, diagnostic or achievement tests in pedagogical situations. Taylor (2006) uses the term ‘fitness for purpose’ to explain this aspect of test validity: tests should represent the accents and English varieties used in the contexts in which test takers are likely to find themselves. Furthermore, introducing accented speech and non-native varieties of English into the teaching of listening and speaking skills will expose learners to a wider range of linguistic models and may help them move away from the attraction of the ‘standard’ and from the NEST/NNEST dichotomy. As accented speech and non-native varieties gain recognition among language users, test takers and other stakeholders, attitudes and perceptions towards the use of these varieties of English will also, hopefully, change.

Funding
This research received no specific grant from any funding agency in the public, commercial, or
not-for-profit sectors.

References
Bachman LF, Palmer A (1996) Language Testing in Practice. Oxford: Oxford University Press.
Bian YW (2009) Chinese learners’ identity in their attitudes towards English pronunciation/
accents. CELEA Journal 32: 66–74.
Bradlow AR, Bent T (2002) The clear speech effect for non-native listeners. The Journal of the
Acoustical Society of America 112(1): 272–84.
Brennan E, Brennan JS (1981) Measurements of accent and attitude toward Mexican-American speech. Journal of Psycholinguistic Research 10: 487–501.
Bresnahan MJ, Ohashi R, Nebashi R, Liu WY, Shearman SM (2002) Attitudinal and affective
response toward accented English. Language and Communication 22: 171–85.
Cambridge ESOL (2008) IELTS teaching resource. www.cambridgeesol.org/teach/ielts/listening/
aboutthepaper/overview.htm
Canagarajah S (2006) Changing communicative needs, revised assessment objectives: testing
English as an international language. Language Assessment Quarterly 3(3): 229–42.
Cargile AC (1997) Attitudes toward Chinese-accented speech: an investigation in two contexts.
Journal of Language and Social Psychology 16: 434–43.
Crystal D (2000) Emerging Englishes. English Teaching Professional 14: 3–6.
Crystal D (2004) The Language Revolution. Cambridge: Polity Press.
Dalton-Puffer C, Kaltenboeck G, Smit U (1997) Learner attitudes and L2 pronunciation in Austria. World Englishes 16: 115–28.
Department of Institutional Research (2007) Faculty by ethnic group [On-line]. http://www.irs.ttu.
edu/NEWFACTBOOK/Faculty/2007/F07ETHNIC.htm
Eisenstein M, Berkowitz D (1981) The effect of phonological variation on adult learner compre-
hension. Studies in Second Language Acquisition 4(1): 75–80.
ETS [Educational Testing Service] (2001-02) TSE and SPEAK Score User Guide, 2001-02.
Princeton, NJ: Educational Testing Service.
ETS [Educational Testing Service] (2005) TOEFL iBT Tips: How to Prepare for the Next
Generation TOEFL Test and Communicate with Comfort. Princeton, NJ: ETS.
Flowerdew J (1994) Academic Listening: Research Perspectives. New York, NY: Cambridge
University Press.
Friedman SJ, Ansley TN (1990) The influence of reading on listening test scores. The Journal of Experimental Education 58(4): 301–10.
Friedrich P (2000) English in Brazil: functions and attitudes. World Englishes 19(2): 215–223.
Gass S, Varonis EM (1984) The effect of familiarity on nonnative speech. Language Learning
34: 65–89.
Gill M (1991) Accents and Stereotypes: Their Effects on Perceptions of Teachers and Lecture Comprehension (Foreign Accents). Doctoral dissertation, University of Nebraska–Lincoln, Lincoln, Nebraska. Dissertation Abstracts 52, 0138A.
Goh C (1999) How much do learners know about the factors that influence their listening comprehension? Hong Kong Journal of Applied Linguistics 4(1): 17–41.
Gorsuch G (2003) The educational cultures of international teaching assistants and US universi-
ties. TESL-EJ 7(3). http://www.tesl-ej.org/wordpress/
Grabe W (2004) Research on teaching reading. Annual Review of Applied Linguistics 24:
44–69.
Harding L (2008) Accent and academic listening assessment: a study of test-taker perceptions.
Melbourne Papers in Language Testing 13(1): 1–33.
Honna N (2006) East Asian Englishes. In: Kachru BB, Kachru Y and Nelson CL (eds) The
Handbook of World Englishes. Malden, MA: Blackwell, 114–29.
Information Services (2009) Graduate Division, University of California, Los Angeles.
Jenkins J (2006) The spread of EIL: a testing time for testers. ELT Journal 60: 42–50.
Kachru B (1985) Standards, codification and sociolinguistic realism: the English language in the
outer circle. In: Quirk R, Widdowson HG (eds) English in the World: Teaching and Learning
the Language and Literatures. Cambridge: Cambridge University Press, 11–30.
Kunnan AJ (2004) Test fairness. In: Milanovic M, Weir C (eds) Studies in Language Testing
18: European Language Testing in Global Context. Cambridge: Cambridge University Press,
27–48.
Lindemann S (2006) What the other half gives: the interlocutor’s role in non-native speaker per-
formance. In: Hughes R (ed.) Spoken English, TESOL and Applied Linguistics: Challenges for
Theory and Practice. Basingstoke: Palgrave Macmillan, 23–49.
Lippi-Green R (1997) English With An Accent. London: Routledge.
Major RC, Fitzmaurice SF, Bunta F, Balasubramanium C (2002) The effects of nonnative accents
on listening comprehension: implications for assessment. TESOL Quarterly 36(2): 173–90.
Major RC, Fitzmaurice SF, Bunta F, Balasubramanium C (2005) Testing the effects of regional,
ethnic, and international dialects of English on listening comprehension. Language Learning
55(1): 37–69.
McKenzie RM (2008) Social factors and non-native attitudes towards varieties of spoken English:
a Japanese case study. International Journal of Applied Linguistics 18: 63–88.
Ortmeyer C, Boyle JP (1985) The effect of accent differences on comprehension. RELC Journal
16(2): 48–53.
Seidlhofer B (2004) Research perspectives on teaching English as a lingua franca. Annual Review
of Applied Linguistics 24: 209–42.
Sherman J (1997) The effect of question preview in listening comprehension tests. Language
Testing 14(2): 185–213.
Silva T, Brice C (2004) Research on teaching writing. Annual Review of Applied Linguistics 24: 70–106.
Smith LE, Bisazza JA (1982) The comprehensibility of three varieties of English for college students in seven countries. Language Learning 32(2): 259–69.
SPSS Inc (1999) SPSS for Windows Release 10.0.5 Standard Version. Chicago, IL: SPSS Inc.
Swain M (1984) Teaching and testing communicatively. TESL Talk 15: 7–18.
Taylor L (2006) The changing landscape of English: implications for language assessment. ELT
Journal 60(1): 51–60.
Tokumoto M, Shibata M (2011) Asian varieties of English: attitudes towards pronunciation. World Englishes 30(3): 392–408.
Toro MI (1997) The Effect of Accent on Listening Comprehension and Attitudes of ESL Students.
Masters Thesis, University of Puerto Rico at Mayaguez, Puerto Rico. Masters Abstracts
International 36, 0041.
Vandergrift L (2004) Listening to learn or learning to listen? Annual Review of Applied Linguistics 24: 3–25.
van Wijngaarden S, Steeneken H, Houtgast T (2002) Quantifying the intelligibility of speech
in noise for nonnative listeners. Journal of the Acoustical Society of America 111:
1906–16.
Watson Todd R, Pojanapunya P (2009) Implicit attitudes towards native and nonnative speaker teachers. System 37(1): 23–33.
Wilcox GK (1978) The effect of accent on listening comprehension: a Singapore study. English
Language Teaching Journal 32(2): 118–27.
Xu W, Wang Y, Case RE (2010) Chinese attitudes towards varieties of English: a pre-Olympic examination. Language Awareness 19(4): 249–60.
Yano Y (2001) World Englishes in 2000 and beyond. World Englishes 20(2): 119–31.


Appendix A
Sample TOEFL Listening Test

On the recording, you hear:

Artist Grant Wood was a guiding force in the school of painting known as American regionalist, a style reflecting the distinctive characteristics of art from rural areas of the United States. Wood began drawing animals on the family farm at the age of three, and when he was thirty-eight, one of his paintings received a remarkable amount of public notice and acclaim. This painting, called American Gothic, is a starkly simple depiction of a serious couple staring directly out at the viewer.

Now listen to a sample question.

What style of painting is known as American regionalist?

In your test book, you read:

(A) Art from America’s inner cities
(B) Art from the central region of the US
(C) Art from rural sections of America
(D) Art from various urban areas in the US

The best answer to the question ‘What style of painting is known as American regionalist?’ is (C), ‘Art from rural sections of America.’ Therefore, the correct choice is (C). Mark the answer by putting a check mark next to it.

Appendix B
Perceived comprehensibility ratings
Please evaluate his/her accent and comprehensibility. For example, if you circle ‘7’, you agree with the statement. If you circle ‘1’, you do not agree with the statement.

Strongly agree … 7    1 … Strongly disagree

There is no accent in the speaker’s pronunciation.    7 6 5 4 3 2 1
The way the speaker speaks is clear.            7 6 5 4 3 2 1
The speaker uses proper intonation.             7 6 5 4 3 2 1
The speaker’s English is very difficult to understand.  7 6 5 4 3 2 1
The speaker is a native speaker of English.        7 6 5 4 3 2 1
What is your overall impression?  Easy to understand 7 6 5 4 3 2 1 Hard to understand
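Aggregating ratings from a scale like the one above requires one adjustment: the item ‘The speaker’s English is very difficult to understand’ is negatively worded relative to the other comprehensibility items, so on a 1–7 scale it is conventionally reverse-scored (value v becomes 8 − v) before averaging. A brief sketch of that convention; the item keys and ratings are hypothetical illustrations, not study data.

```python
# Composite of 7-point agreement ratings, reverse-scoring the
# negatively worded item. All names and values are hypothetical.

NEGATIVE_ITEMS = {"difficult_to_understand"}

def composite_rating(ratings):
    """ratings: dict mapping item name -> value on a 1-7 agreement scale."""
    adjusted = [
        8 - v if item in NEGATIVE_ITEMS else v  # reverse: 7 -> 1, 1 -> 7
        for item, v in ratings.items()
    ]
    return sum(adjusted) / len(adjusted)

# One hypothetical rater's responses for one speaker
one_rater = {
    "no_accent": 3,
    "clear_speech": 6,
    "proper_intonation": 5,
    "difficult_to_understand": 2,  # reverse-scored to 6
}
print(composite_rating(one_rater))  # higher = perceived as more comprehensible
```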


Appendix C
Questionnaire
Please choose the best answer.

1. Very often listening tests use American or British English (native variety) rather
than Singaporean English/Indian English etc. (non-native varieties). Do you
think this is good?
a. Yes, because native varieties are easier to understand.
b. Yes, because American or British English is considered standard English.
c. No, because in real life we hear both native and non-native varieties.
d. It does not matter.
2. Do you think it is important to use non-native varieties like Indian English,
Singaporean English etc. in listening tests?
a. Yes, because in real life we also hear non-native varieties.
b. No, because non-native varieties are difficult to understand.
c. No, because listening tests should only have standard English.
d. It does not matter.
3. Do you think the type of English (native or non-native) used in listening tests
makes a difference in listening test performance?
a. Yes, performance is better with native English.
b. Yes, performance is better with non-native English.
c. No, it does not make a difference at all.
d. It does matter only when the test taker is not proficient in English.
4. Do you think Japanese test takers will better understand Japanese English than
test takers from other countries?
a. Yes, they would.
b. No, they would not.
5. Even if a non-native variety is easy to understand do you still prefer a native
variety of English on a listening test?
a. Yes, I prefer a native variety.
b. No, I don’t prefer a native variety.
c. It does not matter.

Please explain why (you can write in your own language): ________________
______________________________________________________________
______________________________________________________________

Any other comments (you can write in your own language): ___________
___________________________________________________________
___________________________________________________________
