UNIVERSITY OF CALIFORNIA
Santa Barbara
by
Raquel C. Santana-Paixão
Committee in charge:
Professor Viola Miglio, Co-Chair
Professor Laura Marqués-Pascual, Co-Chair
Professor Dorothy Chun
March 2017
ProQuest Number: 10254074
Published by ProQuest LLC (2017). Copyright of the Dissertation is held by the Author.
All rights reserved.
ProQuest LLC, 789 East Eisenhower Parkway, P.O. Box 1346, Ann Arbor, MI 48106-1346
The dissertation of Raquel C. Santana-Paixão is approved.
___________________________________________________________
Dorothy Chun
___________________________________________________________
Laura Marqués-Pascual, Co-Chair
___________________________________________________________
Viola Miglio, Co-Chair
January 2017
Copyright © 2017
by
Raquel C. Santana-Paixão
ACKNOWLEDGEMENTS
Professor Viola Miglio and Professor Dorothy Chun, for their guidance and continuous support.
Eduardo Viana da Silva, João Albuquerque, Eva M. Wheeler, Ricardo Brites, Pedro
Almeida, Patrícia Lino, Bianca Brigidi, Aina Cabra-Riart and Brendan Barnwell for their
VITA OF RAQUEL C. SANTANA-PAIXÃO
January 2017
EDUCATION
Bachelor of Arts in Linguistics, Universidade Nova de Lisboa, Lisbon, Portugal, January
2007
Master of Arts in Iberian Linguistics, University of California, Santa Barbara, June 2012
Doctor of Philosophy in Hispanic Languages and Literatures, University of California,
Santa Barbara, January 2017
PROFESSIONAL EMPLOYMENT
2006-2008: English Language Instructor, EIL (Escola Internacional de Línguas), Lisbon,
Portugal
2008-2010: Portuguese Lecturer, Department of Spanish and Portuguese, UC Santa
Barbara; sponsored by Instituto Camões, Lisbon, Portugal
2010-2016: Teaching Assistant, Department of Spanish and Portuguese, UC Santa
Barbara
PUBLICATIONS
Miglio, V. G., Gries, S. T., Harris, M.J., Santana Paixão, R., & Wheeler, E. M.
(2013). Spanish lo(s)-le(s) Clitic Alternations in Psych Verbs: A Multifactorial Corpus-
Based Analysis. In: Selected Proceedings of the 15th Hispanic Linguistics Symposium, ed.
J. Cabrelli Amaro et al., 268-278. Somerville, MA: Cascadilla Proceedings Project.
AWARDS
Outstanding Teaching Assistant Award, Department of Spanish and Portuguese,
University of California, Santa Barbara, 2015
Excellence in Teaching Award, Graduate Student Association, University of California,
Santa Barbara, 2013
Graduate Block Grant (tuition paid during M.A. and Ph.D. studies), Center for
Portuguese Studies and Spanish and Portuguese Department, University of California,
Santa Barbara, 2010
FIELDS OF STUDY
Second Language Teaching Methodology
ABSTRACT
by
Raquel C. Santana-Paixão
aiming to foster the development of students’ speaking abilities. With the development
of language teaching software, the use of computer-based recording tools is becoming
perspectives and affective factors, with a focus on anxiety, towards the new computer-
based final oral exam vis-à-vis the traditional method; and 2) in terms of students’
throughout the instruction period. In order to analyze the latter, learners from two different
classes, designated as the control and the experimental group, participated in this study over
the course of one quarter. The technology used for the current research is part of the
online platform accompanying the textbook used in the Language Program at the
A total of 27 L2 learners of Portuguese for Spanish Speakers, at the university
level, participated in this study and completed a computer-based and face-to-face Oral
Achievement Exam at the end of the quarter. Learners were given a pre- and post-
questionnaire, inquiring about their expectations and final opinions towards both
modes of final oral exams. Here, students assessed their levels of anxiety, perceived
difficulty and perceived fairness in testing their speaking abilities, as well as their levels
of comfort with the technology used. The variables included in this survey were based
on a review of previous studies such as Tognozzi and Truong (2009) and Kenyon and
Malabonga (2001). In addition, examinees completed a survey adapted from the FLCAS to
investigate what kind of anxiety factors and feelings, presented on the anxiety scale, are
more involved in one type of oral exam in comparison to the other. In general, students’
reports indicate a tendency to consider the new computer-based oral exam as less
anxiety-evoking but more difficult than the traditional face-to-face test. Nevertheless,
students also tend to view the traditional method as fairer in testing their speaking
abilities.
learners in the two different classes (control and experimental) were assessed through
a pre- and post-oral test. Results suggest that the treatment received by the
Experimental class does not play a significant role in fostering students’ speaking
abilities in the linguistic variables that were formally taught and emphasized throughout
the course.
TABLE OF CONTENTS
Acknowledgements
Vita
Abstract
INTRODUCTION
1.1.4.3. Advantages
1.1.4.4. Limitations
CHAPTER 2: METHODOLOGY
2.1. Introduction
2.4. Participants
2.5.2.1. Differences in the instructional treatment for the Control and the Experimental class
2.5.2.2. Voice recording tool used for the computer-based oral activities taken throughout the quarter by the Experimental class
2.5.2.3. Measuring students’ development of oral performance
2.6. Summary
3.1. Introduction
3.3. Students' improvement in oral performance
3.3.1. Does the Experimental class show greater improvement than the Control class in certain linguistic areas and overall?
3.4. Summary of key findings on students' perspectives and affective factors
3.4.1. Students' overall reactions towards both types of oral exams
4.2.1. Students' general perceptions towards the computer-based test vis-à-vis the face-to-face oral testing
4.2.1.1. Overall anxiety levels
4.2.2. Examination of anxiety factors and feelings that tend to be more involved in one testing format in comparison to the other
4.3. Discussion of students' improvements in oral performance
4.4. General implications, limitations and suggestions for future research
REFERENCES
APPENDICES
Appendix A: Set of 6 oral situations for the computer-based and the face-to-face final oral exams
Appendix B: Description of the creation method for Oral Exams on MyPortugueseLab
Appendix C: Instructions for the Technology-mediated Final Oral Exam
Appendix H: Instructions for the Pre- and Post-Test and computer-based oral activities taken throughout the instruction period
Appendix I: Oral assessment activities taken throughout the quarter
LIST OF TABLES
Table 1: Mean values across classes before and after taking the face-to-face (F2F) and the computer-based oral exams
Table 2: Mean values across classes on level of comfort with the technology used before and after taking the computer-based oral exam
Table 3: Mean values for all students’ responses regarding levels of anxiety, perceived difficulty and perceived fairness, after taking the face-to-face (F2F) and the computer-based oral exams at the end of the quarter
Table 4: Items included in the anxiety scale
Table 5: T-test results for Pre- and Post-test mean scores in both classes
Table 6: Comparison between the Control and Experimental class in terms of improvement in oral performance
LIST OF FIGURES
Figure 1: Results across classes before and after taking the face-to-face and the computer-based oral exams regarding levels of anxiety, perceived difficulty and perceived fairness
Figure 2: All students’ reactions to both types of oral exams, regarding levels of anxiety, perceived difficulty and perceived fairness, at the end of the quarter
INTRODUCTION
testing should play an important role in any foreign language program that aims to
develop oral proficiency (Omaggio Hadley 2001). As Omaggio Hadley (2001) remarks,
“we will need to administer some type of oral test, if only once or twice a semester”
(p.433).
Linguistics have conducted research on different types of oral assessment in second and
these studies have included an analysis of students’ perceptions towards these types of
tests. Nevertheless, in the literature reviewed, few studies were found involving an
testing methods.
and affective factors, the current study attempts to contribute to a more in-depth
Hence, in line with previous studies (Kenyon & Malabonga 2001; Jeong 2003; Joo
2007; Lowe & Yu 2009; Öztekin 2011), the first section in the current research analyzes
Since the construct of anxiety has been considered a significant factor in second
language learning (Horwitz et al. 1986), and the effects of foreign language anxiety on
oral performance have been examined by several scholars (Hewitt & Stephenson 2012;
Scott 1986; Wilson 2006; Phillips 1992; Park & Lee 2005), the present study mainly
towards these tests. This analysis attempts to investigate what kind of anxiety factors
tend to be more involved in students’ feelings, in a comparison between both oral exam
formats applied in the Language Program where the present study takes place.
seen in the literature reviewed. This oral exam format is taken in pairs and it is based on
a randomly chosen oral communicative situation performed among learners (an oral
assessment arrangement similar to the traditional face-to-face final oral exam used in
the affordances of technology in oral testing, and whether or not the use of the computer-
based oral assessment tool is indeed an improvement over the traditional face-to-face
In addition, since the implementation of a system based on the completion of
computerized oral activities throughout the instruction period has also been contemplated
by some of the instructors in the Language Program where the present study takes place,
this aspect constitutes another point of analysis in the current investigation. Thus, the
present research also examines whether there is any development in terms of oral
performance, when using technology as an oral assessment tool throughout the course,
in the kind of speaking assessment activities used in this foreign language curriculum.
The format of these computerized oral evaluation tasks resembled the computer-based
final oral exam structure applied in the current study. The present study adds to reports on measures of learners’
feelings towards computer-based vs. face-to-face oral exams (Kenyon & Malabonga
2001; Jeong 2003; Joo 2007; Öztekin 2011; Lowe & Yu 2009) by investigating
students’ perspectives and affective factors towards a new computer-based final oral exam
vis-à-vis the traditional method used in a Portuguese language program. This aspect is
examined through students’ responses to an affect survey taken before and after the
course, regarding students’ overall reactions towards the two modes of oral testing, and
Furthermore, following Tognozzi & Truong’s (2009) research work on evaluating
the integration of a computer-based tool into the traditional curriculum, the present
study also analyzes whether or not there is any improvement in terms of oral
performance throughout the instruction period. The type of activities used in the current study corresponds to paired
computerized oral tasks taken during class time, in a language lab. In order to analyze
the latter, this study examines the oral production of learners in a Control and an
Experimental class.
The technology used in this study corresponds to the web-based platform that
accompanies the elementary Portuguese textbook used in the curriculum. All oral
activities were based on the topics and grammar structures covered during the course
1992; Hewitt 2012; Tognozzi & Truong 2009; Park & Lee 2005; Satar & Özdener
2008). Although all these variables can be considered in the analysis of speaking
abilities, this study attempts to focus mainly on features formally taught and emphasized
in the course where the computer-based oral activities were implemented, such as
In sum, the main objective of this study is twofold: 1) to investigate the use of
affective factors and 2) to understand the value of technology in oral activities in terms
of promoting students’ development in oral performance, when these activities are done
throughout the instruction period. These aspects are studied in the specific context of
the final oral achievement exams and oral activities that are part of the Portuguese language program.
To address these issues, the following research questions have been formulated:
1. What are the students’ reactions to the computer-based vis-à-vis the face-to-
face final oral exam, regarding levels of anxiety, perceived difficulty, and perceived
fairness in testing speaking abilities, as well as levels of comfort with the technology
used?
1.1. Do students’ perceptions towards both types of final oral exams differ
depending on 1) the different treatment received by the two classes; and 2) whether the
1.2. What are the students’ reactions to the computer-based vis-à-vis the face-to-face
2. What anxiety factors and feelings, presented on the anxiety scale, are more involved in one testing format in comparison to the other?
2.1. Is there a considerable difference between the Control and the Experimental
class?
3. Does the Experimental class show greater improvement than the Control class in certain linguistic areas and overall?
3. Overview of the dissertation
The first chapter of the present research includes an overview of oral testing in
testing. The overview focuses on what has been done regarding oral assessment in the
classroom context, including students’ opinions on this type of test, as well as a review
learning is also included in Chapter 1, containing the means found in the reviewed
literature to measure this variable, the factors involved in foreign language anxiety and
its effects. The methodology of the study is presented in Chapter 2, followed by analysis and results for each research question in
Chapter 3. Chapter 4 presents the discussion of the study. Key findings on students’
perceptions and affective factors towards the two different modes of oral testing are
presented, together with the examination of whether or not there was an improvement
in students’ oral performance throughout the quarter. Limitations in the current investigation, as well as its implications for the
field of second language learning and suggestions for future research will also be discussed.
CHAPTER 1: LITERATURE REVIEW
The oral exam type used in the current research corresponds to an Oral Achievement Exam.
research on oral assessment also discusses and explores the implementation of the
of oral testing that follows the ACTFL guidelines is the Oral Proficiency Interview (OPI).
The current literature review thus starts with a description of these two different types
of oral assessment.
According to the “ACTFL OPI Familiarization Manual (2012)”, the ACTFL OPI is an interactive and adaptive test,
adjusting to interests, experiences, and the linguistic abilities of the test takers. As it is
specified in the Manual, “The OPI assesses language proficiency in terms of a speaker’s
ability to use the language effectively and appropriately in real-life situations” (p.4). The
fact that the OPI is not an achievement test, where the speaker’s acquisition of specific
aspects of a course is assessed, and the fact that it is also not connected to any particular
method of instruction, is also highlighted in the description of this oral test. In addition,
it is explained that a speech sample is rated according to the proficiency levels described
in the ACTFL Proficiency Guidelines. Of the five major levels of language proficiency, four major levels are included in the ACTFL
Rating Scale (derived from these guideline levels) and correspond to a) Superior; b)
Advanced; c) Intermediate and d) Novice. Each one of these levels represents a different
profile of functional language ability and is divided into “high”, “mid” and “low” sublevels.
The hierarchy of global tasks according to which the four major levels are
defined is as follows: the “Superior” level speaker “can support opinion, hypothesize, discuss topics concretely and
abstractly” (p.4); the “Advanced” level speaker “can narrate and describe in all major time frames and handle a situation with
a complication” (p.4); the “Intermediate” level speaker “can create with language, ask
and answer simple questions on familiar topics” (p.4) and, at the “Novice” level, the
speaker “can communicate minimally with formulaic and rote utterances, lists and
phrases” (p.4). The evaluation for the OPI takes into account linguistic components in the context of their
use, from a global perspective. The considered criteria are thus: a) “the functions or global tasks
the speaker performs” (e.g.: “Communicate minimally with formulaic and rote
utterances, lists, and phrases” - in the case of a Novice speaker); b) “the social contexts
and specific content areas in which the speaker is able to perform them” (e.g.: “Some
dealing with non-native speakers”); and d) “the type of oral text or discourse the speaker
is capable of producing” (e.g.: “No pattern of errors in basic structures. Errors virtually
never interfere with communication or distract the native speaker from the message”).
This type of interview has a complex structure and the primary goal is to obtain a ratable
speech sample. In order to elicit this speech sample, the interview’s structure follows
four mandatory phases, which correspond to: “Warm up” (a first phase/introduction of
conversation openers pitched at a level which appears comfortable for the speaker”);
“Level Checks” (“questions that elicit the performance floor”, which corresponds to “the
linguistic tasks and contexts of a particular level that can be handled successfully”);
“Probes” (here the purpose is to “discover the level at which the speaker can no longer
sustain functional performance”, called the ceiling), and, finally, “Wind Down” (the
closing stage of the interview, which “returns the speaker to a comfortable level of
ability at the end of a sequence of courses or a major linguistic experience, such as study
abroad” (p.12). The author also considers this type of test to be better for “outside use”,
as in the case of when an employer wants to know how well an individual can function
in a certain language on the job, for example. Furthermore, Brown (1985) states that
“both classroom learning and testing should anticipate and prepare for proficiency
testing of the real-life sort, but a good proficiency test may be a poor classroom test” (p.
482).
Omaggio Hadley (2001) also calls attention to the important fact that the Oral
Proficiency Interview is different from an oral achievement test taken by a student. Oral
tests taken in the classroom are thus often referred to as oral achievement tests, as they
are used to evaluate students’ acquisition of specific course contents, and they are
considered to be valid if the test covers what has been taught in the course (Liskin-
Gasparro 1983; Brown 1985; Omaggio Hadley 2001). The contexts in which an oral
achievement test or a proficiency test should be used are discussed in several works. A
study conducted by Norris & Pfeiffer (2003), for instance, discusses the use of ACTFL
purposes. The authors argue for “developing curriculum-dependent standards and the
use of assessment instruments that reflect the kinds of learning valued and fostered by
the program” (p.580). In their investigation, the authors examine the results of more
than 100 Simulated Oral Proficiency Interviews (SOPIs) administered across all levels of
instruction within one German foreign language department. The researchers mention
that, after observing the relationship between the students’ oral proficiency ratings
(based on ACTFL Guidelines) and their curricular status in the German Department at
Georgetown University, the faculty and administration of the department became aware
of the need for different assessment methods for different reasons. As explained in
the study, the use of ACTFL Proficiency Guidelines (used in OPIs) has proved to be
institutions. A few examples are the use of test ratings as an external criterion for the
abilities of students placed into the upper levels of the curriculum and also providing the
resulting ACTFL Guidelines oral proficiency ratings to students as one indication of the
foreign language abilities they have achieved within the program. However, as explained
in this research study, the faculty and administration of the department also
became aware that “the oral proficiency ratings are not used as a proxy or gate-keeping
(p.579). An oral achievement test thus seemed to be more suitable for that purpose.
a particular curriculum and to “measure the degree to which students achieve the
outcomes of a unit or a course or a program” (p.5). In addition, the author explains that,
speaking tests that are more closely tied to the program should be developed for
formative evaluation and that, besides being achievement-oriented, they should also be
communicatively based, i.e., “tests that reflect a classroom in which students are given
opportunities to interact verbally with the instructor and with each other with minimal
teacher intervention.” (p.9). Liskin-Gasparro (1983) points out several reasons for
testing that are internal to the curriculum, such as assigning grades. It is also mentioned
that these scores, usually with letter grades or percentage correct, can be quite useful in
“ranking students with respect to each other or in assessing their progress” (p.5).
Another reason would be the fact that “we can discover students’ areas of strength and
weakness, where they have learned the material, and where we as teachers need to
spend more time” (p.4), getting a sense of the effectiveness of the program and the need
the grading system should include an evaluation of linguistic and communicative factors
involved in oral communication, measuring global and integrative language skills. The
author thus advocates for “the assessment of the overall communication, as well as
for example.
When looking at the different scoring scales presented in several works involving
oral assessment in L2 (Tognozzi and Truong 2009; Larson 2000; Satar & Özdener 2008;
Liskin-Gasparro 1983; Omaggio Hadley 2001) we notice that the linguistic factors being
assessed typically include grammar and vocabulary as well as pronunciation and fluency. Furthermore, we can observe that the language components
being assessed on an oral testing scoring scale, and the description of what is being
measured under those components, vary from scale to scale, depending on how the
score sheet is designed and organized. Tognozzi & Truong (2009), for example, present
an oral testing scoring scale where they evaluate aspects such as “speech natural and
continuous” under the subheading of “fluency”, or “almost entirely/entirely
vocabulary, accurate usage pertinent to level” under “vocabulary” and “almost always
correct” under “grammar” (p.15). Another example of an oral test-scoring sheet can be
found in Omaggio Hadley (2001). Here we can observe several grading components and
“Very conversant with vocabulary required by given context(s), excellent control and
mistakes, almost native like” (p. 444). In an attempt to identify language development in
communication tools (text and voice chat) with secondary school learners of English as
a foreign language, Satar & Özdener (2008) also developed a pretest and posttest
grading scale, specifically with reference to topics covered in chat sessions. In order to
investigate the differences in students’ speaking ability before and after the study, the
authors provide a scoring scale with a description of the ability parameters for each
question of the test. An example would be “To be able to form simple sentences using
the verb ‘to be’”, under the subheading of “Grammatical Structures” (p.612). The authors
also provide a description of the scoring criteria, with scores ranging from 0 to 4, where,
in the case of “Grammatical Structures”, the minimum score corresponds to “cannot use
the structure” and the maximum score to “can use the structure effectively and accurately”.
As Omaggio Hadley (2001) states, when it comes to the evaluation of these tests,
“teachers should develop their own sets of criteria for scoring the various components
of the scales, using descriptions that are appropriate to the expected levels of speaking
ability for students in their course” (p. 443). Moreover, Larson (2000) highlights the fact
that the procedures chosen to score oral achievement and progress tests would depend
on each language program’s specific goals. As the author claims, “If, for example,
teachers stress pronunciation development, they would want to use assessment criteria
that include this aspect of language development” (p.59). In his paper, Larson (2000)
Some variants for achievement testing of oral skills, in the classroom context, are
presented in Omaggio Hadley (2001). The first is a sample test for individual or paired
interviews where, using a set of conversation cards, the instructor can either have a one-on-one conversation with the student or have the learner work with a partner, believing
that he or she might feel more secure and comfortable. In this case, the instructor should
not get involved in the conversation since, as the author points out, students should be
able to understand each other after having practiced similar exercises in class before. In
the second variant, the instructors can either interview the students during their office
hours or in class time while the other students are completing other tasks. Here, the
teacher also selects one of the situational-based conversational cards and begins the
Regarding the use of the individual interviews method for oral achievement
testing, Brown (1985), for instance, describes a modified Oral Interview procedure
called RSVP (Response, Structure, Vocabulary and Pronunciation). The author explains
that, similarly to the oral interview proficiency exams, this procedure takes place in
stages but with significant differences. The researcher underlines the importance of
different stages in the implementation of this test in the classroom. Stage one consists of
routine inquiries and questions that help to put the student at ease and set the
atmosphere of the examination, for instance. Some examples would be “How are you?”
or “what time is it?” (p.484). Stage two elicits students’ responses to questions from
the unit they have been preparing for, and stage three is the one where students are asked
to give some opinion, describe a place or a person or interpret a drawing, while the
examiner tailors the items to the students’ level and manages the time without seeming
to hurry the learner. For his pilot study, Brown provided a grade matrix (a 3-2-1-0
numeric system) for each of the categories being assessed (Response, Structure,
Vocabulary and Pronunciation), which the examiner could easily remember until the
assessment ended and therefore would not distract the student by taking notes during
the interview. “Response” corresponds to “fluency” and “represents the flow of thoughts and information that the student can
generate, the capacity to initiate, carry forward and conclude a message without calling […]” (p.483); “Structure” refers to “the
grammatical precision with which the student speaks” (p.483), “Vocabulary” to the
“lexical range and variety present” (p.483) and “Pronunciation” refers to “the quality of
pronunciation achieved” (p.483). The example of the oral assessment procedure
A more recent study by Gan & Hamp-Lyons (2008) reports on an example of oral
assessment involving a group oral discussion based on an extensive reading program. The authors
argue that “peer group discussion as an oral assessment format has the potential to
relating to each other in spoken interaction” (p.328), and pay special attention to the
classroom conditions” (p. 330). Thus, their study attempted to evaluate the effectiveness
transition in their analysis sections. They emphasize the importance of “topic shift”,
which “involves the introduction of a new matter to the one discussed in the previous
turn”, in the oral assessment authenticity of an oral group discussion task. Concluding that
the group of ESL student participants had no problem managing those topic shifts, they
An investigation of students’ attitudes towards face-to-face oral exams can be found in a
study conducted by Zeidner & Bensoussan (1988), for instance. The authors examine
students’ attitudes towards an oral vs. a written English language test. Here, 170
students completed an examinee feedback inventory covering diverse topics, such as test content and
Students showed a preference for written over oral tests, considering the latter as more
anxiety-evoking and less pleasant, valuable and fair, despite considering them more
interesting to take. A more recent study by Paker and Höl (2012) has also reported on
students’ high anxiety levels during a speaking test at the Pamukkale University School
of Foreign Languages. Their analysis of students’ attitudes towards the oral test also revealed that students considered this test to be the
most difficult when compared to exams on other language skills. The authors comment
on the fact that most of the students had never experienced a speaking test before, which
might have affected their negative reactions and high anxiety levels towards it. In this
study, the majority of students also considered the oral test to be the most important
test to encourage the use of the target language and also suggested that the institution
should do more speaking activities, regardless of the students’ level in the foreign
language.
Scott (1986) analyzed the affective reactions of EFL students in two oral test
formats. Based on students’ responses to a questionnaire, the author states that “No significant difference
was found among student reactions to the different test formats” (p. 108) and further
comments on the fact that many students reported feeling nervous due to the testing
situation and that this had a negative impact on their performance. Learners also
revealed their preference for written tests over speaking tests, mentioning that the lack of
time pressure in a written test situation, for instance, made them feel calmer and more
confident in their performance. In addition, the author claims that, in line with previous
studies, such as Savignon (1972), “students were able to separate out their judgments of
the validity of an oral test from its perceived difficulty and their anxiety in the testing
situation” (p.112).
assessment in a face-to-face context. In Shohamy (1982), for instance, one hundred and
and reactions to this type of assessment, with students perceiving oral interviews as a
low anxiety test and considering it helpful and valuable. As the author mentions,
students also perceived the test as a good chance to use the language and considered it
to be a learning opportunity.
ACTFL/ETS oral interview testing procedure and guidelines was adapted for classroom
use. The RSVP oral achievement test, explained above, was given at 3- to 4-week intervals,
constituting 60% of the final grade. Although they did not take controlled data on the
program, students’ responses to course evaluations for this pilot plan were taken. In
these evaluations, students commented on the usefulness of the exam in identifying their strengths and weaknesses, its fairness, and
its instructional value. The author concludes that this type of test does not need to be
impossibly time-consuming and that “if spoken skill is truly the priority objective in your
class, you will find time for oral testing” (p.486). However, some problems and practices
associated with face-to-face oral exams have also been observed and will be discussed
as follows.
Aspects such as lack of time and large classes have been pointed out as
disadvantages in the administration of oral testing (Joo 2007; Larson 2000). Joo (2007),
for instance, indicates the aspect of “practicality” as one of the problems in the
implementation of the face-to-face interview. The author mentions the fact that
“numerous rooms, trained teachers, and a great deal of time would need to be spent to
Being aware of the difficulties in oral testing, Omaggio Hadley (2001) tries to
instructor’s office hours or even in the classroom, with students being tested
interview called SOPI (Simulated Oral Proficiency Interview) has been created
in a short time frame. As the author describes, “The SOPI consists of a master tape with
test directions and questions, a printed booklet with pictures and other materials used
in responding, and a cassette for recording the examinee’s responses” (p.438). Larson
(2000) also mentions a former implementation of oral achievement tests, similar to the
SOPI, administered via audio cassette tapes, in order for the oral testing procedure to
become less time consuming for the instructors. Yet, as the author claims “due to the
linear nature of cassette recordings, scoring the test tapes still required a substantial
As Tognozzi and Truong (2009) explain “As the need for assessment of speaking
on a less formal level has evolved, other types of tests have been developed, notably a
number of tests that are computer assisted” (p.2). Previous research on classroom oral
studies have also taken into account students' perspectives regarding different types of
literature review mainly focusing on studies where students' feelings and attitudes were taken into account, since these constitute a relevant aspect for the current research work.
Previous research on learners’ improvement in terms of oral performance when
taking computer-based oral assessment activities throughout the instruction period will
oral production can be encountered, for instance, in studies concerned with Computer-
mediated Communication (CMC) involving voice and text chat tools (Satar & Özdener
2008; Payne & Whitney 2002). However, Satar & Özdener (2008) also refer to the fact that
“Despite the large number of studies on text-based CMC, the use of the voice has yet to
be thoroughly researched” (p. 598). The authors examined the use of text and voice chat
Turkey, with learners of English as a foreign language. This research work included an
to a text chat group, a voice chat group, and a control group. Students in the experimental groups engaged in four chat sessions, conducted in dyads, over four weeks. Results based on pre- and post-speaking tests indicated an increase in the experimental groups' oral abilities in comparison with the control group.
The improvement of L2 oral proficiency through synchronous CMC has also been
investigated by other authors such as Payne & Whitney (2002). The researchers
through chatroom interaction in the target language. The authors state that their
this model, “synchronous online conferencing in a second language should develop the same cognitive mechanisms that are needed to produce the target language in face-to-face [communication]”. In this study, students worked in groups of 4 to 6 students. The researchers examined two control groups, receiving face-to-face instruction, and two experimental groups, participating in two face-to-face and two online class periods per week, during a 15-week semester. Students' development of oral proficiency was measured through a speaking pre- and posttest, and the authors concluded that the mean gain score for the experimental condition showed
oral assessment builder, which has a similar format to the COPI, into the traditional
students and teachers to communicate through recorded messages and voice email. It
practice exercises” (p.1). The researchers had a group of students using WIMBA, who
had to regularly complete WIMBA oral assignments as part of their homework, receiving
web-based oral feedback from the instructors, and a control group who would do the
same activities in class. As the authors mention, the WIMBA group produced “longer and more accurate speech samples” (p. 9) in comparison to the control group, in the examination of the linguistic output yielded by both groups in a final oral exam.
several language skills, including speaking abilities. The authors investigated a treatment group, which replaced a fourth weekly class period with multimedia activities (not including speaking) completed outside the classroom, and a control group that prepared the same multimedia components in class. As the authors explain, “both groups were assigned identical
writing homework and follow-up speaking tasks” and “For both groups, the speaking
tasks were evaluated in class” (p.277). Findings in this study reveal that, although the
treatment group performed better on reading and writing measures, both groups
performed equally well in terms of listening and oral skills. Regarding students’
improvements in oral production, the authors report that “there was no significant
difference between the groups in gain made over the semester, computed as posttest
Student’s feelings and attitudes towards the use of technology in oral assessment
have been investigated in a number of studies. In Tognozzi & Truong (2009) research
23
homework assignments, were asked to complete a pre and post-test survey at the
beginning and at the end of the quarter. This questionnaire asked questions regarding
students’ comfort level with technology, feelings towards being assessed through
results reported indicated that the majority of students believed that the Web-based
program used in the study (WIMBA) would help their speaking skills. Regarding the fairness of this tool in testing their oral skills and its accuracy in assessing them, the majority expressed “no opinion”. In addition, the majority of students indicated that they
enhanced type of oral assessment, among students enrolled in Spanish and Japanese
courses. The authors explain that, for the in-class OLA, students were assessed by the
instructors in a different classroom and, for the digital voice-recorded test, students’
procedure of in-class OLA when peers were present, because of a heightened sense of
the affective filter” (p.44) and refer to the fact that “using technology to assess oral skills
al. (1990) and is called the Portuguese Speaking Test (PST). According to the authors,
this test can also be referred to as a Simulated Oral Proficiency Interview (SOPI) and was
developed in response to the need for trained OPI interviewers for less commonly taught languages. As explained in Stansfield et al. (1990), the PST consists of a tape-based test that uses recorded and printed stimuli and recordings of test takers' responses. This
research work on the development and validation of the PST included an analysis of
students’ reactions towards the two oral testing formats. The majority of the examinees
Portuguese at different universities in the US. The authors reported a tendency for
students to be more nervous towards the PST and also perceive this test as more difficult
than the face-to-face interview, mentioning aspects such as “the unfamiliar mode of
these results. In addition, no students in this study thought that there were unfair
questions in the face-to-face interview and only a low percentage thought so for the
tape-based test. The researchers also mention students' preference for the live
interview, despite their positive responses “about the content, technical quality and
ability of the taped test to probe their speaking ability” (p. 647).
A further development of technology-mediated oral proficiency tests is called the Computerized Oral Proficiency Instrument (COPI). Kenyon & Malabonga (2001) analyzed examinees' reactions towards this new instrument and other oral proficiency assessments such as the SOPI and OPI. The purpose of this study was to examine students' reactions as to whether the COPI was an
improvement over the SOPI, in terms of reducing testing time for a large number of
students and different skill levels and, if so, proceed with further research in applying
computer technology to oral proficiency assessment. Thus, the authors’ focus was
mainly on the comparison between the two technology-mediated tests. A total of 55 graduate and undergraduate students enrolled in Spanish, Arabic, and Chinese language courses reported their perceptions of the tests, regarding aspects such as the level of difficulty and fairness of the test, as well as the levels of test-taking anxiety. The statistical results revealed a generally positive reaction to the COPI in comparison to the SOPI, and students considered the control over when to start and stop speaking one of the best features of the COPI.
When comparing the two technology-mediated tests (SOPI and COPI) and the traditional OPI, students tended to rate the OPI more favorably in terms of fairness, and no statistically different results were found between students' answers to the three types of oral exams in terms of difficulty and nervousness.
methods were also examined in Jeong (2003), where 144 Korean students of English as
Resource Center at San Diego State University, employing “authentic tasks to elicit
speech ratable on the ACTFL (American Council on the Teaching of Foreign Languages)
proficiency guidelines” (p.2). As part of this research project, test takers completed a questionnaire on their perceptions of and attitudes towards the d-VOCI, measuring aspects such as “personal preference, difficulty
(both content and procedure), and usefulness in comparison to face-to-face OPI” (p.58)
and conveyed strong positive reactions towards the VOCI in general. Nevertheless, as the author points out, “about 70% of the subjects reported a preference for the face-to-face interview over the d-VOCI” (p.74), and seemed to value the human interaction with the interviewers. This general preference for a face-to-face oral exam format is also
found in other studies comparing a computer-based and a traditional mode of oral testing, despite positive reactions towards technology-mediated tests (Joo 2007; Öztekin 2011; Lowe & Yu 2009). Joo (2007), for instance, conducted such a comparison at a Korean university. In this study 43 students took both testing formats as midterms and
reactions towards the COT, regarding aspects such as “preparation time”, “nervousness”,
“tiredness” and “fairness”. Nevertheless, the author points out that more students
reported their preference toward taking the FTFI, considering this type of test to be
“more communicative and authentically interactive” (p.183). In Lowe & Yu (2009), the
majority of students, learners of English at a Chinese university, also considered the face-
to-face testing format included in this study to be “more ‘authentic’ modelling of ‘real
life’ use of spoken language” (p. 35), revealing their contentment with their live
interaction with the oral exam evaluator. In this research work, the authors observed
students’ reactions towards the traditional face-to-face test of spoken English and the
computer-assisted speaking test, called College English Oral Test system (CEOTS),
developed by the Shanghai Foreign Language Education Press (SFLEP) and the
nervousness were reported towards the computer-based test for reasons such as
“worrying about making mistakes” (due to a feeling that they could not be corrected
once recorded), “lack of reaction from a computer” and “a fixed time given for a
response”. The author also explains that, although the face-to-face test was seen as a
better test of one’s speaking ability, the computer-assisted exam was considered to be
(CASA) in four groups of two different proficiency levels, corresponding to the pre-intermediate and intermediate levels. Participants completed a questionnaire on speaking anxiety and speaking test anxiety, and another one on computer attitudes. The
and quality of the tests, and it also includes a subscale assessing students’ test-mode-
related anxiety levels. As the author explains, the speaking anxiety and speaking test
anxiety questionnaire is adapted from the Test Influence Inventory (TII) by Fujii (1993)
and the Foreign Language Anxiety Classroom Scale (FLCAS) by Horwitz (1986). The
researcher concludes that “the test takers at both levels clearly preferred the face-to-
face mode of speaking assessment over a computerized version for various reasons” (p.
135), such as considering this type of test as more conversational, interactional, personal
and fair.
The current study shares several similarities with Öztekin's (2011) research, as it also compares a face-to-face and a computer-based oral achievement exam and includes a further investigation of students' anxiety towards
the two different modes of oral testing. As it will be mentioned in the Methodology
chapter, the students’ perceptions questionnaire used in the current study is also based
on the ones found in Kenyon & Malabonga (2001), among other authors in the literature reviewed, and also uses a questionnaire adapted from the FLCAS by Horwitz (1988). In
the case of the current study, this type of questionnaire is used in order to uncover
anxiety factors that are more related to one type of test in comparison to the other.
computer-based oral exam was found in the reviewed literature, during the final stages of the current dissertation, in a recent study conducted by Sayin (2015). Here, examinees reported their anxiety towards the two types of oral tests. The exams corresponded to a face-to-face
midterm with their lecturer and a computer-based final oral exam, which consisted of
model questions from TOEFL (Test of English as a Foreign Language) speaking sessions.
Results revealed a “high frequency of oral exam anxiety” (p. 116). In terms of students' preferences, the author specifically looked at the learners' responses to the following items in the questionnaire (5 “[…] speaking with the instructor”, 11 “I prefer having computer-based oral exams” and 6 “I
feel better when I have face-to-face oral exam rather than computer-based”). Results in
terms of frequency showed that 12 people responded affirmatively to items 5 and 11, as
opposed to 16 people on item 6. These values correspond to 43% (items 5 and 11) vs. 57.2% (item 6). The author also explains that, “The overall results do not precisely
indicate that students reject having computer-based exams; however, while the oral
exam anxiety can be detected from the results, the preference of students towards face-
to-face exams and computer-based is not very clear since the frequencies and percentile
rates are very close” (p.116). In fact, results of a binomial test show that there is no statistically significant difference between the two preference rates.1
1.1.4.3. Advantages
Regarding the advantages of technology use in oral assessment, Tognozzi & Truong (2009) stress the fact that these computer-based
oral practices allow for individual feedback at a time that is convenient for the instructor,
which is usually not possible in the classroom. The authors also consider it a positive aspect of WIMBA that students can hear their recordings before submitting them to the teacher. This aspect can help them improve their speaking abilities when receiving the
instructors’ feedback. Lee (2005) also points out that the use of the computerized
Blackboard program “facilitated the interaction between the students and the instructor
as the latter systematically guided, assisted, and provided constructive feedback to the
students” (p.151).
1 The proportion of people preferring computer-based exams (.43) was lower than the proportion preferring the face-to-face oral exam format (.57), p = 0.09.
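As an illustrative aside, an exact one-sided binomial test of this kind can be computed directly from the binomial distribution. The sketch below assumes the 12 vs. 16 split reported above (implying n = 28) and a null preference rate of 0.5; since the original study does not state its exact test specification or sample, this toy calculation is not expected to reproduce the reported p = 0.09.

```python
from math import comb

def binom_one_sided_p(k: int, n: int, p: float = 0.5) -> float:
    """Exact one-sided p-value: P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i)) for i in range(k + 1))

# Illustrative split: 12 of 28 respondents preferred computer-based exams,
# tested against a null preference rate of 0.5.
p_value = binom_one_sided_p(12, 28)
print(round(p_value, 3))  # 0.286
```

An equivalent result can be obtained with `scipy.stats.binomtest(12, 28, 0.5, alternative='less')`; the pure-standard-library version above is used only to keep the example self-contained.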
Larson (2000) also mentions the quality of speech recordings when using the computer, referring to the fact that these are superior to the tape-based ones. The author
also discusses “the benefit of test uniformity” in the computer-based tests, in the sense that these are more homogeneous in terms of the questions students receive and the time they
have to respond.
Early & Swanson (2008) also list some advantages of digital voice recording for oral assessment, such as the fact that it “leaves digital artifact for indication of student
progress, accreditation data, and increased reliability of assessment” (p. 45). According
to the authors, “archived recordings can be played multiple times to calibrate scoring
criteria and assure equity in grading by different instructors” (p.46). Another positive
aspect has to do with the fact that “digital voice recordings offer instructors more
flexibility as to when and where they evaluate student performances” (Early & Swanson
2008: 47). This flexibility in the evaluation of learners' responses to the computerized oral exam has also been pointed out by Larson (2000) as an important advantage of this testing format. Across studies of computer-assisted oral testing, the learners' reports on the advantages of computer-based oral assessment methods (either in final oral exams or in computer-based oral activities taken during the instruction period) show that students often feel that they have more control over when to start and stop speaking and believe that these tools help them enhance their speaking abilities (Kenyon & Malabonga 2001; Lee 2005; Tognozzi & Truong 2009).
1.1.4.4. Limitations
Limitations of the computer as a language-testing tool have also been identified and represent some challenges in the
on some disadvantages implied in the computerized oral testing method. In Kenyon &
Malabonga (2001), for instance, the subgroup of students taking the face-to-face OPI
speaking skills” (p.60). Other studies have also concluded that students found the face-to-face format closer to real-life communication and interaction (Jeong 2003; Joo 2007; Öztekin 2011; Lowe & Yu 2009; Kenyon & Malabonga 2001). As Öztekin (2011) states, “test takers tended to
value the existence of a live interlocutor listening to them” (p.77) and “The majority
favored the FTFsa over the CASA when they were asked about their ability to show one’s
In addition, Tognozzi & Truong (2009) mention in their study that “a fair percentage of students did not feel that
using WIMBA was a fair way to test their oral skills” (p.10). With regard to the students’
stated that “if criteria used to evaluate performances do not differentiate on examinees’
interactional abilities, it may not be necessary to use the labor-intensive face-to-face
interview for the assessment, no matter how examinees feel about the nature of the
assessment” (p.81).
Preparation time and expenses have also been pointed out as an inconvenience inherent to this type of oral testing. As
Tognozzi & Truong (2009) mention, there are other expenses implicated in WIMBA,
such as “to write and record questions and directions for students”, “write story lines
and draw storyboards” and “write directions for instructors on how to use, conduct
assessments and grade them” (p.10), for instance. The authors thus call attention to the
fact that this aspect should be taken into account when considering the execution and
feelings, and in particular students’ levels of anxiety, towards the new computer-based
final oral exam vis-à-vis the previous face-to-face interview in a Portuguese Language
mainly focusing on studies measuring students’ anxiety levels towards oral exams, is
1.2. Previous research on anxiety in foreign language learning and towards oral
tests
and Cope (1986) and Horwitz (2001). Here, anxiety is identified as a “conceptually distinct variable in foreign language learning” (Horwitz, Horwitz and Cope 1986: 125).
When describing foreign language anxiety, the authors explain that although the related concepts of communication apprehension, test anxiety, and fear of negative evaluation (further explained in detail below) are useful in the definition of foreign language anxiety, they propose that foreign language anxiety is not solely a combination of these fears in foreign language learning. The authors thus describe this construct as a distinct complex of self-perceptions, beliefs, feelings, and behaviors related to classroom language learning arising from the uniqueness of the language learning process.
As Zheng (2008) states “The complexity of anxiety is also reflected in the means
of its measurement” (p.3). To measure the anxiety experienced by students in foreign language learning, Horwitz, Horwitz and Cope (1986) developed the Foreign Language Classroom Anxiety Scale (FLCAS). Thirty-three items rated on a five-point Likert scale ranging from “strongly agree” to “strongly disagree” compose this instrument (Al-Shboul et al. 2013; Horwitz, Horwitz and Cope
1986). Some examples of these items correspond to statements such as “it frightens me
when I don’t understand what the teacher is saying” or “I don’t worry about making
mistakes in language class”. As Horwitz, Horwitz and Cope (1986) state, “FLCAS affords an opportunity to examine the scope and severity of foreign language anxiety” (p.129).
As also mentioned by Al-Shboul et al. (2013), this instrument has been adopted by
several researchers as a source to measure foreign language anxiety (Park and Lee 2005;
Zulkifli 2007; Phillips 1992; Ganschow & Sparks 1996; Wilson 2006; Liu & Jackson 2008; Scott 1986; Paker and Höl 2012). Several authors (Zeidner & Bensoussan 1988; Öztekin 2011; Joo 2007; Sayin 2015; Jeong 2003; Stansfield et al. 1990; Lowe & Yu 2009; Early & Swanson 2008) have also used affect questionnaires in their research works in order to
investigate students’ reactions towards oral tests. Some of these studies have
these types of exams. The rating scales on these questionnaires usually require ratings on a five-point or seven-point Likert-like scale, and the instruments include both closed-ended and open-ended items. Zeidner & Bensoussan (1988), for instance,
different types of tests (including the oral), according to rating scales such as
part of the questionnaire, students were also asked to list the advantages and
disadvantages of oral tests and rate the degree of anxiety elicited by the oral test. Scott (1986) administered an affective survey to students after they completed group and pair oral achievement tests. The researcher used a five-point Likert-like scale on items such
as “I felt nervous before the test” or “I thought the oral test was too difficult”. Here, some
items also included space for students to explain their ratings. Shohamy (1982) also asked students to answer a questionnaire upon completion of oral tests, in which students indicated their agreement with statements such as “the testing experience was:”
also required students to comment on how they felt about the testing experience.
regarding the test takers' perceptions towards a face-to-face and a computer-based type
intermediate level. This subscale includes items related to feeling “tense and anxious”
before, during, and after the speaking test; being “afraid of making mistakes during the speaking test”; or feeling “relieved to see/not to see someone listening”, for instance. Sayin (2015) employed a questionnaire on students' anxiety towards oral exams in general and concerning a face-to-face and a computer-based oral exam in
particular. As the author explains, this questionnaire is based on Sarason’s (1980) Test
Anxiety Scale (TAS). Some of the statements included in this survey are quite similar to
the ones found in the FLCAS, such as “Even if I'm well prepared I feel anxious during oral exams”. Items that are more specific to the different modes of oral testing include “I feel nervous when I start speaking in front of the instructor” or “I feel nervous when I see the
(2008) also claim that “one promising means to better understand the factors involved
in anxiety in language learning would be to interview learners on their feelings about
1.2.2. Factors involved in anxiety in foreign language learning and oral tests
Communication apprehension relates to anxiety induced in interpersonal situations. Here speech constitutes the highest stress factor, especially when facing a large audience. This relates to the fact that
communication in an L2 requires taking chances and the learner is not familiar with
some of the linguistic rules (Abu-Rabia, Peleg & Shakkour 2014). In addition, difficulty
anxiety (Horwitz, Horwitz and Cope 1986; Abu-Rabia, Peleg & Shakkour 2014).
According to Horwitz, Horwitz and Cope (1986), “Test anxiety” refers to “a type of
performance anxiety stemming from a fear of failure” (p.127). The authors explain that,
since tests and quizzes are frequent in a foreign language class, test-anxious students
Abu-Rabia et al. (2014) claim that test anxiety especially occurs when the questions on a test are
ambivalent or have been less well studied. The third element of language anxiety referred to in Horwitz, Horwitz and Cope (1986) is “fear of negative evaluation”. The authors explain that this aspect is not limited to test-taking situations but might also occur
in an evaluative situation, such as speaking in a foreign language class, which requires
Several scholars have also carried out studies applying the FLCAS or questionnaires based on this anxiety measure, identifying triggers for foreign language anxiety (Phillips 1992; Ganschow & Sparks 1996; Wilson 2006; Zulkifli 2007; Liu &
Jackson 2008; Hewitt and Stephenson 2011; Yahya 2013; Abu-Rabia & Shakkour 2013).
These authors have also identified factors related to fear of communication, test anxiety,
and fears of negative evaluation contributing to foreign language anxiety. Liu & Jackson
(2008), for instance, found that more than one third of the participants in their study,
which consisted of 547 Chinese learners of English as a foreign language, felt anxious in their language classrooms for fear of negative evaluation and for being apprehensive
of speech communication and tests. The results in their analysis also revealed that most
of the students did not like to risk using English in the class and that learners’
anxiety. When investigating the factors leading to speaking anxiety among students
fact that language anxiety in these EFL learners is aroused mainly by factors of fear of
disapproval by others. The author concludes that factors such as unpreparedness for
environment, tests and negative attitudes towards the English classes constitute factors
Other scholars have also acknowledged other causes of language anxiety. Young
(1991), for instance, has pointed to sources such as “1) personal and interpersonal
language testing” (p. 427), in a literature review on anxiety in language learning. Foreign
language learners have also indicated aspects such as “time pressure” in a test situation
as a cause of nervousness and anxiety (Scott 1986; Ohata 2005). Furthermore, variables
aptitude, personality variables such as fear of public speaking, and stressful classroom
experiences have also been identified as possible sources of anxiety in language learning
Factors associated with foreign language anxiety from the learners’ perspective
have also been investigated by Horwitz & Yan (2008). The authors’ research is based on
comparison with peers, learning strategies, language learning interest and motivation
were considered to be the most immediate sources of anxiety in language learning for
these students. The authors also found indirect sources of language anxiety among these
students, where “we might see the origins of language anxiety” (p.174), in variables such
English learning in China needs to play a role when interpreting the findings in this
study, since a different picture might be drawn in a different setting. A few authors have also examined the relationship between L1 anxiety (or poor L1 language skills) and L2 anxiety. In a study
examining the relation between linguistic skills, personality types, and language anxiety
in Hebrew students of English as a foreign language, Abu-Rabia et al. (2014) found that
students with good language skills in their L1 and L2 exhibited a low level of anxiety
with respect to both languages. The authors’ findings also indicate that the L2 learners
who experienced anxiety with respect to their L1 felt a comparable level of anxiety with
respect to their L2. Other authors, such as Ganschow & Sparks (1996), for instance, have
also investigated the relationship between speakers' L1 language skills and their anxiety in L2 language classes. Findings in the authors' study indicate that low-anxious students perform better than high-anxious students in their native language in the phonological and orthographic domains and obtain better grades in the foreign language at the end of the
year.
causes of students’ anxiety in the general context of language learning, a few authors
also discuss students’ anxiety more in depth in the context of oral language tests (Philips
1992; Öztekin 2011; Sayin 2015). As mentioned before, this aspect also constitutes one
of the focuses in the current research. Öztekin (2011), for instance, includes a “test-mode-related anxiety” subscale comparing the face-to-face speaking assessment (FTFsa) and the computer-assisted speaking assessment (CASA). The author states that “Test takers were found to feel more anxious before, during and after the CASA than the FTFsa in general.” (p.117). Furthermore, the
talking to a computer relieved them” (p.117), or being afraid of making mistakes when
taking the computer-assisted oral test. Sayin (2015) reports on a high frequency of
students’ anxiety towards oral exams. Moreover, a relatively high percentage of students
(over 50%) agreed with often or always “feeling heart beating very fast during oral
exams”; “feeling anxious during oral exams even when well prepared”; “forgetting things
(computer lab) and presence of other students”; and “feeling stressed when seeing the
general, Phillips (1992), for instance, reports on highly anxious students' negative attitudes towards the oral exam, such as going “blank”, “feeling frustrated at not being able to say what they knew”, “being distracted”, and “feeling panicky”.
As Horwitz, Horwitz and Cope (1986) point out, “oral tests have the potential of provoking both test and oral communication anxiety simultaneously in susceptible students” (p. 128). Several researchers in the field of language anxiety have also investigated the relationship between students' anxiety and performance on oral tests.
1.2.3. Anxiety and oral performance
As Hewitt & Stephenson (2012) point out, “In general, investigators have found a
negative relationship between anxiety and oral performance: The higher the levels of
anxiety experienced by learners, the poorer their speaking skills tend to be” (p.172).
Scott (1986), for example, investigated the effect of learners' self-reported anxiety
on performance in a study on students’ affective reactions to EFL group and pair oral
tests. The author found a similar negative correlation between students' anxiety and performance on both test formats.
A more recent research work developed by Wilson (2006) also examined correlations between oral test scores and students' anxiety (measured with the FLCAS) among Spanish learners of English as an L2. After analyzing the scores, the author found a negative association between the two measures.
Phillips (1992) also investigated the effect of anxiety on students' oral exam performance, finding a moderate and negative association between anxiety and performance during an oral
exam. This influential study in the field of language anxiety was replicated in Hewitt &
Stephenson (2012). The authors measured the internal reliability of all the instruments used by Phillips (1992) and found significant negative associations between
language anxiety and oral performance on an oral exam, with Spanish-speaking students
of English. Significant negative correlations between anxiety and oral performance were
also found in Park and Lee's (2005) study with Korean learners of English, where the higher the levels of students' anxiety, the lower the scores they gained on their oral performance.
testing with English-speaking students of Hebrew as a second language, for example, did not find a correlation between students' attitudes and their performance on oral tests. The author mentions that, in the case of this study, the fact that students reported
a general positive attitude towards oral assessment did not necessarily indicate a high
level of performance on the oral test. Results on a study developed by Young (1986) also
did not indicate a significant correlation between speakers' anxiety measures and their oral proficiency ratings. The author comments on the fact that anxiety did not have such a strong influence on oral performance, possibly because subjects knew those OPI results would not have negative repercussions for them.
assessment and examinee’s test scores, Öztekin (2011) concludes that there is no
level. The author also reports on a significant negative correlation between the
computer-assisted test scores and the two types of anxiety analyzed in the study (speaking anxiety and speaking test anxiety). Although research on anxiety in foreign language learning has mostly shown these variables' potential to negatively influence second language learning, it is worthwhile to take into account the fact that anxiety does not
always have to have a negative effect. Ortega (2009), for instance, mentions what is
called “facilitating anxiety” as in the case of perfectionism (one of the causes of anxiety),
highlighting the fact that “some degree of tension can help people invest extra effort and
This chapter has reviewed previous research on oral assessment in foreign language teaching, starting with a description of two types of oral assessment modes, namely the Oral Proficiency Interview (OPI) and oral achievement tests.
The OPI is described as an interactive and adaptive test that assesses the
speaker's ability to use language in real-life situations and is organized in order to obtain
a speech sample rated according to four major proficiency levels, ranging from “Novice”
to “Superior”, and has traditionally been administered through face-to-face
interviews, for example (Liskin-Gasparro 1983; Omaggio Hadley 2000; Norris & Pfeiffer
2003). Oral achievement tests, on the other hand, are tied to a particular curriculum and
are meant to test what was taught during the instruction period. These tests are thus
closely connected to the classroom context, assessing the learners' acquisition of specific
course contents (Liskin-Gasparro 1983; Brown 1985; Omaggio Hadley 2001; Larson 2000;
Norris & Pfeiffer 2003). This section has also described several formats of oral
achievement testing used in language programs, presented in several authors' works
(Omaggio Hadley 2001; Brown 1985; Gan & Hamp-Lyons 2008), such as instructor interviews
with students working in pairs, individual interviews with the instructor, and in-class
oral assessment
with learners working in groups. This section has also reviewed learners' perspectives
on these face-to-face oral tests. Reported negative reactions include students
perceiving these tests as more difficult, more anxiety-evoking, and less pleasant,
valuable or fair when compared to other forms of language tests
(Zeidner & Bensoussan 1988; Scott 1986; Paker & Höl 2012). Some of these studies have
also revealed learners’ positive reactions such as viewing these types of oral tests as an
important and valid way of testing their knowledge of the target language. Students were
also able to distinguish their perceptions of the validity of an oral exam from its
perceived difficulty and the anxiety it provokes (Scott 1986). A few research studies,
though not the majority, have also reported low levels of learner anxiety in these
testing situations (Shohamy 1982). In addition, some authors (Brown 1985; Omaggio Hadley
2001; Larson 2000) have discussed limitations in using face-to-face methods for oral
assessment.
This chapter has also covered previous research on technology-assisted oral
assessment, including studies comparing computer-based oral assessment to face-to-face
methods (Early & Swanson 2008; Kenyon & Malabonga 2001; Jeong 2003; Joo 2007; Lowe & Yu
2009; Öztekin 2011; Sayin 2015). Although findings on perceived difficulty or perceived
fairness, for instance, are somewhat varied across previous research, several authors
report a general preference for the
face-to-face test, despite examinees’ positive reactions towards the technology-
mediated or computer-based tests. Reasons for this preference are related to aspects
such as the appreciation for the live interaction with the examiner/interviewer, or
viewing the face-to-face test as “more communicative and authentically interactive”, for
instance (Stansfield et al. 1990; Jeong 2003; Joo 2007; Öztekin 2011; Lowe & Yu 2009).
Advantages highlighted by learners regarding computer-based oral assessment methods
include having control over when to start and stop speaking, and the belief that these
tools help them enhance their speaking abilities (Lee 2005; Tognozzi & Truong 2009;
Kenyon & Malabonga 2001). Other aspects highlighted include the fact that the use of
technology in oral activities allows for individual and constructive feedback and the
fact that computerized oral assessment allows the examiners to evaluate students'
performance at any time and place (Early & Swanson 2008; Larson 2000). Some
disadvantages implied in the use of technology as an oral assessment tool are the fact
that a few students do not feel comfortable with the technology used, and that some
report not perceiving such tests as an adequate measure of their speaking abilities.
Regarding the improvement of speaking skills when taking computerized oral activities
throughout the instruction period, few studies including pre- and post- speaking tests
were found. Some of these involved voice and chat tools (Satar & Özdener 2008; Payne &
Whitney 2002). Studies assessing the integration of computerized oral assessment into
the curriculum (Youngs 1999) were also discussed. In the majority of these studies, the
experimental groups showed gains in their oral skills.
The third main section of this chapter has reviewed previous research on anxiety
in foreign language learning, since the analysis of students' perceptions towards the two
oral testing modes (face-to-face and computer-based) in the current study mostly
focuses on students' anxiety. We saw that language anxiety has been defined as “a
distinct complex of self-perceptions, beliefs, feelings, and behaviors related to
classroom language learning arising from the uniqueness of the language learning process”
(Horwitz, Horwitz & Cope 1986: 128). The topics discussed included existing methods
to measure anxiety, factors involved in language learning anxiety and the potential
relationship between anxiety and oral performance. For the measurement of students'
anxiety, most of the reviewed studies have relied on the Foreign Language Classroom
Anxiety Scale (FLCAS) developed by Horwitz, Horwitz and Cope (1986) (Phillips 1992;
Ganschow & Sparks 1996; Park & Lee 2005; Wilson 2006; Zulkifli 2007; Liu & Jackson 2008;
Hewitt & Stephenson 2011; Yahya 2013; Abu-Rabia & Shakkour 2014). Previous research has
also included other types of affect measures (Horwitz & Yan 2008; Kenyon & Malabonga
2001; Shohamy 1982; Scott 1986; Savignon 1972; Paker & Höl 2012; Zeidner & Bensoussan
1988; Öztekin 2011; Sayin 2015).
Regarding the factors involved in foreign language anxiety, Horwitz, Horwitz &
Cope (1986) have identified three primary dimensions, which are communication
apprehension, test anxiety and fear of negative evaluation. These sources of language
anxiety have also been observed by several scholars in the field (Phillips 1992;
Ganschow & Sparks 1996; Wilson 2006; Zulkifli 2007; Liu & Jackson 2008; Hewitt and
Stephenson 2011; Yahya 2013; Abu-Rabia, Peleg & Shakkour 2014). Additionally,
aspects such as learner beliefs about language learning, instructor beliefs about
language teaching, language learning interest, motivation, test types, time pressure in
a test situation, teacher characteristics and also L1 anxiety (or poor L1 skills) are
representative examples of language anxiety sources found in the reviewed literature.
Highly anxious students have also reported feeling “panicky”, “going blank”, “being
distracted” and “feeling frustrated at not being able to say what they knew”,
specifically in oral test situations. Some studies have also examined learners'
anxiety towards computer-based and face-to-face oral tests in their analysis. Öztekin
(2011) reported on examinees' generally higher levels of anxiety before, during and after
the computer-based test, for reasons such as “speaking to a computer”, for example.
Sayin (2015) also reported on learners' agreement with “feeling heart beating very fast
during oral exams”, “feeling anxious during oral exams even when well prepared” and
“forgetting things because of stress when talking face-to-face with the instructor”, in a
face-to-face test, or “feeling stressed when seeing the time running on the screen”, in a
computer-based test. Most of the research reviewed has shown a negative influence of
language anxiety on oral performance: students with higher levels of language anxiety
usually tend to exhibit lower levels of speaking skills (Hewitt & Stephenson 2012; Scott
1986; Wilson 2006; Phillips 1992; Park & Lee 2005). Some studies, however, also reported
not finding a strong relationship between students' anxiety and oral performance
(Shohamy 1982; Young 1986). Furthermore, Öztekin (2011), for instance, did not find a
correlation between speaking anxiety and speaking performance overall, although a
significant negative correlation between speaking anxiety and speaking test anxiety, on
the one hand, and learners' scores on the computer-based test, on the other, was
reported. All these findings will be taken into account when interpreting the
development and results of the current study. The following chapter presents the
methodology adopted in the present research.
CHAPTER 2: METHODOLOGY
2.1. Introduction
The current study on the application of technology as an oral achievement-testing
tool, through the analysis of students' perceptions and oral performance, employs
several data collection procedures. The purposes of this chapter are to (1) define the
nature of the research design, the specific research questions and the participants; (2)
describe the data collection procedures for the analysis of students' reactions towards
the face-to-face vs. the computer-based oral test, and for the investigation of whether
there was an improvement of speaking abilities when students took computerized oral
activities throughout the instruction period; and (3) provide a general description of
the instruments and materials used.
The present study is based on the collection of affect surveys and recordings of
students' oral production over the course of one quarter. All learners took a
computer-based and a face-to-face final oral exam at the end of the course. In order to
analyze students' feelings towards the two testing modes, they responded to a
questionnaire before and after the course, on their expectations and final opinions
towards the two types of final oral exams. In addition, students responded to survey
questions concerned with the anxiety factors involved in each test.
In the analysis of students’ feelings towards both modes of final oral assessment,
it is important to note that these students belonged to two different groups, each
receiving a different treatment throughout the quarter. This aspect might influence the
way each group feels towards the computerized final oral exam in comparison to the
face-to-face method.
The reason why two different classes were used in this research was essentially
the investigation of another point of analysis in the study, concerned with whether
students' speaking abilities improved when taking computerized oral activities
throughout the instruction period. Thus, in order to investigate this aspect, one
class took paired computerized oral activities throughout the course in a computer lab,
during classroom time (the Experimental class), and the other one took regular oral
activities in the classroom (the Control class). The differences between students' speech
in the Experimental class and the Control class were then examined by four raters, through
a pre- and a post- oral test. These tests were thus created specifically to measure
whether or not there was any improvement of speaking abilities resulting from the
application of technology in oral assessment activities during the quarter, and are
independent from the final oral exams taken at the end of the course. A detailed
description of all instruments, tests and oral activities will be provided further in
this section.
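To make the pre-/post-test comparison concrete, the improvement measure can be thought of as each student's mean post-test rating minus mean pre-test rating, averaged across the four raters. The sketch below illustrates this with invented rater labels and scores; none of these values come from the study itself:

```python
# Hypothetical rater scores (e.g., on a 1-5 scale) for one student's
# pre- and post-test recordings; four raters each score both tests.
pre_scores  = {"rater1": 2, "rater2": 3, "rater3": 2, "rater4": 3}
post_scores = {"rater1": 4, "rater2": 4, "rater3": 3, "rater4": 4}

def gain(pre, post):
    """Mean post-test score minus mean pre-test score across raters."""
    avg = lambda d: sum(d.values()) / len(d)
    return avg(post) - avg(pre)

print(gain(pre_scores, post_scores))  # positive value indicates improvement
```

A positive gain for the Experimental class relative to the Control class would be the pattern consistent with the study's hypothesis.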
The goals of the current study are thus: 1) to examine affective factors (by
investigating students' reactions towards the computer-based vis-à-vis the face-to-face
final oral exam); and 2) to understand the value of technology in oral assessment
activities in terms of promoting oral performance (by analyzing students' speaking
abilities before and after taking computerized oral activities throughout the
instruction period). As mentioned before, these aspects are studied in the specific
context of the final oral exams and computerized oral activities that are part of the
course under analysis.
The research questions that guide the present study are the following:
1. What are the students’ reactions to the computer-based vis-à-vis the face-to-
face final oral exam, regarding levels of anxiety, perceived difficulty, and perceived
fairness in testing speaking abilities, as well as levels of comfort with the technology
used?
1.1. Do students' perceptions towards both types of final oral exams differ
depending on 1) the different treatment received by the two classes; and 2) whether the
surveys were completed before or after taking both tests?
Students' perceptions in the different groups, particularly before and after
taking both types of tests, will be explored in the current research. However, it is
predicted that the different treatment received throughout the quarter will influence
students' feelings in both classes. It is expected that, on average, the Experimental
class will report lower levels of anxiety and difficulty towards the computerized final
oral exam vis-à-vis the face-to-face final oral exam than the traditional class, for
being more familiar with this exam format, due to the practice of computerized oral
activities throughout the quarter.
1.2. What are the students' reactions to the computer-based vis-à-vis the
face-to-face final oral exam overall?
Students' reactions to the computer-based test vis-à-vis the face-to-face final oral
exam will also be explored in the context presented in the current research. However,
since learners were already accustomed to taking a face-to-face oral exam, it is
predicted that, on average, students will report a general preference for the
face-to-face format.
2. What anxiety factors and feelings, presented on the anxiety scale, are more
involved in one type of test in comparison to the other?
2.1. Is there a considerable difference between the Control and the Experimental
class?
The question of which anxiety factors are more involved in one test in
comparison to the other, and whether there is a considerable difference between classes,
is also exploratory, since these factors will be observed specifically in the context
presented in the current study.
3. Does the Experimental class show greater improvement than the Control class
in terms of oral performance?
Previous studies have shown that L2 learners believed they benefitted from the use
of computerized oral activities in developing their language skills (Lee 2005).
Tognozzi & Truong (2009) have also shown that students who recorded oral tasks and
received feedback from the instructor produced “longer and more accurate speech
samples”. In the current study, however, computerized oral activities were applied in a
different context (in pairs and during class time, in a computer lab). Therefore, the
question of whether the Experimental class shows an improvement over the Control
class, in terms of oral performance, will be, in part, explored in the current research.
It is expected that the fact that students in the Experimental class can listen to their
speech after recording the conversation might trigger their awareness of oral mistakes.
In addition, the fact that the technology used in these oral activities also allows for
the reception of the instructor's individualized feedback, which is usually not possible
in the regular classroom, might contribute to the improvement of their speaking
abilities.
2.4. Participants
Twenty-seven L2 learners of Portuguese took part in this study over the course of one
quarter. Students were enrolled in the second quarter of elementary Portuguese
instruction. The participants are split into two different groups, which correspond to
two different classes, defined as the Experimental class (14 students) and the Control
class (13 students, plus another instructor to form the last pair). All learners had
been exposed to Portuguese instruction during the previous quarter and, since they were
enrolled in a Portuguese for Spanish speakers class, students were familiar with the
Spanish language. The students' majors are mixed and their ages range from 18 to 21.
Participants in this study signed a consent form and were informed that their
involvement in this research was voluntary and unpaid, and that their decision whether
or not to participate would not affect their grade in the course. All students decided
to participate voluntarily and to take both types of tests, in order to provide the
researcher with comparable results regarding their perceptions towards both the
face-to-face and the computer-based final oral exams. Since, for the study's purposes,
learners would have to take an extra oral exam at the end of the quarter (the
computer-based test), they were offered the possibility of choosing the better result of
the two final oral tests as their final oral exam grade.
2.5. Data collection procedures
In order to investigate students' reactions towards the computer-based vis-à-vis the
face-to-face final oral exam, L2 learners of Portuguese in both classes completed the
new computer-based and the traditional face-to-face final oral exam (described below) at
the end of the instruction period and were asked to answer affect questionnaires, which
will be further explained in detail.
In order to investigate whether the Experimental class showed an improvement over the
Control class in terms of oral performance, through the application of technology in
practice oral tasks, an oral pre-test, at the beginning of the quarter, and an oral
post-test, at the end of the instruction period, were administered to the two groups of
students. These pre- and post-tests were integrated into the course syllabus for the two
classes and students completed them as part of the course oral activities taken during
the quarter. These tests were also taken through the online learning platform used in
the course, MyPortugueseLab.
It is also important to note that the same instructor taught both classes, which
assured the groups' homogeneity regarding the kind of instruction they were receiving.
Different materials were thus used to gather data on the application of technology as an
oral achievement-testing tool, regarding the analysis of students' perceptions and oral
performance.
A detailed description of the procedures and data collection materials for the
different points of analysis in the current study is presented below.
2.5.1. Investigation of students' perceptions and affective factors towards the two
types of final oral exams
In order to allow for a comparison between the new computer-based testing format and
the traditional method, students in both classes took a computer-based and a
face-to-face final oral exam at the end of the quarter and responded to affect
questionnaires on their reactions towards the two testing modes. I will start by
presenting a description of the two types of oral exams taken at the end of the course.
In the traditional face-to-face final oral exam, students went to their instructor's
office hours in pairs and randomly chose a conversation card containing one out of six
role-play situations, which were given to them a week before the exam. Following the
situations' guidelines, students performed a conversation with each other, for
approximately five minutes, in front of the instructor. Since this was an achievement
exam, all the situations' guidelines presented in this final oral exam were related to
the course content and can be seen in Appendix A.
It is relevant to note that participants in this study were already familiar with the
traditional final oral exam method, as they had completed a similar test in the preceding
quarter.
The new online final oral exam system attempted to deviate as little as possible
from the traditional face-to-face test, allowing for the two types of exams to be as
comparable as possible, with the main difference being the fact that the traditional
format was taken in the presence of the instructor in his/her office and the new format
was taken in a computer lab. Thus, the final computer-based oral exam followed what
had been done in the traditional method. Therefore, the exam was also taken in pairs
and students were given the same set of oral situations a week before, and were told to
choose a partner in order to prepare for it (Appendix A). The new exam format required
students to talk for approximately the same amount of time as in the traditional
method.
When investigating the necessary steps in the design of an oral exam in this
program, the researcher got acquainted with several MyPortugueseLab functions that
allow for the random appearance of one out of six oral situations to be performed, which
assured the similarity between the two exam formats. The oral situation to be
performed was thus arbitrarily assigned to the students on the online platform for the
course, since in the traditional method the teachers also randomly selected one out of
the six oral situations for the students to complete in pairs. A step-by-step
description of the creation of this computer-based exam on the platform is provided in
the appendices.
The final computer-based oral exam was thus administered to students at the end
of the course. The investigator was present in the computer lab on the day students
completed the exam, and a set of instructions on how to access and complete the test was
presented to the learners. Each pair of students shared one computer in the lab and one
microphone. Care was taken to minimize background noise, which could interfere with the
instructor's understanding of the students' recorded speech.
Since the technology-mediated oral exam was taken in pairs, learners were asked
to sign into one of the two students' accounts on the learning platform, in order to
access and take the test. The instructor was then able to assign a grade separately on
each student's account.
What follows is a description of the online platform used in the creation of the
new technology-mediated final oral exam.
The online learning platform used in its creation was MyPortugueseLab, an
online platform by Pearson Higher
Education, which accompanies the elementary Portuguese textbook used in the course,
Ponto de Encontro. This learning platform allows students to access the online format of
the course textbook, through an icon called eText. MyPortugueseLab also offers a variety
of language-learning tools and resources, such as audio and video materials that
accompany the textbook. Instructors are able to assign activities, with specific deadlines,
that are graded either automatically or by the instructor, in the case of open-ended
oral and written activities. After assignments are completed and graded, both students
and instructors can automatically track learners' progress, as well as the overall
class performance. The platform also includes speaking activities based on role-play
situations. Similar to the exercises found in the printed textbook, the assignments
found on the platform include a practice section consisting of a voice-recording tool,
which allows users to listen to their messages before submitting them. Besides providing
written feedback, teachers can also offer audio feedback on speaking activities or on
any other instructor-graded task, such as more open-ended reading and writing exercises.
For the guided machine-graded activities, the program offers automatic feedback through
“Multilayered Feedback and Point-of-Need Support”. Through this feature, students can
get instant hints on incorrect answers and try the activity again. The program also
indicates specific sections in the eText where students can review the relevant content.
In addition to all the exercises and learning tools available in this online platform,
instructors are allowed to create and assign their own instructional materials. This
program was thus chosen for this study not only because it is already being used in the
course, which ensures the students' and teacher's familiarity with it, but also because
it allows for the instructors' creation of technology-mediated practice oral activities
and exams.
2.5.1.3. Measuring students' perceptions and affective factors towards the two types of
final oral exams
In order to measure students' perceptions and affective factors towards the
computer-based vis-à-vis the face-to-face final oral exam, students responded to affect
questionnaires. Although questionnaires are not as objective as physiological tests
would be, this type of survey is accepted and used in the field of anxiety in language
learning, as well as in occupational health studies, for instance. As Lenderink & Zoer
et al. (2012) mention, the questionnaire is an instrument that measures, among other
things, “risk factors, and diseases and is frequently used in occupational health
studies” (p.1).
In order to investigate students' general reactions towards both modes of final oral
assessment, learners were asked to complete a survey on their overall reactions to both
final oral exams. Thus, at the beginning of the quarter, students responded to a survey
regarding their expectations (Appendix D), and, at the end of the quarter, completed a
follow up survey on their final opinions towards both types of tests (Appendix E). The
purpose of having a pre- and post-survey in this study was to investigate to what extent
the application of technology on the final oral exam corresponded to the students’
expectations at the beginning of the quarter, and to understand which features of the
oral exam proved more or less effective from the learners' perspective. In this
survey, students were asked to rate their level of comfort with the technology used as
well as their levels of anxiety, perceived test difficulty and perceived fairness in testing
their speaking abilities, when taking the computer-based and the traditional final oral
exam method. A 3-point scale of (1) “Low”, (2) “Medium” and (3) “High” was used in this
section, followed by open-ended questions on the reasons why students selected a certain
response. Finally, in order to further investigate some of the advantages and
limitations associated with both final oral exam formats, students were asked open-ended
questions on what they liked “the least” and “the most” when taking each type of test.
The procedure for the data collection of the students' general reactions
towards both types of oral exams was developed based on the review of the literature and
an analysis of previous studies, such as Tognozzi & Truong (2009), Lee (2005) and
Kenyon & Malabonga (2001), mainly in terms of the variables included in the
questionnaire. Some of these variables are also included in other studies examining
students' reactions towards face-to-face and computer-based oral testing, such as Jeong
(2003). Methods of descriptive statistical analysis, based on students' responses to the
survey, were used in the examination of students' reactions towards the computer-based
test vis-à-vis the face-to-face test (research questions 1.1. and 1.2). Learners'
responses to open-ended questions in the survey will be examined in the Discussion
section of the current study.
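As a rough illustration of the kind of descriptive summary involved, mean ratings on the 3-point scale can be tabulated per class and test mode. The group labels and scores below are hypothetical, invented purely for illustration, and do not reflect the study's data:

```python
from statistics import mean, stdev

# Hypothetical ratings on the 3-point scale (1 = Low, 2 = Medium, 3 = High)
# for a single survey item (e.g., anxiety), grouped by class and test mode.
ratings = {
    ("Experimental", "computer-based"): [1, 2, 2, 1, 3, 2],
    ("Experimental", "face-to-face"):   [2, 3, 2, 2, 3, 2],
    ("Control",      "computer-based"): [2, 3, 3, 2, 2, 3],
    ("Control",      "face-to-face"):   [2, 2, 3, 2, 3, 2],
}

# Print mean, standard deviation, and sample size per group and mode.
for (group, mode), scores in ratings.items():
    print(f"{group:12s} {mode:15s} "
          f"M = {mean(scores):.2f}, SD = {stdev(scores):.2f}, n = {len(scores)}")
```

Comparing such means across the two classes and the two test modes is the basic shape of the analysis behind research questions 1.1 and 1.2.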
As previously mentioned, the participants in this study belonged to two classes,
which corresponded to the Control class and the Experimental class. The two classes
received different treatments, in order to investigate whether or not the Experimental
class showed an improvement over the traditional class in terms of oral performance when
taking computerized oral activities throughout the instruction period (as is further
explained in detail below). Since the two classes received a distinct treatment,
research question 1.1. examines whether or not there is a difference in students'
reactions towards the two types of final oral exams in each class. As we have seen, this
research question also investigates if students' opinions differ depending on whether
the surveys were completed before or after the tests. It is worth noting that the
anxiety under analysis here is anxiety towards the two different types of oral exams,
rather than towards language learning in general.
Accordingly, in addition to students’ reports on their perspectives towards the new and
the traditional method, in order to study the variable of anxiety in both tests more in
depth, and provide a richer understanding of the factors involved in this variable in oral
testing, the sources of anxiety found in both types of oral exams were also observed.
Hence, in order to answer research questions 2 and 2.1, concerned with the
anxiety factors and feelings that are more involved in one type of test in comparison to
the other, an additional survey was used. This survey is adapted from the FLCAS
(Foreign Language Classroom Anxiety Scale) found in Horwitz et al. (1986) and can be
found in Appendix F.
To fit the purposes of this research project, some items from the FLCAS were
either modified or deleted. The items deleted correspond to those that were not
considered relevant to the measure of students’ anxiety in the final oral exams. Other
items, such as those concerning the students’ feelings towards certain aspects in the
classroom, were modified, substituting the words “in the classroom” with “in the
computer-based” or “in the face-to-face oral exam”. The researcher also included a
question on students' anxiety towards each test in comparison to their written exams.
Although this study did not
focus on a comparison between oral and written exams, the instructor was curious about
this aspect due to reports presented in some of the reviewed literature approaching this
matter (Zeidner & Bensoussan 1988; Scott 1986). The anxiety factors that are more
involved in one test in comparison to the other were thus investigated (research
question 2), followed by the question of whether or not there is also a difference
between classes in terms of students’ responses to this survey (research question 2.1).
Here, it is worth highlighting the fact that the anxiety factors examined in these Research
Questions are limited to the ones presented in the anxiety scale used in the current
study. At this point, it is also relevant to mention that three students from this
research sample had missing responses on some of the questions included in this
survey. These learners' ratings on the questions they did answer were included in the
analysis.
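Keeping respondents with partial data amounts to computing each item's statistics over the available responses only. The following minimal sketch shows this available-case averaging; the response values are made up for illustration and are not the study's data:

```python
# Hypothetical survey matrix: rows are students, columns are anxiety-scale
# items; None marks a missing response.
responses = [
    [3, 2, None, 1],   # student with one missing item
    [2, 2, 3,    2],
    [None, 1, 2, 2],   # student with one missing item
    [3, 3, 3,    2],
]

def item_means(matrix):
    """Per-item mean computed over available (non-missing) responses only."""
    n_items = len(matrix[0])
    means = []
    for j in range(n_items):
        vals = [row[j] for row in matrix if row[j] is not None]
        means.append(sum(vals) / len(vals))
    return means

print(item_means(responses))  # each mean ignores that item's missing entries
```

This way, a respondent's missing item lowers the sample size for that item alone, rather than excluding the respondent from the whole analysis.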
As mentioned before, two different groups of participants took part in the current
research project. The two groups consisted of the Control class, where students
completed oral activities in the classroom during the quarter, and the Experimental
class, where students completed the same oral activities in a computer lab, also during
class time. All these activities were already part of the course syllabus and meant to
be taken in class. The different instructional treatment the two classes received will
be explained in detail in the section below.
One of the goals in this study was thus to observe the differences in speaking
abilities between the two classes, examining whether the Experimental class, taking oral
activities through the computer during the instruction period, showed an improvement
over the Control class at the end of the quarter, in terms of oral performance. As
previously mentioned, aspects such as the fact that the computerized oral activities
allowed for students' reception of individualized oral feedback, as well as the fact that
students could listen to their speech before submitting it (Tognozzi & Truong 2009),
might contribute to such an improvement. In order to examine whether the Experimental
class showed an improvement over the Control class, a computer-based pre-test, at the
beginning of the quarter, and a post-test, at the end of the quarter, were administered
to the students in both the Control and the Experimental classes.
test was completed independently of the computer-based final oral exam taken at the
end of the quarter, since it was used for a different type of analysis in the current study.
The pre- and post- tests were thus exactly the same exam, taken at different
times. Therefore, the test had to be designed in a way that students were able to answer
the same questions at the beginning and at the end of the course. The researcher thus
created one oral situation, according to the content of the syllabus, and included
questions associated with topics that were tied to the Portuguese language class students
had taken in the previous quarter and that were also taught and developed in the
Portuguese language course students were currently enrolled in. A copy of the pre-
/post- test, as well as a list of the grammar and vocabulary topics included in this
exam, is provided in the appendices.
Both classes took the pre- and post- test during class time, in a computer lab. The
researcher was also present at the time these tests were administered and instructions
on how to take the test were presented to the students prior to the test’s administration
(see Appendix H). These tests were taken in pairs and required one student to ask
questions and another to answer them. Learners were thus asked to trade roles, in order
to ensure that both students would produce the same amount of speech. This also assured
their opportunity to answer all the questions in the test. Moreover, learners were able
to access the test when signing into their accounts on MyPortugueseLab. Since the pre-
and post- tests were taken in pairs, learners were first asked to sign into only one of
the students' accounts and then into the other student's account on the program. As
stated on the instructions form, the student signed into his/her account would be the
one answering the questions in the oral situation. Each pair of students shared one
computer in the computer lab and one set of headsets.
2.5.2.1. Differences in the instructional treatment for the Control and Experimental class
After the conclusion of the oral pre-test, taken by both classes at the beginning of
the quarter, the Experimental class completed five computer-based oral activities, in
pairs, once every two weeks. The timing of these oral activities was aligned with the
way they were already organized in the traditional syllabus. All oral tasks were
also taken through MyPortugueseLab, with a duration of approximately five minutes
each, and were based on prompts found in the oral situation sections of the textbook
(Appendix I). The majority of these tasks correspond exactly to the same prompts found
in the textbook, whereas a few of them were slightly modified by the researcher with
the addition of one or two more questions in the prompt, in order to include a relevant
topic from the course content.
The L2 Portuguese learners in the traditional class completed the same five oral
activities throughout the quarter, as regular pair-work activities in the classroom.
In order to make both groups as equal/comparable as possible, all oral tasks were
completed during class time, either in the regular classroom (for the Control class) or in
a computer lab (for the Experimental class). Moreover, since these oral situations were
also taken in pairs and required one student to ask questions and another student to
answer those questions, learners in both classes were also asked to trade roles, in order
to ensure they would get the same type of oral practice every time. The instructions for
the computerized oral activities were similar to those presented for the pre-/post- test
(Appendix H). Here, students also shared one computer and one set of headsets in the
computer lab, and were able to access the oral activities on MyPortugueseLab.
As indicated in the literature reviewed, the fact that students can listen to their
speech before submitting it, and the fact that the program allows students to receive
individual oral feedback when taking computer-based oral activities, have been pointed
out as advantages in previous studies (Tognozzi & Truong 2009; Kenyon & Malabonga
2001; Lee 2005). Since the learning platform used for the computerized oral activities
included these features, students in the Experimental class were able to listen to their
speech after recording the conversation and were also able to receive individual
feedback from the instructor. In order to, once again, make the two groups as comparable
as possible, in terms of the type of instruction received, the instructor was asked to
provide the same type of feedback to both classes, whenever students performed these
oral activities. This feedback was thus given to students in both classes in the form of
“direct correction”, where the explicit form is given to the student whenever an error
has been committed (Ellis 2009). As Ellis (2009) explains, in this type of feedback “the
corrector indicates an error has been committed, identifies the error and provides the
correction” (p.9). The author also refers to other oral feedback strategies that can be
used by the instructor, such as “to start with a relatively implicit form of correction (e.g.,
simply indicating that there is an error) and, if the learner is unable to self-correct, to
move to a more explicit form (e.g., a direct correction).” (p.14). Since the Experimental
class would not interact with the instructor whenever taking the computer-based oral
activities, this feedback strategy of moving from a more implicit to a more explicit form,
was harder to be adopted by the instructor when providing oral feedback through the
computer. For this reason, a feedback strategy in the form of “direct correction” seemed
Ellis (2009) also claims that corrective feedback can be both immediate and delayed. As the author explains, written feedback, for instance, is almost invariably delayed. In the case of the current study, the Experimental class received delayed feedback, since it was given through the computer, at a later time, after students had completed the activities.
Hence, the only difference in the feedback received by both groups was the fact that, while the Experimental class received delayed and individual feedback through the computer, the Control class received immediate, non-individualized feedback in the classroom. Students in the traditional class could not receive individual feedback due to the fact that it is not possible for the instructor to listen to each student’s production of oral speech individually in the classroom context (Tognozzi & Truong 2009).
The difference in the treatment received by the two classes was inspired by the
2.5.2.2. Voice recording tool used for the computer-based oral activities taken
The voice-recording tool used for the technology-mediated oral activities was part of MyPortugueseLab, the online learning platform that accompanies the Elementary Portuguese textbook used in the course.
In order to foster students’ oral proficiency, MyPortugueseLab integrates an oral
practice section, where, in addition to the oral tasks that are already provided in the
program, instructors can create their own oral assessment activities. As highlighted
above, the oral section allows users to listen to their messages before submitting them
and the instructors to provide oral feedback. Both the students and the instructor
teaching both sections of the same Portuguese language course received training at the beginning of the quarter to ensure their familiarity with the voice technology tool they would be using.
As stated in the introduction chapter, the current study aims to examine whether
or not there is any improvement in students’ oral performance based on the linguistic
aspects that are formally taught and emphasized in the language program, such as
students’ overall communication were also assessed in order to measure more global
works providing examples of oral testing rating scales in L2 (Satar & Özdener 2008;
Tognozzi and Truong 2009; Larson 2000; Liskin-Gasparro 1983; Omaggio Hadley 2001).
As in Satar & Özdener’s (2008) study, the scale in this project was designed
specifically for the research pre- and post-test, involving a description of the ability
parameters for each rating level in each question/prompt of the oral situation created
for the test, in an attempt to identify whether there was any language development in
speaking, in particular with reference to topics covered in class. This scale examines
functions, since these are the linguistic aspects formally taught and emphasized in the
course where the new technology oral assessment tool is being applied. The third
category consists of the necessary communicative functions needed to complete the task
rating scale, in order to also examine whether the general ideas conveyed by the learner
in the target language are easily understood by the examiner. This item of evaluation is
also part of the scale since, apart from the accuracy of the constituent parts of the
students’ speech, aspects such as students’ willingness to take risks and their display of communicative ease within the context are valued throughout the course, based on a communicative language teaching approach. For the assessment of these variables, a 42-point rating scale, ranging from 0 to 4 on each question/prompt, was used, and 4 Portuguese language instructors, who are all native speakers of the target language and have also taught this class before at the same institution, used this evaluation scale in order to assess students’ performance and ensure reliability of the results. A description
of the ability parameters for each rating level (from 0 to 4) on each question/prompt of the pre- and post-test oral situation was provided, allowing for the raters’ precise evaluation of the aspects being analyzed in the test. A copy of this rating scale can be
seen in Appendix J.
2.6. Summary
study, describing the participant group and the instruments used to gather data for the
mediated vs. the face-to-face final oral exam, and for the examination of whether there
activities throughout the instruction period. Detailed information about data analysis
CHAPTER 3: DATA ANALYSIS AND RESULTS
3.1. Introduction
The purpose of this chapter is to provide a description of the data analysis and
questionnaire results for each research question in the current study. It will be
organized around two central sections corresponding to the two main research topics.
The first section reports on students’ perceptions and affective factors, with a
focus on anxiety, towards the two different modes of oral testing. This section will first present students’ general perceptions towards the computer-based test vis-à-vis the face-to-face oral testing, and it will address students’
levels of overall anxiety, perceived difficulty and perceived fairness in testing their
speaking abilities, as well as their comfort with the technology used in the computer-
based oral exam. The presentation of these results will be followed by an examination of
the anxiety factors and feelings (measured in the anxiety scale) that are more involved
in one type of oral exam in comparison to the other. Results from the first section will
thus allow us to analyze and reflect on the affordances of technology in oral testing,
particularly regarding the exam format used, in terms of students’ perspectives and
affective factors.
The second section is devoted to the examination of whether or not there is any
resemble the paired computer-based final oral testing format used in the current study.
In order to investigate this aspect, the experimental group and the control group will be
performed in section 2 will thus allow us to reflect on whether or not there are any affordances in the use of technology in this type of oral assessment activity, taken
abilities.
This section will examine students’ perceptions and affective factors towards the
computer-based and the face-to-face final oral exam, with a focus on anxiety. It should
first be mentioned that the design of the questionnaire on learners’ overall reactions
towards the two modes of oral testing did not allow for a fine-grained discrimination
between the variables included in this survey. Throughout the process of analyzing
students’ reported data in the survey concerned with learners’ overall reactions towards
the two modes of oral testing, the computation of Pearson correlation tests revealed that
levels for “Anxiety” and “Difficulty” were highly correlated. Descriptive exploratory data
analyses were thus used in this section in order to discuss the results based on learners’
self-reported data (on a 3-level scale). As will be further discussed in the last chapter of this dissertation, the summary statistics used in the current investigation can form the foundation for more complex computations and analyses, through the use of an instrument that keeps the variables more independent of one another, because, understandably, there is usually a correlation between increased anxiety and perceived difficulty of the test. Suggestions on how to hold these factors separate will be offered in the final chapter.
In order to calculate summary statistics in this study, all collected data were analyzed using the R software. A descriptive data summary for these analyses will thus be
presented in tables and figures below. Samples of students’ qualitative reactions and a
discussion of these data will be considered in the Discussions and Conclusion chapter,
in order to further interpret the results yielded from the quantitative responses.
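As an illustration of the summary statistics described above, the per-cell means can be sketched as follows. The individual ratings are hypothetical, constructed only so that the two cell means match the Control-group anxiety values reported below (1.69 before the course, 1.38 after); the sketch is in Python rather than R.

```python
from statistics import mean

# Hypothetical self-reported ratings on the 3-point scale (1=low, 2=medium, 3=high).
# Keys: (class, test, time); values: one rating per student (13 students).
anxiety = {
    ("Control", "Computer", "before"): [2, 2, 1, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2],
    ("Control", "Computer", "after"):  [1, 1, 2, 1, 2, 1, 1, 2, 1, 2, 1, 1, 2],
}

# Summary statistics of the kind reported in the tables: one mean per cell.
summary = {cell: round(mean(ratings), 2) for cell, ratings in anxiety.items()}
for (group, test, time), m in summary.items():
    print(f"{group:8} {test:9} {time:6} mean={m:.2f}")
```

Each cell of the descriptive tables is simply the mean of the ratings collected for one class, one test type, and one administration time.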
3.2.1. Students’ general perceptions towards the computer-based test vis-à-vis the face-to-face final oral exam
In this first part of the analysis, students’ general perceptions towards the face-to-face and the computer-based final oral exams were investigated. Here, I attempt to answer research questions 1.1 and 1.2.
The first one is concerned with whether or not there were any differences in learners’ reactions depending on the treatment received by the control and the experimental group, and also depending on whether the survey was taken before or after completing both modes of oral testing. Research question 1.2 focuses on all students’ responses towards the face-to-face and the computer-based test at the end of the course.
3.2.1.1. Students’ responses in the Control and the Experimental class, before and after the course
interpretation of students’ perspectives and affective factors towards the two oral exam
formats and, therefore, for a deeper reflection on the affordances of technology on oral
achievement testing.
The observation of students’ responses in the Experimental class vs. the Control
class takes into account the fact that the Experimental class took oral activities
throughout the quarter and the Control class did not, which might influence students’ reactions. Moreover, students’ reactions depending on the time the survey was taken are also assessed, since learners responded to the questionnaire before and after completing
both types of oral exams. As mentioned in the methodology chapter, students completed
this survey at the beginning and at the end of the course, revealing their expectations
and final opinions on both modes of oral testing. Hence, this examination allows us to
understand whether or not there was a change in students’ opinions on both tests in
terms of overall levels of anxiety, perceived difficulty and perceived fairness in testing
their speaking abilities, before and after taking the tests, on a 3-point scale of 1- “low”, 2-
“medium” and 3- “high”. Results for the level of comfort with the technology used in the
computer-based final oral exam, using the same scale, will also be presented. Means
were thus computed for students’ ratings in all quantitative items included in the
affective questionnaire.
3.2.1.1.1. Overall anxiety, perceived difficulty and perceived fairness in testing
Table 1 below displays the values across classes on their reactions regarding
levels of anxiety, perceived difficulty and perceived fairness, before and after taking both
types of final oral exams (according to a 3-point scale of 1- “low”, 2- “medium” and 3- “high”). To allow for a comparison across the two different classes and between learners’ expectations and final opinions towards the exams, the mean ratings presented in this table are also displayed in Figure 1 below.
Table 1. Mean values across classes before and after taking the face-to-face (F2F) and the computer-based oral exams, regarding levels of anxiety, perceived difficulty and perceived fairness
Figure 1. Results across classes before and after taking the face-to-face and the computer-
based oral exams regarding levels of anxiety, perceived difficulty and perceived fairness.
Glancing at Table 1, we can observe that, overall, the mean ratings in
students’ responses towards the Face-to-face test (F2F in the graphs) are higher than
the ratings towards the computer-based test (Computer), with respect to the anxiety,
difficulty and fairness variables included in the questionnaire, both in the Control and
the Experimental class, at the beginning and at the end of the course.
Overall anxiety
As shown above, students in both classes reported lower levels of anxiety towards the computer-based test in comparison to the face-to-face one.
When taking a closer look at the mean ratings for the anxiety levels, we can see
that the Control group’s ratings towards both tests before the course (Computer mean=
1.69; F2F mean= 2.38) are higher than after the course (Computer mean= 1.38; F2F
mean= 2.00).
This aspect was also observed in the Experimental group, as this tendency is also
revealed in the students’ responses before the course began (Computer mean= 1.86; F2F
mean= 2.43) and after taking the oral exams at the end of the quarter (Computer mean=
Another similar trend between classes is the fact that, although learners tend to
show higher levels of anxiety towards both types of oral exams before the course in
comparison to after the course, the mean ratings towards the computer-based test, both
before and after completing it at the end of the quarter, fall in between the “low” and the “medium” level interval in the survey (means ranging from 1.00 to 2.00). In the case of the Face-to-face test, the mean ratings obtained in students’ responses, also before and after the
course, fall in between the “medium” and “high” level interval in the questionnaire
(mean ranging from 2.00 to 3.00). However, it is interesting to note that the anxiety ratings before and after the course, for both tests, were slightly higher in the Experimental class.
To summarize, overall anxiety results in the Control and the Experimental class reveal that students tended to feel less anxious in the computerized test, in comparison to the traditional method, both before and after taking the tests at the end of the course. Learners in both classes also tend to report lower levels of anxiety towards the two different types of exams after completing them, at the end of the instruction period. Students in the Control and Experimental class thus tend to consider both tests to be less anxiety-evoking than expected. Furthermore, we were also able to observe that the Experimental
Perceived difficulty
similar to the one found in their reactions towards anxiety. As seen in Table 1, students
in the Experimental and the Control class reported lower ratings towards the computer-
based test, in comparison to the traditional method, before and after taking both types
of final exams at the end of the quarter. Both groups thus show a tendency to consider the computer-based test less difficult than the face-to-face one.
When looking specifically at the Control group’s responses before and after
taking the tests, we can also observe a decrease in students’ ratings regarding the
difficulty level of both oral exams from before (Computer= 1.69; F2F=2.00) to after
completing these tests at the end of the course (Computer= 1.54; F2F= 1.69).
With regard to the reactions in the Experimental class, ratings towards both tests
before the course (Computer= 2.00; F2F= 2.50) are also higher than after the course
Finally, when looking at the mean ratings for students’ responses regarding
difficulty, we can observe that, in comparison to the Control group, ratings in the
Experimental class were slightly higher both before and after the course.
As with the overall levels of anxiety, learners in both groups tend to believe that the computer-based
oral test is less difficult than the Face-to-face one, both before and after taking the exams.
Students in both classes also tend to consider that both of these tests are less difficult
than expected. In addition, although the two classes have quite similar responses
regarding levels of difficulty towards both types of oral exams, before and after taking
them at the end of the course, ratings in the Experimental group tend to be somewhat
higher in general.
Perceived fairness
speaking abilities, Table 1 shows that learners in both the Experimental and the Control
class revealed a tendency to consider the computer-based test less fair than the face-to-face method, both before and after taking the face-to-face and the computer-based oral
exams.
With the Control group in particular, we can observe that the average ratings
towards both types of final oral exams before the course (Computer= 2.15; F2F= 2.62)
are lower than ratings after the course (Computer= 2.54; F2F= 2.77).
For the Experimental group, the mean ratings towards the computer-based test
also tend to be lower before the course (Computer= 2.29) in comparison to after the
course (Computer= 2.36). However, this group’s responses towards the face-to-face test
indicate equal mean ratings in terms of fairness, before and after taking this type of oral
exam (F2F= 2.64). Therefore, the Experimental group’s attitudes towards the face-to-
face test in terms of fairness did not really change from their expectations to their final
opinions.
Furthermore, Table 1 indicates that before the course, the control group shows
slightly lower mean ratings towards both tests (Computer= 2.15; F2F= 2.62) in
comparison to the Experimental group (Computer= 2.29; F2F= 2.64), whereas after the
course, this group shows higher ratings in both tests (Computer= 2.54; F2F= 2.77) in
It is also interesting to note that, in general, the highest mean ratings in students’ responses in the questionnaire given before and after the course can be observed in their responses towards the fairness variable. Taken together, results in students’
responses in terms of Fairness indicate that learners in both classes considered the
computer-based oral exam to be less fair than the face-to-face one before and after
taking both tests. Additionally, both classes thought that the computer-based test was
fairer than what they expected (as the mean ratings towards this test were lower before than after the course).
Regarding the face-to-face oral exam, the Control class also thought this type of test was more fair than expected, but opinions in the Experimental class towards the F2F test remained unchanged.
Furthermore, a slight difference we find between classes is the fact that, before the course, the Control group’s expectations towards both tests in terms of fairness are lower than the Experimental group’s. However, after the course, results show that students in the Control group tended to consider both oral exams more fair than learners in the Experimental group did.
Results on learners’ reports regarding their level of comfort with the technology
used in the computer-based final oral exam are presented below. Table 2 shows the
means computed for students’ reactions in the Control and the Experimental class,
before and after taking the tests. As mentioned above, students’ responses on this
Table 2. Mean values across classes on level of comfort with the technology used before and
after taking the computer-based oral exam
Comfort with the technology used       Before Course    After Course
  Control (Computer)                        2.62             2.46
  Experimental (Computer)                   2.21             2.57
As Table 2 displays, the pattern of students’ responses regarding their level of comfort with the technology used in the computer-based final oral exam slightly differs
from one class to the other. First, we can observe that the mean ratings for the Control
group before the course (2.62) are somewhat higher than after the course (2.46),
whereas in the Experimental group, ratings before the course (2.21) are lower than after
the course (2.57). Additionally, we note that, before the course, mean ratings in the
Control group (2.62) are higher than in the Experimental group (2.21). Nonetheless, after the course, students’ ratings in the Control group (2.46) are slightly lower than in the Experimental group (2.57).
Overall, results regarding students’ responses towards the level of comfort with the technology used in the computer-based test show that learners in the Experimental group felt more comfortable than expected, while learners in the Control group felt less comfortable than expected.
In addition, students’ reports before the course reveal that, on average, the Control group expected to feel more comfortable than the Experimental group but, after the course, there is a slight tendency for the Experimental group to feel more comfortable than the Control group.
3.2.1.2. Students’ overall reactions towards both types of oral exams at the end of
the quarter
After providing a descriptive summary for students’ reactions towards the new
computer-based vis-à-vis the face-to-face final oral exams across classes, and taking into
account students’ expectations (before the course) and final opinions (after the course)
towards the two modes of oral testing, I will examine all students’ perceptions after
taking both types of oral exams at the end of the quarter (after the course). Thus, in order
to answer research question 1.2, concerned with all students’ reactions regarding the
two types of oral exams at the end of the instruction period, learners’ responses were
Table 3 below displays the mean values (on a 3-level scale ranging from 1- “low”
to 3- “high”) for all students’ responses towards the final computer-based and the face-
to-face oral exams, regarding levels of anxiety, difficulty and fairness in testing their
Table 3. Mean values for all student’s responses regarding levels of anxiety, perceived
difficulty and perceived fairness, after taking the face-to-face (F2F) and the computer-
based oral exams at the end of the quarter
                 After Course
              Computer     F2F
  Anxiety       1.40       2.18
  Difficulty    1.59       1.85
  Fairness      2.44       2.70
Figure 2. All students’ reactions to both types of oral exams, regarding levels of anxiety,
perceived difficulty and perceived fairness, at the end of the quarter
Overall anxiety
Taking into account that the 3-level scale presented in the questionnaire
corresponds to 1- “low”, 2- “medium” and 3- “high”, Table 3 shows that the mean ratings
towards the computer-based test are closer to the “low” category in the survey
(mean=1.40). In the case of the face-to-face test, the average of learners’ ratings towards
anxiety falls into the “medium” category (mean=2.18). Hence, when examining all
students’ opinions regarding overall anxiety levels towards both types of final oral exams, we can observe students’ tendency to rate their levels of anxiety towards the computer-based test as lower than towards the face-to-face one.
Perceived difficulty
With respect to all learners’ final opinions on levels of difficulty, Table 3 shows
that the mean ratings towards both types of oral exams are closer to the “medium”
category (Computer=1.59; F2F=1.85). However, the mean obtained for the face-to-face test reveals that, on average, there is a slight tendency to perceive the face-to-face test as more difficult than the computer-based one, after taking both types of exams at the end of the quarter.
Perceived fairness
speaking abilities, we can observe in Table 3 that the examinees’ ratings towards the
face-to-face final oral exam (F2F= 2.70), tend to lean towards the “high” numerical
highly fair. Although learners’ responses towards the computer-based also indicate a
tendency to consider this test quite fair (Computer=2.44), higher ratings towards the traditional F2F oral testing method indicate examinees’ propensity to consider this type
3.2.1.2.2. Comfort with the technology used
Regarding the level of comfort with the technology used in the computer-based oral exam, when taking into account all students’ responses at the end of the instruction period, the mean rating for students’ answers to the final opinions survey corresponded to 2.5, which falls in between the “2=medium” and the “3=high” level categories included in the survey. This result suggests that, on average, students tend to feel fairly comfortable with the technology-mediated oral assessment tool used in the final oral exam.
3.2.2. Examination of anxiety factors and feelings that tend to be more involved in one type of oral exam in comparison to the other
Having examined students’ overall reactions towards the computer-based and the face-to-face final oral exams, this section will now focus on
the question of anxiety in a comparison between the two testing formats. This analysis
thus concentrates on the examination of the anxiety factors and feelings, presented on
the anxiety scale, that tend to be more involved in one type of oral exam in comparison
to the other.
As mentioned in the methodology chapter, after taking both types of final oral
achievement exams at the end of the quarter, examinees completed a 5-point Likert-type
questionnaire, based on the Horwitz et al. (1986) FLCAS (Foreign Language Classroom
Anxiety Scale). As previously explained, the anxiety scale contains the following set of response options: -2 = Strongly disagree; -1 = Disagree; 0 = Neither agree nor disagree; 1 = Agree; 2 = Strongly agree. A total of 13 statements/anxiety factors were included in the
scale, with each statement being presented individually in the context of the Face-to-face and of the computer-based oral exam. Data from 3 students presented missing responses on some of the questions in the survey; these learners’ ratings on the answered questions were included in the analysis.
In order to determine the anxiety factors and feelings that are more involved in
one type of test in comparison to the other, all students’ reactions towards the
statements provided on the anxiety scale were examined. As explained above, each item
was presented in the context of the final oral exam taken in the lab (Computer) and the
traditional test conducted in the instructor’s office (F2F). Descriptive analyses were
performed by looking at the differences in students’ answers between the two test types.
These values were thus calculated by subtracting, for each item, the mean rating for the computer-based exam from that of the F2F oral exam. These differences were then sorted from
smallest to largest. The factors and feelings considered as tending to be more involved
in one test in comparison to the other were thus the ones displaying the largest
disparities between students’ responses towards the F2F and the computer-based test.
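The difference-and-sort procedure just described can be sketched as follows. The item numbers follow Table 4, but the per-item mean ratings are hypothetical, illustrative values only, and the sketch is in Python rather than R.

```python
# Hypothetical per-item mean ratings on the -2..2 agreement scale; values are
# illustrative, not the study's data (item numbers follow Table 4).
f2f      = {1: 0.4, 8: -0.6, 12: 0.1, 6: 1.0, 10: 1.2}
computer = {1: 0.3, 8: -0.7, 12: 0.0, 6: 0.1, 10: 0.2}

# For each item, subtract the computer-based mean from the F2F mean,
# then sort items from smallest to largest difference.
diffs = {item: round(f2f[item] - computer[item], 2) for item in f2f}
ordered = sorted(diffs.items(), key=lambda kv: kv[1])
print(ordered)  # items at the end show the largest F2F-vs-computer gap
```

Items that end up at the tail of the sorted list are the anxiety factors most strongly associated with one testing mode over the other.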
Figure 3 below displays the mean ratings of all learners’ responses to the items
included in the anxiety scale, towards the F2F and the computer-based oral exam
formats, sorted by the size of the difference between the ratings on the two modes of
oral testing. These items are coded in numbers 1 to 13 and students’ assessments
towards them can be observed in the numbers ranging from -2 (strongly disagree) to 2
(strongly agree). For a clear understanding of the anxiety factors and feelings entered in
Figure 3, a description of each item, and the corresponding number, is provided in Table
4 below.
Table 4. Items included in the anxiety scale
Number Item
1 I never feel quite sure of myself when I’m speaking in the oral exam in the
instructor’s office/ in the LAB
3 I start to panic when I have to speak without preparation in the oral exam in
the instructor’s office/ in the oral exam in the LAB
5 In the oral exam in the instructor’s office/ in the oral exam in the LAB, I can get
so nervous I forget things I know.
7 Even if I am well prepared for the oral exam in the instructor’s office/ in the
LAB, I feel anxious about it.
8 I often feel like not going to my oral exam in the instructor’s office/ in the LAB
9 I can feel my heart pounding when I'm taking an oral exam in the instructor’s
office/ in the LAB
10 I feel more tense and nervous in my oral exam in the instructor’s office/ in the
LAB than in my written exams.
11 I get nervous and confused when I am speaking in the oral exam in the
instructor’s office/ in the oral exam in the LAB
12 I get nervous when I don't understand every word in the prompts for the oral
exam in the instructor’s office/ for the oral exam in the LAB
13 I get nervous when I have to perform oral situations for which I haven't
prepared in advance, in the oral exam in the instructor’s office/ in the oral
exam in the LAB
Figure 3. All students’ responses towards the computer-based and the face-to-face test on
the items included in the anxiety scale.
As illustrated in Figure 3, all the anxiety factors and feelings included in the anxiety scale received higher average ratings when evaluated in the context of the face-to-face test.
The group of items in the anxiety scale displaying the smallest differences in students’ mean ratings between the two tests are number 12 (“I get nervous when I don't understand every word in the prompts for the oral exam”), number 8 (“I often feel like not going to my oral exam”), and number 1 (“I never feel quite sure of myself when I am speaking in the oral exam”). Hence, with respect to these anxiety factors and feelings, examinees’ opinions towards the final oral exam did not differ much depending on the testing format.
Moreover, the mean ratings towards item 12 and item 1 tend to lean closer to the “no
opinion” category in the survey, for both types of oral exams. In the case of statement 8
(“I often feel like not going to my oral exam”), in the context of the F2F test, this is the only item in the anxiety scale falling in between the “disagree” and the “no opinion” categories.
As pointed out before, items in Figure 3 are ordered according to the size of the difference between ratings. A somewhat larger distinction between students’ answers on the two types of exams can be observed in another group of items that are also fairly similar in terms of variation in learners’ mean
ratings concerning the two testing modes. These items include number 4 (“I worry about
the consequences of failing my oral exam”), 13 (“I get nervous when I have to perform
oral situations for which I haven't prepared in advance”) and 3 (“I start to panic when I
have to speak without preparation in the oral exam”). However, as Figure 3 shows, the
distinction in students’ opinion between one test and the other regarding these items is
still relatively small. An interesting aspect to take into account is the fact that item 13 is
the one displaying the highest mean ratings in students’ responses to both the F2F and the computer-based test. The differences between the two modes of oral testing become more evident when observing the following items:
11 (“I get nervous and confused when I am speaking in the oral exam”)
9 (“I can feel my heart pounding when I'm taking the oral exam”)
5 (“In the oral exam (…) I can get so nervous I forget things I know”)
7 (“Even if I am well prepared for the oral exam (…), I feel anxious about it”)
6 (“It embarrasses me to talk in front of the teacher/computer in my oral exam”);
10 (“I feel more tense and nervous in my oral exam (…) than in my written
exams”)
The larger differences between students’ ratings on the two modes of oral testing
found in these items thus reveal a greater tendency for examinees to consider these
anxiety factors and feelings as more involved in the F2F test, in comparison to the
computer one.
Within this group of items, the statements included in the anxiety scale numbered 6 (“It embarrasses me to talk in front of the teacher/computer in my oral exam”) and 10 (“I feel more tense and nervous in my oral exam in the instructor’s office/ in the LAB than in my written exams”) are the ones presenting the
largest differences in students’ reactions towards one exam and the other, containing
similar values in terms of variation in students’ opinions. These anxiety factors thus
seem to be considerably more involved in the F2F test, revealing a stronger tendency for
students to feel more embarrassed in the presence of the instructor than in front of the
computer, when taking the final oral exam. As we can see, students also show a
propensity to feel higher levels of tension and nervousness in the F2F oral exam than in
scale tend to fall into the “disagree” (-1) and “agree” (1) categories in the survey, which does
Overall, the present outcomes are in line with the ones presented in the former
investigation, where students’ perspectives towards both types of tests were analyzed
regarding aspects such as perceived overall anxiety, difficulty and fairness, as well as in
terms of their level of comfort with the technology used in the computer-based test. As
discussed above, students’ mean ratings towards both oral testing methods concerning
their overall anxiety levels were also higher in the context of the F2F test. This was the
case when observing all students’ responses both at the end of the course and when
examining learners’ reactions in the Experimental and the Control class separately.
These observations are also relevant for the current investigation of the anxiety factors and feelings that tend to be
more involved in one testing format in comparison to the other. Once again, the fact that
students in the sample belong to two different classes, receiving a different treatment
throughout the course, might influence their affective reactions towards both methods
of oral testing.
Figure 4 below replicates the structure of Figure 3, except that in this case it
displays the difference between classes in terms of learners’ responses regarding both
final oral exams in general. As in Figure 3, items are sorted by the size of the difference
between students’ ratings towards the anxiety factors and feelings included in the
questionnaire. Higher mean ratings for the Experimental class are presented on the left,
and higher mean ratings for the Control class on the right.
Figure 4. Differences between classes, regarding students’ ratings towards the items
presented on the anxiety scale.
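The ordering used in Figures 3 and 4 — survey items sorted by the size of the gap between two groups' mean ratings — can be sketched as follows. This is an illustration only: the function name and the sample ratings are hypothetical, and a numeric agreement scale is assumed.

```python
from statistics import mean

def sorted_item_differences(ratings_a, ratings_b):
    """Order survey items by the gap between two groups' mean ratings.

    ratings_a / ratings_b: dicts mapping item number -> list of
    per-student ratings on a numeric agreement scale (hypothetical data).
    Returns (item, mean_a - mean_b) pairs, largest absolute gap first,
    mirroring how the figures arrange the anxiety-scale items.
    """
    diffs = {
        item: mean(ratings_a[item]) - mean(ratings_b[item])
        for item in ratings_a
    }
    # Sort by absolute difference, descending; the sign shows which
    # group rated the item higher.
    return sorted(diffs.items(), key=lambda kv: abs(kv[1]), reverse=True)
```

The signed difference preserves which group rated an item higher, so items on the "left" and "right" of the figures correspond to positive and negative values respectively.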
As we can observe, classes did not differ much in terms of their predisposition to
experience the anxiety feelings included in the scale. As indicated in Figure 4, the
discrepancy between the two classes’ ratings tends to be quite small, with the mean
ratings towards the factors presented on the anxiety questionnaire either nearly
overlapping (e.g., items number 11 and 12) or presenting a considerably small
difference. Thus, results presented in Figure 4 suggest that the
treatment received by the Experimental group does not tend to have a particular
influence on the way students feel towards the two types of oral testing, in comparison
to the Control class.
The second research question in the current study asked whether there were any
improvements in terms of oral performance when taking part in paired communicative
oral assessment tasks via the computer rather than face-to-face in class, throughout the
course. As explained in detail in the Methodology chapter, a Control class completed
these oral activities in the classroom (the traditional method), while an Experimental
class completed the same oral tasks during class time, in a computer lab.
At this point of the analysis, data from 3 students belonging to the Control class and
1 student from the Experimental class were eliminated, because they had not completed
either the pre- or the post-test. This investigation thus includes 10 students in the
Control class. Raters (instructors at the university where the study took place)
evaluated students’ speech in the pre- and post-test through a 42-point rating scale,
organized around the assessment of categories such as “Grammatical structures” and
“Vocabulary” (Appendix J).
In order to understand the extent to which the different evaluators arrived at the same
scores, inter-rater reliability (IRR)
was tested by using Krippendorff’s Alpha, which has advantages over other IRR
statistics, such as robustness to missing data and to varying numbers of observers and
sample sizes (Hayes & Krippendorff 2007). Alpha was 0.766, which can be considered
to fall within the range of reliability suggested in Krippendorff (2004). As the author
states, “Consider variables with reliabilities between α = .667 and α = .800 only for
drawing tentative conclusions”.
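As an illustration of how interval-scale alpha is computed, a from-scratch sketch follows. The function and the sample ratings are hypothetical and not the study's data; the reliability analysis itself was run with Hayes and Krippendorff's macro.

```python
from itertools import permutations

def krippendorff_alpha_interval(ratings):
    """Krippendorff's alpha for interval data (illustrative sketch).

    ratings: list of units, each unit a list of the scores assigned to it
    by the available raters. Units with fewer than two ratings are
    ignored, which is how alpha accommodates missing data.
    """
    # Keep only units with at least two ratings (pairable values).
    units = [u for u in ratings if len(u) >= 2]
    values = [v for u in units for v in u]
    n = len(values)

    # Observed disagreement: squared differences within each unit,
    # each unit weighted by 1 / (m_u - 1).
    d_o = sum(
        (a - b) ** 2 / (len(u) - 1)
        for u in units
        for a, b in permutations(u, 2)
    ) / n

    # Expected disagreement: squared differences over all ordered
    # pairs of values, pooled across units.
    d_e = sum((a - b) ** 2 for a, b in permutations(values, 2)) / (n * (n - 1))

    return 1.0 if d_e == 0 else 1 - d_o / d_e
```

Perfect within-unit agreement yields alpha = 1; systematic disagreement can push alpha below zero.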
3.3.1. Does the Experimental class show greater improvement than the Control class?
This section examines whether students improved in
terms of oral performance when taking the computerized paired oral communicative
activities throughout the instruction period (the treatment received by the Experimental
class).
Considering the fact that one of the classes could already reveal a higher oral
performance level at the beginning of the instruction period, and that this could possibly
interfere with the interpretation of the results, as a first step in this analysis a t-test was
performed in order to compare the overall pre- and post-test scores in each class (on
a 42-point rating scale). Table 5 below shows the average pre- and post-test
mean scores in both classes and the difference between those means.
Table 5. T-test results for Pre- and Post-test mean scores in both classes
* p < 0.05
When observing the pre-test mean values in both classes, it is possible to notice
that, on average, the Control class produced a higher overall score than the Experimental
class in the pre- test. However, it cannot be concluded that the Control class was
significantly better than the Experimental one at the beginning of the course, as no
statistically different results were revealed in this analysis. Therefore, the two groups
Results for the two classes in terms of improvement in oral production were then
determined by calculating the difference between the pre- and post-test scores (totaled
within certain linguistic areas and overall) for each student. A t-test was also performed,
in order to assess whether the scores in the two classes were statistically different from
each other. Table 6 below thus shows the t-test results for the differences between the
two groups in terms of improvement in oral performance, with respect to the linguistic
variables included in the 42-point pre-/post-test rating scale and total test scores.
Table 6. Comparison between the Control and the Experimental class in terms of
improvement in oral performance

Linguistic variable       Control class improvement   Experimental class improvement   P-value
Grammatical structures    1.80                        1.19                             0.57
Vocabulary                0.73                        0.88                             0.76

* p < 0.05
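The group comparison behind Table 6 can also be approximated without distributional assumptions by a permutation test on the per-student gain scores. The dissertation itself used t-tests; the sketch below, with made-up gain scores, only illustrates the logic of asking how often a chance re-labeling of students produces a mean difference as large as the observed one.

```python
import random
from statistics import mean

def permutation_p_value(group_a, group_b, n_perm=10000, seed=0):
    """Two-sided permutation test on the difference of group means.

    Shuffle the pooled scores, re-split them into groups of the original
    sizes, and count how often the shuffled mean difference is at least
    as extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(mean(group_a) - mean(group_b))
    pooled = list(group_a) + list(group_b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:len(group_a)]) - mean(pooled[len(group_a):]))
        if diff >= observed:
            hits += 1
    # Add-one correction keeps the estimate strictly positive.
    return (hits + 1) / (n_perm + 1)
```

A p-value well above 0.05, as with the gains in Table 6, would again indicate no detectable difference between the two classes.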
As shown in Table 6, the Experimental class displays higher oral improvement values in
linguistic variables such as “Vocabulary” and “Overall communication”, and the Control
class shows higher oral improvement in variables such as “Grammatical structures”;
however, these results did not differ statistically (p > 0.05). Furthermore, although the
average overall gain in the Control class (Total score) was 5.13, whereas in the
Experimental class it was 3.88, this analysis did not find statistically significant
differences between the two groups’ oral production gains in terms of overall test scores
either. These results thus suggest that the treatment received by the Experimental class
might not play a role in students’ oral improvement throughout
the course. A reflection on some of the reasons that may account for the outcomes
obtained in this section will be presented in the Discussions and Conclusion chapter.
In sum, the analysis of students’
reactions towards both types of oral exams revealed examinees’ general tendency to
consider the computer-based final oral exam as less anxiety evoking and less difficult, in
comparison to the face-to-face test. We were also able to observe that, despite learners’
positive reactions towards the computer-based test in terms of levels of anxiety and
difficulty, students also tended to define this test as less fair in terms of testing their
speaking abilities. These aspects were observed when examining all students’ responses
together, after completing both types of oral exams at the end of the quarter, and when
comparing students’ reactions in each class, before and after the course (in a pre- and
post-test survey).
Results from the pre- and post-test surveys of students’ reactions across classes
also indicate a tendency for students in both groups to
consider the two types of tests as less anxiety evoking and less difficult than expected,
with the Experimental group reporting slightly higher levels of anxiety and difficulty
in general.
With respect to each class’s reactions towards both types of exams regarding
“fairness in testing their speaking abilities”, there was a tendency in both classes to
consider the computer-based test as more fair than expected. With regard to students’
attitudes towards the F2F test, the Control class also showed an increase in students’
mean ratings from their expectations to their final opinions, while students’ assessments
in the Experimental class were the same before and after taking the tests.
Regarding the examinees’ level of comfort with the technology used in the
computer-based test, students’ reports in the Experimental class reveal that, in contrast
to the Control class, learners felt more comfortable with the technology used in the final
computer-based oral exam than what they expected. At the end of the course, this class
also reported slightly higher levels of comfort with the technology used in the computer-
based oral exam. Furthermore, all examinees’ responses taken together at the end of the
quarter show learners’ tendency to feel quite comfortable with the computerized oral
exam.
Although students’ responses revealed
higher mean ratings towards the F2F oral exam with regard to all the items included in
the anxiety scale, the tendency for some anxiety factors and feelings to be more involved
in the face-to-face method is somewhat more related to certain items. These include
aspects such as “getting nervous and confused when speaking in the oral exam” (item
11); “feeling the heart pounding when taking the oral exam” (item 9); “forgetting things
as a result of being nervous” (item 5); “feeling frightened to talk in front of the
teacher/computer” (item 2); and “feeling anxious about the exam even when well
prepared” (item 7). Nevertheless, the most evident differences between students’
responses towards the F2F and the computer-based oral exams indicate that students
tend to feel more embarrassed to speak in front of the instructor than in front of the
computer (item 6) and a predisposition to feel more tense and nervous in the F2F oral
exam than in the written exams, in comparison to the computer-based oral testing
format (items 6 and 10). Finally, when comparing the two classes’
responses towards the anxiety factors and feelings presented in the questionnaire, we
were able to observe very similar trends between these two groups.
With respect to the second main research topic in this study, concerned with
students’ improvement in oral performance when completing the computerized paired
oral activities throughout the quarter, the main findings showed that, although the
Experimental class showed higher oral improvement values in terms of “Vocabulary”
and “Overall communication”, no statistically significant differences were found
between the two classes. In addition, a comparison of the overall
scores in the two classes’ pre-tests was performed, in order to investigate whether the
Control class showed superior oral speaking skills from the beginning, since it could
imply an initial propensity for this class to present higher gains in terms of oral
improvement. However, although this class did show a higher score in the oral pre-test
taken at the beginning of the course, no statistical significance was found in this
analysis.
3.6. Summary
This chapter presented the results of the analyses regarding two central research topics:
students’ affective reactions towards the two types of final oral exams, with a focus on
anxiety, and students’ improvement in oral performance when completing the paired
oral assessment activities throughout the course. Taken together, results obtained in
the descriptive analysis of students’ survey responses revealed more positive reactions
toward the computer-based final oral exam in general, except in terms of fairness in
testing learners’ speaking abilities. These trends were observable in both classes, in a
pre- and post-test survey taken before and after the course. In terms of oral
improvement, no statistically significant differences were found between the
Experimental and the Control class. The following chapter offers an in-depth discussion
of these findings and provides concluding remarks.
CHAPTER 4: DISCUSSIONS AND CONCLUSION
4.1. Introduction
In line with previous research on students’ attitudes towards computer-based oral
assessment methods (Kenyon & Malabonga 2001; Jeong 2003; Joo 2007; Öztekin 2011;
Tognozzi & Truong 2009), one of the main purposes of this study was to examine
students’ perceptions and affective factors, with a focus on anxiety, towards a new
computer-based final oral exam, created with the intention of substituting the existing
face-to-face oral exam taken in the instructor’s office at the end of the quarter. The
second main purpose of the study was to investigate students’ improvement in oral
performance when completing paired oral assessment activities via the computer.
This final chapter provides a discussion of the key findings on the two main
research topics in the current investigation. Aspects related to this study’s limitations,
implications for the field of second language learning and teaching, and suggestions for
future research will also be addressed. Results regarding students’
perceptions and affective factors towards the computer-based and the face-to-face oral
exams are divided into two sections. Hence, I will first consider the results from
students’ general perceptions towards both types of tests in terms of anxiety, difficulty
and fairness in testing their speaking abilities, as well as their level of comfort with the
technology used in the new computer-based oral exam. Next, I will discuss the results
with regard to the anxiety factors and feelings, included in the anxiety scale, that are
more involved in one type of test in comparison to the other. As mentioned in the
Methodology chapter, students’ qualitative responses to the open-ended questions are
also considered in this chapter, in order to further interpret the obtained quantitative
results.
4.2.1. Students’ general perceptions towards the computer-based test vis-à-vis the face-to-face test
The first part of the analysis of students’ general perceptions
towards the exams takes into account the fact that participants in this study belong to
two different classes, receiving a different treatment throughout the instruction period.
This different treatment mainly consisted in the fact that the Experimental class took
the paired oral assessment activities in a computer lab during class
time, and the Control class took the same activities in the classroom. It is also taken into
account that students responded to this survey before and after the course, i.e. before
and after taking both types of oral exams at the end of the quarter. Hence, this analysis
allowed for the observation of possible differences in students’
responses depending on each class and on the time the survey was taken.
For the second part of the analysis on students’ general perceptions, this
investigation focused on learners’ reactions only at the end of the quarter, without
differentiating between the two classes.
As observed in the analysis and results chapter, a strong general trend that
resulted from students’ responses to this survey was a tendency to report lower mean
ratings towards the computer-based oral exam, in terms of anxiety, difficulty and
fairness. For comparison of the current research results with those reported in previous
studies, this chapter will focus on studies including examinees’ attitudes towards
computer-based and face-to-face oral exams, since this aspect constitutes the focus of
research topic 1.
With respect to students’ levels of anxiety towards both types of tests, results
presented in the previous chapter displayed learners’ tendency to report lower ratings
towards the computer-based test. These results were obtained when examining
students’ responses in each class, before and after taking both types of oral exams, and
when analyzing all students’ responses together at the end of the quarter. Furthermore,
mean ratings towards overall anxiety levels on both oral exams are also lower after
taking the tests, indicating students’ propensity to feel less anxious than expected in both
oral exams. This seems logical since students’ perceptions of their levels of anxiety
towards the oral exams should decrease after having completed them. In addition, in
comparison to the Control class, the Experimental class tends to report slightly higher
levels of anxiety, before and after taking the oral exams. This might suggest that learners
in the Experimental group tend to be somewhat more anxious in general, since they
show slightly higher levels of overall anxiety from the beginning, before taking the tests.
When providing a justification for feeling less anxious in the final computer-
based test in comparison to the face-to-face, the majority of students’ answers to the
open-ended questions in the pre- and post- survey indicated aspects related to talking
in front of the instructor in the face-to-face context. It is relevant to recall that, at the
beginning of the course, students were already familiar with the F2F format presented
in this study, since they had taken the traditional final oral exam in the previous quarter.
This means that examinees already knew what to expect from this type of oral test. Some
examples of common responses reporting lower levels of anxiety towards the computer-based test include:
[In the computer-based test] You don’t have someone stare at you the entire
time
[In the computer-based test] You do not get to witness the person evaluating
you
An interesting aspect is that, when reporting on what they liked the least
about the face-to-face format, students also refer to aspects related to the presence of
the instructor (e.g., “being watched”; “having the grader right next to you”), or simply
mention the fact that they feel more anxious/ nervous in this context (e.g., “the nerves”;
“it’s more stressful”; “the anxiety that came with being evaluated in person”). Responses
to aspects that students liked the most about the computerized test include comparable
opinions (e.g., “no one is observing me”; “no one watching”; “less pressure”; “less
nervousness”).
Examinees’ responses regarding overall anxiety levels towards the two modes of
oral testing in the current study present similarities and differences from the reviewed
literature. Anxiety reports in this study differ from previous studies where test takers
reported higher levels of anxiety towards the technology-
mediated oral exams, in comparison to the face-to-face formats (Stansfield et al. 1990;
Lowe & Yu 2009; Öztekin 2011). These authors referred to aspects such as the fact
that students saw the technology-mediated test as more “unfamiliar” and “unnatural”
(Stansfield et al. 1990), “worried about making mistakes”, mostly for being recorded,
and also due to “the lack of reaction from the computer” and “the fixed time given for a
response” (Lowe & Yu 2009), for example. Öztekin (2011) also comments on the fact
that almost half of the students appeared to be afraid of making mistakes in the
computer-assisted speaking assessment and mentions that “This may have resulted
from interlocutor interference in the FTFsa, given that interlocutors typically try to (…)”.
The distinct results observed in the current study and in these past research works,
regarding students’ levels of anxiety, might be somewhat related to the different
structure of the oral exam used in the present study. In the case of the face-to-face oral
test used here, the role of the examiner is more that of a “grader”, a person whom students
are speaking “in front of”, rather than the interlocutor/interviewer, since the test is
conducted in pairs and the instructor is not supposed to interfere in the oral situation
performed by the students. This aspect might contribute to students’ higher levels of
anxiety towards the face-to-face oral achievement test, in comparison to the computer-
based, since, by not interacting with the teacher during the role-play situation,
examinees might get a stronger awareness of the fact that they are being evaluated. As
students’ responses to the open-ended questions in the survey show, examinees tend to
justify their higher levels of anxiety in the traditional test with the fact that they were
witnessing the instructor evaluating them or that they had someone “staring at” them.
Nevertheless, with respect to the anxiety variable, results in the current study are
also comparable to the ones obtained in Early & Swanson (2008) and Joo (2007), where
students also presented higher levels of nervousness in a face-to-face oral testing
format. Early and Swanson (2008) compared an in-class oral assessment with a
digital voice-recorded OLA and mention students’ reports on “higher levels of affective
filter due to peer presence” (p.44) in the in-class oral assessment. In the case of Joo
(2007), the author states that “Talking directly to a teacher seems to have made the
students more nervous in the FTFI” (p.180). However, the author comments on the fact
that students’ higher levels of nervousness in the face-to-face test, “may partially be
explicated by the prominent Korean cultural tendency to save face by not making
mistakes”.
Kenyon & Malabonga (2001), for instance, predicted that students would feel less
nervous in the face-to-face OPI in comparison to the two technology-mediated (COPI and
SOPI) tests because of the “human interaction effect and its emphasis on the
interviewer’s putting the examinee at ease” (p.66). The authors however reported that
there was not a statistically significant difference in students’ ratings between the two
technology-mediated tests and the OPI. Nevertheless, Kenyon and Malabonga’s (2001)
prediction that students would feel less nervous towards the face-to-face OPI
reinforces the idea that the role of the examiner seems to have an impact on
examinees’ feelings. In the case of the oral exam format presented in the current study,
learners seem to see the instructor more as the person evaluating them and watching
them while they speak, rather than someone “putting the examinee at ease” (Kenyon &
Malabonga 2001; Öztekin 2011). Thus, the fact that the evaluator has a less interactional
role might have an impact on students’ higher levels of anxiety towards this testing
format.
With respect to the difficulty variable, the analyses conducted in this study revealed a general tendency for students to consider the
computer-based test less difficult than the face-to-face one, since the learners’ mean
ratings in the computerized test were lower. This aspect was observed when all
students’ responses were analyzed together at the end of the quarter and when
differences between classes, before and after taking the tests, were investigated.
Students in both groups also tended to consider these tests less difficult than expected.
Moreover, similarly to the results yielded for the anxiety variable, learners in the
Experimental class reported slightly higher levels of perceived difficulty towards the
tests, in general. This aspect is not very surprising since the variables corresponding to
anxiety and perceived difficulty appear to be closely related. As previously
mentioned, the fact that learners in the Experimental class tend to show slightly higher
levels of overall anxiety, in comparison to the Control class, might suggest a tendency for
this group to be more anxious in general and, therefore, a tendency to perceive both oral
exams as more difficult (from their expectations to their final opinions) as well.
The relationship between these two variables is
also visible in students’ open-ended responses to the general perspectives survey, since
the most common reasons provided for their levels of difficulty towards the tests are
similar to the ones provided for their levels of anxiety. Joo (2007) also noted that “The
more nervous students felt, the more difficult they felt the test was” (p. 181).
As indicated in some of the answers to the pre- and
post-test survey, some students expected and considered both modes of oral testing to
be equally difficult, due to similarities between the tasks in the two testing formats and
the correspondence of the aspects being graded (e.g., “Still the same test, just different
(…)”). As for the reasons provided for examinees’ lower
ratings towards the computer-based oral exam in terms of difficulty, the majority of the
examinees’ answers were, once again, related to not being in the presence of the
instructor.
Among the reasons provided regarding the computer-based test, some were related to
the fact of having some control over the test by being able to pause it (e.g., “you can
pause it and think, then talk”). These comments
provide supporting evidence for results reported in Kenyon and Malabonga (2001),
where students also valued aspects such as the fact of “having control of when to start
(…)”.
The possibility of being asked to speak for a longer period of time in the face-to-
face test is also among the most common reasons provided in students’ answers to open-
ended questions explaining lower levels of difficulty in the computer-based test. Written
responses include, for instance:
If I am asked to speak for a greater period of time I will most likely have greater
difficulties
Through these kinds of assertions, we can understand that students tend to relate
the possibility of being asked more questions in the oral exam with the level of test
difficulty. It appears that, from the learners’ point of view, the more they have to talk, the
more difficult the test seems to be. Here it is relevant to recall that the type of face-to-
face oral exam format presented in the current study is not based on an
interview/interaction between the teacher and examinees. However, the fact that
learners also justify their higher levels of perceived difficulty towards the face-to-face
test with the fact that their teacher might want them to talk more suggests that, as much
as the instructor does not have an interactive role in the oral communicative situation
performed among learners, there might always be some kind of examiner’s interference
that will be difficult to control, such as requesting the student to talk for a longer period of
time, for instance. Öztekin (2011), for example, also comments on the fact that
interlocutors in the face-to-face speaking assessment “might have interfered more than
needed and helped some students answer some questions, which would have decreased
(…)”.
Results in the current study, regarding students’
tendency to consider the computer-based test less difficult than the face-to-face one, also
contrast with those reported in Öztekin (2011) and Stansfield et al. (1990), where the
majority of students considered the technology-mediated
test to be more difficult than the face-to-face oral assessment. Öztekin (2011) mentions
that “test takers might feel that they have to put more effort into answering a question
in the computerized mode” (p.119) and also comments on students’ reports of difficulty
in terms of organizing their ideas in the computerized test, stating that “finding the
FTFsa environment more relaxing, the existence of an interlocutor being more helpful
(…) reasons for this” (p.119). In the case of Stansfield et al. (1990), the authors mention reasons
such as “discomfort in talking to a machine” (p.647) for students to consider the taped-
based test more difficult than the face-to-face OPI. Once again, it is interesting to note
that, while in these previous studies, the presence of the examiner could be related to a
reduction of students’ anxiety and difficulty, this aspect is actually seen by the majority
of students in the current study as a source of higher levels of anxiety and perceived
difficulty towards the face-to-face oral exam. Again, this aspect could possibly be
explained with the fact that the oral test structure found in this study differs from the
ones presented in the reviewed literature, in the sense that, as previously mentioned,
both types of oral exams are taken in pairs, either in front of the computer or in front of
the instructor, in his/her office. The interaction is thus mainly among learners and not
with the teacher, in the face-to-face context. As observed above, in their justifications for
their lower levels of anxiety towards the computer-based test, learners also refer to
aspects such as not having someone evaluating in front of them or not having someone
looking at them, when taking the oral exam in the Lab. Thus, the fact that, in the case of
the face-to-face test, the instructor does not participate as an interlocutor also seems to
constitute one of the reasons why learners consider this oral exam format to be more
difficult.
With respect to the variable of Fairness, an analysis across classes before and
after completing the tests, and an investigation of all students’ reactions at the end of the
quarter, showed that, on average, examinees considered the face-to-face oral exam to be
more fair in testing their speaking abilities than the computer-based test. Nevertheless,
here it is relevant to recall that, in general, learners also displayed a tendency to consider
the computer-based test quite fair in testing their speaking abilities. As previously noted,
the highest mean ratings in examinees’ responses to the questionnaire appeared
when evaluating both tests in terms of fairness, both before and after taking the tests.
Some of the most common reasons provided by students for considering the face-to-face
test to be more fair than the computer-based oral exam correspond to the
following:
I felt the face-to-face exam was more fair because if we needed help with a certain
word or phrase the professor would be there to help move the conversation along.
(…) correct myself and explain that my mistakes were from nerves rather than ability
level.
It is interesting to note that, while the presence of the teacher seems to be the
most common motive for students to consider the face-to-face oral exam more difficult
and anxiety evoking than the computer-based, this aspect also seems to constitute an
important justification for learners’ higher ratings towards the face-to-face test in terms
of fairness. As students’ responses regarding the fairness variable show, examinees also
tend to consider the instructor’s presence helpful, namely by helping to “move the
conversation along” or giving students the opportunity to explain the nature of their
mistakes. Moreover, in their comments
regarding the aspects they liked the most about the face-to-face test, learners frequently
revealed their appreciation for the instructor’s feedback at the end of the test (e.g., “I got
instant feedback”; “The instructor gave us feedback right away”). Hence, in the learners’
point of view, this aspect constitutes another common advantage related to being in the
presence of the instructor. The use of technology in the
computer-based oral exam conducted in this study also constitutes a common reason for
students to consider the face-to-face test to be more fair. Samples of students’ responses
include:
One bad recording sends a worse impression than messing up a bit and recovering
in person.
The recording might not be clear. In person the professor can hear directly.
With respect to this type of concern related to the use of technology in the oral
exam, it is worth mentioning that one of the challenges in conducting this research
project was the fact that, in one of the labs assigned for the administration of the final
computerized oral exam and oral assessment activities, there were some difficulties in
terms of the connection between certain computers and the microphones used in the
test. This aspect required technical assistance and might have had some influence on
students’ reactions towards the computerized exam. It is relevant to
note that comments related to technical difficulties can also be witnessed in students’
reports of their expectations
towards the tests (completed before taking both final oral exams). This concern thus
preceded the administration of the final computerized
oral exam. In fact, learners also commonly refer to this matter in their comments
regarding the aspects they liked the least about the computer-based test, in answers
such as “setting up with the headphones” or “I had technical difficulties that I feared
would affect the outcome scores”, for instance. Öztekin (2011) also discusses a number
of technical problems as possible
reasons for the students’ relatively low ratings towards these tests. In the case of the
current study, concerns with technology also constitute a
justification for less positive attitudes towards this type of test, especially in terms of
considering the face-to-face oral exam as more fair than the computer-based one. These
concerns relate to the idea that the instructor’s presence can help
capture oral speech features that might not be possible to capture via the computer.
Thus, students also display a tendency to feel that the instructor’s presence ensures that
their oral production is well heard and understood, for instance. As Tognozzi & Truong
(2009) claim “Technology, though more sophisticated than ever, is still rife with
problems” (p.10). Öztekin (2011), for example, suggests that “Probably conducting the
speaking test with better technical equipment would yield better results in favor of the
CASA” (p. 120) and that “it should also be ensured that the technical equipment such as
headphones, microphones, computers and the internet, are working properly” (p. 129).
When considering the implementation of the computer-based test used in this research,
this aspect should be taken into account. In the context of the current study, it seems
reasonable to expect that ensuring properly working technical equipment would also
contribute to students’ better assessment of this test, particularly in terms of fairness.
Students’ perceptions of the fairness of both exams in
testing their speaking abilities are in line with the ones presented in Kenyon &
Malabonga (2001) and Stansfield et al. (1990), where students’ ratings of
fairness regarding the different modes of oral testing also tended to be more favorable
towards the face-to-face OPI. However, Stansfield et al. (1990) point out that “in any case,
only a very low percentage of students felt there were ‘unfair’ questions on the taped
(…)”.
Tognozzi and Truong’s (2009) observation that “a fair percentage of students did not
feel that using WIMBA was a fair way to test their oral skills” (p.10) is also relevant to
this discussion, suggesting a potentially more general tendency for students to question
the fairness of computerized oral tests. Nevertheless, as noted above, students’ mean
ratings towards the computer-based test in terms of fairness were also quite high
overall. Students’ reasons for considering the face-to-face test more fair
in testing their speaking abilities seem to confirm the general qualitative results
presented in Öztekin (2011). The author comments on the fact that students’ answers
to open-ended questions also refer to an overall tendency for students to believe the
face-to-face test would better reflect their oral proficiency. Moreover, the author
explains that students valued having someone listening to them in the face-to-face oral achievement test. As the author states, "the existence of someone listening to the test takers and the attitudes of the interlocutors were among the most noticeable points the test takers liked about the FTFsa" (p. 78).
Results regarding students’ levels of perceived fairness towards the tests also
differ from previous research, as the ones developed by Joo (2007) and Lowe & Yu
(2009), where the majority of students thought the computerized oral exam was fairer.
In the case of Lowe & Yu (2009) the computer-based test is considered to be more fair,
particularly in terms of grading, but the face-to-face was actually seen as a better way of
testing learners’ speaking skills. Nevertheless, authors in both of these studies explain
that, in general terms, the majority of students tend to prefer face-to-face test,
mentioning that “it better approximates real-life conversation, interacting with a person,
and that interacting made them somewhat more comfortable” (Joo 2007: 183), or that
they “enjoyed interacting with a ‘live’ examiner” (Lowe & Yu 2009: 35). This
appreciation for a live examiner can also be observed in Jeong (2003). The author reported considerably positive reactions towards the computerized test, but also explains that the majority of students taking both a traditional face-to-face interview and a computerized test indicated a general preference for the face-to-face format, noting that "the reason most commonly cited was the interaction between the interviewers and the interviewees" (p. 74). The researcher also adds that "students seem to prefer human over technology-
Outcomes from students’ qualitative responses in the current study and those in
questionnaire, students tend to value the presence of the evaluator in face-to-face oral
exams. However, with respect to this tendency, there are some aspects that seem to
differentiate the present investigation from previous studies. Our participants generally
tend to appreciate the instructors’ presence, mainly when evaluating the test in terms of
fairness in testing their speaking abilities, since the examiner can make them feel like
they have the opportunity to explain their mistakes, help “moving the conversation
along”, or assure that their speech is well heard (in the event of technical failures or
difficulties). In previous research it seems that a tendency to value the presence of the
examiner in the face-to-face test involves aspects related to the human interaction factor
with the teacher/interviewer in general. In the current study, one of the most important
reasons to consider the instructors’ presence as a critical aspect in the face-to-face oral
exam’s fairness has to do with the fact that learners are in front of the person who is
assessing their oral production, which could be helpful at different levels, not the fact
that there is interaction. As explained before, both types of oral exams used in this study
are taken in pairs and the communication interaction is mainly among classmates. Thus,
the aspect of live human interaction is also present in the computer-based test, which
could constitute one of the reasons why learners tend not to focus as much on the
current and previous studies, it is relevant to note that in Jeong (2003) the face-to-face
interview is also taken in pairs. However, in the author’s study, the instructors also seem
to have the role of interviewers in the face-to-face test, since they are able to "tailor the interview based upon students' responses" (p. 74). In the case of the current research, the instructor's role in the face-to-face format is more "passive", since his/her role is not that of an interviewer. Despite some technical problems that occurred during the computer-based test and oral assessment activities, which were due to issues regarding the connection between computers and microphones, students were already familiar with the learning platform used, since they had worked with it for the completion of several homework assignments. This might have had an influence on their reports of a fairly high level of comfort with the technology used in the computer-based oral exam.
As previously mentioned, when students’ responses were taken together at the end of
the quarter, average ratings fell in between the scale point of 2 (medium) and 3 (high).
These results are in line with previous studies reporting on students' low level of difficulty in dealing with the technological systems used in computerized tests (Jeong 2003). Results also showed that, in contrast to the Control group, students in the
Experimental class felt more comfortable than expected. These results might be due to
the fact that the technology used in the computer-based final oral exam was the same as
the one used in the computerized oral activities taken by the Experimental group,
4.2.2. Examination of anxiety factors and feelings that tend to be more involved in one type of oral exam format in comparison to the other
This section, following the analysis of students' affective factors towards both types of oral exams, was dedicated to the investigation of the anxiety factors and feelings, presented on the anxiety scale, that were more involved in one type of oral testing format in comparison to the other (research question 2).
The analysis of students' ratings towards the anxiety factors and feelings presented on the anxiety scale revealed a general tendency for students to attribute higher mean ratings to all the items in the context of the face-to-face oral exam. These outcomes support the previously presented results regarding students' opinions towards both modes of oral testing in terms of general anxiety levels, where examinees also tended to consider the face-to-face test more anxiety evoking than the computer-based one. These overall results differ from previous studies that also compared students' anxiety towards a face-to-face and a computer-based oral exam based on an anxiety rating scale (Öztekin 2011; Sayin 2015). Öztekin (2011), for instance, reports on students' higher anxiety levels towards the computer-based test, based on responses to "test-mode related anxiety subscales". In the case of Sayin (2015), the author explains that students tend to show anxiety in both modes of oral testing. Nevertheless, the individual examination of the items presented in the current study's anxiety scale allowed us to observe more evident
discrepancies in students’ assessments between the two oral exam formats in some of
the items included in the questionnaire. The group of anxiety factors and feelings that displayed the most evident differences includes the following statements (indicated by item number):
9. "I can feel my heart pounding when I'm taking the oral exam"
5. "In the oral exam (…) I can get so nervous I forget things I know"
7. "Even if I'm well prepared for the oral exam (…), I feel anxious about it"
10. "I feel more tense and nervous in my oral exam (…) than in my written exams"
These items were sorted by the size of the difference in students' reactions between one test and the other. Items 6 and 10 contained similar values and were the ones displaying the highest differences in students' opinions between the two types of oral exams.
Some of these statements refer to feelings of anxiety in general while taking both modes of oral testing. Items 11, 9, 5, and 7, for instance, do not mention any particular source of anxiety; most refer instead to aspects that result from being anxious. Thus, higher mean ratings towards these items in the context of the face-to-face oral exam indicate a stronger tendency for students to report "getting confused", "feeling their heart pounding", or "forgetting things they know"
whenever they are taking the oral exam in the instructor’s office. An overall tendency
for students to experience similar feelings is also presented in Sayin (2015), where
learners tend to agree with “feeling their heart beating very fast during oral exams”, in a
general reference to speaking tests, or “forgetting things they know because of stress”,
in the context of “talking face-to-face with the instructor”. Nevertheless, in the current
study, all items included in the rating scale were presented separately in the context of
each oral exam format. This aspect allowed us to understand, as mentioned above, that
these specific feelings tended to be more involved in the face-to-face test than in the computer-based one.
Students also tended to report feeling more anxious in the context of the oral exam taken in the instructor's office "even when well prepared". These results are also in line with students' reports in Sayin (2015), where the majority of examinees agreed with a similar statement presented in the anxiety scale, although referring to oral exams in general. At this point, it is also interesting to note that, in the process of analyzing students' reactions towards each statement included in the anxiety scale, we were able to observe learners' tendency to feel especially nervous about unprepared oral performance. As mentioned in the previous chapter, item 13 ("I get nervous when I have to perform oral situations for which I haven't prepared in advance") received the highest mean ratings in both oral exam contexts. Thus, students' ratings towards item 7 ("Even if I'm well prepared for the oral exam (…), I feel anxious about it") indicate learners' tendency to feel more anxious in the face-to-face context even when a particular cause of oral exam anxiety, such as lack of preparation, is not present.
With respect to items 2 and 6, we can observe that their content is directly related to the presence of the examiner. "Feeling frightened to talk in front of the teacher/computer" (item 2) and "feeling embarrassed to talk in front of the teacher/computer" (item 6) also tend to be more involved in the face-to-face test. As mentioned before, statement 6 was one of the items presenting the largest differences in students' reactions towards the face-to-face and the computer-based test. Thus, with respect to learners' ratings towards this statement, results indicate a tendency for examinees to feel more embarrassed in front of the teacher than in front of the computer. Compared to the majority of the other items presented on the anxiety scale, learners tend to show a more marked discrepancy between the two formats when the statement involves talking in front of the instructor.
Results in this investigation also confirm the outcomes presented in the analysis of students' responses to the survey on their general perceptions towards both types of oral exams, where students' open-ended justifications for their higher ratings towards the face-to-face test, in terms of overall anxiety levels, refer to the presence of the teacher as a common source of nervousness.
Students' reactions towards item 10 ("I feel more tense and nervous in my oral exam (…) than in my written exams") also revealed one of the most evident differences between students' responses towards one exam and the other. Learners' reports of a tendency to feel higher levels of tension and nervousness in the face-to-face oral exam than in their written exams, in comparison to
students’ perceptions towards language exams (Scott 1986; Zeidner & Bensoussan
tests, they indicate students’ overall preference for written over speaking exams, in
general. The current study thus adds to these results, suggesting that this preference for
written over speaking tests might show a tendency to be more involved in the context of
the face-to-face vis-à-vis the computer-based oral exam format included in the current
investigation.
In the examination of students' responses to the statements on the anxiety scale, it was also interesting to note that item 8 ("I often feel like not going to my oral exam") was the only statement falling in between the "disagree" and the "no opinion" classification in the questionnaire, in the face-to-face context. The fact that students presented similar responses towards this item in both contexts suggests that, regardless of the possible differences in students' affective factors towards each exam, learners did not show a particular tendency to avoid either test.
There was also no considerable difference between classes in the examination of students' ratings towards the anxiety factors and feelings presented in the anxiety scale: results showed that the opinions of the two groups were fairly close. The fact that each group received a different treatment throughout the quarter did not seem to play a particular role in the way students felt towards the tests. This is also in line with the results presented in
students’ overall reactions towards the oral exams, where, although learners in the
Experimental class tended to show slightly higher levels of anxiety and difficulty
towards both tests, students’ reactions in both classes displayed very similar trends. It
is relevant to recall that the different treatment received by the two classes mainly
consisted in the fact that the Experimental class took computerized oral assessment
activities throughout the quarter, in a classroom context, receiving oral feedback from
the instructor, while the Control class took the same activities in class but not via the
computer. Thus, one could expect that aspects such as the fact that the Experimental
class was more familiar with the technology-mediated oral assessment tool used in the
final computer-based oral exam, for instance, could trigger larger differences in learners’
affective reactions towards the final oral exams. A probable explanation for the similar responses between the two classes could be the fact that the learning platform used in
the computer-based oral exam corresponds to the one used in the course for the
completion of written homework assignments and also for the completion of the oral
pre- and post-test that both classes took at the beginning and at the end of the quarter.
Hence, although the Experimental class was more familiar with the technology used in the computer-based test, the fact that the Control group had also been exposed to this platform might have reduced the differences between classes.
The results obtained in the current study, along with those reported in the literature review, allow us to reflect on the fact that other questions could have a larger influence on the way students feel towards the tests. Those questions could be related to the instructor's behavior/role throughout the exam (e.g., a more passive or active interaction with the examinees) or the oral testing format itself (e.g., whether the test is completed individually or in pairs).
4.3. Discussion of students’ improvement in oral performance
The second main topic in the current study corresponded to the examination of whether or not there were any improvements in students' oral performance when comparing the pre- and post-test and, in particular, whether the Experimental class showed greater improvements than the Control class, with respect to certain linguistic areas and overall. As explained in detail in the methodology chapter, the linguistic areas analyzed included vocabulary, grammatical functions, and overall communication. As a first step in this analysis, the researcher examined the differences between the two classes in terms of the overall scores obtained in the pre- and post-test, in order to investigate whether one class showed superior oral skills over the other from the beginning. Although the Control class showed a tendency towards higher values in the pre-test, the difference between the two groups was not statistically significant.
Similarly, although the Experimental class showed higher oral improvement values in terms of "Vocabulary" and "Overall communication", the results were not statistically significant. Given the fact that the Experimental class was designed within the specific constraints of an existing course that could not be modified, other than by introducing a different delivery method for the oral activities and the oral exam, it is not surprising that students did not improve more than in the F-2-F course. It is, however, important to underline that the difference in the oral performance scores of the groups was not significant, and therefore there was no initial advantage of the Control group over the Experimental group: the two groups were, therefore, wholly comparable in terms of oral skills, before and after the two treatments.
The fact that the Experimental group did not do better than the Control group in this section may simply be due to the fact that, after having completed each computer-based oral activity throughout the quarter, learners in this group might not have paid particular attention to the additional advantages involved in these activities, such as listening to their own recordings before submitting them or listening to the instructor's feedback. Hence, for a broader interpretation of results in a similar type of analysis, a future study should also include a questionnaire with specific questions regarding the way students used the additional advantages involved in the computer-based oral activities.
Although we observed similar trends across classes in the examination of students' affective reactions towards both modes of oral testing in general, results in the investigation of students' overall perceptions towards the tests revealed that learners in the Experimental class tended to show slightly higher levels of anxiety and perceived difficulty towards both types of oral exams, before and after completing both tests at the end of the course. Although the literature indicates that anxiety can negatively affect oral performance (Hewitt & Stephenson 2012; Scott 1986; Wilson 2006; Philips 1992), in our case our results were not statistically significant, and we cannot therefore conclude that there was any difference in anxiety levels between the two treatments.
The outcomes regarding oral performance also contrast with the ones obtained
in Tognozzi & Truong (2009) and Satar & Özdener (2008). In Tognozzi & Truong (2009), the authors conclude that an experimental group, using a Web-based program for the completion of weekly oral tasks during the instruction period, showed "longer and more accurate speech samples" (p. 9). Taking into account the linguistic categories comparable in the current study and in Tognozzi & Truong (2009), although the authors also did not find significant differences in terms of grammatical aspects, the group completing weekly computerized oral tasks during the instruction period showed a statistically significant advantage over the control group in terms of vocabulary. As previously mentioned, in the current study the Experimental group also showed higher improvement values in terms of vocabulary, although our results failed to reach statistical significance. In Satar & Özdener (2008), students in the Experimental groups (including a voice chat group), engaged in four dyadic chat sessions over four weeks, also showed an increase in
One aspect that could possibly justify the difference between the results presented in the current investigation and the ones found in these previous studies could be the fact that the oral activities involved in this study were taken on a less regular basis. In the current study, activities were completed once every two weeks, in order to adjust to the existing syllabus, whereas in previous studies the chat sessions or computerized oral activities were more frequent and tailored towards oral improvement. It is also important to take into account the fact that we were working within the confines of an already existing syllabus and did not modify the number of activities, but only the way
the speaking tasks and the oral exams were delivered. Taken together, our results suggest that completing computer-based oral activities once every two weeks throughout the instruction period, as was done for this investigation, might not be enough to produce significant gains in oral performance. On the other hand, the current study also did not show that the Experimental group achieved worse results than the control group. This suggests that, for these groups of students, the computer-based oral activities were equivalent to the face-to-face ones in terms of fostering their speaking abilities.
Furthermore, the fact that there was not a significant difference between classes is also somewhat consistent with the fact that the descriptive analyses of students' general reactions towards the tests and of learners' responses towards the anxiety factors and feelings presented on the anxiety scale did not display considerable differences between classes. The fact that the pattern of students' responses towards both types of tests is quite similar between the two classes, and the fact that the Experimental class did not show a greater improvement than the Control class in terms of oral performance, indicates that, overall, the distinction in the treatment received by the two groups did not have a particular influence on the different aspects analyzed.
4.4. General implications, limitations and suggestions for future research
The present study contributes to the field of second language teaching by adding to the existing research on technology-mediated oral assessment. The insights provided in this study regarding students' perspectives and affective factors towards the two types of oral exams can also be used by language instructors and program designers. These results can also inform decisions regarding the implementation of technology in the oral tests conducted in language classes, especially in the case of an oral achievement exam. Moreover, students' affective reactions towards the two types of oral exams presented in this research can provide valuable information for future studies. Nevertheless, the current investigation also presents some limitations. First, the sample size used was relatively small. A larger population would certainly allow for a stronger interpretation of, and greater confidence in, the results obtained in the analyses presented.
Another limitation has to do with the fact that learners seemed to attribute similar meanings to the anxiety and difficulty variables included in the survey regarding students' general reactions towards the exams. As mentioned in the analysis and results chapter, there was a high correlation between these two variables. Furthermore, when examining learners' open-ended responses in this survey, we could observe that their justifications for their levels of perceived difficulty were quite similar to the reasons reported for their levels of anxiety towards the tests. With respect to the difficulty variable, the researcher expected students' opinions to be due to aspects such as the content or the format of the oral test, for instance. Future studies could thus present clearer definitions of these variables, in order to better differentiate students' overall reactions towards them. In addition, gathering information regarding students' use of the additional advantages involved in the computerized oral activities, such as the possibility of listening to their own recordings before submitting them or listening to the instructors' feedback (Tognozzi & Truong 2009), would be beneficial in the process of interpreting and understanding the results. Furthermore, increasing the number and scope of the kind of oral activities presented in this research could also have a positive influence on students' improvement in terms of oral performance.
Moreover, the analyses used in this study also allowed us to reflect on aspects
that could lead to new research projects. Through the analysis of students’ affective
reactions, we observed that the different treatment received by the two classes did not
seem to particularly influence students’ opinions towards the tests, since response
patterns among the two classes tended to be similar, in general. Our results, along with
results reported in previous literature, allowed us to reflect on aspects such as the fact
that some features inherent to the specific type of oral exam format (e.g., the degree of interaction with the examiner) might play a larger role in students' feelings towards these tests. Hence, future studies on oral achievement tests, and on the use of technology in these types of exams, could adopt an experimental design that includes a number of distinct types of face-to-face and computer-based oral testing formats, and explore the impact of these features on students' affective reactions.
The analysis of students’ affective reactions towards both tests also allowed us to
observe that students’ explanations for perceiving the computer-based test as less
difficult and less anxiety evoking included aspects related to not being in the presence
of the instructor, such as not having someone “staring at them” and “evaluating” them.
However, in their justifications for perceiving the computer-based test as less fair in
testing their speaking abilities, learners also pointed at reasons that valued the presence
of the instructor in a face-to-face oral exam context, such as getting help with “moving
the conversation along”. These findings lead us to reflect on aspects such as whether
there is any relationship between these results on students’ perceptions towards the
two types of oral testing and potential learners’ beliefs that they have the right to be
overly helped and supported by their instructor. This aspect thus reminds us of the
impact that the level of students’ entitlement can have on their outcomes and on their
Halberstadt, & Aitken 2013). Hence, an investigation of this matter, in the area of second
4.5. Conclusion
This conclusion section starts with a recap of the research questions addressed in the study. Research question 1 was concerned with students' reactions towards the computer-based test vis-à-vis the face-to-face final oral exam, regarding levels of anxiety, perceived difficulty, and perceived fairness in testing speaking abilities, as well as levels of comfort with the technology used. This question was divided into research sub-questions. Research question 1.1 examined whether students' reactions differed depending on the different treatment received by the two classes and on whether the survey was taken before or after the course. Here, descriptive analysis of students'
reactions towards the two different tests showed generally similar patterns between
students’ responses in the two classes, before and after taking the final oral exams at the
end of the course. As seen in the results, both groups tended to consider the computer-
based final oral exam as less anxiety evoking, less difficult, but also less fair in testing
their speaking abilities, in comparison to the face-to-face test. This aspect was observed
in students’ answers taken before and after the course, which corresponded to their
expectations and final opinions towards the exams. Results also indicated a tendency for
students in both classes to consider the two different types of oral exams to be less
anxiety evoking and less difficult than expected. In terms of perceived fairness, it was
also noted that, despite showing a tendency to attribute higher mean ratings to the face-
to-face test in terms of fairness, learners also displayed a general tendency to consider
the computer-based test quite fair in testing their speaking abilities. Regarding
examinees’ levels of comfort with the technology used in the computer-based test,
outcomes on this analysis also showed that, in contrast to the Control group, students in
the Experimental class felt more comfortable than expected, which could be due to the
fact that this class used the computerized voice tool more frequently throughout the
instruction period.
Research question 1.2 was concerned with students' reactions to the computer-based vis-à-vis the face-to-face final oral exam at the end of the quarter, without differentiating between classes. Here, learners also revealed a tendency to display lower levels of anxiety, perceived difficulty, and perceived fairness towards the computer-based test. In addition, when students' responses were taken together at the end of the instruction period, mean ratings for students' reports also revealed a fairly high level of comfort with the technology used in the computer-based final oral exam.
Research question 2 was concerned with the anxiety factors and feelings, presented on the anxiety scale, that were more involved in one type of oral exam in comparison to the other. Key findings revealed students' general tendency to attribute higher mean ratings towards the face-to-face oral exam with regard to all items presented on the anxiety scale. Nevertheless, discrepancies in students' assessments between the two oral exam formats were more evident in some of the items included in the questionnaire. The items displaying the largest discrepancies tended to be more involved in the face-to-face format and correspond to "getting nervous and confused" (item 11); "feeling their heart pounding" (item 9); "forgetting things they know as a result of being nervous" (item 5); "feeling frightened to talk in front of the
teacher/computer” (item 2); and “feeling anxious about the exam even when well
prepared” (item 7). Nevertheless, students’ ratings to the items presented on the anxiety
scale revealed that the most evident differences between students’ reactions towards
the face-to-face and the computer-based test correspond to the fact that students tend
to feel more tense and nervous in the face-to-face oral exam than in written exams, in
comparison to the computer-based oral testing method (item 10), and a propensity for examinees to feel more embarrassed to talk in front of the teacher than in front of the computer (item 6).
Research question 2.1 was concerned with whether there was a considerable difference between classes in the investigation of the anxiety factors and feelings that were more involved in one type of oral exam in comparison to the other. Once again, results showed very similar trends between the two groups in terms of their affective reactions. A further research question examined whether the Experimental class showed a greater improvement than the Control class in certain linguistic areas and overall. These areas included vocabulary and grammatical functions (aspects that were formally taught and emphasized during the course). Although the Experimental class showed higher values in vocabulary and in general communication ability, the main results in this analysis did not show statistically significant differences between the two classes. In sum, the analysis of students' overall reactions towards the computer-based and the face-to-face oral exam revealed a general tendency
to perceive the computer-based test as less anxiety evoking and less difficult than the face-to-face test, although learners also tended to consider this type of oral exam format as being less fair in testing their speaking abilities.
As we were able to observe, responses in the Control and Experimental classes displayed quite similar general patterns, from their expectations to their
final opinions. As explained throughout the study, the difference in treatment mainly
consisted in the fact that the Experimental class completed computerized oral
assessment activities during the instruction period, receiving individual oral feedback
from the instructor. The Experimental group completed these activities in pairs, in a computer lab, and during class time, whereas the Control group completed the same activities in class, but not via the computer, in front of the other students. Both the Control group and the Experimental group used the computer-based learning platform for other types of homework activities; therefore, both groups were familiar with this technology. The analysis of students' overall reactions towards the tests revealed a tendency for students to consider aspects
related to the presence of the examiner in the face-to-face test as common justifications
for feeling less anxious in the computer-based test, and also for perceiving this
computerized oral exam format as less difficult, in comparison to the traditional method.
Reasons included the fact that, in the computer-based test, students did not feel the
nervousness of “performing in front of the instructor” or that they had someone “staring
at them” or “evaluating them” while taking the test. At the same time, aspects involving
the presence of the examiner in the face-to-face test were also among the most common explanations given by examinees who reported lower levels of perceived fairness towards the computer-based test. These explanations included the fact that, by being in front of the instructor, learners felt that they had the opportunity to explain the nature of their mistakes, or that the instructor could help them with "moving the conversation along". In addition, students often indicated concerns related to possible technical issues that could interfere with the clarity of their recordings. In this case, the presence of the examiner also seemed important to ensure that their oral production was well heard and understood. When comparing these outcomes with previous research on computer-based and face-to-face oral tests, it was possible to observe that reasons for valuing the presence of the examiner or the more traditional oral testing format in the reviewed literature
include aspects such as “the attitudes of the interlocutors” (Öztekin 2011: 78), “the
interaction between interviewers and interviewees” (Jeong 2003: 74), or the fact that
learners “enjoyed interacting with a ‘live’ examiner” (Lowe & Yu 2009: 35). In the
current research, the live contact with the examiner seemed to constitute an important
factor for students’ higher ratings towards the face-to-face test in terms of fairness, for
the reasons indicated above. In addition, the fact that the instructor provided the examinees with immediate feedback at the end of the oral exam was often mentioned in students' comments with respect to the aspects they liked the most in the face-to-face test. Nevertheless, across the analyses concerned with students' perceptions towards both types of tests, examinees display a
tendency to essentially perceive the instructor as the person who is “watching”,
“listening to” and “evaluating” them, and not as someone who they are interacting with
while completing the face-to-face oral exam. It was suggested that one possible motive
behind students’ tendency to mostly focus on these features of the examiner has to do
with the fact that the oral achievement testing format included in this investigation is
taken in pairs, in both the computer-based and the face-to-face context. The interaction thus occurs mainly among the examinees, and not between the instructor and the examinees, which could trigger a stronger sense of the instructor as an observer and evaluator.
As mentioned above, with respect to the analysis of the anxiety factors and
feelings more involved in one testing format than in the other, key findings revealed a
general tendency for students to assign higher mean ratings to the items on the anxiety
scale in the context of the face-to-face oral exam. Results in this section were consistent
with the outcomes of the analysis of students' general reactions, since learners' mean
ratings of overall anxiety were also higher for the face-to-face test. In addition, in their
ratings of the statements included in the questionnaire,
one of the most evident differences between the two oral exam formats was students'
tendency to feel more embarrassed talking in front of the instructor than in front of the
computer. This aspect was also consistent with students' responses to open-ended
questions in the overall reactions survey, where learners tended to consider aspects
related to the presence of
students’ ratings towards the items presented on the anxiety scale were also observable
different classes. This aspect is also in line with the results obtained on students' general
reactions to both types of tests, suggesting either that the difference in the treatment
received by the two classes did not play a particular role in students' feelings towards
the tests or that the difference in treatment was not substantial. The analysis of
students’ responses on their affective reactions towards the computer-based and the
face-to-face oral achievement tests used in this study, along with results reported in the
literature, allowed us to reflect on the fact that other aspects, such as the degree of
interaction with the examiners, or whether the test is completed individually or in pairs,
investigation revealed that the use of technology in oral assessment activities taken
throughout the quarter (treatment received by the Experimental class), as done in this
abilities. As we observed, the differences between the two classes in students' oral
production gains, both on the linguistic variables emphasized during the course and
overall, were not significant. As suggested, completing a small number of
computerized oral activities during the instruction period might not be sufficient in
statistical analysis in the current study also did not show that the Experimental group
achieved worse results than the Control group, suggesting that, for these groups of
students, the computer-based oral activities were equivalent to the face-to-face ones, in
mostly related to the reduction of students' anxiety. In addition, although the face-to-
face oral exam received higher mean fairness ratings, learners' responses to the
computer-based test also indicated a tendency to consider this oral testing system
quite fair. Therefore, examinees' reactions towards the computerized test
As Lee (2005) states, “By listening to the voices of learners, FL educators have the
opportunity to reflect on their intended pedagogical efforts and further modify their
strategies and instruction to meet the needs and interests of learners” (p.140). In the
case of the current study, the voice of examinees has provided us with valuable insights
used. Reports on examinees’ affective reactions towards the tests included in this
research can thus be useful, not only in the program where this investigation took place,
but possibly in other language courses. The implementation of a computerized oral exam
into the FL curriculum could also provide test developers with practical insights into
manipulating factors such as the presence or absence of the instructor, how involved the
instructor is in the actual test taking (e.g., providing help and feedback), whether the test
perceptions and affective factors towards both types of oral tests, as well as in terms of
students’ improvements in oral performance, when taking computerized oral activities
throughout the instruction period, has also contributed to the body of quantitative and
Moreover, new avenues for future research were proposed in light of the findings
presented here. Since the different treatment received by the two classes did not seem
to particularly influence students' opinions of the tests, it was suggested that a further
study on students' reactions to different computer-based and face-to-face oral testing
formats should include distinct aspects: for instance, whether tests are taken
individually or in pairs, or higher or lower levels of interaction with the instructor.
Further exploration is certainly needed of the impact these different features of oral
exams have on students' feelings in the context of second
In addition, the fact that learners tended to find the instructor's presence positive
in terms of fairness in testing their speaking abilities, due to aspects such as getting help
with "moving the conversation along", as well as the fact that they showed a propensity
to find the face-to-face test more difficult and anxiety-evoking, due to aspects such as
feeling "watched" and "evaluated" by the teacher, leads us to reflect on the extent to
which these facts could also be related to learners' possible beliefs that they have the
right to be extensively helped and supported by their instructor (in this case, in the
process of learning a second language). In view of these findings, we surmise that
Specifically, entitlement can affect their achievements and their feelings towards exams
Although entitlement and its consequences have been researched since the 1990s
(Morrow 1994), a focus on this aspect with respect to second language learning does not
seem to have been investigated previously. This aspect could thus constitute another
relevant possibility for future research.
References
269-303
Abu-Rabia, S., Peleg, Y. & Shakkour, W. (2014). The Relation between Linguistic
Linguistics, 4, 118-141.
Al-Shboul, Murad M., Ahmad, Ismail S., Nordin, Mohamad S. & Rahman, Zainurin A.
Early, P., & Swanson, P. B. (2008). Technology for oral assessment. In C. M. Cherry and
Ganschow, L., Sparks, R. L., Anderson, R., Javorsky, J., Skinner, S., & Patton, J. (1994).
college foreign language learners. The Modern Language Journal, 78, 41-55.
Ganschow & Sparks (1996). Anxiety about Foreign Language Learning among High
Gan, Z., Davidson, C., and Hamp-Lyons, L. (2008). Topic Negotiation in Peer Group
Hayes, A.F. & Krippendorff, K. (2007). Answering the call for standard reliability
Hewitt, E. and Stephenson, J. (2012). Foreign Language Anxiety and Oral Exam
Horwitz, E. K., Horwitz, M. B., & Cope, J. (1986). Foreign language classroom anxiety.
Horwitz, E. (1986). Preliminary evidence for the reliability and validity of a foreign
Horwitz, E. & Yan, J. (2008). Learners’ Perceptions of How Anxiety Interacts With
183.
Jeong, T. (2003). Assessing and interpreting students’ English oral proficiency using
University, Columbus.
Joiner, E. (1997). Teaching listening: How technology can help. In M. Bush, & R.
Joo, M. (2007). The attitudes of students and teachers towards a Computerized Oral
Jouët-Pastré C., Kobluka A., Sobral P., Moreira M.L., Hutchinson A.P. (2007). Ponto de
Prentice Hall.
Larson, J. W. (2000). Testing Oral Language Skills via the Computer. CALICO Journal, 18
(1), 53-66.
Lenderink, A.F., Zoer, L., et al. (2012). Review on the validity of self-report to assess
Liskin-Gasparro (1983). Teaching and testing oral skills. ACTFL Master Lecture
Liu, M., & Jackson, J. (2008). An Exploration of Chinese EFL learners’ unwillingness to
71-86.
Lowe, J. & Yu, X. (2009). Computer Assisted Testing of Spoken English: A Study of
the SFLEP College English Oral Test System in China. Journal of Systemics,
Norris, John M., & Pfeiffer, Peter C. (2003). Exploring the Uses and Usefulness of
Preliminary Case Interviews with Five Japanese College Students in the U.S.,
Omaggio Hadley, A. (2001). Teaching Language in Context (3rd edn). Boston: Heinle &
Heinle.
Arnold.
Öztekin, E. (2011). A Comparison of Computer-assisted and Face-to-Face Speaking
Paker, T. & Höl, D. (2012). Attitudes and Perceptions of the Students and Instructors
Phillips, Elaine M. (1992). The Effects of Language Anxiety on Students’ Oral Test
project.org/.
Riordan, B. (2007). “There’s two ways to say it: Modeling nonprestige there’s”.
Satar, H.M., & Özdener, N. (2008). The Effects of Synchronous CMC on Speaking
Proficiency and Anxiety: Text Versus Voice Chat. The Modern Language
Sayin, B., A. (2015). Exploring Anxiety in Speaking Exams and How it Affects Students’
118.
Testing, 3, 99-118.
Stansfield, C., D. Kenyon, R. Paiva, F. Doyle, I. Ulsh and M. Cowles. (1990). The
641-51.
Tognozzi, E., & Truong, H. (2009). Proficiency and Assessment Using WIMBA Voice
Yahya, M. (2013). Measuring speaking anxiety among speech communication course
Young, D. J. (1986), The Relationship Between Anxiety and Foreign Language Oral
language anxiety research suggest? The Modern Language Journal, 75, 426-439.
Proficiency Test Format and a Conventional Speak Test Format. The Ohio
Zeidner, M., & Bensoussan, M. (1988). College students’ attitudes towards written
114.
http://www.pearsonelt.ch/1471/9781447964117/Ponto-de-Encontro-
Portuguese.aspx
APPENDIX A
Set of 6 oral situations for the computer-based and the face-to-face final oral exams
(for the analysis of students’ feelings towards the two types of oral exams):
1. You are talking to a friend about last weekend's outing. Ask each other questions
and try to exaggerate and outdo each other. Talk about:
- Who did you go with
- How much money did you spend
- Where did you go
- What did you do
- Who did you meet
- Where are you going next month
2. You are baby-sitting a couple of kids. Now that they are asleep, you call a friend to
pass the time. Talk about when you two were little. Ask each other:
- Where you used to live
- TV programs you used to watch
- Baby-sitters you hated/loved
- Grammar school teachers you remember
- Games you used to play
- One terrible or great thing that happened to you
3. You are going out this weekend with a very special person and you decide to go
shopping for clothes.
- S1: tell the clerk the situation and ask for advice on clothes
- S2 (Clerk): suggest something way too informal and in weird colors
- S1: describe what you want (the whole outfit) and ask for prices
- S2 (Clerk): respond and also try to sell something else
- S1: decline to buy that something else and buy the outfit you want
- Both say goodbye
4. You meet a friend at a restaurant for dinner.
- Greet each other
- what is your favorite dish
- if you are on a diet
- if you have any food restrictions / allergies
- discuss good and bad restaurants in town
- S1 orders an appetizer and a complete non-vegetarian meal and
something to drink
- S2 orders a different appetizer and a complete vegetarian meal, a drink
and a dessert
5. You are waiting for your table in the lounge of a restaurant. Talk to your dinner
companion about:
- if he/she cooks
- what his/her specialty is
- How to prepare it
- If he/she is allergic to any food
- about their favorite places to eat and why
- Favorite and least favorite foods
6. You run into a friend just before the school vacations (summer/winter). Talk about
your plans.
- Greet each other
- where are you going
- how are you getting there
- how long are you staying
- what are you going to do
- who are you going with
- where are you going to stay
- narrate something interesting/ bad/ great about the place you are going to
APPENDIX B
APPENDIX C
Instructions for the Technology-mediated Final Oral Exam
1) Get in pairs and share a computer.
2) Use Firefox.
3) Sign in on MyPortugueseLab (you only need to choose one of the accounts; your
instructor will then be able to assign a separate grade to each account).
4) Open the icon for the Final Oral Exam and read the prompts.
5) Share the microphone connected to your computer (make sure you speak close
to the microphone when it is your turn to speak).
6) To use the headphones, hold down ALT and, at the same time, click the sound
icon at the top of the screen, and choose the option Logitech USB Headset.
7) Start recording the conversation.
8) When you’re done recording, click “play” to listen to the conversation and make
sure it worked.
APPENDIX D
A. Please rate your expected level of comfort using the technology applied in the computer-
based oral test.
1 Low
2 Medium
3 High
B. Please rate your expected level of anxiety when taking the computer-based oral test.
1 Low
2 Medium
3 High
Do you expect anything to make you feel more anxious about the computer-based oral test?
_________________________________________________________________
C. Please rate the expected level of difficulty you will feel when taking the computer-
based oral test.
1 Low
2 Medium
3 High
D. Please rate the level of fairness you expect from the computer-based test in the
evaluation of your speaking skills.
1 Low
2 Medium
3 High
E. Please rate your expected level of anxiety when taking the face-to-face oral test in the
instructor’s office.
1 Low
2 Medium
3 High
Do you expect anything to make you feel more anxious about the face-to-face oral test?
________________________________________________________________________
F. Please rate the expected level of difficulty in taking the face-to-face oral test in the
instructor’s office.
1 Low
2 Medium
3 High
G. Please rate the expected level of fairness of the face-to-face oral test in the instructor’s
office in the evaluation of your speaking skills.
1 Low
2 Medium
3 High
H. Please describe how anxious you expect to feel taking a computer-based oral test in
comparison to the traditional method (the face-to-face oral test in your instructor's
office).
1 less anxious
2 just as anxious
3 more anxious
Why? ______________________________________
I. Please indicate your opinion on the level of fairness of the computer-based method in
testing your speaking skills in comparison to the traditional method.
1 less fair
2 just as fair
3 more fair
Why? ______________________________________
J. Please indicate your opinion on the expected level of difficulty of the computer-based
oral test in comparison to the traditional method.
1 less difficult
2 just as difficult
3 more difficult
Why? ________________________________________
K. What do you expect to like the MOST about the COMPUTERIZED oral test?
_____________________________________________________________________
What do you expect to like the LEAST about the COMPUTERIZED oral test?
_____________________________________________________________________
What do you expect to like the MOST about the FACE-TO-FACE oral test in the
instructor’s office?
_____________________________________________________________________
What do you expect to like the LEAST about the FACE-TO-FACE oral test in the
instructor’s office?
____________________________________________________________________
APPENDIX E
A. Please rate your level of comfort using the technology applied in the computer-based
oral test.
1 Low
2 Medium
3 High
B. Please rate the level of anxiety you felt when taking the computer-based oral test.
1 Low
2 Medium
3 High
Did anything make you feel more anxious about the computer-based oral test?
_________________________________________________________________
C. Please rate the level of difficulty in taking the computer-based oral test, according to
your opinion.
1 Low
2 Medium
3 High
D. Please rate the level of fairness of the computer-based test in the evaluation of your
speaking skills, according to your opinion.
1 Low
2 Medium
3 High
E. Please rate the level of anxiety you felt when taking the face-to-face oral test in the
instructor’s office.
1 Low
2 Medium
3 High
Did anything make you feel more anxious about the face-to-face oral test?
________________________________________________________________________
F. Please rate the level of difficulty in taking the face-to-face oral test in the instructor’s
office, according to your opinion.
1 Low
2 Medium
3 High
G. Please rate the level of fairness of the face-to-face oral test in the instructor’s office in
the evaluation of your speaking skills, according to your opinion.
1 Low
2 Medium
3 High
H. Please describe how anxious you felt taking this computer-based oral test in
comparison to the traditional method (the face-to-face oral test in your instructor's
office).
1 less anxious
2 just as anxious
3 more anxious
Why? ______________________________________
I. Please indicate your opinion on the level of fairness of the computer-based method in
testing your speaking skills in comparison to the traditional method.
1 less fair
2 just as fair
3 more fair
Why? ______________________________________
J. Please indicate your opinion on the level of difficulty of the computer-based oral test in
comparison to the traditional method.
1 less difficult
2 just as difficult
3 more difficult
Why? ________________________________________
K. What I liked the MOST about the COMPUTERIZED oral test was:
_____________________________________________________________________
What I liked the LEAST about the COMPUTERIZED oral test was:
_____________________________________________________________________
What I liked the MOST about the FACE-TO-FACE oral test in the instructor’s office was:
_____________________________________________________________________
What I liked the LEAST about the FACE-TO-FACE oral test in the instructor’s office was:
______________________________________________________________________
APPENDIX F
Survey on Oral Exam Anxiety, adapted from the FLCAS (Horwitz, E. K., Horwitz, M.
B., & Cope, J., 1986); presented to students at the end of the quarter.
1. I never feel quite sure of myself when I am speaking in the oral exam in the
instructor’s office
Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree
2. I never feel quite sure of myself when I am speaking in the oral exam in the LAB
Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree
3. It frightens me to talk in front of the teacher in the oral exam in the teacher’s office
Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree
4. It frightens me to talk in front of the computer in the oral exam in the LAB
Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree
5. I start to panic when I have to speak without preparation in the oral exam in the
instructor’s office
Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree
6. I start to panic when I have to speak without preparation in the oral exam in the LAB
Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree
7. I worry about the consequences of failing my oral exam in the instructor’s office
Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree
8. I worry about the consequences of failing my oral exam in the LAB
Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree
9. In the oral exam in the instructor’s office, I can get so nervous I forget things I
know.
Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree
10. In the oral exam in the LAB, I can get so nervous I forget things I know.
Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree
11. It embarrasses me to talk in front of the teacher in my oral exam in the instructor's office
Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree
12. It embarrasses me to talk in front of the computer in my oral exam in the LAB
Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree
13. Even if I am well prepared for the oral exam in the instructor’s office, I feel
anxious about it.
Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree
14. Even if I am well prepared for the oral exam in the LAB, I feel anxious about it.
Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree
15. I often feel like not going to my oral exam in the instructor’s office
Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree
16. I often feel like not going to my oral exam in the LAB
Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree
17. I can feel my heart pounding when I'm taking an oral exam in the instructor’s
office
Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree
18. I can feel my heart pounding when I'm taking an oral exam in the LAB
Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree
19. I feel more tense and nervous in my oral exam in the instructor’s office than in my
written exams.
Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree
20. I feel more tense and nervous in my oral exam in the LAB than in my written
exams.
Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree
21. I get nervous and confused when I am speaking in the oral exam in the instructor’s
office
Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree
22. I get nervous and confused when I am speaking in the oral exam in the LAB
Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree
23. I get nervous when I don't understand every word in the prompts for the oral exam
in the instructor’s office
Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree
24. I get nervous when I don't understand every word in the prompts for the oral exam
in the LAB
Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree
25. I get nervous when I have to perform oral situations for which I haven't prepared in
advance, in the oral exam in the instructor’s office
Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree
26. I get nervous when I have to perform oral situations for which I haven't prepared in
advance, in the oral exam in the LAB
Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree
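The 26 items above come in pairs: odd-numbered items address the face-to-face exam in the instructor's office and even-numbered items the LAB exam. As an illustration only (the numeric Likert coding and the helper function below are assumptions for the sketch, not the scoring actually used in the study), per-format mean ratings of the kind discussed in the text could be computed like this:

```python
# Illustrative sketch: assumes Strongly agree = 5 ... Strongly disagree = 1
# and the odd/even item pairing visible in the survey above.
from statistics import mean

LIKERT = {
    "Strongly agree": 5,
    "Agree": 4,
    "Neither agree nor disagree": 3,
    "Disagree": 2,
    "Strongly disagree": 1,
}

def format_means(responses):
    """responses maps item number (1-26) to a Likert label.
    Odd items target the face-to-face (office) exam, even items the LAB exam;
    returns (office_mean, lab_mean)."""
    office = [LIKERT[label] for item, label in responses.items() if item % 2 == 1]
    lab = [LIKERT[label] for item, label in responses.items() if item % 2 == 0]
    return mean(office), mean(lab)

# Toy respondent who agrees with every office-exam statement and
# disagrees with every LAB statement:
demo = {i: ("Agree" if i % 2 == 1 else "Disagree") for i in range(1, 27)}
office_mean, lab_mean = format_means(demo)
# For this toy respondent the office mean exceeds the LAB mean, i.e. more
# self-reported anxiety about the face-to-face format.
```

A per-student pair of means like this is what makes a format-by-format comparison (such as the higher face-to-face anxiety ratings reported in the discussion) straightforward to tabulate.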
APPENDIX G
Pre- and Post- Oral Achievement Test Assessment
(For the investigation of whether or not there is an improvement in students’
speaking abilities).
A) Oral situation topic: Festa de aniversário
You’re asking one of your friends when his/her birthday is and then you realize it
was not too long ago, so you want to find out what he/she did on his/her birthday.
Ask him/her:
1-When his/her birthday is
2-Whether he/she had fun
3-What places he/she went to during the day
4-Whether a lot of people went to his/her birthday party
5-What are some of those people like (physically and psychologically)
6-What he/she has in common with some of them
7-What kind of food he/she had on that day
8-The things he/she usually likes to eat
9-The things he/she usually likes to do on her birthday
10-What he/she thinks she will do on her next birthday
B) Topics included in the assessment:
1- Datas
2- Pretérito perfeito de verbos regulares
3- Pretérito perfeito do verbo “Ir”
4- Pretérito perfeito do verbo “Ir”
5- Adjetivos (Describing people)
6- Pronomes (Nós… ele, eles, etc.)
7- Comida
8- Expressão “Gosto de…” / Comida (there is an emphasis on the use of the
expression “gosto de…” in 16A and 16B because of students’ tendency to use
the Spanish expression “me gusta”)
9- Expressão “Gosto de…” + Verbo no infinitivo
10- Verbo “Ir” no futuro- (Distinction from the Spanish “Ir a…” to express future
actions)
APPENDIX H
Instructions for the Pre- and Post- Test and computer-based oral activities taken
throughout the instruction period
1) Get in pairs and share a computer.
2) Use Firefox.
3) Sign in on MyPortugueseLab (choose one of your student accounts).
4) Open today’s oral activity and read the prompts.
5) The student who has signed in to his/her account will be the one
answering the questions and also the one using the headphones + mic.
6) To use the headphones, hold down ALT and, at the same time, click the sound
icon at the top of the screen, and choose the option Logitech USB Headset.
7) Start recording the conversation.
8) When you’re done recording, click “play” to listen to the conversation and make
sure it worked.
9) Sign in to the other student’s account.
10) Trade roles (again, the student who has signed in to his/her account will be
the one answering the questions and also the one using the headphones + mic).
APPENDIX I
Oral assessment activities taken throughout the quarter: “Situações orais” taken
from the textbook “Ponto de Encontro”.
(These activities were taken in the Computer lab by the Experimental group and in the
regular classroom by the Control group)
Atividade 1 (p. 284)
Role A: You were in a tennis clinic (a clínica de tênis/ténis) last summer. Tell your
friend a) where you went, b) the number of days you were there, and c) whether you
had fun and if you liked it or not. Answer his or her questions.
Role B: You would like to know more about the tennis clinic your friend attended last
summer. Ask your friend a) how many instructors he or she had, b) how much he or
she paid for the clinic, c) if he or she improved his or her game, and d) why he or she
liked or didn’t like the experience.
Atividade 2 (p. 320)
Role A: You are interviewing a well-known film critic to find out his or her opinion on
the best and worst American movies of the year. Ask him or her a) which is the best
American film, b) why, c) who is the best actor or actress (also ask him/her to describe
this actor or actress), d) which is the worst film of the year, and e) what she or he thinks
of films of the Portuguese-speaking world.
Role B: You are a well-known film critic. Answer your interviewer’s questions
according to your own opinions regarding the best and the worst American films and
actors.
Atividade 3 (p.350)
Role A: You are an advertising manager (gerente de publicidade) who is presenting a
new ad campaign (campanha publicitária) to the president of the company. After
showing two ads to the president, a) ask him or her if he or she likes them; b) mention
the magazines where the ads will appear; and c) state the reasons why you chose those
magazines. Then answer his or her questions.
Role B: You are the president of an important company who has to decide about a new
advertising campaign. After telling the ad manager that you like the ads and listening to
his or her explanations, inquire a) about the cost of the campaign, and b) when it will
begin.
Atividade 4 (p.360)
Role A: You have moved to São Paulo and have opened a checking account (conta
corrente) at Banco do Estado de São Paulo. Ask a bank employee to help you write out
a check (saying the information he or she gives you, such as dates, out loud). After the
employee gives you all the details, a) thank him or her and b) say that you are very
happy with the services the bank provides its customers.
Role B: You are the employee at Banco do Estado de São Paulo. Using a check, explain
to the customer where a) to put the date (say the date out loud), b) to write the payee’s
name, c) to write the amount (quantia) in numbers, d) to write the amount in words,
and e) to sign the check.
Atividade 5- Adapted (p.395)
Role A: You are playing the role of Narciso. You have decided to keep your fiancée and
go on a weight-gain diet. You go to a nutritionist. Tell him or her a) who you are and
why you made the appointment, and b) that you want to know how to gain 20
kilograms. Then ask him or her c) what foods you should eat (minimum of 3 foods), d)
what drinks you should have, and e) whether he or she thinks it is dangerous to gain so
much weight and why.
Role B: You are a nutritionist. Your client wants to gain weight to please his fiancée.
Ask him a) why he made the appointment and what he wants to achieve; then tell him
b) what is necessary to do in general for a person to gain weight, and specifically c)
what foods he should eat, and d) what drinks he should have. Finally, e) tell him that
you do not think it is healthy to gain or lose a lot of weight rapidly and that excessive
weight can cause health problems.
APPENDIX J
Objectives:
- To be able to talk about food (Q7 and Q8)
- To be able to express likes (Q8 and Q9)
- To be able to discuss activities and make future plans (Q10)
Rating descriptors (fragment):
… complete but simple answers
3- Very comprehensible / complete and amplified answers

Overall Communication:
- To be able to successfully perform the oral situation presented (rated 0 1 2 3)
0- Requires extra-sympathetic listening; parts of message still not understood;
minimally successful
1- Topics handled adequately but minimally; ideas conveyed in general; basically on
task but no more
2- Topics handled adequately; ideas clearly conveyed; requires little effort to be
understood; some creativity
3- Displays communicative ease within context(s); creative, resourceful; easily
understood; takes risks