
UNIVERSITY OF CALIFORNIA

Santa Barbara

The Use of Technology as an Oral Achievement Testing Tool: Analysis of students’

perceptions and oral performance in a Portuguese Language Program

A dissertation submitted in partial satisfaction of the

requirements for the degree Doctor of Philosophy

in Hispanic Languages and Literatures

by

Raquel C. Santana-Paixão

Committee in charge:

Professor Viola Miglio, Co-Chair

Professor Laura Marqués-Pascual, Co-Chair

Professor Dorothy Chun

March 2017




The dissertation of Raquel C. Santana-Paixão is approved.

___________________________________________________________
Dorothy Chun

___________________________________________________________
Laura Marqués-Pascual, Co-Chair

___________________________________________________________
Viola Miglio, Co-Chair

January 2017

The Use of Technology as an Oral Achievement Testing Tool: Analysis of students’

perceptions and oral performance in a Portuguese Language Program

Copyright © 2017

by

Raquel C. Santana-Paixão

ACKNOWLEDGEMENTS

Firstly, I wish to thank my committee, Professor Laura Marqués-Pascual,

Professor Viola Miglio and Professor Dorothy Chun, for their guidance, continuous

support, knowledge and motivation.

I am also grateful to my colleagues and friends Carlos Pio, Ellen Oliveira,

Eduardo Viana da Silva, João Albuquerque, Eva M. Wheeler, Ricardo Brites, Pedro

Almeida, Patrícia Lino, Bianca Brigidi, Aina Cabra-Riart and Brendan Barnwell for their

valuable encouragement and academic discussions. I would also like to express my

gratitude to all of those who participated in this study.

Finally, I would like to thank my family for their unconditional support

throughout my PhD studies.

VITA OF RAQUEL C. SANTANA-PAIXÃO
January 2017

EDUCATION

Bachelor of Arts in Linguistics, Universidade Nova de Lisboa, Lisbon, Portugal, January
2007
Master of Arts in Iberian Linguistics, University of California, Santa Barbara, June 2012
Doctor of Philosophy in Hispanic Languages and Literatures, University of California,
Santa Barbara, January 2017

PROFESSIONAL EMPLOYMENT

2006-2008: English Language Instructor, EIL (Escola Internacional de Línguas), Lisbon,
Portugal
2008-2010: Portuguese Lecturer, Department of Spanish and Portuguese, UC Santa
Barbara; sponsored by Instituto Camões, Lisbon, Portugal
2010-2016: Teaching Assistant, Department of Spanish and Portuguese, UC Santa
Barbara

PUBLICATIONS

Miglio, V. G., Gries, S. T., Harris, M.J., Santana Paixão, R., & Wheeler, E. M.
(2013). Spanish lo(s)-le(s) Clitic Alternations in Psych Verbs: A Multifactorial Corpus-
Based Analysis. In: Selected Proceedings of the 15th Hispanic Linguistics Symposium, ed.
J. Cabrelli Amaro et al., 268-278. Somerville, MA: Cascadilla Proceedings Project.

AWARDS

Outstanding Teaching Assistant Award, Department of Spanish and Portuguese,
University of California, Santa Barbara, 2015
Excellence in Teaching Award, Graduate Student Association, University of California,
Santa Barbara, 2013
Graduate Block Grant (tuition paid during M.A. and Ph.D. studies), Center for
Portuguese Studies and Spanish and Portuguese Department, University of California,
Santa Barbara, 2010

FIELDS OF STUDY

Second Language Teaching Methodology

ABSTRACT

The Use of Technology as an Oral Achievement Testing Tool: Analysis of students’

perceptions and oral performance in a Portuguese Language Program

by

Raquel C. Santana-Paixão

Oral testing administration plays a significant role in foreign language programs

aiming to foster the development of students’ speaking abilities. With the development

of language teaching software, computer-based recording tools are becoming

increasingly common in language courses as an alternative to traditional face-to-face oral

exams. The present dissertation constitutes an examination of the use of technology in

an oral achievement test used in a Portuguese Language Program in terms of 1) students’

perspectives and affective factors, with a focus on anxiety, towards the new computer-

based final oral exam vis-à-vis the traditional method; and 2) students’

improvement in oral performance when taking part in computer-based oral activities

throughout the instruction period. In order to analyze the latter, learners of two different

classes, designated as the control and the experimental group, participated in this study over

the course of one quarter. The technology used for the current research is part of the

online platform accompanying the textbook used in the Language Program at the

University of California, Santa Barbara.

A total of 27 L2 learners of Portuguese for Spanish Speakers, at the university

level, participated in this study and completed both a computer-based and a face-to-face Oral

Achievement Exam at the end of the quarter. Learners were given a pre- and post-

questionnaire, inquiring about their expectations and final opinions towards both

modes of final oral exams. Here, students assessed their levels of anxiety, perceived

difficulty and perceived fairness in testing their speaking abilities, as well as their levels

of comfort with the technology used. The variables included in this survey were based

on a review of previous studies such as Tognozzi and Truong (2009) and Kenyon and

Malabonga (2001). In addition, examinees completed a survey adapted from the FLCAS

(Foreign Language Classroom Anxiety Scale) (Horwitz et al. 1986), in order to

investigate what kind of anxiety factors and feelings, presented on the anxiety scale, are

more involved in one type of oral exam in comparison to the other. In general, students’

reports indicate a tendency to consider the new computer-based oral exam as less

anxiety evoking and more difficult than the traditional face-to-face test. Nevertheless,

students also tend to view the traditional method as more fair in testing their speaking

abilities.

For the analysis of students’ improvements in oral production, when

participating in computer-based oral activities throughout the instruction period,

learners in the two different classes (control and experimental) were assessed through

a pre- and post- oral test. Results suggest that the treatment received by the

Experimental class does not play a particular role in fostering students’ speaking

abilities in the linguistic variables that were formally taught and emphasized throughout

the course.

TABLE OF CONTENTS

Page

Acknowledgements iv
Vita v
Abstract vi
List of Tables xiii
List of Figures xiv

INTRODUCTION

1. Statement of the research purpose 1
2. Scope of the present study 3
3. Overview of the dissertation 6

CHAPTER 1: LITERATURE REVIEW

1.1. Overview of oral testing in foreign language teaching 7
1.1.1. Traditional face-to-face methods 7
1.1.1.1. Oral proficiency tests 7
1.1.1.2. Oral achievement tests 10
1.1.2. Students’ perceptions towards face-to-face oral testing 16
1.1.3. Difficulties in face-to-face oral testing 19
1.1.4. Previous Research on Technology Assisted Oral Testing 20
1.1.4.1. Students’ improvement in oral performance through computerized oral activities 21
1.1.4.2. Students’ perceptions 23
1.1.4.3. Advantages 30
1.1.4.4. Limitations 32
1.2. Previous research on anxiety in foreign language learning and towards oral tests 33
1.2.1. Measuring anxiety in foreign language learning and oral tests 34
1.2.2. Factors involved in anxiety in foreign language learning and oral tests 37
1.2.3. Anxiety and oral performance 42
1.3. Summary of literature review 44

CHAPTER 2: METHODOLOGY

2.1. Introduction 50
2.2. Summary of research design 50
2.3. Research questions revisited and hypotheses 52
2.4. Participants 55
2.5. Data collection procedures 56
2.5.1. Investigation of students’ perceptions and affective factors towards the two types of final oral exams 56
2.5.1.1. Face-to-face oral exam 57
2.5.1.2. Computer-based final oral exam 58
2.5.1.4. Measuring students’ perceptions and affective factors towards the two types of final oral exams 61
2.5.2. Investigation of students’ development of oral performance when taking computer-based oral activities throughout the quarter 64
2.5.2.1. Differences in the instructional treatment for the Control and the Experimental class 66
2.5.2.2. Voice recording tool used for the computer-based oral activities taken throughout the quarter by the Experimental class 69
2.5.2.3. Measuring students’ development of oral performance 70
2.6. Summary 72

CHAPTER 3: DATA ANALYSES AND RESULTS

3.1. Introduction 73
3.2. Students’ perceptions and affective factors 74
3.2.1. Students’ general perceptions towards the computer-based vis-à-vis the face-to-face oral testing 75
3.2.1.1. Students’ responses in the Control and the Experimental class, before and after taking both types of oral exams 76
3.2.1.1.1. Overall anxiety, perceived difficulty and perceived fairness in testing their speaking abilities 77
3.2.1.1.2. Comfort with the technology used 84
3.2.1.2. Students’ overall reactions towards both types of oral exams at the end of the quarter 86
3.2.1.2.1. Overall anxiety, perceived difficulty and perceived fairness in testing their speaking abilities 86
3.2.1.2.2. Comfort with the technology used 90
3.2.2. Examination of anxiety factors and feelings that tend to be more involved in one testing format in comparison to the other 90
3.2.2.1. Is there a considerable difference between the Control and the Experimental class? 97
3.3. Students’ improvement in oral performance 99
3.3.1. Does the Experimental class show greater improvement than the Control class in certain linguistic areas and overall? 100
3.4. Summary of key findings on students’ perspectives and affective factors 103
3.4.1. Students’ overall reactions towards both types of oral exams 103
3.4.2. Anxiety factors 104
3.5. Summary of key findings on students’ improvement in oral performance 105
3.6. Summary 106

CHAPTER 4: DISCUSSIONS AND CONCLUSION

4.1. Introduction 107
4.2. Discussion of students’ perceptions and affective factors 107
4.2.1. Students’ general perceptions towards the computer-based test vis-à-vis the face-to-face oral testing 108
4.2.1.1. Overall anxiety levels 109
4.2.1.2. Perceived difficulty 113
4.2.1.3. Perceived fairness 117
4.2.1.4. Comfort with the technology used 124
4.2.2. Examination of anxiety factors and feelings that tend to be more involved in one testing format in comparison to the other 125
4.3. Discussion of students’ improvements in oral performance 131
4.4. General implications, limitations and suggestions for future research 135
4.5. Conclusion 138

REFERENCES 148

APPENDICES

Appendix A: Set of 6 oral situations for the computer-based and the face-to-face final oral exams 155
Appendix B: Description of the creation method for Oral Exams on MyPortugueseLab 157
Appendix C: Instructions for the Technology-mediated Final Oral Exam 158
Appendix D: Survey concerning students’ expectations towards oral assessment methods 159
Appendix E: Survey concerning students’ feelings towards oral assessment methods 162
Appendix F: Survey on Oral Exams Anxiety adapted from the FLCAS in Horwitz, E. K., Horwitz, M. B., & Cope, J. 1986 165
Appendix G: Pre- and Post- Oral Achievement Test Assessment 168
Appendix H: Instructions for the Pre- and Post-Test and computer-based oral activities taken throughout the instruction period 169
Appendix I: Oral assessment activities taken throughout the quarter 170
Appendix J: Pre- and Post-Test Rating Scale 172

LIST OF TABLES

Table Page

1 Mean values across classes before and after taking the face-to-face (F2F) and the computer-based oral exams 78

2 Mean values across classes on level of comfort with the technology used before and after taking the computer-based oral exam 85

3 Mean values for all students’ responses regarding levels of anxiety, perceived difficulty and perceived fairness, after taking the face-to-face (F2F) and the computer-based oral exams at the end of the quarter 87

4 Items included in the anxiety scale 93

5 T-test results for Pre- and Post-test mean scores in both classes 101

6 Comparison between the Control and Experimental class in terms of improvement in oral performance 102

LIST OF FIGURES

Figure Page

1 Results across classes before and after taking the face-to-face and the computer-based oral exams regarding levels of anxiety, perceived difficulty and perceived fairness 79

2 All students’ reactions to both types of oral exams, regarding levels of anxiety, perceived difficulty and perceived fairness, at the end of the quarter 88

3 All students’ responses towards the computer-based and the face-to-face test on the items included in the anxiety scale 94

4 Differences between classes, regarding students’ ratings towards the items presented on the anxiety scale 97


INTRODUCTION

1. Statement of research purpose

With purposeful communication being the goal of language instruction, oral

testing should play an important role in any foreign language program that aims to

develop oral proficiency (Omaggio Hadley 2001). As Omaggio Hadley (2001) remarks,

“we will need to administer some type of oral test, if only once or twice a semester”

(p.433).

Various scholars in the field of Second Language Acquisition and Applied

Linguistics have conducted research on different types of oral assessment in second and

foreign language instruction, investigating inherent advantages and limitations. Some of

these studies have included an analysis of students’ perceptions towards these types of

tests. Nevertheless, in the literature reviewed, few studies were found involving an

investigation of students’ reactions towards computer-based vis-à-vis face-to-face oral

testing methods.

By examining the implementation of a new computer-based final oral

achievement exam in a Portuguese Language Program, through students’ perspectives

and affective factors, the current study attempts to contribute to a more in-depth

investigation of learners’ perceptions in a comparison between computerized and more

traditional oral exam methods.

Hence, in line with previous studies (Kenyon & Malabonga 2001; Jeong 2003; Joo

2007; Lowe & Yu 2009; Öztekin 2011), the first section in the current research analyzes

students’ attitudes towards the use of technology in an oral achievement exam, in

comparison to the traditional method used in the curriculum.

Since the construct of anxiety has been considered a significant factor in second

language learning (Horwitz et al. 1986), and the effects of foreign language anxiety in

oral performance have been examined by several scholars (Hewitt & Stephenson 2012;

Scott 1986; Wilson 2006; Phillips 1992; Park & Lee 2005), the present study mainly

focuses on the question of anxiety in the examination of learners’ affective reactions

towards these tests. This analysis attempts to investigate what kind of anxiety factors

tend to be more involved in students’ feelings, in a comparison between both oral exam

formats applied in the Language Program where the present study takes place.

Furthermore, the computerized oral achievement test used in the current

research corresponds to a computer-based testing structure different from the majority

seen in the literature reviewed. This oral exam format is taken in pairs and it is based on

a randomly chosen oral communicative situation performed among learners (an oral

assessment arrangement similar to the traditional face-to-face final oral exam used in

the curriculum, completed in the instructor’s office).

The present investigation thus aims to contribute to a better understanding of

the affordances of technology in oral testing, and whether or not the use of the computer-

based oral assessment tool is indeed an improvement over the traditional face-to-face

oral achievement exam, from the perspectives of students/examinees.

In addition, since the implementation of a system based on the completion of

computerized oral activities throughout the instruction period has also been pondered

by some of the instructors in the Language Program where the present study takes place,

this aspect constitutes another point of analysis in the current investigation. Thus, the

present research also examines whether there is any development in terms of oral

performance, when using technology as an oral assessment tool throughout the course,

in the kind of speaking assessment activities used in this foreign language curriculum.

The format of these computerized oral evaluation tasks resembled the computer-based

final oral exam structure applied in the current study. Reports on measures of learners’

improvement in oral performance through the completion of these computerized oral

activities will also contribute to the literature in this field.

2. Scope of the present study

The current research contributes to previous literature analyzing students’

feelings towards computer-based vs. face-to-face oral exams (Kenyon & Malabonga

2001; Jeong 2003; Joo 2007; Öztekin 2011; Lowe & Yu 2009) by investigating the

students’ perspectives and affective factors, in a new computer-based final oral exam

vis-à-vis the traditional method used in a Portuguese language program. This aspect is

examined through students’ responses to an affect survey taken before and after the

course, regarding students’ overall reactions towards the two modes of oral testing, and

through learners’ reactions towards a questionnaire on anxiety factors and feelings

involved in each test.

Furthermore, following Tognozzi & Truong’s (2009) research work on evaluating

the integration of a computer-based tool into the traditional curriculum, the present

study also analyzes whether or not there is any improvement in terms of oral

performance, when using technology as an oral assessment tool throughout the

instruction period. The type of activities used in the current study corresponds to paired

oral computerized tasks taken during class time, in a language lab. In order to analyze

the latter, this study examines the oral production of learners in a Control and an

Experimental class.

The technology used in this study corresponds to the web-based platform that

accompanies the elementary Portuguese textbook used in the curriculum. All oral

activities were based on the topics and grammar structures covered during the course

and meant to test the learning objectives stated on the syllabus.

L2 oral performance has been analyzed in previous studies through the

assessment of several aspects, such as lexical richness, grammar, fluency, pronunciation,

intonation, communication strategies and conversation skills (Phillips

1992; Hewitt 2012; Tognozzi & Truong 2009; Park & Lee 2005; Satar & Özdener

2008). Although all these variables can be considered in the analysis of speaking

abilities, this study attempts to focus mainly on features formally taught and emphasized

in the course where the computer-based oral activities were implemented, such as

certain vocabulary items, grammatical structures and communicative functions.

In sum, the main objective of this study is twofold: 1) to investigate the use of

technology in a final oral achievement test in terms of students’ perspectives and

affective factors and 2) to understand the value of technology in oral activities in terms

of promoting students’ development in oral performance, when these activities are done

throughout the instruction period. These aspects are studied in the specific context of

the final oral achievement exams and oral activities that are part of the Portuguese

Language Program where this study takes place.

To address these issues, the following research questions have been formulated:

1. What are the students’ reactions to the computer-based vis-à-vis the face-to-

face final oral exam, regarding levels of anxiety, perceived difficulty, and perceived

fairness in testing speaking abilities, as well as levels of comfort with the technology

used?

1.1. Do students’ perceptions towards both types of final oral exams differ

depending on 1) the different treatment received by the two classes; and 2) whether the

survey was taken before or after the course?

1.2. What are the students’ reactions to the computer-based vis-à-vis the face-to-

face final oral exam at the end of the quarter?

2. What anxiety factors and feelings, presented on the anxiety scale, are more

involved in one type of oral exam in comparison to the other?

2.1. Is there a considerable difference between the Control and the Experimental

class?

3. Does the Experimental class show greater improvement than the Control class

in terms of oral performance, in certain linguistic areas and overall?

3. Overview of dissertation

The first chapter of the present research includes an overview of oral testing in

foreign language teaching, containing a literature review on face-to-face methods of oral

testing. The overview focuses on what has been done regarding oral assessment in the

classroom context, including student’s opinions on this type of test, as well as a review

on the application of technology in oral evaluation and studies reporting on students’

perceptions towards it. An overview of previous research on anxiety in second language

learning is also included in Chapter 1, containing the means found in the reviewed

literature to measure this variable, the factors involved in foreign language anxiety and

a relationship between anxiety and oral performance. A detailed methodology will be

presented in Chapter 2, followed by analysis and results for each research question in

Chapter 3. Chapter 4 presents the discussion of the study. key findings on students’

perceptions and affective factors towards the two different modes of oral testing are

presented, together with the examination of whether or not there was an improvement

in students’ speaking abilities when taking computer-based oral activities throughout

the quarter. Limitations in the current investigation, as well as its implications for the

field of second language learning and suggestions for future research will also be

presented in Chapter 4, followed by general conclusions.

CHAPTER 1: LITERATURE REVIEW

1.1. Overview of oral testing in foreign language teaching

1.1.1. Traditional face-to-face methods

The oral exam type used in the current research corresponds to an Oral

Achievement test, since it is taken in a classroom context and tied to a particular

curriculum (Liskin-Gasparro 1983; Brown 1985; Omaggio Hadley 2001). However,

research on oral assessment also discusses and explores the implementation of the

American Council on the Teaching of Foreign Languages (ACTFL) guidelines. An example

of oral testing that follows the ACTFL guidelines is the Oral Proficiency Interview (OPI).

The current literature review thus starts with a description of these two different types

of oral assessment.

1.1.1.1. Oral proficiency tests

According to the description found in the “Oral Proficiency Interview

Familiarization Manual (2012)”, the ACTFL OPI is an interactive and adaptive test,

adjusting to interests, experiences, and the linguistic abilities of the test takers. As it is

specified in the Manual, “The OPI assesses language proficiency in terms of a speaker’s

ability to use the language effectively and appropriately in real-life situations” (p.4). The

fact that the OPI is not an achievement test, where the speaker’s acquisition of specific

aspects of a course is assessed, and the fact that it is also not connected to any particular

method of instruction, is also highlighted in the description of this oral test. In addition,

it is explained that a speech sample is rated according to the proficiency levels described

in the ACTFL Proficiency Guidelines 2012-Speaking. Although these guidelines describe

five major levels of language proficiency, four major levels are included in the ACTFL

Rating Scale (derived from these guideline levels) and correspond to a) Superior; b)

Advanced; c) Intermediate and d) Novice. Each one of these levels represents a different

profile of functional language ability and is divided into “high”, “mid” and “low” sub-

levels. As it is also mentioned in the “Oral Proficiency Interview Familiarization Manual

(2012)”, the hierarchy of global tasks according to which the four major levels are

delineated is summarized in a rating scale, describing a range of proficiency for the

mentioned levels “Superior” to “Novice”. As it is stated in the Manual, a speaker of a

“Superior” level “can support opinion, hypothesize, discuss topic concretely and

abstractly, and handle a linguistically unfamiliar situation” (p.4); an “Advanced” level

speaker “can narrate and describe in all major time frames and handle a situation with

a complication” (p.4); the “Intermediate” level speaker “can create with language, ask

and answer simple questions on familiar topics” (p.4) and, at the “Novice” level, the

speaker “can communicate minimally with formulaic and rote utterances, lists and

phrases” (p.4). As it is indicated in the OPI Familiarization Manual, the criteria of

evaluation for the OPI take into account linguistic components in the context of their

contribution to overall speaking performance, evaluating spoken language ability from

a global perspective. The considered criteria are thus: a) “the functions or global tasks

the speaker performs” (e.g.: “Communicate minimally with formulaic and rote

utterances, lists, and phrases” - in the case of a Novice speaker); b) “the social contexts

and specific content areas in which the speaker is able to perform them” (e.g.: “Some

informal settings and a limited number of transactional situations”- in the case of an

Intermediate speaker); c) “the accuracy”, i.e., “degree to which the message is

understood” (e.g.: “Understood, with some repetition, by speakers accustomed to

dealing with non-native speakers”); and d) “the type of oral text or discourse the speaker

is capable of producing” (e.g.: “No pattern of errors in basic structures. Errors virtually

never interfere with communication or distract the native speaker from the message” -

in the case of Superior speakers, for instance) (p.6).

As mentioned in the “Oral Proficiency Interview Familiarization Manual (2012)”,

this type of interview has a complex structure and the primary goal is to obtain a ratable

speech sample. In order to elicit this speech sample, the interview’s structure follows

four mandatory phases, which correspond to: “Warm up” (a first phase/introduction of

the interview, which consists of “greetings, informal exchanges of pleasantries, and

conversation openers pitched at a level which appears comfortable for the speaker”);

“Level Checks” (“questions that elicit the performance floor”, which corresponds to “the

linguistic tasks and contexts of a particular level that can be handled successfully”);

“Probes” (here the purpose is to “discover the level at which the speaker can no longer

sustain functional performance”- called the ceiling), and, finally, “Wind Down” (the

closing stage of the interview, which “returns the speaker to a comfortable level of

language exchange”) (p.7).

An early work developed by Liskin-Gasparro (1983), called “Teaching and testing

oral skills”, refers to proficiency tests as “appropriate instruments to measure students’

ability at the end of a sequence of courses or a major linguistic experience, such as study

abroad” (p.12). The author also considers this type of test to be better for “outside use”,

as when an employer wants to know how well an individual can function

in a certain language on the job, for example. Furthermore, Brown (1985) states that

“both classroom learning and testing should anticipate and prepare for proficiency

testing of the real-life sort, but a good proficiency test may be a poor classroom test” (p.

482).

1.1.1.2. Oral achievement tests

Omaggio Hadley (2001) also calls attention to the important fact that the Oral

Proficiency Interview is different from an oral achievement test taken by a student. Oral

tests taken in the classroom are thus often referred to as oral achievement tests, as they

are used to evaluate students’ acquisition of specific course contents and they are

considered to be valid if the test covers what has been taught in the course (Liskin-

Gasparro 1983; Brown 1985; Omaggio Hadley 2001). The contexts in which an oral

achievement test or a proficiency test should be used are discussed in several works. A

study conducted by Norris & Pfeiffer (2003), for instance, discusses the use of ACTFL

Proficiency Guidelines (used in OPIs) in a foreign language program for different

purposes. The authors argue for “developing curriculum-dependent standards and the

use of assessment instruments that reflect the kinds of learning valued and fostered by

the program” (p.580). In their investigation, the authors examine the results of more

than 100 Simulated Oral Proficiency Interviews (SOPIs) administered across all levels of

instruction within one German foreign language department. The researchers mention

that, after observing the relationship between the students’ oral proficiency ratings

(based on ACTFL Guidelines) and their curricular status in the German Department at

Georgetown University, the faculty and administration of the department became aware

of the need for different assessment methods for different reasons. As it is explained in

the study, the use of ACTFL Proficiency Guidelines (used in OPIs) has proved to be

particularly useful to meet different purposes within language programs and

institutions. A few examples are the use of test ratings as an external criterion for the

abilities of students placed into the upper levels of the curriculum and also providing the

resulting ACTFL Guidelines oral proficiency ratings to students as one indication of the

foreign language abilities they have achieved within the program. However, as it is

explained in this research study, the faculty and administration of the department also

became aware that “the oral proficiency ratings are not used as a proxy or gate-keeping

device for students to meet any institutional or degree-program language requirements”

(p.579). An oral achievement test thus seemed to be more suitable for that purpose.

Liskin-Gasparro (1983) also considers oral achievement tests to be connected to

a particular curriculum and to “measure the degree to which students achieve the

outcomes of a unit or a course or a program” (p.5). In addition, the author explains that

speaking tests that are more closely tied to the program should be developed for

formative evaluation and that, besides being achievement-oriented, they should also be

communicatively based, i.e., “tests that reflect a classroom in which students are given

opportunities to interact verbally with the instructor and with each other with minimal

teacher intervention.” (p.9). Liskin-Gasparro (1983) points out several reasons for

testing that are internal to the curriculum, such as assigning grades. It is also mentioned

that these scores, usually with letter grades or percentage correct, can be quite useful in

“ranking students with respect to each other or in assessing their progress” (p.5).

Another reason would be the fact that “we can discover students’ areas of strength and

weakness, where they have learned the material, and where we as teachers need to

spend more time” (p.4), getting a sense of the effectiveness of the program and the need

to develop new teaching strategies.

Regarding the elements being assessed on these types of tests, evaluating

students’ acquisition of specific course contents, Liskin-Gasparro (1983) mentions that

the grading system should include an evaluation of linguistic and communicative factors

involved in oral communication, measuring global and integrative language skills. The

author thus advocates for “the assessment of the overall communication, as well as

measurement of the accuracy of the constituent parts” (p.12), such as grammar structures,

for example.

When looking at the different scoring scales presented in several works involving

oral assessment in L2 (Tognozzi and Truong 2009; Larson 2000; Satar & Özdener 2008;

Liskin-Gasparro 1983; Omaggio Hadley 2001), we notice that the linguistic factors being

evaluated generally correspond to vocabulary, grammar, communication functions,

pronunciation and fluency. Furthermore, we can observe that the language components

being assessed on an oral testing scoring scale, and the description of what is being

measured under those components, vary from scale to scale, depending on how the

score sheet is designed and organized. Tognozzi & Truong (2009), for example, present

an oral testing scoring scale where they evaluate aspects such as “speech natural and

continuous” under the subheading of “fluency”, or “almost entirely/entirely

comprehensible to native speaker” under “pronunciation”, “rich and extensive

vocabulary, accurate usage pertinent to level” under “vocabulary” and “almost always

correct” under “grammar” (p.15). Another example of an oral test-scoring sheet can be

found in Omaggio Hadley (2001). Here we can observe several grading components and

detailed scoring criteria such as “Communication”- “displays communicative ease within

contexts”; “accuracy”- “shows exceptional control of required grammar concepts and

correctness in a variety of contexts”; “Fluency”- “Normal, ‘thoughtful’ delay in

formulation of thought into speech, language flows, extended discourse”; “Vocabulary”-

“Very conversant with vocabulary required by given context(s), excellent control and

resourcefulness”; and “Pronunciation”- “Correct pronunciation and intonation, very few

mistakes, almost native like” (p. 444). In an attempt to identify language development in

speaking, in a study examining the use of two synchronous computer-mediated

communication tools (text and voice chat) with secondary school learners of English as

a foreign language, Satar & Özdener (2008) also developed a pretest and posttest

grading scale, specifically with reference to topics covered in chat sessions. In order to

investigate the differences in students’ speaking ability before and after the study, the

authors provide a scoring scale with a description of the ability parameters for each

question of the test. An example would be “To be able to form simple sentences using

the verb ‘to be’”, under the subheading of “Grammatical Structures” (p.612). The authors

also provide a description of the scoring criteria, with scores ranging from 0 to 4, where,

in the case of “Grammatical Structures”, the minimum score corresponds to “cannot use

the structure” and the maximum score to “can use the structure effectively and

automatically” (p.13), for instance.

As Omaggio Hadley (2001) states, when it comes to the evaluation of these tests,

“teachers should develop their own sets of criteria for scoring the various components

of the scales, using descriptions that are appropriate to the expected levels of speaking

ability for students in their course” (p. 443). Moreover, Larson (2000) highlights the fact

that the procedures chosen to score oral achievement and progress tests would depend

on each language program’s specific goals. As the author claims, “If, for example,

teachers stress pronunciation development, they would want to use assessment criteria

that include this aspect of language development” (p.59). In his paper, Larson (2000)

also describes several oral assessment techniques presented in a number of

works in the field of foreign language education.

Some variants for achievement testing of oral skills, in the classroom context, are

presented in Omaggio Hadley (2001). The first is a sample test for individual or paired

interviews where, using a set of conversation cards, the instructor can either have a one-

on-one conversation with the student or have the learner work with a partner, believing

that he or she might feel more secure and comfortable. In this case, the instructor should

not get involved in the conversation since, as the author points out, students should be

able to understand each other after having practiced similar exercises in class before. In

the second variant, the instructors can either interview the students during their office

hours or in class time while the other students are completing other tasks. Here, the

teacher also selects one of the situational-based conversational cards and begins the

interview based on the ideas provided in it.

Regarding the use of the individual interview method for oral achievement

testing, Brown (1985), for instance, describes a modified Oral Interview procedure

called RSVP (Response, Structure, Vocabulary and Pronunciation). The author explains

that, similarly to the oral interview proficiency exams, this procedure takes place in

stages but with significant differences. The researcher underlines the importance of

different stages in the implementation of this test in the classroom. Stage one consists of

routine inquiries and questions that help to put the student at ease and set the

atmosphere of the examination, for instance. Some examples would be “How are you?”

or “What time is it?” (p.484). Stage two elicits students’ responses to questions from

the unit they have been preparing for, and stage three is the one where students are asked

to give some opinion, describe a place or a person or interpret a drawing, while the

examiner tailors the items to the students’ level and manages the time without seeming

to hurry the learner. For his pilot study, Brown provided a grade matrix (3-2-1-0

numeric system), for each of the categories being assessed (Response, Structure,

Vocabulary and Pronunciation), that the examiner could easily remember until the

assessment ended and therefore wouldn’t distract the student by taking notes during

the testing process. According to the author’s explanation, “Response” corresponds to

“fluency” and “represents the flow of thoughts and information that the student can

generate, the capacity to initiate, carry forward and conclude a message without calling

attention to one’s difficulty in speaking” (p.483). “Structure” corresponds to “the

grammatical precision with which the student speaks” (p.483), “Vocabulary” to the

“lexical range and variety present” (p.483) and “Pronunciation” refers to “the quality of

pronunciation achieved” (p.483). The example of the oral assessment procedure

presented in Brown (1985) is also provided in Larson (2000).

A more recent study by Gan & Hamp-Lyons (2008) reports on an example of oral

achievement testing which corresponds to in-class oral performance assessment,

involving a group oral discussion based on an extensive reading program. The authors

argue that “peer group discussion as an oral assessment format has the potential to

provide opportunities for students to demonstrate ‘real-life’ interactional abilities,

relating to each other in spoken interaction” (p.328), and pay special attention to the

social dynamics in an activity designed for oral assessment in a “context of real

classroom conditions” (p. 330). Thus, their study attempted to evaluate the effectiveness

of these oral school-based activities, examining topic initiation, development and

transition in their analysis sections. They emphasize the importance of “topic shift”,

which “involves the introduction of a new matter to the one discussed in the previous

turn”, to the authenticity of an oral group discussion task as oral assessment. Concluding that

the group of ESL student participants had no problem managing those topic shifts, they

support the utilization of group oral assessment as a variant of interactive achievement

testing for oral skills in the classroom.

1.1.2. Students’ perceptions towards face-to-face oral testing

When it comes to examinees’ reactions to face-to-face methods in oral testing, we

can find an investigation on students’ attitudes towards face-to-face oral exams in a

study conducted by Zeidner & Bensoussan (1988), for instance. The authors examine

students’ attitudes towards an oral vs. a written English language test. Here, 170

learners of English as a foreign language, enrolled in an advanced reading course, responded to

an examinee feedback inventory covering diverse topics, such as test content and

procedures, examinees’ self-perceptions of anxiety, effort and expected success.

Students showed a preference for written over oral tests, considering the latter as more

anxiety evoking and less pleasant, valuable and fair, despite considering them more

interesting to take. A more recent study by Paker and Höl (2012) has also reported on

students’ high anxiety levels during a speaking test at the Pamukkale University School

of Foreign Languages. Learners’ answers to questionnaires on their perceptions and

attitudes towards the oral test also revealed that students considered this test to be the

most difficult when compared to exams on other language skills. The authors comment

on the fact that most of the students had never experienced a speaking test before, which

might have affected their negative reactions and high anxiety levels towards it. In this

study, the majority of students also considered the oral test to be the most important

test to encourage the use of the target language and also suggested that the institution

should do more speaking activities, regardless of the students’ level in the foreign

language.

Scott (1986) analyzed the affective reactions of EFL students in two oral test

formats (group and pair) in an achievement-testing situation. Through the analysis of

students’ responses to a questionnaire, the author states that “No significant difference

was found among student reactions to the different test formats” (p. 108) and further

comments on the fact that many students reported feeling nervous due to the testing

situation and that this had a negative impact on their performance. Learners also

revealed their preference for written tests over speaking tests, mentioning that the lack of

time pressure in a written test situation, for instance, made them feel calmer and more

confident in their performance. In addition, the author claims that, in line with previous

studies, such as Savignon (1972), “students were able to separate out their judgments of

the validity of an oral test from its perceived difficulty and their anxiety in the testing

situation” (p.112).

Nevertheless, other studies have shown students’ positive reactions to oral

assessment in a face-to-face context. In Shohamy (1982), for instance, one hundred and

six students of Hebrew as a foreign language at a state university also answered a

questionnaire on their attitudes towards a face-to-face oral interview assessing

students’ proficiency in a communicative context. Results revealed positive comments

and reactions to this type of assessment, with students perceiving oral interviews as a

low anxiety test and considering it helpful and valuable. As the author mentions,

students also perceived the test as a good chance to use the language and considered it

to be a learning opportunity.

We can also observe students’ positive responses to the pilot program

implemented by Brown (1985) at a state university for Spanish classes. Here, an

ACTFL/ETS oral interview testing procedure and guidelines were adapted for classroom

use. The RSVP oral achievement test, explained above, was given at 3- to 4-week intervals,

constituting 60% of the final grade. Although they did not collect controlled data on the

program, students’ responses to course evaluations for this pilot plan were gathered. In

general, students at this university presented an optimistic opinion on aspects such as

the usefulness of the exam in helping them identify their strengths and weaknesses, its fairness, and

its instructional value. The author concludes that this type of test does not need to be

impossibly time-consuming and that “if spoken skill is truly the priority objective in your

class, you will find time for oral testing” (p.486). However, some problems and practices

associated with face-to-face oral exams have also been observed and are discussed

below.

1.1.3. Difficulties in face-to-face oral testing

Aspects such as lack of time and large classes have been pointed out as

disadvantages in the administration of oral testing (Joo 2007; Larson 2000). Joo (2007),

for instance, indicates the aspect of “practicality” as one of the problems in the

implementation of the face-to-face interview. The author mentions the fact that

“numerous rooms, trained teachers, and a great deal of time would need to be spent to

properly interview individual students” (p.173).

Being aware of the difficulties in oral testing, Omaggio Hadley (2001) tries to

offer some solutions, such as administering 10 to 15-minute oral exams in the

instructor’s office hours or even in the classroom, with students being tested

individually or in small groups. Nevertheless, these arrangements are still considered to

be significantly time consuming (Larson 2000).

As Omaggio Hadley (2001) explains, a similar format of the oral proficiency

interview called SOPI (Simulated Oral Proficiency Interview) has been created

particularly due to the impossibility of individually interviewing large numbers of people

in a short time frame. As the author describes, “The SOPI consists of a master tape with

test directions and questions, a printed booklet with pictures and other materials used

in responding, and a cassette for recording the examinee’s responses” (p.438). Larson

(2000) also mentions a former implementation of oral achievement tests, similar to the

SOPI, administered via audio cassette tapes, in order for the oral testing procedure to

become less time-consuming for the instructors. Yet, as the author claims, “due to the

linear nature of cassette recordings, scoring the test tapes still required a substantial

amount of teacher time” (p. 55).

1.1.4. Previous Research on Technology Assisted Oral Testing

As Tognozzi and Truong (2009) explain, “As the need for assessment of speaking

on a less formal level has evolved, other types of tests have been developed, notably a

number of tests that are computer assisted” (p.2). Previous research on the application

of technology to classroom oral assessment in language instruction has shown several

examples of students’ reactions to technology-mediated tests at various universities. Some of these

studies also have taken into account students’ perspectives regarding different types of

computer-based versus face-to-face oral assessment. The current section presents a

literature review mainly focusing on studies where students’ feelings and attitudes

towards computer-based oral assessment tests in comparison to face-to-face formats

were taken into account, since it constitutes a relevant aspect for the current research

work.

Previous research on learners’ improvement in terms of oral performance when

taking computer-based oral assessment activities throughout the instruction period will

also be considered in the following subsection.

1.1.4.1. Students’ improvement in oral performance through computerized oral


activities

In the literature reviewed, little research was found on studies reporting on

measurements of students’ improvement in terms of speaking abilities, when taking oral

assessment activities via the computer, in a classroom context, throughout the

instruction period. Some reports on direct measurements of students’ gains in terms of

oral production can be encountered, for instance, in studies concerned with Computer-

mediated Communication (CMC) involving voice and text chat tools (Satar & Özdener

2008; Payne & Whitney 2002). However, Satar & Özdener (2008) also refer to the fact that

“Despite the large number of studies on text-based CMC, the use of the voice has yet to

be thoroughly researched” (p. 598). The authors examined the use of text and voice chat

synchronous computer-mediated communication tools among high school students in

Turkey, with learners of English as a foreign language. This research work included an

investigation on the development of speaking skills in 3 different groups corresponding

to a text chat, a voice chat and a control group. Students in the Experimental groups were

engaged in four chat sessions, taken in dyads, over four weeks. Results based on pre and

post speaking tests indicated an increase in the Experimental groups’ oral abilities, in

comparison to the control group.

The improvement of L2 oral proficiency through synchronous CMC has also been

investigated by other authors such as Payne & Whitney (2002). The researchers

included an examination of whether L2 oral proficiency could be indirectly developed

through chatroom interaction in the target language. The authors state that their

research is based on Levelt’s model of language production, explaining that, based on

this model, “synchronous online conferencing in a second language should develop the

same cognitive mechanisms that are needed to produce the target language in face-to-

face L2 conversation” (p.14). The chatroom and face-to-face interactions consisted of

groups of 4 to 6 students. The study examined two control groups, receiving face-

to-face instruction, and two experimental groups, participating in two face-to-face and

two online class periods per week, during a 15-week semester. Students’ development

of oral proficiency was measured through the administration of a speaking pre and

posttest, and the authors concluded that the mean gain score for the experimental condition showed

an advantage over the control condition.

A study developed by Tognozzi & Truong (2009) evaluates the integration of an

oral assessment builder, which has a similar format to the COPI, into the traditional

curriculum and includes an investigation of students’ oral performance. They used

WIMBA, “a collaborative software program of online communications tools that allows

students and teachers to communicate through recorded messages and voice email. It

provides an opportunity to use a technologically enhanced oral proficiency exam and

practice exercises” (p.1). The researchers had a group of students using WIMBA, who

had to regularly complete WIMBA oral assignments as part of their homework, receiving

web-based oral feedback from the instructors, and a control group who would do the

same activities in class. As the authors mentioned, the WIMBA group produced “longer

and more accurate speech samples” (p. 9), in comparison to the control group, in the

examination of the linguistic output, yielded by both groups, in a final oral exam.

Adair-Hauck, Willingham-McLain & Youngs (1999) also conducted a study

assessing the integration of Technology-enhanced language learning (TELL) into the

language curriculum and report on college-level French learners’ development in

several language skills, including speaking abilities. The authors investigate a treatment

group, which replaced a fourth class period per week with out-of-class multimedia activities (not including

speaking), and a control group, which prepared the same multimedia

components in class. As the authors explain, “both groups were assigned identical

writing homework and follow-up speaking tasks” and “For both groups, the speaking

tasks were evaluated in class” (p.277). Findings in this study reveal that, although the

treatment group performed better on reading and writing measures, both groups

performed equally well in terms of listening and oral skills. Regarding students’

improvements in oral production, the authors report that “there was no significant

difference between the groups in gain made over the semester, computed as posttest

score minus pretest score” (p.284).

1.1.4.2. Students’ perceptions

Students’ feelings and attitudes towards the use of technology in oral assessment

have been investigated in a number of studies. In Tognozzi & Truong’s (2009) research

work, learners in the Experimental group, taking computerized oral activities as

homework assignments, were asked to complete a pre- and post-test survey at the

beginning and at the end of the quarter. This questionnaire included questions regarding

students’ comfort level with technology, feelings towards being assessed through

technology, and perceptions of the fairness of assessment through technology. The

results reported indicated that the majority of students believed that the Web-based

program used in the study (WIMBA) would help their speaking skills. Regarding the

opinions on the fairness in testing their oral skills, the students’ belief in the accurate

representation of their speaking abilities decreased in the post-survey, although the

majority expressed “no opinion”. In addition, the majority of students indicated that they

enjoyed using technology as part of instruction.

Early & Swanson (2008) also discuss students’ perceptions in a comparison

between a traditional in-class oral language assessment (OLA) and a technology-

enhanced type of oral assessment, among students enrolled in Spanish and Japanese

courses. The authors explain that, for the in-class OLA, students were assessed by the

instructors in a different classroom and, for the digital voice-recorded test, students’

language proficiency was evaluated through a web-based classroom system. The

researchers mention students’ reports on “a lack of satisfaction with the traditional

procedure of in-class OLA when peers were present, because of a heightened sense of

the affective filter” (p.44) and refer to the fact that “using technology to assess oral skills

appears to lower students’ levels of self-consciousness and nervousness” (p.46).

An early example of a technology-mediated oral test can be found in Stansfield et

al. (1990) and is called the Portuguese Speaking Test (PST). According to the authors,

this test can also be referred to as a Simulated Oral Proficiency Interview (SOPI) and was

developed in response to the need for trained OPI interviewers for less commonly taught

languages. As it is explained in Stansfield et al. (1990), the PST consists of a tape-based

test that uses recorded and printed stimuli and recordings of test takers’ responses. This

research work on the development and validation of the PST included an analysis of

students’ reactions towards the two oral testing formats. The majority of the examinees

corresponded to undergraduate students who had completed at least one year of

Portuguese at different universities in the US. The authors reported a tendency for

students to be more nervous towards the PST and also perceive this test as more difficult

than the face-to-face interview, mentioning aspects such as “the unfamiliar mode of

testing and perceived ‘unnaturalness’ of speaking to a machine” (p.647) as causes for

these results. In addition, no students in this study thought that there were unfair

questions in the face-to-face interview and only a low percentage thought so for the

tape-based test. The researchers also mention students’ preference for the live

interview, despite their positive responses “about the content, technical quality and

ability of the taped test to probe their speaking ability” (p. 647).

As explained in Tognozzi & Truong (2009), a newer generation of Oral

Proficiency tests is called the Computerized Oral Proficiency Instrument (COPI). Kenyon &

Malabonga (2001) analyzed examinees’ reactions towards this new Computerized Oral

Proficiency Instrument (COPI) and other oral proficiency assessments such as the SOPI and OPI.

The purpose of this study was to test students' reactions as to whether the COPI was an

improvement over the SOPI, in terms of reducing testing time for a large number of

students and different skill levels and, if so, to proceed with further research in applying

computer technology to oral proficiency assessment. Thus, the authors’ focus was

mainly on the comparison between the two technology-mediated tests. A total of 55

graduate and undergraduate students enrolled in Spanish, Arabic and Chinese language

courses at their university completed a questionnaire on their attitudes towards and

perceptions of the tests, regarding aspects such as the level of difficulty and fairness of

the test, as well as the levels of test-taking anxiety. The statistical results revealed a generally

positive reaction to the COPI in comparison to the SOPI, and students considered having

control over when to start and stop speaking to be one of the best features of the COPI.

Regarding students’ reactions in a comparison between the two technology-mediated

tests (SOPI and COPI) and the traditional OPI, students tended to rate the OPI more

favorably in terms of fairness, and no statistically significant differences were found among

students' answers to the three types of oral exams in terms of difficulty and nervousness.

Students’ perceptions between computer-based and more traditional oral testing

methods were also examined in Jeong (2003), where 144 Korean students of English as

a Foreign Language were administered a face-to-face English interview and a multimedia-

enhanced English oral proficiency interview, using the d-VOCI (digital Video Oral

Communication Instrument) computer program. As the author explains, the d-VOCI

was developed by the Language Acquisition

Resource Center at San Diego State University, employing "authentic tasks to elicit

speech ratable on the ACTFL (American Council on the Teaching of Foreign Languages)

proficiency guidelines” (p.2). As part of this research project, test takers completed an

Electronic Literacy Questionnaire including information on students’ electronic literacy

and attitudes towards d-VOCI, measuring aspects such as “personal preference, difficulty

(both content and procedure), and usefulness in comparison to face-to-face OPI” (p.58)

and conveyed strong positive reactions towards the VOCI in general. Nevertheless, as

the author points out, "about 70% of the subjects reported a preference for the face-to-

face interview over the d-VOCI" (p.74), and students seemed to value the human interaction with

the interviewers. This general preference for a face-to-face oral exam format is also

found among test-takers in other studies including a comparison between a computer-

based and a traditional mode of oral testing, despite positive reactions towards

technology-mediated tests (Joo 2007; Öztekin 2011; Lowe & Yu 2009). Joo (2007), for

instance, also examines students’ attitudes toward a self-administered computer-based

test (COT) and a Face-to-Face interview (FTFI) in an English conversation class, at a

Korean university. In this study, 43 students took both testing formats as midterms and

responded to an attitude questionnaire, which revealed significantly more positive

reactions towards the COT, regarding aspects such as “preparation time”, “nervousness”,

“tiredness” and “fairness”. Nevertheless, the author points out that more students

reported their preference toward taking the FTFI, considering this type of test to be

“more communicative and authentically interactive” (p.183). In Lowe & Yu (2009), the

majority of students, learners of English at a Chinese university, also considered the face-

to-face testing format included in this study to be “more ‘authentic’ modelling of ‘real

life’ use of spoken language” (p. 35), revealing their contentment with their live

interaction with the oral exam evaluator. In this research work, the authors observed

students’ reactions towards the traditional face-to-face test of spoken English and the

computer-assisted speaking test, called College English Oral Test system (CEOTS),

developed by the Shanghai Foreign Language Education Press (SFLEP) and the

University of Science and Technology of China (USTC). Here, higher levels of

nervousness were reported towards the computer-based test for reasons such as

“worrying about making mistakes” (due to a feeling that they could not be corrected

once recorded), “lack of reaction from a computer” and “a fixed time given for a

response”. The author also explains that, although the face-to-face test was seen as a

better test of one’s speaking ability, the computer-assisted exam was considered to be

more fair in terms of grading and less “open to examiner bias”.

The study conducted by Öztekin (2011) in Turkey, at Uludag University School of

Foreign Languages, also includes an investigation of students’ perceptions of a Face-to-

face speaking assessment (FTFsa) and a Computer-assisted speaking assessment

(CASA) in four groups of two different proficiency levels, corresponding to the pre-

intermediate and intermediate levels. The researcher explored students' responses to a test

perceptions questionnaire (one for each testing mode), a questionnaire focusing on

speaking anxiety and speaking test anxiety, and another one on computer attitudes. The

perceptions questionnaire, partially adapted from Kenyon & Malabonga (2001),

examines aspects related to different categories such as difficulty, comprehensiveness

and quality of the tests, and it also includes a subscale assessing students’ test-mode-

related anxiety levels. As the author explains, the speaking anxiety and speaking test

anxiety questionnaire is adapted from the Test Influence Inventory (TII) by Fujii (1993)

and the Foreign Language Classroom Anxiety Scale (FLCAS) by Horwitz (1986). The

researcher concludes that “the test takers at both levels clearly preferred the face-to-

face mode of speaking assessment over a computerized version for various reasons” (p.

135), such as considering this type of test as more conversational, interactional, personal

and fair.

The current study shares several similarities with Öztekin's (2011) research, as it

also investigates students’ perceptions towards a face-to-face and a computer-based

oral achievement exam and includes a further investigation of students’ anxiety towards

the two different modes of oral testing. As will be mentioned in the Methodology

chapter, the students’ perceptions questionnaire used in the current study is also based

on the ones found in Kenyon & Malabonga (2001), among other authors in the literature

reviewed, and also uses a questionnaire adapted from the FLCAS by Horwitz (1986). In

the case of the current study, this type of questionnaire is used in order to uncover

anxiety factors that are more related to one type of test in comparison to the other.

Another study focusing on students' anxiety in the context of a face-to-face and a

computer-based oral exam, conducted by Sayin (2015), was found in the reviewed literature

during the final stages of the current dissertation. Here, examinees

also responded to a questionnaire regarding the causes behind students’ feelings of

anxiety towards the two types of oral tests. The exams corresponded to a face-to-face

midterm with their lecturer and a computer-based final oral exam, which consisted of

model questions from TOEFL (Test of English as a Foreign Language) speaking sessions.

Results revealed a “high frequency of oral exam anxiety” (p. 116). In terms of students’

answers to questions comparing computer-based with face-to-face oral exams, the

author specifically looked at the learners’ responses to the following items in the

questionnaire (5 “I find it easier to have a computer-based oral exam rather than

speaking with the instructor”, 11 “I prefer having computer-based oral exams” and 6 “I

feel better when I have face-to-face oral exam rather than computer-based”). Results in

terms of frequency showed that 12 people responded affirmatively to items 5 and 11, as

opposed to 16 people on item 6. These values correspond to 43% (items 5 and 11) vs.

57.2% (item 6). The author also explains that, “The overall results do not precisely

indicate that students reject having computer-based exams; however, while the oral

exam anxiety can be detected from the results, the preference of students towards face-

to-face exams and computer-based is not very clear since the frequencies and percentile

rates are very close” (p.116). In fact, results of a binomial test show that there is no

significant distinction between the percentages. 1

1.1.4.3. Advantages

When considering some of the main advantages in the application of technology

in oral assessment, Tognozzi & Truong (2009) stress the fact that these computer-based

oral practices allow for individual feedback at a time that is convenient for the instructor,

which is usually not possible in the classroom. The authors also consider it a positive aspect of

WIMBA that students can hear their recordings before submitting them to the

teacher. This aspect can help them improve their speaking abilities when receiving the

instructors’ feedback. Lee (2005) also points out that the use of the computerized

Blackboard program “facilitated the interaction between the students and the instructor

as the latter systematically guided, assisted, and provided constructive feedback to the

students” (p.151).

1
The proportion of people preferring computer-based exams (.43) was lower than that of
those preferring the face-to-face oral exam format (.57), p = 0.09.

Larson (2000) mentions the quality of speech recordings when using the

computer, referring to the fact that these are superior to the tape-based ones. The author

also discusses "the benefit of test uniformity" in computer-based tests, in the sense

that testing is more homogeneous in terms of the questions students receive and the time they

have to respond.

Early & Swanson (2008) also list some advantages of Digital voice recording for

oral assessment, such as the fact that it “leaves digital artifact for indication of student

progress, accreditation data, and increased reliability of assessment” (p. 45). According

to the authors, “archived recordings can be played multiple times to calibrate scoring

criteria and assure equity in grading by different instructors” (p.46). Another positive

aspect has to do with the fact that “digital voice recordings offer instructors more

flexibility as to when and where they evaluate student performances” (Early & Swanson

2008: 47). This flexibility in the evaluation of learners’ responses to the computerizes

oral exam has also been pointed out by Larson (2000) as an important advantage of this

type of oral testing format.

According to the literature reviewed on students' opinions towards technology-

assisted oral testing, the learners' reports on the advantages of computer-based

oral assessment methods (either in final oral exams or in computer-based oral activities

taken during the instruction period) show that students often feel that they have more

control over when to start and stop speaking and believe that these tools help them

enhance their speaking abilities (Kenyon & Malabonga 2001; Lee 2005; Tognozzi &

Truong 2009; Early & Swanson 2008).

1.1.4.4. Limitations

Nevertheless, a number of limitations, related to computer technology as a

language-testing tool, have also been identified and represent some challenges in the

incorporation of this oral evaluation method.

Students’ answers to previous studies’ surveys contribute to an understanding

of some disadvantages of the computerized oral testing method. In Kenyon &

Malabonga (2001), for instance, the subgroup of students taking the face-to-face OPI

reported that these interviews “appeared to them to be a better measure of real-life

speaking skills” (p.60). Other studies such have also concluded that students found the

face-to-face format to better reflect their oral proficiency/speaking ability or aspects of

real-life communication and interaction (Jeong 2003; Joo 2007; Öztekin 2011; Lowe &

Yu 2009; Kenyon & Malabonga 2001). As Öztekin (2011) states, "test takers tended to

value the existence of a live interlocutor listening to them” (p.77) and “The majority

favored the FTFsa over the CASA when they were asked about their ability to show one’s

strengths and weaknesses in speaking English" (p.121). Furthermore, as Tognozzi &

Truong (2009) mention in their study, "a fair percentage of students did not feel that

using WIMBA was a fair way to test their oral skills” (p.10). With regard to the students’

different opinions on whether the interactional nature of a conversation could or could

not be captured in a technology-mediated assessment, Kenyon & Malabonga (2001)

stated that “if criteria used to evaluate performances do not differentiate on examinees’

interactional abilities, it may not be necessary to use the labor-intensive face-to-face

interview for the assessment, no matter how examinees feel about the nature of the

assessment” (p.81).

The costs involved in the incorporation of a computer-based assessment method

have also been pointed out as an inconvenience inherent to this type of oral testing. As

Tognozzi & Truong (2009) mention, there are other expenses implicated in WIMBA,

such as “to write and record questions and directions for students”, “write story lines

and draw storyboards” and “write directions for instructors on how to use, conduct

assessments and grade them” (p.10), for instance. The authors thus call attention to the

fact that this aspect should be taken into account when considering the execution and

implementation of analogous programs in the foreign language curriculum.

As mentioned in the introduction section, the present research analyzes students’

feelings, and in particular students’ levels of anxiety, towards the new computer-based

final oral exam vis-à-vis the previous face-to-face interview in a Portuguese Language

Program. A brief overview of previous research on anxiety in foreign language learning,

mainly focusing on studies measuring students’ anxiety levels towards oral exams, is

thus presented in the following section.

1.2. Previous research on anxiety in foreign language learning and towards oral

tests

In order to adequately define foreign language anxiety in second language

research, a situation-specific anxiety construct has been proposed by Horwitz, Horwitz

and Cope (1986; see also Horwitz 2001). Here, anxiety is identified as a "conceptually

distinct variable in foreign language learning” (Horwitz, Horwitz and Cope 1986: 125).

When describing Foreign Language Anxiety, the authors explain that although the

combination of the three constructs of communication apprehension, test anxiety

and fear of negative evaluation (explained in further detail below) is useful in the definition

of foreign language anxiety, they propose that foreign language anxiety is not solely a

combination of these fears in foreign language learning. The authors thus describe this

construct as “a distinct complex of self-perceptions, beliefs, feelings, and behaviors

related to classroom language learning arising from the uniqueness of the language

learning process” (p.128).

As Zheng (2008) states, "The complexity of anxiety is also reflected in the means

of its measurement” (p.3). Some of the methods for measuring anxiety, mainly in the

context of oral exams, are thus provided in this section.

1.2.1. Measuring anxiety in foreign language learning and oral tests

As explained in Al-Shboul et al. (2013), in order to measure the amount of anxiety

experienced by students in foreign language learning, Horwitz, Horwitz and Cope (1986)

have developed a questionnaire called the Foreign Language Classroom Anxiety Scale

(FLCAS). This instrument comprises 33 items on a 5-point Likert scale ranging from "strongly agree" to "strongly

disagree" (Al-Shboul et al. 2013; Horwitz, Horwitz and Cope

1986). Some examples of these items correspond to statements such as “it frightens me

when I don’t understand what the teacher is saying” or “I don’t worry about making

mistakes in language class”. As Horwitz, Horwitz and Cope (1986) state “FLCAS affords

an opportunity to examine the scope and severity of foreign language anxiety” (p.129).

As also mentioned by Al-Shboul et al. (2013), this instrument has been adopted by

several researchers as a source to measure foreign language anxiety (Park and Lee 2005;

Zulkifli 2007; Phillips 1992; Ganschow & Sparks 1996; Wilson 2006; Liu & Jackson 2008;

Hewitt and Stephenson 2011; Abu-Rabia, Peleg & Shakkour 2014).
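
To illustrate how responses to such a Likert-scale instrument are typically converted into a numerical anxiety score, a minimal sketch in Python is provided below. It assumes the common convention of coding each item from 1 ("strongly disagree") to 5 ("strongly agree") and reverse-coding positively worded items (such as "I don't worry about making mistakes in language class") before summing; the item numbers shown are hypothetical, and the sketch is illustrative rather than part of the FLCAS itself.

# Illustrative sketch only: scoring FLCAS-style Likert responses.
# Assumes items are coded 1-5 and that positively worded items are
# reverse-coded before summing (a common convention, not specified above).

REVERSE_CODED = {2, 5, 8}  # hypothetical indices of positively worded items

def total_anxiety_score(responses):
    """responses: dict mapping item number -> rating on a 1-5 scale."""
    total = 0
    for item, rating in responses.items():
        if not 1 <= rating <= 5:
            raise ValueError(f"Item {item}: rating {rating} is outside the 1-5 scale")
        # Reverse-code positively worded items so that a higher total always means more anxiety.
        total += (6 - rating) if item in REVERSE_CODED else rating
    return total

# Example with a 10-item excerpt; a full administration would include all 33 items.
sample = {1: 4, 2: 2, 3: 5, 4: 3, 5: 1, 6: 4, 7: 2, 8: 2, 9: 5, 10: 3}
print(total_anxiety_score(sample))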

Furthermore, a number of authors (Kenyon & Malabonga 2001; Shohamy 1982;

Scott 1986; Paker and Höl 2012; Zeidner & Bensoussan 1988; Öztekin 2011; Joo 2007;

Sayin 2015; Jeong 2003; Stansfield et al. 1990; Lowe & Yu 2009; Early &

Swanson 2008) have also used affect questionnaires in their research works in order to

investigate students’ reactions towards oral tests. Some of these studies have

particularly examined, or included in their research, students’ anxiety levels towards

these types of exams. The rating scales on these questionnaires usually

require ratings on a five-point or seven-point Likert-type scale, and the questionnaires include both closed-

ended and open-ended questions. We can observe a few examples of the

questionnaires presented in these studies. Zeidner & Bensoussan (1988), for instance,

presented students with a series of semantic-differential scales, instructing them to rate

different types of tests (including the oral), according to rating scales such as

“pleasant/unpleasant”, “light/heavy”, “non-threating/threatening”, “non-anxiety

evoking/anxiety evoking”, “no cause to worry/cause to worry”, to name a few. In another

part of the questionnaire, students were also asked to list the advantages and

disadvantages of oral tests and rate the degree of anxiety elicited by the oral test along

a seven-point scale from “very anxiety-evoking” to “not at all anxiety-evoking”. Scott

(1986) administered an affective survey to students after they completed group and pair

oral achievement tests. The researcher used a five-point Likert-like scale on items such

as “I felt nervous before the test” or “I thought the oral test was too difficult”. Here, some

items also included space for students to explain their ratings. Shohamy (1982) has also

asked students to answer a questionnaire upon completion of oral tests where students

indicated their agreement with statements such as “the testing experience was:”

“comfortable”, “fun”, “difficult”, “challenging”, “interesting”. A second part of the survey

also required students to comment on how they felt about the testing experience.

Öztekin (2011) includes a “test-mode related anxiety subscale” in a questionnaire

regarding the test takers’ perceptions towards a face-t-face and a computer-based type

of oral achievement testing administered to learners of English at pre-intermediate and

intermediate levels. This subscale includes items related to feeling "tense and anxious"

before, during and after the speaking test; being "afraid of making mistakes during the

speaking test"; or "feeling relieved to see/not to see someone listening", for instance. Sayin

(2015) has also administered a questionnaire specifically on students’ anxiety towards

oral exams in general and concerning a face-to-face and a computer-based oral exam in

particular. As the author explains, this questionnaire is based on Sarason’s (1980) Test

Anxiety Scale (TAS). Some of the statements included in this survey are quite similar to

the ones found in the FLCAS, such as "Even if I'm well prepared I feel anxious during oral

exams”. Items that are more specific to the different modes of oral testing include “I fell

nervous when I start speaking in front of the instructor” or “I feel nervous when I see the

time running on the screen”, for example.

In addition to the administration of this type of questionnaire, Horwitz & Yan

(2008) also claim that “one promising means to better understand the factors involved

in anxiety in language learning would be to interview learners on their feelings about

language learning” (p.153).

1.2.2. Factors involved in anxiety in foreign language learning and oral tests

As previously mentioned, the primary sources of language anxiety, as explained

by Horwitz, Horwitz and Cope (1986) are 1) communication apprehension, 2) test

anxiety and 3) fear of negative evaluation. “Communication apprehension” relates to the

anxiety induced in interpersonal situations. Here speech constitutes the highest stress

factor, especially when facing a large audience. This relates to the fact that

communication in an L2 requires taking chances and the learner is not familiar with

some of the linguistic rules (Abu-Rabia, Peleg & Shakkour 2014). In addition, difficulty

in listening to or learning a spoken message also characterizes this dimension of L2

anxiety (Horwitz, Horwitz and Cope 1986; Abu-Rabia, Peleg & Shakkour 2014).

According to Horwitz, Horwitz and Cope (1986), “Test anxiety” refers to “a type of

performance anxiety stemming from a fear of failure” (p.127). The authors explain that,

since tests and quizzes are frequent in a foreign language class, test-anxious students

are likely to experience great difficulties in these contexts. Furthermore, as Abu-Rabia

et al. (2014) claim, test anxiety especially occurs when the questions on a test are

ambivalent or have been less well studied. The third element of language anxiety,

referred to in Horwitz, Horwitz and Cope (1986), is "fear of negative evaluation". The

authors explain that this aspect is not limited to test-taking situations but it might occur

in an evaluative situation, such as speaking in a foreign language class, which requires

continual evaluation by the teacher and “real or imagined” evaluations of peers.

Several scholars have also carried out studies applying the questionnaire for

measuring L2 anxiety constructed by Horwitz et al. (1986), named the FLCAS, or

questionnaires based on this anxiety measure, finding triggers for foreign language

anxiety (Phillips 1992; Ganschow & Sparks 1996; Wilson 2006; Zulkifli 2007; Liu &

Jackson 2008; Hewitt and Stephenson 2011; Yahya 2013; Abu-Rabia & Shakkour 2013).

These authors have also identified factors related to fear of communication, test anxiety,

and fears of negative evaluation contributing to foreign language anxiety. Liu & Jackson

(2008), for instance, found that more than one third of the participants in their study,

which consisted of 547 Chinese learners of English as a foreign language, felt anxious in

their language classrooms for fear of negative evaluation and for being apprehensive

of speech communication and tests. The results in their analysis also revealed that most

of the students did not like to risk using English in the class and that learners’

unwillingness to communicate was significantly positively correlated with their FL

anxiety. When investigating the factors leading to speaking anxiety among students

speaking in English in a speech communication course, Yahya (2013) emphasizes the

fact that language anxiety in these EFL learners is aroused mainly by factors of fear of

negative evaluation, consisting of negative judgments by others, leaving unfavorable

impressions on others, making verbal, pronunciation, grammar or spelling mistakes, and

disapproval by others. The author concludes that factors such as unpreparedness for

class, communication apprehension with teachers, teacher’s questions, students’

perceptions of low ability in relation to their peers, corrections in classroom

environment, tests and negative attitudes towards the English classes constitute factors

causing language anxiety.

Other scholars have also acknowledged other causes of language anxiety. Young

(1991), for instance, has pointed to sources such as "1) personal and interpersonal

anxieties; 2) learner beliefs about language learning; 3) instructor beliefs about

language teaching; 4) instructor-learner interactions; 5) classroom procedures; and 6)

language testing” (p. 427), in a literature review on anxiety in language learning. Foreign

language learners have also indicated aspects such as “time pressure” in a test situation

as a cause of nervousness and anxiety (Scott 1986; Ohata 2005). Furthermore, variables

such as difficulty level of foreign language classes, personal perceptions of language

aptitude, personality variables such as fear of public speaking, and stressful classroom

experiences have also been identified as possible sources of anxiety in language learning

(Price 1991 as cited in Zheng 2008).

Factors associated with foreign language anxiety from the learners’ perspective

have also been investigated by Horwitz & Yan (2008). The authors’ research is based on

lengthy interviews with learners of English in a Chinese setting. Variables such as

comparison with peers, learning strategies, language learning interest and motivation

were considered to be the most immediate sources of anxiety in language learning for

these students. The authors also found indirect sources of language anxiety among these

students, where “we might see the origins of language anxiety” (p.174), in variables such

as “test types”, “teacher characteristics”, “class arrangement”, and “parental influence”.

Nevertheless, as the authors point out, the distinctive sociocultural characteristics of

English learning in China need to be taken into account when interpreting the findings in this

study, since a different picture might be drawn in a different setting. A few authors

focusing on the investigation of anxiety in language learning have also discussed a

relationship between L1 anxiety (or poor L1 language skills) and L2 anxiety. In a study

examining the relation between linguistic skills, personality types, and language anxiety

in Hebrew students of English as a foreign language, Abu-Rabia et al. (2014) found that

students with good language skills in their L1 and L2 exhibited a low level of anxiety

with respect to both languages. The authors’ findings also indicate that the L2 learners

who experienced anxiety with respect to their L1 felt a comparable level of anxiety with

respect to their L2. Other authors, such as Ganschow & Sparks (1996), for instance, have

also investigated the relationship between the speakers’ L1 language skills and their L2

anxiety level in a population of high school women, enrolled in first-year foreign

language classes. Findings in the authors’ study indicate that low-anxious students

perform better than high-anxious students on their native language in the phonological

and orthographic domain and on their grades in the foreign language at the end of the

year.

Although, in the literature reviewed, research in this area mostly focuses on

causes of students’ anxiety in the general context of language learning, a few authors

also discuss students’ anxiety more in depth in the context of oral language tests (Philips

1992; Öztekin 2011; Sayin 2015). As mentioned before, this aspect also constitutes one

of the focal points of the current research. Öztekin (2011), for instance, includes a "Test-mode-

related anxiety subscale” in an affect questionnaire on students’ perceptions regarding

a face-to-face speaking assessment (FTFsa) and a computer-assisted speaking

assessment (CASA). The author states that “Test takers were found to feel more anxious

before, during and after the CASA than the FTFsa in general.” (p.117). Furthermore, the

researcher reports on students’ responses revealing a number of anxiety factors such as

“speaking to a computer”, since “the presence of someone listening to them instead of

talking to a computer relieved them” (p.117), or being afraid of making mistakes when

taking the computer-assisted oral test. Sayin (2015) reports on a high frequency of

students’ anxiety towards oral exams. Moreover, a relatively high percentage of students

(over 50%) agreed with often or always “feeling heart beating very fast during oral

exams”; “feeling anxious during oral exams even when well prepared”; “forgetting things

because of stress when talking face-to-face to the instructor”; “finding it difficult to

concentrate during computer-based oral exams because of the testing environment

(computer lab) and presence of other students”; and “feeling stressed when seeing the

time running on the screen”, for example.

In a study examining the effects of students' anxiety on oral performance in

general, Phillips (1992), for instance, reports on highly anxious students' negative

attitudes towards the oral exam, such as "going blank", "feeling frustrated at not being

able to say what they knew”, “being distracted” and “feeling panicky”.

As Horwitz, Horwitz and Cope (1986) point out, “oral tests have the potential of

provoking both test- and oral communication anxiety simultaneously in susceptible

students” (p. 128). Several researchers in the field of language anxiety have also

investigated the relationship between students’ anxiety and performance on oral tests.

1.2.3. Anxiety and oral performance

As Hewitt & Stephenson (2012) point out, “In general, investigators have found a

negative relationship between anxiety and oral performance: The higher the levels of

anxiety experienced by learners, the poorer their speaking skills tend to be” (p.172).

Scott (1986), for example, investigated the effect of learners' self-reported anxiety

on performance in a study on students’ affective reactions to EFL group and pair oral

tests. The author found a negative correlation on both test formats, observing a similar

inverse relationship between students’ anxiety and performance on both types of tests.

A more recent research work developed by Wilson (2006) has also examined correlations

between oral test scores and students' anxiety (using the FLCAS) with Spanish learners

of English as an L2. After analyzing the scores, the author found a negative association

between foreign language anxiety and overall grades on an oral test.
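
As an illustration of the type of correlation analysis reported in these studies, the following minimal sketch in Python computes a Pearson correlation between hypothetical FLCAS totals and oral exam grades. The numbers are invented purely for demonstration and are not drawn from Wilson (2006) or any other study cited in this chapter.

# Illustrative sketch only: Pearson correlation between anxiety scores and oral grades.
from scipy.stats import pearsonr

flcas_totals = [72, 95, 110, 84, 128, 66, 101, 90, 119, 78]  # hypothetical FLCAS sums
oral_grades = [88, 80, 71, 84, 62, 91, 75, 79, 68, 86]       # hypothetical oral exam grades (0-100)

r, p_value = pearsonr(flcas_totals, oral_grades)
# A negative r would indicate that higher anxiety scores go with lower oral grades.
print(f"r = {r:.2f}, p = {p_value:.3f}")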

Phillips (1992) also investigated the effect of anxiety on students' oral exam

performance, involving Anglophone learners of French. The author’s findings revealed a

moderate and negative association between anxiety and performance during an oral

exam. This influential study in the field of language anxiety was replicated in Hewitt &

Stephenson (2012). The authors measured the internal reliability of all the instruments

used by Phillips (1992) and found significant negative associations between

language anxiety and performance on an oral exam, with Spanish-speaking students

of English. Significant negative correlations between anxiety and oral performance were

also found in Park and Lee's (2005) study with Korean learners of English, where the

higher the levels of students’ anxiety were, the lower scores they gained on their oral

performance.

Nevertheless, Shohamy’s (1982) research on affective considerations in language

testing with English students of Hebrew as a second language, for example, did not find

evidence of a strong relationship between students’ attitude and achievement on oral

tests. The author mentions that, in the case of this study, the fact that students reported

a general positive attitude towards oral assessment did not necessarily indicate a high

level of performance on the oral test. Results of a study developed by Young (1986) also

did not indicate a significant correlation between speakers’ anxiety measures and their

oral performance on an unofficial administration of the OPI. The researcher comments

on the fact that anxiety did not have such a strong influence on oral performance

possibly due to the fact that subjects knew those OPI results would not have negative

repercussions for them.

Furthermore, in the examination of the relationship between speaking anxiety and

speaking test anxiety towards a computer-assisted and a face-to-face oral achievement

assessment and examinees' test scores, Öztekin (2011) concludes that there is no

correlation between speaking anxiety and speaking test scores at a pre-intermediate

level. The author also reports on a significant negative correlation between the

computer-assisted test scores and two types of anxiety analyzed in the study (speaking

anxiety and speaking test anxiety) at the intermediate level.

While, in the literature reviewed, research on anxiety in foreign language

learning has mostly shown anxiety's potential to negatively influence

second language learning, it is worthwhile to take into account the fact that anxiety does not

always have to have a negative effect. Ortega (2009), for instance, mentions what is

called “facilitating anxiety” as in the case of perfectionism (one of the causes of anxiety),

highlighting the fact that “some degree of tension can help people invest extra effort and

push themselves to perform better” (p.202).

1.3. Summary of literature review

To summarize, this chapter has reviewed literature on oral testing in foreign

language teaching, which starts with a description of two types of oral assessment

modes, namely the Oral Proficiency Interview (OPI) and oral achievement tests.

The OPI is described as an interactive and adaptive test that assesses the

speakers’ ability to use language in real-life situations and is organized in order to obtain

a speech sample rated according to four major proficiency levels, ranging from “Novice”

to “Superior”, as described in the ACTFL Proficiency Guidelines. These interviews have

been considered appropriate instruments to measure students’ speaking abilities at the

end of a sequence of courses, study abroad experiences, placement tests or job

interviews, for example (Liskin-Gasparro 1983; Omaggio Hadley 2000; Norris & Pfeiffer

2003). Oral achievement tests, on the other hand, are tied to a particular curriculum and

are meant to test what was taught during the instruction period. These tests have thus

been considered a better measure in the evaluation of students’ L2 oral skills in a

classroom context, assessing the learners’ acquisition of specific course contents (Liskin-

Gasparro 1983; Brown 1985; Omaggio Hadley 2001; Larson 2000; Norris & Pfeiffer

2003). We have looked at variants of face-to-face oral achievement assessment methods

in language programs, presented in several authors’ works (Omaggio Hadley 2001;

Brown 1985; Gan & Hamp-Lyons 2008), such as instructor interviews with students

working in pairs, individual interviews with the instructor, and in-class oral assessment

with learners working in groups. This section has also reviewed learners’ perspectives

towards face-to-face oral assessment methods. Reports on students’ more negative

reactions include students perceiving these tests as more difficult and anxiety evoking

and less pleasant, valuable or fair when compared to other forms of language tests

(Zeidner & Bensoussan 1988; Scott 1986; Paker & Höl 2012). Some of these studies have

also revealed learners’ positive reactions such as viewing these types of oral tests as an

important and valid way of testing their knowledge of the target language. Students were

also able to distinguish their perceptions of the validity of an oral exam from its

perceived difficulty and anxiety (Scott 1986). Although not the majority, a few research

studies have also reported on the learners’ low levels of anxiety in these testing

situations (Shohamy 1982). In addition, some authors (Brown 1985; Omaggio Hadley

2001; Larson 2000) have discussed limitations in using face-to-face methods for oral

testing, considering lack of time a significant disadvantage in this type of assessment.

This chapter has also covered previous research on technology-assisted oral

testing, mainly focusing on studies reporting students’ attitudes towards computer-

based oral assessment in comparison to face-to-face methods (Early & Swanson 2008;

Keynon & Malabonga 2001; Jeong 2003; Joo 2007; Lowe & Yu 2009; Öztekin 2011; Sayin

2015). Although students’ reactions regarding aspects such as levels of anxiety,

perceived difficulty or perceived fairness, for instance, are somewhat varied in results

reported in previous research, several authors report on a general preference for the

face-to-face test, despite examinees’ positive reactions towards the technology-

mediated or computer-based tests. Reasons for this preference are related to aspects

such as the appreciation for the live interaction with the examiner/interviewer, or

viewing the face-to-face test as “more communicative and authentically interactive”, for

instance (Stansfield et al. 1990; Jeong 2003; Joo 2007; Öztekin 2011; Lowe & Yu 2009).

According to students’ perspectives, some advantages in the application of

computer-based oral assessment methods include having control over when

to start and stop speaking and the belief that these tools help them enhance their

speaking abilities (Lee 2005; Tognozzi & Truong 2009; Kenyon & Malabonga 2001).

Other aspects highlighted include the fact that the use of technology in oral activities

allows for individual and constructive feedback and the fact that computerized oral

assessment allows the examiners to evaluate students’ performance at any time and

place (Early & Swanson 2008; Larson 2000). Some disadvantages implied in the use

technology as an oral assessment tool are the fact that a few students do not feel

comfortable with the technology used and they frequently report not thinking that

technology mediated assessment can capture real life interaction. Furthermore,

researchers have commented on the expenses involved in the application of technology

in oral evaluation, mainly when it comes to the development of oral assessment

materials (Tognozzi & Truong 2009).

Regarding previous studies including measurements of students’ development of

speaking skills when taking computerized oral activities throughout the instruction

period, few studies including pre- and post-speaking tests were found. Some of these

studies are concerned with Synchronous Computer-mediated Communication (CMC),

involving voice and chat tools (Satar & Özdener 2008; Payne & Whitney 2002). Studies

assessing the integration of computerized oral assessment into the curriculum and

including an investigation of students’ oral performance through the completion of

multimedia activities (Tognozzi & Truong 2009; Adair-Hauck, Willingham-McLain &

Youngs 1999) were also discussed. In the majority of these studies, the experimental

condition shows an advantage over the control condition.

The third main section of this chapter has reviewed previous research on anxiety

in foreign language learning, since the analysis of students’ perceptions towards the two

oral testing modes (face-to-face and computer-based) in the current study mostly

focuses on students’ anxiety. We saw that language anxiety has been defined as “a

distinct complex of self-perceptions, beliefs, feelings, and behaviors related to classroom

language learning arising from the uniqueness of the language learning process”

(Horwitz, Horwitz & Cope 1986: 128). The topics discussed included existing methods

to measure anxiety, factors involved in language learning anxiety and the potential

relationship between anxiety and oral performance. For the measure of students’

anxiety in the context of language learning, several researchers have used

questionnaires adapted from the FLCAS, an anxiety scale developed by Horwitz, Horwitz

and Cope (1986), (Phillips 1992; Ganschow & Sparks 1996; Park & Lee 2005; Wilson

2006; Zulkifli 2007; Liu & Jackson 2008; Hewitt and Stephenson 2011; Yahya 2013; Abu-

Rabia & Shakkour 2014). Previous research has also included other types of affect

questionnaires consisting of closed-ended and open-ended questions and interviews

(Horwitz & Yan 2008; Kenyon & Malabonga 2001; Shohamy 1982; Scott 1986;

Savignon 1972; Paker and Höl 2012; Zeidner & Bensoussan 1988; Öztekin 2011; Sayin

2015).

Regarding the factors involved in foreign language anxiety, Horwitz, Horwitz &

Cope (1986) have identified three primary dimensions, which are communication

apprehension, test anxiety and fear of negative evaluation. These sources of language

anxiety have also been observed by several scholars in the field (Phillips 1992;

Ganschow & Sparks 1996; Wilson 2006; Zulkifli 2007; Liu & Jackson 2008; Hewitt and

Stephenson 2011; Yahya 2013; Abu-Rabia, Peleg & Shakkour 2014). Additionally,

aspects such as learner beliefs about language learning, instructor beliefs about

language teaching, personal perceptions of language aptitude, perceptions of low abilities in

comparison to peers, teacher’s questions and corrections in classroom environments,

language learning interest, motivation, test types, time pressure in a test situation,

teacher characteristics and also L1 anxiety (or poor L1 skills) are representative

examples of language anxiety sources found in the reviewed literature. Highly anxious

students have also reported feeling "panicky", "going blank", "being distracted" and

“feeling frustrated at not being able to say what they knew” specifically in oral test

situations (Phillips 1992).

A few studies have also included questionnaires specifically focusing on students'

anxiety towards computer-based and face-to-face oral tests in their analysis. Öztekin

(2011) reported on examinees’ general higher levels of anxiety before, during and after

the computer-based test for reasons such as “speaking to a computer”, for example.

Sayin (2015) also reported on learners’ agreement with “feeling heart beating very fast

during oral exams”, “feeling anxious during oral exams even when well prepared”,

“forgetting things because of stress when talking face-to-face with the instructor”, in a

face-to-face test or “feeling stressed when seeing the time running on the screen”, in a

computer-based test, for instance.

In the examination of a relationship between anxiety and oral performance, most

of the research reviewed has shown a negative influence of language anxiety on oral

abilities. As previous studies have shown, students demonstrating a high level of

language anxiety usually tend to exhibit lower levels of speaking skills (Hewitt &

Stephenson 2012; Scott 1986; Wilson 2006; Phillips 1992; Park & Lee 2005). Some

studies, however, also reported not finding a strong relationship between students’

anxiety and oral performance (Shohamy 1982; Young 1986). Furthermore, Öztekin

(2011), for instance, did not find a correlation between speaking anxiety and speaking

test scores at a pre-intermediate level but also reports on a significant negative

correlation between both speaking anxiety and speaking test anxiety and learners' scores on

a computer-assisted oral test at an intermediate level.

The literature reviewed in this chapter is relevant to understanding and

interpreting the development and results of the current study. The following chapter

will consider this study’s methodology and research design in detail.

CHAPTER 2: METHODOLOGY

2.1. Introduction

The current research on the use of technology as an oral achievement-testing

tool, through the analysis of students’ perceptions and oral performance, employs

several data collection procedures. The purposes in this chapter are to (1) define the

nature of the research design, the specific research questions and the participants; (2) describe the

data collection procedures for the analysis of students’ reactions towards the face-to-

face vs. the computer-based oral test and for the investigation of whether there was

any improvement in students’ speaking abilities when taking computerized oral

activities throughout the instruction period; (3) provide a general description of the

statistical procedures used to analyze the data.

2.2. Summary of Research design

The present study is based on the collection of affect surveys and recordings of

spoken language produced by L2 learners of Portuguese, enrolled in the second quarter

of the elementary Portuguese series. 27 Portuguese learners participated in this study over

the course of one quarter. All learners took a computer-based and a face-to-face final

oral exam at the end of the course, in order to analyze students’ feelings towards the two

different modes of final oral assessment. Students thus completed an affect

questionnaire before and after the course, on their expectations and final opinions

towards the two types of final oral exams. In addition, students responded to survey
questions concerned with the anxiety factors involved in each test.

In the analysis of students’ feelings towards both modes of final oral assessment,

it is important to note that these students belonged to two different groups, each

receiving a different treatment throughout the quarter. This aspect might influence the

way each group feels towards the computerized final oral exam in comparison to the

face-to-face method.

The reason why two different classes were used in this research was essentially

for the investigation of another point of analysis in the study, concerned with whether

or not there is an improvement in oral performance when taking computer-based oral

activities throughout the instruction period. Thus, in order to investigate this aspect, one

class took paired computerized oral activities throughout the course in a computer lab,

during classroom time (the Experimental class), and the other one took regular oral

activities in the classroom (the Control class). The differences between students’ speech

in the Experimental class and the Control class were then examined by 4 raters, through

a pre- and a post-oral test. These tests were thus created specifically to measure

whether or not there was any improvement of speaking abilities in the application of

technology in oral assessment activities during the quarter and are independent from

the final oral exams taken at the end of the course. A detailed description of all

instruments, tests and oral activities will be provided further in this section.
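
To make the logic of this pre-/post-test comparison concrete, a minimal illustrative sketch in Python is provided below. It assumes that each student's ratings are first averaged across raters, that a gain score (post-test minus pre-test) is computed for each student, and that the mean gains of the two classes are then compared with an independent-samples t-test. The scores shown are invented, and this sketch is not the analysis conducted in the present study, whose statistical procedures are described later in this chapter.

# Illustrative sketch only: comparing pre-to-post gains between the two classes.
from statistics import mean
from scipy.stats import ttest_ind

# Hypothetical per-student scores, already averaged across the raters.
experimental = {"pre": [2.8, 3.1, 2.5, 3.4, 2.9, 3.0], "post": [3.6, 3.8, 3.1, 4.0, 3.5, 3.7]}
control = {"pre": [2.9, 3.0, 2.7, 3.2, 3.1, 2.8], "post": [3.2, 3.3, 2.9, 3.5, 3.3, 3.0]}

def gains(group):
    # Gain = post-test score minus pre-test score for each student.
    return [post - pre for pre, post in zip(group["pre"], group["post"])]

exp_gains, ctr_gains = gains(experimental), gains(control)
t_stat, p_value = ttest_ind(exp_gains, ctr_gains, equal_var=False)  # Welch's t-test
print(f"mean gain (Experimental) = {mean(exp_gains):.2f}")
print(f"mean gain (Control) = {mean(ctr_gains):.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")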

As previously explained, the objective of the proposed study is 1) to examine the

use of technology in oral achievement testing in terms of students’ perspectives and

affective factors (by investigating students' reactions towards the computer-based vis-à-vis

the face-to-face final oral exam); and 2) to understand the value of technology in oral

assessment activities in terms of promoting oral performance (by analyzing students’

development of speaking abilities when taking paired computerized oral activities

throughout the instruction period). As mentioned before, these aspects are studied in

the specific context of the final oral exams and computerized oral activities that are part

of the Portuguese Language Program where this research takes place.

2.3. Research questions revisited and hypotheses

The research questions that guide the present study are the following:

1. What are the students’ reactions to the computer-based vis-à-vis the face-to-

face final oral exam, regarding levels of anxiety, perceived difficulty, and perceived

fairness in testing speaking abilities, as well as levels of comfort with the technology

used?

1.1. Do students’ perceptions towards both types of final oral exams differ

depending on 1) the different treatment received by the two classes; and 2) whether the

survey was taken before or after the course?

Question 1.1. is exploratory, since a comparison between students’ responses in

different groups, particularly before and after taking both types of tests, will be

investigated in the specific setting presented in the current study. However, it is

predicted that the different treatment received throughout the quarter will influence

students’ feelings in both classes. It is expected that, on average, the Experimental class

will report lower levels of anxiety and difficulty towards the computerized final oral

exam vis-à-vis the face-to-face final oral exam than the Control class, as it is more

familiar with this exam format, due to the practice of computerized oral activities

throughout the quarter.

1.2. What are the students’ reactions to the computer-based vis-à-vis the face-to-

face final oral exam at the end of the quarter?

Students’ reactions to the computer-based test vis-à-vis the face-to-face final oral

exam will also be explored in the context presented in the current research. However,

based on instructors’ informal observations of students’ apparent nervousness when

taking a face-to-face oral exam, it is predicted that, on average, students will report a

relatively lower or equal level of anxiety in the computer-based test.

2. What anxiety factors and feelings, presented on the anxiety scale, are more

involved in one type of oral exam in comparison to the other?

2.1. Is there a considerable difference between the Control and the Experimental

class?

The question of which anxiety factors are more involved in one test in

comparison to the other, and whether there is a considerable difference between classes,

is also exploratory, since these aspects will be observed specifically in the context presented in

the current research.

3. Does the Experimental class show greater improvement than the Control class

in terms of oral performance, in certain linguistic areas and overall?

Previous studies have shown that L2 learners believed they had benefitted from

web-based learning throughout the quarter in terms of facilitating the development of

their language skills (Lee 2005). Tognozzi & Truong (2009) have also shown that an

experimental group completing oral activities in a Web-based program as part of their

homework, throughout the instruction period, and receiving individualized oral

feedback from the instructor, produced “longer and more accurate speech samples”

(p.9), in comparison to a control group. However, in the proposed study, the

Experimental class participates in the oral activities throughout the quarter in a

different context (in pairs and during class time, in a computer lab). Therefore, the

question of whether the Experimental class shows an improvement over the Control

class, in terms of oral performance, will be, in part, explored in the current research.

Nevertheless, based on Tognozzi & Truong's (2009) hypothesis and reports, it is

expected that the fact that students in the Experimental class can listen to their speech

after recording the conversation might trigger their awareness of oral mistakes. In

addition, the fact that the technology used in these oral activities also allows for the

reception of the instructor’s individualized feedback, which is usually not possible in the

classroom, might also contribute to an improvement in terms of oral performance

(Tognozzi & Truong 2009).

2.4. Participants

As previously mentioned, a total of 27 adult Portuguese learners participated in

this study over the course of one quarter. Students were enrolled in the second quarter

of the Elementary Portuguese series of Portuguese for Spanish speakers. The

participants are split into two different groups, which correspond to two different classes,

defined as the Experimental class (14 students) and the Control class (13 students, plus

another instructor to form the last pair). All learners had been exposed to Portuguese

instruction during the previous quarter and, since they were enrolled in a Portuguese

for Spanish speakers’ class, students were familiar with the Spanish Language. These

students’ majors are mixed and the range of their age is from 18 to 21.

Participants in this study signed a consent form and were informed of the fact

that their involvement in this research was voluntary and unpaid, and that their decision

would not affect their grade in the course. All students decided, however, to participate

voluntarily and to take both types of tests in order to provide the researcher with

comparable results regarding their perceptions towards both the face-to-face and the

computer-based final oral exams. Since, for the study’s purposes, learners would have to

take an extra oral exam at the end of the quarter (the computer-based test), they were

offered the possibility of choosing the better result of the two final oral tests as

part of their grade.

2.5. Data collection procedures

As previously mentioned, in order to analyze students’ affective factors and

reactions towards the computer-based vis-à-vis the face-to-face final oral exam, L2

learners of Portuguese in both classes completed the new computer-based and the

traditional face-to-face final oral exam (described below) at the end of the instruction

period and were asked to answer affect questionnaires, which will be further explained

in detail.

Regarding the analysis of whether the Experimental class indicates any

improvement over the Control class in terms of oral performance, through the

application of technology in practice oral tasks, an oral pre-test, at the beginning of the

quarter, and an oral post-test, at the end of the instruction period, were administered to

the two different groups of students. These pre- and post-tests were integrated in the

course syllabus for the two classes and students completed them as part of the course

oral activities taken during the quarter. These tests were also taken through the

computer, using a voice-recording tool on the online platform called MyPortugueseLab

(described below), powered by the publisher Pearson Education.

It is also important to note that the same instructor taught both classes, which

assured the groups’ homogeneity regarding the kind of instruction they were receiving.

Different materials were thus used to gather data on the application of technology as an

oral achievement-testing tool, regarding the analysis of students' perceptions and

affective factors and potential improvements in oral performance.

A detailed description of the procedures and data collection materials for the

different points of the analysis in the current study is presented as follows.

2.5.1. Investigation of students’ perceptions and affective factors towards the two

types of final oral exams

As mentioned above, in order to answer the research questions concerned with

the investigation of students’ reactions towards new technology-mediated test in a

comparison between this testing format and the traditional method, students in both

classes took a computer-based and a face-to-face final oral exam at the end of the quarter

and responded to affect questionnaires on their reactions towards the two testing

modes. I will start by presenting a description of the two types of oral exams taken at the

end of the quarter.

2.5.1.1. Face-to-face final oral exam

The face-to-face oral exam corresponded to the traditional method, where

students went to their instructors’ office hours in pairs and randomly chose a

conversation card containing one out of 6 role-play situations, which were given to them

a week before the exam. Following the situations’ guidelines, students performed a

conversation with each other, for approximately five minutes, in front of the

instructor. Since this was an achievement exam, all the situations’ guidelines presented

in this final oral exam were related to the course content and can be seen in Appendix A.

It is relevant to note that participants in this study were already familiar with the

traditional final oral exam method, as they had completed a similar test in the preceding

quarter.

2.5.1.2. Computer-based final oral exam

The new online final oral exam system attempted to deviate as little as possible

from the traditional face-to-face test, allowing for the two types of exams to be as

comparable as possible, with the main difference being the fact that the traditional

format was taken in the presence of the instructor in his/her office and the new format

was taken in a computer lab. Thus, the final computer-based oral exam followed what

had been done in the traditional method. The exam was therefore also taken in pairs

and students were given the same set of oral situations a week before, and were told to

choose a partner in order to prepare for it (Appendix A). The new exam format requested

students to talk for approximately the same period of time asked in the traditional

method.

When investigating the necessary steps in the design of an oral exam in this program, the researcher became acquainted with several MyPortugueseLab functions that allow one of the six oral situations to be presented at random, which ensured the similarity between the two exam formats. The oral situation to be performed was thus randomly assigned to the students on the online platform for the course, since in the traditional method the teachers also randomly selected one of the six oral situations for the students to complete in pairs. A step-by-step description of

the test creation method on MyPortugueseLab can be found in Appendix B.

The final computer-based oral exam was thus administered to students at the end

of the course. The investigator was present in the computer lab on the day students

completed the exam and a set of instructions was presented to the learners, containing

the necessary steps to take the test (Appendix C).

Each pair of students shared one computer in the lab and one microphone. The

use of a microphone connected to the computer helped to avoid background noise, which could have interfered with the instructor’s understanding of the students’ recorded speech during the evaluation process.

Since the technology-mediated oral exam was taken in pairs, learners were asked to sign in to one of the two students’ accounts on the learning platform in order to access and take the test. The instructor was then able to assign a grade separately on each student’s account.

What follows is a description of the online platform used in the creation of the

computer-based oral exam.

The online learning platform used in the creation of the new technology-mediated final oral exam was MyPortugueseLab, a platform by Pearson Higher

Education, which accompanies the elementary Portuguese textbook used in the course,

Ponto de Encontro. This learning platform allows students to access the online format of

the course textbook, through an icon called eText. MyPortugueseLab also offers a variety

of language-learning tools and resources, such as audio and video materials that

accompany the textbook. Instructors are able to assign activities, with specific deadlines,

that are graded either automatically or by the instructor, in the case of open-ended

oral and written activities. After assignments are completed and graded, both students

and instructors can automatically track learners’ progress, as well as the overall

progress of the class.

It is noted in the book’s description by Pearson that the assignments in the

program are based on a communicative approach, since the integration of vocabulary

and grammar focuses on the usage of language in interactional contexts of real-life

situations. Similar to the exercises found in the printed textbook, the assignments found

on MyPortugueseLab bring together the vocabulary, grammar structures and cultural

content presented in each chapter.

A relevant feature of MyPortugueseLab is the fact that it integrates an oral

practice section consisting of a voice-recording tool, which allows users to listen to their

messages before submitting them. Besides providing written feedback, teachers can also

offer audio feedback on speaking activities or on any other instructor-graded task, such

as more open-ended reading and writing exercises. For the guided machine-graded activities, the program offers automatic feedback through “Multilayered feedback and Point-of-Need Support”. Through this feature, students can get instant hints on incorrect answers and try the activity again. The program also indicates specific sections in the eText that are linked to the activity’s topic.

In addition to all the exercises and learning tools available on this online platform, instructors can create and assign their own instructional materials. This program was thus chosen for this study, not only because it is already used in the course, which ensures the students’ and teacher’s familiarity with it, but also because it allows instructors to create technology-mediated practice oral activities and final oral exams.


2.5.1.3. Measuring students’ perceptions and affective factors towards the two types of

final oral exams

As previously explained, in order to measure students’ reactions towards the

computer-based vis-à-vis the face-to-face final oral exam, students responded to affect questionnaires. Here it is relevant to mention that, although self-reported questionnaires are not as objective as physiological tests would be, this type of survey is accepted and used in the field of anxiety in language learning, as well as in occupational health studies, for instance. As Lenderink & Zoer et al. (2012) mention,

“Self-report is an efficient and accepted means of assessing population characteristics,

risk factors, and diseases and is frequently used in occupational health studies” (p.1).

Following Tognozzi & Truong’s (2009) procedure of administering a pre- and post-questionnaire on students’ reactions regarding technology-mediated oral assessment, learners were asked to complete a survey on their overall reactions to both final oral exams. Thus, at the beginning of the quarter, students responded to a survey regarding their expectations (Appendix D) and, at the end of the quarter, completed a follow-up survey on their final opinions towards both types of tests (Appendix E). The purpose of having a pre- and post-survey in this study was to investigate to what extent the application of technology in the final oral exam corresponded to the students’ expectations at the beginning of the quarter and possibly to understand which features of the oral exam proved to be more or less effective from the learners’ perspective. In this

survey, students were asked to rate their level of comfort with the technology used as

well as their levels of anxiety, perceived test difficulty and perceived fairness in testing

their speaking abilities, when taking the computer-based and the traditional final oral exams. A 3-point scale of (1) “Low”, (2) “Medium” and (3) “High” was used in this section, followed by open-ended questions on the reason why students selected a certain response. Finally, in order to further investigate some of the advantages and limitations associated with both final oral exam formats, students were asked open-ended questions on what they liked “the least” and “the most” when taking each type of test.

The procedure taken for data collection of the students’ general reactions

towards both types of oral exams was developed based on the review of literature and

an analysis of previous studies, such as Tognozzi & Truong (2009), Lee (2005) and Kenyon & Malabonga (2001), mainly in terms of the variables included in the

questionnaire. Some of these variables are also included in other studies examining

students’ reactions towards face-to-face and computer-based oral testing, such as Jeong

(2003); Joo (2007); and Öztekin (2011), for instance.

As will be explained in detail in the Analysis chapter of this dissertation, descriptive statistical analyses, based on students’ responses to the survey, were used in

the examination of students’ reactions towards the computer-based test vis-à-vis the

face-to-face test (research questions 1.1. and 1.2). Learners’ responses to open-ended

questions in the survey will be examined in the Discussions section of the current study.

As mentioned before, the participants were divided into two different groups, which corresponded to the Control class and the Experimental class. The two classes received a different instructional treatment during the quarter, in order to investigate whether or not the Experimental class showed an improvement over the traditional class in terms of oral performance when taking computerized oral activities throughout the instruction period (as is further explained in detail below). Since the two classes received

a distinct treatment, research question 1.1. examines whether or not there is a difference

in students’ reactions towards the two types of final oral exams in each class. As we have

seen, this research question also investigates if students’ opinions differ depending on

whether the survey was taken before or after the course.

For the investigation of students’ feelings, the present project focuses particularly on the question of anxiety, more specifically on students’ anxiety towards two different types of oral exams, rather than towards language learning in general. Accordingly, in addition to students’ reports on their perspectives towards the new and the traditional method, in order to study the variable of anxiety in both tests in more depth and provide a richer understanding of the factors involved in this variable in oral

testing, the sources of anxiety found in both types of oral exams were also observed.

Hence, in order to answer research questions 2 and 2.1, concerned with the anxiety factors and feelings that are more involved in one type of test in comparison to

the other, an additional survey was used. This survey is adapted from the FLCAS

(Foreign Language Classroom Anxiety Scale) found in Horwitz et al. (1986) and can be

found in Appendix F.

To fit the purposes of this research project, some items from the FLCAS were

either modified or deleted. The items deleted correspond to those that were not

considered relevant to the measure of students’ anxiety in the final oral exams. Other

items, such as those concerning the students’ feelings towards certain aspects in the

classroom, were modified, replacing the words “in the classroom” with “in the computer-based” or “in the face-to-face oral exam”. The researcher also included a statement in the Anxiety scale survey concerning students’ levels of nervousness towards each test in comparison to their written exams. Although this study did not focus on a comparison between oral and written exams, the instructor was curious about this aspect due to reports in some of the reviewed literature on this
matter (Zeidner & Bensoussan 1988; Scott 1986). The anxiety factors that are more

involved in one test in comparison to the other were thus investigated (research

question 2), followed by the question of whether or not there is also a difference

between classes in terms of students’ responses to this survey (research question 2.1).

Here, it is worth highlighting that the anxiety factors examined in these research questions are limited to the ones presented in the anxiety scale used in the current study. At this investigation point, it is also relevant to mention that 3 students from this research sample had missing responses on some of the questions included in this survey. These learners’ ratings on the questions they did answer were included in the analysis.

2.5.2. Investigation of students’ development of oral performance when taking

computer-based oral activities throughout the quarter

As previously explained in the current chapter, in order to investigate whether

or not there was a development in learners’ oral performance when taking

computerized oral assessment activities throughout the instruction period, two

different groups of participants took part in the current research project. The two groups

consisted of the Control class, where students completed oral activities in the classroom during the quarter, and the Experimental class, where students completed the same oral activities in a computer lab, also during class time. All these activities were already part of the course syllabus and were meant to be taken in class. The different instructional treatment the two classes received will be explained in detail in the section below.

One of the goals in this study was thus to observe the differences in speaking

abilities between the two classes, examining whether the Experimental class, taking oral

activities through the computer during the instruction period, showed an improvement

over the Control class at the end of the quarter, in terms of oral performance. As

previously mentioned, aspects such as the fact that the computerized oral activities allowed students to receive individualized oral feedback, as well as the fact that students could listen to their speech before submitting it (Tognozzi & Truong 2009),

were thought to contribute to a greater improvement of oral abilities in the

Experimental class. In order to investigate whether the Experimental class showed an

improvement over the Control class, a computer-based pre-test, at the beginning of the

quarter and a post-test, at the end of the quarter, were administered to the students in

both the Control and the Experimental classes. It is important to note that this pre-/post-test was completed independently of the computer-based final oral exam taken at the

end of the quarter, since it was used for a different type of analysis in the current study.

The pre- and post-tests were thus exactly the same exam, taken at different

times. Therefore, the test had to be designed in a way that students were able to answer

the same questions at the beginning and at the end of the course. The researcher thus

created one oral situation, according to the content of the syllabus, and included

questions associated with topics that were tied to the Portuguese language class students
had taken in the previous quarter and that were also taught and developed in the

Portuguese language course students were currently enrolled in. A copy of the pre-/post-test, as well as a list of the grammar and vocabulary topics included in this

assessment, can be found in Appendix G.

Both classes took the pre- and post-test during class time, in a computer lab. The

researcher was also present at the time these tests were administered and instructions

on how to take the test were presented to the students prior to the test’s administration

(see Appendix H). These tests were taken in pairs and required one student to ask

questions and another to answer those questions. Learners were thus asked to trade

roles, in order to ensure that both students would produce the same amount of speech. This also ensured that each had the opportunity to answer all the questions in the test. Moreover, learners were able to access the test by signing in to their accounts on MyPortugueseLab. Since the pre- and post-tests were taken in pairs, learners were first asked to sign in to only one of the students’ accounts and then to sign in to the other student’s account on the program. As stated on the instructions form, the student signed in to his/her account would be the one answering the questions in the oral situation to be performed and would be assessed on those responses.

Each pair of students shared one computer in the computer lab and one headset (consisting of headphones and a microphone).

2.5.2.1. Differences in the instructional treatment for the Control and Experimental class

After the conclusion of the oral pre-test, taken by both classes at the beginning of the quarter, the Experimental class completed five computer-based oral activities, in pairs, once every two weeks. The timing of these oral activities corresponded to the way they were already organized in the traditional syllabus. All oral tasks were also taken through MyPortugueseLab, each lasting approximately five minutes, and were based on prompts found in the oral situation sections of the textbook (Appendix I). The majority of these tasks corresponded exactly to the prompts found in the textbook, whereas a few of them were slightly modified by the researcher with the addition of one or two more questions in the prompt, in order to include a relevant grammatical/vocabulary aspect taught in the course.

The L2 Portuguese learners in the traditional class completed the same five oral

activities throughout the quarter, as regular pair-work activities taken in the classroom.

In order to make both groups as equal/comparable as possible, all oral tasks were

completed during class time, either in the regular classroom (for the Control class) or in

a computer lab (for the Experimental class). Moreover, since these oral situations were

also taken in pairs and required one student to ask questions and another student to

answer those questions, learners in both classes were also asked to trade roles, in order

to ensure they would get the same type of oral practice every time.

The Experimental class received instructions on how to take the computer-based

practice oral activities through MyPortugueseLab, which corresponded to the instructions presented for the pre-/post-test (Appendix H). Here students also shared one computer and one headset in the computer lab and were able to access the oral activities

through their accounts on MyPortugueseLab.

As indicated in the literature reviewed, the fact that students can listen to their

speech before submitting it and the fact that the program allows students to receive individual oral feedback when taking computer-based oral activities have been pointed out as advantages in previous studies (Tognozzi & Truong 2009; Kenyon & Malabonga

2001; Lee 2005). Since the learning platform used for the computerized oral activities

included these features, students in the Experimental class were able to listen to their speech after recording the conversation and were also able to receive individual feedback from the instructor. In order to, once again, make the two groups as comparable

as possible, in terms of the type of instruction received, the instructor was asked to

provide the same type of feedback to both classes, whenever students performed these

oral activities. This feedback was thus given to students in both classes in the form of

“direct correction”, where the explicit form is given to the student whenever an error

has been committed (Ellis 2009). As Ellis (2009) explains, in this type of feedback “the

corrector indicates an error has been committed, identifies the error and provides the

correction” (p.9). The author also refers to other oral feedback strategies that can be

used by the instructor, such as “to start with a relatively implicit form of correction (e.g.,

simply indicating that there is an error) and, if the learner is unable to self-correct, to

move to a more explicit form (e.g., a direct correction).” (p.14). Since the Experimental

class would not interact with the instructor whenever taking the computer-based oral

activities, this feedback strategy of moving from a more implicit to a more explicit form was harder for the instructor to adopt when providing oral feedback through the computer. For this reason, a feedback strategy in the form of “direct correction” seemed

to be more suitable in this context.

Ellis (2009) also claims that corrective feedback can be either immediate or delayed. As the author explains, written feedback, for instance, is almost invariably

delayed. In the case of the current study, the Experimental class received delayed

feedback, since it was given through the computer, at a later time, after students

completed the task.

Hence, the only difference in the feedback received by both groups was the fact

that, while the Experimental class received delayed and individual feedback through

MyPortugueseLab, the traditional class received immediate and non-individual feedback

in the classroom. Students in the traditional class could not receive individual feedback

due to the fact that it is not possible for the instructor to listen to each student’s

production of oral speech individually in the classroom context (Tognozzi & Truong

2009).

The difference in the treatment received by the two classes was inspired by the methodology found in Tognozzi & Truong (2009).

2.5.2.2. Voice recording tool used for the computer-based oral activities taken

throughout the quarter by the Experimental class

The voice-recording tool used for the technology-mediated oral activities taken

throughout the quarter is also part of MyPortugueseLab (described above). As mentioned

previously, this program is part of an online course management system that

accompanies the Elementary Portuguese textbook used in the course. This online

platform provides a variety of language-learning tools and resources, as well as audio

and video materials.

In order to foster students’ oral proficiency, MyPortugueseLab integrates an oral

practice section, where, in addition to the oral tasks that are already provided in the

program, instructors can create their own oral assessment activities. As highlighted

above, the oral section allows users to listen to their messages before submitting them

and the instructors to provide oral feedback. Both the students and the instructor

teaching both sections of the same Portuguese language course received training at the beginning of the quarter to ensure their familiarity with the voice technology tool they

would be using.

2.5.2.3. Measuring students’ development of oral performance

As stated in the introduction chapter, the current study aims to examine whether

or not there is any improvement in students’ oral performance based on the linguistic

aspects that are formally taught and emphasized in the language program, such as

vocabulary items, grammatical structures and grammatical functions. Aspects of

students’ overall communication were also assessed in order to measure more global

language skills (Liskin-Gasparro 1983), such as being creative, resourceful or easily

understood, for instance.

The instrument to measure students’ development of speaking abilities

throughout the course is a result of an analysis of the reviewed literature, mostly on

works providing examples of oral testing rating scales in L2 (Satar & Özdener 2008;

Tognozzi & Truong 2009; Larson 2000; Liskin-Gasparro 1983; Omaggio Hadley 2001).

As in Satar & Özdener’s (2008) study, the scale in this project was designed

specifically for the research pre- and post-test, involving a description of the ability

parameters for each rating level in each question/prompt of the oral situation created

for the test, in an attempt to identify whether there was any language development in

speaking, in particular with reference to topics covered in class. This scale examines

three different aspects of oral performance: vocabulary, grammar, and communicative

functions, since these are the linguistic aspects formally taught and emphasized in the

course where the new technology oral assessment tool is being applied. The third

category consists of the communicative functions needed to complete the task

(e.g.: “to be able to accept/reject a suggestion”). In addition to these 3 linguistic

variables, another subheading, entitled “Overall communication”, was entered in the

rating scale, in order to also examine whether the general ideas conveyed by the learner

in the target language are easily understood by the examiner. This item of evaluation is

also part of the scale since, apart from the accuracy of the constituent parts of the

students’ speech, aspects such as students’ willingness to take risks and their display of communicative ease within the context are valued throughout the course, based on a communicative language teaching approach. For the assessment of these variables, a 42-point rating scale, ranging from 0 to 4 on each question/prompt, was used, and 4 Portuguese language instructors, who are all native speakers of the target language and have also taught this class before at the same institution, used this evaluation scale in order to assess students’ performance and ensure the reliability of the results. A description of the ability parameters for each rating level (from 0 to 4) on each question/prompt of the pre- and post-test oral situation was provided, allowing for the raters’ precise

evaluation of the aspects being analyzed in the test. A copy of this rating scale can be

seen in Appendix J.

2.6. Summary

The present chapter provided a description of the research methodology in this

study, describing the participant group and the instruments used to gather data for the investigation of students’ perspectives and affective reactions towards the technology-mediated vs. the face-to-face final oral exam, and for the examination of whether there is any improvement in learners’ oral performance after completing computer-based oral activities throughout the instruction period. Detailed information about data analysis and results is provided in Chapter 3.

CHAPTER 3: DATA ANALYSIS AND RESULTS

3.1. Introduction

The purpose of this chapter is to provide a description of the data analysis and

questionnaire results for each research question in the current study. It will be

organized around two central sections corresponding to the two main research topics.

The first section reports on students’ perceptions and affective factors, with a

focus on anxiety, towards the two different modes of oral testing. This section will first

present the questionnaire results of students’ general perceptions towards the

computer-based test vis-à-vis the face-to-face oral testing, and it will address students’

levels of overall anxiety, perceived difficulty and perceived fairness in testing their

speaking abilities, as well as their comfort with the technology used in the computer-

based oral exam. The presentation of these results will be followed by an examination of

the anxiety factors and feelings (measured in the anxiety scale) that are more involved

in one type of oral exam in comparison to the other. Results from the first section will

thus allow us to analyze and reflect on the affordances of technology in oral testing,

particularly regarding the exam format used, in terms of students’ perspectives and

affective factors.

The second section is devoted to the examination of whether or not there is any

improvement in students’ oral performance when taking computer-based oral activities

throughout the instruction period. As previously mentioned, these oral activities

resemble the paired computer-based final oral testing format used in the current study.

In order to investigate this aspect, the experimental group and the control group will be

compared in terms of improvement in oral production. The results of the comparison

performed in section 2 will thus allow us to reflect on whether or not there are any

affordances in the use of technology in this type of oral assessment activity, taken

throughout the instruction period, in terms of learners’ improvement in speaking

abilities.

3.2. Students’ perceptions and affective factors

This section will examine students’ perceptions and affective factors towards the

computer-based and the face-to-face final oral exam, with a focus on anxiety. It should

first be mentioned that the design of the questionnaire on learners’ overall reactions

towards the two modes of oral testing did not allow for a fine-grained discrimination

between the variables included in this survey. Throughout the process of analyzing

students’ reported data in the survey concerned with learners’ overall reactions towards

the two modes of oral testing, the computation of Pearson correlation tests revealed that

levels for “Anxiety” and “Difficulty” were highly correlated. Descriptive exploratory data

analyses were thus used in this section in order to discuss the results based on learners’

self-reported data (on a 3-level scale). As will be further discussed in the last chapter of this dissertation, the summary statistics used in the current investigation can form the foundation for more complex computations and analyses through the use of an instrument that will allow the variables to be kept more independent from one another, because, understandably, there usually is a correlation between increased anxiety and the perceived difficulty of the test. Suggestions on how to keep these factors separate will

also be presented in the Discussions section.
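For illustration only, the following minimal R sketch shows how such a correlation check between the self-reported “Anxiety” and “Difficulty” ratings could be carried out; the data frame, its column names and all values are hypothetical and do not reproduce the study’s actual scripts or data.

    # Hypothetical example: are self-reported anxiety and perceived difficulty
    # ratings (1 = low, 2 = medium, 3 = high) correlated?
    survey <- data.frame(
      anxiety    = c(1, 2, 3, 2, 1, 3, 2, 2, 1, 3),   # invented ratings for illustration
      difficulty = c(1, 2, 3, 2, 2, 3, 2, 1, 1, 3)
    )
    # Pearson correlation test between the two self-reported variables
    cor.test(survey$anxiety, survey$difficulty, method = "pearson")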

In order to calculate summary statistics in this study, all collected data was

analyzed using R software. A descriptive data summary for these analyses will thus be

presented in tables and figures below. Samples of students’ qualitative reactions and a

discussion of these data will be considered in the Discussions and Conclusion chapter,

in order to further interpret the results yielded from the quantitative responses.

3.2.1. Students’ general perceptions towards the computer-based test vis-à-vis the

face-to-face oral testing

As a first step in the analysis of students’ perceptions and affective factors,

students’ general perceptions towards the face-to-face and the computer-based final

oral exams were investigated. Here, I attempt to answer research questions 1.1 and 1.2. The first one is concerned with whether or not there were any differences in learners’ reactions depending on the treatment received by the Control and the Experimental

group and also depending on whether the survey was taken before or after completing

both modes of oral testing. Research question 1.2 focuses on all students’ responses

together (without differentiating between classes) on their general reactions towards

the face-to-face and the computer-based test at the end of the course.

3.2.1.1. Students’ responses in the Control and the Experimental class, before and

after taking both types of final oral exams

As explained in the previous chapter, this analysis is pertinent for a better

interpretation of students’ perspectives and affective factors towards the two oral exam

formats and, therefore, for a deeper reflection on the affordances of technology on oral

achievement testing.

The observation of students’ responses in the Experimental class vs. the Control

class takes into account the fact that the Experimental class took computer-based oral activities throughout the quarter and the Control class did not, which might influence students’

responses towards the two types of final oral exams.

Moreover, students’ reactions depending on the time the survey was taken are also

assessed, since learners responded to the questionnaire before and after completing

both types of oral exams. As mentioned in the methodology chapter, students completed

this survey at the beginning and at the end of the course, revealing their expectations

and final opinions on both modes of oral testing. Hence, this examination allows us to

understand whether or not there was a change in students’ opinions on both tests in

terms of overall levels of anxiety, perceived difficulty and perceived fairness in testing

their speaking abilities, before and after taking the tests, on a 3-point scale of 1- “low”, 2-

“medium” and 3- “high”. Results for the level of comfort with the technology used in the

computer-based final oral exam, using the same scale, will also be presented. Means

were thus computed for students’ ratings in all quantitative items included in the

affective questionnaire.
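As an illustration of how such cell means can be obtained in R, the sketch below computes mean ratings by class, exam format and survey time from a toy data frame; the object and column names, and all values, are hypothetical assumptions rather than the study’s actual data or code.

    # Hypothetical example: mean anxiety ratings per class x exam format x survey time,
    # the kind of cell means reported in Table 1 below.
    responses <- data.frame(
      class   = c("Control", "Control", "Experimental", "Experimental"),
      format  = c("Computer", "F2F", "Computer", "F2F"),
      time    = c("Before", "Before", "After", "After"),
      anxiety = c(2, 3, 1, 2)   # invented 3-point ratings for illustration
    )
    # One mean per combination of class, format and time
    aggregate(anxiety ~ class + format + time, data = responses, FUN = mean)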

3.2.1.1.1. Overall anxiety, perceived difficulty and perceived fairness in testing

their speaking abilities

Table 1 below displays the values across classes on their reactions regarding

levels of anxiety, perceived difficulty and perceived fairness, before and after taking both

types of final oral exams (according to a 3-point scale of 1- “low”, 2- “medium” and 3-

“high”). In order to further analyze the main tendencies in students’ responses, in a

comparison across the two different classes and between learners’ expectations and

final opinions towards the exams, the mean ratings presented in this table are also

shown graphically in Figure 1 below.

Table 1. Mean values across classes before and after taking the face-to-face (F2F) and the
computer-based oral exams

                                         Before Course   After Course
Anxiety      Control        Computer     1.69            1.38
                            F2F          2.38            2.00
             Experimental   Computer     1.86            1.43
                            F2F          2.43            2.36
Difficulty   Control        Computer     1.69            1.54
                            F2F          2.00            1.69
             Experimental   Computer     2.00            1.64
                            F2F          2.50            2.00
Fairness     Control        Computer     2.15            2.54
                            F2F          2.62            2.77
             Experimental   Computer     2.29            2.36
                            F2F          2.64            2.64

Figure 1. Results across classes before and after taking the face-to-face and the computer-
based oral exams regarding levels of anxiety, perceived difficulty and perceived fairness.

A glance at Table 1 shows that, overall, the mean ratings in

students’ responses towards the Face-to-face test (F2F in the graphs) are higher than

the ratings towards the computer-based test (Computer), with respect to the anxiety,

difficulty and fairness variables included in the questionnaire, both in the Control and

the Experimental class, at the beginning and at the end of the course.

Overall anxiety

Descriptive statistics in Table 1 show that, on average, students in both groups

reported lower levels of anxiety towards the computer-based test in comparison to the

face-to-face oral exam (F2F), before and after the course.

When taking a closer look at the mean ratings for the anxiety levels, we can see

that the Control group’s ratings towards both tests before the course (Computer mean=

1.69; F2F mean= 2.38) are higher than after the course (Computer mean= 1.38; F2F

mean= 2.00).

This aspect was also observed in the Experimental group, as this tendency is also

revealed in the students’ responses before the course began (Computer mean= 1.86; F2F

mean= 2.43) and after taking the oral exams at the end of the quarter (Computer mean=

1.43; F2F mean= 2.36).

Another similar trend between classes is the fact that, although learners tend to

show higher levels of anxiety towards both types of oral exams before the course in

comparison to after the course, the mean ratings towards the computer-based test, both

before and after completing it at the end of the quarter, fall in between the “low” and the “medium” level interval in the survey (means ranging from 1.00 to 2.00). In the case of the Face-

to-face test, the mean ratings obtained in students’ responses, also before and after the

course, fall in between the “medium” and “high” level interval in the questionnaire

(mean ranging from 2.00 to 3.00). However, it is interesting to note that the anxiety

ratings before and after the course, for both tests, were slightly higher in the

Experimental class than in the Control class.

To summarize, overall anxiety results in the Control and the Experimental class

reveal that students tended to feel less anxious in the computerized test, in comparison to the traditional method, both before and after taking both tests at the end of the course. Learners in both classes also tend to report lower levels of anxiety towards the two different types of exams after completing them, at the end of the instruction period. Students in the Control and Experimental class thus tend to consider both tests to be less anxiety-evoking than expected. Furthermore, we were also able to observe that the Experimental

class tends to report slightly higher levels of anxiety in general.

Perceived difficulty

Regarding levels of perceived difficulty, the pattern in students’ responses is

similar to the one found in their reactions towards anxiety. As seen in Table 1, students

in the Experimental and the Control class reported lower ratings towards the computer-

based test, in comparison to the traditional method, before and after taking both types

of final exams at the end of the quarter. Both groups thus show a tendency to consider

the computer-based test less difficult than the Face-to-face one.

When looking specifically at the Control group’s responses before and after

taking the tests, we can also observe a decrease in students’ ratings regarding the

difficulty level of both oral exams from before (Computer= 1.69; F2F=2.00) to after

completing these tests at the end of the course (Computer= 1.54; F2F= 1.69).

With regard to the reactions in the Experimental class, ratings towards both tests

before the course (Computer= 2.00; F2F= 2.50) are also higher than after the course

(Computer= 1.64; F2F= 2.00).

Finally, when looking at the mean ratings for students’ responses regarding

difficulty, we can observe that, in comparison to the Control group, ratings in the

Experimental class were slightly higher both before and after the course.

In sum, similarly to the results yielded by students’ reactions towards their

overall levels of anxiety, learners in both groups tend to believe that the computer-based

oral test is less difficult than the Face-to-face one, both before and after taking the exams.

Students in both classes also tend to consider that both of these tests are less difficult

than expected. In addition, although the two classes have quite similar responses

regarding levels of difficulty towards both types of oral exams, before and after taking

them at the end of the course, ratings in the Experimental group tend to be somewhat

higher in general.

Perceived fairness

With respect to students’ responses on perceived fairness in testing their

speaking abilities, Table 1 shows that learners in both the Experimental and the Control

class revealed a tendency to consider the computer-based test less fair than the face-to-

face method, both before and after taking the face-to-face and the computer-based oral

exams.

With the Control group in particular, we can observe that the average ratings

towards both types of final oral exams before the course (Computer= 2.15; F2F= 2.62)

are lower than ratings after the course (Computer= 2.54; F2F= 2.77).

For the Experimental group, the mean ratings towards the computer-based test

also tend to be lower before the course (Computer= 2.29) in comparison to after the

course (Computer= 2.36). However, this group’s responses towards the face-to-face test

indicate equal mean ratings in terms of fairness, before and after taking this type of oral

exam (F2F= 2.64). Therefore, the Experimental group’s attitudes towards the face-to-

face test in terms of fairness did not really change from their expectations to their final

opinions.

Furthermore, Table 1 indicates that, before the course, the Control group shows slightly lower mean ratings towards both tests (Computer= 2.15; F2F= 2.62) in comparison to the Experimental group (Computer= 2.29; F2F= 2.64), whereas after the course, this group shows higher ratings for both tests (Computer= 2.54; F2F= 2.77) in comparison to the Experimental group (Computer= 2.36; F2F= 2.64).

It is also interesting to note that, in general, the highest mean ratings in students’

responses in the questionnaire given before and after the course can be observed in

their responses towards the fairness variable. Taken together, results in students’

responses in terms of Fairness indicate that learners in both classes considered the

computer-based oral exam to be less fair than the face-to-face one before and after

taking both tests. Additionally, both classes thought that the computer-based test was

fairer than what they expected (as the mean ratings towards this test were lower before

than after taking the exam).

Regarding the face-to-face oral exam, the Control class also thought this type of

test was more fair than expected but opinions in the Experimental class towards the F2F

assessment in terms of fairness did not change.

Furthermore, a slight difference we find between classes is the fact that, before the course, the Control group’s expectations towards both tests in terms of fairness are lower than those of the Experimental group. However, after the course, results show that

students in the Control group tended to consider both oral exams more fair than learners

in the Experimental group.

3.2.1.1.2. Comfort with technology used

Results on learners’ reports regarding their level of comfort with the technology

used in the computer-based final oral exam are presented below. Table 2 shows the

means computed for students’ reactions in the Control and the Experimental class,

before and after taking the tests. As mentioned above, students’ responses on this

questionnaire were based on a 3-level scale of 1- “low”; 2- “medium” and 3- “high”.

Table 2. Mean values across classes on level of comfort with the technology used before and
after taking the computer-based oral exam


                                                Before Course   After Course
Comfort with the       Control        Computer  2.62            2.46
Technology used        Experimental   Computer  2.21            2.57

As Table 2 displays, the pattern of students’ responses towards their level of comfort with the technology used in the computer-based final oral exam slightly differs

from one class to the other. First, we can observe that the mean ratings for the Control

group before the course (2.62) are somewhat higher than after the course (2.46),

whereas in the Experimental group, ratings before the course (2.21) are lower than after

the course (2.57). Additionally, we note that, before the course, mean ratings in the

Control group (2.62) are higher than in the Experimental group (2.21). Nonetheless,

after the course, students’ ratings in the Control group (2.46) are slightly lower than in

the Experimental group (2.57).

Overall, results regarding students’ responses towards the level of comfort with

the technology used in the computer-based test show that learners in the Experimental

group felt more comfortable than expected, while learners in the Control group felt less

comfortable than predicted.

In addition, students’ reports before the course reveal that, on average, the Control group expected to feel more comfortable than the Experimental group but, after the
course, there is a slight tendency for the Experimental group to feel more comfortable in

comparison to the Control group.

3.2.1.2. Students’ overall reactions towards both types of oral exams at the end of

the quarter

After providing a descriptive summary for students’ reactions towards the new

computer-based vis-à-vis the face-to-face final oral exams across classes, and taking into

account students’ expectations (before the course) and final opinions (after the course)

towards the two modes of oral testing, I will examine all students’ perceptions after

taking both types of oral exams at the end of the quarter (after the course). Thus, in order

to answer research question 1.2, concerned with all students’ reactions regarding the

two types of oral exams at the end of the instruction period, leaners’ responses were

taken together, without differentiating between classes.

3.2.1.2.1 Overall anxiety, perceived difficulty and perceived fairness in testing

their speaking abilities

Table 3 below displays the mean values (on a 3-level scale ranging from 1- “low”

to 3- “high”) for all students’ responses towards the final computer-based and the face-

to-face oral exams, regarding levels of anxiety, difficulty and fairness in testing their

speaking abilities. Results are highlighted visually in Figure 2 below.

Table 3. Mean values for all students’ responses regarding levels of anxiety, perceived
difficulty and perceived fairness, after taking the face-to-face (F2F) and the computer-
based oral exams at the end of the quarter

                                         After Course
Anxiety      All students   Computer     1.40
                            F2F          2.18
Difficulty   All students   Computer     1.59
                            F2F          1.85
Fairness     All students   Computer     2.44
                            F2F          2.70

Figure 2. All students’ reactions to both types of oral exams, regarding levels of anxiety,
perceived difficulty and perceived fairness, at the end of the quarter

Overall anxiety

Taking into account that the 3-level scale presented in the questionnaire

corresponds to 1- “low”, 2- “medium” and 3- “high”, Table 3 shows that the mean ratings

towards the computer-based test are closer to the “low” category in the survey

(mean=1.40). In the case of the face-to-face test, the average of learners’ ratings towards

anxiety falls into the “medium” category (mean=2.18). Hence, when examining all

students’ opinions regarding overall anxiety levels towards both types of final oral

exams, we can also observe students’ tendency to rate their levels of anxiety towards the

computer-based test lower than the face-to-face.

Perceived difficulty

With respect to all learners’ final opinions on levels of difficulty, Table 3 shows

that the mean ratings towards both types of oral exams are closer to the “medium”

category (Computer=1.59; F2F=1.85). However, the mean obtained for the face-to-face

test reveals that, on average, there is a slight tendency to perceive the face-to-face test as more difficult than the computer-based one, after taking both types of exams at the end of

the instruction period.

Perceived fairness

Regarding students’ final responses on levels of perceived fairness in testing their

speaking abilities, we can observe in Table 3 that the examinees’ ratings towards the

face-to-face final oral exam (F2F= 2.70) tend to lean towards the “high” numerical classification in the questionnaire, revealing students’ tendency to assess this test as highly fair. Although learners’ responses towards the computer-based test also indicate a tendency to consider it quite fair (Computer=2.44), higher ratings towards the traditional F2F oral testing method indicate examinees’ propensity to consider this type

of oral exam to be more fair in terms of testing their speaking abilities.

3.2.1.2.2. Comfort with the technology used

Concerning examinees’ level of comfort with the technology used in the computer-based oral exam, when taking into account all students’ responses at the end of the instruction period, the mean rating for students’ answers to the final opinions survey corresponded to 2.5, which falls in between the “2=medium” and the “3=high” level categories included in the survey. This result suggests that, on average, students

tend to feel fairly comfortable with the technology-mediated oral assessment tool used

in the computer-based final oral exam.

3.2.2. Examination of anxiety factors and feelings that tend to be more involved in

one testing format in comparison to the other.

Having discussed the results in terms of students’ general perspectives towards

the computer-based and the face-to-face final oral exams, this section will now focus on

the question of anxiety in a comparison between the two testing formats. This analysis

thus concentrates on the examination of the anxiety factors and feelings, presented on

the anxiety scale, that tend to be more involved in one type of oral exam in comparison

to the other.

As mentioned in the methodology chapter, after taking both types of final oral

achievement exams at the end of the quarter, examinees completed a 5-point Likert-like questionnaire, based on the Horwitz et al. (1986) FLCAS (Foreign Language Classroom Anxiety Scale). As previously explained, the anxiety scale contains the following set of response options: -2= Strongly disagree; -1= Disagree; 0= Neither agree nor disagree; 1= Agree; 2= Strongly agree. A total of 13 statements/anxiety factors were included in the scale, with each statement being presented individually in the context of the Face-to-face

and the computer-based oral exam.

Since previous results on this research topic were analyzed descriptively,

descriptive results will continue to be reported for these data.

As mentioned earlier in the methodology section, at this investigation point, data from 3 students had missing responses on some of the questions in the survey. These learners’ ratings on the questions they did answer were included in the analysis.

In order to determine the anxiety factors and feelings that are more involved in

one type of test in comparison to the other, all students’ reactions towards the

statements provided on the anxiety scale were examined. As explained above, each item

was presented in the context of the final oral exam taken in the lab (Computer) and the

traditional test conducted in the instructor’s office (F2F). Descriptive analyses were

performed by looking at the differences in students’ answers between the two test types.

These values were thus calculated, for each item, by subtracting the mean rating for the computer-based exam from the mean rating for the F2F oral exam. These differences were then sorted from

smallest to largest. The factors and feelings considered as tending to be more involved

in one test in comparison to the other were thus the ones displaying the largest

disparities between students’ responses towards the F2F and the computer-based test.

Figure 3 below displays the mean ratings of all learners’ responses to the items

included in the anxiety scale, towards the F2F and the computer-based oral exam

formats, sorted by the size of the difference between the ratings on the two modes of

oral testing. These items are coded with numbers 1 to 13 and students’ assessments

towards them can be observed in the numbers ranging from -2 (strongly disagree) to 2

(strongly agree). For a clear understanding of the anxiety factors and feelings entered in

Figure 3, a description of each item, and the corresponding number, is provided in Table

4 below.

Table 4. Items included in the anxiety scale

Number  Item
1   I never feel quite sure of myself when I’m speaking in the oral exam in the instructor’s office/ in the LAB
2   It frightens me to talk in front of the teacher/computer in the oral exam in the instructor’s office/ in the oral exam in the LAB
3   I start to panic when I have to speak without preparation in the oral exam in the instructor’s office/ in the oral exam in the LAB
4   I worry about the consequences of failing my oral exam in the instructor’s office/ in the LAB
5   In the oral exam in the instructor’s office/ in the oral exam in the LAB, I can get so nervous I forget things I know.
6   It embarrasses me to talk in front of the teacher/computer in my oral exam in the instructor’s office/ in the oral exam in the LAB
7   Even if I am well prepared for the oral exam in the instructor’s office/ in the LAB, I feel anxious about it.
8   I often feel like not going to my oral exam in the instructor’s office/ in the LAB
9   I can feel my heart pounding when I'm taking an oral exam in the instructor’s office/ in the LAB
10  I feel more tense and nervous in my oral exam in the instructor’s office/ in the LAB than in my written exams.
11  I get nervous and confused when I am speaking in the oral exam in the instructor’s office/ in the oral exam in the LAB
12  I get nervous when I don't understand every word in the prompts for the oral exam in the instructor’s office/ for the oral exam in the LAB
13  I get nervous when I have to perform oral situations for which I haven't prepared in advance, in the oral exam in the instructor’s office/ in the oral exam in the LAB

Figure 3. All students’ responses towards the computer-based and the face-to-face test on
the items included in the anxiety scale.

As illustrated in Figure 3, all the anxiety factors and feelings included in the

anxiety scale received higher average ratings when evaluated in the context of the face-

to-face final oral exam taken in the instructor’s office (F2F).

The group of items in the anxiety scale displaying the smallest differences in

students’ mean ratings between the two tests are number 12 (“I get nervous when I don't

understand every word in the prompts for the oral exam”), number 8 (“I often feel like

not going to my oral exam”), and number 1 (“I never feel quite sure of myself when I am

speaking in the oral exam”). Hence, with respect to the mentioned anxiety factors and

feelings, examinees’ opinion towards the final oral exam did not differ much depending

on whether it was conducted in the instructor’s office or through the computer.

Moreover, the mean ratings towards item 12 and item 1 tend to lean closer to the “no

opinion” category in the survey, for both types of oral exams. In the case of statement 8
(“I often feel like not going to my oral exam”), in the context of the F2F test, this is the

only item in the anxiety scale falling in between the “disagree” and the “no opinion”

classification in the anxiety questionnaire.

As pointed out before, items in Figure 3 are ordered according to the size of the difference in students’ responses, from smallest to largest. A slightly more noticeable

distinction between students’ answers on the two types of exams can be observed in

another group of items that are also fairly similar in terms of variation in learners’ mean

ratings concerning the two testing modes. These items include number 4 (“I worry about

the consequences of failing my oral exam”), 13 (“I get nervous when I have to perform

oral situations for which I haven't prepared in advance”) and 3 (“I start to panic when I

have to speak without preparation in the oral exam”). However, as Figure 3 shows, the

distinction in students’ opinion between one test and the other regarding these items is

still relatively small. An interesting aspect to take into account is the fact that item 13 is

the one displaying the highest mean ratings in students’ responses to both the F2F and

the Computer exam.

Considering the order presented in Figure 3, the difference in learners’ opinions between the two modes of oral testing becomes more evident when observing the

following items on the anxiety scale:

11 (“I get nervous and confused when I am speaking in the oral exam”)

9 (“I can feel my heart pounding when I'm taking the oral exam”)

5 (“In the oral exam (…) I can get so nervous I forget things I know”)

2 (“It frightens me to talk in front of the teacher/computer in the oral exam”)

7 (“Even if I am well prepared for the oral exam (…), I feel anxious about it”)

6 (“It embarrasses me to talk in front of the teacher/computer in my oral exam”)

10 (“I feel more tense and nervous in my oral exam (…) than in my written exams”)

The larger differences between students’ ratings on the two modes of oral testing

found in these items thus reveal a greater tendency for examinees to consider these

anxiety factors and feelings as more involved in the F2F test, in comparison to the

computer one.

Within this group of items, the statements included in the anxiety scale

equivalent to number 6 (“It embarrasses me to talk in front of the teacher/the computer

in my oral exam”) and 10 (“I feel more tense and nervous in my oral exam in the

instructor’s office/ in the LAB than in my written exams”) are the ones presenting the

largest differences in students’ reactions towards one exam and the other, containing

similar values in terms of variation in students’ opinions. These anxiety factors thus

seem to be considerably more involved in the F2F test, revealing a stronger tendency for

students to feel more embarrassed in the presence of the instructor than in front of the

computer, when taking the final oral exam. As we can see, students also show a

propensity to feel higher levels of tension and nervousness in the F2F oral exam than in

their written exams, in comparison to the computer-based test.

In addition, students’ responses towards all statements presented on the anxiety scale tend to fall between the “disagree” (-1) and “agree” (1) categories in the survey, which does not show a particularly strong discrepancy in terms of differences in learners’ mean ratings between the two types of exams, in general.

Overall, the present outcomes are in line with the ones presented in the former

investigation, where students’ perspectives towards both types of tests were analyzed

regarding aspects such as perceived overall anxiety, difficulty and fairness, as well as in

terms of their level of comfort with the technology used in the computer-based test. As

discussed above, students’ mean ratings concerning their overall anxiety levels were also higher in the context of the F2F test than in that of the computer-based test. This was the

case when observing all students’ responses both at the end of the course and when

examining learners’ reactions in the Experimental and the Control class separately,

before and after taking both tests.

3.2.2.1. Is there a considerable difference between the Control and the

Experimental class?

An analysis of whether there is a considerable difference between classes is also

relevant for the current investigation of the anxiety factors and feelings that tend to be

more involved in one testing format in comparison to the other. Once again, the fact that

students in the sample belong to two different classes, receiving a different treatment

throughout the course, might influence their affective reactions towards both methods

of oral testing.

Figure 4 below replicates the structure of Figure 3, except that in this case it

displays the difference between classes in terms of learners’ responses regarding both

final oral exams in general. As in Figure 3, items are sorted by the size of the difference

between students’ ratings towards the anxiety factors and feelings included in the

questionnaire. Higher mean ratings for the Experimental class are presented on the left

and, in the case of the Control class, can be observed on the right.



Figure 4. Differences between classes, regarding students’ ratings towards the items
presented on the anxiety scale.

As we can observe, classes did not differ much in terms of their predisposition to

experience the anxiety feelings included in the scale. As indicated in Figure 4, the

discrepancy between students’ ratings in each class tends to be considerably low, with

the mean ratings towards the factors presented on the anxiety questionnaire either

nearly overlapping (e.g., items 11 and 12) or presenting a considerably small

difference between ratings. Thus, results presented in Figure 4 suggest that the

treatment received by the Experimental group does not tend to have a particular

influence on the way students feel towards both types of oral testing, in comparison to

the Control group.

3.3. Students’ improvement in oral performance

The second main research topic in the current study involved whether or not there were any improvements in terms of oral performance when completing paired oral assessment communicative tasks via the computer, rather than face-to-face in class, throughout the course. As explained in detail in the Methodology chapter, a Control

class completed these oral activities in the classroom (the traditional method), while an

Experimental class completed the same oral tasks during class time, in a computer lab,

receiving computerized oral individual feedback from the instructor.

At this point in the analysis, data from 3 students belonging to the Control class and 1 student from the Experimental class were eliminated because these students did not complete either the pre- or the post-test. This investigation thus includes 10 students in the Control class

and 13 students in the Experimental class.

It is important to recall that 4 different raters (all Portuguese instructors at the

university where the study took place) evaluated students’ speech in the pre- and post-

test, using a 42-point rating scale organized around the assessment of “Grammatical

structures”, “Vocabulary”, “Communicative functions” and “Overall Communication”

(Appendix J).

In order to understand the extent to which the different evaluators arrived at the same interpretation of facts/results (Krippendorff 2011), inter-rater reliability (IRR) was tested using Krippendorff’s Alpha, which has advantages over other IRR statistics, such as robustness to missing data and to differing numbers of observers and sample sizes (Hayes & Krippendorff 2007). Alpha was 0.766, which can be considered to fall within the range of reliability suggested in Krippendorff (2004). As the author states, “Consider variables with reliabilities between α = .667 and α = .800 only for drawing tentative conclusions.” (p.241).
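As an illustration of how an interval-level Krippendorff’s Alpha of this kind can be obtained, the short Python sketch below uses the third-party krippendorff package; the rater-by-student score matrix is invented for illustration only, and missing ratings are encoded as NaN, which the statistic tolerates.

    # Minimal sketch of an interval-level Krippendorff's Alpha computation,
    # using the third-party `krippendorff` package (pip install krippendorff).
    # The 4 x 5 matrix (raters x students, 42-point scale) is invented data.
    import numpy as np
    import krippendorff

    scores = np.array([
        [31.0, 28.5, 36.0, 25.0, np.nan],   # rater 1 (NaN = missing rating)
        [30.0, 27.0, 35.5, 26.5, 33.0],     # rater 2
        [32.5, 29.0, 34.0, 24.0, 32.0],     # rater 3
        [31.5, 28.0, 36.5, 25.5, 34.0],     # rater 4
    ])

    alpha = krippendorff.alpha(reliability_data=scores, level_of_measurement="interval")
    print(f"Krippendorff's alpha = {alpha:.3f}")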

3.3.1. Does the Experimental class show greater improvement than the Control

class in certain linguistic areas and overall?

Research question 3 concerns whether or not there were any improvements in

terms of oral performance when taking the computerized paired oral communicative

activities throughout the instruction period (the treatment received by the Experimental

Class).

Considering the fact that one of the classes could already have a higher level of oral performance at the beginning of the instruction period, and that this could interfere with the interpretation of the results, as a first step in this analysis a t-test was conducted to compare the overall pre- and post-test scores of the two classes (on a 42-point rating scale). Table 5 below shows the pre- and post-test mean scores in both classes and the p-value for the comparison between those means.

Table 5. T-test results for Pre- and Post-test mean scores in both classes

        Control    Experimental    p-value
Pre     31.43      25.46           0.09
Post    36.55      29.33           0.12

* p < 0.05
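For readers who wish to replicate this kind of comparability check, the sketch below runs an independent-samples t-test with SciPy; the two lists of overall pre-test scores (out of 42) are placeholders standing in for the actual per-student data.

    # Minimal sketch of the pre-test comparability check between classes.
    from scipy import stats

    control_pre      = [34, 29, 36, 31, 27, 33, 30, 35, 28, 32]                # n = 10 (placeholder scores)
    experimental_pre = [26, 24, 29, 22, 30, 25, 27, 23, 28, 21, 26, 24, 26]    # n = 13 (placeholder scores)

    t_stat, p_value = stats.ttest_ind(control_pre, experimental_pre)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")   # p > 0.05 would indicate comparable groups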

When observing the pre-test mean values in both classes, it is possible to notice

that, on average, the Control class produced a higher overall score than the Experimental

class in the pre-test. However, it cannot be concluded that the Control class was significantly better than the Experimental one at the beginning of the course, as the difference was not statistically significant. Therefore, the two groups

were comparable for the purpose of this study.

Results for the two classes in terms of improvement in oral production were then

determined by calculating the difference between the pre- and post-test scores (totaled within each linguistic area and overall) for each student. A t-test was also performed in order to assess whether the gain scores in the two classes were statistically different from

each other. Table 6 below thus shows the t-test results for the differences between the

two groups in terms of improvement in oral performance, with respect to the linguistic

variables included in the 42-point pre-/post-test rating scale and total test scores.

Table 6. Comparison between the Control and the Experimental class in terms of improvement in oral performance

Linguistic Variables       Control class     Experimental class     p-value
                           improvement       improvement
Grammatical structures     1.80              1.19                   0.57
Vocabulary                 0.73              0.88                   0.76
Communicative Functions    2.18              1.31                   0.29
Overall Communication      0.43              0.50                   0.64
Total score                5.13              3.88                   0.58

* p < 0.05
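The gain-score comparison summarized in Table 6 can be sketched as follows. The data layout, file name and column names are assumptions made for illustration, with one pre- and one post-test score per student for each rated area.

    # Minimal sketch of the gain-score analysis behind Table 6 (assumed data layout).
    import pandas as pd
    from scipy import stats

    # Assumed columns: student, group ('control'/'experimental'), area, pre, post
    scores = pd.read_csv("pre_post_scores.csv")
    scores["gain"] = scores["post"] - scores["pre"]

    for area, subset in scores.groupby("area"):
        control = subset.loc[subset["group"] == "control", "gain"]
        experimental = subset.loc[subset["group"] == "experimental", "gain"]
        t_stat, p_value = stats.ttest_ind(control, experimental)
        print(f"{area}: control gain {control.mean():.2f}, "
              f"experimental gain {experimental.mean():.2f}, p = {p_value:.3f}")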

As indicated in Table 6, although, in a comparison between classes, the Experimental class displays higher oral improvement values in linguistic variables such as “Vocabulary” and “Overall communication”, and the Control class shows higher oral production gains in terms of “Grammatical structures” and “Communicative functions”, these results did not differ statistically (p>0.05). Furthermore, although the average overall gain (Total score) was 5.13 in the Control class and 3.88 in the Experimental class, this analysis did not find statistically significant differences between the two groups’ oral production gains in terms of overall test scores either. These results thus suggest that the treatment received by the Experimental class might not play a role in fostering students’ speaking abilities in the linguistic areas emphasized during the course. A reflection on some of the reasons that may account for these outcomes will be presented in the Discussions and Conclusion chapter of this study.

3.4. Summary of key findings on students’ perspectives and affective factors

3.4.1. Students’ overall reactions towards both types of oral exams

The main findings in this descriptive exploratory analysis of students’ overall

reactions towards both types of oral exams revealed examinees’ general tendency to

consider the computer-based final oral exam as less anxiety evoking and less difficult, in

comparison to the face-to-face test. We were also able to observe that, despite learners’

positive reactions towards the computer-based test in terms of levels of anxiety and

difficulty, students also tended to define this test as less fair in terms of testing their

speaking abilities. These aspects were observed when examining all students’ responses

together, after completing both types of oral exams at the end of the quarter, and when

comparing students’ reactions in each class, before and after the course (in a pre- and

post-test survey).

Results from the pre- and post-test surveys of students’ reactions across classes also indicate a tendency for students in both groups to

consider the two types of tests as less anxiety evoking and less difficult than expected,

with the Experimental group reporting slightly higher levels of anxiety and difficulty

towards both types of oral exams in general.

With respect to each class’s reactions towards both types of exams regarding “fairness in testing their speaking abilities”, there was a tendency in both classes to consider the computer-based test as more fair than expected. With regard to students’

attitudes towards the F2F test, the Control class also showed an increase in students’

mean ratings from their expectations to their final opinions, while students’ assessments

in the Experimental class were the same before and after taking the tests.

Regarding the examinees’ level of comfort with the technology used in the

computer-based test, students’ reports in the Experimental class reveal that, in contrast

to the Control class, learners felt more comfortable with the technology used in the final computer-based oral exam than they expected. At the end of the course, this class also reported slightly higher levels of comfort with the technology used in the computer-based oral exam. Furthermore, all examinees’ responses taken together at the end of the quarter show learners’ tendency to feel quite comfortable with the computerized oral

assessment tool used in the current study.

3.4.2. Anxiety factors

Results presented in Figure 3 essentially show that, although we can observe higher mean ratings towards the F2F oral exam for all the items included in the anxiety scale, the tendency for some anxiety factors and feelings to be more involved in the face-to-face method is more pronounced for certain items. These include aspects such as “getting nervous and confused when speaking in the oral exam” (item 11); “feeling the heart pounding when taking the oral exam” (item 9); “forgetting things as a result of being nervous” (item 5); “feeling frightened to talk in front of the teacher/computer” (item 2); and “feeling anxious about the exam even when well prepared” (item 7). Nevertheless, the most evident differences between students’ responses towards the F2F and the computer-based oral exams indicate that students tend to feel more embarrassed to speak in front of the instructor than in front of the computer (item 6) and to feel more tense and nervous in the F2F oral exam than in their written exams, in comparison to the computer-based oral testing method (item 10).

When comparing the Experimental and Control class concerning students’

responses towards the anxiety factors and feelings presented in the questionnaire, we

were able to observe very similar trends between these two groups.

3.5. Summary of key findings on students’ improvement in oral performance

With respect to the second main research topic in this study, concerned with the analysis of students’ improvement in oral performance when taking computerized oral activities throughout the quarter, the main findings showed that, although the Experimental class displayed higher oral improvement values in terms of “Vocabulary” and “Overall communication”, the results were not statistically significant.

As a first step in this analysis, an examination of the difference between the two classes’ pre-test scores was performed, in order to investigate whether the Control class showed superior speaking skills from the beginning, since this could imply an initial propensity for this class to present higher gains in terms of oral improvement. However, although this class did show a higher score in the oral pre-test taken at the beginning of the course, the difference was not statistically significant.

3.6. Summary

The present chapter provided descriptive exploratory quantitative results

regarding two central research topics concerning students’ affective reactions towards

the two types of final oral exams, with a focus on anxiety, and regarding students’

improvement in oral performance when taking computerized oral assessment activities

throughout the course. Taken together, results obtained in the descriptive analysis of

students’ perspectives and affective factors revealed examinees’ tendency to report

more positive reactions toward the computer-based final oral exam in general, except in

terms of fairness in testing learners’ speaking abilities. These trends were observable in

both classes, in a pre- and post-test survey taken before and after the course.

Furthermore, no statistically significant difference was found when examining whether the Experimental class, which took computerized oral activities throughout the instruction period, showed greater improvement in oral performance than the Control class. The following chapter offers an in-depth discussion and provides

concluding remarks with regard to the results presented in this chapter.

CHAPTER 4: DISCUSSIONS AND CONCLUSION

4.1. Introduction

In line with previous studies investigating students’ reactions towards new

computer-based oral assessment methods (Kenyon & Malabonga 2001; Jeong 2003; Joo

2007; Öztekin 2011; Tognozzi & Truong 2009), one of the main purposes of this study

was to examine students’ perceptions and affective factors, with a focus on anxiety,

towards a new computerized oral achievement testing system in a Portuguese Language

Program, created with the intention of replacing the existing face-to-face oral exam taken in the instructor’s office at the end of the quarter. The second main purpose of the current study was to examine the implementation of computer-based paired oral assessment activities completed throughout the instruction period, as a means to foster

students’ improvement in terms of oral performance.

This final chapter provides a discussion of the key findings on the two main

research topics in the current investigation. Aspects related to this study’s limitations,

implications for the field of second language learning and teaching, and suggestions for

future research will also be presented, followed by a conclusions section.

4.2. Discussion of students’ perceptions and affective factors

As seen in Chapter 3, the analyses and results concerned with students’

perceptions and affective factors towards the computer-based and the face-to-face oral

exams are divided into two sections. Hence, I will first consider the results from

students’ general perceptions towards both types of tests in terms of anxiety, difficulty

and fairness in testing their speaking abilities, as well as their level of comfort with the

technology used in the new computer-based oral exam. Next, I will discuss the results

with regard to the anxiety factors and feelings, included in the anxiety scale, that are

more involved in one type of test in comparison to the other. As mentioned in the

analysis and results section, samples of students’ qualitative responses will be

considered in this chapter, in order to further interpret the obtained quantitative results.

4.2.1. Students’ general perceptions towards the computer-based test vis-à-vis the face-

to-face oral testing

As previously explained, the analysis concerning students’ general perspectives

towards the exams takes into account the fact that participants in this study belong to

two different classes, receiving a different treatment throughout the instruction period.

This different treatment mainly consisted in the fact that the Experimental class took

computerized oral activities throughout the quarter, in a computer lab, during lesson

time, and the Control class took the same activities in the classroom. It is also taken into

account that students responded to this survey before and after the course, i.e. before

and after taking both types of oral exams at the end of the quarter. Hence, this analysis

started with an investigation of whether there were any differences in students’

responses depending on each class and on the time the survey was taken.

For the second part of the analysis on students’ general perceptions, this

investigation focused on learners’ reactions only at the end of the quarter and without

differentiating between classes.

As observed in the analysis and results chapter, a strong general trend that

resulted from students’ responses to this survey was a tendency to report lower mean

ratings towards the computer-based oral exam, in terms of anxiety, difficulty and

fairness. For comparison of the current research results with those reported in previous

studies, this chapter will focus on studies including examinees’ attitudes towards

computer-based and face-to-face oral exams, since this aspect constitutes the focus of

research topic 1.

4.2.1.1. Overall anxiety levels

With respect to students’ levels of anxiety towards both types of tests, results

presented in the previous chapter displayed learners’ tendency to report lower ratings

towards the computer-based test. These results were obtained when examining

students’ responses in each class, before and after taking both types of oral exams, and

when analyzing all students’ responses together at the end of the quarter. Furthermore,

mean ratings towards overall anxiety levels on both oral exams are also lower after

taking the tests, indicating students’ propensity to feel less anxious than expected in both

oral exams. This seems logical since students’ perceptions of their levels of anxiety

towards the oral exams should decrease after having completed them. In addition, in

comparison to the Control class, the Experimental class tends to report slightly higher

levels of anxiety, before and after taking the oral exams. This might suggest that learners

in the Experimental group tend to be somewhat more anxious in general, since they

show slightly higher levels of overall anxiety from the beginning, before taking the tests.

When providing a justification for feeling less anxious in the final computer-based test in comparison to the face-to-face one, the majority of students’ answers to the open-ended questions in the pre- and post-test surveys indicated aspects related to talking

in front of the instructor in the face-to-face context. It is relevant to recall that, at the

beginning of the course, students were already familiar with the F2F format presented

in this study, since they had taken the traditional final oral exam in the previous quarter.

This means that examinees already knew what to expect from this type of oral test. Some

examples of common responses reporting lower levels of anxiety towards the computer-

based test thus correspond to the following:

[In the computer-based test] You don’t have someone stare at you the entire

time

Performing in front of the instructor is more nerve-wrecking

[In the computer-based test] You do not get to witness the person evaluating

you

An interesting aspect is that, in students’ responses to aspects they liked the least

about the face-to-face format, students also refer to aspects related to the presence of

the instructor (e.g., “being watched”; “having the grader right next to you”), or simply

mention the fact that they feel more anxious/ nervous in this context (e.g., “the nerves”;

“it’s more stressful”; “the anxiety that came with being evaluated in person”). Responses

to aspects that students liked the most about the computerized test include comparable

opinions (e.g., “no one is observing me”; “no one watching”; “less pressure”; “less

nervousness”).

Examinees’ responses regarding overall anxiety levels towards the two modes of

oral testing in the current study present both similarities to and differences from the reviewed literature analyzing students’ affective reactions towards computer-based/technology-

mediated and face-to-face oral tests.

Anxiety reports in this study differ from previous studies, where test takers

indicated relatively higher levels of anxiety/nervousness in the new technology-

mediated oral exams in comparison to the face-to-face formats (Stansfield et al. 1990;

Lowe & Yu 2009; Öztekin 2011). The authors referred to aspects such as the fact

that students saw the technology-mediated test as more “unfamiliar” and “unnatural”

(Stansfield et al. 1990), “worried about making mistakes”, mostly for being recorded,

and also due to “the lack of reaction from the computer” and “the fixed time given for a

response” (Lowe & Yu 2009), for example. Öztekin (2011) also comments on the fact

that almost half of the students appeared to be afraid of making mistakes in the

computer-assisted speaking assessment and mentions that “This may have resulted

from interlocutor interference in the FTFsa, given that interlocutors typically try to

relive test takers during interviews” (p.76).

The differences between the results observed in the current study and in these past research works, regarding students’ levels of anxiety, might be related to the different structure of the oral exam used in the present study. In the case of the face-to-face oral test used here, the role of the examiner is more that of the “grader”, a person in front of whom students are speaking, rather than that of the interlocutor/interviewer, since the test is conducted in pairs and the instructor is not supposed to interfere in the oral situation performed by the students. This aspect might contribute to students’ higher levels of anxiety towards the face-to-face oral achievement test, in comparison to the computer-based one, since, by not interacting with the teacher during the role-play situation, examinees might become more aware of the fact that they are being evaluated. As students’ responses to the open-ended questions in the survey show, examinees tend to justify their higher levels of anxiety in the traditional test with the fact that they were witnessing the instructor evaluating them or that someone was “staring at them the entire time”.

Nevertheless, with respect to the anxiety variable, results in the current study are

also comparable to the ones obtained in Early & Swanson (2008) and Joo (2007), where

students also presented higher levels of nervousness in a face-to-face oral testing format,

in comparison to a computer-based test. As mentioned in the literature review, Early &

Swanson (2008) compared a traditional in-class oral language assessment (OLA) to a

digital voice-recorded OLA and mention students’ reports on “higher levels of affective

filter due to peer presence” (p.44) in the in-class oral assessment. In the case of Joo

(2007), the author states that “Talking directly to a teacher seems to have made the

students more nervous in the FTFI” (p.180). However, the author comments on the fact

that students’ higher levels of nervousness in the face-to-face test, “may partially be

explicated by the prominent Korean cultural tendency to save face by not making

mistakes or by avoiding a situation where mistakes might be made in public” (p.183).

Kenyon & Malabonga (2001), for instance, predicted that students would feel less nervous in the face-to-face OPI in comparison to the two technology-mediated tests (the COPI and the SOPI) because of the “human interaction effect and its emphasis on the interviewer’s putting the examinee at ease” (p.66). The authors, however, reported that there was not a statistically significant difference in students’ ratings between the two technology-mediated tests and the OPI. Nevertheless, Kenyon and Malabonga’s (2001) prediction that students would feel less nervous towards the face-to-face OPI reinforces the idea that the role of the examiner seems to have an impact on examinees’ feelings. In the case of the oral exam format presented in the current study, learners seem to see the instructor more as the person evaluating them and watching them while they speak, rather than someone “putting the examinee at ease” (Kenyon & Malabonga 2001; Öztekin 2011). Thus, the fact that the evaluator has a less interactional role might have an impact on students’ higher levels of anxiety towards this testing format.

4.2.1.2. Perceived difficulty

With respect to students’ levels of perceived difficulty, the descriptive analysis

conducted in this study revealed a general tendency for students to consider the

computer-based test less difficult than the face-to-face one, since the learners’ mean

ratings in the computerized test were lower. This aspect was observed when all

students’ responses were analyzed together at the end of the quarter and when

differences between classes, before and after taking the tests, were investigated.

Students in both groups also tended to consider these tests less difficult than expected.

Moreover, similarly to the results yielded for the anxiety variable, learners in the

Experimental class reported slightly higher levels of perceived difficulty towards the

tests, in general. This aspect is not very surprising since the variables corresponding to

perceived anxiety and difficulty were found to be highly correlated. As previously

mentioned, the fact that learners in the Experimental class tend to show slightly higher

levels of overall anxiety, in comparison to the Control class, might suggest a tendency for

this group to be more anxious in general and, therefore, a tendency to perceive both oral

exams as more difficult (from their expectations to their final opinions) as well.

A correlation between students’ mean ratings towards anxiety and difficulty is

also visible in students’ open-ended responses to the general perspectives survey, since

the most common reasons provided for their levels of difficulty towards the tests are

similar to the ones provided for their levels of anxiety. Joo (2007) also noted that “The

more nervous students felt, the more difficult they felt the test was” (p. 181).

When observing students’ responses to the open-ended questions in the pre- and post-test surveys, some students expected and considered both modes of oral testing to

be equally difficult, due to similarities between the tasks in the two testing formats and

the correspondence of the aspects being graded (e.g., “Still the same test, just different

conditions”; “Requires same levels of studying”). Nevertheless, when mentioning features inherent to the computer-based test that influenced their lower difficulty ratings for this oral exam format, the majority of the examinees’ answers were, once again, related to not being in the presence of the instructor.

Samples of students’ responses regarding the computer-based test include:

you are less anxious. No one is looking at you.

it’s only you and your partner.

There isn’t that pressure of someone evaluating in front you.

Other reasons were related to having some control over the test by being able to pause it (e.g., “you can pause it and think, then talk”). These comments provide supporting evidence for results reported in Kenyon and Malabonga (2001), where students also considered aspects such as “having control of when to start and end speaking” (p.81) an advantage of the computerized test.

The possibility of being asked to speak for a longer period of time in the face-to-

face test is also among the most common reasons provided in students’ answers to open-

ended questions explaining lower levels of difficulty in the computer-based test. Written

responses include comments such as:

If I am asked to speak for a greater period of time I will most likely have greater
difficulties

… your teacher might want you to talk more

Professor can ask questions

Through this kind of assertion, we can understand that students tend to relate the possibility of being asked more questions in the oral exam to the level of test difficulty. It appears that, from the learners’ point of view, the more they have to talk, the more difficult the test seems to be. Here it is relevant to recall that the type of face-to-face oral exam format presented in the current study is not based on an interview/interaction between the teacher and the examinees. However, the fact that learners also justify their higher levels of perceived difficulty towards the face-to-face test with the fact that their teacher might want them to talk more suggests that, even though the instructor does not have an interactive role in the oral communicative situation performed among learners, there might always be some kind of examiner interference that is difficult to control, such as requesting that a student talk for a longer period of time. Öztekin (2011), for example, also comments on the fact that

interlocutors in the face-to-face speaking assessment “might have interfered more than

needed and helped some students answer some questions, which would have decreased

the reliability of the test.” (p. 78).

Results on students’ general perspectives towards both exams, revealing a

tendency to consider the computer-based test less difficult than the face-to-face one, also contrast with those reported in Öztekin (2011) and Stansfield et al. (1990), where the

majority of the participants considered the computerized/technology-mediated oral

test to be more difficult than the face-to-face oral assessment. Öztekin (2011) mentions

that “test takers might feel that they have to put more effort into answering a question

in the computerized mode” (p.119) and also comments on students’ reports of difficulty

in terms of organizing their ideas in the computerized test, stating that “finding the

FTFsa environment more relaxing, the existence of an interlocutor being more helpful

or getting anxious in an unfamiliar context (the CASA) can be considered prominent

reasons for this” (p.119). In the case of Stansfield et al. (1990), the authors mention reasons such as “discomfort in talking to a machine” (p.647) for students to consider the tape-

based test more difficult than the face-to-face OPI. Once again, it is interesting to note

that, while in these previous studies, the presence of the examiner could be related to a

reduction of students’ anxiety and difficulty, this aspect is actually seen by the majority

of students in the current study as a source of higher levels of anxiety and perceived difficulty towards the face-to-face oral exam. Again, this aspect could possibly be explained by the fact that the oral test structure found in this study differs from the

ones presented in the reviewed literature, in the sense that, as previously mentioned,

both types of oral exams are taken in pairs, either in front of the computer or in front of

the instructor, in his/her office. The interaction is thus mainly among learners and not

with the teacher, in the face-to-face context. As observed above, in their justifications for

their lower levels of anxiety towards the computer-based test, learners also refer to

aspects such as not having someone evaluating in front of them or not having someone

looking at them, when taking the oral exam in the Lab. Thus, the fact that, in the case of

the face-to-face test, the instructor does not participate as an interlocutor also seems to

constitute one of the reasons why learners consider this oral exam format to be more

difficult than the computer-based one.

4.2.1.3. Perceived fairness

With respect to the variable of fairness, an analysis across classes before and after completing the tests, and an investigation of all students’ reactions at the end of the quarter, showed that, on average, examinees considered the face-to-face oral exam to be more fair in testing their speaking abilities than the computer-based test. Nevertheless, here it is relevant to recall that, in general, learners also displayed a tendency to consider the computer-based test quite fair in testing their speaking abilities. As previously noted, the highest mean ratings in examinees’ responses to the questionnaire appeared when evaluating both tests in terms of fairness, both before and after taking the tests.

Examples of common justifications presented by students who considered the face-

to-face test to be more fair than the computer-based oral exam correspond to the

following:

I felt the face-to-face exam was more fair because if we needed help with a certain

word or phrase the professor would be there to help move the conversation along.

In person it felt more forgiving. Talking to the instructor face-to-face allowed me to

correct myself and explain that my mistakes were from nerves rather than ability

level.

It is interesting to note that, while the presence of the teacher seems to be the

most common motive for students to consider the face-to-face oral exam more difficult

and anxiety evoking than the computer-based, this aspect also seems to constitute an

important justification for learners’ higher ratings towards the face-to-face test in terms

of fairness in testing their speaking abilities. As responses to the open-ended questions on the fairness variable show, examinees also tend to consider the instructor’s presence advantageous at different levels, such as providing support in terms of “moving the conversation along” or giving students the opportunity to explain the nature of their mistakes in person. Here, it is also pertinent to mention that, in students’ responses

regarding the aspects they liked the most about the face-to-face test, learners frequently

revealed their appreciation for the instructor’s feedback at the end of the test (e.g., “I got

instant feedback”; “The instructor gave us feedback right away”). Hence, from the learners’

point of view, this aspect constitutes another common advantage related to being in the

presence of the instructor.

The possibility of technological problems or difficulties occurring in the computer-based oral exam conducted in this study also constitutes a common reason for

students to consider the face-to-face test to be more fair. Samples of students’ responses

to open-ended questions regarding this matter correspond to:

One bad recording sends a worse impression than messing up a bit and recovering

in person.

The audio may have not been recorded well.

The recording might not be clear. In person the professor can hear directly.

With respect to this type of concern related to the use of technology in the oral exam, it is worth mentioning that one of the challenges in conducting this research project was the fact that, in one of the labs assigned for the administration of the final computerized oral exam and oral assessment activities, there were some difficulties in terms of the connection between certain computers and the microphones used in the test. This required technical assistance and might have undermined students’ confidence in the quality of their recordings. It is interesting to

note that comments related to technical difficulties can also be observed in students’

answers to the open-ended questions in the pre-survey on students’ general reactions

towards the tests (completed before taking both final oral exams). This aspect thus

reveals learners’ anticipation of technological problems on the computer-based final

oral exam. In fact, learners also commonly refer to this matter in their comments

regarding the aspects they liked the least about the computer-based test, in answers

such as “setting up with the headphones” or “I had technical difficulties that I feared

would affect the outcome scores”, for instance. Öztekin (2011) also discusses a number

of students’ complaints regarding technical problems occurring throughout the

administration of the computer-based oral exams, referring to these issues as possible

reasons for the students’ relatively low ratings towards these tests. In the case of the

current study, the interference of technical problems on students’ affective reactions

concerning the computer-based oral exam also seems to constitute a common

justification for less positive attitudes towards this type of test, especially in terms of

fairness in testing their speaking abilities.

Through students’ comments regarding technical problems, we can see how aspects related to the instructor’s presence seem to influence learners’ tendency to consider the face-to-face oral exam more fair than the computer-based one. These comments suggest a tendency for examinees to believe that the instructor is able to capture features of oral speech that might not be possible to capture via the computer. Thus, students also display a tendency to feel that the instructor’s presence assures that their oral production is well heard and understood. As Tognozzi & Truong (2009) claim, “Technology, though more sophisticated than ever, is still rife with problems” (p.10). Öztekin (2011), for example, suggests that “Probably conducting the

speaking test with better technical equipment would yield better results in favor of the

CASA” (p. 120) and that “it should also be ensured that the technical equipment such as

headphones, microphones, computers and the internet, are working properly” (p. 129).

When considering the implementation of the computer-based test used in this research,

this aspect should be taken into account. In the context of the current study, it seems

that improved technical equipment, or increased training with the existing

equipment, would also contribute to students’ better assessment of this test, particularly

in terms of fairness in testing their speaking abilities.

Results on students’ perspectives towards both oral exams in terms of fairness in testing their speaking abilities are in line with the ones presented in Kenyon & Malabonga (2001) and Stansfield et al. (1990), where students’ reactions in terms of fairness regarding the different modes of oral testing also tended to be more favorable towards the face-to-face OPI. However, Stansfield et al. (1990) point out that “in any case,

only a very low percentage of students felt there were ‘unfair’ questions on the taped

test” (p. 647).

Tognozzi & Truong’s (2009) observation that “a fair percentage of students did not feel that using WIMBA was a fair way to test their oral skills” (p.10) is also relevant to this discussion, prompting reflection on a potentially more general tendency for students to consider a computerized oral assessment tool less fair than a face-to-face one.

Nevertheless, it is important to note that, in the current study, examinees’ ratings

towards the computer-based test in terms of fairness were also quite high overall.

Students’ answers to open-ended questions in terms of the oral exam’s fairness

in testing their speaking abilities seem to confirm the general qualitative results

presented in Öztekin (2011). The author comments on the fact that students’ answers

to open-ended questions also refer to an overall tendency for students to believe the

face-to-face test would better reflect their oral proficiency. Moreover, the author

explains that students valued having someone listening to them in the face-to-face oral

achievement test. As the author states “the existence of someone listening to the test

takers and the attitudes of the interlocutors were among the most noticeable points the

test takers liked about the FTFsa” (p.78), reporting that the interlocutors were

“understanding, friendly, smiling, motivating and relaxing” (p.78).

Results regarding students’ levels of perceived fairness towards the tests also

differ from previous research, such as the studies by Joo (2007) and Lowe & Yu

(2009), where the majority of students thought the computerized oral exam was fairer.

In the case of Lowe & Yu (2009), the computer-based test is considered to be more fair, particularly in terms of grading, but the face-to-face test was actually seen as a better way of testing learners’ speaking skills. Nevertheless, the authors of both of these studies explain that, in general terms, the majority of students tend to prefer the face-to-face test,

mentioning that “it better approximates real-life conversation, interacting with a person,

and that interacting made them somewhat more comfortable” (Joo 2007: 183), or that

they “enjoyed interacting with a ‘live’ examiner” (Lowe & Yu 2009: 35). This

appreciation for a live examiner can also be observed in Jeong (2003). The author

reported considerably positive reactions towards the computerized test but also explains that the majority of students taking a traditional face-to-face interview and a multimedia-enhanced English Oral Proficiency Interview (referred to as d-VOCI) indicated a general preference for the face-to-face test, noting that “the reason most

commonly sited was the interaction between the interviewers and the interviewees” (p.

74). The researcher also adds that “students seem to prefer human over technology-

mediated interaction” (p.74).

Outcomes from students’ qualitative responses in the current study and those in

previous research reveal that, at some point in the examinees’ perceptions

questionnaire, students tend to value the presence of the evaluator in face-to-face oral

exams. However, with respect to this tendency, there are some aspects that seem to

differentiate the present investigation from previous studies. Our participants generally tend to appreciate the instructor’s presence, mainly when evaluating the test in terms of fairness in testing their speaking abilities, since the examiner can make them feel that they have the opportunity to explain their mistakes, help “move the conversation along”, or ensure that their speech is well heard (in the event of technical failures or

difficulties). In previous research it seems that a tendency to value the presence of the

examiner in the face-to-face test involves aspects related to the human interaction factor

with the teacher/interviewer in general. In the current study, one of the most important

reasons to consider the instructor’s presence as a critical aspect in the face-to-face oral

exam’s fairness has to do with the fact that learners are in front of the person who is

assessing their oral production, which could be helpful at different levels, not the fact

that there is interaction. As explained before, both types of oral exams used in this study

are taken in pairs and the communicative interaction is mainly among classmates. Thus,

the aspect of live human interaction is also present in the computer-based test, which

could constitute one of the reasons why learners tend not to focus as much on the

interactional feature of the face-to-face oral test.

For the interpretation of the presented results, in a comparison between the

current and previous studies, it is relevant to note that in Jeong (2003) the face-to-face

interview is also taken in pairs. However, in the author’s study, the instructors also seem

to have the role of interviewers in the face-to-face test, since they are able to “tailor the

interview based upon students’ responses” (p. 74). In the case of the current research,

the instructor’s role in the face-to-face format is more “passive”, since his/her role is not

to interview the examinees or to engage in a communicative interaction with learners,

while they are performing the oral testing situation.

4.2.1.4. Comfort with the technology used

Despite some technical problems encountered in the administration of the

computer-based test and oral assessment activities, which were due to issues regarding

the connection between computers and microphones, students were already familiar

with the learning platform used, since they had worked with it for the completion of

several homework assignments. This might have had an influence on their reports of a

fairly high level of comfort with the technology used in the computer-based oral exam.

As previously mentioned, when students’ responses were taken together at the end of

the quarter, average ratings fell between the scale points of 2 (medium) and 3 (high). These results are in line with previous studies reporting on students’ low level of

difficulty dealing with the technological systems used in the computerized tests (Jeong

2003). Results also showed that, in contrast to the Control group, students in the

Experimental class felt more comfortable than expected. These results might be due to

the fact that the technology used in the computer-based final oral exam was the same as

the one used in the computerized oral activities taken by the Experimental group,

throughout the course.

4.2.2. Examination of anxiety factors and feelings that tend to be more involved in one

testing format in comparison to the other

Another point of analysis in the examination of students’ reactions and affective

factors towards both types of oral exams, was dedicated to the investigation of the

anxiety factors and feelings, presented on the anxiety scale, that were more involved in

one type of oral testing format in comparison to the other (research question 2).

A descriptive analysis of students’ responses towards the anxiety factors and

feelings presented on the anxiety scale revealed a general tendency for students to

attribute higher mean ratings towards all the items in the context of the face-to-face oral

exam. These outcomes support the previously presented results regarding students’

opinions towards both modes of oral testing in terms of general anxiety levels, where

examinees also tended to consider the face-to-face test as more anxiety evoking than the

computer-based. These overall results differ from previous studies also including a more

in-depth analysis of students’ anxiety in the context of a face-to-face and a computer-

based oral exam, based on an anxiety rating scale (Öztekin 2011; Sayin 2015). Öztekin

(2011), for instance, reports on students’ higher anxiety levels towards the computer-

assisted speaking assessment (CASA), in comparison to the face-to-face test, in their

responses to “test-mode related anxiety subscales”. In the case of Sayin (2015), the

author explains that students tend to show anxiety in both modes of oral testing and that

the difference between them is not very clear.

A closer examination of students’ average ratings towards the statements

presented in the current study’s anxiety scale allowed us to observe more evident

discrepancies in students’ assessments between the two oral exam formats in some of

the items included in the questionnaire. The group of anxiety factors and feelings that

were thus considered as tending to be more involved in the face-to-face test, in

comparison to the computer-based, corresponds to the following (listed here by their

item number):

11 (“I get nervous and confused when I am speaking in the oral exam”)

9 (“I can feel my heart pounding when I’m taking the oral exam”)

5 (“In the oral exam (…) I can get so nervous I forget things I know”)

2 (“It frightens me to talk in front of the teacher/computer in the oral exam”)

7 (“Even if I’m well prepared for the oral exam (…), I feel anxious about it”)

6 (“It embarrasses me to talk in front of the teacher/computer in my oral exam”)

10 (“I feel more tense and nervous in my oral exam (…) than in my written exams”)

These items were sorted by the size of the difference in students’ reactions

between one test and the other. Items 6 and 10 showed differences of a similar size and were the

ones displaying the highest differences in students’ opinions between the two types of

oral exams.

The majority of these statements indicate whether students are experiencing

feelings of anxiety in general, while taking both modes of oral testing. Items 11, 9, 5, and

7, for instance, do not mention any particular source of anxiety. Most of these

statements refer instead to aspects that result from being anxious. Thus, higher mean

ratings towards these items, in the context of the face-to-face oral exam, indicate a

greater propensity for students to experience anxiety-related sensations such as

“getting confused”, “feeling their heart pounding” or “forgetting things they know”

whenever they are taking the oral exam in the instructor’s office. An overall tendency

for students to experience similar feelings is also presented in Sayin (2015), where

learners tend to agree with “feeling their heart beating very fast during oral exams”, in a

general reference to speaking tests, or “forgetting things they know because of stress”,

in the context of “talking face-to-face with the instructor”. Nevertheless, in the current

study, all items included in the rating scale were presented separately in the context of

each oral exam format. This aspect allowed us to understand, as mentioned above, that

these specific feelings tended to be more involved in the face-to-face test, in comparison

to the computer-based.

Furthermore, as observed in examinees’ ratings to item 7, learners tend to admit

feeling more anxious in the context of the oral exam taken in the instructor’s office “even

when well prepared”. These results are also in line with students’ reports in Sayin

(2015), where the majority of examinees agreed with a similar statement presented in

the anxiety scale, although referring to oral exams in general. At this point, it is also

interesting to note that, in the process of analyzing students’ reactions towards each

statement included in the anxiety scale, we were able to observe learners’ tendency to

consider “lack of preparation” as a particular relevant factor of nervousness. As

mentioned in the previous chapter, item 13 (“I get nervous when I have to perform oral

situations for which I haven’t prepared in advance”) received the highest mean ratings

in both oral exam contexts. Thus, students’ ratings towards item 7 (“Even if I’m well

prepared for the oral exam (…), I feel anxious about it”) indicate learners’ tendency to

feel more anxious in the face-to-face context even when a particular cause of oral exam

nervousness (lack of preparation) is eliminated.

With respect to items 2 and 6, we can observe that their content is directly related

to aspects causing anxiety in learners. These emotions, corresponding to “feeling

frightened to talk in front of the teacher/ computer” (item 2) and “feeling embarrassed

to talk in front of the teacher/computer” (item 6), also tend to be more involved in the

face-to-face test. As mentioned before, statement 6 was one of the items presenting the

largest differences in students’ reactions towards the face-to-face and the computer-

based test. Thus, with respect to learners’ ratings towards this statement, results

indicate a tendency for examinees to feel more embarrassed in front of the teacher than

in front of the computer. Moreover, in comparison to their reactions towards the

majority of the other items presented on the anxiety scale, learners tend to show a

stronger propensity to experience this emotion of feeling embarrassed to talk in front of

the instructor.

Results in this investigation also confirm the outcomes presented in the analysis of the survey on students’ general perceptions towards both types of oral exams, where students’ open-ended explanations for their higher ratings towards the face-to-face test, in

terms of overall anxiety levels, refer to the presence of the teacher as a common source

of nervousness.

Furthermore, as mentioned above, students’ average ratings towards item 10 (“I

feel more tense and nervous in my oral exam (…) than in my written exams”) also

revealed one of the most evident differences between students’ reactions towards one

exam and the other. Learners’ reports of a tendency to feel higher levels of tension and

nervousness in the face-to-face oral exam than in their written exams, in comparison to

the computer-based test, are comparable to previous findings in studies investigating

students’ perceptions towards language exams (Scott 1986; Zeidner & Bensoussan

1998). Although these studies do not compare face-to-face with technology-mediated

tests, they indicate students’ overall preference for written over speaking exams, in

general. The current study thus adds to these results, suggesting that this preference for

written over speaking tests might be more pronounced in the context of the face-to-face vis-à-vis the computer-based oral exam format included in the current

investigation.

In the analysis of students’ reactions towards the statements included in the anxiety scale, it was also interesting to note that item 8 (“I often feel like not going to my oral exam”) was the only statement falling between the “disagree” and the “no opinion” classifications in the questionnaire, in the face-to-face context. The fact that students presented similar responses towards this item in both contexts suggests that, regardless of the possible differences in students’ affective factors towards each exam, learners tend to show a similar predisposition in terms of their willingness to take either type of test.

This analysis also included an investigation of whether there was a considerable

difference between classes in the examination of students’ ratings towards the anxiety

factors and feelings presented in the anxiety scale. Results showed that the opinions of the two groups were fairly close. The fact that each group received a different treatment throughout the quarter did not seem to play a particular role in the way students felt towards the tests. This is also in line with the results presented in

students’ overall reactions towards the oral exams, where, although learners in the

Experimental class tended to show slightly higher levels of anxiety and difficulty

towards both tests, students’ reactions in both classes displayed very similar trends. It

is relevant to recall that the different treatment received by the two classes mainly

consisted in the fact that the Experimental class took computerized oral assessment

activities throughout the quarter, during class time in a computer lab, receiving computerized oral feedback from

the instructor, while the Control class took the same activities in class but not via the

computer. Thus, one could expect that aspects such as the fact that the Experimental

class was more familiar with the technology-mediated oral assessment tool used in the

final computer-based oral exam, for instance, could trigger larger differences in learners’

affective reactions towards the final oral exams. A probable explanation for the similar

responses between the two classes could be the fact that the learning platform used in

the computer-based oral exam corresponds to the one used in the course for the

completion of written homework assignments and also for the completion of the oral

pre- and post-test that both classes took at the beginning and at the end of the quarter.

Hence, although the Experimental class was more familiar with the technology used in

the computer-based test, the fact that the Control group had also been exposed to this

kind of technology might have helped to attenuate a more pronounced difference

between classes.

Additionally, results of students’ affective reactions presented in the current

study, along with those reported in the literature review, allow us to reflect on the fact that other factors could have a larger influence on the way students feel towards the tests. Such factors could be related to the instructor’s behavior/role throughout the

exam (e.g., a more passive or active interaction with the examinees) or the oral testing

format (e.g., completed individually or in pairs).

4.3. Discussion of students’ improvement in oral performance

The second main topic in the current study corresponded to the examination of

whether or not there were any improvements in students’ oral performance when

taking computerized oral assessment activities throughout the course.

This aspect was investigated by examining whether the Experimental class

showed greater improvements than the Control class, with respect to certain linguistic

areas and overall. As explained in detail in the methodology chapter, the linguistic areas

included in the pre-/post-test rating scale corresponded to vocabulary items,

grammatical structures and grammatical functions (aspects formally taught and

emphasized during the course), as well as to aspects of students’ overall communication

(Liskin-Gasparro 1983), such as being creative, resourceful or easily understood. As a

first step in this analysis, the researcher examined the differences between the two

classes in terms of the overall scores obtained in the pre- and post-test, in order to

investigate whether one class showed superior oral skills to the other from the

beginning. Although the Control class showed a tendency towards higher values in the

pre-test, the difference between the two groups was not statistically significant.
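To illustrate this first step concretely, the following is a minimal sketch of how a between-group comparison of pre-test scores and of pre-/post-test gains could be carried out in R (the statistical environment listed in the references). The data frame, variable names, and values below are purely hypothetical and are not the data or the exact procedures used in this study.

    # Hypothetical example data: one row per student, with class membership and
    # overall pre- and post-test scores on the oral rating scale.
    set.seed(1)
    scores <- data.frame(
      class = factor(rep(c("Control", "Experimental"), each = 10)),
      pre   = round(rnorm(20, mean = 15, sd = 2)),
      post  = round(rnorm(20, mean = 17, sd = 2))
    )
    scores$gain <- scores$post - scores$pre

    # Were the two classes comparable before the treatment?
    t.test(pre ~ class, data = scores)

    # Did one class improve more than the other?
    t.test(gain ~ class, data = scores)

With small samples such as the ones in this study, a non-parametric alternative such as wilcox.test() could be substituted for t.test() in both comparisons.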

Results for students’ improvement in oral performance showed that, although

the Experimental class showed higher oral improvement values in terms of “Vocabulary”

and “Overall communication”, the results were not statistically significant. Given the fact

that the treatment for the Experimental class was designed within the specific constraints of an existing course that could not be modified, other than by introducing a different delivery method for the oral activities and the oral exam, it is not surprising that students did not improve more

than in the face-to-face course. It is, however, important to underline that the differences in the oral performance scores of the two groups were not significant and that there was therefore no initial advantage of the Control group over the Experimental group: the two groups were wholly comparable in terms of oral skills, both before and after the two treatments.

The fact that the Experimental group did not do better than the Control group in

this respect may simply be because, after having completed each computer-

based oral activity throughout the quarter, learners in this group might not have paid

particular attention to the computerized oral feedback provided by the instructor.

Hence, for a broader interpretation of results in a similar type of analysis, a future study

should also include a questionnaire with specific questions regarding the way students made use of the additional features offered by the computer-based oral activities.

In addition, it is pertinent to recall that, although we were able to observe quite

similar trends across classes in the examination of students’ affective reactions towards

both modes of oral testing in general, results in the investigation of students’ overall

perceptions towards the tests revealed that learners in the Experimental class tended

to show slightly higher levels of anxiety and perceived difficulty towards both types of

oral exams, before and after completing both tests at the end of the course. Although the

literature reviewed reports a negative relationship between anxiety and oral

performance (Hewitt & Stephenson 2012; Scott 1986; Wilson 2006; Phillips 1992), in our case the results were not statistically significant, and we therefore cannot conclude that there was any difference in anxiety levels between the two treatments.

The outcomes regarding oral performance also contrast with the ones obtained

in Tognozzi & Truong (2009) and Satar & Özdener (2008). In Tognozzi & Truong (2009)

the authors conclude that an experimental group, using a Web-based program for the

completion of weekly oral tasks during the instruction period, showed “longer and more

accurate speech samples” (p. 9). Taking into account the linguistic categories comparable

in the current study and in Tognozzi & Truong (2009), although the authors also did not

find statistically significant differences between a control and an experimental group in

terms of grammatical aspects, the group completing weekly computerized oral tasks

during the instruction period showed a statistically significant advantage over the

control group in terms of vocabulary. As previously mentioned, in the current study the

Experimental class showed a tendency towards higher oral improvement values in terms of

vocabulary, although our results failed to reach statistical significance. In Satar &

Özdener (2008), students in the experimental groups (including a voice chat group), who engaged in four chat sessions, taken in dyads, over four weeks, also showed an increase in oral abilities, in comparison to the control group.

One aspect that could possibly explain the difference between the results presented in the current investigation and the ones found in these previous studies is the fact that the oral activities involved in this study were completed on a less regular basis. In the current study, activities were completed once every two weeks, in order to fit the existing syllabus. In previous studies, the chat sessions or computerized oral activities were more frequent and tailored towards oral improvement, and therefore contributed to students’ greater development of oral abilities. It is

important to take into account the fact that we were working within the confines of an

already existing syllabus and did not modify the number of activities but only the way

the speaking tasks and the oral exams were delivered. Taken together, our results

suggest that completing a small number of computerized oral assessment activities

throughout the instruction period, as was done for this investigation, might not be

sufficient to improve students’ oral production. However, statistical analysis in the

current study also did not show that the Experimental group achieved worse results

than the Control group. This suggests that, for these groups of students, the computer-

based oral activities were equivalent to the face-to-face ones, in terms of fostering their

speaking abilities.

Furthermore, the fact that there was not a significant difference between classes

is also somewhat consistent with the fact that the descriptive analysis of students’ general reactions towards the tests, and of learners’ responses towards the anxiety factors and feelings presented in the anxiety scale, did not display considerable differences between classes. The fact that the pattern of students’ responses towards both types of tests is quite similar between the two classes, and the fact that the Experimental class did not show a greater improvement than the Control class in terms of oral performance, indicate that the distinction in the treatment received by the two groups did not have a particular influence on the different aspects analyzed, which overall is a considerably favorable endorsement of computer-delivered oral

activities and oral exams in second language learning.

4.4. General implications, limitations and suggestions for future research

The present study contributes to the field of second language teaching by adding

information to the existing literature on the application of technology in oral testing,

including a comparison between face-to-face and computer-based oral tests. The

insights provided in this study, regarding students’ perspectives and affective factors

towards the two types of oral exams, can also be used by language instructors and

program administrators in the development of their foreign language curricula.

These results can also influence decisions regarding the implementation of technology

in the oral tests conducted in language classes, especially in the case of an oral

achievement test format similar to the one used here.

Although some reports on students’ tendencies regarding their perspectives

towards the two types of oral exams presented in this research can provide valuable

information to oral achievement test developers, some limitations should be mentioned.

First, the sample size used was relatively small. A larger sample would certainly allow for a stronger interpretation of, and greater confidence in, the results obtained in the analyses presented.

Another limitation has to do with the fact that learners seemed to attribute

similar meanings to the anxiety and difficulty variables included in the survey regarding

students’ general reactions towards the exams. As mentioned in the analysis and results

chapter, there was a high correlation between these two variables. Furthermore, when

observing students’ responses to the open-ended questions in the general reactions

survey, we could observe that learners’ justifications for their levels of perceived

difficulty were quite similar to the reasons reported for their levels of anxiety towards

the tests. With respect to the difficulty variable, the researcher expected students’

opinions to be due to aspects such as the content or the format of the oral test, for

example. Hence, a survey including a clear definition of these constructs, as well as a rating scale with more levels, could possibly reveal a greater distinction between learners’ overall reactions towards the different variables. These aspects would also allow for the investigation of more complex relationships between students’ responses and the variables included in the questionnaires, by performing more sophisticated analyses, for instance through the use of a mixed-effects model (Riordan 2007).
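As a rough illustration of the kind of analysis meant here, the sketch below fits a linear mixed-effects model in R using the lme4 package (an assumption for illustration; it was not used in the present study), with a random intercept per student to account for the fact that each examinee rated both exam formats. The data frame, variable names, and values are hypothetical, and for ordinal Likert-type ratings a cumulative-link mixed model might ultimately be preferable.

    # Hypothetical long-format data: one row per student per oral exam format.
    library(lme4)
    set.seed(1)
    ratings <- data.frame(
      student = factor(rep(1:20, each = 2)),
      class   = factor(rep(c("Control", "Experimental"), each = 20)),
      format  = factor(rep(c("face-to-face", "computer-based"), times = 20)),
      anxiety = sample(1:5, 40, replace = TRUE)  # 5-point Likert rating
    )

    # The random intercept per student handles the repeated measures; the
    # format-by-class interaction asks whether the treatment changed how the
    # two exam formats were perceived.
    model <- lmer(anxiety ~ format * class + (1 | student), data = ratings)
    summary(model)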

In addition, complementary information on students’ habits regarding their

usage of the technological resources available through the completion of the

computerized oral activities, such as the possibility of listening to their own recordings

before submitting them or listening to the instructors’ feedback (Tognozzi & Truong

2009), would be beneficial in the process of interpreting and understanding the results.

Furthermore, increasing the number and scope of the kind of oral activities presented in

this research could also have a positive influence on students’ improvement in terms of

oral performance.

Moreover, the analyses used in this study also allowed us to reflect on aspects

that could lead to new research projects. Through the analysis of students’ affective

reactions, we observed that the different treatment received by the two classes did not

seem to particularly influence students’ opinions towards the tests, since response

patterns between the two classes tended to be similar, in general. Our results, along with

results reported in previous literature, allowed us to reflect on aspects such as the fact

that some features inherent to the specific type of oral exam format (e.g., the degree of

interaction with the examiner or whether it is conducted individually or in pairs) might

play a larger role in students’ feelings towards these tests. Hence, future studies on oral

achievement tests, and the use of technology in these types of exams, could adopt an

experimental design that would include a number of distinct types of face-to-face and

computer-based oral testing formats, and explore the impact of these features on

students’ reactions towards the different tests.

The analysis of students’ affective reactions towards both tests also allowed us to

observe that students’ explanations for perceiving the computer-based test as less

difficult and less anxiety evoking included aspects related to not being in the presence

of the instructor, such as not having someone “staring at them” and “evaluating” them.

However, in their justifications for perceiving the computer-based test as less fair in

testing their speaking abilities, learners also pointed to reasons that valued the presence

of the instructor in a face-to-face oral exam context, such as getting help with “moving

the conversation along”. These findings lead us to reflect on aspects such as whether

there is any relationship between these results on students’ perceptions towards the

two types of oral testing and learners’ potential beliefs that they have the right to be

overly helped and supported by their instructor. This aspect thus reminds us of the

impact that the level of students’ entitlement can have on their outcomes and on their

feelings towards their exams in an academic context, for example (Anderson,

Halberstadt, & Aitken 2013). Hence, an investigation of this matter, in the area of second

language learning, would also be relevant in a future study.

4.5. Conclusion

This concluding section starts with a recap of the research questions included in the present study and a summary of the corresponding key findings.

Research question 1 was concerned with students’ reactions towards the

computer-based test vis-à-vis the face-to-face final oral exam, regarding levels of

anxiety, perceived difficulty, and perceived fairness in testing speaking abilities, as well

as levels of comfort with the technology used. This question was divided into research

questions 1.1 and 1.2.

Research question 1.1 focused on whether students’ perceptions differed

depending on the different treatment received by the two classes and on whether the

survey was taken before or after the course. Here, descriptive analysis of students’

reactions towards the two different tests showed generally similar patterns between

students’ responses in the two classes, before and after taking the final oral exams at the

end of the course. As seen in the results, both groups tended to consider the computer-

based final oral exam as less anxiety evoking, less difficult, but also less fair in testing

their speaking abilities, in comparison to the face-to-face test. This aspect was observed

in students’ answers taken before and after the course, which corresponded to their

expectations and final opinions towards the exams. Results also indicated a tendency for

students in both classes to consider the two different types of oral exams to be less

anxiety evoking and less difficult than expected. In terms of perceived fairness, it was

also noted that, despite showing a tendency to attribute higher mean ratings to the face-

to-face test in terms of fairness, learners also displayed a general tendency to consider

the computer-based test quite fair in testing their speaking abilities. Regarding

examinees’ levels of comfort with the technology used in the computer-based test,

outcomes of this analysis also showed that, in contrast to the Control group, students in

the Experimental class felt more comfortable than expected, which could be due to the

fact that this class used the computerized voice tool more frequently throughout the

instruction period.

Research question 1.2 was concerned with students’ reactions to the computer-

based vis-à-vis the face-to-face final oral exam at the end of the quarter, without differentiating between classes. Here, learners also revealed a tendency to display lower levels of anxiety, perceived difficulty, and perceived fairness towards the computer-based

test. In addition, when students’ responses were taken together at the end of the

instruction period, mean ratings for students’ reports also revealed a fairly high level of

comfort with the technology used in the computer-based final oral exam.

Research question 2 investigated the anxiety factors and feelings, presented on

the anxiety scale, that were more involved in one type of oral exam in comparison to the

other. Key findings revealed students’ general tendency to attribute higher mean ratings

towards the face-to-face oral exam with regard to all items presented on the anxiety

scale. Nevertheless, discrepancies in students’ assessments between the two oral exam

formats were more evident in some of the items included in the questionnaire. The

anxiety-related sensations included in these items were thus considered to be more

involved in the face-to-face format and correspond to “getting nervous and confused”

(item 11); “feeling their heart pounding” (item 9); “forgetting things they know as a

result of being nervous” (item 5); “feeling frightened to talk in front of the

teacher/computer” (item 2); and “feeling anxious about the exam even when well

prepared” (item 7). Nevertheless, students’ ratings of the items presented on the anxiety

scale revealed that the most evident differences between students’ reactions towards

the face-to-face and the computer-based test correspond to the fact that students tend

to feel more tense and nervous in the face-to-face oral exam than in written exams, in

comparison to the computer-based oral testing method (item 10), and a propensity for

examinees to feel more embarrassed to talk in front of the teacher than in front of the

computer (item 6).

Research question 2.1 was concerned with whether there was a considerable

difference between classes in the investigation of the anxiety factors and feelings that

were more involved in one type of oral exam in comparison to the other. Once again,

results showed very similar trends between the two groups in terms of their affective

reactions towards the two types of oral test.

Finally, research question 3 examined whether the Experimental class showed a

greater improvement than the Control class in certain linguistic areas and overall. These

linguistic areas corresponded to vocabulary items, grammatical structures and

grammatical functions (aspects that were formally taught and emphasized during the

course), as well as to aspects of overall communication. Although the Experimental class

showed higher values in vocabulary and in general communication ability, the main results of this analysis did not show statistically significant differences between the two classes

in terms of students’ improvements in oral performance.

Taken together, the findings in this investigation, regarding students’ perceptions

towards the computer-based and the face-to-face oral exam, revealed a general tendency

to perceive the computer-based test as less anxiety evoking and less difficult than the

traditional method. Nevertheless, students in this research also showed a propensity to

consider this type of oral exam format as being less fair in testing their speaking abilities.

As we were able to observe, responses in the Control and Experimental class displayed

quite similar general patterns between the two groups, from their expectations to their

final opinions. As explained throughout the study, the difference in treatment mainly

consisted in the fact that the Experimental class completed computerized oral

assessment activities during the instruction period, receiving individual oral feedback

from the instructor. The Experimental group completed these activities in pairs, in a

computer lab, and during class time, whereas the Control group completed the same

activities in a classroom context. The structure of these technology-mediated oral

activities resembled the computer-based final oral exam format administered to

students. Both the Control group and the Experimental group used the computer-based

learning platform for other types of homework activities. Therefore, both groups were

familiar with the technology used.

Learners’ responses to open-ended questions in the survey concerning their

overall reactions towards the tests revealed a tendency for students to consider aspects

related to the presence of the examiner in the face-to-face test as common justifications

for feeling less anxious in the computer-based test, and also for perceiving this

computerized oral exam format as less difficult, in comparison to the traditional method.

Reasons included the fact that, in the computer-based test, students did not feel the

nervousness of “performing in front of the instructor” or that they had someone “staring

at them” or “evaluating them” while taking the test. At the same time, aspects involving

the presence of the examiner in the face-to-face test were also among the most common

explanations by examinees who reported lower levels of perceived fairness towards the

computer-based oral exam, in comparison to the traditional method. Justifications

included the fact that, by being in front of the instructor, learners felt like they had the

opportunity to explain the nature of their mistakes or that the instructor could help them

with “moving the conversation along”. In addition, students often indicated concerns

related to possible technical issues that could interfere with the clarity of their

recordings. In this case, the presence of the examiner also seemed important to ensure

that their speech was well heard and understood.

When comparing the current study’s results with outcomes presented in

previous research analyzing students’ affective reactions to computer-based vis-à-vis

face-to-face oral tests, it was possible to observe that reasons for valuing the presence

of the examiner or the more traditional oral testing format in the reviewed literature include aspects such as “the attitudes of the interlocutors” (Öztekin 2011: 78), “the

interaction between interviewers and interviewees” (Jeong 2003: 74), or the fact that

learners “enjoyed interacting with a ‘live’ examiner” (Lowe & Yu 2009: 35). In the

current research, the live contact with the examiner seemed to constitute an important

factor for students’ higher ratings towards the face-to-face test in terms of fairness, for

the reasons indicated above. In addition, the fact that the instructor provided the

examinees with immediate feedback at the end of the oral exam was often mentioned in

students’ comments with respect to the aspects they liked the most in the face-to-face

test. However, as seen in learners’ answers to open-ended questions in the survey

concerned with students’ perceptions towards both types of tests, examinees display a

tendency to essentially perceive the instructor as the person who is “watching”,

“listening to” and “evaluating” them, and not as someone who they are interacting with

while completing the face-to-face oral exam. It was suggested that one possible motive

behind students’ tendency to mostly focus on these features of the examiner has to do

with the fact that the oral achievement testing format included in this investigation is

taken in pairs, in both the computer-based and the face-to-face context. The

conversational interaction in the face-to-face test is mainly conducted among learners

and not between the instructor and the examinees, which could trigger a stronger sense that they are being evaluated.

As mentioned above, with respect to the analysis of the anxiety factors and

feelings that tended to be more involved in one testing format in comparison to the

other, key findings revealed a general tendency for students to attribute higher mean

ratings towards the items presented on the anxiety scale in the context of the face-to-

face oral exam. Results in this section were consistent with the outcomes presented in

the analysis of students’ general reactions, since learners’ mean ratings towards both

oral testing formats in terms of overall anxiety levels were also higher in the face-to-face

test. In addition, in their ratings towards the statements included in the questionnaire,

one of the most evident differences between students’ responses towards one type of

oral exam format in comparison to the other was the students’ tendency to feel more

embarrassed to talk in front of the instructor than in front of the computer. This aspect

was also consistent with students’ responses to open-ended questions in the overall

reactions survey, where learners tended to consider aspects related to the presence of

the examiner as common sources of nervousness. Additionally, quite similar trends in

students’ ratings towards the items presented on the anxiety scale were also observable

in a comparison between learners’ responses to the anxiety questionnaire in the two

different classes. This aspect is also in line with the results obtained on students’ general

reactions towards both types of tests, suggesting that the difference in the treatment

received by the two classes does not play a particular role in students’ feelings towards

the tests or that the difference in treatment was not that substantial. The analysis of

students’ responses on their affective reactions towards the computer-based and the

face-to-face oral achievement tests used in this study, along with results reported in the

literature, allowed us to reflect on the fact that other aspects, such as the degree of

interaction with the examiners, or whether the test is completed individually or in pairs,

could have a stronger impact on examinees’ feelings towards the exams.

Furthermore, results from the second research topic included in this

investigation revealed that the use of technology in oral assessment activities taken

throughout the quarter (treatment received by the Experimental class), as done in this

investigation, did not seem to influence learners’ improvement in terms of speaking

abilities. As we could observe, the differences between the two classes in terms of

students’ oral production gains, on the linguistic variables emphasized during the course

and overall, were not significant. As was suggested, completing a small number of computerized oral activities during the instruction period might not be sufficient to improve students’ speaking abilities. However, as mentioned in the discussion section,

statistical analysis in the current study also did not show that the Experimental group

achieved worse results than the Control group, suggesting that, for these groups of

students, the computer-based oral activities were equivalent to the face-to-face ones, in

terms of fostering their speaking abilities.

Considering the aspects analyzed in the current study, success in the

implementation of technology in this type of oral achievement exam appears to be

mostly related to the reduction of students’ anxiety. In addition, although the face-to-

face oral exam received higher mean ratings in terms of fairness, learners’ responses

towards the computer-based test also indicate a tendency to consider this oral testing

system to be quite fair. Therefore, examinees’ reactions towards the computerized test

tended to be overall fairly positive.

As Lee (2005) states, “By listening to the voices of learners, FL educators have the

opportunity to reflect on their intended pedagogical efforts and further modify their

strategies and instruction to meet the needs and interests of learners” (p.140). In the

case of the current study, the voice of examinees has provided us with valuable insights

on the implementation of the kind of computer-based oral achievement exam method

used. Reports on examinees’ affective reactions towards the tests included in this

research can thus be useful, not only in the program where this investigation took place,

but possibly in other language courses. The implementation of a computerized oral exam

into the FL curriculum could also provide test developers with practical insights into manipulating factors such as the presence or absence of the instructor, how involved the instructor is in the actual test taking (e.g., providing help and feedback), whether the test is taken in pairs or individually, the level of advance preparation allowed, etc.

Furthermore, the analysis conducted in this investigation, in terms of students’

perceptions and affective factors towards both types of oral tests, as well as in terms of

students’ improvements in oral performance, when taking computerized oral activities

throughout the instruction period, has also contributed to the body of quantitative and

qualitative research in the field of second language learning.

Moreover, new avenues for future research were also proposed, in light of the

findings presented in the current research. Since the different treatment received in the

two classes did not seem to particularly influence students’ opinions towards the tests,

it was suggested that a further study on students’ reactions towards different types of

computer-based and face-to-face oral testing formats should include distinct aspects, for instance whether tests are taken individually or in pairs, or higher or lower levels of interaction with the instructor. Further exploration is certainly needed of the impact that these different features of oral exams have on students’ feelings in the context of second and foreign language learning.

In addition, the fact that learners tended to find the instructor’s presence positive in terms of fairness in testing their speaking abilities, due to aspects such as getting help with “moving the conversation along”, as well as the fact that they showed a propensity to find the face-to-face test more difficult and anxiety evoking, due to aspects such as feeling “watched” and “evaluated” by the teacher, leads us to reflect on the extent to which these facts could also be related to learners’ possible beliefs that they have the right to be overly helped and supported by their instructor (in this case, in the process of learning a second language). In view of these findings, we surmise that

students’ feelings of psychological entitlement can be a relevant factor for L2 learning.

Specifically, entitlement can affect their achievements and their feelings towards exams

in an academic context (Anderson, Halberstadt, & Aitken 2013). In fact, although

entitlement and its consequences have been researched since the 1990s (Morrow 1994), a

focus on this aspect with respect to second language learning does not seem to have been

previously investigated. This aspect could thus constitute another relevant possibility

for future research in this area.

References

Adair-Hauck, B., Willingham-McLain, L. & Youngs, B. E. (1999). Evaluating the

Integration of Technology and Second Language Learning. CALICO Journal, 17 (2),

269-303

Abu-Rabia, S., Peleg, Y. & Shakkour, W. (2014). The Relation between Linguistic

Skills, Personality Traits, and Language Anxiety. Open Journal of Modern

Linguistics, 4, 118-141.

Al-Shboul, Murad M., Ahmad, Ismail S., Nordin, Mohamad S. & Rahman, Zainurin A.

(2013). Foreign Language Anxiety and Achievement: Systematic Review.

International Journal of English Linguistics, 3 (2), 32-45

American Council on the Teaching of Foreign Languages. (2012). ACTFL oral

proficiency interview familiarization manual. White Plains, NY: Author.

Anderson, D. Halberstadt, J. & Aitken, R. (2013). Entitlement attitudes predict students’

poor performance in challenging academic conditions. International Journal of

Higher Education, 2, 151-158.

Brown, W. (1985). RSVP: Classroom Oral Interview Procedure. Foreign Language

Annals, 18 (1), 481-485.

Early, P., & Swanson, P. B. (2008). Technology for oral assessment. In C. M. Cherry and

C. Wilkerson (Eds.), Dimension (pp. 39-48). Valdosta, GA: SCOLT Publications.

Ellis, R. (2009). Corrective Feedback and Teacher Development. L2 Journal, 1, 3-18.

Ganschow, L., Sparks, R. L., Anderson, R., Javorsky, J., Skinner, S., & Patton, J. (1994).

Differences in language performance among high-, average-, and low-anxious

college foreign language learners. The Modern Language Journal, 78, 41-55
Ganschow, L. & Sparks, R. (1996). Anxiety about Foreign Language Learning among High

School Women. Modern Language Journal, 80, 199-212.

Gan, Z., Davidson, C., and Hamp-Lyons, L. (2008). Topic Negotiation in Peer Group

Oral Assessment Situations: A Conversation Analytic Approach. Applied

Linguistics, 30 (3), 315-334.

Hayes, A. F. & Krippendorff, K. (2007). Answering the call for a standard reliability measure for coding data. Communication Methods and Measures, 1 (1), 77-89.

Hewitt, E. and Stephenson, J. (2012). Foreign Language Anxiety and Oral Exam

Performance: A Replication of Phillips’s MLJ Study. The Modern Language

Journal, 96 (2), 170-189.

Horwitz, E. K., Horwitz, M. B, & Cope, J. (1986). Foreign language classroom anxiety.

Modern Language Journal, 70, 125-132.

Horwitz, E. (1986). Preliminary evidence for the reliability and validity of a foreign

language anxiety scale. TESOL Quarterly, 20, 559-562.

Horwitz, E. (2001). Language anxiety and achievement. Annual Review of Applied

Linguistics, 21, 112-126.

Horwitz, E. & Yan, J. (2008). Learners’ Perceptions of How Anxiety Interacts With

Personal and Instructional Factors to Influence Their Achievement in English:

A Qualitative Analysis of EFL Learners in China. Language Learning, 58, 151

183.

Jeong, T. (2003). Assessing and interpreting students’ English oral proficiency using

VOCI in an EFL context. Unpublished doctoral dissertation. Ohio State

University, Columbus.

Joiner, E. (1997). Teaching listening: How technology can help. In M. Bush, & R.

Terry (Eds.), Technology-enhanced language learning (pp. 77-121).

Lincolnwood, IL: NTC Publishing Group.

Joo, M. (2007). The attitudes of students and teachers towards a Computerized Oral

Test (COT) and a Face-To-Face Interview (FTFI) in a Korean university setting.

Journal of Language and Sciences, 14 (2), 171-193.

Kenyon, D. M. & Malabonga, V. (2001). Comparing Examinee Attitudes Toward Computer-Assisted and Other Oral Proficiency Assessments. Language Learning &

Technology, 5 (2), 60-83.

Jouët-Pastré, C., Klobucka, A., Sobral, P., Moreira, M. L., & Hutchinson, A. P. (2007). Ponto de

encontro: Portuguese as a world language. Upper Saddle River, NJ: Pearson

Prentice Hall.

Krippendorff, K. (2004). Content Analysis: An Introduction to Its Methodology. 2nd

Edition. Thousand Oaks CA: Sage.

Krippendorff, K. (2011). Agreement and Information in the Reliability of Coding.

Communication Methods and Measures, 5 (2), 93-112.

Larson, J. W. (2000). Testing Oral Language Skills via the Computer. CALICO Journal, 18

(1), 53-66.

Lee, L. (2005). Using Web-based Instruction to Promote Active Learning: Learner’s

Perspectives. CALICO Journal, 23 (1), 139-156.

Lenderink, A.F., Zoer, L., et al. (2012). Review on the validity of self-report to assess

work-related diseases. Int Arch Occup Environ Health, 85, 229-251.

Liskin-Gasparro, J. E. (1983). Teaching and testing oral skills. ACTFL Master Lecture

Series. Paper presented by a member of the American Council on the Teaching

of Foreign Languages (ACTFL) at the Defense Language Institute Foreign

Language Center, Monterey, CA.

Liu, M., & Jackson, J. (2008). An Exploration of Chinese EFL learners’ unwillingness to

communicate and foreign language anxiety. Modern Language Journal, 92,

71-86.

Lowe, J. & Yu, X. (2009). Computer Assisted Testing of Spoken English: A Study of

the SFLEP College English Oral Test System in China. Journal of Systemics,

Cybernetics and Informatics, 7 (3), 33-38.

Morrow, W. (1994). Entitlement and achievement in education. Studies in Philosophy

and Education, 13 (1), 33-47.

Norris, John M., & Pfeiffer, Peter C. (2003). Exploring the Uses and Usefulness of

ACTFL Oral Proficiency Ratings and Standards in College Foreign Language

Departments. Foreign Language Annals, 36 (4), 572-581.

Ohata, K. (2005). Potential Sources of Anxiety for Japanese Learners of English:

Preliminary Case Interviews with Five Japanese College Students in the U.S.,

TESL-EJ, Volume 9, Number 3, 1-21.

Omaggio Hadley, A. (2001). Teaching Language in Context (3rd edn). Boston: Heinle &

Heinle.

Ortega, L. (2009). Understanding second language acquisition. London: Hodder

Arnold.

Öztekin, E. (2011). A Comparison of Computer-assisted and Face-to-Face Speaking

Assessment: Performance, Perceptions, Anxiety, and Computer Attitudes

(Master’s thesis). Retrieved from http://hdl.handle.net/11693/17047

Payne, J.S. & P. J. Whitney. (2002). Developing L2 Oral Proficiency through

Synchronous CMC: Output, Working Memory, and Interlanguage

Development. CALICO Journal, 20, 7-32.

Paker, T. & Höl, D. (2012). Attitudes and Perceptions of the Students and Instructors

towards Testing Speaking Communicatively. Pamukkale University Journal of

Education, 32(2), 13-24.

Park, H. & Lee, A. R. (2005). L2 Learners’ Anxiety, Self-Confidence and Oral

Performance. Paper presented at the Pan-Pacific Association of Applied

Linguistics (PAAL), Japan.

Phillips, Elaine M. (1992). The Effects of Language Anxiety on Students’ Oral Test

Performance and Attitudes. The Modern Language Journal, 76 (1), 14-26.

Price, M. L. (1991). The subjective experience of foreign language anxiety:

Interviews with highly anxious students. In E. K. Horwitz & D. J. Young (Eds.),

Language anxiety: From theory and research to classroom implications (pp.

101-108). Englewood Cliffs, NJ: Prentice-Hall.

R Core Team (2016). R: A language and environment for statistical computing. R

Foundation for Statistical Computing, Vienna, Austria. URL https://www.R-project.org/.

Riordan, B. (2007). “There’s two ways to say it: Modeling nonprestige there’s”.

Corpus Linguistics and Linguistic Theory, 3 (2): 233-279.

Satar, H.M., & Özdener, N. (2008). The Effects of Synchronous CMC on Speaking

Proficiency and Anxiety: Text Versus Voice Chat. The Modern Language

Journal, 92 (4), 595-613.

Savignon, S. J. (1972). Communicative Competence: An Experiment in Foreign-Language

Teaching. Philadelphia: The Centre for Curriculum Development.

Sayin, B. A. (2015). Exploring Anxiety in Speaking Exams and How it Affects Students’ Performance. International Journal of Education and Social Sciences, 2 (12), 112-118.

Scott, M. L. (1986). Student affective reactions to oral language tests. Language

Testing, 3, 99-118.

Shohamy, E. (1982). Affective considerations in language testing. Modern Language

Journal, 66, 13-17.

Stansfield, C., D. Kenyon, R. Paiva, F. Doyle, I. Ulsh and M. Cowles. (1990). The

development and validation of the Portuguese Speaking Test. Hispania, 73,

641-51.

Tognozzi, E., & Truong, H. (2009). Proficiency and Assessment Using WIMBA Voice

Technology. ITALICA, 86 (1), 1-23.

Wilson, J. T. S. (2006). Anxiety in learning English as a Foreign Language: Its

Associations with Student Variables, with Overall Proficiency, and with Performance in an Oral Test. Doctoral dissertation, Universidad de Granada.

Yahya, M. (2013). Measuring speaking anxiety among speech communication course

students at the Arab American University of Jenin (AAUJ). European Social

Sciences Research Journal, 1(3), 229-248.

Young, D. J. (1986). The Relationship Between Anxiety and Foreign Language Oral Proficiency Ratings. Foreign Language Annals, 19, 439-445.

Young, D. J. (1991). Creating a low-anxiety classroom environment: What does

language anxiety research suggest? The Modern Language Journal, 75, 426-439.

Yu, E. (2006). A Comparative Study of the Effects of a Computerized English Oral

Proficiency Test Format and a Conventional Speak Test Format. Unpublished doctoral dissertation, The Ohio State University.

Zeidner, M., & Bensoussan, M. (1988). College students’ attitudes towards written

versus oral tests of English as a Foreign Language. Language Testing, 5, 100-

114.

Zheng, Y. (2008). Anxiety and Second/Foreign Language Learning Revisited. Canadian

Journal for New Scholars in Education, 1(1), 1-12.

Zulkifli, V. (2007). Language Classroom Anxiety: A Comparative Study of ESL Learners. Asian Journal of University Education, 3 (2), 75-99.

http://www.pearsonelt.ch/1471/9781447964117/Ponto-de-Encontro-Portuguese.aspx

APPENDIX A

Set of 6 oral situations for the computer-based and the face-to-face final oral exams
(for the analysis of students’ feelings towards the two types of oral exams):
1. You are talking to a friend about last weekend’s outing. Ask each other questions
and try to
exaggerate and outdo each other. Talk about:
- Who did you go with
- How much money did you spend
- Where did you go
- What did you do
- Who did you meet
- Where are you going next month
2. You are baby-sitting a couple of kids. Now that they are asleep you call a friend to
pass the time.
Talk about when you two were little. Ask each other:
- Where you used to live
- TV programs you used to watch
- Baby-sitters you hated/loved
- Grammar school teachers you remember
- Games you used to play
- One terrible or great thing that happened to you
3. You are going out this weekend with a very special person and you decide to go
shopping for
clothes.
- S1 tell the clerk the situation and ask for advice on his clothes
- S2 (Clerk) advise something way too informal and with weird colors
- S1 describe what you want (the whole outfit) and ask for prices
- S2 (Clerk) respond and also try to sell something else
- S1 decline to buy that something else and buy the outfit you want
- Both say goodbye
4. You meet a friend at a restaurant for dinner.
- Greet each other
- what is your favorite dish
- if you are on a diet
- if you have any food restrictions / allergies
- discuss good and bad restaurants in town
- S1 orders an appetizer and a complete non-vegetarian meal and
something to drink
- S2 orders a different appetizer and a complete vegetarian meal, a drink
and a dessert

5. You are waiting for your table in the lounge of a restaurant. Talk to your dinner
companion about:
- if he/she cooks
- what his/her specialty is
- How to prepare it
- If he/she is allergic to any food
- about their favorite places to eat and why
- Favorite and least favorite foods
6. You run into a friend just before the school vacations (summer/winter). Talk about your
plans.
- Greet each other
- where are you going
- how are you getting there
- how long are you staying
- what are you going to do
- who are you going with
- where are you going to stay
- narrate something interesting/ bad/ great about the place you are going to

APPENDIX B

Description of the creation method for Oral Exams on MyPortugueseLab

1) Click on “course materials”


2) Click on “add course materials”
3) Select “instructor activity”
4) Click “basic/random”
5) Click “save and continue”
6) Click “add questions”
7) Click “create a new question”
8) Click “Essay”
9) Enter a question title (e.g., Oral Exam)
10) Type in written instructions
11) Scroll down and click “add text settings”
12) Enter in maximum score for this question
13) Uncheck “text input” and check “audio input”
14) Enter a maximum time to record
15) Click “save and close”
16) Scroll down (all the way) and click “add and close”
17) Turn “randomize” on (this is inside the box in very small letters). You don’t
need to do this again
18) A little white square will show up on the right. Enter the number 1
19) Under “section 1”, click “add question”
20) Repeat the process: “create new question”/ “Essay”/etc. You don’t need to turn
“randomize” on again.
21) When you are done introducing all 7 situations, click “save and return”
22) Scroll down (all the way) and click “save and close”. This will save the Oral
Exam into the course materials library.
23) The activity (Oral Exam) will be under your course materials library

APPENDIX C


Instructions for the Technology-mediated Final Oral Exam

1) Get in pairs and share a computer.

2) Use Firefox.

3) Sign in on MyPortugueseLab (you only need to choose one of the accounts. Your
instructor will then be able to assign a separate grade on each account).

4) Open the icon for the Final Oral Exam and read the prompts.

5) Share the microphone connected to your computer (make sure you speak close
to the microphone when it is your turn to speak).

6) To use the headphones, press ALT and, at the same time, click on the sound
icon at the top of the screen, and choose the option Logitech USB Headset.

7) Start recording the conversation

8) When you’re done recording, click “play” to listen to the conversation and make
sure it worked.

APPENDIX D

Survey concerning students’ expectations towards oral assessment methods (presented


to students at the beginning of the Quarter).

Please fill in this basic information:


Name_____________________________________________________
Code Name________________________________________________

A. Please rate your expected level of comfort using the technology applied in the computer-
based oral test.

1 Low

2 Medium

3 High

B. Please rate your expected level of anxiety when taking the computer-based oral test.

1 Low

2 Medium

3 High

Do you expect anything to make you feel more anxious about the computer-based oral test?
_________________________________________________________________

C. Please rate the expected level of difficulty you will feel when taking the computer-
based oral test.

1 Low

2 Medium

3 High

D. Please rate the level of fairness you expect from the computer-based test in the
evaluation of your speaking skills.

1 Low

2 Medium

3 High

E. Please rate your expected level of anxiety when taking the face-to-face oral test in the
instructor’s office.

1 Low

2 Medium

3 High

Do you expect anything to make you feel more anxious about the face-to-face oral test?
________________________________________________________________________

F. Please rate the expected level of difficulty in taking the face-to-face oral test in the
instructor’s office.

1 Low

2 Medium

3 High

G. Please rate the expected level of fairness of the face-to-face oral test in the instructor’s
office in the evaluation of your speaking skills.

1 Low

2 Medium

3 High

H. Please describe how anxious you expect to feel taking a computer-based oral test in
comparison to the traditional method (the face-to-face oral test in your instructor’s
office).

1 less anxious

2 just as anxious

3 more anxious

Why? ______________________________________

I. Please indicate your opinion on the level of fairness of the computer-based method in
testing your speaking skills in comparison to the traditional method.

1 less fair

2 just as fair

3 more fair

Why? ______________________________________

J. Please indicate your opinion on the expected level of difficulty of the computer-based
oral test in comparison to the traditional method.

1 less difficult

2 just as difficult

3 more difficult

Why? ________________________________________

K. What do you expect to like the MOST about the COMPUTERIZED oral test?
_____________________________________________________________________

What do you expect to like the LEAST about the COMPUTERIZED oral test?
_____________________________________________________________________

What do you expect to like the MOST about the FACE-TO-FACE oral test in the
instructor’s office?
_____________________________________________________________________

What do you expect to like the LEAST about the FACE-TO-FACE oral test in the
instructor’s office?
____________________________________________________________________

APPENDIX E

Survey concerning students’ feelings towards oral assessment methods (presented to


students at the end of the quarter)

Please fill in this basic information:


Name_____________________________________________________
Code Name________________________________________________

A. Please rate your level of comfort using the technology applied in the computer-based
oral test.

1 Low

2 Medium

3 High

B. Please rate the level of anxiety you felt when taking the computer-based oral test.

1 Low

2 Medium

3 High

Did anything make you feel more anxious about the computer-based oral test?
_________________________________________________________________

C. Please rate the level of difficulty in taking the computer-based oral test, according to
your opinion.

1 Low

2 Medium

3 High

D. Please rate the level of fairness of the computer-based test in the evaluation of your
speaking skills, according to your opinion.

1 Low

2 Medium

3 High

E. Please rate the level of anxiety you felt when taking the face-to-face oral test in the
instructor’s office.

1 Low

2 Medium

3 High

Did anything make you feel more anxious about the face-to-face oral test?
________________________________________________________________________

F. Please rate the level of difficulty in taking the face-to-face oral test in the instructor’s
office, according to your opinion.

1 Low

2 Medium

3 High

G. Please rate the level of fairness of the face-to-face oral test in the instructor’s office in
the evaluation of your speaking skills, according to your opinion.

1 Low

2 Medium

3 High

H. Please describe how anxious you felt taking this computer-based oral test in
comparison to the traditional method (the face-to-face oral test in your instructor’s
office).

1 less anxious

2 just as anxious

3 more anxious

Why? ______________________________________

I. Please indicate your opinion on the level of fairness of the computer-based method in
testing your speaking skills in comparison to the traditional method.

1 less fair

2 just as fair

3 more fair

Why? ______________________________________

J. Please indicate your opinion on the level of difficulty of the computer-based oral test in
comparison to the traditional method.

1 less difficult

2 just as difficult

3 more difficult

Why? ________________________________________

K. What I liked the MOST about the COMPUTERIZED oral test was:
_____________________________________________________________________

What I liked the LEAST about the COMPUTERIZED oral test was:
_____________________________________________________________________

What I liked the MOST about the FACE-TO-FACE oral test in the instructor’s office was:
_____________________________________________________________________

What I liked the LEAST about the FACE-TO-FACE oral test in the instructor’s office was:
______________________________________________________________________


APPENDIX F

Survey on Oral Exams Anxiety adapted from FLCAS in Horwitz, E. K., Horwitz, M.
B., & Cope, J. 1986 (presented to students at the end of the Quarter).

Name_________________ Class session____________ Code Name_______________

1. I never feel quite sure of myself when I am speaking in the oral exam in the
instructor’s office

Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree

2. I never feel quite sure of myself when I am speaking in the oral exam in the LAB

Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree

3. It frightens me to talk in front of the teacher in the oral exam in the teacher’s office

Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree

4. It frightens me to talk in front of the computer in the oral exam in the LAB

Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree

5. I start to panic when I have to speak without preparation in the oral exam in the
instructor’s office

Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree

6. I start to panic when I have to speak without preparation in the oral exam in the LAB

Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree

7. I worry about the consequences of failing my oral exam in the instructor’s office

Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree

8. I worry about the consequences of failing my oral exam in the LAB

Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree

9. In the oral exam in the instructor’s office, I can get so nervous I forget things I
know.

Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree

10. In the oral exam in the LAB, I can get so nervous I forget things I know.

Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree

11. It embarrasses me to talk in front of the teacher in my oral exam in the


instructor’s office

Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree

12. It embarrasses me to talk in front of the computer in my oral exam in the LAB

Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree

13. Even if I am well prepared for the oral exam in the instructor’s office, I feel
anxious about it.

Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree

14. Even if I am well prepared for the oral exam in the LAB, I feel anxious about it.

Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree

15. I often feel like not going to my oral exam in the instructor’s office

Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree

16. I often feel like not going to my oral exam in the LAB

Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree

17. I can feel my heart pounding when I'm taking an oral exam in the instructor’s
office

Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree

18. I can feel my heart pounding when I'm taking an oral exam in the LAB

Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree

19. I feel more tense and nervous in my oral exam in the instructor’s office than in my
written exams.
Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree

20. I feel more tense and nervous in my oral exam in the LAB than in my written
exams.

Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree

21. I get nervous and confused when I am speaking in the oral exam in the instructor’s
office

Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree

22. I get nervous and confused when I am speaking in the oral exam in the LAB

Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree

23. I get nervous when I don't understand every word in the prompts for the oral exam
in the instructor’s office

Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree

24. I get nervous when I don't understand every word in the prompts for the oral exam
in the LAB

Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree

25. I get nervous when I have to perform oral situations for which I haven't prepared in
advance, in the oral exam in the instructor’s office

Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree

26. I get nervous when I have to perform oral situations for which I haven't prepared in
advance, in the oral exam in the LAB

Strongly agree Agree Neither agree nor disagree Disagree Strongly disagree

APPENDIX G

Pre- and Post- Oral Achievement Test Assessment
(For the investigation of whether or not there is an improvement in students’
speaking abilities).

A) Oral situation topic: Festa de aniversário

You’re asking one of your friends when his/her birthday is and then you realize it
was not too long ago, so you want to find out what he/she did on his/her birthday.

Ask him/her:

1-When his/her birthday is
2-Whether he/she had fun
3-What places he/she went to during the day
4-Whether a lot of people went to his/her birthday party
5-What are some of those people like (physically and psychologically)
6-What he/she has in common with some of them
7-What kind of food he/she had on that day
8-The things he/she usually likes to eat
9-The things he/she usually likes to do on his/her birthday
10-What he/she thinks he/she will do on his/her next birthday

B) Topics included in the assessment:

1- Datas
2- Pretérito perfeito de verbos regulares
3- Pretérito perfeito do verbo “Ir”
4- Pretérito perfeito do verbo “Ir”
5- Adjetivos (Describing people)
6- Pronomes (Nós… ele, eles, etc.)
7- Comida
8- Expressão “Gosto de…” / Comida- (there is an emphasis in the use of the
expression “gosto de…” in 16A and 16B because of the students’ tendency to use
the Spanish expression “me gusta”)
9- Expressão “Gosto de…” + Verbo no infinitivo
10- Verbo “Ir” no futuro- (Distinction from the Spanish “Ir a…” to express future
actions)

APPENDIX H

Instructions for the Pre- and Post- Test and computer-based oral activities taken
throughout the instruction period


1) Get in pairs and share a computer.

2) Use Firefox.

3) Sign in on MyPortugueseLab (choose one of your student accounts).

4) Open today’s oral activity and read the prompts.

5) The student who has signed into his/her account will be the one
answering the questions and also the one using the headphones+mic.

6) To use the headphones, press ALT and, at the same time, click on the sound
icon at the top of the screen, and choose the option Logitech USB Headset.

7) Start recording the conversation

8) When you’re done recording, click “play” to listen to the conversation and make
sure it worked.

9) Sign into the other student’s account

10) Trade roles (again, the student who has signed into his/her account will be
the one answering the questions and also the one using the headphones+mic).

APPENDIX I

Oral assessment activities completed throughout the quarter: “Situações orais” taken
from the textbook “Ponto de Encontro”.
(These activities were completed in the computer lab by the Experimental group and in
the regular classroom by the Control group.)


Atividade 1 (p. 284)

Role A: You were in a tennis clinic (a clínica de tênis/ténis) last summer. Tell your
friend a) where you went, b) the number of days you were there, and c) whether you
had fun and if you liked it or not. Answer his or her questions.

Role B: You would like to know more about the tennis clinic your friend attended last
summer. Ask your friend a) how many instructors he or she had, b) how much he or
she paid for the clinic, c) if he or she improved his or her game, and d) why he or she
liked or didn’t like the experience.

Atividade 2 (p. 320)

Role A: You are interviewing a well-known film critic to find out his or her opinion on
the best and worst American movies of the year. Ask him or her a) which is the best
American film, b) why, c) who is the best actor or actress (also ask him/her to describe
this actor or actress), d) which is the worst film of the year, and e) what he or she thinks
of films of the Portuguese-speaking world.

Role B: You are a well-known film critic. Answer your interviewer’s questions
according to your own opinions regarding the best and the worst American films and
actors.

Atividade 3 (p.350)

Role A: You are an advertising manager (gerente de publicidade) who is presenting a
new ad campaign (campanha publicitária) to the president of the company. After
showing two ads to the president, a) ask him or her if he or she likes them; b) mention
the magazines where the ads will appear; and c) state the reasons why you chose those
magazines. Then answer his or her questions.

Role B: You are the president of an important company who has to decide about a new
advertising campaign. After telling the ad manager that you like the ads and listening to
his or her explanations, inquire a) about the cost of the campaign, and b) when it will
begin.



Atividade 4 (p.360)

Role A: You have moved to São Paulo and have opened a checking account (conta
corrente) at Banco do Estado de São Paulo. Ask a bank employee to help you write out
a check (say the information he or she gives you, such as dates, out loud). After the
employee gives you all the details, a) thank him or her and b) say that you are very
happy with the services the bank provides its customers.

Role B: You are the employee at Banco do Estado de São Paulo. Using a check, explain
to the customer where a) to put the date (say the date out loud), b) to write the payee’s
name, c) to write the amount (quantia) in numbers, d) to write the amount in words,
and e) to sign the check.

Atividade 5- Adapted (p.395)

Role A: You are playing the role of Narciso. You have decided to keep your fiancée and
go on a weight-gain diet. You go to a nutritionist. Tell him or her a) who you are and
why you made the appointment, and b) that you want to know how to gain 20
kilograms. Then ask him or her c) what foods you should eat (minimum of 3 foods), d)
what drinks you should have, and e) whether he or she thinks it is dangerous to gain so
much weight and why.

Role B: You are a nutritionist. Your client wants to gain weight to please his fiancée.
Ask him a) why he made the appointment and what he wants to achieve; then tell him
b) what is necessary to do in general for a person to gain weight, and specifically c)
what foods he should eat, and d) what drinks he should have. Finally, e) tell him that
you do not think it is healthy to gain or lose a lot of weight rapidly and that excessive
weight can cause health problems.









APPENDIX J

Pre- and Post- Test Rating Scale

(For the investigation of whether there is an improvement in speaking abilities when
taking the technology-mediated oral activities throughout the instruction period).


Each ability parameter in students’ answers to the pre- and post-test oral situation is
scored on a rating scale of 0 to 3; the rating levels for each category are described
below the corresponding parameters.

Grammatical structures:
- To be able to use the correct sentence structure for dates (Q1)
- To be able to use the verb in the preterit correctly (Q2)
- To be able to use the preterit of the verb “ir” (Q3 and 4)
- To be able to use the correct structure “gostar de…” (Q8)
- To be able to use the correct structure “gostar de” + infinitive (Q9)

Rating levels for grammatical structures:
0- Cannot use the structure
1- Uses the structure with many errors
2- Uses the structure with some errors
3- Uses the structure with no errors

Vocabulary:
- To be able to use necessary vocabulary to talk about a birthday date (Q1)
  (E.g., numbers; months)
- To be able to use necessary vocabulary to describe people (Q5 and Q6)
- To be able to use necessary vocabulary to talk about food (Q7 and Q8)

Rating levels for vocabulary:
0- Does not use any vocabulary items necessary to complete the task
1- Uses a very limited range of vocabulary items
2- Can use the necessary vocabulary items to complete the task with a few errors
3- Knows and uses the learned vocabulary items well

Communicative functions:
- To be able to talk about past events (Q2, Q3 and Q4)
- To be able to describe people (Q5 and Q6)
- To be able to talk about food (Q7 and Q8)
- To be able to express likes (Q8 and Q9)
- To be able to discuss activities and make future plans (Q10)

Rating levels for communicative functions:
0- Not comprehensible at all
1- Many errors interfering with communication / short “yes” or “no” answers
2- Comprehensible with some communication errors / complete but simple answers
3- Very comprehensible / complete and amplified answers

Overall Communication:
- To be able to successfully perform the oral situation presented

Rating levels for overall communication:
0- Requires extra-sympathetic listening; parts of message still not understood;
minimally successful
1- Topics handled adequately but minimally; ideas conveyed in general; basically on
task but no more
2- Topics handled adequately; ideas clearly conveyed; requires little effort to be
understood; some creativity
3- Displays communicative ease within context(s); creative, resourceful; easily
understood; takes risks

