
9 Assessment and Good Language Teachers

Shahid Abrar-ul-Hassan and Dan Douglas

Introduction
Good language teachers are constantly assessing their students’ performance
in class for a variety of objectives beyond achievement or evaluation purposes. They notice and gather evidence in various forms on how learners
perform in interactive classroom activities, how they complete written and
spoken exercises, what vocabulary they seem to gain and what they struggle
with, what syntactic forms pose problems for them, and what aspects of the
target discourse they control or have difficulties with. For example, a teacher,
after a few weeks of class instruction, could probably assess one of the
students as follows:
James is quite good at understanding and producing spoken French, and his social
vocabulary is one of his strong points, but his use of formal vocabulary and grammar
are a bit weak, and his reading and writing skills are things we’ll have to work on.
Teachers may get the information for this type of assessment spontaneously and
rather casually through noticing during the progression of class activities, or
sometimes more intentionally as when reviewing a homework assignment with
a student or giving feedback on a class presentation. This type of language
assessment is useful for developing or revising lesson plans or for providing
support to learners to help them progress (Little & Erickson, 2015; Norris,
2016). However, there are a number of potential challenges with this type of
informal classroom assessment:
• First, it is heavily grounded in the moment and a specific context, and thus, it
is difficult to assess all learners in the class in the same way, that is, with equal
opportunities to demonstrate their proficiency. Moreover, this practice poses
a challenge of fairness and consistency in language assessment.
• Second, and related to the first issue, during informal assessment, it is
difficult to hold all learners to the same standard, which makes it problematic to compare one with another against criteria that apply to all in the
same way.


Downloaded from https://www.cambridge.org/core. University of Southampton Library, on 10 Mar 2022 at 03:32:00, subject to the Cambridge
Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108774390.012

• Third, all instructors probably have their idiosyncrasies, preferences, pet peeves, and even biases (Weir, 2013), and thus, informal assessment of the kind outlined above can be a bit one-sided and limited in scope.
The challenges of informal assessment are not only a matter of fairness, but
also one of coverage and thoroughness in ensuring that all aspects of
language learning are assessed, not just the ones of particular importance
identified by an individual instructor. Moreover, relying solely on one’s own opinions or impressions of learners’ progress can be somewhat daunting and can place an enormous burden on the judgment of a single person. It is highly desirable to have some evidence or confirmation that the instructor’s estimation of learners’ language acquisition is accurate and fair. Finally, while informal assessments of progress are useful to the teacher in the context of daily classroom activities, stakeholders outside the classroom, such as administrators, parents, and upper-level makers of educational policy, may not be confident in the outcome of an instructor’s assessment, as such audiences require generally accepted evidence of performance.
In the wake of new developments in language assessment, it is important to consider a number of alternatives because formal tests, such as multiple-choice, fill-in-the-blank, or essay exams, are not the only way to conduct language assessments (Carless, 2015; Norris, 2016). For example, practitioners can assess learners by means of:
• Conference assessments, in which learners meet with the teacher to discuss
a particular piece of work or an assignment.
• Observational assessments, where the instructor observes student performances and records, often with a checklist, aspects that are satisfactory or
that require intervention.
• Portfolio assessments, collections of learners’ work that chart progress over
time.
• Self- and peer-assessments, in which learners evaluate their own work, or
that of their fellow students, again, often with a checklist.
• Task-based and performance assessments, where learners are given specific
tasks to carry out in the target language and are rated on relevant aspects of
the performance.
• Dynamic assessments, which combine assessments of what learners are able to
do at the moment with an assessment of their potential for progressing in the
future.
Furthermore, two issues are fundamental to good language assessment practice
and the ethical use of language test results:
• Test reliability, the consistency and dependability of test scores.
• Test validity, the appropriateness of inferences, uses, and decisions based on
test results.
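Reliability, the first of these concepts, is often estimated statistically for classroom tests. As a minimal illustrative sketch (not part of this chapter), the KR-20 coefficient for a dichotomously scored (right/wrong) test can be computed as follows; the item-score matrix below is invented data:

```python
# Illustrative sketch: KR-20 reliability for a right/wrong-scored test.
# The score matrix is invented; rows are test takers, columns are items.

def kr20(items):
    """items: list of test takers, each a list of 0/1 item scores."""
    n = len(items)             # number of test takers
    k = len(items[0])          # number of items
    totals = [sum(row) for row in items]
    mean = sum(totals) / n
    # population variance of total scores
    var_total = sum((t - mean) ** 2 for t in totals) / n
    # sum of p*q across items, where p = proportion answering correctly
    pq = 0.0
    for i in range(k):
        p = sum(row[i] for row in items) / n
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_total)

scores = [
    [1, 1, 1, 0, 1],
    [1, 0, 1, 0, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
]
print(round(kr20(scores), 2))  # prints 0.65
```

Values closer to 1 indicate more consistent scores; a five-item quiz like this invented one would normally need many more items to reach conventional reliability thresholds.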


We will refer to these concepts later in the chapter. Two more basic concepts we
will introduce here and refer to later are the following:
• Summative assessments, which are usually carried out at the end of a unit or
course of study to measure achievement.
• Formative assessments, the purpose of which is to provide learners with
information about their progress that they can use to guide their continuing
learning, or to provide teachers with information that they can use to guide
course development and lesson planning.
We will turn now to a consideration of the practice of language assessment.

Language Assessment in Practice


Language assessment is a general term for the practice of evaluating a learner’s
progress in acquiring a language. A language test is one specific way of doing
this, but there are a number of alternatives in language assessment, which we
will discuss later in the chapter. A language test is essentially a device or
psychometric tool for measuring a person’s knowledge of and/or skill in
using a language. Language knowledge, however, is not directly observable,
so when instructors wish to measure it, they have to ask the learner to perform
some linguistic task, observe the performance, and infer the level of knowledge
or skill underlying that performance. Thus, a value judgment is made and
expressed as a number or a descriptor, such as excellent or average. In this
practice, three key concepts in language assessment become relevant: test,
measurement, and evaluation. It is possible to evaluate a person’s language
ability without measuring it, as mentioned in the preceding section. Most good
language teachers can do this kind of evaluation based on their teaching
experience and familiarity with their learners, which is a common practice in
many language teaching programs. Since measurement is the assignment of a number representing a level of performance, it can be done without giving a test. For example, an instructor might draw upon various sources of information, such as students’ performance on homework assignments, class participation, and formal presentations, to assign a grade or a mark, though no test may have been involved.
In order to achieve the advantages associated with tests (fairness, consistency, thoroughness, and generally accepted evidence of reliability and validity), however, good language teachers need to be well versed in the use of language assessments in the classroom – or, in other words, be assessment literate. Assessment literacy, according to Stiggins (1995), is competence in how to assess and what to assess, which entails expertise in generating valid and reliable measurements, a full awareness of potential confounding elements, and an ability to respond to potential pitfalls in assessment practice. Teachers may think of tests rather negatively, picturing unrealistic,


noncommunicative tests of discrete points of grammar or vocabulary that tell very little about learners’ abilities to use language in meaningful ways.
Therefore, alternatives in language assessment are becoming increasingly relevant (Turner & Purpura, 2015), and a number of alternative language testing formats are commonly employed.
In discrete-point assessment, each test task targets a single piece of grammatical or vocabulary knowledge, for example:
What is nose in Swahili?
a. Bega
b. Shingo
c. Pua
d. Moyo
If there are thirty or so items like this in a test, we can get a quick picture of how
well learners have mastered recently taught vocabulary (the answer is “c. Pua,”
by the way), but it does not tell much about their more general vocabulary
knowledge or how well they can use the language for communicative purposes.
To capture this, we need integrative language tests (Cheng & Fox, 2017), in
which tasks require a combination of types of knowledge and skill, as in the
Norwegian example below:
Gustav er på biblioteket for å låne en bok han kan lese for sønnen sin som er fem år. Tidligere har sønnen hørt mye på lydbøker. Han liker historier som får ham til å le. Han blir fort skremt og liker ikke at det blir for spennende. Gustav spør om hjelp til å finne en bok som passer for en femåring. Selv leser Gustav lite, han liker best å se film.
Hvilke historier liker sønnen?
a. Triste historier
b. Spennende historier
c. Morsomme historier
d. Fantasifulle historier

In this test task, the learners have to read a passage, using their knowledge of vocabulary and syntax, how texts are constructed in Norwegian, and cultural information about libraries to make the inference that because the son likes stories that make him laugh, the correct answer is “c. Morsomme historier (funny stories).”
This type of integrative test task is part of a family of language tests generally
known as communicative language tests. The idea behind communicative
language assessment is that even if learners control all the components of
a language – the phonology, the vocabulary, the syntax – they may still be
unable to communicate without an ability for language use (Douglas, 2010).
This ability includes knowledge of what functions the grammar can communicate (e.g., how to apologize, complain, congratulate) and what is socially
appropriate to say or write in a particular situation (e.g., whether one can use


a first name or must use a title and family name). Communicative tests involve
productive language used in specific contexts, as realistic input for learners to
comprehend and respond to, and/or as output produced by the learners. Reading
comprehension tasks, such as the one illustrated above, speaking tasks in
interview format, listening comprehension tasks in which test takers hear an
extended text and then must produce a summary, and writing tasks involving
filling in a personal information form are all examples of communicative
assessment tasks. Such tasks may be scored on a right/wrong basis, as in the Norwegian task above, or rated according to a scale, either as a whole performance (e.g., superior, good, average, poor) or as categories of language knowledge with separate scores (say, three out of four) for grammar, vocabulary, usage, mechanics, style, and organization as well as a total score.
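The category-by-category rating just described can be sketched as a simple scoring routine. This is an illustrative sketch only: the category names come from the list above, while the 0–4 scale, equal weighting, and sample ratings are assumptions.

```python
# Illustrative sketch of analytic scoring for a rated communicative task:
# separate category scores plus a total. The 0-4 scale is an assumption.

CATEGORIES = ["grammar", "vocabulary", "usage", "mechanics",
              "style", "organization"]

def score_performance(ratings):
    """ratings: dict mapping each category to a 0-4 rating."""
    missing = [c for c in CATEGORIES if c not in ratings]
    if missing:
        raise ValueError(f"unrated categories: {missing}")
    total = sum(ratings[c] for c in CATEGORIES)
    return {"by_category": dict(ratings),
            "total": total,
            "max_total": 4 * len(CATEGORIES)}

# Invented ratings for one learner's written performance
result = score_performance({"grammar": 3, "vocabulary": 4, "usage": 3,
                            "mechanics": 2, "style": 3, "organization": 4})
print(result["total"], "/", result["max_total"])  # prints 19 / 24
```

An analytic record like this also supports formative feedback, since the learner can see exactly which categories contributed to the total.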
Another approach to measuring learner progress is by task-based or performance assessment (Norris, 2016). These two terms have somewhat different
origins and histories in language testing, but they will be used synonymously in
this chapter. As with integrative and communicative language testing, task-
based and performance assessment emphasize specific purpose, authentic, and
complex language use so that both learners’ knowledge of the language and
ability to accomplish communicative tasks with it can be assessed (Douglas,
2010; Norris, 2016). Performance tasks can range from the traditional written
essay or speaking task to more complex group projects in which learners have
to work together to solve a problem with a clear communicative goal. In rating
performance on such tasks, there is the problem of deciding whether successful
completion of the task itself is necessary or not. On the surface, it would seem
that task completion is essential, but tasks can be difficult to complete for
reasons that may be outside the test takers’ control – for example, in a task
where learners design a poster aimed at attracting tourists to their country, their
colored markers may run out of ink before they finish the actual poster. The overriding concern in task-based assessment should be with the process, not the outcome, and the focus should be on the language used rather than on the correct answer or response.
Tests are usually considered distinct procedures in which learners are given
instructions and input material that will elicit a language performance in
a measurable way to make an inference about the levels of language knowledge
and skill. However, there are other approaches to assessment that are perhaps
closer to evaluation, which seem less like tests. For example, conference
assessment usually involves teachers meeting one-on-one with learners to
review an assignment, give feedback, or make suggestions for revision
(Brown & Hudson, 1998). In this way, the instructor gets a clear picture of
how well learners are doing and what they may or may not understand.


Moreover, talking about a particular piece of work with the teacher can help the
learner develop self-awareness and critical skills.
Closely related to conference assessment is portfolio language assessment
(Hamp-Lyons & Condon, 2000). This involves helping learners compile
a collection of their work, whether written (either on paper or in electronic
format, including blogs or email exchanges) or spoken language. Portfolio
assessment aims to help the learner tune into the learning process and see
progress over time. The instructor can gain diagnostic information about the
learner’s strengths and weaknesses, not only from examining the material in the
portfolio but also by interacting with the learner during the portfolio process.
Portfolios may contain samples of the learner’s best work over time (a showcase portfolio), or examples of successive drafts of an extended assignment or different assignments illustrating stages of learning and performance on different types of tasks (a progress portfolio). However, an obvious drawback to
both conference and portfolio language assessments is that they are time-consuming and require extra resources, such as training, to ensure consistency of measurement, or reliability.
Another approach to helping learners become more self-aware and aware of learning goals and criteria involves self- and peer-assessment (Lim, 2007; Turner & Purpura, 2015). This approach can enhance learner autonomy and self-motivation. There are two approaches to self- and peer-assessment:
• Learners can be given a set of criteria, or can develop one with the instructor or in groups, and then use these criteria to evaluate their own or others’ work, either during class or in communicative situations outside of class.
• Alternatively, learners may check can-do statements to self-evaluate communicative ability in various circumstances. Examples of can-do statements include “Can go to a department store or other shop where goods are on display and ask for what I want” and “Can ask for a refund or exchange of faulty or unwanted goods” (ALTE, 2002).
Of course, the accuracy of self- or peer-assessments varies with the complexity of the knowledge or skill being assessed. For example, it is more straightforward for learners to assess vocabulary knowledge than it is to assess rhetorical effectiveness. Accuracy also varies according to how important the assessment is for the learners – when they are rating themselves for the purpose of a class discussion, modesty militates toward lower self-ratings, but when their assessments might result in shortening a particular section of the curriculum, ratings tend to be higher. It is important to give the learners plenty of practice with self- and peer-assessment procedures before conducting them in earnest.
We will now discuss best practices in language assessment in the context of
a small survey of practitioners in the field.


A Study of Best Practices and Assessment Literacy


In order to understand assessment practices in general and the assessment literacy of language practitioners in particular, a version of reflective inquiry (Farrell & Mom, 2015) was utilized. The instrument was a questionnaire eliciting reflections from language practitioners in response to seven prompts about different aspects of language assessment. The instrument yielded qualitative data
about the participants’ reflections on the development of assessment, objectives
and uses of assessment, commonly used summative and formative assessment,
best practices, their contentment with the outcome of assessment as well as
with practitioners’ assessment skills, and the sources of their learning in
assessment. A sample of nine postsecondary English language practitioners
from public and private sector institutions in Canada, which was reasonably
representative in terms of professional experience, gender, and work settings,
was purposefully selected (Creswell, 2014). The participants’ consent for
voluntary participation was sought and the instrument was delivered through
the web. The data were coded into segments (Creswell, 2014) and NVivo
software was also used in the analysis.
The reflections of the participants indicate their assessment literacy (Stiggins, 1995), which is manifest in their professional practice. This study is informed by sociocultural theory: based on Vygotsky’s view of learning as a socially mediated activity, teaching practices are largely formed experientially and are socially constructed from practitioners’ professional experiences (Johnson, 2009). Thus, the reflections of the participants provided insider views representing language teaching communities (i.e., emic perspectives) that are based on substantive experiences as members of a professional community. Overall, there seems to be agreement among the participants on the inadequacy of the assessment literacy of English language practitioners across teaching contexts. The data were analyzed by clustering the topics under the following four research questions in the study:

i) How was assessment developed in your teaching context, and what were the main objectives as well as uses?
The development of assessment plans, procedures, and instruments is a lengthy and iterative process. This process involves a number of stakeholders, such as coordinators, administrators, practitioners, and test developers. Although limited or no involvement of classroom practitioners in the process is not uncommon, any specific assessment plan is rarely externally mandated.
The data showed that the assessment plans at a program or institute level were
developed following a process model. This model involved an academic


administrator in the lead role; he or she initiated as well as managed the process
with some contribution from the instructors who were invited to participate in the
process. Moreover, the role of practitioners varied and included, for example,
providing feedback in meetings, discussing tests, piloting a test, and contributing
to an assigned test project. One participant highlighted the process model, saying:
When a draft is finally completed, the exam goes through a piloting and item analysis
phase. Feedback from instructors is also collected . . . . It is then revised, piloted, and
item analyzed again before it is finally used.

And, similarly, another participant mentioned the management and the key
aspects of the process:
The program coordinator is responsible for all development in conjunction with the lead
teacher . . . for assessment; both individuals have the knowledge, skills, and experience
to lead the development. The program administrator has oversight over the development
and gives final approval.

The key feature of institution- or program-wide assessment development is that it is a collaborative process involving practitioners in supporting roles. However, the data indicated that at the class level, practitioners have greater autonomy in developing small-scale, low-stakes assessment tools. In other words, the role of practitioners has an inverse relationship with high-stakes, large-scale assessments, which are mostly summative or exit mechanisms.
Regarding the objectives of assessment in English language teaching, student achievement remained the overriding objective. Practitioners and administrators were focused on gauging students’ linguistic skills at some specific stages of the course or program. The crucial dimension of this measurement
was its use as an indicator of the effectiveness of the course or program to reach
target goals. For students, this measurement provided evidence of learning and of meeting benchmarks. Some other uses of assessment included
diagnosis, placement, and feedback to both students and instructors, which
indicate that assessment was seen as a multidimensional curricular practice.
Therefore, the evidentiary aspects of language assessment had repercussions
for all stakeholders, including students, instructors, and administrators.

ii) What were the frequently used summative or formative assessments as well as some best or most effective language assessment practices?
Although commonly used assessment practices were limited to some traditional approaches, a variety of tools was used across the settings. These tools
for formative assessment included short quizzes, unit tests, teacher feedback,
and peer/teacher observation. For summative assessment, the most frequently


used tools were exit examinations, portfolios, task- or project-based assessment, homework and written assignments, and peer/self-assessment. Thus, awareness of wide-ranging and alternative assessment was evident, which supports the earlier finding regarding the multidimensionality of assessment practices. One participant noted:
Over the past few years, the focus on traditional summative assessment methods, such
as formal skills tests at the end of each term and at midterm, has given way to alternative
methods such as portfolio and task-based assessment.

It is pertinent to mention that participants viewed formative assessment as less formal and consequential than summative assessment. Moreover, the somewhat shifting focus away from traditional summative assessment is perhaps due to its inherent limitations and the higher validity of alternative assessment forms, which involve a great deal of formative assessment. The construct of validity, as mentioned earlier, refers to the extent to which an assessment tool or item measures what it is intended to measure.
Best practices in this study were defined as research-informed assessment
practices that were promising or yielded higher validity and efficacy in
a particular teaching context. A best practice is an important part of
a practitioner’s professional repertoire. According to the data, the overarching best practice was a multifaceted assessment plan that utilized a variety of assessment tools, especially small-scale tools rather than large exit examinations.
One participant asserted that:
a combination of both formative and summative assessment practices could work best. It
is immensely important for educators to plan varied and valid assessment tasks.
Their best practices, such as a standardized placement test and project- or task-based assessment, echoed the findings presented in the earlier sections, which
included tackling the multidimensionality of learning and assessment using
a wide range of testing tasks and tools. Furthermore, the best practices of the
participants in this study reflected assessment that cultivated a positive impact,
used integrated skills (as opposed to discrete), and employed systematic or
standardized testing. Considering the specialized nature of assessment, the data showed support for having an assessment lead at the program level for developing alternative assessment collaboratively with instructors.

iii) Did stakeholders feel content with the outcome of language assessment, and what were the sources of your assessment literacy, especially the most effective ones?
The participants expressed discontent with the outcome of language assessment for two major reasons:


• First, their concerns emanated from the perceived low reliability, validity, impact, or authenticity of the assessments.
• Second, they expressed their apprehension regarding the overreliance on or
widespread popularity of traditional examinations among stakeholders, such
as instructors, administrators, and student guardians/sponsors.
Those who expressed some satisfaction based it on the fact that assessment
manifested “the process of multistep exam development and standardization.”
Moreover, the data presented a clear picture of a common practice of utilizing assessment in isolation rather than as an embedded pedagogical practice. This finding expands on the achievement focus of language assessment as mentioned in the first section. One participant highlighted that:
from a critical pedagogical perspective, issues of fairness, judgment, and power rela-
tions come into play. Interaction between instructors and students should create roles in
which all voices are validated in assessment design and assessment is seen as negotiated.
Nevertheless, the data indicated some degree of satisfaction with the outcome
of assessment because the assessment was likely to be fair and power was
evenly distributed due to collaboration among instructors, such as using stan-
dard rubrics and maintaining some level of consistency.
The predominant sources of the participants’ assessment literacy were their foundational graduate courses and experiential learning. The most effective learning opportunities were afforded through in-service workshops, online courses, peer support, and hands-on project work. These opportunities, which were essentially collaborative, proved to be customized learning zones and facilitated the development of the participants’ tacit knowledge. Similarly, one participant found
that “longitudinal on-the-job training can potentially have maximal efficacy.” In spite of having access to, and making use of, a variety of learning opportunities, all the participants were keen on enhancing their assessment literacy. In particular,
they identified specific areas of assessment literacy for further training, such as
developing scoring rubrics, designing task-based assessment, utilizing
a sociocultural approach in assessment, and expanding alternative assessment
options.

iv) Were you satisfied with the language assessment skills of practitioners
in the postsecondary setting, in general, including yours?
A key aspect of this reflective inquiry was to gain an understanding of the participants’ self-assessed level of assessment literacy. Their professional experience ranged from nine to twenty-plus years of English language education in postsecondary settings. The participants unanimously expressed dissatisfaction with their own and other practitioners’ assessment literacy. Although some were more empathic than others, they stressed that


optimized language assessment was the missing (or the weakest) link in
instructional practice. Reflecting on extensive experience, one participant
noted that:
there is generally a lack of assessment literacy; knowledge of key assessment principles
and skills in developing and analyzing assessment tools and procedures. . . . There is
a disconnect between instruction and assessment; what is taught, what is being tested
and learner goals.
This inadequacy of practitioners’ assessment literacy stems from several
factors:
• First, language assessment courses are not comprehensive and tend to focus
on testing.
• Second, these courses do not facilitate fully turning practitioners’ declarative
knowledge into procedural knowledge and then honing this procedural
knowledge through assessment projects.
• Third, language programs are teaching-centered, where assessment is primarily limited to achievement purposes. A participant emphasized, “We all need
to receive more language assessment training in order to serve our students
better.”
In view of language assessment being relegated to a diminished role, “There is
a long way to go to upgrade the skills and knowledge of practitioners,”
according to one participant.

Implications for Classroom Practice


In light of the foregoing discussion, good language teachers need to acquire
adequate assessment literacy and become familiar with new approaches to
language assessment. An emerging approach to classroom-based language assessment that focuses on learners themselves, and on their taking charge of their own learning by means of feedback from the good language teacher, is known as learning-oriented language assessment (LOLA).
LOLA refers to assessment with a primary focus on promoting productive
student learning processes (Carless, 2015). It involves three interrelated
components:
• First, productive assessment task design allows students to be assessed on meaningful tasks that require higher-order learning outcomes: analysis, evaluation, and creativity.
• Second, LOLA promotes activities that support students in developing
understandings of what quality work looks like: going beyond instructions
and lists of criteria to explore quality academic performance.
• Third, this approach to assessment encourages feedback processes that focus
less on telling and more on entering into different forms of dialogue about
student work, so that students can be primed to engage with and act on
feedback messages.
LOLA assumes that the primary goal of assessment in classroom contexts is to
promote learning processes and further successful learning outcomes, whether
this involves formal and planned or informal and spontaneous assessments
embedded in instruction (e.g., Katz, 2014). For instance, formal, planned
language assessment might include end-of-chapter tests that reflect the content
and style of learning activities to help learners see what they have learned and
where they would benefit from more practice. Planned assessments might also
involve regularly scheduled learner activities such as group discussions, indi-
vidual presentations, or team projects that give the instructor opportunities to
evaluate and comment on the performances, providing feedback that can be
used to motivate the learners and help them improve. Language assessment in the classroom might also be more informal and unplanned, as when, during a vocabulary activity, the teacher notices that the learners have not fully grasped the meaning of an item and interrupts the activity to elaborate on it and deepen their understanding. The goal of learning-oriented language assessment is the development of knowledge, skills, and
abilities over time in the classroom through planned and unplanned assess-
ments and the provision of interactional feedback that facilitates learning
(Turner & Purpura, 2015) in a language teaching context.

Directions for Further Research


Since LOLA is a relatively new concept in assessment practice, much scope for
further research remains to determine its effectiveness and the best ways to
implement such an approach in the classroom. In particular:
• Research is needed about the question of how to deal with teachers’
expressed dissatisfaction with their own assessment literacy.
• Research is needed to explore how teachers can be better trained to deal with
assessment issues in their classrooms.
• Research-based assessment competency models specific to various language teaching sectors urgently need validation in order to inform teachers, teacher trainers, and program managers.

Conclusion
This chapter has considered how good language teachers can assess their
students in terms of evaluation, measurement, and testing, and reasons have
been suggested for using more formal language tests, including providing for
fairness and consistency in evaluating learners’ progress, holding all learners to
the same standard, ensuring more complete coverage of what has been taught,
getting a “second opinion” in support of the teacher’s own evaluation, and finally providing more generally accepted evidence of progress for stakeholders outside the classroom, including administrators and parents.
A number of alternatives in language assessment have been discussed, including discrete-point and integrative tasks, communicative tasks, task-based assessments, and performance assessments, as well as less “test-like” assessments such as conferences, portfolios, and self- and peer-assessments.
The study reported here presented perspectives on language assessment from
classroom teachers, including aspects of assessment literacy that they think are
of greatest use and their views on best practices in classroom assessment. The
teachers in this study expressed general dissatisfaction with their own assess-
ment literacy and with their own training for assessment. Since the goal of
assessment is the promotion of learning, we must conclude that assessment
literacy is an important skill for good language teachers to develop.

References
ALTE. (2002). The ALTE can do project. Cambridge: Association of Language Testers in Europe. Retrieved from: www.cambridgeenglish.org/images/28906-alte-can-do-document.pdf
Brown, J. D., & Hudson, T. (1998). The alternatives in language assessment:
Advantages and disadvantages. TESOL Quarterly, 32(4), 653–675.
Carless, D. (2015). Exploring learning-oriented assessment processes. Higher
Education, 69(6), 963–976.
Cheng, L., & Fox, J. (2017). Assessment in the language classroom. London: Palgrave.
Creswell, J. (2014). Research design: Qualitative, quantitative, and mixed methods
approaches (4th ed.). Thousand Oaks, CA: Sage.
Douglas, D. (2010). Understanding language testing. London: Routledge.
Farrell, T., & Mom, V. (2015). Exploring teacher questions through reflective practice.
Reflective Practice, 16(6), 849–866.
Hamp-Lyons, L., & Condon, W. (2000). Assessing the portfolio: Principles for practice,
theory, and research. Cresskill, NJ: Hampton Press.
Johnson, K. (2009). Second language teacher education: A sociocultural perspective.
New York, NY: Routledge.
Katz, A. (2014). Assessment in second language classroom. In M. Celce-Murcia,
D. Brinton, & M. Snow (Eds.), Teaching English as a second or foreign language
(pp. 320–337). Boston, MA: National Geographic Learning.
Lim, H. (2007). A study of self- and peer-assessment of learners’ oral proficiency. In
N. Hilton, R. Arscott, K. Barden, A. Krishna, S. Shah, & M. Zellers (Eds.),
Proceedings for the Fifth Cambridge Postgraduate Conference in Linguistics (pp.
169–176). Cambridge: Cambridge Institute of Language Research.
Little, D., & Erickson, G. (2015). Learner identity, learner agency, and the assessment of
language proficiency: Some reflections prompted by the Common European
Framework of Reference for Languages. Annual Review of Applied Linguistics, 35,
120–139.

Norris, J. (2016). Current uses for task-based language assessment. Annual Review of
Applied Linguistics, 36, 230–244.
Stiggins, R. (1995). Assessment literacy for the 21st century. Phi Delta Kappan, 77(3),
238–245.
Turner, C., & Purpura, J. (2015). Learning-oriented assessment in the classroom. In
D. Tsagari & J. Banerjee (Eds.), Handbook of second language assessment
(pp. 255–272). Berlin: De Gruyter Mouton.
Weir, C. (2013). An overview of the influences on English language testing in the United
Kingdom 1913–2012. In C. Weir, I. Vidakovic, & D. Galaczi (Eds.), A history of
Cambridge English examinations (pp. 1–102). Cambridge: Cambridge University
Press.
