
Technology-Enhanced Language Assessment: Innovative Approaches for Better Learning

[Image: students using virtual reality goggles]


Dr Hye-Won Lee of Cambridge Assessment English explains how the organisation is
implementing new technologies such as computer-based testing and artificial intelligence while
always keeping the learner central to the process.

The past two decades have seen major changes in the way people communicate. In this digital
age, we can deliver messages instantaneously at any time and from anywhere as long as there
is internet or mobile network coverage. Writing today is mostly done and shared on screens
rather than in print and is usually supported by smart spelling and grammar tools and/or online
dictionaries.

Video-conferencing technology has removed the physical barrier of face-to-face communication over a distance, allowing us to reach out conveniently to people from all over the world. We
can even interact with electronic devices supported by artificial intelligence (AI) technology to
perform various everyday tasks such as creating reminders and turning on lights at home.

Information and communications technology has also changed language learning and
teaching. A traditional classroom is no longer the only place where formal learning takes place.
Distance learning that allows students to attend classes and access learning materials remotely
has made education more affordable and flexible.

Paper-based textbooks are being replaced by more engaging digital textbooks containing
hyperlinks, interactive presentations, and videos. Additionally, a digital learning environment
enables students to set their own pace of study and teachers to track students’ progress more
efficiently.

These technological advances in the educational context as well as our daily life have greatly
impacted the assessment of English language proficiency. Cambridge English strives to adapt to
these changes in our current and future test design without compromising our philosophy of
communicative language assessment with the learner at the centre.

Our endeavours to integrate new technology for the better have had several outcomes,
outlined below:
1. Adaptive testing based on a candidate's level of ability.
2. Quick reporting of results enhanced by AI-enabled marking.
3. Various modes of testing available to stakeholders.
4. Instantaneous feedback for enhancing learning and teaching.
5. Innovative assessment for the future.

1. Questions adaptive to learner ability
Computer-adaptive testing (CAT) is based on the tailoring of test questions to each candidate’s
language ability, as in the Linguaskill Reading and Listening components, for instance. In a
paper-based (PB) linear test, candidates from the same testing cycle usually receive the same
set of questions, possibly in exactly the same order. Although the test is targeted at the majority of candidates, some questions will inevitably be too difficult or too easy for certain learners.

In contrast, CAT, currently available in listening and reading tests, provides questions according
to a candidate’s test performance. The computer decides the candidate’s level of ability based
on his or her response pattern using an iterative algorithm and eventually provides only a select
subset of the questions from a large test item bank to measure the target ability. All the
questions in the test item bank have been carefully calibrated prior to test delivery, which
enables the computer to change the difficulty level of questions as the test unfolds.
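
In simplified form, the loop the computer runs might look like the following sketch. The Rasch-style item model, the fixed-step ability update and the illustrative item bank are assumptions made for this example only; they are not the Linguaskill algorithm, which re-estimates ability statistically from the full response pattern.

```python
import math
import random

def probability_correct(ability, difficulty):
    # Rasch model: probability that a candidate at `ability` answers
    # an item of calibrated `difficulty` correctly.
    return 1.0 / (1.0 + math.exp(difficulty - ability))

def run_adaptive_test(item_bank, answer_fn, n_items=8):
    """Administer n_items, each time selecting the unused item whose
    calibrated difficulty is closest to the current ability estimate."""
    ability = 0.0
    remaining = list(item_bank)
    for _ in range(n_items):
        item = min(remaining, key=lambda d: abs(d - ability))
        remaining.remove(item)
        # Crude fixed-step update: move the estimate up after a correct
        # answer, down after an incorrect one. (Operational CATs
        # re-estimate ability statistically at every step.)
        ability += 0.5 if answer_fn(item) else -0.5
    return ability

# Simulate a strong candidate (true ability 1.5) answering probabilistically.
random.seed(0)
bank = [-2, -1.5, -1, -0.5, 0, 0.5, 1, 1.5, 2]
candidate = lambda difficulty: random.random() < probability_correct(1.5, difficulty)
estimate = run_adaptive_test(bank, candidate)
```

Because each answer steers which item comes next, a candidate quickly converges on items near their own level, which is why an adaptive test can be shorter than a linear one.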

These individualised tests may give candidates a more positive test experience by reducing their
anxiety or fatigue during test-taking. Because an adaptive test consists mostly of items targeted
at language skills associated with a specific level of ability, it is commonly shorter than
a PB linear test and can offer immediate results by the end of the test. More information about
how Cambridge English CAT is constructed and administered can be found in issue 59 of the
Cambridge Assessment English publication Research Notes.1

2. Quick turnaround of test results


Computer-based (CB) tests in general can increase flexibility and efficiency in test
administration and scoring. On-demand testing is possible as testing can take place at a time
and place that best suits the stakeholders, which allows decision-makers to receive the test
results in a timely manner.
Automated marking of writing and speaking skills is also made possible by recent advances in
AI. Machine-learning algorithms can be trained to perform human-like evaluation on learner
speech and writing. The Writing module of Linguaskill, for example, is marked by an auto-
marker and a group of human examiners. When the auto-marker reports that it is not able to
accurately assign a score, it escalates the script to a human examiner. The examiner's mark is then added to the auto-marker's training data to further improve its scoring capacity; for more detailed information, see Insights: Keeping artificial intelligence human.
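
As a rough sketch, the escalation logic might look like this; the numeric confidence score, the threshold value and the function names are illustrative assumptions rather than details of the Linguaskill auto-marker.

```python
def mark_script(script, auto_marker, human_examiner, training_data,
                confidence_threshold=0.8):
    """Hybrid marking: accept the automated score when the auto-marker
    is confident, otherwise escalate the script to a human examiner."""
    score, confidence = auto_marker(script)
    if confidence >= confidence_threshold:
        return score
    # The auto-marker flagged this script as hard to score reliably,
    # so hand it to a human examiner.
    human_score = human_examiner(script)
    # Feed the human judgement back so the auto-marker can be retrained.
    training_data.append((script, human_score))
    return human_score
```

The design choice is that every low-confidence script both gets a reliable human mark now and improves the machine's future accuracy.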

Compared to labour-intensive human marking alone, this ‘hybrid’ approach combines the strengths of human and machine, making marking both more reliable and more time-efficient. Owing to speedy auto-
marking, Linguaskill candidates typically receive their Writing test results within half a day of
completion of the test.

3. Flexibility to meet stakeholder preferences


Despite the advantages of CB testing, concerns may arise about the impact of candidates’
computer literacy, such as their familiarity with reading on a screen and proficiency in typing on
a keyboard, on their test performance. Further concerns arise around the impact of technology
on the test construct itself, i.e. on the skills the test aims to elicit. From the early stages of test
development, Cambridge English carefully considers and researches such issues related to test
validity.

An accumulating body of research conducted on Cambridge English tests has indicated that the
Listening, Reading and Writing sections of the traditional PB mode and new CB mode result in
comparable test scores.2 Based on this research, tests for these language skills are offered to
candidates in the CB mode as an alternative to the PB equivalents. For example, the Listening,
Reading, and Writing components of IELTS are currently available in both PB and CB versions.3
Candidates thus enjoy the freedom to select the test mode which reflects their primary means
of communication.

Cambridge English is also exploring the option to deliver direct oral proficiency interviews
remotely using video-conferencing technology. This innovative test mode preserves the
interactional nature of speaking and also makes face-to-face speaking tests more accessible to
learners from geographically remote or politically unstable areas. Recent studies on the IELTS
Speaking test4 demonstrated score equivalency between the standard and video-conferencing
modes of test delivery, suggesting that the new test mode holds great promise.

4. Automated feedback for personalised learning


With the support of fast-developing technology, learners can receive instant, detailed automated feedback on their performance. This promotes individualised learning more effectively than many earlier forms of assessment, which focus only on the outcome of an instructional unit. Ideally, individualised feedback provided to learners can also be utilised by
teachers to inform teaching.

These views are manifested in the Learning Oriented Assessment (LOA)5 philosophy, a systematic approach to linking assessment to learning that underlies the design of Cambridge
English tests and learning materials.

A CB diagnostic language test – of which Cambridge English has so far developed and trialled
two prototypes – is a good example of how learning-oriented assessment is realised in the
context of classroom assessment. Instead of giving test scores, the test provides each test-taker
with instant diagnostic feedback on their strengths and weaknesses. Teachers also receive
group-level feedback that details students’ areas for improvement and links to relevant online
teacher resources based on the Cambridge English Curriculum. In this case, automated
feedback equips teachers with knowledge of their students and enables learners to take control
of their learning, thus creating an intimate connection between assessment and learning.

Write & Improve is another example of LOA, where learners can practise and improve their
writing skills with the help of AI-powered automated feedback. Learners can submit their work
to this free online tool repeatedly and revise it based on real-time, machine-generated word- and sentence-level feedback on their working drafts.6

Engaged in this scaffolded, customised evaluation cycle, learners can concentrate on areas that
need more attention, polish their writing, and learn from this iterative process (see the Insights
article linked to above for more details).
In addition, course-level instruction can (and should) be systematically aligned with learning-
oriented assessment, as achieved in Empower,7 a successful outcome from a collaborative
project between Cambridge English and Cambridge University Press.

Empower is a series of general English course books that comprise online unit progress tests on
the target lexis, grammar and functional language, and automated speaking tests on
pronunciation and fluency.8 An individual learner takes the unit tests on an online learning
management system and, depending on their achievement in each section of the test, is immediately and automatically assigned personalised online activities for further practice.
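
A minimal sketch of this kind of score-based routing, assuming hypothetical section names, a pass mark of 70 and made-up activity names (none taken from Empower itself):

```python
def assign_practice(section_scores, pass_mark=70):
    """Map each unit-test section scored below the pass mark
    to a follow-up practice activity."""
    # Hypothetical catalogue of remedial activities per section.
    activities = {
        "grammar": "extra grammar drills",
        "vocabulary": "vocabulary review set",
        "functional language": "functional language dialogues",
    }
    return [activities[section]
            for section, score in section_scores.items()
            if score < pass_mark]

# A learner weak on grammar and functional language gets two activities.
todo = assign_practice({"grammar": 60, "vocabulary": 85,
                        "functional language": 50})
```

A real system would draw the thresholds and activity links from the course's learning management system rather than hard-coding them.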

The impact studies on Empower have shown that learners demonstrated improved
performance on their second attempt of the same unit test after additional practice, and
reported that the seamlessly integrated learning and assessment cycle helped them to learn
and understand their own strengths and weaknesses better.9

5. Innovation for next generation assessment


New forms of digital assessment and learning are expected to become ever more innovative, some of
which we are actively exploring:

Quiz your English is a gamified, multiplayer mobile application for practising vocabulary and
grammar skills.
Game-based assessment is seen as a fun and ideal way to immerse learners in the cycle of
learning and assessment.
Virtual reality technology is being trialled as a medium for simulating real-life tasks and eliciting
more authentic learner performance.
We believe that these future assessments will pave the way for a learning ecosystem where
technology supports meaningful experiences for learners.

To conclude...
Technology is integral to our daily lives nowadays and will continue to change the way we use
and learn languages. Cambridge English endeavours to improve stakeholders’ experience by
integrating state-of-the-art technologies into its test design, keeping the test construct or the
language skills being assessed up-to-date, and uniting assessment with learning.

On the threshold of this transformation in language testing, we foresee that future assessment will
move away from the one-time test model to a truly personalised and engaging learner
experience that draws on the power of both technology and humans.

It is true that language assessment is evolving into a new phase, but it should always be remembered that learners are at the centre of every aspect of the process and we make these
changes to better help people learn English and prove their skills to the world.

Dr Hye-Won Lee
Senior Research Manager, Cambridge Assessment English
https://www.cambridgeassessment.org.uk/insights/technology-enhanced-language-assessment-innovative-approaches-for-better-learning/
