
Viewpoint

Artificial Intelligence in English Language Teaching: The Good, the Bad and the Ugly

RELC Journal, 2023, Vol. 54(2), 445–451
© The Author(s) 2023
DOI: 10.1177/00336882231168504
journals.sagepub.com/home/rel

Nicky Hockly
The Consultants-E (TCE), UK

Abstract
The use of educational technologies in English language teaching (ELT) has become widely accepted in the post-pandemic era, and, for better or worse, some of these technologies rely on artificial intelligence (AI). Because AI is an area of technological growth and increasing financial investment, we are likely to see more AI-driven technologies in teaching and learning in the post-pandemic ELT world. We are currently in the stage of 'weak' AI, which typically performs restricted tasks within specific domains relatively well. However, 'strong' AI, equivalent to human intelligence, is the long-term goal, and although this is no more than a theoretical construct at present, we can expect 'stronger' AI to emerge over time. ELT will not be immune to this development, and it behoves us as language teachers to be familiar with AI's current benefits and challenges, so that we can better prepare for that future. This article describes how AI is currently used in ELT, and explores some of the opportunities and challenges that AI can provide for learners, teachers and institutions. Ethical issues such as collecting learner data, surveillance and privacy are considered, as well as learner wellbeing and the digital literacies that teachers and learners will need to develop to co-exist in a brave new world of educational AI. Chatbots are examined as one example of AI-driven technology for language learning. There are of course many more, such as machine translation, intelligent tutoring systems and automated writing evaluation, to name just a few; however, a detailed consideration of these is beyond the scope of this article.

Keywords
Educational technology, edtech, artificial intelligence (AI), English as a foreign language, English as a second language

Corresponding author:
Nicky Hockly, The Consultants-E (TCE), 4 Vivians Row, Swansea, UK.
Email: nicky.hockly@theconsultants-e.com

Introduction
The COVID-19 pandemic saw an unprecedented uptake of educational technologies (edtech) in high- and medium-resource contexts, as lockdowns resulted in school closures, and teachers and learners moved online. Since then, a range of digital technologies have become mainstream in many educational institutions as they move from the 'emergency remote teaching' of the pandemic lockdowns (Hodges et al., 2020) to a more considered and planned approach to online and blended learning (Horizon Report, 2022). This increased 'normalisation' (Bax, 2003) of online learning has come at a price, both literal and metaphorical. For example, the global e-learning industry has been estimated to be worth over US$250 billion in 2021, and is predicted to be worth over US$520 billion by 2027 (E-Learning Market Report, 2022). Online learning is clearly a growth industry; however, in this article I will reflect on the metaphorical side of this growth, that is, on the challenges and advantages that AI tools bring to institutions, educators and their learners.

What Is AI?
According to JISC (n.d.), AI is 'a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation'. AI solutions in education currently use a 'weak' or 'narrow' version of AI. These AI systems use machine learning to carry out specific tasks, such as providing feedback on learners' written work, translating a written text, administering automated tests or providing structured conversation practice via a chatbot app. Machine learning relies on statistical methods to draw on large sets of training data to identify patterns, build a model (often in the form of algorithms) and then take actions based on that model. The training set used in the development of an AI system is important, because any biases in the training data will be replicated in the AI's outputs. One highly publicised example of just how pernicious this can be is Microsoft's 2016 Twitter chatbot 'Tay': programmed to respond to user input, Tay had to be taken offline after 16 hours because of the increasingly offensive nature of its tweets. Another well-known example of algorithmic bias is facial recognition software that is biased towards Caucasian features due to a lack of diversity in its training data.
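To make the point about training data concrete, here is a minimal sketch of my own (not from the article) of a toy word-frequency 'sentiment' classifier in Python. The training sentences are invented; because the word 'chatbot' only ever appears in negative examples, the learned model reproduces that bias even for a sentence with clearly positive wording.

```python
# Toy illustration (invented data): biased training data produces biased output.
from collections import Counter, defaultdict

training_data = [
    ("the lesson was great", "positive"),
    ("the teacher was helpful", "positive"),
    ("the class was fun", "positive"),
    ("the chatbot was confusing", "negative"),
    ("the chatbot was unhelpful", "negative"),
    ("the chatbot kept repeating itself", "negative"),
]

# 'Training': count how often each word occurs under each label.
word_counts = defaultdict(Counter)
label_counts = Counter()
for text, label in training_data:
    label_counts[label] += 1
    for word in text.split():
        word_counts[label][word] += 1

def predict(text: str) -> str:
    """Score each label by smoothed word-frequency evidence (naive Bayes style)."""
    scores = {}
    for label in label_counts:
        total = sum(word_counts[label].values())
        score = label_counts[label] / len(training_data)  # label prior
        for word in text.split():
            # Add-one smoothing so unseen words do not zero out the score.
            score *= (word_counts[label][word] + 1) / (total + 1)
        scores[label] = score
    return max(scores, key=scores.get)

# The skewed training set drags any sentence mentioning 'chatbot' towards
# 'negative', even one with clearly positive wording:
print(predict("the chatbot was helpful"))  # -> 'negative'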
Recent AI applications, such as Google's LaMDA (Language Model for Dialogue Applications) chatbot or chatbots built on OpenAI's GPT-3, use 'deep learning' to collect or 'harvest' their own data, rather than having those data pre-selected by humans. These more complex AI applications create and adjust their own algorithms, depending on whether task goals are met. In the case of the LaMDA chatbot, this can result in eerily realistic 'conversations' with humans, as evidenced in a much-publicised polemic over a Google engineer claiming that LaMDA was essentially a sentient being (see Tiku, 2022, for more on this). ChatGPT, which learners can easily use to create essays from prompts, has led to increased concern over academic integrity and assessment (see, for example, Sharples, 2022).
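As an illustration of the basic architecture, the following hypothetical Python sketch shows the dialogue loop such chatbots run: the whole conversation history is passed back to the model on every turn, which is part of what makes the exchanges feel coherent. The generate_reply function here is a stand-in of my own; real systems call a large neural language model at this point.

```python
# Hypothetical sketch of an LLM chatbot's dialogue loop (not a real system).
from typing import List, Tuple

def generate_reply(history: List[Tuple[str, str]]) -> str:
    """Placeholder for the language model's next-turn prediction (invented)."""
    last_user_turn = history[-1][1]
    return f"[model reply conditioned on {len(history)} turns, last: {last_user_turn!r}]"

def chat() -> None:
    history: List[Tuple[str, str]] = []
    while True:
        user_turn = input("You: ")
        if user_turn.lower() in {"quit", "exit"}:
            break
        history.append(("user", user_turn))
        # The entire dialogue history is fed to the model on each turn,
        # which is what lets the bot stay (apparently) coherent.
        reply = generate_reply(history)
        history.append(("bot", reply))
        print("Bot:", reply)

if __name__ == "__main__":
    chat()
```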

The Bad and the Ugly: Ethics and AI
As educational institutions move increasingly into online and blended learning, and invest in edtech tools for the classroom, large amounts of learner data are generated; these data can then be used by AI applications. Clearly there are positives and negatives in this scenario. On the plus side, 'these expansions in data will demand a parallel expansion in institutions' AI technologies and capabilities for organizing and making sense of these data with the potential for helping drive decision-making and creating adaptive and personalized education experiences' (Horizon Report, 2022: 17).
On the minus side are concerns around privacy and the lack of transparency in the collection of massive amounts of learner data by edtech tools. This is of particular concern with minors' data, especially when informed consent for obtaining these data has not been sought. For example, a recent Human Rights Watch report found:

146 EdTech products directly sending or granting access to children's personal data to 199 AdTech companies. […] These products monitored or had the capacity to monitor children, in most cases secretly and without the consent of children or their parents, in many cases harvesting data on who they are, where they are, what they do in the classroom, who their family and friends are, and what kind of device their families could afford for them to use. (Human Rights Watch, 2022: 2)

In May 2022, this kind of unethical behaviour led the US Federal Trade Commission to declare an intent to prosecute companies not respecting current legislation around collecting children's data.
Ethical concerns around privacy and surveillance are important in any use of big data – such as learning analytics – in education. Even the use of predictive analytics to identify at-risk learners in a learning management system (LMS) can be a double-edged sword. On the one hand, the AI systems tracking learners in an LMS can identify a lack of engagement or non-completion of compulsory coursework early on, and an instructor can step in and offer support before the learner drops out or fails a course. On the other hand, learners with idiosyncratic ways of working, or learners with personal circumstances that prevent full engagement with learning materials for certain periods of time, can be unfairly penalised or wrongly identified as struggling. In some cases, the use of AI algorithms to identify poor learner results may call into question a teacher's performance, even though these metrics may not reflect a learner's overall progress or learning over time (O'Neil, 2016).
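A hypothetical sketch may help to show how easily such false positives arise. The Python below applies the kind of simple engagement rule an LMS analytics feature might use (the thresholds and field names are invented for illustration): a learner who logs in rarely but submits all work on time is still flagged 'at risk'.

```python
# Invented example of a threshold-based 'at risk' rule and its failure mode.
from dataclasses import dataclass

@dataclass
class LearnerActivity:
    name: str
    logins_last_30_days: int
    assignments_submitted: int
    assignments_due: int

def flag_at_risk(activity: LearnerActivity,
                 min_logins: int = 8,
                 min_completion: float = 0.7) -> bool:
    """Flag a learner if either engagement signal falls below its threshold."""
    completion = activity.assignments_submitted / max(activity.assignments_due, 1)
    return (activity.logins_last_30_days < min_logins
            or completion < min_completion)

learners = [
    LearnerActivity("A", logins_last_30_days=20, assignments_submitted=5, assignments_due=5),
    LearnerActivity("B", logins_last_30_days=2, assignments_submitted=1, assignments_due=5),
    # Works in one long weekly session, yet submits everything on time:
    LearnerActivity("C", logins_last_30_days=4, assignments_submitted=5, assignments_due=5),
]

for learner in learners:
    print(learner.name, "at risk:", flag_at_risk(learner))
# A: False; B: True (genuinely disengaged); C: True -- a false positive
# produced by the login heuristic, the 'double-edged sword' described above.
```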
To address the lack of familiarity with how learner data are collected and managed by edtech tools, Regan and Jesse (2018) identify six privacy concerns for teachers and learners. These are: information privacy, anonymity, surveillance, autonomy, non-discrimination and ownership of information. To help unpack each of these areas, institutions investing in edtech tools (and teachers and learners using these tools) may want to ask the following questions of edtech providers:

• Information privacy (i.e. data protection): To what extent does the user have control over their private data and what is shared / not shared?
• Anonymity: Are user data anonymised? If so, what data are anonymised, and what are these data used for? What user data are not anonymised, and what are these data used for?
• Surveillance: How and when are users surveilled? For what purposes?
• Autonomy: How much autonomy does the AI system give the user? For example, what user data are not tracked or measured?
• Non-discrimination: What non-discriminatory measures does the AI system have in place? For example, how are learners with special educational needs catered for in the AI system? Who is disadvantaged by the AI?
• Ownership of information: Who ultimately owns and has control over the data? What are the owners allowed to do with users' data? For example, are the owners allowed to share or sell these data to third parties? Are users the ultimate owners of their own data?

A consideration of these wider ethical issues needs to frame any use of an AI-based application or tool with language learners.
On a macro-level, initiatives are underway that attempt to deal with the ethical issues created by the unregulated use of AI in education. These include the Institute for Ethical AI in Education in the UK, which is currently developing a framework that includes suggestions around regulation, codes of conduct, certification, standardisation, education and awareness for stakeholders, and the importance of diverse and inclusive teams in AI tool development. Other notable initiatives include UNESCO's AI and Education – Guidance for Policy Makers, which outlines strategic policy-level recommendations, and the European Union's draft Artificial Intelligence Act, which aims to place obligations on providers and users of AI in 'high risk' contexts, including schools.

The Good: Wellbeing, Language Learning and AI
Arguably one of the most important lessons that can be drawn from the pandemic is that online and blended learning can be effective, and many institutions – particularly higher education institutions (Chan et al., 2022) – plan to continue providing these learning options to their learners post-pandemic. This requires two key things: first, addressing inequalities in access and providing effective support for vulnerable student populations; and second, developing a 'robust online learning ecosystem' (Chan et al., 2022: 241) through investment in infrastructure and teacher development, and in transforming institutional cultures that may still be resistant to online and blended learning.
A substantial body of research has highlighted the negative effects of the pandemic on the physical and emotional wellbeing of learners. The pandemic did, however, focus minds on how the sudden move online had exacerbated digital inequalities, including in relatively high-resourced institutions in the Global North. The importance of redirecting resources to support teachers' professional development in effective online and blended teaching (including how to support disadvantaged and vulnerable learners online) was a clear lesson learned from the pandemic (Chan et al., 2022). For example, one university in the USA used a framework developed from the concept of 'pandemic pedagogy' to train its teaching staff. The training covered not just how to teach online, but also how teachers could include 'trauma-informed, equity-minded, inclusive pedagogy' (Chan et al., 2022: 21) in their online teaching approaches.

AI and Learner Wellbeing
This increased concern with learner wellbeing, and the development of tools, strategies and approaches to support wellbeing, are arguably a post-pandemic positive. AI-powered tools that can help us focus on our uses of technology, such as screen-time notifications and nudges to switch off our mobile devices, have been available for some time. What is relatively new, however, is the increased awareness that, as educators, we can help our learners focus on digital wellbeing and deal with 'digital disarray' by developing attentional literacy, a macroliteracy that 'sharpens our ability to focus our attention on an object (including a perception, thought or feeling) of our choice, while maintaining a broad awareness of our embeddedness within a larger, complex, shifting context' (Pegrum et al., 2022). Essentially, attentional literacy enables us to combat 'digital disorder, digital disconnection and […] digital distraction' (Pegrum et al., 2022). Attentional literacy can be supported through classroom activities that focus on increasing learners' awareness of digital disorder, disconnection and distraction, and that encourage more mindful uses of mobile technologies in particular (see Pegrum et al., 2022, for example activities).

AI and Language Learning: Chatbots
A range of AI-powered tools that can support learners' language development currently exists. These tools include language learning apps such as Duolingo and Busuu, automated writing evaluation tools such as Write & Improve, grammar tools such as Grammarly and machine translation tools such as Google Translate, as well as speech-to-text and text-to-speech apps, game-based learning apps, computer-based testing and corpora. Due to limitations of space, I will consider just one of these tools here: chatbot apps. These are free for learners to access and to use in their own time; they also have a rich history of computer assisted language learning (CALL) research to draw on.
Developments in speech recognition software over the last decade mean that chatbots
are now available in both text and audio form, and with deep learning AI apps such as
LaMDA in development, chatbots are set to become increasingly sophisticated.
However, most current English language chatbots are examples of weak AI, operating
within very specific linguistic domains such as ordering a meal, asking for directions,
or asking and answering simple pre-scripted questions.
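The restricted, scripted nature of these weak-AI chatbots can be illustrated with a deliberately simple Python sketch of my own (the rules and replies are invented): a handful of keyword patterns cover the 'ordering a meal' domain, and anything off-script immediately exposes the bot's limits.

```python
# Invented example of a domain-restricted, pattern-matching practice bot.
import re

RULES = [
    (re.compile(r"\b(menu|what .* have)\b", re.I),
     "Today we have pizza, salad and soup. What would you like?"),
    (re.compile(r"\b(pizza|salad|soup)\b", re.I),
     "Great choice! Anything to drink with that?"),
    (re.compile(r"\b(water|juice|coffee|tea)\b", re.I),
     "Perfect, I'll bring that right over. Enjoy your meal!"),
]

def reply(user_turn: str) -> str:
    for pattern, response in RULES:
        if pattern.search(user_turn):
            return response
    # Anything off-script falls back to a canned reprompt.
    return "Sorry, I can only help you order food. Would you like the menu?"

print(reply("Could I see the menu?"))        # scripted path
print(reply("I'll have the salad, please"))  # scripted path
print(reply("What do you think of Kafka?"))  # off-domain fallback
```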
Nevertheless, even within these limited contexts, research has shown that chatbots can support language learners in a number of ways. For example, lower-proficiency learners appreciate the clearly delimited contexts to practise English (Fryer and Carpenter, 2006), and text-based chat can prepare less confident EFL learners for speaking by lowering their affective filter (Satar and Özdener, 2008). More recent research into chatbots has found that they are most effective when learners use them within their specified domains, for example for repetition and pronunciation, and/or as sources of new vocabulary and grammar (Fryer et al., 2017). In addition, using chatbots can lead to improved learning confidence (Chen et al., 2020), and increased motivation and self-efficacy (Winkler and Söllner, 2018) for learners.
However, chatbots are not perfect language partners, and they are certainly not replacements for human interaction. They can go off-topic or produce grammatically inaccurate language (Coniam, 2014). In addition, and perhaps unsurprisingly, the novelty factor of using a chatbot tends to wear off, even when the chatbot is delivered via a gamified, visually attractive mobile app (Teske, 2017). Overall, chatbots in their current weak AI version are most appropriate for lower-proficiency learners. Nevertheless, teachers may want to encourage their learners to try a chatbot app out of class for a period of time, and then report back periodically on what they have learned. Additional exposure to English language content outside of class time can support learning outcomes, so this is certainly an option worth exploring with learners.

Conclusion
The post-pandemic world is likely to see an increased use of educational technologies in all contexts. Many of these tools are powered by AI, with its attendant benefits and drawbacks. Where does this leave educators, and how can we ensure a principled and safe use of AI-based edtech in the future? It is hoped that this article has provided some suggestions in this respect. Institutions and teachers using edtech with learners can ask providers the questions based on the six areas outlined by Regan and Jesse (2018). Equally important, learners (and parents, for learners under 18) should be informed about data collection and issues surrounding the use of AI-based tools, and informed consent discussed and sought. Helping learners read the terms of service (ToS) of a popular app, for example, is one place to start; developing their attentional literacy is another.
The laws and guidelines governing the use of learner data vary from country to
country. For example, there is the COPPA (Children’s Online Privacy Protection Act)
in the USA, GDPR (General Data Protection Regulation) in Europe and the Privacy
Act in Australia, to name a few. Managers and administrators should be familiar with
the basics of data protection law and ensure that these laws are respected by the AI
used in their institutions. Finally, one can join the conversation through taking part in
the UN International Telecommunication Union (ITU) AI for Good initiative, or the
Institute for Ethical AI in Education framework discussion, mentioned previously.
Prioritising a ‘good’ (i.e. principled) use of AI-driven edtech, and developing strategies
for dealing with the bad and the ugly, can help us face with confidence the increasing
amount of AI we are likely to see in our profession in the post-pandemic digital era.

Funding
The author received no financial support for the research, authorship and/or publication of this
article.

ORCID iD
Nicky Hockly https://orcid.org/0000-0002-7814-0690

References
Bax S (2003) CALL – Past, present and future. System 31: 13–28.
Chan RY, Bista K and Allen RM (eds) (2022) Online Teaching and Learning in Higher Education
during COVID-19. London: Routledge.
Chen L, Chen P and Lin Z (2020) Artificial intelligence in education: A review. IEEE Access 8:
75264–75278.
Coniam D (2014) The linguistic accuracy of chatbots: Usability from an ESL perspective. Text &
Talk 34(5): 545–567.
E-Learning Market: Global Industry Trends, Share, Size, Growth, Opportunity and Forecast 2022–2027 (2022). Available at: www.researchandmarkets.com/reports/5547178/e-learning-market-global-industry-trends-share (accessed 18 October 2022).
Fryer LK and Carpenter R (2006) Bots as language learning tools. Language Learning &
Technology 10: 8–14.
Fryer LK, Ainley M, Thompson A, et al. (2017) Stimulating and sustaining interest in a language
course: An experimental comparison of chatbot and human task partners. Computers in
Human Behavior 75: 461–468.
Hodges C, Moore S, Lockee B, et al. (2020) The difference between emergency remote teaching and online learning. Educause Review. Available at: https://er.educause.edu/articles/2020/3/the-difference-between-emergency-remote-teaching-and-online-learning (accessed 18 October 2022).
Horizon Report (2022) Educause publications. Available at: https://library.educause.edu/resources/2022/4/2022-educause-horizon-report-teaching-and-learning-edition (accessed 18 October 2022).
Human Rights Watch (2022) How dare they peep into my private life? Available at: www.hrw.org/report/2022/05/25/how-dare-they-peep-my-private-life/childrens-rights-violations-governments (accessed 18 October 2022).
JISC (n.d.) Explore AI. Available at: https://exploreai.jisc.ac.uk/ (accessed 18 October 2022).
O’Neil C (2016) Weapons of Math Destruction. London: Allen Lane.
Pegrum M, Hockly N and Dudeney G (2022) Digital Literacies (second edition). London:
Routledge.
Regan PM and Jesse J (2018) Ethical challenges of edtech, big data and personalized learning: Twenty-first century student sorting and tracking. Ethics and Information Technology 21(3): 167–179.
Satar HM and Özdener N (2008) The effects of synchronous CMC on speaking proficiency and
anxiety: Text versus voice chat. Modern Language Journal 92(4): 595–613.
Sharples M (2022) Automated essay writing: An AIED opinion. International Journal of Artificial Intelligence in Education 32: 1119–1126. Available at: https://link.springer.com/article/10.1007/s40593-022-00300-7 (accessed 20 February 2023).
Teske K (2017) Duolingo. CALICO Journal 34(3): 393–401.
Tiku N (2022) The Google engineer who thinks the company's AI has come to life. Washington Post, 11 June. Available at: www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine (accessed 18 October 2022).
Winkler R and Söllner M (2018) Unleashing the potential of chatbots in education: A
state-of-the-art analysis. Academy of Management Proceedings.
