
code acts in education


learning through code/learning to code

AI in education is a public problem


Posted on February 22, 2024 by Ben Williamson

Photo by Mick Haupt on Unsplash

Over the past year or so, a narrative that AI will inevitably transform education has become widespread. You can
find it in the pronouncements of investors, tech industry figures, educational entrepreneurs, and academic
thought leaders. If you are looking for arguments in favour of AI in education, you can find them in dedicated
journals, special issues, handbooks and conferences, in policy texts and guidance, as well as on social media and in
the educational and technology press.

Others, however, have argued against AI in education. They have probed at some of the significant problems that
such technologies could cause or exacerbate, and deliberately centred those issues for public deliberation rather
than assuming AI in education is either inevitable or necessary. (Of course, there are also many attempts to
balance different views, such as a recent UK Parliament POSTnote.)

The recent critiques of AI in education resonate with Mike Ananny’s call to treat generative AI as a ‘public
problem’:

we need to see it as a fast-emerging language that people are using to learn, make sense of their
worlds, and communicate with others. In other words, it needs to be seen as a public problem. …
Public problems are collectively debated, accounted for, and managed; they are not the purview of
private companies or self-identified caretakers who work on their own timelines with proprietary
knowledge. Truly public problems are never outsourced to private interests or charismatic
authorities.

Schools and universities are by no means pristine institutions to be protected from change. However, they are part
of the social infrastructure of societies, with purposes that include the cultivation of knowledgeable, informed
citizens and publics. Efforts to transform them with AI should therefore be seen as a public problem.

In this post I surface a series of 21 arguments about AI in education. These started as notes for an interview I was
asked to do, and are based on working with the National Education Policy Center on a forthcoming report on AI
and K-12 schools. In that report we accept AI may prove beneficial in some well-defined circumstances in schools,
but we also caution against its uptake by school teachers and leaders until its outstanding problems have been
adequately addressed and sufficient mechanisms for ensuring public oversight put in place. This post, by contrast,
is a more accessible, scrollable list of problems and issues drawn from monitoring recent debates, media and
scholarship on the topic: a kind of micro-primer on critiques of AI in education, though it will no doubt be incomplete.

21 arguments against AI in education


Definitional obscurity. The term ‘artificial intelligence’ lacks clarity, mystifies the actual operations of
technologies, and implies much more capability and ‘magic’ than most products warrant. In education it is
important to separate different forms of AI that have appeared over the last half-century. At the current time,
most discussion about AI in education concerns data systems that collect information about students for analysis
and prediction, often previously referred to as ‘learning analytics’ using ‘big data’; and ‘generative AI’ applications
like chatbot tutors that are intended to support students’ learning through automated dialogue and prompts.
These technologies have their own histories, contexts of production and modes of operation that should be
foregrounded over generalized claims that obscure the actual workings and effects of AI applications, in order for
their potential, limitations, and implications for education to be accurately assessed.

Falling for the (critical) hype. Promotion of AI for schools is frequently supported by hype. This takes two
forms: first, industry hype is used to attract policy interest and capture the attention of teachers and leaders,
positioning AI as a technical solution for complex educational problems. It also serves the purpose of attracting
investors’ attention as AI requires significant funding. Second, AI in education can be characterized by ‘critical
hype’—forms of critique that implicitly accept what the hype says AI can do, and inadvertently boost the
credibility of those promoting it. The risk of both forms of hype is that schools assume a very powerful technology
exists that they must urgently address, while remaining unaware of its very real limitations, instabilities and faults,
or the complex ethical problems associated with data-driven technologies in education.

Unproven benefits. AI in education is characterized by lots of edtech industry sales pitches, but little
independent evidence. While AIED researchers suggest some benefits based on small scale studies and meta-
analyses, most cannot be generalized, and the majority are based on studies in specific higher education contexts.
Schools remain unprotected against marketing rhetoric from edtech companies, and even big tech companies,
who promise significant benefits for schools without supplying evidence that their products ‘work’ in the claimed
ways. Such products may simply exacerbate the worst existing aspects of schooling.

Contextlessness. AI applications promoted to schools are routinely considered as if context will not affect their
uptake or use. Like all technologies, social, political and institutional contexts will affect how AI is used (or not) in
schools. Different policy contexts will shape AI’s use in education systems, often reflecting particular political
priorities. How AI is then used in schools, or not, will also be context specific, reflecting institutional factors as
mundane as budgetary availability, leadership vision, parental anxiety, and teacher capacity, as well as how
schools interpret and enact external policy guidance and demands. AI in schools will not be context-free, but
shaped by a variety of national and local factors, and inflected by the varied ways different stakeholders construct
and understand AI as a technology with educational relevance.

Guru authority. AI discourse centres AI ‘gurus’ as experts on education, who emphasize narrow understandings
of learning and education. Big names use platforms like TED talks to speculate that AI will boost students’ scores
on achievement tests through individualized forms of automated instruction. Such claims often neglect critical
questions about purposes, values and pedagogical practices of education, or the sociocultural factors that shape
achievement in schools, emphasizing instead how engineering expertise can optimize schools for better
measurable outcomes.

Operational opacity. AI systems are ‘black boxes’, often unexplainable either for technical or proprietary
reasons, uninterpretable to either school staff or students, and hard to challenge or contest when they go wrong.
This bureaucratic opacity will limit schools’ and students’ ability to hold accountable any actors that insert AI into
their administrative or pedagogic processes. If AI provides false information based on a large language model
produced by a big tech company, and this results in student misunderstanding with high-stakes implications, who
is accountable, and how can redress for mistakes or errors be possible?

Curriculum misinfo. Generative AI can make up facts, garble information, fail to cite sources or discriminate
between authoritative and bad sources, and amplify racial and gender stereotypes. While some edtech companies
are seeking to create applications based only on existing educational materials, others warn users to double check
responses and sources. The risk is that widespread use of AI will pollute the informational environment of the
school, and proffer ‘alternative facts’ to those contained in official curriculum material and teaching content.

Knowledge gatekeeping. AI systems are gatekeepers of knowledge that could become powerful determinants
of which knowledge students are permitted or prohibited from encountering. This can happen in two ways:
personalized learning systems prescribing (or proscribing) content based on calculations of its appropriateness in
terms of students’ measurable progress and ‘mastery’; or students accessing AI-generated search engine results
during inquiry-based lessons, where the model combines sources to produce content that appears to match a
student’s query. In these ways, commercial tech systems can substitute for social and political institutions in
determining which knowledge to hand down to the next generation.

Irresponsible development. The development of AI in education does not routinely follow ‘responsible AI’
frameworks. Many AIED researchers have remained complacent about the impacts of the technologies they are
developing, emphasizing engineering problems rather than questions of social, ethical and political ‘responsibility’.

Privacy and protection problems. Adding AI to education enhances the risk of privacy violations in several
ways. Various analytics systems used in education depend on the continuous collection and monitoring of student
data, rendering them subjects of ongoing surveillance and profiling. AI inputs such as student data can risk
privacy as data are transported and processed in unknown locations. Data breaches, ransomware and hacks of
school systems are also on the rise, raising the risk that as AI systems require increased data collection, student
privacy will become even more vulnerable.

Mental diminishment. Reliance on AI for producing tailored content could lead to a diminishment of students’
cognitive processes, problem solving abilities and critical thinking. It could also lead to a further devaluation of
the intrinsic value of studying and learning, as AI amplifies instrumentalist processes and extrinsic outcomes such
as completing assignments, gaining grades and obtaining credits in the most efficient ways possible—including
through adopting automation.

Commercialization infrastructuralization. Introducing AI into schools signifies the proliferation of edtech
and big tech industry applications into existing infrastructures of public education. Schools now work with a
patchwork of edtech platforms, often interoperable with administrative and pedagogic infrastructures like
learning management and student information systems. Many of these platforms now feature AI, in the form of
both student data processing and generative AI applications, and are powered by the underlying facilities
provided by big tech operators like AWS, Microsoft, Google and OpenAI. By becoming infrastructural to schools,
private tech operators can penetrate more deeply into the everyday routines and practices of public education
systems.

Value generation. AI aimed at schools is treated by the industry and its investors as a highly valuable market
opportunity following the post-Covid slump in technology value. The value of AI derives from schools paying for
licenses and subscriptions to access AI applications embedded in edtech products (often at a high rate to defray
the high costs of AI computing), and the re-use of the data collected from its use for further product refinement or
new product development by companies. These are called economic rent and data rent, with schools paying both
through their use of AI. As such, AI in schools signifies the enhanced extraction of value from schools.

Business fragility. Though AI is promoted as a transformative force for the long term, the business models that
support it may be much more fragile than they appear. AI companies spend more money to develop and run their
models than they make back, even with premium subscriptions, API plug-ins for third parties and enterprise
licenses. While investors view AI favourably and are injecting capital into its accelerated development across
various sectors, enterprise customers and consumers appear to be losing interest, with long-term implications for
the viability of many AI applications. The risk here is that schools could buy into AI systems that prove to be
highly volatile, technically speaking, and also vulnerable to collapse if the model provider’s business value crashes.

Individualization. AI applications aimed at schools often treat learning as a narrow individual cognitive
process that can be modelled by computers. While much research on AI in education has focused on its use to
support collaboration, the dominant industry vision is of personalized and individualized education—a process
experienced by an individual interacting with a computer that responds to their data and/or their textual prompts
and queries via an interface. In other contexts, students have shown their dissatisfaction with the model of
automated individualized instruction by protesting their schools and private technology backers.

Replacing labour. For most educators the risk of technological unemployment by AI remains low; precariously
employed educators may, however, risk being replaced by cost-saving AI. In a context where many educational
institutions are seeking cost savings and efficiencies, AI is likely to be an attractive proposition in strategies to
reduce or eliminate the cost of teaching labour.

Standardized labour. If teachers aren’t replaced by automation, they will instead be required to work with
AI to ensure its operation. The issue here is that AI and the platforms it is plugged into will make new demands
on teachers’ pedagogic professionalism, shaping their practices to ensure the AI operates as intended. Teachers’
work is already shaped by various forms of task automation and automated decision-making via edtech and
school management platforms, in tandem with political demands of measurable performance improvement and
accountability. The result of adding further AI to such systems may be increased standardization and
intensification of teachers’ work as they are expected to perform alongside AI to boost performance towards
measurable targets.

Automated administrative progressivism. AI reproduces the historical emphasis on efficiency and
measurable results/outcomes, so-called administrative progressivism, that has characterized school systems for
decades. New forms of automated administrative progressivism will amplify bureaucracy, reduce transparency,
and increase the opacity of decision-making in schools by delegating analysis, reporting and decisions to AI.

Outsourcing responsibility. The introduction of AI into pedagogic or instructional routines represents the
offloading of responsible human judgment, framed by educational values and purposes, to calculations performed
by computers. Teachers’ pedagogic autonomy and responsibility is therefore compromised by AI, as important
decisions about how to teach, what content to teach, and how to adapt to students’ various needs are outsourced to
efficient technologies that, it is claimed, can take on the roles of planning lessons, preparing materials and
marking on behalf of teachers.

Bias and discrimination. In educational data and administrative systems, past data used to make predictions
and interventions about present students can amplify historical forms of bias and discrimination. Problems of bias
and discrimination in AI in general could lead to life-changing consequences in a sector like education. Moreover,
racial and gender stereotypes are a widespread problem in generative AI applications; some generative AI
applications produced by right wing groups can also generate overtly racist content and disinformation narratives,
raising the risk of young people accessing political propaganda.

Environmental impact. AI, and particularly generative AI, is highly energy-intensive and poses a threat to
environmental sustainability. A vision of millions of students worldwide using AI regularly to support their studies,
while schools deploy AI for pedagogic and administrative purposes, is likely to exact a heavy environmental toll.
Given that today’s students will have to live with the consequences of ongoing environmental degradation, with many
highly conscious of the dangers of climate change, education systems may wish to reduce rather than increase
their use of energy-intensive educational technologies. Rather than rewiring edtech with AI applications, the
emphasis should be on ‘rewilding edtech’ for more sustainable edtech practices.

These 21 arguments against AI in education demonstrate how AI cannot be considered inevitable, beneficial or
transformative in any straightforward way. You do not even need to take a strongly normative perspective either
way to see that AI in education is highly contested and controversial. It is, in other words, a public problem that
requires public deliberation and ongoing oversight if any possible benefits are to be realized and its substantial
risks addressed. Perhaps these 21 critical points can serve as the basis for some of the ongoing public deliberation
required as a contrast to narratives of AI inevitability and technologically deterministic visions of educational
transformation.