
ARTIFICIAL INTELLIGENCE AND LEARNING FUTURES

Artificial Intelligence and Learning Futures: Critical Narratives of Technology and Imagination in Higher Education explores the implications of artificial intelligence's (AI's) adoption in higher education and the challenge of building sustainable rather than dystopic schooling. As AI becomes integral to both pedagogy and profitability in today's colleges and universities, a critical discourse on these systems and algorithms is urgently needed to push back against their potential to enable surveillance, control, and oppression. This book examines the development, risks, and opportunities inherent to AI in education and curriculum design, the problematic ideological assumptions of intelligence and technology, and the evidence base and ethical imagination required to responsibly implement these learning technologies in a way that ensures quality and sustainability. Leaders, administrators, and faculty, as well as technologists and designers, will find these provocative and accessible ideas profoundly applicable to their research, decision-making, and concerns.

Stefan Popenici is Academic Lead for Quality Initiatives in Education Strategy at Charles Darwin University, Australia.
ARTIFICIAL INTELLIGENCE AND LEARNING FUTURES
Critical Narratives of Technology and Imagination in Higher Education

Stefan Popenici
Designed cover image: © Getty Images
First published 2023
by Routledge
605 Third Avenue, New York, NY 10158
and by Routledge
4 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN
Routledge is an imprint of the Taylor & Francis Group, an informa business
© 2023 Stefan Popenici
The right of Stefan Popenici to be identified as author of this work has
been asserted in accordance with sections 77 and 78 of the Copyright,
Designs and Patents Act 1988.
All rights reserved. No part of this book may be reprinted or reproduced
or utilised in any form or by any electronic, mechanical, or other
means, now known or hereafter invented, including photocopying and
recording, or in any information storage or retrieval system, without
permission in writing from the publishers.
Trademark notice: Product or corporate names may be trademarks or
registered trademarks, and are used only for identification and explanation
without intent to infringe.
Library of Congress Cataloging-in-Publication Data
Names: Popenici, Stefan, author.
Title: Artificial intelligence and learning futures : critical narratives of
technology and imagination in higher education / Stefan Popenici.
Description: New York, NY : Routledge, 2023. | Includes
bibliographical references and index. | Identifiers: LCCN 2022026178
(print) | LCCN 2022026179 (ebook) | ISBN 9781032210636
(hardback) | ISBN 9781032208527 (paperback) | ISBN
9781003266563 (ebook)
Subjects: LCSH: Artificial intelligence—Educational applications. |
Education, Higher—Effect of technological innovations on.
Classification: LCC LB1028.43 .P66 2023 (print) | LCC LB1028.43
(ebook) | DDC 378.1/7344678—dc23/eng/20220718
LC record available at https://lccn.loc.gov/2022026178
LC ebook record available at https://lccn.loc.gov/2022026179
ISBN: 978-1-032-21063-6 (hbk)
ISBN: 978-1-032-20852-7 (pbk)
ISBN: 978-1-003-26656-3 (ebk)
DOI: 10.4324/9781003266563
Typeset in Bembo
by Apex CoVantage, LLC
To Nadia,
my wife and my best friend.
CONTENTS

Introduction 1

SECTION I
Education, Artificial Intelligence, and Ideology 7

1 The Ideological Roots of Intelligence 9

2 Imaginations, Education, and the American Dream 31

3 The Narrative Construction of AI 53

SECTION II
Higher Learning 73

4 Automation of Teaching and Learning 75

5 Surveillance, Control, and Power – the AI Challenge 100

6 Beauty and the Love for Learning 123

SECTION III
The Future of Higher Education 143

7 Imagination and Education 145



8 Scenarios for Higher Education 166

9 Re-storying Higher Learning 187

References 198
Index 209
INTRODUCTION

Every year some of the most popular and reputable dictionaries engage in the ritual of selecting a word that encapsulates the most important ideas, cultural trends, or opinions of that time. Words or formulas such as "fake news" were selected in recent years, emphasising what probably best defines that specific period of our lives. It is interesting in this sense to note that in 2020, Oxford Dictionaries found current events so complex and significant that it was better to select several "Words of an Unprecedented Year," an idea that serves to properly reflect the "ethos, mood, or preoccupations" of that year. This goes beyond simple marketing strategies for publishing houses and dictionaries; it is a good way to reconnect us with the importance of language for our identities and cultural milieu.
We can all safely say that the word defining the first part of the 21st century is "crisis." The global pandemic of COVID-19 accelerated economic and social crises, and revealed some unexpected truths about all of us. We have an increasingly threatening climate crisis, which is endangering humanity and our survival on Earth. We have a humanitarian crisis, where millions deal with extreme poverty, famine, racism, and injustice, all disputing our commitments to stated ideals and questioning the very idea of humanity. We have a migration crisis with impacts across the world. We have a political crisis, with new fascist regimes and wars arising in the last few years. We have an energy crisis and massive imbalances with impact across the world. We also have a worldwide crisis of liberal democracy, a social crisis, a crisis of inequality, and a public health crisis. Most importantly, we have a crisis of ideas.
Inequality is widening: the rich accumulate incomprehensible wealth and stand disconnected from the rest of the world, while millions live in extreme poverty in both poor and developed countries. The climate crisis became a direct existential threat for humanity, extreme political movements arise from evil ideologies of the past, and the reality of complete global disaster is openly discussed by leaders of the world. The situation is unprecedented. In 2021, the U.N. General Assembly high-level meeting for leaders of 193 countries opened with an urgent call for the world to "wake up." António Guterres, the Secretary-General of the United Nations, opened the meeting observing, "We are on the edge of an abyss – and moving in the wrong direction. I'm here to sound the alarm. The world must wake up." He warned that "we are facing the greatest cascade of crises in our lifetime," at a time when people "see billionaires joyriding to space while millions go hungry on Earth" (Guterres, 2021). The field of education plays a crucial role in shaping how we react to these warnings. The advancement of technology, and especially of AI, may help address these issues in the contemporary curriculum and in the general aims of universities and colleges. It may also tempt us to leave these critical issues aside and let technology solve our challenges, joining the techno-utopian narrative of a world optimally managed by these advancements.
The aim of this book is to explore some of the key areas that were ignored or remain superficially investigated in the enthusiasm for a technological revolution. It is important to underline here that this is not a technical report on various forms of AI or on machine learning. The main aim is to see how AI is now used and will impact education, and how current developments in this field will determine the future of education. It is an analysis of some of the most influential variables that shape AI, and an in-depth exploration of how these recent developments in edtech impact the future of graduates, universities, culture, societies, and our common futures. So this is not a book written for or from the perspective of AI engineers; it looks at the ideological and technical roots of AI to understand the place of AI solutions in education, especially the impact on equity, the fair use of technology, the implications of the big data required for AI and the impacts of surveillance, and the propensity of AI to serve autocratic forces. Most importantly, it aims to provide ideas and warnings to faculty and students who are exposed to data collection and various applications of AI in universities, and to serve as an open analysis for academics, policy makers, and anyone interested in the complex field of learning and teaching in higher education.
Probably a good way to describe why this is a necessary read is by paraphrasing the title of a book written by James Hillman and Michael Ventura: we've had "a hundred years" of edtech and the world is getting worse. Higher education is also getting worse. We are part of a general failure on the moral level, and we can see systems crumbling in practical and very real manifestations. We collectively reached a point where the relevance of facts is disputed or entirely ignored, and ignorance is glorified and imposed with barbarian combativeness. As was the case for the last one hundred years, America again set a major trend for the rest of the world. We are Americanised in subtle and complex forms, with implications that are separately addressed in a subchapter of this book.

Regular publications devoted to higher education, such as The Chronicle of Higher Education, Times Higher Education, Inside Higher Education, and others, devote large spaces to dealing specifically with the crisis of higher education. We have in fact multiple crises of higher education in the English-speaking Western world: an ideological, an intellectual, a managerial, and an ethical crisis. The most common answer to all these major challenges is a religious expectation that technology will provide the panacea, the perfect silver bullet that can end these problems. AI is developing in this environment of irrational and uncritical approaches within higher education – and this book also aims to find some explanations for naive approaches that may surprise those who are not closely familiar with higher education as we have it today.
In a book published in 2021, Audrey Watters investigates the long history of teaching machines and notes the common reflex of Silicon Valley entrepreneurs to eschew history and present their products as the revolutionary breakthroughs that will trigger the next disruption in education: "If you can peddle the story that everything was stagnant until you came along, your ideas, your inventions might seem that much more innovative and necessary – or so you hope" (Watters, 2021, p. 8). The history of technology is associated, especially in the past decades, with overinflated and unrealistic presentations and promises, with hype and disappointments. As Watters notes, pseudo-innovation looks credible and inviting when history is presented as stagnation and apathy that is suddenly changed. This is definitely present in the history of the development, adoption, and application of edtech in higher education. The sanctification of edtech, which is the topic of some examples presented in the following chapters, is not a healthy or productive reflex. Educators have the mission to question new technologies, especially when they are surrounded by magical thinking and teleological expectations. This is an important effort, and we find here some reasons to address it, especially considering that edtech has colonised higher education and captured the imaginations of educators for decades. AI is expected in schools and universities with the same blind enthusiasm and lack of inquiry that has surrounded edtech for decades.
Universities were created to serve the common good and, with their concentration of academics working together for research and education, can advance knowledge and serve society with wise and innovative solutions for our critical challenges. Higher education is the space where new ideas can organically emerge, when the campus ethos is defined by intellectual effervescence and moral engagement for a civil and advanced society. The general aim of universities is to disseminate knowledge and nurture more informed, ethical, and educated citizens, able to bring a positive contribution to a civil society. In fact, higher education shifted entirely a few decades ago towards instrumental goals, in line with neoliberal ideas adopted by the World Trade Organization, which make universities focus entirely on the business of education, practically uninterested in the idea and the aim of a higher education. The living reality of universities, in contradiction with what we find in their glossy brochures and generous mission statements, is defined by the aim to secure profits, stay competitive on the market of commodified educational services, and serve market demands for "properly trained" workers. Universities are in an ongoing crisis of identity, pulled apart by managerial fads, anti-intellectualism, and contradictory demands that undermine their function and ideals. In this context, edtech entrepreneurs found billions in profits and took higher education even further away from its meanings and foundational beliefs.
The book explores what stands behind the label of artificial intelligence (AI) and how these roots may impact current applications in teaching and learning in universities. Advancements in digital technology, especially in the field of AI, bring the promise of revolutionary solutions and assistance in mundane or critically important tasks. Importantly, the promises of AI open new possibilities to approach the multifaceted crisis that impacts our lives and future. AI is already shaping our lives in obvious or unseen ways: applications for jobs are selected with the use of AI algorithms, which can change a career or someone's future; law enforcement agencies use AI for identification, profiling, and surveillance; the law is applied as AI algorithms decide if a person is more inclined to reoffend than others; armed forces in various countries use AI for military purposes, often on a very thin and unclear ethical line; teachers, schools, and universities use AI to predict and deter plagiarism, organise exams, set rankings, and assign grades; mass media uses AI to decide what we like and when is the best time to have that favoured content delivered, and AI tells editors what type of content we want to see, and this is all we can access. Succinctly, we can say that our lives are significantly impacted by the use of AI, by algorithms and technologies that aim, and have the power, to capture and shape what we know, what we see, and how we can imagine the world. New technological solutions are used to aggregate, synthesise, and apply complex forms of surveillance for data used or sold by corporations, banks, insurance companies, law enforcement authorities, investors, marketers, and others who can afford to play on this market.
It is important to have this in-depth analysis of the developments, risks, and opportunities of AI in education, especially in universities, and to investigate how this will shape the future of education, civic culture, politics, and society. At the core of this analysis is the interest in how AI is and can be used in education, in teaching and learning, and how our imagination is enhanced or limited by edtech. This is why we prefer to take in the following pages the perspective of users of AI systems rather than that of the engineers who are developing them; research and case studies are used from this perspective to see how and when these systems can enrich education and when they can be a hindrance. This analysis starts from the idea that AI is created by humans, which determines the way it works; it is also strongly connected with the historical roots of the concept of intelligence. Inevitably, as many studies reveal, the development and functioning of AI systems are laden with values, preferences, biases, and limited perspectives that bring important implications for AI applications in teaching and learning, and other academic processes (such as plagiarism detection, surveillance, or learning analytics). This analytical approach favours a human dimension that precedes and controls the use of technology in open societies. It is a viewpoint that should guide regulations in the field of AI developments and applications in education but remains ignored by some of the most influential policy makers – as an example provided by the OECD demonstrates.
To investigate these complex developments that involve economic, technological, social, and cultural dimensions, we need a multidisciplinary approach, an ongoing intellectual effort to analyse with different lenses, from different perspectives, using whatever information "jumps together" from various fields and disciplines. The preferred approach for interpretation and discovery in the topics at the heart of this book integrates multidisciplinarity, which builds knowledge from different disciplines that maintain their boundaries; interdisciplinarity, which uses links between disciplines to create knowledge in a new and coherent whole; and trans-disciplinarity, which integrates sciences and humanities in a new context that transcends traditional disciplinary boundaries for a wider and superior understanding (Choi & Pak, 2008, p. 359).
The use and evolution of edtech in higher education reveal probably best how damaging the fragmentation of knowledge is, with a general adoption of ahistorical theories, statements, and artificially limited interests and studies. The fragmentation of knowledge is in itself a significant source of problems that severely impacts the way we live and progress. This is why it is important to find more than inter-, multi-, and trans-disciplinarity. The sociobiologist Edward O. Wilson restored the importance of connecting principles of different disciplines in his book on consilience, a term first used in 1840 by William Whewell in The Philosophy of the Inductive Sciences. Consilience comes from two Latin words: "con," which can translate as "together," and "siliens," which means "jumping." Wilson adopts the definition of consilience as "a 'jumping together' of knowledge by the linking of facts and fact-based theory across disciplines to create a common groundwork of explanation." It is unified knowledge that integrates and constructs a web of causal relations suitable to ultimately connect what seems to stand as unrelated variables and phenomena. The analysis and findings of this book use the approach suggested by consilience to understand various developments and examples through the lens of humanities and sciences, bringing together explorations from fields such as sociology and semiotics, statistics and hermeneutics. The general aim is to avoid what Wilson noted of the classical approaches: that "the ongoing fragmentation of knowledge and resulting chaos in philosophy are not reflections of the real world but artefacts of scholarship" (Wilson, 1998, p. 8).
We still learn how interconnected and interdependent our world is, and how delicate the balance is that we can destroy with ignorance, hubris, and irresponsibility. In 2021, the United Nations University, an atypical institution of higher education based in Tokyo, Japan, released a comprehensive report titled Interconnected Disaster Risks. The study starts with a warning:

Nobody is an island. We are interconnected. Our actions have consequences – for all of us. As we become ever more interconnected, so do the risks we share. To manage these risks, we need to understand why and how they are interconnected. Only then can we find appropriate solutions.
(O'Connor et al., 2021, p. 7)

The report finds that climate catastrophes, pandemics, and various other crises are much more connected than was previously considered, and that all have the same root causes. In order to understand them and design appropriate solutions, it is crucial to think in a more integrated way, allowing knowledge to "jump" and interconnect freely, and to use this process with wisdom. This is how we can imagine and build sustainable solutions for the future.
Re-storying the narratives and realities of higher education requires a clear understanding of what we dream and of what is shaping what we imagine. In this effort we can see how and if AI can help universities and students avoid a dystopian future of continuous surveillance, control, and authoritarianism. This can help to write a different future, based on an inspiring vision of an education that is suitable to create a sustainable future with a complex and inspirational narrative.

Notes
1. Guterres, A. (2021). Secretary-General's address to the General Assembly. United Nations. Retrieved September 22, 2021, from www.un.org/sg/en/node/259241
2. Watters, A. (2021). Teaching machines. The MIT Press.
3. Choi, B. C., & Pak, A. W. (2008). Multidisciplinarity, interdisciplinarity, and transdisciplinarity in health research, services, education and policy: 3. Discipline, inter-discipline distance, and selection of discipline. Clinical and Investigative Medicine, 31(1), E41–E48. https://doi.org/10.25011/cim.v31i1.3140
4. Wilson, E. O. (1998). Consilience: The unity of knowledge. Knopf: Distributed by Random House.
5. Sett, D., Hagenlocher, M., Cotti, D., Reith, J., Harb, M., Kreft, S., Kaiser, Zwick, A., & Garschagen, M. (2021). Disaster risk, social protection and readiness for insurance solutions. UNU-EHS, MCII, LMU, BMZ, GIZ.
SECTION I

Education, Artificial Intelligence, and Ideology

This section presents a comprehensive analysis of "intelligence" and its ideological roots. The stated aim of projects involved in the development of artificial intelligence (AI) was to design software and algorithms that can enable a computer to do things that require human-like intelligence. This establishes a direct link between human intelligence and AI; in effect, we need to investigate what "intelligence" involves from a political perspective, and how the way we understand "intelligence" impacts the use of AI solutions in education. The ideological roots of intelligence, the I in the AI label, are significant for a proper understanding of this field of rapid change. Here it is also explored how AI is defined and how it became a narrative construct fundamental for the Californian ideology, bringing extraordinary solutions and unprecedented risks. Looking at a semiotic revolution that was presented as the new theory of information in the early 1960s, which is rarely mentioned in the context of education, new technologies, or cultural studies, this section explores the shift and profound change of education. The new organisational and cultural realities of higher education stand especially relevant as we are at a moment of adoption of AI in teaching, learning, and administration in universities across the world. Here is the point where it is necessary to analyse the link between the colonising force of technology and the simultaneous rise of the colonising impulse intrinsic to the American model.

1
THE IDEOLOGICAL ROOTS OF INTELLIGENCE

We rarely see – if ever – in the development of AI or in its current applications an interest in clarifying what we understand by "intelligence," the concept at the core of this conceptual construct. This is important, especially as it remains quite difficult to define AI, with meanings that are still unclear and that are, at the same time, determined by the history of what we understand by "intelligent." This is a concept whose connotations determine the direction of AI systems and applications; it certainly deserves a closer look. Emil Cioran is one of the thinkers who invite us to pay close attention to words, noting that only our minds and contexts make them shine and reveal their complex shades and reflections:

[S]uppose we force ourselves to see to the bottom of words? We see nothing – each of them, detached from the expansive and fertile soul, being null and void. The power of the intelligence functions by projecting a certain luster upon them, by polishing them and making them glitter.
(Cioran, 2012, p. 26)

It is important to understand the history and contexts of "intelligence" and how the semantic and political roots of this concept shape its contemporary developments.
Intelligence is a dangerous concept. It is too open to manipulation and playful misuse, covering a large variety of meanings that determine, with extraordinary power, its uses and applications. This is why we need to take a closer look and understand how intelligence is shaping current understandings of AI, and see why the history of "intelligence" and its ideological roots matter so much today. The convenient illusion that intelligence, as part of AI, is a neutral, scientific, and factual term leaves this concept susceptible to maintaining blind spots that cause serious errors, some of whose implications are explored in this chapter. "Intelligence" was used over time with different definitions, but one particular view of what it is and how we can look at it shaped most theoretical frameworks in this field. A paper published in 2007 brings together over 70 different definitions of intelligence (Legg & Hutter, 2007), which reflect how economic, political, ideological, or cultural positions determine the way we decide to see it. It also shows that these definitions are built on a few common elements. These definitions evolved from the eugenicist approach adopted by Galton and Binet, and claim that human intelligence is a measurable attribute. In fact, a peculiar trick marks this concept: intelligence is defined by what tests can capture and what is relevant for schooling, employment, or the entity conducting the tests (e.g. the military). This specificity leads to a selection of certain psychological features that are used to certify that one is highly intelligent, or less so. This selection actually leads to situations where some attributes, such as creativity, are marginal in deciding the IQ level.
This concept is so malleable, so open to various forms of and reasons for manipulation, that some experts decided the best solution for this extremely complex field is to avoid using this value-laden concept altogether. For example, Arthur Jensen, a prominent expert in intelligence research, went as far as making the case for removing the term "intelligence" from all scientific analysis, including psychology. He notes that the use of the concept of intelligence

will continue, of course, in popular parlance and in literary usage, where it may serve a purpose only because it can mean anything the user intends, and where a precise and operational definition is not important. Largely because of its popular and literary usage, the word "intelligence" has come to mean too many different things to many people (including psychologists).
(Jensen, 1998, p. 48)

It is now too late to take the advice of removing "intelligence," and we can assume that it was never a realistic solution to the confusion surrounding this term. It is important to see how it became such an attractive term, one that fires the imaginations of individuals, various groups, and governments that are funding almost anything with an AI promise. We have to analyse succinctly how this concept is relevant for AI and for the use of AI systems in education. This is not aiming to be an all-embracing history of intelligence or a comprehensive analysis of its evolution.
The main reason to look at the modern enquiries into human intelligence is that the history of this concept is directly linked with the evolution and function of AI systems, and vastly influences the way advanced technologies are designed and impact our lives. This relationship determined the AI imaginaries, the projects for the future of AI, and the promises of current solutions. In fact, AI and intelligence are both shaped by a common history, have the same ideological sources, and often serve common political interests. Leaving this common history obscured may be comfortable and convenient for an industry with extraordinary investments at stake, but it replaces the most relevant key for understanding AI with magical thinking and slippery slogans. It is surprising in this sense to see that many books and research papers on AI simply ignore the history of intelligence and its reflection in AI systems, even when they explicitly situate these systems at the core of our possible futures.
How and when can one say that someone is intelligent or not? Since the 19th century, the ability to define intelligence was ultimately a way to achieve the power to determine destinies, the future lives of people and groups of people, and even to influence the distribution of economic resources. As Jensen clearly suggested, there is no generally accepted definition of intelligence, but its etymology offers some important reference points. "Intelligence" comes from the Latin "intellegere," which can be translated as the capacity to understand, to comprehend. The use of this concept goes back many centuries, to ancient China, and we find it mentioned by Homer in the Odyssey and by Plato in the Republic. The meaning covered is close to a gift from the gods that is nurtured by individuals with the love of learning and the seeking of truth, to access virtue. The most important point is that the history of humanity is closely aligned with ways of understanding intelligence. This is a significant component of this concept, which is overlooked and remains obscured in our uses and understandings of AI: AI is always much more than a series of algorithms used by computing systems. AI is implicitly (or sometimes explicitly) defining what it means to be human and what we can accept to be defined as humanity, and what is "artificial," the instrumental part that is not relevant in our understanding of humanity.
In the 19th century, Sir Francis Galton, a British mathematician, opened a new way of looking at intelligence, as a quantitative concept that can be measured. He pioneered the use of tests to assess intelligence and used statistics to measure it scientifically. The Cambridge Handbook of Intelligence notes that Sir Francis Galton (1883) is "one of the earliest researchers on human intelligence," but strangely ignores what was at the core of Galton's view of intelligence: the concept of eugenics (Sternberg, 2020, p. 31). In fact, Francis Galton created the term "eugenics," which fundamentally influenced his studies and ideas about human intelligence. Galton explored the reaction time and other physical and sensory abilities of some English noblemen. The sensorimotor tests developed by Galton are not predictive of or relevant for scholastic or significant cognitive performances, and remain largely irrelevant for science. The most influential part of his work is based on the idea of linking the concept of social class to what can be considered the inception of scientific inquiries into intelligence.
The concept of eugenics is rooted in the Greek words "eu" ('well') and "genos" ('birth'); this can be translated as "well born." The idea of eugenics is basically to improve the human race by breeding the elites and restricting the inferior members of society from reproducing. Galton and his followers find that intelligence and other noble qualities are hereditary. The inferior groups that have to be controlled and eliminated are – since Galton – races other than white, "criminals and semi-criminals," the poor, and the unsuccessful. As Francis Galton notes in his "Essays in Eugenics," this new science is "the science which deals with all influences that improve the inborn qualities of a race" (Galton, 1909, p. 35). In his famous and influential book, he presents with clarity some basic ideas of his new science, writing that "the average intellectual standard of the negro race is some two grades below our own" or noting that "the Australian type is at least one grade below the African negro" (Galton, 2012). Intelligence is part of what he named "natural inheritance," and height, hygiene, and external appearance are all related to intelligence and have to be explored for the proper classification required for "the possible improvement of the human race and nations" (Galton, 1901, p. 1). Galton not only set the direction for the study of human intelligence; he is a representative thinker for the conviction that everything can be quantified and measured. In The Mismeasure of Man, an excellent book for those who try to understand the roots of what we call today "intelligence," Stephen Jay Gould notes that Galton was so convinced that everything can be quantified and measured that he "even proposed and began to carry out a statistical inquiry into the efficacy of prayer" (Gould, 1996, p. 107). In other writings, Galton indicates how we can measure boredom; this interest in quantification also influenced his views on what human intelligence is and how we can measure it.
This marks the birth of the pseudo-science of eugenics, the precursor of the Nazi programs of "racial hygiene," a perverted euphemism used in the 20th century to justify and organise the atrocities of the Holocaust. These toxic roots stand as the most influential source for most studies on intelligence, where eugenics still plays an explicit or a subsumed role. It is important to consider how these theories influenced the history of the 20th century, and what we imagine when we talk about what humanity is, when some of Galton's central ideas revolve around approaches such as this: "it would be an economy and a great benefit to the country if all habitual criminals were resolutely segregated under merciful surveillance and peremptorily denied opportunities for producing offspring" (Galton, 1909, p. 20). Decades later, Hitler noted that he studied "with great interest" the eugenic studies provided by Francis Galton; here is the moment when the idea that an ideology can let one decide who is part of the intelligent elite and which groups are formed by "habitual criminals" or "semi-criminals" achieved the cover of science.
There is no doubt that Francis Galton did not invent racism, "scientific racism," or social injustice; these ideas were widely shared and popular within the British aristocracy. His unique contribution is that he built a scientific argument for racism and inequity. His scientific foundations for statistics, psychology, and psychometrics determined the way intelligence was investigated and also how it is currently understood. Scientific studies on intelligence have been closely intertwined with eugenics since the inception of psychometrics, which framed what can be considered part of human intelligence.
Another important source for this field is provided by Karl Pearson, generally presented as the founding father of statistics. Pearson was a zealous follower of Galton, an extreme eugenicist interested in the applications of statistics to human genetics and intelligence. One of Pearson's extensive studies on racial differences in intelligence stands as a relevant example of his views. Completed with one of his colleagues at University College London (UCL), this extensive study of Jewish children and their parents reached the following conclusion:

[T]his alien Jewish population is somewhat inferior physically and mentally to the native population . . . we know and admit that some of the children of these alien Jews from the academic standpoint have done brilliantly, whether they have the staying powers of the native race is another question. No breeder of cattle, however, would purchase an entire herd because he anticipated finding one or two fine specimens included in it.
(Delzell & Poliak, 2013)

In "The Grammar of Science," an influential book first published in February 1892, Pearson justified widespread killings and genocide against First Nations in America with a dehumanised rationality:

It is a false view of human solidarity, a weak humanitarianism, not a true humanism, which regrets that a capable and stalwart race of white men should replace a dark-skinned tribe which can neither utilise its land for the full benefit of mankind, nor contribute its quota to the common stock of human knowledge.
(Pearson, 1911)

At the same time, Ronald Aylmer Fisher, a scholar who also made crucial contributions to statistical theory, as the scholar who left us the idea of applying statistical procedures to the design of scientific experiments, also found inspiration in Galton's ideas. He was the founder of the Cambridge University Eugenics Society and later joined Pearson in his work at UCL. In 1933 Ronald Fisher became Galton Professor of Eugenics at University College London. Here is where their collaboration marks important steps in the progress of statistics, psychology, and the measurement of intelligence.
Aubrey Clayton observes in his book "Bernoulli's Fallacy" that "Pearson and Fisher were both incredibly influential and are jointly responsible for many of the tools and techniques that are still used today – most notably, significance testing, the backbone of modern statistical inference." Their fundamental contribution to statistics and advanced mathematics is universally accepted. If we want to understand the ideological structure of intelligence and the role this concept plays in AI, it is important to look at the fact that the birth of this concept is based on scientific endeavours aiming to

detect significant differences between races, like a supposed difference in intelligence or moral character. "Objective" assessments of this kind were used to support discriminatory immigration policies, forced sterilisation laws, and, in their natural logical extension, the collective massacre of the Holocaust. Sadly the eugenics programs of Nazi Germany were linked in disturbingly close ways with the work of these early statisticians and their eugenicist colleagues in the United States.
(Clayton, 2021, p. 14)

One more development influenced the adoption in America of studies on intelligence: educational reform. The French government asked two psychologists, Alfred Binet and Theodore Simon, to find a scientific instrument suitable for distinguishing between students who were "mentally retarded" and those who were just "performing badly" in school. This led to the creation of the Binet-Simon scale for intelligence, which was quickly adopted in the United States, where it was refined to find not only "deficient" children but also those who were "above the average" and those labelled as "very bright." Binet had a different view of intelligence than Galton and his followers on one crucial aspect: he considered that intelligence is variable, dynamic, and can be nurtured and improved, not completely determined by heredity. In this sense, Binet was not that relevant for the American adoption of research on intelligence. The idea that intelligence is a measurable feature was suitable for the eugenicist movement in the New World and was completely adopted in this new framework. The reality is that eugenics was at that time a very popular approach, attractive to great thinkers of the age. Winston Churchill, Home Secretary at the time, observed that the feebleminded in Britain deserved to be "segregated under proper conditions [so] that their curse died with them and was not transmitted to future generations" (Kevles, 1986, p. 98). Churchill was inspired by the idea of eugenics to breed a new, superior race; he was a vice president of the First International Eugenics Congress, held in London in 1912 (Kühl, 1994, p. 14). There is a long list of politicians, intellectuals, and countries sharing the view that intelligence is hereditary, and that we can breed those who have inherited it.
After "Hereditary Genius" was published, Francis Galton noted the very enthusiastic review from Charles Darwin, who wrote: "I do not think I ever in all my life read anything more interesting and original." However, Darwin indicated a major point of difference when he continued his letter to Galton by saying:
You have made a convert of an opponent in one sense, for I have always maintained that, excepting fools, men do not differ much in intellect, only in zeal and hard work; and I still think this is an eminently important difference.
(Galton, 1908, p. 290)

This important difference was mostly irrelevant for the American adopters of Galton's approach, such as Henry H. Goddard, who used the Binet-Simon test scales in the United States in 1908. The study and understanding of intelligence in the United States were intertwined with the history and principles of eugenics from the very beginning. In fact, eugenics found extreme forms of application for the first time in America, not in Europe or Germany. The first involuntary sterilisation law in the world, passed by the State of Indiana in 1907 and later adopted by 29 other American states, is just one example in this sense. California was the third state to adopt a law of forced sterilisations, which was expanded in 1913 to add, among the groups defined as dangerous for the racial hygiene of the nation, all people with a "mental disease, which may have been inherited and is likely to be transmitted to descendants." In just a few decades, the United States forcefully sterilised approximately 60,000 persons (Reilly, 2015), and its eugenic laws remained unchanged "on the books for nearly 70 years" (Lombardo, 2011, p. 99). There are well-documented studies and evidence on the strong relation between American and German eugenicists. One of the most relevant books on this topic is James Whitman's Hitler's American Model, an extraordinary analysis based on the evidence available in the archives. Whitman notes that "in Mein Kampf Hitler praised America as nothing less than 'the one state' that had made progress toward the creation of a healthy racist order of the kind the Nuremberg Laws were intended to establish" (Whitman, 2017, p. 2). He also underlines that Hitler and the Nazi party were not much interested in segregation, but in the shared commitment to white supremacy and the American eugenic solutions (Whitman, 2017, pp. 4–7). Adam Cohen observes in his book, Imbeciles, that

[T]he United States in the 1920s was caught up in a mania: the drive to use newly discovered scientific laws of heredity to perfect humanity. Modern eugenics, which had emerged in England among followers of Charles Darwin, had crossed the Atlantic and become a full-fledged intellectual craze.
(Cohen, 2016, p. 3)

Here is a particularly important relationship between the eugenic theories on intelligence, with their standardised measurements of intelligence, and the evolution of racism and fascist ideologies. The scientific justification and foundation for the most extreme forms of racism are associated with racial laws and political movements linked with some of the most catastrophic times for humanity. James Q. Whitman reveals this connection between the evolution of eugenic theories on intelligence and far-right ideologies when he notes what he labels as a most uncomfortable irony of the exchanges between American eugenicists and German Nazis:

[W]hen it came to the law of mongrelisation the Nazis were not ready to import American law wholesale. This is not, however, because they found American law too enlightened or egalitarian. The painful paradox, as we shall see, is that Nazi lawyers, even radical ones, found American law on mongrelisation too harsh to be embraced by the Third Reich. From the Nazi point of view this was a domain in which American race law simply went too far for Germany to follow.
(Whitman, 2017, p. 80)

The idea that the "feebleminded" should be stopped from reproducing, and that the progress of a nation and community can be achieved if solutions are found to keep "the unintelligent" under control, was widely adopted by American politicians, industrialists, and academics, at least in the first part of the 20th century. In the United States, Harvard, Columbia, Stanford, and hundreds of other universities were teaching eugenics. The academic backing of these theories was already substantial when Francis Galton left his substantial fortune, including his personal collection and archive, to University College London. This donation allowed UCL to establish the Professorial Chair of Eugenics and a department of eugenics, which determined the evolution of modern statistics and psychology. This department later enticed Fisher to join UCL and led to the creation of the first department of mathematical statistics in the world. The impact of these scholars is extraordinarily significant not only for understanding the evolution of theories on intelligence, but also for unravelling the roots of AI, a construct based on advanced mathematics and a certain view of intelligence that still permeates current computer applications.
The adherence to eugenics, racist principles, and genocidal applications was largely ignored after the 1940s, or it was rebuilt with a different jargon that usually hides the original intentions of this theory in defining and measuring "intelligence." It is evident that eugenics was not eliminated, and its epistemological foundations for definitions, measurements, and applications of intelligence still stand prominent. UCL decided to change the names of buildings and learning spaces that were directly linked with this racist past only in June 2020. The new name for the UCL Galton lecture theatre is now "Lecture theatre 115." The fact that this space was renamed with a depersonalised, bureaucratic label, and not the name of a less controversial scientist associated in the past with the university, probably deserves more attention. Only a few years before the decision to rename buildings honouring the names of eugenicists, it became public that "race scientists" and neo-Nazis had held the eugenicist "London Conference on Intelligence" at UCL for the previous four years.

The Professorial Chair of Eugenics is currently found under the new title of Galton Professor of Human Genetics, and the Annals of Eugenics is now found under the new title of Annals of Human Genetics. This journal, once managed by Pearson, publishes "material directly concerned with human genetics or the application of scientific principles and techniques to any aspect of human inheritance" (Annals of Human Genetics, Overview).
There is abundant evidence to reveal the fact that eugenics inspired the most influential scientists in the study of intelligence and of the role of intelligence in education. For example, influential thinkers such as William James and John Dewey – both scholars with a major impact on politics, social sciences, and education – had a specific interest in measuring and developing "intelligence" through education. Inspired by Galton and his ideas, Dewey and James established a hierarchy of intelligence that is structurally represented as a taxonomy of social status and mental abilities. Specifically, James' taxonomy of intelligence is arranged from the level of

the tramp who lives from hour to hour; the bohemian whose engagements are from day to day; the bachelor who builds but for a single life; the father who acts for another generation; the patriot who thinks for a whole community and many generations; and finally, the philosopher and saint whose cares are for humanity and for eternity.
(James, 1983, p. 107)

John Dewey found that individuals with "distinctive intellectual superiority" are the leaders, while those with "lesser capacities for intelligent action" could only be the followers (Gonzalez & Gonzalez, 1979). In fact, the simple translation of this theory is that upper classes are more intelligent than lower socioeconomic classes. This taxonomy is based on the idea that lower social classes have a low level of intelligence while upper ranks have superior levels of intelligence and social status.
The current definition of intelligence was definitely shaped by one of the longest-running scientific studies ever conducted, the Terman Life-Cycle Study. This long-term, complex research project profoundly influences the way intelligence is defined and used today by computer engineers and edtech. This research project also introduced the concept of IQ into modern science. In 1921, Lewis M. Terman, professor of psychology at Stanford University, initiated the study; its sample comprised 1,528 children (11 years old, on average), all with IQs of 135 or above – placing them in the top 1% of the population at the time. The Sage Encyclopaedia of Educational Research, Measurement, and Evaluation presents the Terman Study of the Gifted, initially titled Genetic Studies of Genius, as "one of the most famous longitudinal studies in the history of psychology" (Frey, 2018). Its participants (labelled in time as "Termites") were systematically followed for over 80 years. Comprehensive surveys and interviews in this project explored a vast range of aspects in the lives of the individuals selected, "including educational and occupational achievements, mental and physical health, marital and parental status, and mortality" (Kell & Wai, 2018, pp. 1665–1667). As expected, the findings of this study are complex and cover a large variety of topics, from health, social life, and the specific outcomes of "intelligent" individuals as compared with the average population, to the personal development of this group of highly intelligent kids. One of the most significant findings of this extensive study probably stands far from Terman's original intentions: the project provided scientific evidence to support the idea that intelligence is not the most important factor for human development, the quality of accomplishments and outcomes, or social and professional success. Ironically, an example of how flawed the certainty provided by various measurements of intelligence is was given by William Shockley and Luis Walter Alvarez, two "Termite" candidates who were not able to pass the test for admission to the study, as their IQs were evaluated as too low. They failed the IQ test, yet they are the only individuals connected with Terman's study who later won the Nobel Prize for physics. This performance was not achieved by any of the 1,528 children included in the group labelled as the most intelligent of their generation. Most of those selected by Terman became successful in life, taking prominent roles in academia or getting well-paid jobs, finding professional and social positions well above the average of the general population. At the same time, other highly intelligent members of the "Termite" group became unemployed, with a few of them homeless or stuck in poorly paid careers and mediocre circumstances.
Terman set the standard for IQ tests, but not all were inspired by his contributions. For example, Walter Lippmann, an influential journalist and cofounder of the New Republic, wrote in 1922, in a public debate with Terman, that

[T]he whole drift of the propaganda based on intelligence testing is to treat people with low intelligence quotients as congenitally and hopelessly inferior . . . I hate the impudence of a claim that in fifty minutes you can judge and classify a human being's predestined fitness in life. . . . I hate the abuse of scientific method which it involves. I hate the sense of superiority which it creates, and the sense of inferiority which it imposes.
(Kevles, 1985, pp. 138–139)

It became clearer in time that the relationship between intelligence and achievement is not straightforward, but complex and multidimensional. Outcomes are determined by other variables (i.e. social, economic, cultural) and cannot be understood in isolation, in a straight deterministic line. Interestingly, even the most successful members of the highly gifted group shared the common feeling of the majority of this cohort that they had failed to achieve their full potential. Intelligence is also not comprehensively efficient – if we use the logic and jargon of Big Tech – if it is not supported by a web of emotions, experiences, motivation, and human feelings.
Terman is generally regarded as the father of IQ tests, and he definitely shaped how we define and understand intelligence today. Terman was also influenced by eugenics, the cold-hearted, pseudo-scientific theory of "racial hygiene" and the improvement of race. When the American Eugenics Society publicly asked Terman about his opinion of eugenics, he answered in a letter by saying that

[I]t is more important for man to acquire control over his biological evolution than to capture the energy of the atom – and it will probably be far easier. The ordinary social and political issues which engross mankind are of trivial importance in comparison with the issues which relate to eugenics.
(Marks, 1974, p. 351)

Terman was also an active member of eugenic societies and advocated for the forced sterilisation of those labelled as "feeble-minded" in American society. The vastly disproportionate representation of the very poor and of minorities shows how social class impacted the use of "intelligence" from the inception of its scientific exploration and definition. Recent studies reveal how much Terman was influenced in his prominent work by eugenics: "While championing the intelligent, he pushed for the forced sterilization of thousands of 'feebleminded' Americans. Later in life, Terman backed away from eugenics, but he never publicly recanted his beliefs" (Leslie, 2000).
It is interesting to note that Lewis M. Terman left us the concept of IQ and changed not only psychology but also our understanding of what intelligence is, while his son, Frederick (Fred) Terman, definitely influenced the birth and future of Silicon Valley. Fred Terman mentored two of his graduate students, William Hewlett and David Packard, and guided them to create their own company, which became known as Hewlett-Packard. Moreover, Terman managed to attract William B. Shockley to Palo Alto and developed a close collaboration with him, helping Shockley to find some brilliant scientists for his work on semiconductors.
Shockley, a winner of the Nobel Prize in Physics in 1956, is considered to be the "founding father" of Silicon Valley. He was not only a brilliant scientist in physics but also a white supremacist and an active promoter of eugenics. Sometimes his ideas went as far as causing an emotional response and public protests. For example, in an interview published on 22 November 1965 by the U.S. News & World Report magazine, Shockley expressed some of his views on race and intelligence. His racism was so strident that the Faculty of Genetics at Stanford University signed an open letter of protest. It was signed by some of the most reputable scientists in the field, such as Joshua Lederberg, the winner of the Nobel Prize in Medicine in 1958. The letter calls Shockley's ideas about genetics and race a "pseudo-scientific justification for class and race prejudice."

Shockley reacted to this letter and pushed the argument further, speaking on 17 October 1966 at a meeting of the National Academy of Sciences at Duke University. Joel Shurkin presents this moment in his biographical book, Broken Genius: The Rise and Fall of William Shockley, Creator of the Electronic Age:

[Shockley] proposed a mathematical "H-index" to act as a yardstick for figuring out the genetic ancestry of individuals, how much white blood there was in African-Americans, for instance. The index – which he dropped after this proposal – would provide a base, an "objective benchmark" for research.
(Shurkin, 2006, p. 205)

Shockley's pioneering work in semiconductors was also marked by his absurd management style, a source of tensions and discontent in his team. His views on intelligence and eugenics were not at all marginal at this time. A pivotal moment came when eight "rebels" in his team decided to work independently and set up their own firm. Shockley's team had had enough of his micromanagement and manipulative practices, and the eight members who were focused more on the development of silicon transistors decided to leave. They created their own firm, Fairchild Semiconductor, at a moment that is described in Shurkin's book as

the birth notice of Silicon Valley. What happened to the eight is not a
digression in the story of Bill Shockley. It is the key to understanding the
rest of his life. They became known in the mythology of the valley as the
“Traitorous Eight.”
(p. 181)

This step later led to the emergence of companies that shaped Silicon Valley, such as Microchip Technology, Intel, AMD, and others. In the following chapters we will further explore the fascinating intertwining of engineering, the development of the most advanced technologies, and extreme ideologies. Here we underline succinctly that Silicon Valley, the birthplace of AI, was directly influenced from its beginnings by a certain view of intelligence: elitist, inherited, and determined by racial categories.
Since Galton and Terman, intelligence has proved to be one of the most powerful and dangerous concepts for humanity. It is disconcerting to see how much the eugenic definition of intelligence determined not only research, but also public policies, national and state laws, civil rights, and immigration principles. For example, David Dorado Romo describes in the extremely well-documented Ringside Seat to a Revolution: An Underground Cultural History of El Paso and Juárez, 1893–1923 how the fear of alien infection was used by eugenicists to create sympathy for their movement and translate their ideas into laws and practice. Romo's research records the harrowing fact that the infamous Zyklon B was used for the first time in history against humans in the United States, in the early 1920s. This extremely toxic gas was used, in different concentrations, in "chambers" built to disinfect aliens at the Mexican border with the United States, in El Paso. David D. Romo notes:

The use of Zyklon B as a pesticide on the U.S.-Mexico border inspired Dr. Gerhard Peters to call for its use in German Desinfektionskammern. In 1938, Peters wrote an article for a German pest science journal, Anzeiger für Schädlingskunde, which included two photographs of El Paso delousing chambers.
(Romo, 2005, p. 240)

Years later, these solutions drew the attention of the Nazis in Germany, and the same toxic gas was used in higher concentrations in one of the most hideous projects in the history of humanity. Soon after his nefarious study was published, Gerhard Peters became a managing director of the chemical company Degesch (Deutsche Gesellschaft für Schädlingsbekämpfung), an affiliate of the conglomerate I. G. Farben, which supplied Zyklon B to the Nazi death camps. This is where the toxic product was used in higher concentrations to kill innocent men, women, and children who had been declared inferior or dangerous to the health of the nation. It is one of the most horrifying examples of how eugenics can dehumanise and lead to atrocities.
The U.S. Immigration Law of 1917, a result of the eugenicist movement, was passed at the same time as the Manual for the Physical Inspection of Aliens was published by the United States Public Health Service. Here we find the list of undesirable people, elaborated by what Romo describes as the most prominent medical scientists, progressive politicians, and eugenicists in America. The list itself reveals the absurd and arbitrary nature of this theory; it includes "imbeciles, idiots, feeble-minded persons, persons of constitutional psychopathic-inferiority [homosexuals], vagrants, physical defectives, chronic alcoholics, polygamists, anarchists, persons afflicted with loathsome or dangerous contagious diseases, prostitutes, contract laborers, all aliens over 16 years old who cannot read" (Romo, 2005, p. 229). Records show that U.S. Public Health Service agents "cleaned" 127,173 Mexicans that year at the bridge linking the city of Juárez to the American city of El Paso. The newcomers suffered the extremely toxic effects of delousing and fumigation with kerosene, gasoline, or Zyklon B. In 1917, revolt against these inhuman treatments erupted when Carmelita Torres resisted the abuse of migrants at the border, an episode remembered in history as the infamous Bath Riots. In fact, the fear that Mexican newcomers would get very sick came true when the "Spanish flu," which originated in Haskell County, Kansas, infected a large number of Mexicans then living in El Paso.

The Immigration Restriction Act of 1924 was a definitive success for eugenicists and became an extremely influential model for immigration laws adopted by countries such as Australia, New Zealand, and Canada. The adoption of the 1924 act was praised by Adolf Hitler in Mein Kampf, where he notes in laudatory terms that the United States was making an effort to impose reason on its immigration policies by "simply excluding certain races from naturalization" (Lombardo, 2002). These details and the connection with eugenics became blurred and mostly lost after the 1940s, but the law remained unchanged for the 40 years following its adoption.
There is no doubt that the unconscionable tragedy of the Holocaust is linked with the pseudo-science of eugenics and its solutions for controlling the existence of those labelled as people of inferior intelligence. The step of adding other features alongside intelligence – such as race, social status, moral values, or political preferences – was a natural and facile strategy for those who held the power to evaluate these attributes and to assign their diagnostics. These ideological roots of the concept of intelligence opened it to reuse in other ideological frameworks revolving around the principle that society can be "cleaned" of whatever is unworthy, inferior, or dangerous to the general public or to political elites. The concept of intelligence was commonly used against dissidents in dictatorships of both the extreme Left and the extreme Right, with rebels, protesters, and free thinkers sent to mental health institutions or "re-education" camps, or simply eliminated. Since Terman coined the term and built the first version of IQ tests, the use of "intelligence" has been the palpable result of a concept turned into an ideological weapon, one well suited to serving monopolies of social, economic, and political power.
The concept of intelligence remains closely associated with eugenics in the 21st century, with a strong influence on politics, technology, and public life. It is tempting for computing and software engineers, AI firms, and programmers to set aside the ideological determinants of a certain view of intelligence and the history of the people who created its tools and measures. Of course, hiding these roots cannot reduce the powerful influence of this original sin of "intelligence," with its inhumanity and entrenched racism. The same temptation can be observed in the works of some notable psychologists. To take just two examples, we can read how Robert Sternberg presents Francis Galton as a forefather of the intelligence-testing movement. He observes that "the critical thing to note about Galton is that he was the first to study intelligence in anything that reasonably could be labeled a scientific way" (Sternberg, 2020, p. 32). Inexplicably, Galton's views on "race improvement," and on eugenics and intelligence as keys to the improvement of the human race, are not mentioned by Sternberg in his long, comprehensive, and often-cited handbook.
In another extensive study, specifically devoted to the concepts of intelligence and AI, AI and Developing Human Intelligence: Future Learning and Educational Innovation, John Senior and Éva Gyarmathy devote an entire chapter to "A brief history of intelligence – artificial and otherwise." Here too, the association of eugenics with intelligence studies and the importance of this theory for education and psychology are not even succinctly mentioned (Senior & Gyarmathy, 2021). These unexplainable blind spots are significant; as semiotics has explored, a missing sign can itself be a sign, with meanings that are relevant and decisive for a narrative. Consider the real example of the Summit for Democracy in December 2021, to which US President Biden invited all member countries of the European Union except Hungary. The omission itself tells the story of a member state with an authoritarian, undemocratic system that is not aligned with EU values. Looking at the flags of the participating countries, the missing flag is significant for the narrative of a decline of democracy in a part of Europe. The absence of these defining stages in the epistemological evolution of intelligence stands as a sign of indifference to toxic roots, or indicates a naive approach to, and understanding of, AI.
In June 2021, the New York Times published an article about the initiative of the state of Arizona to use "Holocaust gas" in executions, noting that

Arizona has refurbished and tested a gas chamber and purchased chemicals used to make hydrogen cyanide, a recent report said, drawing a backlash over its possible use on death row inmates. Headlines noting that the chemicals could form the same poison found in Zyklon B, a lethal gas used by the Nazis, provoked fresh outrage, including among Auschwitz survivors in Germany and Israel.
(Hauser, 2021)

The last time Arizona used a gas chamber for an execution was in 1999, causing international protests and stupefaction. Ignorance of the history or symbolic significance of these methods, in 1999 and again in 2021, cannot seriously be claimed. Nevertheless, it is a story that should make us all cautious about the very real risk of ignoring the horrors of the past and the ideas that still have the power to poison present and future solutions, even in open and democratic societies.
The constant omission of eugenics in studies presenting the history of research on intelligence, as well as the efforts to impose atemporality and decontextualisation on this concept, reveals that AI is placed firmly in an intellectual and ideological minefield. Intentional or involuntary efforts to ignore the genesis of intelligence do not change the fact that AI is tributary to the histories that shaped this concept across the 20th century.
It is simply impossible to separate the emergence of both AI and machine learning from eugenics. One reason is clearly presented by Wendy Hui Kyong Chun and Alex Barnett in their book Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition, which reveals how 20th-century eugenics shapes 21st-century AI, machine learning, and data analytics:

British eugenicists developed correlation and linear regression, key to machine learning, data analytics, and the five-factor OCEAN model, at least a century before the advent of big data. Although methods for linking two variables preceded his work, Francis Galton is widely celebrated for 'discovering' correlation and linear regression, which he first called "linear reversion."
(Chun & Barnett, 2021, p. 59)
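The statistical machinery named here is strikingly simple, which is part of the point: the correlation coefficient and the regression line that Galton and Pearson developed sit today, essentially unchanged, inside machine learning pipelines. A minimal sketch (in Python, with invented height data used purely for illustration) shows how little is involved:

```python
import numpy as np

# Illustration only: synthetic "parent vs child height" data in the spirit
# of Galton's regression studies; all numbers are invented for this sketch.
rng = np.random.default_rng(0)
parent = rng.normal(170, 7, 500)                              # heights in cm
child = 0.65 * (parent - 170) + 170 + rng.normal(0, 5, 500)   # regress to the mean

# Pearson correlation: covariance normalised by the two standard deviations.
r = np.cov(parent, child)[0, 1] / (parent.std(ddof=1) * child.std(ddof=1))

# Least-squares slope and intercept -- Galton's "linear reversion".
slope = r * child.std(ddof=1) / parent.std(ddof=1)
intercept = child.mean() - slope * parent.mean()

print(f"r = {r:.2f}; child height ~ {slope:.2f} * parent height + {intercept:.1f}")
```

The same two quantities, a correlation and a fitted line, are what a least-squares solver or a linear model computes at scale; the technique travelled intact from eugenic anthropometry into contemporary data analytics.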

These roots determine the framework for the development of AI and the use of advanced technologies, which are glorified and sanitised by Big Tech in an unprecedented avalanche of propaganda, marketing, and narrative constructions. The ideological foundations of intelligence still shape AI. One key element of the ideological function of this concept is the claim that intelligence is measurable in a fully objective, precise, and value-neutral way. The collection of data stands as central as it was for Francis Galton, Karl Pearson, and other eugenicists, who stated in the very early stages of research on intelligence that the future of eugenics would be determined by the capacity to collect and analyse "national statistics." The idea of "predictive patterns" is as familiar in the eugenic context as it is in the context of AI systems and machine learning solutions; the common ideological source of AI and eugenics makes this identical use natural, discreet, and effortless.
The current understanding of intelligence is shaped by the fact that, since its birth, the science of intelligence and IQ testing has been associated with social class, with new tools built to measure and justify the need to protect elites and to breed their noble qualities. Since Galton's pioneering studies, intelligence has remained socially effective for finding, controlling, and eradicating what was associated with those living in poverty: low levels of intelligence, mental vulnerabilities, various dysfunctions, immoral existence, and filth. The ideological determinant of intelligence cannot be eliminated from any serious analysis of this concept and of its role in the rise of AI. We have to start by admitting that intelligence and IQ scores were from the very beginning a symbolic certificate for the upper social class, a scientific proof of superiority and special rights. Taxonomies of intelligence, organised since the first attempts into superior and inferior intellects and social classes, opened the road to associating intelligence with a given score, and also stand at the source of standardised testing in education.
The most common narratives of AI systems and machine learning share the presumption that data is "just" data, an atemporal and value-neutral fact, shaped only by cold, exact, and quasi-relevant evidence and representations of reality. We analyse more extensively in a following chapter how deceiving, distorting, and naive it is to accept that big data is unbiased, neutral, and completely relevant. Here it is important to underline that data cannot be entirely objective, and that biases corrupt the process of data collection, especially when very large volumes of data require a selection. Any selection is shaped, consciously or not, by the adopted values of those who choose what data is to be considered, by individual preferences, ideological positions, assumptions, and personal bias. Importantly, AI cannot be reduced to an atemporal, ahistorical system in any serious consideration of this field. It is directly determined by its history and by its permanent association with an ideological term, the concept of "intelligence," which has always had an ideological function.
Undoubtedly, eugenics stands associated with the modern exploration of intelligence and determines the way we now understand human intelligence from the perspectives of psychology, technology, and the social sciences. To ask whether this path is the best way to define IQ is to ask the wrong question, one that can be as toxic as the eugenic roots of current understandings of intelligence. AI uses intelligence based on an overall understanding aligned with the eugenics movement, which states that intelligence is the most important attribute and that this attribute is directly measurable. In this understanding, intelligence is expressed in scholastic fields, usually accessible to an economic and racial elite. Intelligence is also a key concept for education, with a symbolic power so vast that many psychologists designed and applied influential studies to show that it can determine the fate of individuals and groups. We will see in the following chapters how a certain view of intelligence decides the kind of education various people can access and how students perform. Unfortunately, education is a field where racial theories of intelligence had – and still have – a disproportionate impact. Intelligence was defined and measured by scholars interested in providing a scientific foundation for hierarchies of humankind, and this ideological sap feeds solutions for standardised IQ tests, new technologies, and education. The solid eugenic roots of what we now consider human intelligence, and the influence of eugenics on the ideological structure of AI, stand ignored, but they are at the core of a constant clash between the stated democratic ideals of Big Tech and the reality of the technological corrosion of democracy and civil society.
The debate about how we define intelligence and what IQ tests actually measure is still open. Defining intelligence remains a difficult and contested effort. This book is mostly informed by scientific evidence that supports the view of human intelligence as an extremely complex, fluid, and difficult-to-measure attribute that can be represented as a web of various cognitive components, which perform differently in different contexts. Moreover, factors such as personal motivation, social context, educational opportunities, family context, and personal attributes are still only marginally considered in the assessment of IQ. Measuring the intelligence quotient is both an extremely complex endeavour that stands open to errors, bias, and research limitations, and a symbolic act of power that can in itself determine the result of a limited standardised test. These problems, tied to the scientific foundations claimed for racism and discrimination and to eugenic and fascist ideologies, stand closely intertwined with the development of AI.
From the very beginning, the use of "artificial intelligence" raised the problem of the ideological loading of "intelligence" and its discredited racist history. The term "artificial intelligence" was coined by John McCarthy, a young assistant professor of mathematics, at Dartmouth in the summer of 1956, when a group of scientists gathered to discuss developments in intelligent machines. Four participants had submitted a project proposal to the Rockefeller Foundation summarising its aim as

a two-month, ten-man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire (McCarthy et al., 2006). The study proposed to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.
(McCorduck, 2004, p. 111)

The seminal meeting at Dartmouth is considered to be the birth of AI. However, this was not McCarthy's first attempt to advance the use of "artificial intelligence"; during his work in the summer of 1952 at the Bell Telephone Laboratories, he proposed to Claude Shannon, a renowned expert in the field of information theory, that they publish together a collection of papers on machine intelligence. Shannon refused to use "intelligence," a term that was problematic and too ideologically determined, and their collection of papers was called Automata Studies (Ashby et al., 1956). At the Dartmouth conference, AI was a disputed concept, with various or opposed understandings already articulated. The adoption of AI was marked by hesitation and reluctance to use the term, which also covered a wide range of approaches and disciplines. In fact, the author of the term never clearly defined what AI was, noting that it was a very fortunate marketing choice (McCarthy, 1987). Reaching a single, universally adopted definition of AI remains an insurmountable problem, as the term was born as a marketing concept rather than as the result of a scientific process. This peculiarity is also linked to the pseudo-scientific and morally wrong origins of the part that gives it its irresistible attraction: the ideological concept of "intelligence."
Yarden Katz presents in Artificial Whiteness, a book that deserves greater attention as it uniquely explores AI "as a tool in the arsenal of institutions predicated on white supremacy" (Katz, 2020, p. 8), how AI was from the very beginning a label well suited to building on the inherent attraction and ideological strength of "intelligence." This was the key to securing generous funding for research and development, even when donors had no idea what stood behind the AI title. It is a part of AI history that explains current applications and developments, especially AI's predominant presence in military applications, surveillance, and authoritarian systems. A few years after the Dartmouth conference, despite initial setbacks and the fact that AI was still an unclear concept, "the term 'Artificial Intelligence' charmed the Pentagon's elites, and they poured money into the field. In the 1960s AI 'became the new buzzword of America's scientific bureaucracy'" (Katz, 2020, p. 24). The U.S. military became irresistibly attracted to AI in a context described by Heinz von Foerster, a scholar in cybernetics who was running the Biological Computer Laboratory at the University of Illinois, as a "hot and cold" button for the Pentagon and its generous funding. von Foerster noted: "I talked with these people again and again, and I said, 'Look, you misunderstand the term [of AI]. They said, 'No, no, no. We know exactly what we are funding here. It's intelligence!'." Soon, he notes, "I was told everywhere, 'Look, Heinz, as long as you are not interested in intelligence we can't fund you'" (Conway & Siegelman, 2005, p. 321). The American military complex poured money into securing AI a place in our futures and into the capacity to maintain continuous positive publicity for it. Having secured a central role in public imaginations, AI developed from its inception within a context of unrealistic expectations, absurd or foolishly inflated promises, and a remarkable resistance to disappointment. The failure of an AI system was immediately blurred by new narratives about AI's potential and possibilities. Yarden Katz describes the effort of opening the necessary critical perspectives on AI as "crawling through a sewer. Reading the avalanche of 'AI' propaganda is a demoralizing experience" (Katz, 2020, p. 9).
It is demoralising because any legitimate call to analyse the implications of AI's centrality in our lives is smothered by an endless noise of mediocre marketing, silly expectations, and shrewd marketing campaigns. Perhaps the most important fact that helps the effort to see beyond this "avalanche" of nonsense is to remember that the concept of AI does not describe a certain system or a specific technological solution in advanced computing; AI is an ideological invention that covers various technologies in advanced computing, sometimes in an incoherent manner.
In fact, even now the effort to agree on one common definition that clearly delimits what exactly AI is still reaches an impasse. For example, at the end of 2021, various countries within the European Union found it extremely difficult during official talks to define AI in order to finalise the first attempt in history to enact an AI Act:

Denmark, France, Italy, Netherlands, Malta, Estonia, Poland, Portugal, Croatia and Ireland criticized the AI Act's scope as being too wide i.e. they believe fewer things should be called AI. They argue the current definition would restrict, for example, simple statistical systems, and in turn, stifle innovation.
(Heikkila, 2021)

A look at the most important moments in the history of AI shows how often its promises failed and proved amusingly absurd. This marked AI from its inception: Marvin Minsky, one of the four "founding fathers of AI," was confident enough to say in 1970, in an interview with Life magazine, that "from three to eight years we will have a machine with the general intelligence of an average human being." At the end of 2021, the New Scientist reported the extraordinary breakthrough of an AI system able to read the time on an analogue clock (Stokel-Walker, 2021). This performance – easily managed by a toddler – had not been achieved until that moment. The marketing power of AI is openly exploited by corporations with billions at stake; it is reassuring that their solutions are "intelligent" and scientific, as "artificial" implies. This is a narrative trick that exploits the human appetite for imagination and for magical solutions that help the hero succeed. A significant part of what is called AI is just a marketing artifice able to promote a certain ideology and the interests of a techno-elite. This also explains the capacity to forget and ignore the history of AI's promises and failings. There is an intrinsic attraction to intelligence interwoven into modernity that conditions us to attach hope to anything that can be presented with credibility as "intelligent": something that can solve our problems and win our wars.
Not only is AI not a specific advanced technology, such as quantum computing or synthetic biology, it is also a technological construct based on what we decide is "intelligent." We decide – consciously or not – what is intelligent. This is determined by a long and inescapable tradition of understanding intelligence as a sum of attributes selected by scientists aligned with arbitrary metrics and rankings of humanity. It is a dangerous error to accept, without serious critique, that a concept sourced in some of the most nefarious prejudices, assumptions, and biases, one that stands as the foundation for the most inhuman ideologies, can be taken as the basis for a scientific approach, a key to technologies that can address some of the most pressing problems confronting humanity today. It is a necessary exercise to stop, distance ourselves for a moment from all the noise and hype, look in depth at what lies at the source of our words, investigate the marketing title, and see what ideology is served. Ultimately, we have to see whether the promised progress of a prominent solution is here, and whether we are all living in a better world.
Our understanding of intelligence, and of what we name "AI," reflects certain values, practices, preferences, and perspectives. AI is not just a range of algorithms and code but an ideology and political project that stands inevitably linked with its eugenic, racist, and elitist roots. These origins, revolving around discriminatory hierarchies of power and control, make AI prone to perpetuating and reinforcing the tradition of class and ethnic discrimination as a criterion for assigning power; it is a project that is structurally unsuitable for equity, transparency, and democracy. It is definitely a direction that must be avoided in education, a field placed at the intersection of psychology, sociology, economics, and philosophy.
There is a deafening noise created by economic groups, investors, fantastic narratives of technology, and edtech sales pitches, and only by going to "the bottom of words" can we get an unbiased perspective on AI. So far, we have economists and business groups shaping the agenda for schools and universities, determining the practical steps and the selection of what and how students can learn and how teachers can teach. Collectively, we have reached a point where we confuse financial elites and success with human intelligence and have abandoned the ideals of the common good, social harmony, and progress; baseless myths at the core of the neoliberal utopia guide our culture and decision makers, but all come at the cost of growing, intertwined, and increasingly dangerous crises. It seems that the more obsessed we are with acquiring "intelligence," the more foolish mistakes we make to accelerate and fuel humanity's existential threats.

Notes
1. Cioran, E. M. (2012). A short history of decay. Arcade Publishing.
2. Legg, S., & Hutter, M. (2007). A collection of definitions of intelligence. Frontiers in Artificial Intelligence and Applications, 157, 17–24. arXiv:0706.3639 [cs.AI]
3. Jensen, A. R. (1998). The g factor: The science of mental ability. Praeger.
4. Sternberg, R. (Ed.). (2020). The Cambridge handbook of intelligence (2nd ed., Cambridge
Handbooks in Psychology). Cambridge University Press. doi:10.1017/9781108770422
5. Galton, F. (1909). Essays in eugenics. The Eugenics Education Society.
6. Galton, F. (2012). Hereditary genius: An inquiry into its laws and consequences. Barnes &
Noble.
7. Galton, F. (1901, October 29). The second Huxley lecture of the anthropological institute,
included in the essays in eugenics. The Eugenics Education Society.
8. Gould, S. J. (1996). The mismeasure of man (Rev. and expanded. ed.). W. W. Norton &
Company.
9. Galton, F. (1909). Essays in eugenics. The Eugenics Education Society.
10. Delzell, D. A., & Poliak, C. D. (2013). Karl Pearson and eugenics: Personal opinions and scientific rigor. Science and Engineering Ethics, 19(3), 1057–1070. https://doi.org/10.1007/s11948-012-9415-2
11. Pearson, K. (1911). The grammar of science (3rd ed.). Adam and Charles Black.
12. Clayton, A. (2021). Bernoulli’s fallacy: Statistical illogic and the crisis of modern science.
Columbia University Press.
13. Kevles, D. J. (1986). In the name of eugenics: Genetics and the uses of human heredity. Uni-
versity of California Press.
14. Kühl, S. (1994). The Nazi connection: Eugenics, American racism, and German national
socialism. Oxford University Press.
15. Galton, F. (1908). Memories of my life. Methuen & Co.
16. Reilly, P. R. (2015). Eugenics and involuntary sterilization: 1907–2015. Annual Review
of Genomics and Human Genetics, 16, 351–368. https://doi.org/10.1146/annurev-
genom-090314-024930
17. Lombardo, P. A. (2011). A century of eugenics in America: From the Indiana experiment to
the human genome era. Indiana University Press.
18. Whitman, J. Q. (2017). Hitler’s American model. The United States and the making of Nazi
race law. Princeton University Press.
19. Cohen, A. (2016). Imbeciles. The supreme court, American eugenics, and the sterilization of
Carrie Buck. Penguin Press.
20. The Annals of Human Genetics. https://onlinelibrary.wiley.com/page/journal/
14691809/homepage/productinformation.html
21. James, W. (1983). The principles of psychology. Harvard University Press.
22. Gonzalez, G., & Gonzalez, G. (1979). The historical development of the concept of
intelligence. Review of Radical Political Economics, 11(2), 44–54. https://doi.org/10.1177/
048661347901100204
23. Frey, B. B. (2018). The Sage encyclopedia of educational research, measurement, and evalua-
tion. Sage Reference. https://doi.org/10.4135/9781506326139
24. Kell, H., & Wai, J. (2018). Terman study of the gifted. In B. Frey (Ed.), The Sage ency-
clopedia of educational research, measurement, and evaluation (Vol. 1, pp. 1665–1667). Sage
Publications, Inc. www.doi.org/10.4135/9781506326139.n691
25. Kevles, D. J. (1985). In the name of eugenics: Genetics and the uses of human heredity.
Knopf.

26. Marks, R. (1974). Lewis M. Terman: Individual differences and the construction of social reality. Educational Theory, 24(4), 336–355. https://doi.org/10.1111/j.1741-5446.1974.tb00652.x
27. Leslie, M. (2000, July/August). The vexing legacy of Lewis Terman. Stanford Maga-
zine. https://stanfordmag.org/contents/the-vexing-legacy-of-lewis-terman
28. Shurkin, J. N. (2006). Broken genius: The rise and fall of William Shockley, creator of the
electronic age. Palgrave Macmillan.
29. Romo, D. D. (2005). Ringside seat to a revolution: An underground cultural history of El Paso
and Juárez, 1893–1923. Cinco Puntos Press.
30. Lombardo, P. A. (2002). “The American breed”: Nazi eugenics and the origins of the
Pioneer Fund. Albany Law Review, 65(3), 743–830.
31. Sternberg, R. J. (2020). The Cambridge handbook of intelligence. Cambridge University
Press.
32. Senior, J., & Gyarmathy, E. (2021). AI and developing human intelligence: Future learning and educational innovation. Routledge.
33. Hauser, C. (2021, June 2). Outrage greets report of Arizona plan to use “holocaust
gas” in executions. New York Times. www.nytimes.com/2021/06/02/us/arizona-
zyklon-b-gas-chamber.html
34. Chun, W. H. K., & Barnett, A. (2021). Discriminating data: Correlation, neighborhoods,
and the new politics of recognition. The MIT Press.
35. McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Magazine, 27(4), 12. https://doi.org/10.1609/aimag.v27i4.1904
36. McCorduck, P. (2004). Machines who think: A personal inquiry into the history and prospects of artificial intelligence. A.K. Peters.
37. Ashby, W. R., Shannon, C. E., & McCarthy, J. (1956). Automata studies. Princeton
University Press.
38. McCarthy, J. (1987). Generality in artificial intelligence. Communications of the ACM, 30(12), 1030–1035. https://doi.org/10.1145/33447.33448
39. Katz, Y. (2020). Artificial whiteness: Politics and ideology in artificial intelligence. Columbia University Press.
40. Conway, F., & Siegelman, J. (2005). Dark hero of the information age: In search of Norbert
Wiener, the father of cybernetics. Basic Books.
41. Heikkila, M. (2021, October 20). Politico AI: Decoded: AI goes to school – What EU
capitals think of the AI Act – Facebook’s content moderation headache. Politico. www.
politico.eu/newsletter/ai-decoded/ai-goes-to-school-what-eu-capitals-think-of-
the-ai-act-facebooks-content-moderation-headache-2/
42. Stokel-Walker, C. (2021, November 25). AI has learned to read the time on an ana-
logue clock. New Scientist. www.newscientist.com/article/2298773-ai-has-learned-to-
read-the-time-on-an-analogue-clock/
2
IMAGINATIONS, EDUCATION, AND THE AMERICAN DREAM

In the final report of the United States National Security Commission on Artificial Intelligence, published in 2021, we find that

AI is not a single technology breakthrough, like a bat-wing stealth bomber. The race for AI supremacy is not like the space race to the moon. AI is not even comparable to a general-purpose technology like electricity. However, what Thomas Edison said of electricity encapsulates the AI future: 'It is a field of fields . . . it holds the secrets which will reorganize the life of the world.'
(NSCAI, 2021, p. 7)

This may be inspiring and promising, but it is not clear. The main premise of such an important document, written for an unprecedented military force and detailed in 16 chapters and over 700 pages, is that AI is not one thing but "a field of fields." This is epistemologically honest, as we cannot seriously say that only one specific computing system is AI. However, it leaves open the question of what exactly AI is. A working definition is included in the report, which defines AI as "the ability of a computer system to solve problems and to perform tasks that have traditionally required human intelligence to solve" (NSCAI, 2021, p. 602). This explanation shifts the entire weight of the clarifying effort onto the hotly debated and complex field of human intelligence. We have already seen in the previous pages how questionable and contentious it is to state precisely what "human intelligence" is and how we can measure it. It seems that the intention of this definition is to compare AI with the set of attributes measured by IQ tests. If this is the case, then we have to admit that we place a flawed epistemological framework at the foundation of a very important field of technology. Human intelligence is much more than what can be measured by tools inspired by eugenic theories. Moreover, when we look at AI and try to understand how such an obscure and hollow concept became so popular, universally attached to anything – from vacuum cleaners to rockets, telephones and guided missiles, education and medicine – we have to remember that "intelligence" is there because it makes the AI label attractive and desirable. The concept points to an enticing human-like attribute of superior intellectual performance. AI was born as a hollow concept, as Yarden Katz details in his extraordinary book Artificial Whiteness:

[T]he term "Artificial Intelligence" charmed the Pentagon's elites, and they poured money into the field. In the 1960s AI "became the new buzzword of America's scientific bureaucracy." Reputable AI practitioners at MIT such as McCarthy and Minsky received unrestricted funding from the Pentagon.
(Katz, 2020, p. 24)

The book presents how cyberneticians looked at the AI initiative as "a con trick," a practical solution to exploit the military's confusion and raise funds from the Pentagon while in practice "simply [doing] the sort of things we [cyberneticians] were all doing anyway" (Katz, 2020, p. 25).
AI was defined from its inception as a project to create a machine with human-like intelligence. Marvin Minsky, the "father of artificial intelligence," as he was named by his colleagues at MIT (MIT Media Lab, 2016), defined AI in 1968 as "the science of making machines do things that would require intelligence if done by men" (Stonier, 1992, p. 107). Currently, the most ambitious aim of researchers and industry is to create what has been called since the 1980s "strong AI," which is "artificial intelligence that is in all respects at least as intelligent as humans" (Butz, 2021). This looks like a great aim, but it is in fact so great that a large number of specialists in AI consider strong AI impossible to achieve. As a definition, it is no closer to what we should expect from a field presented by its engineers as firmly grounded in mathematics and exact science. Again, human intelligence is far from being defined in a clear, scientific, and widely accepted form, and human consciousness and intelligence remain described by various hypotheses and speculations about their nature, what defines them, and how we can measure intelligence. The psychometric approach, based on IQ tests, dominates the applications and literature in the field, but intelligence tests are placed in perspective by more recent and nuanced theories. Unfortunately, what all these great contributions achieve is not only to expand our understanding of intelligence but also to reveal how complex the endeavour is to confine this concept to strictly determined algorithmic structures and mathematical sequences. AI is still a project, and this may explain the difficulty of finding a generally accepted definition of what it is.

In "Intelligence: Knowns and Unknowns," a report published by a group of experts selected by the American Psychological Association to investigate what is known and unknown about intelligence, we find the observation that when "two dozen prominent theorists were recently asked to define intelligence, they gave two dozen somewhat different definitions" (Neisser et al., 1996, p. 77). The epistemological difficulty of defining clearly what intelligence is also leaves AI open to exploitation and exaggeration. Specifically, this vulnerability leaves AI open to automation bias, to the adoption of technological solutions as a panacea, and to the marketing push to label any computing system as "intelligent." When we don't know exactly what "intelligence" is – and what specifically can be described as AI – it is quite easy to market, for example, image-processing software that has been around for decades as an innovative AI solution for designers and media producers.
So what is AI, and how is it possible to avoid a definition that is too general and vague? The OECD's AI Experts Group (AIGO) defines an AI system as

a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. It uses machine and/or human-based inputs to perceive real and/or virtual environments; abstract such perceptions into models (in an automated manner e.g. with machine learning (ML) or manually) and use model inference to formulate options for information or action. AI systems are designed to operate with varying levels of autonomy.
(OECD, 2019, p. 15)

Deep learning AI is defined as

a branch of machine learning that has its roots in mathematics, computer science, and neuroscience. Deep networks learn from data the way that babies learn from the world around them, starting with fresh eyes and gradually acquiring the skills needed to navigate novel environments. The origin of deep learning goes back to the birth of artificial intelligence in the 1950s, when there were two competing visions for how to create an AI: one vision was based on logic and computer programs, which dominated AI for decades; the other was based on learning directly from data, which took much longer to mature.
(Sejnowski, 2018, p. 3)
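The "learning directly from data" vision in this definition can be made concrete in a few lines of code. The sketch below (plain Python with NumPy; the network and data are invented purely for illustration) trains a tiny neural network by gradient descent to reproduce the XOR function from four examples, without the rule ever being programmed:

```python
import numpy as np

# Toy example of "learning directly from data": a small network learns XOR.
rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass: compute predictions from the current weights.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error, layer by layer.
    grad_out = (p - y) * p * (1 - p)
    grad_h = grad_out @ W2.T * (1 - h ** 2)

    # Gradient descent: nudge every weight against its error gradient.
    lr = 0.5
    W2 -= lr * (h.T @ grad_out); b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * (X.T @ grad_h);   b1 -= lr * grad_h.sum(axis=0)

print(p.round(2).ravel())  # approaches [0, 1, 1, 0] -- learned, not programmed
```

Nothing in the code states what XOR is; the behaviour emerges from repeated small adjustments of weights against examples, which is exactly the process that, at vastly larger scale, engineers can no longer fully explain.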

Some of the most respected engineers in this field warn us that deep learning can work only if neural networks are continuously adjusted and improved, tweaked and adapted. In this process, results run ahead of theoretical understanding, and we simply trust the system to work. In the history of humanity, technology was always associated with progress. History also gives us key lessons that show when technology helps rather than destroys us: when technology is idolised, we take the road to disaster. This is just one set of reasons to argue that human oversight of AI is a possible solution only in a certain set of circumstances, and never sufficient for important decisions, which must be taken only by humans.
It may be clear at this point that we need more than a technical paragraph and academic jargon to understand what AI is. Especially when we think about the immensely complex field of education, we have to grasp not simply a short definition of AI but how this technological construct works and what its use involves for students, teachers, and the institutions adopting it. A good start is provided by Ivana Bartoletti, a specialist in AI, who offers a more sensible and insightful definition in her wise book, An Artificial Revolution:

To put it simply AI is (so far at least) about machines performing a task that humans perform and which is possible only because we, humans, have taught them to do so. The thing we program them to do is to recognize and act upon the correlation between things (intelligere); things that for us, humans, make up some part of what constitutes life and experience.
(Bartoletti, 2020, p. 23)

In other words, AI is the generic name for computing systems that are able to engage in human-like processes such as learning, adapting, synthesising, self-correcting, and using data for complex processing tasks. However, it is essential to keep in mind the key feature underlined by Bartoletti: AI is determined entirely by what humans decide to feed these systems, by the data they select from fields they decide are representative. This is a subjective endeavour. AI does what humans ask and enable the computing systems to perform, based on the data we collect and the structure of the algorithms used – which are also created and determined by humans.
In The Black Box Society, a seminal book written by Frank Pasquale, we find why it is important to understand how the "knowledge problem" and secrecy are intentionally cultivated in our lives by "our increasingly enigmatic technologies" and their masters, with great implications for everyone's life. He notes that "Secrecy is approaching critical mass, and we are in the dark about crucial decisions. Greater openness is imperative" (Pasquale, 2015, p. 4). AI is developing fast, and the public is not invited to look inside the extraordinarily complex "black box" of algorithms fuelling AI systems, or helped to understand its workings. The most important corporations in new technologies, such as Google, Facebook (Meta), Microsoft, Amazon, and Apple, refuse to provide access to their algorithms, and – as Frank Pasquale reflected – we can generally find out what information goes into these systems, and we can easily access what results from aggregating this data, but we do not know what happens in the middle of the process, in the "black box" of technocracy. With the exception of a very limited number of engineers, we have no idea how data is aggregated, manipulated, and analysed by algorithms. The rapid advancement of AI, and its adoption in some of the most personal areas of our lives, makes the need for transparency about algorithms and data practices even more acute; this is why we see in Europe an acceleration of legislative initiatives aiming to regulate AI and limit the possibilities of misuse. It is a commendable direction that needs to be seriously analysed in this book, to find out whether legislative regulation represents a sufficient measure against abuse and wrong applications of AI.
There are already a significant number of excellent books and studies that detail what AI is, and some are used in this analysis. The basic element we need to remember is that AI is a sum of algorithms that work with data, the fuel of the AI engine. Algorithms are mathematical rules or steps for solving a problem, and there are fundamentally two ways algorithms are designed by engineers in AI: explicitly (using direct computations on data that describes known quantities) or implicitly (as in machine learning, where the algorithm is derived from present and future data), as the sketch below illustrates.
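The distinction between the two is easy to see side by side. In the sketch below (plain Python with NumPy; the temperature example is invented for illustration), the explicit algorithm encodes a rule the programmer already knows, while the implicit one derives its rule, a line of best fit, from data:

```python
import numpy as np

# Explicit algorithm: the rule is written down by the programmer.
def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32  # a known, fixed computation

# Implicit algorithm: the rule is derived from example data.
celsius = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
fahrenheit = np.array([32.0, 50.0, 68.0, 86.0, 104.0])  # observed pairs

# Least squares finds the coefficients of the line relating the two scales;
# no one tells the program that the answer is "multiply by 1.8, add 32".
A = np.vstack([celsius, np.ones_like(celsius)]).T
slope, intercept = np.linalg.lstsq(A, fahrenheit, rcond=None)[0]

print(celsius_to_fahrenheit(25.0))   # 77.0, by rule
print(slope * 25.0 + intercept)      # ~77.0, by a rule derived from data
```

In machine learning the derived rule is rarely this transparent: with more variables and more flexible models, the "algorithm" becomes whatever the data, and the choices behind the data, make of it.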
It is important to remember that AI is only as good as its data, and the collection, labelling, and entry of data determine the quality of the expected results. One of the most surprising illusions cultivated by media, academics, and centres of narrative influence is that data is just factual, clean, objective, unprejudiced, and neutral; data is glorified as the miraculous fuel of the most advanced scientific projects. The narrative of almighty data permeates academic culture and practice; it is one of the most frequent refrains I have heard in my work at university. Big data and data analytics are, in these contexts, the magical key to understanding all that one may want to understand: student interests and student engagement in learning, student needs and optimal teaching approaches, learning challenges and students' futures. It all looks good and innocuous until one examines more closely how this happens. For example, only very rarely are students informed about all the data collected about them. Hardly ever are they asked to consent to these practices or allowed to refuse data collection. Data collected on the basis of arbitrary criteria is often used to reach conclusions about students' interest in learning, based on silly variables such as the number of clicks recorded or the amount of time spent logged into the learning management system (LMS). If this data is used in AI solutions in higher education, we can expect significant errors and damaging results. Big Data is wonderful, we hear, because it offers us a God-like view of everything: personal profiles, past performances, possible results, and future preferences or accomplishments. The promise associated with Big Data as a way to do big and wonderful things is an intentionally false representation. The first reason is that we do not have clinical, virgin, neutral, and completely unbiased data to work with, simply because that is an impossible concept. Lisa Gitelman explains in the introduction to "Raw Data" Is an Oxymoron, a book opening different perspectives on why the common phrase "raw data" leads to an impossible situation. It is impossible – we find in the book – because we always have data that is "cooked" by those who collect it or select what is interpreted. Gitelman draws attention to the imagination of data and to why it is impossible to have data that is entirely neutral:

Like events imagined and enunciated against the continuity of time, data are imagined and enunciated against the seamlessness of phenomena. . . . Every discipline and disciplinary institution has its own norms and standards for the imagination of data, just as every field has its accepted methodologies and its evolved structures of practice.
(Gitelman, 2013, p. 3)
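The earlier point about arbitrary criteria, such as clicks in the LMS, can also be made concrete. The sketch below (plain Python; all names and numbers are invented for illustration) shows how a naive click-based "engagement" rule confidently mislabels two students whose actual study habits it never sees:

```python
# Illustration only: a naive "learning analytics" rule based on clicks.
students = {
    # (LMS clicks per week, hours of offline study with downloaded readings)
    "student_A": (420, 0.5),   # clicks restlessly through pages, studies little
    "student_B": (35, 12.0),   # downloads everything once, studies offline
}

def engagement_label(clicks: int) -> str:
    """Classify a student from recorded clicks alone."""
    return "engaged" if clicks > 100 else "at risk"

for name, (clicks, offline_hours) in students.items():
    print(f"{name}: {clicks} clicks -> '{engagement_label(clicks)}' "
          f"(actual offline study: {offline_hours} h/week)")
```

Whatever model is trained on such labels inherits the political choice hidden in them: that clicking counts as learning and everything uncollected does not.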

Someone decides what data is collected, and this limits what we collect, regardless of quantity. No matter how "big" data is, it is impossible to imagine a situation where we have all the data on a phenomenon, all of it objective and unrelated to what we think counts as data, what we decide to collect as data, and what we decide to include in the "big data" database. In reality, the story of Big Data is much more problematic than it looks in the common narratives shared by media and by some academics self-appointed as experts in edtech. A study published in Nature, investigating whether data-driven clinical decisions are biased and benefit some populations differently, reaches the conclusion that "there are negative repercussions of disregarding class performance disparities in the context of skewed data distributions – a challenge still largely neglected but impacting many areas of AI research" and that these models should "systematically undergo fairness assessments to break the vicious cycle of inadvertently perpetuating the systemic biases found in society and healthcare under the mantle of AI" (Röösli et al., 2022). In other words, databases and AI models discriminate against some populations and, although this is not addressed as a matter of validity, clinical decisions are distorted by AI solutions to serve preferred groups. A study published by Harvard Business Review in 2017 reveals that only 3% of companies' data meets basic quality standards, and concludes:

These results should scare all managers everywhere. Even if you don't care about data per se, you still must do your work effectively and efficiently. Bad data is a lens into bad work, and our results provide powerful evidence that most data is bad. Unless you have strong evidence to the contrary, managers must conclude that bad data is adversely affecting their work.
(Nagle et al., 2017)

“Big Data” is a much more complex story than this label suggests, and contains
also some equally big problems with its use. Students and the general public are
not informed on how much low-quality data that is collected on inadequate cri-
teria even though these models often shape decisions in education and the future
of students and graduates.

Big Data on learning is also associated with a problem that goes beyond the theoretical possibility of reaching a highly accurate and complete database for our AI models. Learning is contextual, influenced by cultural, psychological, physical, environmental, developmental, and sociological variables (to name just a few). It is emotionally charged; the quantity and predictability of sequences in learning do not reflect much of the intensity or impact of a learning segment. To collect all the data on a phenomenon as complex as human learning is virtually impossible. Even if we try – as an exercise of imagination – to start from the premise that this could be possible, we have to keep in mind that just one random event can change the weight and influence of key data and bring new variables that change everything. There is no doubt that we can find patterns and create profiles and pathways, but if we lose perspective on the limits of this process, we risk losing relevance and dehumanising the entire process. Ivana Bartoletti notes in her in-depth analysis of data violence that "data has a huge flaw, a flaw that is widely ignored or wilfully disguised: data is not neutral. It is inherently political. What to collect and what to disregard is an act of political choice" (Bartoletti, 2020, p. 16).
The limitations of data and the nature of algorithms also determine the areas where AI can excel and will advance fast in the coming decades, as well as the limitations that make its use misleading or damaging. AI works optimally in areas where it is possible to create algorithms aligned with clear sequencing and measurable, structured patterns. AI works very well in fields such as medicine, where we have clear DNA patterns and applications in which large volumes of data can optimally identify models and possible new applications; in learning and teaching, we have a very different field. There are political choices that limit and decide what data is collected, how it is collected, who can have access to data, and how it is interpreted; these limitations are crucial elements that should be open to scrutiny and properly investigated if universities want to have agency over their project of higher education. Universities – as we will see in some examples – show an unexplainable detachment and indifference regarding some extraordinarily important solutions purchased from educational technology (edtech) vendors. Plagiarism-deterrence software, the online solutions grouped under the term LMS, surveillance, and many other edtech applications are not considered in light of data concerns. These applications come with serious implications for students' learning and futures, but they are simply ignored. If this looks like an unfair or exaggerated claim, I suggest a simple experiment: consider yourself a prospective student and search online to see how openly the university you might choose presents details of data management, collection, and aggregation, and of the uses of data by third parties, the corporate entities that can access and use data collected from students.
Definitions and understandings of AI are largely determined by the ideological and emotional position of those who describe it. AI exists more as a set of claims and disparate software and hardware solutions than as a single coherent technology. For enthusiastic computing engineers, passionate about coding and all the possibilities opened by public interest in this field, AI is approached as a magic power, whose mastery is reserved for an elite of creators and initiates. We also have the group of investors, naturally motivated to inflate the promise of AI and maximise their profits on the stock market, and not interested in engaging in conceptual delimitations. We can look here at just one example: Mark Cuban, a billionaire active in tech investments who is frequently quoted by magazines and newspapers comfortable with promoting the idea that wealth is synonymous with wisdom. This tech tycoon noted in 2017 at the Upfront Summit that "Artificial Intelligence, deep learning, machine learning . . . whatever you're doing if you don't understand it . . . learn it! Because otherwise you're going to be a dinosaur within 3 years." Over five years later, we can see that many investors in AI still don't understand it, and it is ridiculous to look at them as "dinosaurs." Here we find a common theme associated with AI in public spaces: most of those who talk about AI, including in higher education, start from the assumption that they clearly know what AI, deep learning, and machine learning are, and that those who don't will face extinction. The idea that all successful investors on the stock market know what AI is should be taken with great reserve, as the most prominent engineers in the field note how much is unknown about the way it functions and delivers results. For example, in an article published in Science in 2018, we find the enquiry of an AI expert, Ali Rahimi, a researcher in AI at Google, who suggests that we have reached the point where AI is taking the form of alchemy:

"Researchers do not know why some algorithms work and others don't, nor do they have rigorous criteria for choosing one AI architecture over another. . . . I'm trying to draw a distinction between a machine learning system that's a black box and an entire field that's become a black box." Without deep understanding of the basic tools needed to build and train new algorithms, he says, researchers creating AIs resort to hearsay, like medieval alchemists.
(Hutson, 2018)

Top experts working for tech giants reveal recurrent difficulties in explaining AI's reproducibility problem or its "interpretability": the complicated efforts and impasse in explaining how a particular AI system or machine learning model has come to its conclusions. The same article from Science quotes François Chollet, a computer scientist at Google: "People gravitate around cargo-cult practices," relying on "folklore and magic spells."
There is much more than "folklore and magic spells" that needs to be scrutinised to understand how the ideological foundations of AI structure a specific approach to education and its futures. The theological motifs present in the public discourse on AI have deeper roots than the public can see, and they structure far more than a simple enthusiasm for technology, or motivations limited to profits and greed. What we have in AI is not a new alchemy but a new cult. Wolfram Klingler, founder and CEO of two technology firms based in Switzerland, noted in an article he authored that Silicon Valley is at a stage where it is institutionalising its religious beliefs:

Digitalism, or machine religion. Digitalists believe in transcending the human condition, ultimately overcoming death through machines. Just as Christianity promises ultimate redemption from Original Sin, Digitalism promises redemption from the unavoidable sin of our messy, distracted, limited brains, irrational emotions, and ageing bodies.
(Klingler, 2017)

The public discourse on AI is marked by a religious fervour and a strategic intolerance for the few alternatives left against techno-determinism, or for any serious scrutiny of promises, risks, and errors. This new religion is channelling the energy of faith and the passion of true belief into a techno-utopian creed that is fundamentally elitist, authoritarian, and based on dehumanising constructs. Some labelled it posthuman, others antihuman; regardless of which name is most appropriate, we can see it proposing and operating, with remarkable coherence, dehumanising engines. In 2018, a report commissioned by Facebook (now Meta) concluded that the online social media platform was "not doing enough to help prevent our platform from being used to foment division and incite offline violence" in Myanmar (Warofka, 2018). "Not doing enough" is an unfortunate euphemism to cover the reality of platforms used to spread and amplify hate speech, hiding behind propagandistic marketing strategies while these tools caused hideous crimes and ethnic cleansing. Sacha Baron Cohen underlined some important facts when he delivered the keynote address in 2019 at the ADL's Summit on Anti-Semitism and Hate.

Democracy – Sacha Baron Cohen said – which depends on shared truths, is in retreat, and autocracy, which depends on shared lies, is on the march. Hate crimes are surging, as are murderous attacks on religious and ethnic minorities.
What do all these dangerous trends have in common? I'm just a comedian and an actor, not a scholar. But one thing is pretty clear to me. All this hate and violence is being facilitated by a handful of internet companies that amount to the greatest propaganda machine in history.
The greatest propaganda machine in history.
Think about it. Facebook, YouTube and Google, Twitter and others – they reach billions of people. The algorithms these platforms depend on deliberately amplify the type of content that keeps users engaged – stories that appeal to our baser instincts and that trigger outrage and fear.
(Cohen, 2019)
This is not a cult followed by a marginal group, or a few eccentrics in California with a passion for technology; it is a well-structured, extremely rich group that managed to impose a self-serving narrative, one that captured the imagination of centres of power and, ironically, of the large masses looked at with contempt and disgust by the post-human elites. Placed at the intersection of American capital and technological advancement, Silicon Valley uses the strength of American leverage over international institutions, employing its most powerful cultural weapons, such as the American Dream, to promote its goals.
This is a new religion. Sacha Baron Cohen warned us what is at risk, noting in his address that we are leaving the fate of the world at the mercy of a limited group:

The Silicon Six – all billionaires, all Americans – who care more about
boosting their share price than about protecting democracy. This is ideo-
logical imperialism – six unelected individuals in Silicon Valley imposing
their vision on the rest of the world, unaccountable to any government and
acting like they’re above the reach of law.

Academia used to keep alive, with pride, the space of ideas and debates about the challenges and risks facing society, culture, and our common future; it was never perfect, but the ambition was there. These serious warnings – delivered publicly by an actor and activist with international exposure – failed to attract the attention of universities, academics, and the vast administrative structures in higher education. When some edtech start-ups came with the story of Massive Open Online Courses (MOOCs), academics and administrators became absurdly enthusiastic about impossible promises of tech-solutionism. Strangely, a proven, obvious, and active threat to learning and the advancement of knowledge, to democracy and civil society, was and still is largely ignored. Universities are more focused on the new shiny trick that can boost profits and "secure markets." The following chapters provide a closer look at the lost lessons of MOOCs in higher education; that story reveals the fervour of faith associated with edtech in higher education.
There is a larger group than "the Silicon Six" sharing the power and the faith in Silicon Valley. Douglas Rushkoff presented a few clear characteristics of this cult, writing in Medium that

[T]here is a Silicon Valley religion, and it's one that doesn't particularly care for people – at least not in our present form. Technologists may pretend to be led by a utilitarian, computational logic devoid of superstition, but make no mistake: There is a prophetic belief system embedded in the technologies and business plans coming out of Google, Uber, Facebook, and Amazon, among others. It is a techno-utopian and deeply anti-human sensibility.
(Rushkoff, 2018)
In 2015 Anthony Levandowski, a Silicon Valley engineer who became well known for his work for Google on self-driving cars, founded a new church called Way of the Future to "develop and promote the realization of a Godhead based on Artificial Intelligence." The analysis provided by Klingler makes just one important error: it accepts the view that Levandowski is relevant to the new cult of technologists. He is not. There is a reason even Levandowski abandoned his new "church": the idea had no traction in a place known for adopting anti-religious sentiment as a pillar of its new religion. Silicon Valley shared from its inception a clear set of beliefs and values: it built new technologies on eugenic and racist values and privately kept the belief in white supremacy. It is anti-human because it looks at humans as dirty, imperfect, and dangerous, with the inferior groups of these flawed creatures multiplying too much. It is ironic that this part of the techno-beliefs was picked up by international media only when Prince William, the Duke of Cambridge, said in November 2021, in a speech at the Tusk Conservation Awards, that human populations put "pressure" on Africa's wildlife, noting that "the increasing pressure on Africa's wildlife and wild spaces as a result of human population presents a huge challenge for conservationists, as it does the world over" (Daunton, 2021). In an article on this event published by Al Jazeera, Edna Mohamed found the ideological sources of this position in the old eugenic theories:

The ideology has racist connotations – in short, Black, Brown and margin-
alised people are blamed for overpopulation and consequently the environ-
ment’s demise. The idea’s origins can be traced to an essay by the English
18th-century economist Thomas Robert Malthus entitled “The Principle
of Population,” which lays the foundation for eugenics in the arena of cli-
mate change.
(Mohamed, 2021)

It is true that Silicon Valley does not make these beliefs as visible and direct as a British royal, but it is relatively easy for anyone with some time and curiosity to see them expressed in the writings of the founding fathers of AI. For example, we can look at other developments at M.I.T. that are relevant to the context in which Minsky introduced AI to the Pentagon – and to the world. A team of experts working at M.I.T. published at that time "The Limits to Growth," the first "report for the Club of Rome's project on the predicament of mankind." In this report we find the eugenic call for "population control," explicitly aimed at addressing rapid population growth and finding ways to have fewer people. A conclusion of this report sheds new light on the favoured model created by the group of experts at the Massachusetts Institute of Technology: "some considered the model too 'technocratic,' observing that it did not include critical social factors, such as the effects of adoption of different value systems. The chairman of the Moscow meeting summed up this point when he said, 'Man is no mere biocybernetic device.' This criticism is readily admitted. The present model considers man only in his material system because valid social elements simply could not be devised and introduced in this first effort" (Meadows et al., 1972, pp. 187–188). The New York Times reviewed The Limits to Growth at that time in unambiguous terms. This report, it noted,

in our view, is an empty and misleading work. Its imposing apparatus of computer technology and systems jargon conceals a kind of intellectual Rube Goldberg device – one which takes arbitrary assumptions, shakes them up and comes out with arbitrary conclusions that have the ring of science.
(Passell et al., 1972, p. 1)

Technological utopianism has evolved greatly since the 1970s, but some parts of that first report for the Club of Rome have found a surprising fulfilment today. Every part of our lives is now "too technocratic," and social factors are addressed with the application of new technologies. To take just one example, we can look at the story of what is now known in Australia as the RoboDebt scandal: the name given to the automated process of matching data on welfare recipients with their total income data. This "intelligent" process of identifying overpayments triggered an avalanche of debt notices, with the most vulnerable people wrongly targeted by a flawed algorithm. Soon, the AI system of control was sending 20,000 letters a week, targeting welfare recipients identified with debt. The system was wrong, and there are reports of people committing suicide after being targeted by the AI solution. A class action was launched against these decisions, and the courts found the entire RoboDebt system unlawful; consequently, they found that 373,000 people should receive refunds, awarding $112 million in compensation and cancelling debts of $398 million. There is no doubt that RoboDebt was a social, cultural, and civic disaster; it probably went further than what "The Limits to Growth" suggested in the early 1970s.
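The statistical flaw at the heart of the scheme is well documented: annual income from tax records was smoothed into equal fortnights and compared with what recipients actually declared fortnight by fortnight. A minimal sketch of that averaging logic – with the thresholds, rates, and figures below invented for illustration, not taken from the actual system – shows how a casual worker who reported every dollar honestly could still be flagged with a "debt":

```python
# A minimal sketch (not the actual government system) of RoboDebt's
# income-averaging flaw: a yearly income total is smoothed into identical
# fortnights and compared with the recipient's fortnightly declarations.
# All thresholds and rates here are hypothetical placeholders.

FORTNIGHTS_PER_YEAR = 26
INCOME_FREE_AREA = 300.0   # hypothetical fortnightly earnings threshold
TAPER_RATE = 0.5           # hypothetical, simplified benefit-reduction rate

def averaged_fortnightly_income(annual_income: float) -> float:
    """Smooth a yearly total into identical fortnights -- the flawed step."""
    return annual_income / FORTNIGHTS_PER_YEAR

def flagged_debt(annual_income: float, reported_fortnights: list[float]) -> float:
    """The 'debt' the automated matching would raise against a recipient.
    A simplified stand-in for real benefit rules, kept deliberately minimal."""
    assumed = averaged_fortnightly_income(annual_income)
    debt = 0.0
    for reported in reported_fortnights:
        # The system presumes steady earnings; a casual worker who earned
        # nothing in some fortnights (and was entitled to full support)
        # appears to have under-reported income in those fortnights.
        overpaid = max(0.0, (assumed - reported) - INCOME_FREE_AREA) * TAPER_RATE
        debt += overpaid
    return debt

# A casual worker: $13,000 earned entirely in 10 fortnights, nothing in the rest.
year = [1300.0] * 10 + [0.0] * 16
print(flagged_debt(sum(year), year))  # 1600.0: a "debt" despite honest reporting
```

Under these invented parameters, the honest worker is flagged with a $1,600 "debt," because the averaging step manufactures phantom income in the fortnights without work; scale this logic to 20,000 letters a week and the avalanche described above follows.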
For the last decade, AI has moved to the heart of hopes for new solutions to make a better world. It is a story entrenched in what Evgeny Morozov called "solutionism": "recasting all complex social situations either as neatly defined problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimized – if only the right algorithms are in place!" (Morozov, 2013, p. 5). AI solutionism explicitly aims to replace humans with something "super," most often with a "super brain." The new AI narrative is not simply looking at the world as a mess that will be changed and "solved" with technology. It is a promise to change the world and humanity; all the mortal flaws, problems, and imperfections of humankind now have a solution in the new, miraculous, clinically perfect and flawless formula distilled in the algorithms of advanced AI. The point made at that meeting in Moscow is no longer valid in the new context: man can be a biocybernetic device, and if the "bio-" stands against this project, we can remove it in the near future with the superior solution of advanced AI.
Marvin Minsky was seduced by the eugenic ideas of population control, and he publicly shared and supported these views; his literary writings, for example, present a mix of science fiction and manifesto for a world with eugenic principles in place. We can take the example of "Alienable Rights," a short story published by Minsky in Discover Magazine and publicly accessible on a website maintained by M.I.T. It is a story in which two aliens evaluate human life on Earth. The opening scene presents the alien named Surveyor teaching the one called the "Apprentice," who finds humans primitive and disappointing. He is dismayed to see that "evolution on Earth is still mainly based on the competition of separate genes."

"APPRENTICE: Their genetic systems can't yet share their records of accomplishments? How unbelievably primitive! I suppose that keeps them from being concerned with time scales longer than individual lives.
SURVEYOR: We ought to consider fixing this – but perhaps they will do it themselves. Some of their computer scientists are already simulating 'genetic algorithms' that incorporate acquired characteristics. But speaking of evolution, I hope that you appreciate this unique opportunity. It was pure luck to discover this planet now. We have long believed that all intelligent machines evolved from biologicals, but we have never before observed the actual transition. Soon, these people will replace themselves by machines – or destroy themselves entirely.
APPRENTICE: What a tragic waste that would be!
SURVEYOR: Not when you consider the alternative. All machine civilizations like ours have learned to know and fear the exponential spread of uncontrolled self-reproduction." (Minsky, 1992)

Here we can see that Minsky is not only the father of AI but also a high priest of the new technocratic religion of Silicon Valley. It is the foundation of a theology that brings together a posthuman ethos of authoritarianism with a revived plan for eugenics, manifested in the belief that technology will improve the qualities of humankind through optimal solutions of population control, the replacement of human agency, and technological imperialism. It is the foundation of an exclusive and elitist culture with vast colonising plans, where the adoration of technology makes ecological, social, and cultural disasters quasi-irrelevant to the overall aims of the new project. It is a fascist utopia. This ideological context places education at the top of the fields that require an overall change. A "reformed" education is the practical pathway for the change indicated decades ago by Marvin Minsky in Silicon Valley.
The technocratic elite looks at humankind as messy, chaotic, ephemeral, and too often unpredictable; in such a world, what Surveyor says makes sense: adopt advanced technology or end this pathetic existence. "Genetic algorithms" open the way to a new and improved "breed." Here is a glimpse of the ultimate eugenic dream of intelligent technocracy: a world reserved for the improved herd, served by the most advanced technology we can imagine, where the preachers of technology have a solution for eternal life. Death, filth, misery, and the useless plebs are clinically eliminated, and humanity can finally step forward as an improved race. It is not a detail that the Apprentice and the Surveyor talk about new, colonised worlds: the imperial ethos is an integral part of these post-human, dysfunctional dreams. Those who are part of the technological elite are close to achieving one of the main promises of this cult: to become immortal. This is not a secret, and we bring no significant news; the birth of the AI cult was mostly obscured by the immense noise of new media on the Internet. In 2013, for example, Jaron Lanier, the father of virtual reality, published a book that soon won multiple awards, in which he presented the role of AI in the techno-cult of Silicon Valley:

We, the technical elite, seek some way of thinking that gives us an answer to death. . . . The influential Silicon Valley institution preaches a story that goes like this: One day in the not-so-distant future, the Internet will suddenly coalesce into a superintelligent AI, infinitely smarter than any of us individually and all of us combined; it will become alive in the blink of an eye, and take over the world before humans even realize what's happening. . . . All thoughts about consciousness, souls, and the like are bound up equally in faith, which suggests something remarkable: What we are seeing is a new religion, expressed through an engineering culture.
(Lanier, 2013, p. 193)

In fact, Marvin Minsky presented the narrative of this new religion clearly decades ago. He published in Scientific American an article whose title, "Will Robots Inherit the Earth?", is answered clearly and succinctly in the subtitle: "Yes, as we engineer replacement bodies and brains using nanotechnology. We will then live longer, possess greater wisdom and enjoy capabilities as yet unimagined" (Minsky, 1994).
It is no coincidence that AI is based on these fascist views. The history of cybernetics and of corporate groups in technology is intertwined with the history of racist, anti-human, and eugenic projects, including solid collaborations with the Nazis. In the book IBM and the Holocaust: The Strategic Alliance between Nazi Germany and America's Most Powerful Corporation, Edwin Black presents a well-researched and documented story of the alliance between computing pioneers and one of the most abominable regimes in the history of humanity. It describes how the precursor of the computer, the IBM punch card and its card-sorting system, was used to organise the Holocaust. With the knowledge of IBM's New York headquarters, Edwin Black notes, "IBM Germany, using its own staff and equipment, designed, executed, and supplied the indispensable technologic assistance Hitler's Third Reich needed to accomplish what had never been done before – the automation of human destruction" (Black, 2001).
A key tenet of the AI cult is that the human brain works as a computer, and that the interchangeable nature of AI solutions and the human brain is beyond doubt. We will soon have – these narratives inform us – technological solutions that will make it possible to transfer a human brain to a machine, or to insert a machine to work as part of someone's brain. Humans can be "enhanced" with AI, which works as a human brain without all the imperfections and conditions imposed by the embodied nature of human life. In effect, the elite able to create intelligent systems for computers look at themselves as demigods. We can see this in the spectacular hubris exposed by the belief that some engineers specialised in information-processing machines know exactly how a human brain works, and how thinking, memories, emotions, creativity, imagination, and metaphors mix and emerge from someone's mind. This is why we have so many – obviously useless – apps to "fix" depression, aid relaxation, reduce stress, enhance memory, and so on. This view, along with what Lanier details in his book, reflects the basics of a certain view of what education is and how it needs to be organised. It becomes clear why we so often see education presented as a mechanical, industrial process, where "skills" are gained by applying sequences of instruction and assessment. Learning becomes a matter of "micro" learning, which can be optimally covered by micro-credentials. The new project of learning looks like a puzzle that is solved by some optimal software. It still requires some operators, but soon all the data collected on students' patterns, teaching models, and assessment sequences will be smartly managed entirely by AI systems. No more teachers, no more waste and useless explorations: like a 3D printer, education is on the way to working as a technological process advanced enough to put together all the steps required to produce a graduate. This may sound very appealing if one's only experience of how learning happens was as a student; we have all been students, and it is very appealing to believe that we know exactly how education works. The anti-democratic project to debase and devalue experts and hard-earned expertise was all too visible in the COVID-19 pandemic. If good education takes a relatively long time to show results, in a pandemic it was quite clear what would happen if someone injected people with disinfectants or exposed infected patients' bodies to UV light, as Trump suggested in a White House public briefing in 2020. Despite life-threatening risks, a relatively large number of people confused bizarre opinions with expert solutions, and displayed in general a remarkable confidence in claiming that their musings are better than what experts recommend.
It is evident at this point that we have to see how open higher education is to this theological proposal and to the proselytising passion surrounding what was also called "the Californian ideology." Richard Barbrook and Andy Cameron coined this formula and expressed their surprise at its development:

Who would have thought that such a contradictory mix of technological determinism and libertarian individualism would become the hybrid orthodoxy of the information age? And who would have suspected that as technology and freedom were worshipped more and more, it would become less and less possible to say anything sensible about the society in which they were applied?
(Barbrook & Cameron, 1996)

We have to find out whether universities approach the engineering culture of Silicon Valley with a critical mind, using evidence-based research to find the most appropriate tools used in teaching (and on students). It is also important to see whether the AI engineering culture, with its dormant roots in eugenic theories, is compatible with the aims of education. AI giants are still guided by ideas of the "improvement" of human existence, and their view of the redundancy of humanity can permeate AI for education. We also have to see whether universities are at risk of altering learning and teaching, blinded by the hype and glamour of advanced computing solutions. To answer these questions and understand what can change our common learning futures, we have to look at some major developments in higher education and see how these transformations strengthen or weaken Academia's capacity to make informed and wise choices for students and the advancement of knowledge. This is vital, especially as we see the rapid and vast adoption of edtech as a panacea; there is a generalised belief that the only solution we need for teaching and learning, and for all the problems now confronting universities, is a more rapid and general colonisation of Academia by edtech.
The name usually associated with this tendency is "automation bias," and we find its manifestations in the public policies guiding public health, education, law enforcement and police reforms, the regulation of financial services, and others. "Automation bias" describes the propensity of humans to trust computers too much, often beyond their own judgement. It is the common occurrence of blindly trusting machines just because we know that they are intelligent; they have no human fatigue, no distractions, no emotions. Computers operate with clear and objective data, and work with scientifically designed processes that aggregate all information for optimal conclusions, which are delivered to us, imperfect humans susceptible to errors. Machines, we hear so often, are the key to progress, and become not only better at what they do but ever more intelligent. The Atlantic presented a story about automation bias and what happens when decision-making depends on computers and humans trust them blindly. In "All Can Be Lost: The Risk of Putting Our Knowledge in the Hands of Machines," Nicholas Carr presents what happened when computers sent wrong data and flight pilots trusted them blindly: tragedies where the uncritical belief that machines cannot be mistaken led to plane crashes in which hundreds of innocent people were killed. Carr, the New York Times best-selling author of The Shallows, presents vivid examples of how automation changes both the task and us, the users. He warns that "Automation remakes both work and worker. When people tackle a task with the aid of computers, they often fall victim to a pair of cognitive ailments, automation complacency and automation bias" (Carr, 2014, p. 67). "Automation complacency" drives us towards a false sense of security and comfort, based on the belief that computing systems can deal with tasks that should be completed by us. Too many disasters reveal how important it is to remain concerned about automation bias and complacency: aircraft have crashed, ships have run aground, and many innocent people have lost their lives. It is important to realise that overconfidence in technology – no matter how advanced it is, and how aggressively it is presented as "superior to humans" – should also be looked at from the perspective of the disasters that did not happen. Unthinkable tragedies were avoided when one operator decided that computers might be wrong and technology is not always perfect. Such is the case of Stanislav Petrov, "the man who saved the world."
His story should be more widely known, as it is a lesson on the complex relationship of humans with technology. Stanislav Petrov was an officer in the Soviet Air Defence Forces, and in the early morning of 26 September 1983 he was on shift as the duty officer at Serpukhov-15, a secret command centre outside Moscow that housed the Soviet Union's missile-attack early-warning system. A few hours into his shift, Lt. Col. Petrov had computer screens indicating that an American intercontinental ballistic missile (ICBM) had been launched and was about to hit the Soviet Union. One after another, the computers identified a total of five missiles launched against the Soviet Union. This was happening at an especially tense time in the Cold War, when the Soviets had mistakenly shot down a Korean Air Lines commercial flight, killing 269 people, including an American congressman; it was the time when Ronald Reagan labelled the Soviet Union "an evil empire." David Hoffman reported this extraordinary event in an article published in 1999 by the Washington Post:

Petrov's role was to evaluate the incoming data. At first, the satellite reported that one missile had been launched – then another, and another. Soon, the system was "roaring" . . . Despite the electronic evidence, Petrov decided – and advised the others – that the satellite alert was a false alarm, a call that may have averted a nuclear holocaust.
(Hoffman, 1999, p. A19)

The world was lucky. A decision was made against the machine by an officer with a mind educated to evaluate and carefully consider information, even when it came from the most advanced technology used by the military of his time. It is an illusion to believe that advancements in technology make a similar situation impossible; we are more exposed than ever before, and we have new existential risks at stake.
In a world in crisis we have many reasons to emulate the type of education that builds the capacity to constantly consider the possible limits of technology. Unfortunately, as we will see in the following pages, the most influential forces in education stand far from this position, adopting the Silicon Valley ideology and mythology with little or no inquiry. The story of "the man who saved the world" is also proof that total trust in technology is a dangerous comfort, as old as the Trojans, who projected their wishes onto the huge, coarsely built wooden horse and had no curiosity to examine it carefully and think about it before opening the gates to take it in. It may be a coincidence that programmers call a "Trojan (horse)" the tricky malware designed to look like commonly used, harmless software in order to gain access to a computer and control, damage, or steal information. The power of this metaphor was not missed in the programming world, but it was diluted to a shallow reading, missing the importance of keeping an alert and critical mind. For the engineers, the solution for a "Trojan" is just more code: protecting the system against malware with another piece of software. For the forgotten hero of the Cold War, the solution was to think about whether America would really send only five missiles in an attack against the Soviet Union, and his thinking saved the world; the machine came close to obliterating us. There is no reason to be dramatic – beyond the prospect of a devout technophile being in charge on that September day in 1983 at the secret military centre – as we simply have to contemplate the lesson of these events. AI is part of human progress only if we continuously maintain the aim to critique and understand how it works, who controls it, and what it involves. Moreover, we have to find ways to avoid the deliberate tendency of Silicon Valley to find solutions for problems that do not exist, especially when they create real-life debacles and human misery.
In education, the temptation to replace a difficult and lengthy process with a technological trifle is very attractive for teachers, and even more so for those who sell such solutions. The global edtech market was valued at around US$85 billion in 2021, and it is estimated that it will reach more than US$200 billion in 2027. At this moment, edtech venture capital firms are raising billions in funding at speed; in 2021, a Silicon Valley venture capital firm on the edtech market named Owl Ventures secured US$1 billion in new funds. Many similar examples show the very high and understandable motivation to secure a part of the edtech market. However, there are also strong ideological reasons that make schools and universities very attractive for corporations and investors focused on AI solutions.
It is not just the media, technological corporations, or business groups cultivating a naive view of data and of how AI actually works in education and society. A significant example is provided by Andreas Schleicher, the Director of the OECD Directorate for Education and Skills, who underlined in the introduction of an OECD report with serious weight on education policies and institutional changes: "While technology is ethically neutral, it will always be in the hands of educators who are not neutral. The real risks do not come from AI but from the consequences of its application" (OECD, 2021, p. 4). This is the OECD's idea about how we can radically reimagine what teaching and learning look like when powered by digital technology. The idea that "technology is ethically neutral" is not only absurd but also in direct contradiction with an impressive amount of research and a consistent literature showing the exact opposite. Previous paragraphs in this chapter explain succinctly why the way data is collected for AI systems renders absurd the claim that technology is ethically neutral. The use of AI in any system that involves an owner of technology applied to something – especially to students and teachers – involves an ethical dimension.
The literature provides extraordinarily detailed examples from experts in the field of programming and AI which show that technology is often a space of ethical disasters. For example, Joy Buolamwini, a researcher in the MIT Media Lab's Civic Media group, and Timnit Gebru, a researcher working for Microsoft, published a paper that demonstrates that "machine learning algorithms can discriminate based on classes like race and gender" and shows how "bias present in automated facial analysis algorithms and datasets" discriminates based on the colour of skin and gender, calling for work to secure "algorithmic fairness" (Buolamwini & Gebru, 2018). There are just too many shocking examples of AI algorithms opening ethical minefields, with results of atrocious discrimination, profiling, and mistakes with real-life consequences. We can remember here the example of Amazon's Rekognition software, which identified as criminals most black members of the US Congress. Or the case of Amazon's AI recruiting tool that taught itself that male candidates are preferable to women, and automatically rejected or downgraded resumes that included the word "women" (e.g. women's rowing club) or came from all-women's colleges. Amazon tried to improve the system and in the end dropped the entire project, when it became clear that new or improved algorithms offered no guarantee of an AI system that would not end up "in the same misogynistic spot." We also have the well-known story of Tay, the AI Twitter chatbot that used machine learning to develop "conversational understanding." The TayTweets account was taken offline as it soon became a source of racist, misogynist, and absurd messages, citing Hitler and relentlessly promoting Trump and his agenda at that time. Tay was never reinstated, and it still presents the lesson that conversation involves much more than technical aspects. There are many other significant examples of technology creating serious ethical debates and debacles. The OECD director does not stop at saying that technology is pure and clinical, but also explains where we should look for potential risks: teachers. Andreas Schleicher makes the point that "the real risks do not come from AI," but from the fact that this magical technology is placed "in the hands of educators who are not neutral." Here we have, stated again in a report closely watched and followed by many decision makers in education across the world, the idea that the only problem with AI is that humans ruin it: the imperfect, flawed, and biased human teachers impair the perfect AI systems.
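The kind of audit Buolamwini and Gebru performed is, at its core, conceptually simple: instead of reporting one overall accuracy figure for a model, disaggregate the error rate by demographic subgroup and compare. A minimal sketch of such a disaggregated evaluation follows; the records and group labels are invented placeholders, not the benchmark used in their study:

```python
# A minimal sketch of a disaggregated accuracy audit in the spirit of
# "Gender Shades": error rates are reported per demographic subgroup
# rather than as a single aggregate score. All records below are
# invented placeholders, not the actual benchmark data.
from collections import defaultdict

# (true_label, predicted_label, subgroup) for each test image
predictions = [
    ("female", "male",   "darker-skinned female"),
    ("female", "female", "darker-skinned female"),
    ("female", "male",   "darker-skinned female"),
    ("male",   "male",   "lighter-skinned male"),
    ("male",   "male",   "lighter-skinned male"),
    ("male",   "male",   "lighter-skinned male"),
]

totals: dict[str, int] = defaultdict(int)
errors: dict[str, int] = defaultdict(int)
for truth, predicted, subgroup in predictions:
    totals[subgroup] += 1
    if predicted != truth:
        errors[subgroup] += 1

for subgroup, seen in totals.items():
    print(f"{subgroup}: error rate {errors[subgroup] / seen:.0%}")
# The aggregate score over all six records (2 errors in 6, or 33%) hides
# the fact that one subgroup carries every single mistake -- the kind of
# disparity a single headline accuracy number conceals.
```

A model audited this way can no longer hide a severe failure rate for one subgroup behind a flattering overall average, which is the disparity the paper exposed in commercial systems.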
It is clear that human beings are always susceptible to making mistakes and to being biased, or even prejudiced. The difference is that when a human bias (which knowingly or unknowingly shapes the choices of engineers) moves into AI, the effects are completely different: it can potentially affect a much larger group of people and stay hidden for a long time in the "black box" of AI algorithms. AI can also gain bias when new data is taken in for machine-learning processes, and it can move fast to extreme positions of bias and prejudice (as happened with Tay, the chatbot). There is the solution of improving algorithms, which is also well covered by research and also full of traps; the argument is that in some symbolic spaces where important decisions are made – including education – AI should not be considered a replacement for human decisions and presence. Sometimes we may even go as far as leaving technology aside for some human, not necessarily efficient, moments.
There are some intriguing details about what the Director of the OECD Directorate for Education and Skills said in his opening essay in 2021. First, OECD is an acronym standing for the Organisation for Economic Co-operation and Development, an intergovernmental economic organisation with 38 member countries. Why is an economic organisation acting like an expert in education – and not only in economics, but on all aspects of education? How did we get here? Understanding this helps us see how AI will determine education and learning futures, and what choices stand ahead. For this, it is important to look back at some key moments in recent history, and also to see how the American perspective on the economy, society, and education influenced the rest of the world.

Notes
1. NSCAI. (2021). Final report. The National Security Commission on Artificial Intelligence. www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf
2. Katz, Y. (2020). Artificial whiteness: Politics and ideology in artificial intelligence. Columbia
University Press.
3. MIT Media Lab. (25 January 2016). Marvin Minsky, "father of artificial intelligence," dies at
88. https://news.mit.edu/2016/marvin-minsky-obituary-0125
4. Stonier, T. (1992). The evolution of machine intelligence. In Beyond information.
Springer. https://doi.org/10.1007/978-1-4471-1835-0_6
5. Butz, M. V. (2021). Towards strong AI. KI – Künstliche Intelligenz, 35(1), 91–101.
https://doi.org/10.1007/s13218-021-00705-x
6. Neisser, U., Boodoo, G., Bouchard Jr, T. J., Boykin, A. W., Brody, N., Ceci, S. J.,
Halpern, D. F., Loehlin, J. C., Perloff, R., Sternberg, R. J., & Urbina, S. (1996).
Intelligence: Knowns and unknowns. American Psychologist, 51(2), 77–101. https://
doi.org/10.1037/0003-066X.51.2.77
7. OECD. (2019). Artificial intelligence in society. OECD Publishing. https://dx.doi.
org/10.1787/eedfee77-en.
8. Sejnowski, T. J. (2018). The deep learning revolution. The MIT Press.
Imaginations, Education, and the American Dream 51

9. Bartoletti, I. (2020). An artificial revolution: On power, politics and AI. The Indigo Press.
10. Pasquale, F. (2015). The black box society. The secret algorithms that control money and infor-
mation. Harvard University Press.
11. Gitelman, L. (2013). “Raw data” is an oxymoron. The MIT Press.
12. Röösli, E., Bozkurt, S., & Hernandez-Boussard, T. (2022). Peeking into a black box,
the fairness and generalizability of a MIMIC-III benchmarking model. Scientific Data,
9(1), 24. https://doi.org/10.1038/s41597-021-01110-7
13. Nagle, T., Redman, T., & Sammon, D. (2017, September 11). Only 3% of companies’
data meets basic quality standards. Harvard Business Review. https://hbr.org/2017/09/
only-3-of-companies-data-meets-basic-quality-standards
14. The Upfront Summit 2017 was a prominent technology event for hundreds of top
American investors, representatives of startups, and corporate executives. The event
started on 31 January at the GRAMMY Museum in Los Angeles.
15. Hutson, M. (2018). Has artificial intelligence become alchemy? Science, 360(6388), 478. https://doi.org/10.1126/science.360.6388.478
16. Klingler, W. (2017). Silicon Valley’s radical machine cult. Vice. www.vice.com/en/
article/kz7jem/silicon-valley-digitalism-machine-religion-artificial-intelligence-
christianity-singularity-google-facebook-cult
17. Warofka, A. (2018, November 5). An independent assessment of the human rights
impact of Facebook in Myanmar. Meta. https://about.fb.com/news/2018/11/
myanmar-hria/
18. Cohen, S. B. (2019, November 21). Sacha Baron Cohen’s keynote address at ADL’s 2019
never is now summit on Anti-Semitism and hate. Remarks by Sacha Baron Cohen, Recipi-
ent of ADL’s International Leadership Award. www.adl.org/news/article/sacha-baron-
cohens-keynote-address-at-adls-2019-never-is-now-summit-on-anti-semitism
19. Rushkoff, D. (2018, December 12). The anti-human religion of Silicon Valley.
Medium. https://medium.com/team-human/the-anti-human-religion-of-silicon-
valley-ac37d5528683
20. Daunton, N. (2021, November 24). Why Prince William is wrong to blame habitat loss
on population growth in Africa. Euronews. www.euronews.com/green/2021/11/24/
why-prince-william-is-wrong-to-blame-habitat-loss-on-population-growth-in-africa
21. Mohamed, E. (2021, November 30). Experts critique Prince William’s ideas on Africa
population. AlJazeera. www.aljazeera.com/news/2021/11/30/experts-critique-
prince-williams-ideas-on-africa-population
22. Meadows, D. H., Meadows, D. L., Randers, J., & Behrens III, W. W. (1972). The limits
to growth; A report for the club of Rome’s project on the predicament of mankind. Universe
Books.
23. Passell, P., Roberts, M., & Ross, L. (1972, April 2). The limits to growth. The New
York Times.
24. Morozov, E. (2013). To save everything, click here: The folly of technological solutionism.
PublicAffairs.
25. Minsky, M. (1992). Alienable rights. The MIT Press. https://web.media.mit.
edu/~minsky/papers/Alienable%20Rights.html
26. Lanier, J. (2013). Who owns the future? Simon & Schuster.
27. Minsky, M. (1994). Will robots inherit the earth? Scientific American, 271(4), 108–113. https://doi.org/10.1038/scientificamerican1094-108
28. Black, E. (2001). IBM and the Holocaust: The strategic alliance between Nazi Germany and
America’s most powerful corporation. Crown Publishers.
29. Barbrook, R., & Cameron, A. (1996). The Californian ideology. Science as Culture,
6(1), 44–72. https://doi.org/10.1080/09505439609526455
30. Carr, N. (2013, November). All can be lost: The risk of putting our knowledge in the
hands of machines. The Atlantic. www.theatlantic.com/magazine/archive/2013/11/
the-great-forgetting/309516/
52 Education, AI, and Ideology

31. Carr, N. G. (2014). The glass cage: Automation and us. W.W. Norton & Company.
32. Hoffman, D. (10 February 1999). I had a funny feeling in my gut. Washington Post
Foreign Service. www.washingtonpost.com/wp-srv/inatl/longterm/coldwar/shatter
021099b.htm
33. OECD. (2021). OECD digital education outlook 2021. https://doi.org/10.1787/589b283f-en
34. Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, Proceedings of Machine Learning Research. https://proceedings.mlr.press/v81/buolamwini18a.html
3
THE NARRATIVE CONSTRUCTION
OF AI

To understand why an economic organisation is claiming expertise in education, and setting policies and practice in classrooms and universities across the world, we have to look at some key moments for education and for AI in its modern form. There is a rich literature imagining AI long before the Dartmouth Summer Research Project on Artificial Intelligence in 1956. Homer mentions in the Iliad and in the Odyssey the story of precious metal dogs with agile minds, and the mythical Hephaestus, who had automated servants made of gold: machines with knowledge and intelligence. The most significant power of AI is to capture and stir our imaginations, including what Sheila Jasanoff and Sang-Hyun Kim defined as the sociotechnical imaginary: "collectively held, institutionally stabilised, and publicly performed visions of desirable futures, animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in science and technology" (Jasanoff & Kim, 2015, p. 4). The imaginative power of AI is especially visible in media stories, which have a tendency to exaggerate the advancement of technology. This serves the narrative of extraordinary robots that are already serving or replacing us, but it remains far from reality. In the United Kingdom, a Select Committee on Artificial Intelligence was appointed by the House of Lords "to consider the economic, ethical and social implications of advances in artificial intelligence." The committee found that

The representation of artificial intelligence in popular culture is lightyears away from the often more complex and mundane reality. Based on representations in popular culture and the media, the non-specialist would be forgiven for picturing AI as a humanoid robot (with or without murderous intentions), or at the very least a highly intelligent, disembodied voice able to assist seamlessly with a range of tasks . . . this is not a true reflection of its present capability, and grappling with the pervasive yet often opaque nature of artificial intelligence is becoming increasingly necessary for an informed society.
(Select Committee on Artificial Intelligence, p. 22)

For example, in 2018 CNBC published an article titled "A.I. will be 'billions of times' smarter than humans and man needs to merge with it, expert says" (Kharpal, 2018); it ends with the note that "AI will turn us into 'superhuman workers.'"
The power of AI to crystallise new visions, to seduce our imagination, and to blur the boundaries of real possibilities has been clear since it was born. The story of the birth of AI is retold in Dark Hero of the Information Age: In Search of Norbert Wiener, the Father of Cybernetics, a book published in 2005 by Flo Conway and Jim Siegelman. Those moments are narrated by Heinz von Foerster, a reputable specialist in physics and one of the most influential pioneers of cybernetics. He established the Biological Computer Laboratory at the University of Illinois in the early 1960s and was actively looking to secure research funding at the same time as Marvin Minsky was attracting the attention of the US military with his new AI Lab at MIT. Work on AI soon found how generous funding from the Pentagon can be, especially as AI became "the new buzzword of America's scientific bureaucracy." Heinz von Foerster tried to clarify that both labs worked in cybernetics, remembering: "I talked with these people again and again, and I said, 'Look, you misunderstand the term,'" he said of AI.

They said, "No, no, no. We know exactly what we are funding here. It's intelligence!" . . . At the University of Illinois we had a big program which was not called artificial intelligence. It was partly called cybernetics, partly cognitive studies, but I was told everywhere, "Look, Heinz, as long as you are not interested in intelligence we can't fund you."
(Conway & Siegelman, 2005, p. 261)

There may be a funny side to this story, showing how the military were convinced that someone had found a safe source of intelligence they could buy and use. It is also concerning to see that the best-funded military force in the history of the world had such a simplistic view of technology and of what computers can – and cannot – do. There is also the dissonance that was at the core of von Foerster's failed attempts to help donors understand some basic facts about "intelligence" and computing systems. It is probably ironic that von Foerster managed to open his Biological Computer Laboratory (BCL) at Illinois with funding from the US Air Force. In fact, this common source of funding for research in new technologies is not a coincidence, and we should remember that new technologies, especially AI, were all born as military research and applications; this impacts their nature and how they shape our world and imaginations. As we will see in the following chapter, these roots may explain the prevalence of authoritarian, dystopian, or anti-human imaginaries within Silicon Valley and edtech.
After he moved from Austria to the United States, von Foerster was invited to participate in a series of conferences sponsored by the Josiah Macy Foundation, where cybernetics and information theory were being explored by scientists like Norbert Wiener, Margaret Mead, John von Neumann, and Claude Shannon. Here he engaged in what is possibly one of the most important debates of our century, on what information is and how we can define it. Heinz von Foerster was a profound thinker and a scientist familiar with the philosophy of language; he was not only inspired by the intellectual life of Vienna but was also related to Wittgenstein, the most significant philosopher of language and meaning. This is probably why von Foerster disagreed with Claude Shannon about what can be defined as information. Conway and Siegelman note that he argued that

information, even in the technical sense, could not be severed from meaning without horrendous consequences for human understanding. "I complained about the use of the word 'information' in situations where there was no information at all, where they were just passing on signals," von Foerster remembered. "I wanted to call the whole of what they called information theory signal theory, because information was not yet there. There were 'beep beeps' but that was all, no information. The moment one transforms that set of signals into other signals our brains can make an understanding of, then information is born! It's not in the beeps."
(Conway & Siegelman, 2005, p. 262)

However, the solution for the "beep beeps" came from the same group of scholars. It was defined by one of von Foerster's colleagues, Claude Shannon, who is considered "the father of information theory." Shannon's revolutionary contributions go beyond computing, information theory, and cybernetics. It is not only that his ideas greatly influenced our intellectual and social lives; they changed the world. Marking his death, an anonymous obituary published in The Times on 12 March 2001 notes that Claude Shannon was

a playful genius who invented the bit, separated the medium from the
message, and laid the foundations for all digital communications. . . . [He]
single-handedly laid down the general rules of modern information theory,
creating the mathematical foundations for a technical revolution. Without
his clarity of thought and sustained ability to work his way through intrac-
table problems, such advances as e-mail and the World Wide Web would
not have been possible.
(James, 2009)
In this succinct note we see, briefly suggested, the extraordinary revolution operated by Shannon, who had the idea that solved the problem of the different meanings that can determine how information is measured and read. Katherine Hayles offers a clear explanation of Shannon's revolutionary idea, which opened the door to the information society and to new philosophical currents such as postmodernism. Hayles notes that Claude Shannon realised that the main problem for a new information theory and technological advancement was to find reliable forms of quantification: the problem was that a new theoretical framework

had to be able to account for the information transmitted through natural language, and the notorious capacity of words to mean different things in different contexts seemed to pose an insurmountable barrier. Shannon cut through this Gordian knot by declaring that the quantity he called 'information' had nothing to do with meaning. Rather, he defined it as a function of probability, which means that the information content of a message cannot be calculated in absolute terms, only with reference to other possible messages that may have been sent. In effect, Shannon solved the problem of how to quantify information by defining it internally through relational differences between elements of a message ensemble, rather than externally through its relation to the context that invests it with a particular meaning. It is this inward-turning definition that allows the information content of a message to be always the same, regardless of the context into which it is inserted. Thus the first, and perhaps the most crucial, move in the information revolution was to separate text from context. Without this stratagem, information technology as we know it could not have come into being.
(Hayles, 1987, p. 25)

Shannon found the solution to separate the medium from the message, the text
from context, the information from meaning; he found that information can be
evaluated and read in separation from meanings, within a determined system that
remains constant. This changed everything.
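The definition Hayles describes can be written down in two lines; in Shannon's framework, the information carried by a message depends only on how probable it is among the possible alternatives, never on what it means:

```latex
% Shannon's quantification of information: meaning plays no part.
% Self-information of a message x with probability p(x), in bits:
I(x) = -\log_2 p(x)
% Average information (entropy) of a source X over its possible messages:
H(X) = -\sum_{x \in X} p(x)\,\log_2 p(x)
```

A fair coin toss therefore carries exactly one bit, whether it settles a trivial wager or a question of life and death; the measure responds to probabilities alone, never to what hangs on the outcome.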
Heinz von Foerster was completely against this new definition of information as an entity separated from meanings, and against this form of decontextualisation. He argued that when we separate information from meaning, we may solve a technical problem, but we inflict severe damage on human understanding, with terrifying consequences for our idea of humanity. Decades later, Claude Shannon noted that what he created at that time was a theory of communication rather than a theory of information, admitting that he "thought that communication is a matter of getting bits from here to here, whether they're part of the Bible or just which way a coin is tossed." Shannon basically confirmed, in the late 1980s, that von Foerster was right in his critique of the new theory of information. By then, however, there was already too little interest in these nuances, and cybernetics had long used the approach proposed by Claude Shannon.
There are significant implications to this radical reconsideration of the role and place of contextual meanings, and to the unprecedented separation of information from meaning, of text from context. The change opened new possibilities for information technology, including the evolution of AI, but it stays as a subversive energy that alters human values, culture and social arrangements, our views of the world, and our imaginations. Hayles notes that the impact of decontextualisation was immediate and dramatic, using the real-life example of genetic engineering. Humanity was for millennia defined by a coherent set of relationships, keeping genetic sources and children in a continuum and a clear context. In vitro fertilisation (IVF) techniques decontextualise human reproduction in a process where eggs can be collected from a woman, frozen, transported great distances, and implanted in a different person who is unrelated and can be completely unknown to the donor, with the following stages leading to the birth and development of another human being situated in a new and possibly entirely different culture, society, and economy. In this new contextual relationship, birth can be separated from biological origins: "As the formerly integral connection between the genetic text contained in an unfertilized egg and its biological context in the mother is disrupted, traditional definitions of 'birth,' 'child,' and 'mother' all have to be re-examined" (Hayles, 1987, p. 27). Birth, child, mother, and father can stand decontextualised from the original biological context, in new personal, legal, and social definitions. This involves an immediate and radical rethink of the laws governing human reproduction and parenthood, and a new landscape of values and political discourse related to this process.
In this radical reconsideration of the relationship between information and context, text and meaning, knowledge and the need for coherence, we can start to see how Claude Shannon changed the world. It is a revolution associated not only with the wonderful advancement of science and technology but also with extraordinarily toxic effects for areas such as education and medicine, psychology, and cultural production. These areas are less explored and, as with any other major reform, Shannon's revolution created some dark spaces where the seeds of destruction found ideal environments for malignant growth.
Higher education was dramatically changed by this moment of schism between text and context. There are numerous examples where a solution is defined and relevant internally, within its own system, while the external context is ignored. We can take the example of massive open online courses (MOOCs), a case of ridiculous technosolutionism that engaged universities in a frantic competition to spend all possible resources on the new fad. The appeal of MOOCs was that these online courses "appealed to broader narratives such as 'education is broken' and the dominant Silicon Valley narrative" (Weller, 2015). More specifically, MOOCs were built on a Silicon Valley narrative saying that education is broken and only technology can fix it, and on generous ideas such as the claim that these courses offer "free higher education for all." Prominent journals around the world competed to publish editorials on the new utopia prepared by edtech for universities. The Economist published in 2012 an op-ed clarifying not only that this is the future of higher education, but that this future is now open for all, "especially in poor countries" (The Economist, 2012). The New York Times published in the same year an editorial titled "The Year of the MOOC" (Pappano, 2012), while David Brooks and Thomas Friedman wrote enthusiastically about the MOOC revolution and the "tsunami" that would dramatically change (and fix) universities (Brooks, 2012; Friedman, 2013). Academics and university leaders reluctant to share the enthusiasm for the new panacea were marginalised or excluded; The Chronicle of Higher Education observed in September 2012 that "the University of Virginia board's decision to dismiss Teresa A. Sullivan as president in June illustrated the pressure on universities to strike MOOC deals quickly to keep up with peer institutions" (Azevedo, 2012). In Australia, the fad went a step further; at that time, a vice chancellor of a regional university presented the future of his university:

MOOCs merely confirm what we've known for years – that the most basic
currency of universities, information, is now more or less valueless, so universities
might as well give it away. . . . The freemium strategy is particularly
well suited to the developing world where small financial margins can be
combined with mass scale.
(Barber, 2013)¹⁴

After years of gullible belief that MOOCs were the silver bullet for higher education,
it became clear that these "open" (meaning free) courses play a much
less important role in the life, and budgets, of universities. Most students taking
MOOCs are already graduates, and the majority are not interested in enrolling in new
courses (Perna et al., 2013)¹⁵.

The people who have taken up these opportunities are not the needy of
the world – noted Fiona M. Hollands of Columbia University's Teachers
College, on the margins of extensive research and reports – also noting
that [MOOCs] are not democratizing education. They are making courses
widely available, but the wrong crowd is showing up.
(Hollands & Tirthali, 2014)¹⁶

The idea that universities would create free courses to be taken at a "mass
scale" by poor people living in the slums of Manila, the favelas of Brazil, or
even poor neighbourhoods in cities like Detroit or Washington, DC reveals much
more than a disconnect from reality. These utopian expectations do not just
show that many decision makers in education have no idea about the life of
the poor; they reflect the adoption of what Claude Shannon proposed and how
it is applied in education. The context of the external world is simply irrelevant;
when the problem is defined internally, in line with the edtech utopia, the interests of higher education,
and the favourite meta-narratives of Silicon Valley, the MOOC solution looks
optimal. It was the new solution, the panacea that would make universities
sell their land and buildings to developers and let anyone interested in higher
education get a degree in whatever they liked. When it became obvious that these
promises simply failed, and many realised that MOOCs were just a new label for
what universities were already doing in other ways¹⁷ in collaboration with Silicon
Valley, those who had reacted most fanatically became vocal purveyors of
a superior irony about the past fad. The blockchain buzzword was used for a while
by executives in higher education, until AI was rediscovered as the solution to all
our problems.
We can look now at another solution, adopted by most
universities with the COVID-19 pandemic. As most courses had to be delivered
online, higher education had a new motivation to find solutions for what
is labelled "academic integrity"; in other words, to hinder students' attempts
at plagiarism. The poisonous idea of treating students as potential thieves, placing
them under surveillance, and threatening them with oppressive detection tools
for non-compliance became even more appealing for university administrators. If
we take just the example of the extensive adoption of proctoring solutions
for online exams, we immediately see that their application discriminates openly
against students living in poverty. They discriminate against people with special
needs. The proctoring software used by most universities in the English-speaking
world uses AI to flag any instance of "abnormal behaviour" as
plagiarism or intended academic dishonesty. The abnormal behaviour that is
flagged includes instances where a student stares or looks up, fidgets, or has
a person pass through the room while the camera is used for surveillance in the name
of "academic integrity." Also, any student under stress or with a mental health condition
such as anxiety or depression is flagged by the AI as a potential cheater. Students who
are not rich enough to have their own room or office, and who live with siblings
or relatives who may walk by at the time of the exam, are also flagged as plagiarists.
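
To see how mechanical this logic is, consider a deliberately simplified sketch of a rule-based flagging routine; the event names and the threshold below are illustrative assumptions for this book's argument, not any vendor's actual rules or code.

# Hypothetical caricature of AI proctoring logic; event labels and the
# 0.5 cut-off are invented for illustration, not a vendor's real rules.
SUSPICIOUS_EVENTS = {"gaze_away", "looked_up", "fidgeting", "second_face_in_frame"}

def abnormality_score(events):
    """Share of observed events the system deems 'abnormal'."""
    if not events:
        return 0.0
    flagged = sum(1 for e in events if e in SUSPICIOUS_EVENTS)
    return flagged / len(events)

# A student sharing a room: a sibling walks by, and the student glances up.
session = ["typing", "looked_up", "second_face_in_frame", "fidgeting", "typing"]
if abnormality_score(session) > 0.5:  # an arbitrary threshold decides the outcome
    print("FLAGGED: possible academic dishonesty")

Nothing in such a routine can distinguish anxiety or a crowded household from cheating; the student's context never enters the calculation.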
My worst experience with a widely used software package offering "solutions
to promote academic integrity" was at a reputable university in Australia, where
I was working. A professor found an abnormal number of students plagiarising
– or so he said – and I was called to investigate the situation. Looking at the
"plagiarism reports" and investigating possible sources of plagiarism revealed that
the flagged instances of plagiarism were not even connected with the field of the
assignment; it was a case of clumsy writing, use of clichés (especially for students
with English as a second language), and in some cases an overuse of citations. The
professor was unmoved; he had a score indicating to him that these students
had plagiarised. It was impossible to change his mind, and the Academic Board left
the decision unchanged. A student wrote an e-mail detailing why there was no case to
suggest that she was stealing the work of others, but this did not matter in the end
and all students failed that exam. The author of that letter had a nervous breakdown
and became another possible number in the dropout statistics, leaving her university studies
for "personal reasons." I left that university before it became clear whether the student
was able to overcome the impact of a clear and absurd injustice. This exemplifies
again how fragmented and decontextualised academic reality and practice are.
It also reveals the power of the machine: if the software shows a score, we
believe it without a thought for the quality of the data used or the quality of
the analysis within the machine.
Edtech companies such as ProctorU claim that the solution is a "human-centered
proctoring policy," where a trained proctor works with the AI solution
to "truly prevent cheating." This is not what happens in real universities, where
real people operate. In courses with huge numbers of students (very appealing for
university budgets), there is no "working with the AI solution." Academics see
that they have access to AI, which is presented as the perfect solution for short and
fast answers, and this is how it will be used. In a field where workloads increase to
absurd levels, it is fanciful to believe that universities will pay "experts" to work
with AI and analyse in depth the instances where software flags possible cheating.
The reality is that a score is determined and two options are realistically adopted:
students above the limit fail, or the lecturer looks the other way, pretending
that nothing happened. It is devastating for an innocent student to be
wrongly accused of stealing (or of breaching academic integrity rules, if we use the
academic jargon). It can be disastrous and irreparable for students living in conditions
that are not optimal for the invasive surveillance of an AI system. Moreover,
this use artificially adds pressure to an online exam.
There are multiple arguments for banning this practice in universities, and I subscribe
unreservedly to at least one point, which is directly related to the relationship
between text and context: the adoption of ubiquitous surveillance as
an integral part of higher learning is poisonous for any educational project. If a
university is genuinely interested in solutions against plagiarism, a serious look
at the root causes is much more important than any software. Looking at students
as potential thieves, criminals who must be observed and scared all the time, is an
absurd approach for an educational endeavour. At the core of plagiarism deterrence
we also find a failure to clarify academic ideals and a lack of interest in clarifying
scholarly destinations for students. Academics and university administrators often fall
into this trap and forget that plagiarism is also a testimony to poor teaching,
or to a poor relationship between students and universities. Ultimately, surveillance
and threatening approaches are just stupid: the students most at risk are those
acting honestly. Those interested in plagiarising can simply use AI tools themselves
to paraphrase their essays to the point that no plagiarism detection software
currently used by universities will be able to flag the cheating. After many decades,
universities keep missing the lesson that intelligent students determined to plagiarise are
always a step ahead of their institutions; the solution is to build trust and to explain why
cheating ultimately works against the cheater, even when not caught.
A genuine and courageous conversation about the importance of learning for an
educated mind, prepared to face the new and unexpected challenges of graduate life,
and about the relative relevance of a grade, is far more important than the threat
of an AI-based plagiarism deterrent. These discussions do not happen in
universities; conferences and task force groups within academia find their relevance
inwardly, within the system, or remain performative exercises for an
illusory compliance. The meaning of education is lost; the wide external context
of higher education is not relevant for the constant and standardised measurement of
information on plagiarism.
The severance of a coherent connection between text and context revolutionised
not only informatics¹⁸ and cybernetics¹⁹, but also education and other significant
aspects of our social life. Protesting against the idea of separating meaning from
information, von Foerster also noted that

I complained about the use of the word "information" in situations where
there was no information at all, where they were just passing on signals . . .
I wanted to call the whole of what they called information theory signal
theory, because information was not yet there. There were "beep beeps"
but that was all, no information. The moment one transforms that set of
signals into other signals our brains can make an understanding of, then
information is born! It's not in the beeps.
(Conway & Siegelman, 2005, p. 262)²⁰

He warned us that Claude Shannon's revolution is devastating for humanity,
bringing horrendous consequences for human understanding and existence.
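
Von Foerster's complaint can be made concrete in a few lines of code: Shannon's measure quantifies the statistics of symbols, not their meaning. The sketch below is only an illustration of that point, using the standard entropy formula.

from collections import Counter
from math import log2

def shannon_entropy(message):
    """Average information per symbol, in bits, computed purely
    from symbol frequencies; meaning plays no role in the measure."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * log2(n / total) for n in counts.values())

meaningful = "the cat sat on the mat"
scrambled = meaningful[::-1]  # same symbols, reversed: the meaning is destroyed

print(shannon_entropy(meaningful))  # both lines print the identical value:
print(shannon_entropy(scrambled))   # the "beeps" carry the same entropy

A sentence and its scrambled twin carry exactly the same Shannon information; whatever is lost between them is precisely what the theory does not measure.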
We can look, for example, at current medical solutions for
depression and the risk of suicide. It is an important topic for anyone interested in
education and its context, in social policies, or simply in the contemporary problems
confronting our societies. In fact, depression and anxiety are mental health
conditions that have recorded a fast ascending trend over the last decade. Data
analysis from the World Health Organization (WHO) reveals that depression
now affects around 5% of adults globally (approximately 280 million people
in the world). Depression is a leading cause of disability worldwide and a major
source of loss in productivity. It is also associated with health problems and costs
for health systems across the world (WHO, 2021, September 13)²¹ and the irreparable
loss of suicides. The WHO estimates that over 700,000 people die by
suicide every year; it stands as the fourth leading cause of death among 15–29-year-olds
(WHO, 2021, June 17)²². There is a problem of quality and scarcity in international
data on depression, suicides, and suicidality, but the available data indicates
a worrying increase in cases. Experts in various countries note an ongoing rise in
cases of depressive and anxiety disorders, with an increased risk of associated
diseases (Santomauro et al., 2021)²³. These facts underline the urgent need to
properly address these conditions and to find the best possible solutions for treatment.
There is medical treatment for mild, moderate, and severe depression, but there
is a new problem: the Americanisation of psychiatry across the world. This is
where the revolution of de-contextualisation can be analysed in its corrosive and
devastating impact. The effect of separating text from context on the approach
to and treatment of mental health is surprisingly clear and worrying.
Psychologists and psychiatrists across the world classify and treat
depression based on an American manual, the "Diagnostic and Statistical
Manual of Mental Disorders" – or DSM. This manual, in its different updated
editions, lists psychiatric disorders and places them in clear taxonomies. These
classifications are linked to a list of possible medication and therapy approaches
for doctors in Berlin, Germany, or Vancouver, Canada; in Sydney, Australia, or
in Gdansk, Poland. Regardless of where doctors, therapists, and patients are,
the same manual is used, regardless of cultural and social contexts
or levels of education. In October 2021, France24 published an interesting and
rare analysis of the profound crisis of psychiatry in a medical system, focusing on
the French case. Marie-José Durieux, a children's psychiatrist at a Paris hospital,
explains in this article that this important field of medicine was once complex and
flexible in France, with depth and original approaches, until the American model
became dominant and erased all other perspectives:

We associated psychiatry with imaginative sciences like philosophy, psychoanalysis,
sociology and literature, and we pushed the field further. . . . In the
1980s, American ways of thinking and treatment methods were adopted
in France. French psychiatry, which was world-renowned, innovative and
pioneering, started little by little to go downhill because of America's
influence.
(Mazoue, 2021)²⁴

The French specialist identifies the colonisation of psychiatry and of
mental health treatment, and the re-storying of the relationship between
doctor and patient to fit an American manual. The problem goes deeper
than simply shifting the focus from meaning to medication, notes the French
specialist.
The instrumentalisation of these relationships starts with the adoption of the
DSM, which – notes Durieux – is "increasingly pushing professionals to resort
to medication" and "brainwashes early-career psychiatrists." The DSM manual
is a sum of taxonomies, of precise classifications associated with certain
types of medication. Here is an interesting effect, also highly relevant for education:
the role of the psychiatrist is mainly reduced to identifying the condition as
presented in the American manual and prescribing the medication usually taken
for that specific diagnosis. Of course, there are variations, but the adoption of
a certain treatment is not a matter of expert preference: if a condition is not
identified as it is described in the DSM, then it does not exist for a hospital, for
medical insurance, and for all other administrative systems. An alternative becomes de
facto impossible. In fact, medication is determined by the identification of one
condition indicated in the classification, as described by the manual. The context
of an individual's life, culture, personal circumstances at home and work, and the meanings
associated with them remain entirely disconnected from the DSM. This manual is
text separated from context, and it works as such.
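
The administrative logic described here can be caricatured in a few lines: a diagnostic code goes in, a default treatment comes out, and the patient's context is never an input to the function. The codes and mappings below are invented for illustration and are in no sense clinical guidance.

# Illustrative caricature only: codes and mappings are invented,
# not clinical guidance. The point is structural: context is not an input.
DSM_TO_TREATMENT = {
    "diagnosis_depression_moderate": "default_antidepressant",
    "diagnosis_anxiety_generalised": "default_anxiolytic",
}

def prescribe(diagnostic_code):
    # Note what is absent from the signature: housing, culture,
    # stressors, biography -- the context of an actual life.
    return DSM_TO_TREATMENT.get(diagnostic_code, "condition does not exist for the system")

print(prescribe("diagnosis_depression_moderate"))  # same output in Berlin, Sydney, or Gdansk
print(prescribe("homelessness"))  # context is not a recognised key

Whatever does not match a key in the taxonomy does not exist for the hospital, the insurer, or the administrative system, exactly as described above.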
To understand why this is a very abnormal way to think about human beings,
we can imagine a homeless individual diagnosed with depression while in hospital.
There is good medical care in the context of the hospital: safety and separation
from stressors such as domestic violence, poverty, and homelessness. In this artificial
context – a hospital functioning, say, in Italy – a doctor uses the American
manual to determine the condition of the patient and the possible treatment while
the stressors are removed. It makes very little sense. The context for that depression
is the extreme vulnerability and desperation associated with homelessness, and no
amount of medication can properly address this context. Ignoring it entirely is a
very strange idea. A treatment can work well in hospital care and therapeutic
approaches can help, but as soon as that patient is out of hospital and surrounded by
all the specific stressors that make the condition acute, the entire approach becomes
useless. Education operates the same way: the manual is provided by the OECD
and "treatments" are standardised and assessed with little interest in the local
context.
Creating solutions that follow a standardised model has been largely and
enthusiastically adopted in higher education; it is the new normal.
The same software solutions, the same learning management platforms,
the same gadgets, language, and edtech products are adopted by universities in
the United States and Europe, Canada and Australia, Singapore and the United
Arab Emirates, and so on. There is significant irony and shrewdness in calling a
standardised, rigid, and depersonalised process of learning and teaching "personalised
education."
The illusion that meanings and human life can be separated into perfectly quantifiable
variables arranged in a rigid taxonomy is an explicable extension of the logic
of eugenics, which leads to various classifications of human beings; of course,
it is dehumanising and dangerous for any project designed to help or educate
real people. This is a solution based on a perverted logic, serving the need for
efficiency and stable quantitative measurements, but it does not survive
in-depth scrutiny. The problem is not that Claude Shannon applied his ingenious
solution to the theory of information rather than to a specific theory of communication.
The main problem is that his idea became an integral part of fields
not suited to a separation of information from meaning. As we will detail,
this symbolic twist comes with a destructive and dehumanising effect for
universities.
The trick of separating the meaning from the message, moving it into an inward
and closed mechanism, was more than a pathway to postmodernism; it set the
landscape for decontextualising the human condition and placing meanings within
artificially constructed symbolic systems. This shift made it possible to have post-truth
as a distinct era for politics and mass media, and to see a counselor to the
US President describe false claims as "alternative facts."²⁵ The same President
of the United States told his followers in a public speech that "What you're
seeing and what you're reading is not what's happening" (Gajanan, 2018)²⁶, leading
many people to wonder whether we have a real-life instance of Orwell's dystopian
novel 1984. The postmodernist project is reaching new heights in real life in this
tumultuous first part of the 21st century. The text is dissociated from context
and citizens are openly invited to ignore the context of reality, as the internal
reference point – in this case the Trumpian ideology – is the only coherent and
relevant system.
Rethinking the way we interpret context and meaning fragmented the project
of higher education, limiting imaginations and promoting the ongoing erosion of
a deeply human coherence of personal, social, and cultural frames of reference for
learning and teaching. When profits became the main raison d'être for universities,
the internal dissonance became acute and educational ideals were broken
into multiple disconnected fragments. Some marketable labels for these splinters
hold the old titles, while others, such as micro-credentials, hold the promise of
innovation. In this new reality, data is infinitely more important than meaning,
and big data is the promised land for academia. Influential voices keep this
strange approach alive: for example, a New York Times editorial written in 2013
by David Brooks opened with this note: "If you asked me to describe the rising
philosophy of the day, I'd say it is data-ism" (Brooks, 2013)²⁷. Dataism is aggressively
promoted and justified from various positions; in 2016 Yuval Noah Harari,
a favourite author for Silicon Valley and a comfortable intellectual for Big Tech,
made the same argument in his famous book "Homo Deus: A Brief History of
Tomorrow" (Harari, 2016). The importance and power of Big Data
is not so much justified for human existence as it is promoted with cult-like
fervour by the tech elite, the few who know the ultimate Truth. This Truth is
partly hidden in the opaque algorithms managed by big tech, and those who
question it are ridiculed and – if necessary – marginalised and excluded. Harari
offers a glimpse into the antihuman ideology of Silicon Valley when he
finds that humans are obsolete, as their data processing capabilities are now vastly
exceeded by computers.
The tech aristocracy found these ideas charming and well aligned with their own
views: Mark Zuckerberg selected Harari for "A Year of Books," the "book club"
of his corporation, Facebook. Bill Gates wrote endorsements
for Harari's writings; Harari launched his first book published in the United States
with a public presentation organised at the Google headquarters.
The tech-elite enthusiasm for his ideas had the certainty of a new "end of history";
Harari was presented as the author capable of unpacking what really matters
for us all: a mix of capitalism and technology. This view had the same depth as
the previous announcement by Francis Fukuyama, who declared that we had
only boredom left ahead of us to fight, as liberal democracy and capitalism were
universally accepted and there was no realistic option for anything else.
It is important to look at another feature revealed by the story of
Heinz von Foerster and Claude Shannon: the extraordinary influence on the
world of debates, ideas, and solutions originating in the American context. Questions
and solutions born and debated in an American space such as Silicon Valley
were changing the entire world long before Google and other tech giants reformed
our ways of communication, politics, love, and imagination. This shows the power
of the American Dream; Silicon Valley built on its symbolic power, on the
carefully constructed narrative behind it.
The actor and comedian George Carlin once said that "it's called the American
Dream because you have to be asleep to believe it."²⁸ However, the American
Dream is a powerful utopia, a product of propaganda able to enlighten the
imagination of people around the world. The formula "American Dream"
was coined by James Truslow Adams, who devoted hundreds of pages to the concept
in his book "The Epic of America"; he notes that the American Dream is

that dream of a land in which life should be better and richer and fuller for
every man, with opportunity for each according to his ability or achievement.
It is a difficult dream for the European upper classes to interpret
adequately, and too many of us ourselves have grown weary and mistrustful
of it. It is not a dream of motor cars and high wages merely, but a dream of
a social order in which each man and each woman shall be able to attain to
the fullest stature of which they are innately capable, and be recognized by
others for what they are, regardless of the fortuitous circumstances of birth
or position.
(Adams, 1931, p. 404)²⁹

It was from the beginning an attractive utopia, luring immigrants from all over
the world to a land where the "common man" can achieve everything with hard
work and ingenuity, regardless of religion, race, nationality, or wealth. Millions
were lured by the narrative of a country where all opportunities are open to
those who want a better life. It is an irresistible utopia, and Adams knew
that his magic formula of the American Dream was not a matter for rational evaluation;
he noted in the same book that

The American dream – the belief in the value of the common man, and the
hope of opening every avenue of opportunity to him – was not a logical
concept of thought. Like every great thought that has stirred and advanced
humanity, it was a religious emotion, a great act of faith, a courageous leap
into the dark unknown.
(Adams, 1931, p. 198)
In 1998 I went to America, to Washington, DC, to finalise my doctoral dissertation,
and lived there for a few months. The university secured accommodation
and probably found a cheap solution in offering me the chance to live in a Benedictine
monastery in the capital city of America. It was this or a subtle joke,
as I soon found that Benedictine monks have "Ora et Labora" (pray and work)
as one of their guiding rules. I wasn't much inclined to pray, so I spent my time
working as hard as I could, observing the promised land. I believed at that
time that the American Dream was a key to a great society. I found later that
I wasn't at all the only one believing this myth; in 1996, the Smithsonian museum
included in its "Points of Entry" exhibition a quote that was very much aligned
with my reaction in discovering a different America from the one imagined. The
exhibition used a quote from an anonymous Italian migrant, from 1903:

I came to America because I heard the streets were paved with gold. When
I got here, I found out three things: first, the streets weren't paved with gold;
second, they weren't paved at all; and third, I was expected to pave them.
(Cited in Hoxhaj, 2015)³⁰

This is a great note to explain the experience of displacement and the gap
between the favourite narratives about a place and its reality.
I wasn’t as much impressed by my experiences on the East Coast of America as
I was moved by the state of decay from the poor, of intentional dehumanisation
and marginalisation of homeless and people living in extreme poverty there. I will
always remember the feeling that – beyond the ubiquitous and bizarre fascination
with Clinton’s afair with Monica Lewinsky – I was living in a place that was like
the ancient Rome, at the height of the Roman Empire. The feeling that people
living and working there cannot imagine something else to care about than the
imposing walls of the political centre of Washington, DC. Since then, spend-
ing there a hot and humid summer, a beautiful autumn, and an extremely cold
winter, I often remembered that feeling that America fnds normal to lead and
infuence the world. This is most often a well-intentioned impulse of Americans,
as the only possible model for humanity is the American model, with everything
that is associated with this concept. Exceptionalism is a keystone of the American
culture. The utopian metaphor of a “City upon the Hill,” as it was articulated by
John Winthrop on his 1630 sermon aboard the ship Arbella to the Massachusetts
Bay colonists, is part of the American Dream narrative. America was presented
by Winthrop as “a city upon a hill,” watched by the world as a guiding beacon
for a good future. Winthrop’s sermon planted the seeds to the widespread belief
in American imagination that the United States of America is God’s country
that is shining upon a hill. Far from being lost, the idea to “recapture,” “rebuild,”
or “fnd” the American Dream is an integral part of political and cultural dis-
course of presents America. The American Dream is a foundational myth that
will be always as powerful as it was when it was suggested in the 17th century and
articulated clearly in the 20th century. Of course, it is completely irrational to use
a cultural construct associated with equality and fairness when we consider the
long history of institutionalised racism, eugenic solutions, and totalitarian tendencies
in America; but this is probably the main strength of this cultural product.
American exceptionalism was not an accident, and the American Dream
did not become the story believed by hundreds of millions of people just because
it is a powerful and seductive utopia. The American model was carefully and
patiently packaged and promoted, using all new technical advancements for smart
and insidious propaganda. The first years of the 20th century were very successful
for Hollywood, long known as "The Dream Factory." These beginnings
also mark the time when the U.S. Commerce Department and its Bureau
of Foreign and Domestic Commerce started to provide assistance to Hollywood,
to aggressively promote silent movies, the most seductive productions of those
times. This explains how Hollywood later conquered the imaginary structures of
the world. The first and most obvious advantage was that these movies presented
and promoted an idyllic vision of American life, with specific products, which
became visible, appealing, and desirable for new markets. The intention of the American
government was not simply to restrict this important tool to its economic
relevance; the movie industry was from the very beginning a project to export
American ideals and way of life, to colonise other countries and cultures as peacefully
and efficiently as possible. This was not even considered a secret: in 1927, US
Commerce Secretary Herbert Hoover underlined in a speech that:

The motion picture is not solely a commercial venture; it is not solely an
agency of amusement and recreation; it is not solely a means through which
the world has gained a new and striking dramatic art; nor is it solely a real
and effective means of popular education. Beyond all this, it is a skilled and
potent purveyor between nations of intellectual ideas and national ideals.
(Hoover, 1927, pp. 291–296)³¹

The birth of the imperial imagination in its American version can be associated
with these years. It marks the beginning of the Americanisation of the world; it captured
dreams to promote certain ideas and the way of living America found
worth following. The way we dream and imagine defines who we are and
what we can become. There are ancient cultures that place a special emphasis on
dreams and the dream time, for a very good reason. For thousands of years, collective
wisdom has found that those who lose their dreams and imaginations become
lost, and that capturing dreams is in fact a capture of real and possible life. In Australia,
the Dreamtime is a complex concept with a convoluted and unfortunate history,
marked by violent colonisation, cultural and ethnic genocides, and the story of an
amazingly elaborate and diverse culture. A paper focused on this concept summarises
and explains the term: "In Australia, the Dreamtime and its variants signify
everything that was or remains aboriginal. Its currency encompasses scholarly
and popular discourse" (Wolfe, 1991, p. 199)³². It is a concept of cultural identity,
mythology, history, dreaming, and imagination for the most ancient living cultures
in the world.
The world adopted the American way of looking at and understanding the world,
the "intellectual and national ideals" of the United States, without serious consideration
of the power of these narratives and the seductive medium mastered
by Hollywood. In a review of a fascinating book on this topic, "Hollywood: The
Dream Factory," we read that the analysis of the relationship between Hollywood
and America finds

the movies and the lives of their makers caricatures of American patterns
in general: the same emphasis on power, the same anxiety, the same business
values and gambling spirit, the same "escapes" – only more so. Here she
draws on the Lynds, on Erich Fromm, and on other writers and suggests
that Hollywood, in its fundamental attitudes, tends toward totalitarianism.
(Riesman, 1951, p. 591)³³

In a different review we find another important aspect of debates that analyse in depth
the "dream factory," whether Hollywood or the new one, Silicon
Valley: "The picture of Hollywood which emerges from this book is a far from
pretty one but certainly worth having. It is internally consistent, and the vigorous
protests which have followed its publication have often naively confirmed
its author's observations" (Linton, 1951, p. 270)³⁴. Just as Hollywood's critics were
received with furious protests, those taking a critical position on Americanisation
met an equally furious counter-reaction. It is no surprise to find that Americanisation
remains a contested term, counterbalanced with the point that items
and symbols specific to American culture adopted globally – such as jeans, Mickey
Mouse, McDonald's, and Hollywood movies – simply work very
differently in their new environments. It is argued that the new cultural context
makes all the difference.³⁵ It is a false explanation. First and foremost, Americanisation
is much more than the simple consumption of a Big Mac or the watching of
American cartoons. The adoption of all these items and values definitely changes
the context of the adopters. Secondly, Americanisation was for many decades an
intentional project. If we take only the example of the movie industry, we see that
America has had a clear policy of expanding its cultural and axiological codes and its
economic influence across the world since the 1920s. Since then, even when European
countries tried to impose quotas on US movies and music as a practical way to
protect their own cultural identities, American institutions and embassies pushed
strongly against all forms of resistance. Education is a perfect example of the success
of the project of Americanisation, bringing together scholars and mass media,
economic mechanisms and institutions, subtle use of new technologies, and direct
propagandistic solutions. The United States of America is still influencing the
rest of the world, and any fad, cultural trend, or economic model widely popular in
America soon becomes a common occurrence across the world. The imperialist
imagination is – at least for now – extraordinarily successful, and immensely
damaging.
A comprehensive study³⁶ on economic opportunities in the United States
looked at the promise of the American Dream, exploring whether children will live
a better life than their parents. Analysing the evolution of data since 1940, the
authors of the study (from the US Census Bureau and Stanford University) found
that income mobility has fallen sharply, "primarily because of the growth in inequality."
For example, access to the best universities in the United States is limited
to a privileged minority: at 38 of the best US colleges, including five
in the Ivy League, more students come from the top 1% of the
income scale than from the entire bottom 60% (Chetty et al., 2020)³⁷. The report
on Harvard's class of 2021 concludes that, "like in previous years, the surveyed members
of Harvard's incoming class are largely white, straight, and wealthy," and only
58.8% of respondents say that they did not know of relatives who had attended
their university (Bishai & Lee, 2018)³⁸. More than one in six students at Harvard
reported that one or both parents attended their university. The imbalance is
painfully visible at a time when all celebrate the "massification of higher education,"
without saying which parts of the system have opened for the less privileged.
The Californian ideology found a way to attach itself to the long and
carefully cultivated American Dream. We have a Californian Dream, defined by
technological utopianism and inspired by neoliberalism and a form of psychopathic
individualism that emerged from the pseudo-philosophical writings of Ayn
Rand. It is built on the old structure of eugenic theories, a libertarian model
of maximum exploitation of anything that can be used as a resource, people
included, and technological solutionism. The high priests of the Californian cult
genuinely believe that we can ruthlessly exploit our environments, accelerate
the climate crisis, and move to another planet or space colony when life becomes
impossible on Earth.
Clarifying succinctly some of the aspects that shape education, AI, and cultural
context, we can see how an institution such as the OECD, specialised in economics,
can practically shape the agenda for educational systems and universities around
the world.

Notes
1. Jasanoff, S., & Kim, S.-H. (2015). Dreamscapes of modernity: Sociotechnical imaginaries and
the fabrication of power. The University of Chicago Press.
2. Select Committee on Artificial Intelligence. (2018). AI in the UK: Ready, willing, and
able? HL Paper 100, 2017–19. Authority of the House of Lords.
3. Kharpal, A. (2018). A.I. will be “billions of times” smarter than humans and man
needs to merge with it, expert says. CNBC. www.cnbc.com/2018/02/13/a-i-will-be-
billions-of-times-smarter-than-humans-man-and-machine-need-to-merge.html
4. Conway, F., & Siegelman, J. (2005). Dark hero of the information age: In search of Norbert
Wiener, the father of cybernetics. Basic Books.
5. James, I. (2009). Claude Elwood Shannon 30 April 1916–24 February 2001. Biographical
Memoirs of Fellows of the Royal Society, 55, 257–265. https://doi.org/10.1098/rsbm.2009.0015
6. Hayles, N. K. (1987). Text out of context: Situating postmodernism within an information
society. Discourse, 9, 24–36.
7. Hayles, N. K. (1987). Text out of context: Situating postmodernism within an information
society. Discourse, 9, 24–36.
8. Weller, M. (2015). MOOCs and the Silicon Valley narrative. Journal of Interactive Media
in Education, 2015(1), Art. 5. http://doi.org/10.5334/jime.am
9. The Economist. (2012, December 22). Free education. Learning new lessons. www.
economist.com/international/2012/12/22/learning-new-lessons
10. Pappano, L. (2012, November 2). The year of the MOOC. The New York Times.
www.nytimes.com/2012/11/04/education/edlife/massive-open-online-courses-are-
multiplying-at-a-rapid-pace.html
11. Brooks, D. (2012, May 3). The campus tsunami. The New York Times. www.nytimes.
com/2012/05/04/opinion/brooks-the-campus-tsunami.html
12. Friedman, T. L. (2013, January 26). Revolution hits the universities. The New York Times.
www.nytimes.com/2013/01/27/opinion/sunday/friedman-revolution-hits-the-
universities.html
13. Azevedo, A. (2012, September 26). In colleges' rush to try MOOC's, faculty are not
always in the conversation. The Chronicle of Higher Education. http://chronicle.com/
article/In-Colleges-Rush-to-Try/134692/
14. Barber, J. (2013, October 16). The end of university campus life. ABC Radio National
Australia. www.abc.net.au/radionational/programs/ockhamsrazor/5012262
15. Perna, L., Ruby, A., Boruch, R., Wang, N., Scull, J., Evans, C., & Ahmad, S. (2013).
The life cycle of a million MOOC users. The University of Pennsylvania Graduate
School of Education. www.gse.upenn.edu/pdf/ahead/perna_ruby_boruch_moocs_
dec2013.pdf
16. Hollands, F. M., & Tirthali, D. (2014). MOOCs: Expectations and reality. Full report.
Center for Benefit-Cost Studies of Education, Teachers College, Columbia University.
https://files.eric.ed.gov/fulltext/ED547237.pdf
17. To take just one example, Apple was offering iTunes U, a platform where anyone could
access a large variety of free courses, lectures, and materials offered by universities.
18. Informatics is a field of study focused on the representation, structure, processing, and
communication of information in natural and artificial systems. It is a discipline that
encompasses various fields of computing and processing of information, such as
Artificial Intelligence, Cognitive Science, and Computer Science.
19. The etymological source of cybernetics is found in the Greek word "kybernetes",
which means pilot, rudder, or a tool/device used to steer. Plato used this term in
Alcibiades to discuss the governance of people. Norbert Wiener defined cybernetics as
"the study of control and communication in the animal and the machine."
20. Conway, F., & Siegelman, J. (2005). Dark hero of the information age: In search of Norbert
Wiener, the father of cybernetics. Basic Books.
21. WHO. (2021, September 13). Depression. www.who.int/news-room/fact-sheets/
detail/depression
22. WHO. (2021, June 17). Suicide. www.who.int/news-room/fact-sheets/detail/suicide
23. COVID-19 Mental Disorders Collaborators. (2021). Global prevalence and burden of
depressive and anxiety disorders in 204 countries and territories in 2020 due to the
COVID-19 pandemic. Lancet (London, England), 398(10312), 1700–1712. https://doi.
org/10.1016/S0140-6736(21)02143-7
24. Mazoue, A. (2021, October 3). "French psychiatry has gone downhill in part because
of American influence." France24. www.france24.com/en/france/20211003-french-
psychiatry-has-gone-downhill-in-part-because-of-american-influence
25. "Alternative facts" was a formula used on 22 January 2017, during an NBC interview,
by Kellyanne Conway, Counselor to the US President, to describe a lie.
26. Gajanan, M. (2018, July 24). "What you're seeing . . . is not what's happening." People
are comparing this Trump quote to George Orwell. Time. https://time.com/5347737/
trump-quote-george-orwell-vfw-speech/
27. Brooks, D. (2013, February 5). The philosophy of data. The New York Times, A, p. 23.
www.nytimes.com/2013/02/05/opinion/brooks-the-philosophy-of-data.html
28. George Carlin. Life is Worth Losing, HBO, 2005.
29. Adams, J. T. (1931). The epic of America. Little, Brown, and Company.
30. Hoxhaj, R. (2015). Wage expectations of illegal immigrants: The role of networks and
previous migration experience. International Economics, 142, 136–151. https://doi.org/
10.1016/j.inteco.2014.10.002
31. Hoover, H. (1927). Motion pictures, trade, and the welfare of our western hemisphere.
Advocate of Peace through Justice, 89(5), 291–296. www.jstor.org/stable/20661595
32. Wolfe, P. (1991). On being woken up: The dreamtime in anthropology and in Australian
settler culture. Comparative Studies in Society and History, 33(2), 197–224. https://
doi.org/10.1017/S0010417500017011
33. Riesman, D. (1951). Review of Hollywood: The dream factory, by Hortense Powdermaker.
American Journal of Sociology, 56(6), 589–592. www.jstor.org/stable/2772480
34. Linton, R. (1951). Review of Hollywood, the dream factory – An anthropologist
looks at the movie-makers, by H. Powdermaker. American Anthropologist, 53(2), 269–
271. www.jstor.org/stable/663894
35. An example of this approach is provided by "A Mickey Mouse Approach to Globalization",
written by Jeffrey N. Wasserstrom for the Yale Center for the Study of Globalization,
Yale University.
36. Chetty, R., Grusky, D., Hell, M., Hendren, N., Manduca, R., & Narang, J. (2017).
The fading American dream: Trends in absolute income mobility since 1940. Science,
356(6336), 398–406.
37. Chetty, R., Hendren, N., Jones, M. R., & Porter, S. R. (2020). Race and economic
opportunity in the United States: An intergenerational perspective. The Quarterly Journal
of Economics, 135(2), 711–783.
38. Bishai, G. W., & Lee, D. (2018). Makeup of the class. The Harvard Crimson.
SECTION II

Higher Learning

This section looks at higher education and its profound crisis of identity, placing
a special focus on higher learning in the new millennium. This analysis also involves
an enquiry into the results of the adoption of neoliberal ideas in higher
education. The adoption of anti-democratic practices in universities that place
profits ahead of educational aims is also investigated, looking at the intense surveillance
of students and faculty and at other practices enhanced by the advancement of
technology. The Americanisation of higher education, along with the incoherent
adoption of market mechanisms for academic life, adds much more than the pressure
of the audit culture and the metrification of academic life. We now witness
a direct impact on the level of intellectual life, which is withering under the sum
of pressures on academics. The last chapter of this section aims to place the use of
AI in the context of educational aims and in relation to human values, such as the
love for learning, beauty, and passion.

DOI: 10.4324/9781003266563-6
4
AUTOMATION OF TEACHING AND
LEARNING

Higher education functions at the crossroads of politics, in a space between
the creation of knowledge (research and development) and the consumption and shaping
of culture and civil society, including political life. These all shape the
way teaching and learning are conducted. In the history of education, and the
history of universities, there are times when changes are revolutionary, involving
different ways of thinking about teaching, learning, culture, and humanity.
There is the refrain that universities do not change, staying too rigid behind the
impenetrable wall of an imagined ivory tower. It is a theme convenient to lazy
thinkers, inclined to adopt whatever looks appealing to the majority, and a profitable
way of presenting reality for different industries, such as financial consultancy
or edtech. For anyone even slightly familiar with this field, it is absurd to ignore
that universities are unrecognisable compared with just one or two decades ago. There is
no doubt that universities are in a profound crisis, but the sources of this crisis are
still debatable. Some find that there is not enough marketisation of universities,
while others point at decades of defunding universities and at the colonisation
of all public spaces by the logic and discourse of neoliberal capitalism. The
point shared by all these positions is that higher education is currently in
a complex crisis. Stefan Collini summarises the existential crisis of higher
education, noting that

Universities across the world in the early twenty-first century find themselves
in a paradoxical position. Never before in human history have they
been so numerous or so important, yet never before have they suffered from
such a disabling lack of confidence and loss of identity.
(Collini, 2012, p. 5)¹

The impact of this loss of identity is rarely grasped by academics, university
administrators, and politicians. In general, these groups compete to re-colonise
academia with the language of markets and the logic of new public management, and
stay oblivious to the current disaster and failure of the models they fraudulently
propose to "save" the university. There are remarkable books and examples in
this sense, some of which will be mentioned in this chapter. To understand how AI
can be used in higher education and what major risks will be associated with this
new revolution, we have to identify the most important moments shaping universities,
especially in the English-speaking countries.²
The most important changes are represented by the Americanisation of education,
which colonised universities and the idea of education with a neoliberal,
technocratic solutionism. Consequently, this led to an aggressive marketisation
of education, in a perverted form of capitalism. Like other parts of the economy,
universities, which are governed and led as parts of the market, adopted AI-powered
technologies to manage faculty and to employ new methods of surveillance
of students and staff. In the United Kingdom, the Trades Union Congress
published a report on AI-powered technologies used on workers and noted at
that time that "AI-powered tools are now used at all stages of the employment
relationship, from recruitment to line management to dismissal. This includes the
use of algorithms to make decisions about people at work" (TUC, 2020, p. 6)³.
Since then the trend has accelerated, under the impact of the COVID-19 pandemic and
the rapid advancement of new technologies. The neoliberal rationale of higher
education is defined by the dissolution of the ideals of the common good. Universities
now stand as marketised institutions, narrowly oriented towards graduate employability,
with a dysfunctional ethos and intellectual endeavours reduced to economic
returns and sloganeering. In short, universities are now defined by mediocrity
and short-termism. In effect, the intellectual life and the ethos of universities
make them unrecognisable compared with what they were just four or five decades ago.
Even if we look only at organisational arrangements or teaching and learning,
universities in the third decade of this new century are completely unrecognisable:
technology is central to teaching, and universities place a central role
on learning management systems (LMS) – a glorified title for online platforms;
teaching is haphazard, marked by casualisation and de-professionalisation,
which keeps damaging myths about teaching – such as the myth of learning
styles – prevalent in what students are led to believe is higher education.
Virtually anyone can throw out any bizarre idea as the "future of higher
education," and this is how we have a significant literature on destructive ideas
such as the "Netflix model" for higher education, the Amazon model, and the
shopping-mall model. These all start from two massive errors marking
the field: first, the idea that any former student is an expert in education, as if
the successful completion of one's studies gave any opinion a solid grounding
in "reality," versus useless "theory." Secondly, these models consider learning
as a process built from stackable modules, a Lego of information and learning
experiences that do not require coherence or even a unifying principle. Nano-,
micro-, or simply "intense" courses are used as a Lego set that leads to skills and
graduation. It is a depressingly naive and simplistic approach, but – with the use
of pre-packaged theories of personalised learning – these models shape higher education.
This is also how we effectively remove higher learning and transformative
education from these experiences. Before we take an in-depth look at these
developments, we have to understand the leading cause of the current multifaceted
crisis of higher education.
There is a long history of higher education, with key developments and ideas
that are highly relevant for the topic of this book, including the story of the intellectual
and moral failures of academia before and during the Second World War.
However, here we have to limit our analysis to what happened to the intellectual
life and internal logic of universities after the Second World War. There was a
devastating impact on universities in countries abandoned to the Soviet Union, but
next to the irreparable damage we find a nuanced and important history of the evolution
of academia. Nevertheless, we need to limit our analysis here to the Anglophone
countries of the West, as the colonising model of America has been widely
accepted across the world since the 1990s, with just a few notable exceptions (e.g.
North Korea).
After the Second World War, international institutions such as the World Bank
pushed neoliberal policies designed to change the nature of higher education
from a common good to a commodity, at the same time as they were helping rebuild
countries devastated by the war. However, there was a general reluctance
from the 1950s through the 1970s to adopt a neoliberal agenda, after the experience
of Nazi Germany's privatisations, its version of extreme capitalism, and the
obsessive focus on technological excellence.
It is important to note that there is a false and widely shared narrative regarding
the economic policies favoured by Nazi Germany. This narrative presents
the Nazis as anti-capitalists, strongly oriented towards the Left; this falsehood should
be succinctly addressed and clarified here. The Nazis had a strong capitalist agenda in
Germany, and the fact that the English usage of the word "privatisation"
follows the Nazi economic policy term "Reprivatisierung" makes that term absolutely
relevant in this sense. In a fascinating study on Nazi capitalism and the
genesis of the term "privatisation," Germà Bel notes that "surprisingly, modern
literature on privatization, and recent literature on the twentieth-century
German economy and the history of Germany's publicly owned enterprises, all
ignore this early privatization experience" (Bel, 2010, p. 35)⁴. In fact, as his
article documents, Nazi Germany largely privatised companies under public
ownership, and this "went against the mainstream trends in western capitalistic
countries, none of which systematically reprivatized firms during the 1930s"
(Bel, 2010, p. 34). In other words, Nazi Germany adopted a form of capitalism
that was more aggressive than the economic systems of England or the United States
at that time (Bel, 2006).
This phenomenon is also very well documented in "Betting on Hitler – The
Value of Political Connections in Nazi Germany," an excellent synthesis of data
and trends of German capitalism. This study analyses data from the Berlin stock
exchange and reveals that "firms that had 'bet on Hitler' benefited substantially"
on the stock market (Ferguson & Voth, 2008, p. 131)⁵. It was a capitalist economic
system in a particularly corrupt form.
It is convenient for a certain type of propaganda to cultivate the error of saying
that Nazi Germany was against capitalism when in fact it wasn't. Historical
archives show that there were at one point some left-wing, anti-capitalist factions
in the Nazi Party, but the evidence reveals that the aggressively capitalist orientation
won and these policies were adopted by Hitler's Germany. The confusion on
this topic is also cultivated on a fact separated from its history and context,
leaving many to believe that the coining of "privatization" can be traced to Peter
Drucker, who in 1969 used the term "reprivatization" in the sense currently used
by economists. Indeed, Drucker uses it in his book "The Age of Discontinuity,"
where he presents a strong critique of the public sector and its managerial capabilities,
concluding that "even the best government programme eventually
outlives its usefulness. And then the response of government is likely to be: 'Let's
spend more on it and do more of it'. Government is a poor manager" (Drucker,
1969, p. 214)⁶. In fact, Drucker just translated "reprivatization" from "Reprivatisierung,"
the term present in the economic policies of Nazi Germany.
To understand the current challenges for education, and for its future, it is also
important to look at what Peter Drucker finds instrumental for "reprivatization."
He notes in "The Age of Discontinuity" that "One instance of reprivatization in
the international sphere is the World Bank. Though founded by governments,
it is autonomous. It finances itself directly through selling its own securities on
the capital markets. The International Monetary Fund, too, is reprivatization"
(Drucker, 1969, p. 223) – notably without reference to meanings, humanity,
and empathy. In his extraordinary book "Capital in the Twenty-First Century,"
Piketty explains why neoliberal policies were unattractive for politicians and voting
citizens:

in countries around the world, faith in private capitalism was greatly shaken
by the economic crisis of the 1930s and the cataclysms that followed. . . .
The traditional doctrine of "laissez faire," or nonintervention by the state in
the economy, to which all countries adhered in the nineteenth century and
to a large extent until the early 1930s, was durably discredited.
(Piketty, 2014, p. 136)⁷

The view changed in the 1970s, with one key moment marked by Milton Friedman
when he published in The New York Times Magazine his manifesto essay, titled
unambiguously "The Social Responsibility of Business is to Increase its Profits"
(Friedman, 2007, pp. 173–178)⁸. In the United States, and in the part of Europe
that was left outside the grip of Soviet communism, this was seen as a favourable
moment to launch a concerted attack on the role of government, on state-led
planning, on taxation, and most of all, on the idea of the common good. This new
trend emerged in the early 1970s, finding new strength in the 1980s; Tony Judt notes
that since 1973

free-market theorists had re-emerged, vociferous and confident, to blame
endemic economic recession and attendant woes upon "big government"
and the dead hand of taxation and planning that it placed upon national
energies and initiative. In many places this rhetorical strategy was quite
seductive to younger voters with no first-hand experience of the baneful
consequences of such views the last time they had gained intellectual
ascendancy, half a century before.
(Judt, 2005, p. 537)⁹

The World Bank had promoted policies to change higher education from a domain
guided by the idea of the common good into a part of the market, a commodity, since the
early 1990s, but the first key step towards this objective was an international
event in 1994: the Marrakesh Ministerial Meeting. It was here that
representatives of the 124 governments and the European Communities participating
in the Uruguay Round of Multilateral Trade Negotiations met, at Marrakesh,
Morocco, from 12 to 15 April 1994. The Marrakesh Agreement established the
World Trade Organization (WTO) and defined the basic framework for trade
relations among all WTO members, under market-oriented policies. This is how
the WTO describes this moment: "The 'Final Act' signed in Marrakesh in 1994 is
like a cover note. Everything else is attached to this. Foremost is the Agreement
Establishing the WTO (or the WTO Agreement), which serves as an umbrella
agreement."
This agreement also governs what was called "education services." It is the
moment when, for the first time, an international treaty included education
in the list of sectors subject to trade in international markets. In 1999,
in Seattle, the WTO conference adopted the conclusions of the "Millennium
Round," whereby education is legalised as part of the market, regulated by the same
rules of trade used for commercial entities. Education is now part of the market
just like other sectors, such as financial services, insurance and banking services,
and construction. The academic world is at this point dramatically changed. The
nature of education and of universities becomes marginal in the new commercial
field of higher education. Universities are considered, ranked, and evaluated with
commercial concepts, procedures, and measurements; in fact, just a few years after
the adoption of the Millennium Round, we see the first international academic rankings,
with the use and impact they have today. The "product" had to be measured
and ranked for a proper cost; the impact of the new field of university rankings is
extraordinary for such a recent invention.

These steps partly explain why we find, in the second decade of the 21st century, the field of higher education entirely engulfed in an ecosystem that normalised commercial rankings and the economic rationale for university governance and education aims. The lowest denominators of higher education and learning, such as reducing life to employability and economic success, represent the lighthouse guiding learning. The aims of education, the ideals of a truly civil society, with a balanced appreciation of equity and fairness, humanism, social justice, and economic prosperity, are all drifting towards a dissolution in control and conformism, in the technologisation of spaces that must be left for humans to define and build, and in surveillance.
The shift from learning and research to a field governed by WTO agreements and commercial considerations happened independently of academia, in a process that did not include representatives of student bodies, academics, or universities. The relevance of this omission became clear when ministers of economy, trade, and commerce became the key decision makers for entire systems of higher education, universities, and academics. Scholarly ideals became ornaments of a different flavour; the entire raison d’être of universities is now structured by commercial considerations. The very aims of education changed in reality, regardless of what some mission statements claim or what we see in glossy strategies adopted by universities. In commerce the ultimate reason is to make a profit, and this greatly influenced the evolution of academia for the decades that followed.
Not just academia was changed by these decisions; all sectors that were governed according to the aim of the common good and social progress, such as education or healthcare, became open to the new regulating principles of trade and commercial value. The WTO framework sets the function of the General Agreement on Trade in Services (GATS) in three main components:

• the framework of regulations that set the general obligations governing trade in services (including market access);
• details on specific services sectors; and
• timelines of liberalisation commitments for all WTO members.

At that time, in the introductory notes of a speech delivered in New Zealand in 1999, the director general of the WTO observed that the entire world was changed by these agreements, by a new ideology. He underlined that

From President Clinton to President Castro, Prime Minister Blair to President Mandela, President Cardoso, to then-Prime Minister Prodi – all saw this system as central to development and stability in our interdependent world. Each stressed the reality of the globalisation process and the need to improve its governance.
(Moore, 1999)

The internal logic, the ethos, and aims of all sectors included in GATS were altered or entirely changed under this new system of commercial agreements. The process of globalisation came with generous promises, but the main principles guiding it revolved around the interests of the United States. Consequently, globalisation accelerated the colonising process carried out in the past by the American Dream; unknowingly, even declared enemies of America adopted the American model. Beyond caricatural representations of this process, the new wave of colonisation with the American model goes much deeper than the adoption of jeans or pop culture. We don’t need the level of expertise and brilliance of thinkers such as Stiglitz or Piketty to see that when the United States adopted neoliberalism the rest of the world followed the example. It is interesting to see how international organisations, such as the IMF, the World Bank, or the OECD, use their power and influence to channel other countries, even those outside the American areas of political influence, to adopt neoliberalism for their economies, public policies, and cultural development. David Harvey defined neoliberalism as “a theory of political economic practices that proposes that human well-being can best be advanced by liberating individual entrepreneurial freedoms and skills within an institutional framework characterized by strong private property rights, free markets, and free trade” (Harvey, 2005, p. 2). Pierre Bourdieu once noted that neoliberalism is in essence just a programme of the methodical destruction of collectives, of the idea of the common good. The American model became synonymous with the neoliberal project starting with the 1970s and was promoted to the rest of the world with the economic power of international organisations, multinational corporations, and political influence. Of course, this is a very complex and convoluted process that can be explored extensively; however, Joseph Stiglitz, winner of the Nobel Prize in Economic Sciences in 2001, summarised in an article for The Guardian the key dynamic of globalisation and how it was forged in the second part of the 20th century:

The US basically wrote the rules and created the institutions of globalisa-
tion. In some of these institutions – for example, the International Mon-
etary Fund – the US still has veto power, despite America’s diminished role
in the global economy. . . . To someone like me, who has watched trade
negotiations closely for more than a quarter-century, it is clear that US
trade negotiators got most of what they wanted. The problem was with
what they wanted. Their agenda was set, behind closed doors, by corpora-
tions. It was an agenda written by, and for, large multinational companies,
at the expense of workers and ordinary citizens everywhere.
(Stiglitz, 2017)

International organisations, such as the WTO, the World Bank, and the OECD, had this genesis and evolution, as tools of an American globalisation. There is a myth that the Soviet system lost the Cold War when its economy collapsed, but this is a superficial judgement. That war was lost as soon as the American Dream, the bright lights of Hollywood and the charisma of movie stars, began enlightening the imaginations of people living in a system that was offering a very different dream, one standing as a barren land where imaginations wither. This is where the competition was truly lost; the anti-communist revolutions came at a point when even communist leaders knew that their system was a farce and the only things keeping it alive were fear and control. When these last foundations developed vulnerabilities, it was all lost. America wrote the rules for all countries, created institutions for globalisation, and attached to them a credible and seductive project, lit by the magical brightness of the screen, where idealised heroes told stories about a perfect and beautiful life, always with a happy ending. The important point made by the US Commerce Secretary Herbert Hoover in 1927 became a powerful tool in the ideological competition of the 20th century, and globalisation was aligned with the idealised model of the American life. The effort to design protocols and mechanisms suitable to properly serve the interests of various American corporations was the easy part.
The American cultural model infiltrated economic and public life across the world, and the transition to the 21st century is marked by the colonisation of the last spaces of public good with the “logic of the market,” and the jargon, aims, ideas, and utopias that shape the neoliberal project. The World Trade Organization changed academia into a higher education market guided by the aim of getting “value for money,” where competition, the commodification of learning and teaching, and the goals of efficiency define the ethos of universities. Professors became commodities and service providers, and students were recast as customers, with potential students as resources on the target markets. Universities adopted New Public Management and institutional entrepreneurship, and market positioning became the measure of judgement for various stakeholders. Leaders in academia are managers, responsible for efficiency, alignment with market demands, partnerships with industry and other corporate structures, following the achievement of Key Performance Indicators (KPIs). Rankings of what some unfortunate universities openly call “the product” – a ridiculous caricature of what is left of the aims of education, devoid of coherence in learning and belief in the power of good teaching – guide institutions of higher education across the world. It is a competition to imitate and get as close as possible to Harvard University, the ultimate model of prosperity and astute managerial practice. The aim became to get as high as possible in international rankings on the market. For universities unable to imitate Harvard, with mediocre research and poor indicators, new rankings were created to suit all (and create another market, of convenient rankings that can be manipulated). This is how we have hilarious examples of universities proudly posting their Number 1 position in the country/world for green campuses, or for sport facilities and results in gymnastics and so on. Pseudo-rankings and deceiving practices became tools for universities to attract students, in a dynamic that is not even remotely related to an educative intention.
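To see how easily such composite rankings can be tuned, consider a minimal sketch (all institution names, indicators, and numbers below are invented for illustration): the underlying data never changes, yet the choice of weights alone decides who is “Number 1.”

```python
# A toy composite ranking. The data is fixed; only the weights change.
universities = {
    "Alpha": {"research": 92, "teaching": 61, "green_campus": 40},
    "Beta": {"research": 55, "teaching": 70, "green_campus": 95},
    "Gamma": {"research": 74, "teaching": 88, "green_campus": 58},
}

def rank(weights):
    """Return university names ordered by their weighted composite score."""
    scores = {
        name: sum(weights[k] * value for k, value in indicators.items())
        for name, indicators in universities.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# A "research-led" weighting crowns Alpha...
print(rank({"research": 0.8, "teaching": 0.1, "green_campus": 0.1}))
# ...while a "sustainability" weighting crowns Beta, on exactly the same data.
print(rank({"research": 0.1, "teaching": 0.2, "green_campus": 0.7}))
```

A provider of a “convenient” ranking only needs to publish the weighting that flatters its clients; the numbers themselves never have to change.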

In other words, universities became unrecognisable in a relatively short time. In just two decades, academia was turned into a space of managerial gibberish, ruled by pseudo-managers eager to adopt solutions that lead to the demise of universities. It is natural to see such enthusiasm for the destruction of what is left when the principle of accountability is valid only for lower organisational levels, and those inciting the “disruption” do not bear any of the associated costs and effects. The university is, in the first part of the 21st century, an interregnum structure, with some features typical of a corporate organisation and, at the same time, with functional and organisational aberrations. Universities retain some old academic features, with pockets of significant research and scholarship, teaching, learning, and idealist areas that are still guided by the classical aims of education.
The Americanisation of universities developed across decades, involving policies, administrative decisions, and cultural and managerial models sourced in the imperial imagination of America. The conversion was not a transparent process, and citizens, academics, or students were not part of what the WTO decided to change in the future of education, culture, and civil society. In this process teaching was forcefully aligned with “good teaching” as defined in American universities. The dominant ideas in pedagogy and the educational trends of American education became major trends for all countries, stimulated to join the competitive market of education services as measured and designed by the economists working for the OECD or the World Bank. Under the control of “market mechanisms,” which are in fact commercial derivatives of the American model, and the application of neoliberal systems of governance, it was simply normal to find expertise in the economic institutions instrumental in globalisation.
International trade agreements set the architecture for a commercialised system of higher education at a global scale, under American influence and a subtle design that favoured over time US corporations and universities on the global market. The WTO was pushing from the very beginning the agenda of unfettered free markets, and everything has changed within academia: for example, we have a different structure of employment based on limited contracts in universities, with an unprecedented predominance of casual staff, with devastating effects on the morale and quality of teaching. Seeds of neoliberalism in academia were planted long before this event, but the WTO agreement represents a drastic and profound reorientation of university functions and goals.
The general complacency of universities and the comfortable inertia of the academic body partly explain the lack of a concerted and consistent reaction against these radical changes, although it was obvious that they stand against the interest of academics, students, and civil society. The impact of this revolution became impossible to ignore; in a book devoted to the “pathological organisational dysfunction” of academia, we find the analysis of a “syndrome within which the toxic university has become enveloped in its unquestioning embrace of the tenets of neoliberalism – marketization, competition, audit culture, and metrification” (Smyth, 2017, p. 5). We can add to the list of symptoms representative of the “pathological organisational dysfunction” of universities the over-reliance on technology, the ubiquitous anti-intellectualism, the culture of bullying, sloganeering, forcefully imposed mediocrity, and cynicism. An article published by Nature in 2021 reveals the levels of bullying normalised in universities, with the majority “feeling unable to discuss their situation without fear of personal repercussions” (Gewin, 2021, p. 299). The role of leadership is to maintain a culture of obedience and fear, convenient for control and surveillance, strategic planning, competitiveness, and institutional entrepreneurship. Higher education became in recent years one of the most stressful work environments, recording stress levels even higher than the average recorded in occupations that have by their nature highly stressful conditions (e.g. paramedics, police, or firefighters). This is a culture seduced by its own voice repeating that “critical thinking” is a key for progress while making it impossible for academics to be critical thinkers; what is expected is obedience and comfortable mediocrity, along with the completion of quantitative targets of students or publications relevant for various international rankings. How is it possible to have graduates able to be astute critical thinkers when their own teachers lost this skill a long time ago? In the era of social media, large-scale manipulations using AI, surveillance and profiling, and the rise of unprecedented risks, students are left unprepared by the only institution promising that a certificate will come with knowledge and skills, not only credentialisation.
The idea that “what gets measured gets done,” which fuelled an obsessive fixation on metrics and indicators, is not an accurate, normal, or even healthy way to think about life. Risking a truism here, we can admit that some parts of education can be exactly and entirely measured – with the inherent risk of destroying the educative nature of the experience – but most of a complex and meaningful educational experience is as complex as life itself. The metrification model proved in an endless loop that it fails to provide certainty, control, or efficient defences against crises. In fact, the neoliberal model of metrification is failing spectacularly in areas more suitable to measurement than education, such as the financial sector. We can still remember how good all indicators looked just before the Great Financial Crisis (GFC) of 2007–2008. Metrification in education reduced and distorted what is considered a success and what is a failure; depth is abandoned for quantity; facile and fast delivery became the only possible way to function in the new managerialist paradigm. The move to metrics and measurements for a list of KPIs in education stifled genuine innovation and initiative while discouraging risk-taking, creativity, and genuine responsibility. The culture of compliance and mistrust is leading to highly inefficient systems that struggle with mediocre research produced to make the required numbers.
The crisis of higher education, and the constant decline of higher learning in recent decades, is also rooted in old American anti-intellectualism. Alexis de Tocqueville noted in the 19th century the American distaste for intellectuals and experts. Virtues such as education, wisdom, erudition, and in-depth knowledge are not associated with American heroes, in pop culture or real life. Richard Hofstadter noted as early as the 1960s the definitive success of anti-intellectualism in American life, in his Pulitzer-winning book “Anti-Intellectualism in American Life.” This success was sold to and adopted by the rest of the world, and education was an integral part of it. Hofstadter detailed in 1963, around the same time the concept of AI was born, the difference in popular perception and preference between intelligence and intellect. He noted that

although the difference between the qualities of intelligence and intellect is more often assumed than defined, the context of popular usage makes it possible to extract the nub of the distinction, which seems to be almost universally understood: intelligence is an excellence of mind that is employed within a fairly narrow, immediate, and predictable range; it is a manipulative, adjustive, unfailingly practical quality – one of the most eminent and endearing of the animal virtues. Intelligence works within the framework of limited but clearly stated goals, and may be quick to shear away questions of thought that do not seem to help in reaching them. Intellect, on the other hand, is the critical, creative, and contemplative side of mind. Whereas intelligence seeks to grasp, manipulate, re-order, adjust, intellect examines, ponders, wonders, theorizes, criticizes, imagines.
(Hofstadter, 1963, pp. 24–25)

America was able, since the beginning of the last century, to present the archetype that was – consciously or not – followed and adopted by the rest of the world. It is relevant to see that the typical American hero is an efficient primitive, a “down-to-earth” character who can deal with real problems in life, as opposed to ridiculous educated people. It is somehow amusing to see that Superman’s alter ego is a clumsy nerd, incapable of functioning properly; this alter ego is the opposite of the ultimate hero proposed by the American model. In the introductory chapter of his book “The Anti-Intellectual Presidency,” Elvin T. Lim notes that “the denigration of the intellect, the intellectual, and intellectual opinions has, to a degree not yet acknowledged, become a routine presidential rhetorical stance. Indeed, intellectuals have become among the most assailable piñatas of American politics” (Lim, 2008, p. 3). The American anti-intellectual positions found new strength to colonise all spaces, from the economy to political discourse, when a mediocre actor was elected President of the United States in the 1980s. Of course, we cannot say that Ronald Reagan was solely responsible for anti-intellectual positions in American culture, as the suspicion of and distaste for intellect and intellectual life are well rooted in American culture, and an integral part of the American Dream. When Reagan declared that taxpayers should not be asked “to subsidize intellectual curiosity” he opened a new perspective with a devastating impact on education.

The Reaganite neoliberalism found enthusiastic support in the United Kingdom, and the political and cultural currents surrounding institutions of education have since turned increasingly destructive and toxic. Probably one of the most representative moments for the anti-intellectual neoliberal tendencies in the Anglo-Saxon countries in this new millennium is a famous statement of Michael Gove, a former British Secretary of State for Education (from 2010 to 2014), who said in a televised debate that “the people of this country have had enough of experts.”17 It is truly significant to see the anti-intellectual position advocated publicly in Great Britain by a former minister of education. It is the anti-elitist agenda constantly promoted by the British Conservative Government for a decade. Of course, the British Government is entirely formed by politicians educated in the most elitist schools and universities of England. In this view “experts” are a nuisance, dangerous elements for good social order and progress; they are the parasitic “elite” that is responsible for most harmful ideas promoted against good “regular people.” Of course, the term “anti-elitist” is just a code pointing to educated, informed, and fact-based positions; these are presented as repugnant, condescending, and dangerous. There is nothing new in this position, as all fascist tendencies, on the political Left or Right, reach the point of confrontation with intellectual elites. Mao and the Cultural Revolution stand as a tragic example of fascist developments in a leftist movement, with the extreme and brutal treatment of scholars and intellectuals. On the political right, the symbolic place occupied by education degenerated fast in the neoliberal order, which places value on people with money, not on teachers. This paradigm holds in high esteem the “job-creators,” the wealthy who supposedly provide “opportunities for employment” and “trickle down” some of their wealth to the poor majority; those who educate people to function in society, including the abilities required for those jobs, are treated with contempt, as failures unable to earn more money or do something really useful. In 2012 Mitt Romney, the former Governor of Massachusetts and the Republican candidate in the Presidential race, noted in one of his speeches that “There are college students at this conference who are reading Burke and Hayek. When I was your age, you could have told me they were infielders for the Detroit Tigers” (Romney, 2012). This is the project for a society structured on a culture where successful people do not read books. A visitor to a university in the English-speaking countries may notice that academics can rarely (if ever) be seen reading a book on campus; accomplished members of academia have to display that there is no time to spend trivially on reading a book when there are so many managerial decisions to make, tasks to address, and indicators to check.
Neoliberalism stands as a foundation for a political, cultural, and civic system that respects markets, not schools or universities. The distaste, hostility, and violence against the educated, the experts, are just direct consequences of these ideas. This explains why a special report delivered by Reuters in the first months of 2022 reveals extremely worrying developments in the dissolution of authority related to education. Reuters notes that

[L]ocal school officials across the United States are being inundated with threats of violence and other hostile messages from anonymous harassers nationwide, fuelled by anger over culture-war issues. Reuters found 220 examples of such intimidation in a sampling of districts.
(Borter et al., 2022)

This special report presents examples found in a sample of school districts across America, which reveal a climate of aversion and extreme hostility against school representatives and educators. Various forms of violence are triggered by issues such as what students can learn, students’ health and safety, or simply disagreements over what should be in the school’s curriculum. Teachers and experts in education are not trusted to make these decisions, and are treated with the most extreme contempt. The level of violence and aggression is terrifying. Journalists use the example of audio recordings, letters, and messages that include death threats against school officials and their families, including their children. In one of these instances, for example, we find that

Board members in Pennsylvania’s Pennsbury school district received racist and anti-Semitic emails from around the country from people angry over the district’s diversity efforts. One said: “This why hitler threw you c-ts in a gas chamber.”

In another case presented by the authors of this report we find how a child of a school board member received a letter with a disturbing message: “It is too bad that your mother is an ugly communist whore,” said the hand-scrawled note, which the family read just after Christmas. “If she doesn’t quit or resign before the end of the year, we will kill her, but first, we will kill you!”

The role of education is subverted, and the effects of this radical shift reflect how dehumanised our society can become in the dystopian realities caused by neoliberalism and colonial thinking. In fact, neoliberalism justified and normalised not only contempt for expertise and education, but cultivated ignorance and distrust of the educated “elite.” For example, a 1969 internal memo of a corporate executive in the tobacco industry reflects the incentive to cultivate ignorance and respect for ignorance: “Doubt is our product since it is the best means of competing with the ‘body of fact’ that exists in the mind of the general public. It is also the means of establishing a controversy” (Gee, 2008, p. 474). The aim of educative influences fuelled by corporate structures is not simply to let people remain ignorant; the aim is to make people think that they are educated and know better than experts. This is the point where doubt can be successfully cultivated and profits collected. The multifaceted stories of the Covid-19 pandemic have as a common feature the confusion between personal opinions and expertise, between access to the Internet and genuine research. A significant part of society is credentialised, not educated, and is led to believe that experts know less, or have hidden reasons that can be masterfully revealed by a functional illiterate with a diploma sold at one point by schools and universities engaged in a competition to reach their corporate strategies and report successful completion of their KPIs.
The idea that education can be viewed mainly as a commodity that can be packaged and sold like any other product is one of the most destructive forces for our future. However, the definitive blow to education came from the political Left, when an American governor pushed further the neoliberal logic for the governance and organisation of schools. To understand why this was a lethal blow for education and for the idea of the common good in education, against any ideals that are not directly related to markets and profits, we need a succinct reminder of the events that led to that moment. As soon as he was elected, Ronald Reagan promoted his neoliberal agenda with an effective use of the American suspicion of government and authority. In 1981 Reagan adopted tax cuts for the rich, for the first time in American history. The narrative was that making the rich even richer would protect American citizens from the irresponsible and greedy spending nature of the state. This does not make any sense to a rational person. The absurd nature of this idea was proven by the constant increase of inequality and the fact that the immensely rich do not improve the life of communities even when the tax paid on billions earned is lower than what a teacher pays.
The 1980s set the trend of decline in state funding for higher education in the United States, with the financial burden shifted onto student fees. Inspired by this neoliberal revolution in the United States, Margaret Thatcher adopted in the United Kingdom an equally aggressive neoliberal agenda. Her government decided that the only solution for the stagnant British economy was a competitive market, in all sectors, including education. Market colonisation then found a new idea proposed by the American Left, a new approach that would entirely change how schools – and later universities – are funded and governed. Bill Clinton, at that time the Governor of Arkansas and leader of the National Governors Association, proposed the performance-based accountability model that was adopted for schools from the mid-1980s. It is a model based on “measuring performance” blind to nuances, local contexts, and specific challenges. This is how metrification became a defining feature of “modern” education, a feature that became inseparable from education and academic life. From the very beginning, this idea was named the “horse trade” of educational reform. The market logic looked relatively innocuous at the time: it just said that schools could have more freedom in choosing the content and methods of instruction in exchange for “greater accountability” for academic performance. It became clear that this translates into ongoing measurements of the activity and performance of students and teachers. KPIs and other narrow quantitative tools became central for a thorough evaluation of educational “efficiency.” Learning did not happen if it was not captured with the set of indicators relevant for immediately measurable outcomes. This changed profoundly what we understand today as “education” and “learning” in schools and universities.
Another effect of the “horse trade” was related to the fact that students from disadvantaged backgrounds performed poorly; this is how schools – especially those serving disadvantaged communities – lost funds. The quick solution was to play the system: students who typically performed poorly in tests were asked by school administrators to stay at home during evaluations, and only “good” students were included in tests. While some schools improved their results and secured funding, the most vulnerable students were stigmatised and deprived of a relevant education. It quickly became a perverted system. Other forms of manipulating data became the norm, and more complex and subtle solutions were designed, all at the cost of learning and real education. The idea of metrics and measurements stands now at the centre of the governance of higher education, shaping directly what we understand by teaching and learning. Universities created space to “answer” the metrics that provide “a competitive advantage” in the international rankings. Good positioning in rankings translates into funding and budgets for universities, and entire programs perceived as unable to prove an immediate profit, such as humanities and arts, increasingly lose the financial sources needed to remain a part of academia.
The shift of focus from learning to test performance and results, metrics, and the directly measurable outcomes of performance-based governance is the most significant step towards draining education of substance and relevance. It is not that students do not learn anymore, but even the best students, genuinely interested in achieving as much as possible in their academic careers, learn for the test, as the system pushes all towards this end. The love of learning, the interest in in-depth thinking, alternative solutions, or real foundations for a well-rounded education stand replaced by crude instrumentalism and sloganeering, in a process drained of significance, joy, and vigour. At the declarative level, critical thinking, “excellence,” and “performance” are secured by all “commodity providers”; a simple honest look reveals that real education is happening now in spite of this system, not nurtured by it.
Since Clinton, the American model of education has been obsessively fixated on performance-based accountability and universities’ return on investment (ROI). The international organisations, especially the OECD and the World Bank, aggressively promoted the new model as an integral part of the Americanisation of the world. From the structure of academic degrees to governance and administration, publications, and the exchange of ideas, higher education cannot now be imagined beyond the Anglo-Saxon model represented by American universities, and some British policies and influences. Unbridled greed, the key feature of American capitalism identified by Stiglitz, is now proudly adopted by universities in the 21st century as a main value and raison d’être. Derek Bok, the former President of Harvard University, astutely observed in his book Universities in the Marketplace that “what is new about today’s commercial practices is not their existence but their unprecedented size and scope” (Bok, 2003, p. 2). “Greed is good,” the speech delivered by the fictional character Gordon Gekko in the movie “Wall Street,” is what led the world to the current crises, the slow implosion of our systems: environment, democracies, social balance and civility, health and climate, and our education and culture. Greed was the force that constantly eroded the meaning of education and changed universities into “Potemkin villages” where incoherence, poor results, the dissolution of scholarly ideals, and the profound malaise at the heart of the academic ethos are hidden behind corporatist gibberish and sloganeering.
The joy of learning – and teaching – was removed from higher education by instrumental aims, crude metrics, and neoliberal managerialism. A report published in 2017 found that

data indicate that the majority of university staff find their job stressful. Levels of burnout appear higher among university staff than in general working populations and are comparable to “high-risk” groups. . . . The proportions of both university staff and postgraduate students with a risk of having or developing a mental health problem, based on self-reported evidence, were generally higher than for other working populations.
(Guthrie et al., 2017)

Research constantly finds that the level of stress for academics is leading to a significant increase in mental disorders. The results are often tragic and irreparable; for example, Malcolm Anderson, a deputy head of section and a personal tutor in accounting at Cardiff University’s Cardiff Business School, collapsed under the stress and unreasonable requirements. He committed suicide and left notes to his family and to his university; it was revealed that he had to mark 418 exam papers in just 20 days. This translates to nine hours of work per day without any breaks (including food or toilet breaks). This is only for the assessment and marking of students’ work, excluding the other duties and expectations associated with his role of Deputy Head of Section. Despite the fact that he complained to management about his allocation of work, he was ignored. The police detective investigating the case noted that e-mails on Malcolm Anderson’s work computer “refer to work expectations not being manageable and the number of students going through the roof but there’s been cuts.”
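A rough check of these figures confirms the order of magnitude (the time per script was not given in the public reporting; roughly 25 minutes per script is an assumption used here only for illustration):

$$418 \times 25\ \text{min} \approx 10{,}450\ \text{min} \approx 174\ \text{h}, \qquad 174\ \text{h} \div 20\ \text{days} \approx 8.7\ \text{h/day}$$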
In the United States, there are numerous stories of academics pushed to extremes. There is the story of Margaret Mary Vojtko, an adjunct professor of French, who died of cardiac arrest after she found that she had lost her poorly paid job at Duquesne University. Margaret’s insecure, poorly paid job could not cover her cancer treatment. She was forced to become homeless over the winter as heating and basic maintenance of her house became impossible. In British universities, research reveals that academics work under the pressure of precarious employment arrangements, a feature of educational systems reformed by the OECD and the World Bank; for example, a report published in 2021 notes: “Casualisation remains a problem for all academic staff groups but the use of fixed-term contracts for research staff, and zero-hours and hourly-paid contracts for teaching-only staff is endemic” (UCU, 2022). In Australia, studies show that the majority of academic staff are employed on contingent contracts, fixed-term, and casual/sessional employment arrangements (Andrews et al., 2016). Poorly paid, with low and continuously declining social prestige, teachers are pushed to leave and search for a career that can secure a decent life. Those who remain in schools and universities are presented – and perceived – as a type of servant left to look after students and make them feel good. There is a crisis of authority for education and an abrupt fall in teachers’ social prestige.
The managerial model adopted for higher education, where everything is commodified and reduced to the logic of profits and markets, all for sale, creates a toxic culture incongruent with creativity, free thinking, open debates, inspired and passionate teaching, and a culture of knowledge exploration and creation. It is a dysfunctional ethos that hinders the passion for learning and significant explorations. Some of the most famous academics often reflect publicly that in the current context of higher education their achievements would be impossible. In an interview published at the end of 2016, Saul Perlmutter, a Nobel Laureate astrophysicist and a physics professor at the University of California, Berkeley, underlined that his world-changing discoveries would not be possible today. He noted that the current research and intellectual environment in universities is limited to predictable, short-term, and directly measurable outcomes, which are all stifling creativity. Researchers, Professor Perlmutter noted, are now “very good at not wasting any money and also not good at making any discoveries.” In his speech at the Times Higher Education World Academic Summit at Berkeley, he said that

in the modern-day context there’s a tendency to ask: “What is it that you are planning to research? When will you finish it? And what day will your discovery be made? . . . I don’t think this particular project I’m describing would have happened in today’s funding environment.”
(Lamb, 2017)

Similarly, Peter Higgs – the famous physicist who gave his name to the Higgs boson at the core of research conducted at CERN’s Large Hadron Collider – noted in 2013 that no university would hire him these days, as he would not be considered “productive” enough:

Today I wouldn’t get an academic job – he said to The Guardian on his way to Stockholm to receive his 2013 Nobel prize for science – It’s as simple as that. I don’t think I would be regarded as productive enough.
(Vostal, 2016, p. 1)

He expressed in the same interview doubts about the capacity of our current academic culture to lead to a similar scientific breakthrough; the focus now stands on the relentless push to secure grants and publish for quantitative targets, not on depth and creativity.
There are many other examples of “unproductive” scholars who had the long-lost privilege of solitude and time to think and discuss their findings, revolutionising their fields of study with incalculable benefits for society and economies. We can consider another example in this sense; it is the story, from the 1970s, of a famously unproductive academic in the Arts Faculty at Harvard University. Universities at that time, including Harvard, were much less focused on quantitative targets and the pressure to “perform” for KPIs. However, his colleagues looked at him as an oddity, an idler academic with no significant results. One day this colleague finally published a manuscript: “A Theory of Justice,” by John Rawls. It is one of the most cited works in the field, with tens of thousands of studies building on it. The most prestigious awards in the field are associated with this study, published in 1971 after a long time of preparation. Eric Betzig, the 2014 Nobel laureate in chemistry, revealed that the key to his success was “Just being left alone and allowed to focus 100 per cent,” adding that “not being in academia for me has been the key.” The model for these mutations follows a relatively simple pattern: lobbying and paying for influence in decisions, imposing ideas that lead the entire system to bankruptcy, and then, at the moment when the system is already in ruins, the opportune salvation of privatisation. Education and learning accelerated this dynamic with the relentless work of international organisations in finance and economy. It was normalised that the best expertise is in the hands of money-makers, of economists, bankers, and technologists. This is how we reached the point where significant discoveries and ideas can be found outside the walls of academia, far from the wild marketisation of everything.
We collectively contemplate now the fall of universities, and their complete transformation into institutions of professional training, serving corporations and preparing the workforce for employers. There is no vision and no capacity left to let students choose the complex and difficult path of what we used to call a higher education. Significant research, new ideas, and exploratory analysis are now possible much more outside the walls of academia, where the impact of the destructive ideological fundamentalism of neoliberalism in education has programmatically eroded teaching, learning, and research. In an article published by the National Review on the decline of prestige of American universities, a scholar from Stanford presents with courage the new reality of higher education:

[I]magine a place where the certification of educational excellence, the bachelor of arts degree, is no guarantee that a graduate can speak, write, or communicate coherently or think inductively. . . . Imagine a place loudly devoted to income, capital, and marketplace equity measured against the reality that 800 of the largest colleges and universities hold more than $600 billion in endowments. Yet just 20 elite universities account for half that total. And just four – Harvard, Yale, Stanford, and Princeton – account for almost a quarter of all endowment funds.
(Hanson, 2021)

This is the American model for education: a prosperous elite of corporate entities, where learning is a marginal issue, and a fight for survival for the rest. The irony is that we have known that education has been in free fall for decades, and we can all see that universities face an identity crisis with no solutions in sight. It was convenient to ignore the grave dysfunctions of the system, even to pretend that it is not part of reality that there is “no guarantee that a graduate can speak, write, or communicate coherently or think inductively.” The result of the new managerialism and economic models adopted for universities is an ethos of bullying and dysfunction, where resistance is futile, sanctioned, and ruthlessly eliminated. Academics became complicit in this system, for survival, playing the academic game of production and competition against each other. A study on this topic tragically reveals that genuine critical thinking is excised and forms of resistance are limited to secret grunts and whispers. Managerial demands and the ridiculous language that betrays the insecurity of some failed economists or entrepreneurs are met with obedience and public enthusiasm for the exploitation of others and of themselves (Kalfa et al., 2018).
We witness the end of the illusion that these grave dysfunctions are not critical for our existence and that aggressive mediocrity can be a comfortable substitute for wisdom, knowledge, creativity, and educated imaginations. The pandemic revealed that we have collectively built foundations so weak that our vital systems implode when we look at the world only in pecuniary, commercial terms. The crisis of climate change raises the very real possibility of catastrophic changes and irreparable disasters; we see a continuous rise of extreme inequities, with economic systems serving a minuscule minority of extremely wealthy profiteers and desperate poverty for the majority. The rise of extremism and fascism is generalised and too strident to be ignored; it is normalised and already part of our political systems. We may think that, to use the title of a book written on this important topic by Michael Sandel, there are very few things left, if any, that money can’t buy. We see now that we can ruthlessly exploit everything, squeeze a profit from any part of life, but there is a cost for the amoral choices made by the financial landlords and corporate megastructures. There may be almost nothing left that “money can’t buy,” but we cannot buy a sustainable future; we have to re-learn how to build one.
The new lexicon of academia now reflects the new values guiding universities: “benchmarks,” “performance indicators,” “business acumen,” “customer focus,” “talent management,” “rankings,” “outcomes,” “product,” “customers,” “outputs,” and other terms used commonly in the neoliberal, managerialist jargon. We exist in and are shaped by the language we use; creating value on the “learning market” is disconnected from human-centred concepts. This comes with significant implications for what we now understand by “learning,” education, and the idea of humanity. The neoliberal “newspeak” is dramatically changing the aims and reasons of higher education, impoverishing and strictly limiting how we imagine education and our potential. Stephen Ball, a professor at the University of London and Director of the Education Policy Research Unit at the Institute of Education, noted in this sense that “the novelty of this epidemic of reform is that it does not simply change what people, as educators, scholars and researchers do, it changes who they are” (Ball, 2003). It is not just the obsession with “reform,” the code word for turning every aspect of our lives into a source of profits and parts of a market, but a radical shift in what we consider to be part of relevant reality. The new ideology decided that only what can be measured is real, or, to use the terms of a book written by David Beer, only what can be part of metrification. David Beer’s observation that “what counts” is only what can be counted (Beer, 2016) is of extraordinary importance for what we have now defined as learning and – even more important – for what we will have as our learning futures.
Bean counters and accountants decide what is real, but behind them is a power grab with an unprecedented capacity to colonise and incorporate all aspects of our lives. This is why we have to pay special attention to David Beer when he notes that “understanding the intensification of measurement, the circulation of those measures and then how those circulations define what is seen to be possible, represents the most pressing challenge facing social theory and social research today” (Beer, 2016, p. 31). This logic of reducing reality to what is measured permeates the collective sense of being and defines the way of life, cultural choices, and social interactions. Jobs are decided by metrics, algorithms decide who is hired, and work is constantly measured to show efficiencies and failures. The number of “likes” on social media is taken as a valid indicator of someone’s value. The success of metrification is so complete that it leaves the illusion that there is no other functional way to organise our personal, social, and professional lives. Adopting a wider perspective opens the possibility of seeing not only that there are other approaches more suitable for education, but also reminds us how recent and baseless the quantitative view adopted to govern higher education and organise our societies is. The ahistorical dimension is essential for metrification and technological solutionism, to maintain a culture of fast use and disposable ideas where power is not substantially challenged. It is not a sustainable project, and major cracks in the current arrangements are already shaking the illusion that “the end of history” (Fukuyama, 1992) was finally achieved with the corporate monopoly of new technologies.
In the field of education, teaching, learning, and academic careers stand reduced to very narrow quantitative judgements, in a lethal combination of technologism (or technological solutionism) and the neoliberal dogmas of managerialism. Everything is subject to the measurement of indicators and rankings, which require permanent surveillance. This places individuals, academics and students alike, in a sadistic and amoral culture of “efficiency” and suspicion, which leads to the ascending trends of stress, anxiety, and suicides in academia. Kathleen Lynch observed in an article published in 2015 that this focus on rankings

also endorses a type of Orwellian surveillance of one’s everyday work that is paralleled with a reflexive surveillance of the self. One is always measuring oneself up or down, yet there is a deep alienation in constantly living the threat of the damage that a poor performance entails.
(Lynch, 2015, p. 11)

In the context of the current managerial, ideological, and intellectual crisis of universities, the promise of automation is very seductive. The automation of education, which is often associated with the call to cut costs, eliminate waste, and secure the increase of profits, brings the promise of personalisation or the optimisation of learning. These promises balance the further increase of surveillance and stress for students, as a minor cost to be paid in order to achieve new “efficiencies.” Higher education represents an increasingly large market for Google, Amazon, Meta, Apple, and Microsoft, the Big Five tech giants, and all edtech corporations. Edtech is a market where many companies have stock valuations in the tens of billions of dollars, and universities allocate immense budgets to create and maintain online platforms, automated surveillance to deter plagiarism, automated assessment tools, digital textbooks, etc. The interests associated with these markets should invite universities, especially as these institutions like to present themselves as champions of critical thinking, to employ academic scepticism and look without emotion at the data, evidence, and analysis related to edtech solutions and promises. Administrators and most academics have an abnormal eagerness to promote the automation of teaching and learning, which is a short and practical way to make themselves redundant to students and their institutions. I was probably unlucky to find in my decades of work in universities that academics making a career as experts in edtech could not properly manage to use their own smartphones or set up a wireless connection; functional illiterates in technology advocated for all the edtech we can buy. It is simple in this case to understand the enthusiasm for edtech fads, as ignorance gives certainty. However, it is clear that there are some real experts in edtech with the same fervour to promote these solutions, even when it is obvious that learning will be hindered and the interests of students and teachers will be affected.
One major official premise of edtech is that teachers and learning are just complemented, not replaced, by automation. It is a more palatable way to drive the posthuman agenda in education, or a post-teacher agenda in this case. For example, it is stated that teaching remains a task for academics and only administrative duties can be entirely replaced by automation. AI is presented as a perfect solution for this. It looks like an obvious and easy solution, already widely adopted across higher education by most universities. It is not. The reality is that universities, especially on learning and teaching, remain too complex for a straightforward application of AI solutions, and stand very far from the idea of automating teaching and learning.
We can take one example of the practical use of AI for administrative tasks, one that was well promoted across all media: the adoption of IBM Watson by Deakin University in Australia to provide administrative information to students. Watson is the famous supercomputer developed by IBM, made famous when it triumphed over human champions on the quiz show Jeopardy!. It is one of the most visible examples of AI solutions, and it benefited in the last decade from an ongoing international media campaign presenting Watson as the next solution for our biggest problems. From curing cancer and various diseases to the management of medical resources, IBM’s Watson was constantly presented as the panacea:

It is one of the most powerful tools our species has created. It helps doctors fight disease. It can predict global weather patterns. It improves education for children everywhere. And now, we unleash it . . . on your taxes.32

For example, an article published in international media in 2014 announced that “soon” IBM’s Watson would be “The Best Doctor In The World.”33 The constant over-hype and disappointing developments, far from stated goals, made Oren Etzioni, a professor emeritus of computer science and CEO of the Allen Institute for Artificial Intelligence, say that “IBM Watson is the Donald Trump of the AI industry – outlandish claims that aren’t backed by credible data. Everyone – journalists included – know[s] that the emperor has no clothes, but most are reluctant to say so” (Brown, 2017). In 2015, Deakin University released a major media announcement titled “IBM Watson helps Deakin drive the digital frontier,” where it stated that “Students at Deakin University ask IBM Watson 1,600 questions a week to learn the ins and outs of life on campus and studying in the cloud” and that this collaboration is making Deakin University push “the digital boundaries to make the student experience the best it can possibly be” (Deakin University, 2015). The promise was spectacular, and preparing this partnership came with unlimited enthusiasm: Deakin informed prospective and current students, academics, and the world that “Watson is revolutionary cognitive search technology that thinks like a human. Deakin is the first university in the world where student advice will be powered 24/7/365 by Watson” (Deakin University, 2014). The step to automate learning and allow a restless force that “thinks like a human” to deal with teaching and learning looked like an imminent possibility. It was the time when university administrators found the solution for all the hard work and complex arrangements required in higher education: an AI computer. The usual clichés of the literature can be found across the news of that time: Watson was re-imagining learning, the university, and teaching; information was “personalised” and “optimally delivered” to students.
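It is worth keeping in mind how modest the machinery behind a campus “advice” service can be. The sketch below is deliberately naive (the FAQ entries are invented, and this is emphatically not a claim about Watson’s actual, proprietary architecture); the point is only that answering thousands of administrative questions a week is compatible with simple keyword matching rather than anything that “thinks like a human”:

```python
# A deliberately naive "campus advice" bot: keyword overlap against a fixed FAQ.
# Invented entries for illustration; not IBM Watson's actual architecture.
faq = {
    "where is the library": "The library is in Building B, open 8am to 10pm.",
    "how do i enrol in a unit": "Enrol through the student portal before week 1.",
    "when are the exams": "The exam timetable is published in week 8.",
}

def answer(question: str) -> str:
    """Return the answer of the FAQ entry sharing the most words with the question."""
    words = set(question.lower().split())
    best = max(faq, key=lambda entry: len(words & set(entry.split())))
    if not words & set(best.split()):
        return "Sorry, I don't know - please contact student services."
    return faq[best]

print(answer("Where is the library located?"))  # -> the library answer
```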

The enthusiasm was kept alive for some years, but at one point, without media releases or even academic debates on it, Deakin University quietly stopped the magical 24/7/365 help provided by Watson. There is no study on the reasons for this decision taken by the Australian university, and no press release. It seems that it is actually more suitable, from a financial and organisational point of view, to have humans in charge of student administration and of solutions for students’ needs. Maybe a 24/7/365 flow of information about administrative arrangements is not the most pressing need for a university student.
The AI genie was returned to its bottle, but there is no significant discussion about what happened to lead to this divorce. We have no research, no press release, and no academic debate about a university that made so much noise about the adoption of AI and then discreetly and abruptly stopped its AI application. This is a story that should tell university administrators, teachers, students, and anyone interested in education and the evolution of our societies that the siren songs of the marketers of edtech need to be treated with healthy and objective scepticism. Teaching, learning, and higher education take a dangerous path when we simplify everything to fit the functioning of computing algorithms. Here we reach again the important discussion about context and communication, information, and technology and – most importantly – we have to find what it means to be human. When systems start crumbling and crises become impossible to ignore, hidden behind Potemkin screens, we have to understand what it means to be educated; not just well informed, or employable, or intelligent, but truly well-educated.

Notes
1. Collini, S. (2012). What are universities for? Penguin.
2. Especially in higher education systems in Canada, USA, UK, Australia and New Zealand, but not necessarily restricted to these countries.
3. TUC. (2020). Technology managing people. The worker experience. Trades Union Congress. www.tuc.org.uk/sites/default/files/2020-11/Technology_Managing_People_Report_2020_AW_Optimised.pdf
4. Bel, G. (2010). Against the mainstream: Nazi privatization in 1930s Germany. The Economic History Review, 63(1), 34–55. https://doi.org/10.1111/j.1468-0289.2009.00473.x
5. Ferguson, T., & Voth, H.-J. (2008). Betting on Hitler – The value of political connections in Nazi Germany. The Quarterly Journal of Economics, 123(1), 101–137. https://doi.org/10.1162/qjec.2008.123.1.101
6. Drucker, P. F. (1969). The age of discontinuity; Guidelines to our changing society. Harper &
Row.
7. Piketty, T. (2014). Capital in the twenty-first century. The Belknap Press of Harvard University Press.
8. Friedman, M. (2007). The social responsibility of business is to increase its profits. In W. C. Zimmerli, M. Holzinger, & K. Richter (Eds.), Corporate ethics and corporate governance (pp. 173–178). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-540-70818-6_14
9. Judt, T. (2005). Postwar. A history of Europe since 1945. The Penguin Press.
10. Moore, M. (1999, July 1). The WTO: The challenge ahead. WTO News: Speeches, Address to The New Zealand Institute of International Affairs. www.wto.org/english/news_e/spmm_e/spmm01_e.htm
11. Harvey, D. (2005). A brief history of neoliberalism. Oxford University Press.
12. Stiglitz, J. (2017, December 6). Globalisation: Time to look at historic mistakes
to plot the future. The Guardian. www.theguardian.com/business/2017/dec/05/
globalisation-time-look-at-past-plot-the-future-joseph-stiglitz
13. Smyth, J. (2017). The toxic university: Zombie leadership, academic rock stars and neoliberal
ideology. Palgrave Macmillan.
14. Gewin, V. (2021). How to blow the whistle on an academic bully. Nature, 593, 299–
301. https://doi.org/10.1038/d41586-021-01252-z
15. Hofstadter, R. (1963). Anti-intellectualism in American life. Knopf.
16. Lim, E. T. (2008). The anti-intellectual presidency. The decline of presidential rhetoric from
George Washington to George W. Bush. Oxford University Press.
17. This is a quote from a TV interview with Faisal Islam of Sky News, on June 3, 2016,
and the Conservative politician and leader of the campaign for Brexit, Michael Gove.
The full sentence stated by Michael Gove is: “I think that the people of this country have
had enough of experts from organisations with acronyms saying that they know what is best and
getting it consistently wrong, because these people are the same ones who got consistently wrong”
(Gove is interrupted by the interviewer). Retrieved 6 February 2022, from https://
youtu.be/GGgiGtJk7MA
18. Romney, M. (2012, February 10). Mitt Romney – Remarks to the conservative political
action conference. Online by Gerhard Peters and John T. Woolley, The American Presi-
dency Project www.presidency.ucsb.edu/node/300160
19. Borter, G., Ax, J., & Tanfani, J. (2022, February 15). Schools under siege. A Reuters Special Report. www.reuters.com/investigates/special-report/usa-education-threats/
20. Gee, D. (2008). [Review of Doubt is their product: How industry’s assault on science threatens your health, by D. Michaels]. Journal of Public Health Policy, 29(4), 474–476. www.jstor.org/stable/40207213
21. Bok, D. (2003). Universities in the marketplace: The commercialization of higher education.
Princeton University Press.
22. Guthrie, S., Lichten, C., van Belle, J., Ball, S., Knack, A., & Hoffman, J. (2017). Understanding mental health in the research environment. A Rapid Evidence Assessment. Rand Europe.
23. UCU. (2016). Precarious work in higher education. Insecure contracts and how they have changed over time. University and College Union. www.ucu.org.uk/media/10899/Precarious-work-in-higher-education-May-20/pdf/ucu_he-precarity-report_may20.pdf
24. Andrews, S., Bare, L., Bentley, P., Goedegebuure, L., Pugsley, C., & Rance, B. (2016). Contingent academic employment in Australian universities. LH Martin Institute and Australian Higher Education Industrial Association.
25. Lamb, H. (2017, January 12). Saul Perlmutter: “Scientific discoveries aren’t made to order.” Times Higher Education. www.timeshighereducation.com/features/saul-perlmutter-scientific-discoveries-arent-made-order
26. Vostal, F. (2016). Introduction: The pulse of modern Academia. In Accelerating aca-
demia: The changing structure of academic time (pp. 1–10). Palgrave Macmillan. https://
doi.org/10.1057/9781137473608_1
27. Hanson, V. D. (2021, April 29). American universities have lost their prestige. National Review. www.nationalreview.com/2021/04/american-universities-have-lost-their-prestige/
28. Kalfa, S., Wilkinson, A., & Gollan, P. J. (2018). The academic game: Compliance and
resistance in universities. Work, Employment and Society, 32(2), 274–291.
29. Ball, S. J. (2003). The teacher’s soul and the terrors of performativity. Journal of Educa-
tion Policy, 18(2), 215–228. https://doi.org/10.1080/0268093022000043065
30. Beer, D. (2016). Metric power. Palgrave Macmillan.
31. Lynch, K. (2015). Control by numbers: New managerialism and ranking in higher education. Critical Studies in Education, 56(2), 190–207. https://doi.org/10.1080/17508487.2014.949811
32. A Super Bowl 2017 commercial for IBM’s Watson AI.
33. Friedman, L. F. (2014, April 23). IBM’s Watson supercomputer may soon be the best doctor in the world. Business Insider Australia. www.businessinsider.com.au/ibms-watson-may-soon-be-the-best-doctor-in-the-world-2014–4
34. Brown, J. (2017, October 8). Why everyone is hating on IBM Watson – Including
the people who helped make it. Gizmodo. https://gizmodo.com/why-everyone-
is-hating-on-watson-including-the-people-w-1797510888
35. Deakin University. (2015, November 25). IBM Watson helps Deakin drive the digital
frontier. Media Release. www.deakin.edu.au/about-deakin/news-and-media-releases/
articles/ibm-watson-helps-deakin-drive-the-digital-frontier
36. Deakin University. (2014). IBM Watson now powering Deakin. A new partnership that aims
to exceed students’ needs. http://archive.li/kEnXm.
5
SURVEILLANCE, CONTROL, AND POWER – THE AI CHALLENGE

In 2018 Goldman Sachs analysts released a report on the advancements of gene therapies titled “The Genome Revolution.” It states that genome medicine will reach a $5 trillion “addressable market,” and that genome therapies will be able to help treat tumours and blindness, cancer and rare diseases. Goldman Sachs poses this question in the research report: “Is curing patients a sustainable business model?” An article on this report, published by CNBC, quotes Salveen Richter, an analyst who wrote in a note to Goldman Sachs clients: “While this proposition carries tremendous value for patients and society, it could represent a challenge for genome medicine developers looking for sustained cash flow” (Kim, 2018).^1 The question for the most influential centres of power is not whether a new cure could help millions and save countless human beings from suffering, but how that cure brings a financial profit, a “sustained cash flow.” More importantly, this ideology invites us to accept that it is morally acceptable to see people dying if the cure is not profitable. We have seen how pervasive this argument was in the first year of the COVID-19 pandemic, when governments and various public figures justified the death of people with pre-existing conditions, or simply old, if protecting them stood against the good functioning of markets and the economy. This logic is not limited to medicine, and we find that it has colonised all spaces of our societies, especially education.
The American model of capitalism was strongly associated in post-war America not only with greed, as a guiding principle for action, but also with surveillance. In a fascinating history of insurance in America after the Second World War, Caley Horan notes that

Public service and educational campaigns had other uses too. Many public service and educational initiatives included a component of surveillance, generating data that insurance companies used to refine the risk-rating and classification structures used to price and determine availability of insurance coverage. Some of these surveillance efforts collected data without consent.
(Horan, 2021, p. 44)^2

Insurance Era is a book that also reveals that corporate efforts to gather and control private data are not a recent phenomenon, a new idea of corporate giants to create what Shoshana Zuboff called “surveillance capitalism.” Horan’s book shows how insurance companies used their power for surveillance at the beginning of the 20th century, mainly using data to condition and control who could buy a car or a house, who could start a business or take a loan. It was a maintained illusion that these decisions were determined simply by the access to buy insurance or the capacity to pay more for it; data masters had the ultimate control when the Internet was not even invented. The lesson of the advantages offered by surveillance was kept and built on. New tech and edtech corporations did not invent surveillance and the collection of data without consent; they simply built on a solid tradition of American capitalism of doing so. It was clear from the first steps of new tech that those who collect data would secure and maintain power.
On the other hand, we have another important root for educational technologies: AI, like the Internet, was born as a military project. It maintains in its structural design the tendencies of surveillance and control, manipulation and strategic advantage – and power. To ignore these roots is like trying to understand a newly discovered species without a basic understanding of biology. We are all determined by our roots, and AI is no exception. It is important to note that the educational project proposed by edtech and the AI revolution inconspicuously leads higher education towards a model of education close to that of the military schools of the 19th century. It is a controlling, authoritarian model that removes students’ agency, based on surveillance and control, on pedagogical myths, and on a common disregard for scientific findings that impact on the official narrative. No one presented military schools in the 19th century as places where identity is broken and minds are forcefully led to mediocrity; they were presented as places where heroes, new leaders, and brilliant tacticians are created. Big data and learning analytics are both associated in higher education with the practice of collecting significant amounts of private data without student consent, using extremely intrusive software that is euphemistically labelled with noble words suggesting good intentions, such as “academic integrity,” again for surveillance and control. Sensors, security measures, and Internet use surveillance are now part of a complex system that collects an immense amount of data; research on this topic constantly reveals that most students have no idea how and what information is collected and how this can be used and misused.
It is not a coincidence that the careers of scientists with significant contributions to the development of AI and edtech intersect with projects for the military. We can take the 1950s case of what was called at that time “the push-button schools,” a proposal for a future educational system; this looks today like a blueprint for education in our contemporary schools and universities. Simon Ramo presented his manifesto for the “push-button schools” in an article titled “A New Technique of Education,” published in Engineering and Science Monthly in October 1957. Dr. Simon Ramo was the chief scientist of the Intercontinental Ballistic Missile Programme from 1954 until 1958, and a faculty member at the California Institute of Technology. Ramo’s contributions to the US military are so significant that he is often referred to as “the father of the intercontinental ballistic missile” (ICBM), developed by the Pentagon. He noted that in the school of the future all students should be registered, with all relevant details recorded, and only then can they engage in the course of study determined for them. At this point, Ramo notes, “the student receives a specially stamped small plate about the size of a ‘Charga-Plate,’” which identifies both him and his programme; an alternative system would allow the use of “the fingerprint system” to access all data relevant for a student:

When this plate is introduced at any time into an appropriate large data and analysis machine near the principal’s office, and if the right levers are pulled by its operator, the entire record and progress of this student will immediately be made available.

Ramo also details that students should be “monitored,” which is just a more palatable word for ubiquitous surveillance – and

after completing his registration, the student introduces his plate into one
machine on the way out, which quickly prints out some tailored informa-
tion so that he knows where he should go at various times of the day and
anything else that is expected of him.
(p. 19)

This is a remarkable description, made in the late 1950s, of what is widely adopted in education in the new century and, at the same time, a reminder that the latest edtech is much older than its vendors like to present.
In the push-button schools imagined in 1957 by Ramo,

[A] typical school day will consist of a number of sessions, some of which
are spent, as now, in rooms with other students and a teacher and some of
which are spent with a machine. Sometimes a human operator is present
with the machine and sometimes not.

The teacher is not always required, and it is noted that the development of new technologies can find solutions to entirely replace the teacher. Simon Ramo even describes the birth of a “new industry,” with an “industrial team” that will work with the teaching staff, which will

include education analysts, probably specializing in the various subjects. These individuals would go through the records of the individual students. They would be constantly seeking to discover the special problems that need special attention by the direct contact of teacher and pupil.
(Ramo, 1957, p. 21)^3

Ramo presents not only a techno-utopian vision for education, where machines will be humanised in the near future and students will have more free time and study less, but also a surprisingly accurate picture of education as we know it today. We now have the new industry, where experts in machines with no interest in or discernible knowledge of ideas in education, pedagogical possibilities, or educational theories have important roles in curriculum development and the “delivery” of courses; we also have what Ramo named the new job of “teaching engineer” – only now we call them “educational technologists,” or we find this group under other techno-capitalist labels.^4 We have a huge industry in the function of learning analytics, much as Ramo imagined it, where machines are used for student surveillance and collected data leads to pathways of study or to “discover the special problems that need special attention.”
In a few words, we now have “push-button” education, with push-button classes, where machines are used as teachers, data aggregators, educational solutions, and technological mentors. Technicians guide and tinker with the programmes of these machines when needed, and students have a “personalised” education. This new project, where machines are teachers and teachers are technicians, was presented as necessary for technological and educational advancement and, most importantly, to secure the national interests of the United States. This aim stands in perfect alignment with the later developments and intertwined evolution of military projects in the United States. A report for the military published in the mid-1980s succinctly explains how military applications sparked an “electronic revolution” in education:

The military services continue to support important work on basic research on cognition, artificial intelligence, speech recognition, interactive learning systems, and converging technologies. The military has been a major, and occasionally the major, player in advancing the state-of-the-art. Computers would probably have found their way into classrooms sooner or later. But without work on PLATO, the IBM System 1500, computer-based equipment simulation, intelligent instructional systems, videodisc applications, and research on cognition, it is unlikely that the electronic revolution in education would have progressed as far and as fast as it has.
(Fletcher & Rockway, 1986, pp. 206–207)^5

Since the 1950s we have heard the same tempting promise, common in education in its current form: that we will commission teaching to a machine and, once this is secured, we will have better classes, education, and learning.
The promise of a “system that makes possible more education for more people with fewer skilled teachers being wasted in the more routine tasks that a machine should do for them” (Ramo, 1957, p. 22) has been reframed for commercial reasons or restated in new forms – as MOOCs, as learning analytics, and so on – for almost a century.
Sidney L. Pressey designed teaching machines in the mid-1920s. His inventions were presented for the first time at a conference of the American Psychological Association (APA) in 1924. His proposal was slightly improved the next year, and his primitive forms of “teaching machines” were able to administer multiple choice questions (MCQs). Similarly to the current uses of MCQs, these first machines for assessment were presented as “teaching” machines, which is obviously based on a fundamental confusion about what teaching is and what the role of the teacher is in stimulating, guiding, and facilitating learning. These inventions also compromised, maybe forever, the idea of assessment in schools and universities. The promise of “teaching machines” has since been pushed to an immediate future, and then again to the next few years. Currently, we are using the same solutions and principles in a new context, with much more advanced technologies, without evidence that push-button classes and “teaching machines” actually enhance learning, improve teaching, and open new ways to achieve the aims of higher education. This statement will automatically irritate the zealous followers of edtech, most of them making a career of using technology to mask ignorance about education. However, we can back this statement with an extensive study provided, surprisingly, by one of the most aggressive and influential actors promoting the idea that “teaching machines” will improve learning and teaching: the OECD. The OECD compiled and published this extensive report in 2015, with a surprising reflection on the myth at the core of the educational reforms promoted across the world in the last decades. It dispels the opinion that technology is in itself a solution to our educational problems, and provides data and evidence that paint a much more nuanced and different reality. The evidence shows that the myth of techno-solutionism in education is not based on scientific studies and data, and reveals that it was a wrong and misguided solution for education. Specifically, the report concludes that:

• “Resources invested in ICT for education are not linked to improved student achievement in reading, mathematics or science.
• In countries where it is less common for students to use the Internet at school for schoolwork, students’ performance in reading improved more rapidly than in countries where such use is more common, on average.
• Overall, the relationship between computer use at school and performance is graphically illustrated by a hill shape, which suggests that limited use of computers at school may be better than no use at all, but levels of computer use above the current OECD average are associated with significantly poorer results” (OECD, 2015, p. 146).^6

Decision makers in education are increasingly intolerant of the heretical idea that edtech is not always positively associated with results in STEM and literacy, and that some schools with no computers perform better than some fully immersed in edtech solutions. A simple overview of public funds allocated to computers and edtech, along with the summaries of educational reform priorities published yearly by the OECD, the World Bank, and national governments, immediately reveals the scale of this commitment. To say that educational results are better in places that do not have Internet access in classrooms stands directly against the policies and programmes promoted by the OECD in countries across the world. We have to remember again that the Organisation for Economic Co-operation and Development was established to “stimulate world trade,” and it has the proven power to change the agenda of governments for education and research. As we quickly noted in a previous paragraph, it is remarkable to see that economists single-handedly design policies to change education at all levels, as a result of the post-1999 WTO decisions for the world: it is a mix of the ideological trends of neoliberalism, solutions aligned with corporate interests, and narratives of techno-futurism.
In an extensive and well-documented analysis, Dr. Regula Bürgi presents the main factors that made an economic organisation take on the role of an authoritative voice in education and forcefully shape educational policies across the world. Bürgi notes that

education became a part of OECD agenda against the backdrop of a Cold War “culture of control” and its knee-jerk tendency towards “educationalization.” The United States was key in catalyzing this process, which was highly geared towards myths and built more on political than – as it was argued – scientific foundations. The underlying epistemology had its origins, among other things, in military research on war, and it conceptualized the world as a governable and controllable laboratory, with a tendency to undermine democratic processes of deliberation.
(Bürgi, 2016, p. 159)^7

The American advisors of the armed forces during the Second World War reinvented their work as think tanks and “technocratic-educationalizing” networks that used strategies developed for the war in the new civilian context. Bürgi presents the unseen story of the influences that shaped the current reality of education in schools and universities. Her book also presents the constant influence of the US Department of Defence on educational systems and, later, on the inner workings of schools and universities, on administration and funding, and on curricula and teaching. Starting in the late 1950s, the OECD used its power to change the way we understand teaching and learning and the aims of education: “education was conceptualised as an education ‘system’ that – much like laboratory experiments – must be governed or ‘developed’ in light of indicators” (Bürgi, 2016, p. 166). The OECD
developed and promoted what Bürgi identifies as an “economic-technocratic” approach with an absolutist character, which remains intolerant of alternative perspectives or even of data and clear evidence that contradict its ideological position. The OECD functions as the ultimate institutional power, invoking advancements in edtech as a form of authority that puzzles, intimidates, and breaks resistance to the techno-future’s vision, its ideology, and its adoption. The impact of the OECD’s unreserved support of edtech solutions for schools meant that any attempt to question or investigate the utopian claims associated with the vast promises was stifled with the backing of these powerful organisations. The adoption was quasi-universal, and schools are still waiting for the miracle to happen. However, we see now that, decades later, the OECD is looking at all the data and telling the world that they got it all wrong. In the logic of those who hold the power, just months later the OECD was promoting edtech solutionism in other manifesto-reports and, more recently, reached that amazing conclusion cited before, which states that technology is the perfect solution for education but teachers remain a problem with their imperfect nature. There is no apology or acknowledgement that their previous reports pushed conclusions with no backing in valid data or research, resting only on parallel agendas unrelated to education or learning. At this moment, the OECD serves as an arbiter within markets where education is sold and advertised, bought and instrumentalised. International programmes, such as PISA^8, TIMSS, or PIRLS^9, change and dictate policies adopted by various governments across the world. The results of these comparative international assessments are followed with maximum interest by national centres of governance in education, and the resulting rankings often cause intense national emotions (such as pride or outrage over the deciding “scores”). Considering the immense power and authority of the OECD, we can imagine that any suggestion for improving results that comes from the institution designing and administering these tests is quickly adopted and changes national policies.
The explicit directive for the widespread adoption of computers and edtech in education is one of the most visible suggestions coming from the OECD to national governments and educators. Nuances, critical analysis, or doubts are not included in the series of highly influential reports published by this international organisation, which was born and shaped by the expertise of American military consultants. The neoliberal ideology was openly promoted for education as a promised key to “efficiency” and progress. Key concepts, learning theories, and data that are extremely important for education were marginalised, eliminated, or ignored and replaced with managerial jargon and magical thinking on the self-regulating power of the market. Consequently, words and concepts – including learning and education – were emptied of sense, semiotically restructured, with their significance and logic reconfigured for a clear alignment with neoliberal approaches. To
take just one example, we can look at one OECD document indicating how education should be organised and managed: “Liberalisation and privatisation of education, leading to freedom of thought and action and responsiveness to the emerging environment, are seen as a precondition for entrepreneurship and economic development” (Potter, 2008).^10 Research and common sense now prove that liberalisation and privatisation of education had the exact opposite effect on education, restricting freedom of thought under managerialism, enforcing strict hierarchical structures of control and punishment, and leaving universities vulnerable to groupthink, self-censorship, and mediocrity.
The main promise of edtech revolves around the idea of the personalisation of learning. Personalised instruction is an umbrella term that serves many agendas – financial, ideological, and political interests, and corporate groups. It remains too vague to be clearly measured, but personalisation is still the key term associated with schooling as it was imagined and designed in the United States at the beginning of the 20th century: efficient, standardised, and aligned with the student’s aptitudes and potential – or intelligence. Most commonly, it is used to cover predictive analytics based on data collected on students’ learning, social background, activities, and results in tests. AI brings the promise of “super-charging” the personalisation of education, using data and complex algorithms to predict the most suitable content, teaching method, educational intervention, and pace of instruction for every student. This requires, of course, the vast collection of data about every student: the results of every individual student, their social, economic, and cultural background, their level of interest in education as expressed in time spent on content, access to library resources, online materials and platforms used for teaching, and so on. The implicit promise of AI is that the effort of personalising education, which is solidly rooted in the ideas of American scholars interested in education at the beginning of the 20th century, can now rest on technological advancements that allow “objective” measurements, where the machine uses all collected data to aggregate and analyse the best pathways for every student. This idea works very well if we look at education as a mechanical, administrative process, where properly surveilled and measured students gobble information that is directly aligned with their potential and interests.
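To make the mechanics of this visible, here is a minimal sketch, in Python, of the kind of scoring logic that sits behind many “personalised pathway” engines. Every feature name, weight, and threshold below is invented for illustration; this is not the code of any actual vendor or university system:

# A deliberately simple caricature of a "personalised pathway" scorer.
# All feature names, weights, and thresholds below are hypothetical.

def engagement_score(minutes_on_content: float,
                     library_visits: int,
                     quiz_average: float) -> float:
    """Collapse a student's recorded behaviour into one number (0-100)."""
    return (0.4 * min(minutes_on_content / 600, 1.0) * 100
            + 0.2 * min(library_visits / 10, 1.0) * 100
            + 0.4 * quiz_average)

def recommend_pathway(score: float) -> str:
    """Map the score onto a pathway; the cut-offs are arbitrary choices."""
    if score >= 70:
        return "extension material"
    if score >= 40:
        return "standard curriculum"
    return "remedial module"  # a label the student may never see or dispute

# A disengaged-but-brilliant student is routed downwards by design:
print(recommend_pathway(engagement_score(minutes_on_content=90,
                                         library_visits=1,
                                         quiz_average=55)))

Every number in this sketch is a design decision made by someone other than the student, and the “objective” pathway it produces is only as neutral as those arbitrary weights and cut-offs.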
This view also takes a religious approach to data; in this view, where we turn to algorithms to solve all our problems, data is placed in the position used by religious extremes, as a god that should not be questioned, explored, or subjected to critique. The new inquisition of this tumultuous part of the 21st century is run by engineering, by the initiated members of the tech-priesthood, specifically those who write complex software for machine-learning and AI systems and who claim to fully understand AI software systems. Questioning AI is punishable with derision, marginalisation, and professional “execution” because, as Meghan O’Gieblyn observes in her book “God, Human, Animal, Machine,” we live in a time when “all the eternal questions have become engineering problems.” God is now a matter of engineering in the church of Silicon Valley tech. Unfortunately, it is nonsensical to look at data as a concept placed in an ethereal context where subjectivity, error, bias, prejudices, partiality, and limitations were all left behind,
as old and inferior ways of understanding reality. As we briefly noted before, data involves not only a certain selection that makes it limited and skewed by the intentions of those who designed the methods of collection, but is inescapably linked to the past, to features, actions, information, and events that already happened. All these elements captured in “data” can be changed by one significant event, which can make it all irrelevant in the new context. In the case of AI, we can look at the way personalisation works in education and think about how many troubled students became great innovators, extraordinary minds that shaped fields of knowledge, and how easy it would be to block all of them using what was the evident truth of their times. Many brilliant students went through a period when they were disengaged and uninterested in their studies, and personalisation at that time would actually have blocked their positive evolution. Personalisation is also historical: it is recent in our history that only men were considered able to be scientists, and access to education was restricted for the Black population in America. It is a great error to ignore the fact that what is considered reliable and accurate data is constantly changing, in line with specific places and times.
If we imagine schools applying “learning analytics” in the past, to all students, the most exemplary cases of extraordinary artists, writers, or scientists would disappear into vocationalised pathways, filled with lower-level information more aligned with their interests and potential as they were seen at a particular time and place. Learning analytics, the systematic algorithmic analysis of data collected on students, is an extraordinary instrument of surveillance that is integrated into LMS, which are also surveillance traps for students.
One important lesson of the last two decades is that the big technological companies such as Google, Facebook/Meta, or Netflix use personalisation to reward and entrench intellectual laziness, misinformation, and the confirmation of biases, in a spinning whirlpool of superficial, irrelevant, and low-quality information. The “recommendation algorithms” (or curation algorithms) stand at the centre of public scandals of unethical use of data for political manipulation, such as the Cambridge Analytica scandal. Data analytics and curated materials are self-limiting and build a chaotic digital universe where individuals’ thinking, analysis, and clear judgement are constantly hindered or suppressed. In a book devoted to new methods of manipulation and censorship, Margaret E. Roberts details one particular method used to suppress people’s ability to find important information: flooding. This approach was reportedly used in China, when censors realised that not all information can be suppressed when citizens use social media tools, and some of this information can be dangerous for the stability of the regime. The solution is simple: rather than suppressing all inconvenient information, it is much more efficient to “flood” the digital universe with a sea of stories for fast consumption. Flooding is “the coordinated production of information by an authority with the intent of competing with or distracting from information the authority would rather consumers not access” (Roberts, 2018, p. 80).^11 In other words, an avalanche of junk makes it impossible to focus, to learn something of substance, and to think about implications. For example, we can imagine that news
about an important award granted to a dissident and activist such as Ai Weiwei can be “flooded” with stories about cats and beauty contests for cats. This is how an uncomfortable story that is relevant for a civil society becomes invisible in the sea of millions of postings about a new and ephemeral fad created by the censorship. In fact, even in liberal democratic systems we are flooded every day with a sea of largely irrelevant information, with shallow content limited to what the algorithms decide we like to see. This is an existence where surprise is suppressed, the unexpected is cancelled, and new and original perspectives are closed off in a well-curated universe where we see only what is already part of our preferences. Imaginative experiments that can feed and nurture our creativity are stopped at the border set by the algorithm. In this sense, AI-personalised education is a project of boredom, self-confirming information, and limited learning. In fact, meta-research on personalised education constantly warns us that its effectiveness is not confirmed by research data. In a report published in the United States by the National Education Policy Center, Noel Enyedy examines the claim that enthusiasm for edtech, AI advancements, and the use of computers in the classroom provides a solid basis for personalised instruction, concluding that

despite the advances in both hardware and software, recent studies show little evidence for the effectiveness of this form of Personalized Instruction. This is due in large part to the incredible diversity of systems that are lumped together under the label of Personalized Instruction. . . . In fact, there is so much variability in features and models for implementation that it is impossible to make reasonable claims about the efficacy of Personalized Instruction as a whole.
(Enyedy, 2014, pp. i–5)^12

Serious research is failing to confirm grand statements on personalised education, but a constant flow of stories, general statements, and assumptions fills mass media and targets decision makers. There is a part of the story of personalised education where AI is creating bubbles of self-confirmation, intellectual mediocrity, and laziness; this is the part where AI is not creating options for students to learn “efficiently,” but is stifling the eros of learning, the profound and intrinsic love of learning, discovering, and imagining new possibilities.
Personalised education is linked to an idealised and ahistorical view of education, most often entirely disconnected from the realities of social class and of economic, social, and cultural circumstances. It is presented as a magical solution that can be used by teachers to make education relevant and engaging for all students, with appealing options and bespoke pathways for all. All teachers passionate about their work resonate with the idea of building on the potential of every student, maximising strengths and using individual interests to open new areas of knowledge. A common aspect of utopian projects is the impulse to suppress moral values and principles to reach the ideal place; and this is how all of them fail and end in terrible tragedies. In this case, the utopian project of AI-personalised instruction asks us to suppress moral and ethical considerations about the systematic use of surveillance, the present and future vulnerabilisation of students, and the standardisation of an education defined by mediocrity. Bizarrely,
the most recent OECD reports on AI and learning include a model of the automation of personalised learning based on the evolution and promises of self-driving cars (OECD, 2021b, p. 60),^13 finding that AI in education will adjust individual tasks based on students’ knowledge and will personalise the order in which students “work through curriculum.” In the same publication we find a dystopian version of instruction and schooling, with extraordinary levels of surveillance and complete indifference to students’ privacy. As part of the “optimisation” of education through the advancement of edtech and AI, the authors recommend the use of “behavioural data” to collect information on “students’ behaviour during learning,” noting that “one important source of behavioural data are log files. This data lists the sequences of learner-technology interactions at the millisecond level leaving a trail of activities with learning technologies.” Other sources of behavioural data are cursor movements and keyboard entries; the more one moves, the more engaged one looks in the final reports. Eye movements are also captured, to indicate what students look at during learning, which is used to detect the allocation of attention: “Wearable eye trackers also assess students’ interaction with physical objects and social interaction during learning. In addition, specific eye-tracking data such as pupil dilation and blinking behaviour have been correlated to cognitive load and affective states” (OECD, 2021a, p. 60).
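It is worth pausing on how little such “behavioural data” can actually say. The following sketch is a hypothetical illustration of millisecond-level interaction logging and of the naive engagement heuristic described above; the event schema and the heuristic are assumptions made for this example, not taken from the OECD report or from any specific LMS:

import time
from dataclasses import dataclass, field

@dataclass
class InteractionEvent:
    """One record in a hypothetical learner-technology log file."""
    student_id: str
    event_type: str      # e.g. "cursor_move", "keypress", "page_view"
    timestamp_ms: int    # millisecond-level, as the report describes

@dataclass
class BehaviourLog:
    events: list = field(default_factory=list)

    def record(self, student_id: str, event_type: str) -> None:
        self.events.append(InteractionEvent(
            student_id, event_type, int(time.time() * 1000)))

    def engagement_proxy(self, student_id: str) -> int:
        # The naive heuristic criticised in the text: more movement
        # and more keystrokes are simply counted as more "engagement".
        return sum(1 for e in self.events
                   if e.student_id == student_id
                   and e.event_type in ("cursor_move", "keypress"))

log = BehaviourLog()
log.record("s-001", "cursor_move")
log.record("s-001", "keypress")
log.record("s-002", "page_view")  # a still, deeply focused reader scores 0
print(log.engagement_proxy("s-001"), log.engagement_proxy("s-002"))

The student who sits still and thinks scores zero, while the student who fidgets with the mouse scores highest; nothing in the trail distinguishes deep concentration from absence.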
The number of assumptions about learning in this case is staggering, but maybe the most serious implication of these suggestions is that they reveal a way of understanding the student as a fixed and singular generic being, a trainable creature that reacts, learns, and delivers only measurable and standardised outcomes. It is a post-human student, and education is designed in a post-human paradigm, pretending to use a hybrid model where humans use AI to enhance their efficiency. The most disappointing and troubling part is the impoverishment of education, and of what it means to be an educated person. The relentless surveillance of students starts from the same type of assumptions that lead to the idea that inmates under permanent surveillance behave better. In general, it does not and cannot escape students that they are watched, and not necessarily seen. How can
we imagine an education where students are always under surveillance, and every act, movement, or intention – including their eye movements – is recorded, analysed, and reported in forms that will alter their future learning and, ultimately, their life? What are the implications of continuous and ubiquitous surveillance for students’ mental health, motivation for learning, or the way schooling is perceived? What if the AI report is wrong? What if a student stares at a spot not because they are cheating, but just because that is how thinking and concentration happen for that individual?
AI and learning analytics require us to reconsider what type of data is collected, how relevant it is for what it supposedly reflects, and what the quality of that data is. As mentioned before, any AI-powered system is only as good as the data provided to the algorithm. If we take the example of business, obsessively indicated (directly or in subtle forms) by various consultancy firms or the OECD as the model that should be followed by higher education, the quality of data used for reporting and analytics is not encouraging. In 2017, Harvard Business Review reported that only 3% (three!) of companies met basic data quality standards. The report found that:

On average, 47% of newly-created data records have at least one critical (e.g., work-impacting) error. . . . Only 3% of the data quality (DQ) scores in our study can be rated “acceptable” using the loosest-possible standard . . . The variation in DQ scores is enormous . . . no sector, government agency, or department is immune to the ravages of extremely poor data quality.
(Nagle et al., 2017)^14

It is just delusional to think that universities are in a much better situation, and the problem of reproducibility in academic research is just one factor that supports this doubt. In effect, we can say that data-driven analytics and predictive solutions require at least serious interrogation, if not a complete refusal to reduce an important part of higher education to elements that are so susceptible to error.
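The measurement behind these figures is simple enough to sketch. The records and error-checking rules below are invented for illustration; the underlying idea of scoring a sample of records by counting those free of obvious errors follows the method described by Nagle et al.:

# Sketch of a simple data-quality (DQ) score: count how many records
# in a sample are free of obvious errors. Records and rules are hypothetical.

records = [
    {"student_id": "s-001", "email": "a@example.edu", "hours": 12},
    {"student_id": "",      "email": "b@example.edu", "hours": 7},   # missing ID
    {"student_id": "s-003", "email": "not-an-email",  "hours": 5},   # bad email
    {"student_id": "s-004", "email": "d@example.edu", "hours": -3},  # impossible value
]

def is_error_free(record: dict) -> bool:
    return (bool(record["student_id"])
            and "@" in record["email"]
            and record["hours"] >= 0)

clean = sum(is_error_free(r) for r in records)
dq_score = 100 * clean / len(records)
print(f"DQ score: {dq_score:.0f}/100")  # only 1 of 4 records is clean here

Any learning analytics system built on top of records like these silently inherits every one of those errors.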
Data can be extremely deceiving even in the most professional reports. For example, we have the case of the COVID-19 pandemic. In early 2020, when the number of infections was rising and people became worried about the future, the US Administration displayed complete confidence. On 26 February 2020, in the White House briefing room, Trump made the clear point that everything was under control: “We’re very, very ready for this,” Trump said, “for anything.” He also said that in a report co-produced by the Johns Hopkins Center for Health Security, which ranked 195 countries on their readiness to confront a pandemic, “The United States is rated number one most prepared.” Data and evidence aggregated in the Global Health Security (GHS) Index placed the United States as the country best prepared to deal with a pandemic. The reality of the following months proved that the United States was faring much worse than most countries affected by COVID-19. The reality was lost somewhere between data reports and narrow indicators; other factors were extremely relevant even though they weren’t identified and measured (e.g. the level of trust in science and expertise, trust in the government, and others). If we take an example from finance and markets, the new gods of the contemporary world, we can see easily
how data and predictive analytics can go wrong. In 2019, Argentina suddenly recorded a market crash, in an event reported by Bloomberg as almost completely implausible: “there was a 99.994% probability that an event like Monday’s sell-off in Argentina wouldn’t happen” (Sivabalan, 2019).^15 The chance of it happening was 0.006%, but it happened. AI was not helping human intelligence, and many people lost a lot of money. The lesson for education should be that data can be partial, not including some variables that can be essential for a trend or a report; the analytics report may be wrong because the indicators used to collect data are biased, partial, or inconsistent. The possibilities of collecting data through surveillance are skewed towards aspects that are in fact irrelevant to the student’s interest and motivation for learning.
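That “99.994%” illustrates the trap. Such confident figures are typically produced by assuming that market moves follow a well-behaved distribution and then reading off a tail probability; the sketch below is an illustration of that reasoning under an assumed normal model, not a reconstruction of Bloomberg’s actual calculation:

# How a "99.994% it won't happen" figure can be manufactured: assume
# returns are normally distributed, then read the tail probability.
# The distribution choice and parameters are illustrative assumptions.
from statistics import NormalDist

daily_move = NormalDist(mu=0.0, sigma=1.0)  # moves measured in sigmas
observed = -3.85                            # roughly a "once-in-forever" drop

p_at_least_this_bad = daily_move.cdf(observed)
print(f"Model says P(move <= {observed} sigma) = {p_at_least_this_bad:.5%}")
# ~0.006% under the normal model, yet fat-tailed markets produce such
# moves far more often than the model admits, so the 99.994% is fiction.

The arithmetic is impeccable; it is the assumption underneath it that collapses, and no amount of computing power repairs a wrong model.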
Surveillance is a constant reminder of power structures, but it is not conducive to mutual trust, intrinsic motivation for learning, or even a positive collaborative relationship of students with their teachers and administrators. Probably the most toxic part of ongoing surveillance, learning analytics, and predictive analytics is that students are not involved in reporting. Once data is collected, selected, aggregated, and interpreted, a report is created, but the student cannot influence the conclusions of these reports. Most commonly, students cannot even see these reports, and the conclusion, right or wrong, impacts their academic pathways and experience. This is unfair, wrong, and corrosive for the normal relationship required for educational experiences. In an interview published in 2020 by the Institute for Human-Centered AI (HAI) at Stanford University, Kate Vredenburgh uses a well-placed metaphor to explain how AI’s scores and reports can take a terrifying form. She makes the observation that Kafka’s “The Trial” is

so horrifying to us not only because an innocent person is punished but also because the main character can’t do things to make the process fair. He can’t respond to the charge to disprove it. It is deeply concerning to us because he doesn’t have the information he needs for society’s basic institutions to function morally well.
(Millar, 2020)^16

The idea of learning analytics and predictive analytics is a reflection of the contemporary impulse to quantify everything: to measure, aggregate, and analyse so that more assessments can be done on more data. Data is not only glorified but also deified. This has been called “dataism,” and education is now conceived and imagined through the mantra that data-based policies and decisions are inherently good, a reflection of “truth.” A label attached to a student as a result of “data analysis” cannot be touched by the student, disputed, or rejected; it is adopted as a fact by teachers and institutions. There are no possibilities of recourse for a student, so when a biased or a completely wrong report is created, it remains untouched. This is the most important failure in education, and it is vastly multiplied by AI-powered edtech and ML. Data collected and aggregated within Learning Management Systems (LMS) in higher education is not shared with the students, and learning analytics reports remain open only to the instructor and the institution. This is especially concerning when we see that errors of advanced AI systems have led to wrongful arrests and derailed the lives of innocent people. For example, Wired published in March 2022 an article titled “How Wrongful Arrests Based on AI Derailed 3 Men’s Lives,” detailing how destructive it was for people and their families to be wrongfully identified by facial recognition software and arrested for crimes they did not commit (Johnson, 2022).^17 The AI software leading to the arrest of these people was used in Detroit, where the Police Chief admitted that it misidentified people in 96% of cases. If AI was wrongfully used to send people to prison, despite their obvious innocence, we can safely imagine that in schools and universities, learning analytics reports can be erroneous and irrelevant for students’ interests and potential.
This is far from an isolated case: in 2022, the Associated Press published a report about a man who was sent to prison, accused of murder, without any evidence other than AI algorithms: “Prosecutors said technology powered by a secret algorithm that analyzed noises detected by the sensors indicated Williams shot and killed the man” (Burke et al., 2022).^18 If the system of justice is opened, in some rare and fortunate cases, to open enquiry and dispute, edtech is hermetically shut for students, and the “conviction” remains as it was set by the system. Despite the wave of evidence that algorithms, created by humans who transmit their own preferences, are open to endlessly reused and reinforced biases, edtech is unchanged and uninterested in limits and risks. The most influential centres of power, such as the OECD, corporations with a presence in edtech, international consultancy firms, and various think tanks insist on presenting the advancements in surveillance in education as a positive development for students, teachers, and institutions of education. This is an unfortunate position not only because it treats AI as a perfect solution, without flaws, in an uncritical and unscientific manner, but also because it leaves aside the fact that schools and universities use proprietary software for surveillance and analysis. In effect, education is based on black box algorithms, with no idea about how exactly information is processed and what the potential risks and downsides are.
A report on student surveillance practices in US schools, released by the Center for Democracy & Technology in 2021, reveals that

monitoring tools create a chilling effect on student self-expression – 58% of students who report that their school uses monitoring software agree with the statement, I do not share my true thoughts or ideas because I know what I do online is being monitored.
(Hankerson, 2021)^19

Surveillance is directly and obviously related to power structures, where the “observed” and the “controlled” are permanently reminded that they are watched and that any action outside what is permitted will have consequences; it is a constant reminder that those holding power are watching. It is an obvious fact that a regime of surveillance hinders personal expression, creativity and independence, imagination and courageous experimentation. In other words, the current arrangements within education and the uncritical adoption of AI “learning analytics,” “predictive analytics,” and extensive surveillance enhanced by AI obstruct and eliminate students’ creativity, spontaneity, and free and independent thinking. It should not escape us that this is exactly what education needs to nurture; it is important to see the part of the mission statements and institutional strategies where this is mentioned translated clearly into action. It is a profound inconsistency to declare the need to nurture and elevate individual differences, creativity, and self-expression while using AI for surveillance. Ignoring these contradictions does not serve anyone and creates, in the longer term, the context for internal dissonance and conflict; for students it generates an ethos where love for learning, creativity, and engagement can happen in spite of the system, not because of it.
A study published in 2016 shows that surveillance causes self-censorship and the suppression of dissent or of opinions that may look different from what is accepted by the majority (Stoycheff, 2016).^20 While the author acknowledges that researchers “have consistently showed that perception of hostile opinion climates – or when individuals believe their views differ from the majority – significantly chills one’s willingness to publicly disclose political views” (p. 296), this study directly reflects that online surveillance enables a culture of conformity and self-censorship, which works against minority groups and opinions. This conclusion, on the impact of surveillance in online environments, is just another confirmation of research on self-censorship, such as the “spiral of silence” presented by Elisabeth Noelle-Neumann in 1974 (Noelle-Neumann, 1974).^21
Universities, schools, and edtech corporations are using the model of Big Tech – the large online platforms such as Google, Netflix, YouTube, and Amazon – that use various forms of data collection to provide personalised services at a large scale. These companies collect all kinds of user data in discreet and hidden forms, sometimes covered by a general user agreement, which is designed as a long, jargon-filled and technical text that most users can’t – or won’t – read. Data collected is, as is the case with edtech and institutions of education, seemingly unobjectionable, including websites visited, applications used, time and duration of use, and location. The problem for users starts when all this data is aggregated with data collected by other companies, such as financial services; these personal packages reveal where and when a credit card was used, and why, how this may impact purchasing preferences and possibilities in the future, or what personal health and financial risks are most probable for a certain individual. There are well-documented and important books revealing this type of information and these personal risks, such as “The Black Box Society” by Frank Pasquale. His book provides maybe the best analysis of the “one way mirror” used by tech-lords and numerous real-life examples where algorithmic profiling is affecting or devastating people’s lives (Pasquale, 2015).^22 Here is one important problem for
education: when the black box of AI profiles students, deciding what they can and cannot do, placing them in arbitrary categories such as “at risk of failing,” “isolated,” or “uninterested,” there is no possibility of recourse. Students cannot appeal these decisions, and most often they do not even know that a label was attached to them, shaping their academic life in an invisible and powerful way. The algorithm decides, and not even teachers can do much about it. Learning analytics and predictive analytics are not accountable forms of management of data or education; students do not have their say in what label is attached to them, and schools or universities simply do not know how the AI algorithms they use are working, and cannot even find out. It is a strange fact, but universities, with schools of engineering and top specialists in programming, with engineers who are educating the engineers of the future, do not use their own platforms for online education, or LMS. In Australia, for example, there is no university using an LMS created in-house, with proprietary software owned by the university and people who can actually take responsibility when something goes wrong and can explain what exactly was wrong in the algorithms. In effect, decisions on students’ education are very much influenced by corporate entities with neither expertise nor interest in educating people per se.
The importance of an algorithmic score can be vital; we can take the example of an AI system used for over a decade by police in Spain, called VioGén. This system was used by the Spanish police to assess the risk levels of women who filed a complaint of abuse. An external audit, presented in March 2022, reveals that it has severe flaws that lead to women’s risk being ranked too low. The use of this system is associated with the disturbing finding that the VioGén system “discards” most cases “by giving them an ‘unappreciated’ risk score,” so that “only 3% of the women who are victims of gender violence receive a risk score of ‘medium’ or above and, therefore, ‘effective police protection’.” For example, “women who were killed by their partners and did not have children were systematically assigned lower risk scores than those who did, with a recall difference between groups of 44%” (Eticas Foundation, 2022, p. 32).^23 Among its troubling conclusions, the audit of VioGén finds that “VioGén is, in practice, an automated system with minimal and inconsistent human oversight” (p. 32).
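The “recall difference” reported by the auditors is a standard fairness check that can be sketched in a few lines; the cases and groups below are invented for illustration and are not VioGén’s code or the Eticas Foundation’s audit script:

# Sketch of a group-recall audit: of the truly high-risk cases in each group,
# what fraction did the system actually flag? All data here is hypothetical.

cases = [
    # (group, truly_high_risk, system_flagged_high)
    ("no_children", True, False),
    ("no_children", True, False),
    ("no_children", True, True),
    ("children",    True, True),
    ("children",    True, True),
    ("children",    True, False),
]

def recall(group: str) -> float:
    relevant = [c for c in cases if c[0] == group and c[1]]
    flagged = [c for c in relevant if c[2]]
    return len(flagged) / len(relevant)

gap = recall("children") - recall("no_children")
print(f"recall gap between groups: {gap:.0%}")  # 33% in this toy data

A gap of this kind means the system systematically misses the danger faced by one group; in the audited system, that gap reached 44 percentage points for women without children.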
Edtech is massively used in universities as a technology of domination, where AI pushes students and teachers, with unprecedented efficiency, to submit to and accept its manipulative authority and surveillance with apathy and resignation. The New Management of universities actively employs, without much scrutiny, surveillance and inadequate educational solutions, partly because they keep students and teaching staff docile and controllable, sustaining the illusion of stability.
The most common assumption is that data collected in education is somehow secured and kept far from the amoral data brokers who monetise people's vulnerabilities, preferences, and lives. The truth is that this assumption is far from what is really happening. We can take just one example: PowerSchool, an edtech
company that was used by tens of millions of students in over 73 countries, is one of the most widely used web-based student information systems in North America. Its services include "student information systems, learning management and classroom collaboration, assessment, analytics, behaviour, and special education case management." The products offered by this company are intertwined with the collection of students' data. In 2001 the company was sold to Apple, a corporation with a good record in data collection and privacy. A few years later, in 2006, PowerSchool was sold again to the global education corporation Pearson. Of course, this is an education corporation, so we can assume that students' (and teachers') data was secure. In 2015, Pearson sold PowerSchool to Vista Equity Partners for $350 million in cash. This company also offers products for higher education, under the probably unintentionally symbolic brand of PeopleAdmin, and the data is "on the market," free to be used by data brokers.
This example, just one out of too many, shows that the assumption that students' and teachers' data remains in education is not based on any solid foundation. In the report "Without Consent: An analysis of student directory information practices in U.S. schools, and impacts on privacy," the first sentence of the executive summary captures the appeal of education for data brokers: "If data is the new oil, then student data is among the most desirable data wells of all" (Dixon, 2020, p. i²⁴). Data brokers, already able to provide someone's location, address, contact details, a map of political preferences, health status and risks, sexual orientation, and so many other personal details, now have the capacity to employ AI systems to use or sell data for any purpose. The AI black boxes of reputation and outputs already create rankings that shape academic careers, students' lives, teaching, and the academic ethos. There is a real risk of excising significant learning from education, limiting the process to a transactional and formal system where learning is measured in limited outcomes, relevant and used only for assessments at a certain time. Sooner or later, students find that the compulsory use of the LMS and other online applications required for their studies is also a way to deceptively gather large amounts of personal data. Even when students do not openly protest, trust is broken, and this is where the love for learning and a genuine educational experience become impossible.
Living under surveillance is incongruent with human nature; we are not made to live constantly watched, permanently checked for obedience to rules fair or unfair. Higher learning, the type of intrinsically motivated, profound learning that stays as a reference point for a long time in someone's life, cannot happen in a post-human context. Research reveals that employees placed under permanent surveillance become more stressed and record higher levels of anxiety, frustration, and depression than those who are not watched. Ironically, as the neoliberal model of management colonised all spaces and fields and called for the adoption of surveillance to boost productivity and improve control and efficiency, we find that surveillance technology leads to poorer performance, lower job satisfaction, and workers' burnout (Golbeck, 2014²⁵). The model and tools for surveillance
are not necessarily specific to education; schools and universities are using against students what is already available to corporations and various companies to watch and control employees. In this sense, we can say without doubt that the trend is not to humanise the workplace but to make employees work like robots. Uber is using AI to monitor and rate workers' performance and rank them on a five-star basis, and a certain number of low ratings leads to workers' termination (cutting off their access to the app). In 2021, Amazon's CEO Jeff Bezos sent a letter to shareholders to announce that Amazon workers will be managed through the use of AI surveillance, specifically watching which muscles are engaged in their work, noting that: "We're developing new automated staffing schedules that use sophisticated algorithms to rotate employees among jobs that use different muscle-tendon groups to decrease repetitive motion and help protect employees from MSD risks" (Bezos, 2021²⁶). The hellish conditions of Amazon's employees are well documented, including the dystopian use of surveillance on Amazon drivers and warehouse workers. Schooling is developing along the same trend. In 2018 the Hangzhou No. 11 High School in China introduced an AI system that uses facial recognition to evaluate students' level of engagement, collecting data that is aggregated to report whether a student is engaged or daydreaming, angry or bored, engaged in reading or writing, actively listening to the teacher, or happy or surprised. This system, labelled "the smart classroom behavioural analysis system," scans the entire classroom every 30 seconds to capture every movement and facial expression of all students in the classroom. It is just one part of the "smart campus," which integrates surveillance and facial recognition features in other areas, such as the high school canteen, vending machines, or the library. All this information is aggregated, and the teacher can read it for "better classroom management." The Hangzhou Bureau of Education made public its intention to extend the emotional evaluation system to over 190 schools and kindergartens. China is also using technologies that mine extraordinarily large volumes of data from workers' brains: "Government-backed surveillance projects are deploying brain-reading technology to detect changes in emotional states in employees on the production line, the military and at the helm of high-speed trains" (Chen, 2018²⁷). It is tempting to believe that this level of surveillance is common only in dictatorial regimes, but this is very far from the truth. AI-powered surveillance is already used on university campuses, in police profiling and surveillance, and in the everyday life of every citizen. Policing the classroom is grotesquely enhanced by AI systems, offering the possibility of extensive surveillance and control in the name of efficiency and personalisation, as a key to student engagement; the results are the opposite of these promises. Education at all levels is in a state of crisis, from an identity crisis of universities and schooling in general, to the rapid decline of prestige and social respect for education and teachers, to extreme reactions against schools and universities. A report released by the American Psychological Association, based on a survey of over 15,000 teachers and other school staff across America, found that
more than 40% of school administrators reported verbal or threatening violence from parents during the 2020–2021 school year. Close to 50% indicated a plan or desire to quit or transfer jobs, and 37% reported at least one incident of harassment or threat of violence from a student, with such incidents coming even more frequently from parents.
A teacher included in this extensive survey said that:

I have been physically assaulted multiple times by students in the building and they know that not only is there no one to stop them, but there will be no consequences either. I ended up in the hospital the last time it happened.
(McMahon et al., 2022²⁸)

Surveillance has increased in schools across the world, with results that now speak more about the impact of naive, uninformed, and toxically positive ideas about education. Violence against and distrust of institutions of education are other indicators that we are following a wrong model.
Surveillance and the monitoring of individual performance, along with the adoption of the bizarre concept of "key performance indicators" to measure faculty "outputs," are much more suitable for a car engine than for any well-intentioned attempt to organise education. The language in higher education governance and reporting reveals an industrialised vision for education, reducing the aims of education to the lowest common denominators. This language creates and supports a culture of audit, hierarchical control, and distrust in higher education, normalising surveillance and control. The normalisation of surveillance, indoctrination, and numbification of students and teachers is noted and researched in media and academic studies. We can take just one example provided by Forbes in 2019, which discusses ClassDojo, an edtech product used by schools: "ClassDojo, one of the most ubiquitous tools used to manage classrooms and students, not only indoctrinates students into a surveillance culture, but is also susceptible to security breaches that put student data at risk" (Baron, 2019²⁹).
the normalisation of surveillance we hear often that it is a necessary compromise,
as surveillance is required for benefcial solutions made available to those who are
watched. This is exactly what STASI, the famously cruel secret police in the times
of East Germany (GDR), and other dictatorial regimes, argued: that surveillance
is in the interest of those who are surveilled, for a good functioning of their
society. Research reveals that we have extensive invasions of privacy favoured by
the efciency of what is called “the corporate cultivation of digital resignation,”
a complex strategy designed to suppress resistance against surveillance and numb
those who are permanently watched. A study published in 2019 is fnding four
main areas used by corporations to normal and foster digital resignation, which
consist of four “interrelated rhetorical tactics”:

placation, diversion, jargon, and misnaming. Placation involves eforts to


falsely appease concerns. Diversion refers to efforts to shift individuals'
focus away from controversial practices. The use of jargon – terminology that is difficult for those outside a specific group to understand – not only generates confusion, but may frustrate efforts at comprehension. Similarly, misnaming describes efforts to occlude industrial practices through the use of misleading labels.
(Draper & Turow, 2019, p. 1830³⁰)

In 2021 the US Center for Democracy & Technology published a report on their research, which reveals the extent of surveillance in schooling contexts. It finds that the vast majority of surveyed students and teachers report that their school uses surveillance software, and that 80% of students self-censor as a result of surveillance (Hankerson et al., 2021, p. 15³¹).
This reflects the fact that in education, surveillance and the abuses of hierarchies of power inherent in being watched create a more complicated reality, as an integral educative relationship requires trust. Edtech opened a marketplace for student data (Russell et al., 2018), where most private information and students' work are exploited, monetised, and manipulated with significant impact on the future of youth; trust is completely dissolved in this relationship.
Higher learning requires the courage to relax power structures, cultivate trust, and favour collaboration in practice rather than as a simple slogan. Rankings, performance management, surveillance, and constant control hinder learning and good teaching. If these ingredients are sacrificed for power and control, we have another form of education, a travesty that makes teachers and students, universities and governance pretend that learning is still happening and that results are aligned with stated standards. This is the most important mechanism that eroded Soviet communism: pretending that aims are reached and the system is working well, when everybody knew that this was not true. The parallel escapes those fortunate enough to have missed the lived experience of such a system, but it is visible to academics familiar with Soviet communism. For example, Craig Brandist, a professor of cultural theory and intellectual history at the University of Sheffield in the United Kingdom, noticed while working on some archives of universities in the Soviet Union that these documents looked very similar in essence to the kind of documents requested by his own institution. Surprised by the striking similarity, he had the idea to try an experiment. The British scholar translated and made minor tweaks to a document created by a Soviet scholar to justify the funding needs for his research and incorporated it into his own report on research for his institution. The report was accepted without comment. Craig Brandist observed that in the Soviet regimes the problem was not so much the absence of a vibrant intellectual life or the Communist Party's imposition of a very strict line, but a more subtle and corrosive process. The main cause was a constant

erosion of the structures that insulated scholarship from the demands of state policy and economic imperatives. [here is where] parallels are surprisingly pervasive. They include the imperative for competition between
institutions; the subordination of intellectual endeavours to extrinsic metrics; the need to couch research in terms of impact on the economy and social cohesion; the import of industrial performance management tactics; and the echoing of Government slogans by funders.
(Brandist, 2014³²)

The British academic ended his analysis with a note underlining the major difference between the Soviet context and our current neoliberal arrangements, where academics have the freedom to openly critique the system, organise, and fight for their work conditions. Ironically, as a consequence of the publication of that analysis, Brandist was called in by his university's department of human resources to be warned; he details in a subsequent article published by Times Higher Education that he

noted the important difference – that academics in the UK, unlike those in the Soviet Union of the 1930s, do not face routine censorship and repression for voicing critical views. But a few days later, I received a formal letter from human resources suggesting that I should desist from publishing such material and instead raise concerns internally.
(Brandist, 2016³³)

In 2012, Chris Lorenz, an academic at Vrije Universiteit Amsterdam, published an article titled "If you're so smart, why are you under surveillance? Universities, neoliberalism, and new public management (NPM)." Here Lorenz clearly identifies a set of similarities between the managerialist ideology adopted by universities and the Soviet system. Among these similarities, he reveals that the "emphasis on control brings to light the first hidden substantial aspect of NPM managerialism that is reminiscent of state Communism. Like Communism, NPM is totalitarian because it leaves no institutionalized room for criticism, which it always sees as subversion" (Lorenz, 2012, p. 608³⁴). This is a feature extensively exploited by edtech, especially in promoting products based on AI.
The “fctitious rationality” of a system that is Kafkaesque in nature and irra-
tional under the veneer of well-designed managerial decisions is withering the
function of educational values in favour of ideological choices based on monitor-
ing, surveillance, and control (Barrow, 2010). Students are placed in this confusing
reality and have no reason to believe that their data – which shapes in the current
economic and social arrangements their future – will never be left in unknown
hands, or even sold to entities that keep their manipulative intentions obscure.
The advancement of AI is also signifcantly broadening the role of surveillance
and data collection in education. Tracking and tracing technologies, plagiarism
detection solutions, and other means of collection of various data on students’
life in campus and outside of it present some unique challenges. Beyond its obvi-
ous ethical betrayals, the adoption of general surveillance removes beauty from
Surveillance, Control, and Power –AI Challenge 121

education. Universities became spaces with a constant reminder of who is in


power to spy on you for control. This is an ugly space.

Notes
1. Kim, T. (2018, April 11). Goldman Sachs asks in biotech research report: 'Is curing patients a sustainable business model?' CNBC. www.cnbc.com/2018/04/11/goldman-asks-is-curing-patients-a-sustainable-business-model.html
2. Horan, C. (2021). Insurance era: Risk, governance, and the privatization of security in postwar
America. The University of Chicago Press.
3. Ramo, S. (1957). A new technique of education. Engineering and Science, 21, 17–22.
4. such as “product specialists.”
5. Fletcher, J. D., & Rockway, M. (1986). Computer based training in the military. In J. A. Ellis (Ed.), Military contributions to instructional technology. Praeger.
6. OECD. (2015). Students, computers and learning: Making the connection. OECD Publishing.
7. Bürgi, R. (2016). The free world and the cult of expertise: The rise of OECD’s edu-
cationalizing technocracy. International Journal for the Historiography of Education, 6(2),
159–175.
8. PISA is the OECD's Programme for International Student Assessment.
9. TIMSS and PIRLS are the IEA's international assessments of outcomes and trends in student achievement in mathematics, science, and reading.
10. Potter, J. (2008). Entrepreneurship and higher education. OECD Publishing.
11. Roberts, M. E. (2018). Censored: Distraction and diversion inside China's great firewall. Princeton University Press.
12. Enyedy, N. (2014). Personalized instruction: New interest, old rhetoric, limited results, and the
need for a new direction for computer-mediated learning. National Education Policy Center.
http://nepc.colorado.edu/publication/personalized-instruction.
13. OECD. (2021). OECD digital education outlook 2021: Pushing the frontiers with artificial intelligence, blockchain and robots. OECD Publishing. https://doi.org/10.1787/589b283f-en.
14. Nagle, T., Redman, T. C., & Sammon, D. (2017, September 11). Only 3% of companies' data meets basic quality standards. Harvard Business Review. https://hbr.org/2017/09/only-3-of-companies-data-meets-basic-quality-standards
15. Sivabalan, S. (2019, August 13). Argentina's massive sell-off had a 0.006% chance of happening. www.bloomberg.com/news/articles/2019-08-13/argentina-rout-was-4-sigma-event-beckoning-the-bravest-of-brave
16. Millar, K. (2020, June 24). HAI Fellow Kate Vredenburgh: The right to an explanation. Human-Centered Artificial Intelligence, Stanford University. https://hai.stanford.edu/news/hai-fellow-kate-vredenburgh-right-explanation
17. Johnson, K. (2022, March 7). How wrongful arrests based on AI derailed 3 men’s lives.
www.wired.com/story/wrongful-arrests-ai-derailed-3-mens-lives/
18. Burke, G., Mendoza, M., Linderman, J., & Tarm, M. (2022, March 6). How AI-powered tech landed man in jail with scant evidence. The Associated Press. https://apnews.com/article/artificial-intelligence-algorithm-technology-police-crime-7e3345485aa668c97606d4b54f9b6220
19. Hankerson, D. M. (2021, September 21). CDT original research examines privacy implications of school-issued devices and student activity monitoring software. https://cdt.org/insights/cdt-original-research-examines-privacy-implications-of-school-issued-devices-and-student-activity-monitoring-software/
20. Stoycheff, E. (2016). Under surveillance: Examining Facebook's spiral of silence effects in the wake of NSA internet monitoring. Journalism & Mass Communication Quarterly, 93(2), 296–311. https://doi.org/10.1177/1077699016630255
21. Noelle-Neumann, E. (1974). The spiral of silence: A theory of public opinion. Journal
of Communication, 24, 43–51. https://doi.org/10.1111/j.1460-2466.1974.tb00367.x
22. Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.
23. Eticas Foundation. (2022). The external audit of the VioGén. https://eticasfoundation.org/
wp-content/uploads/2022/03/ETICAS-FND-The-External-Audit-of-the-VioGen-
System.pdf
24. Dixon, P. (2020). Without consent: An analysis of student directory information practices in U.S. schools, and impacts on privacy. World Privacy Forum. www.worldprivacyforum.org/wp-content/uploads/2020/04/ferpa/without_consent_2020.pdf
25. Golbeck, J. (2014, September). All eyes on you. Psychology Today. www.psychologytoday.com/us/articles/201409/all-eyes-you
26. Bezos, J. (2021). 2020 Letter to shareholders. www.aboutamazon.com/news/company-
news/2020-letter-to-shareholders
27. Chen, S. (2018, April 29). 'Forget the Facebook leak': China is mining data directly from workers' brains on an industrial scale. South China Morning Post. www.scmp.com/news/china/society/article/2143899/forget-facebook-leak-china-mining-data-directly-workers-brains
28. McMahon, S. D., Anderman, E. M., Astor, R. A., Espelage, D. L., Martinez, A.,
Reddy, L. A., & Worrell, F. C. (2022). Violence against educators and school personnel:
Crisis during COVID (Technical Report). American Psychological Association.
29. Baron, J. (2019, January 29). Classroom technology is indoctrinating students into a culture of surveillance. Forbes. www.forbes.com/sites/jessicabaron/2019/01/29/classroom-technology-is-indoctrinating-students-into-a-culture-of-surveillance/
30. Draper, N. A., & Turow, J. (2019). The corporate cultivation of digital resignation.
New Media & Society, 21(8), 1824–1839. https://doi.org/10.1177/1461444819833331
31. Hankerson, D. L., Venzke, C., Laird, E., Grant-Chapman, H., & Thakur, D. (2021).
Online and observed: Student privacy implications of school-issued devices and student activity
monitoring software. Center for Democracy & Technology. https://cdt.org/wp-content/
uploads/2021/09/Online-and-Observed-Student-Privacy-Implications-of-School-
Issued-Devices-and-Student-Activity-Monitoring-Software.pdf
32. Brandist, C. (2014, May 29). A very Stalinist management model. Times Higher Education. www.timeshighereducation.com/comment/opinion/a-very-stalinist-management-model/2013616.article
33. Brandist, C. (2016, May 5). The risks of Soviet-style managerialism in UK universi-
ties. Times Higher Education. www.timeshighereducation.com/comment/the-risks-of-
soviet-style-managerialism-in-united-kingdom-universities
34. Lorenz, C. (2012). If you’re so smart, why are you under surveillance? Universities,
neoliberalism, and new public management. Critical Inquiry, 38(3), 599–629. https://
doi.org/10.1086/664553
6
BEAUTY AND THE LOVE FOR LEARNING

We enter the third decade of this new century facing, across the globe, multiple interconnected crises: a devastating war started by Russia against Ukraine and against the values of humanism, freedom of choice, and human dignity. We have another genocide in the heart of Europe, and for months the world has looked at new atrocities with the sense that we are failing to stop them. Various reasons can be identified for the impotence of the world to stop the carnage and obliteration of civilians in Ukraine; probably the most important is the arrogance of thinking that history ended with the universal acceptance of liberal democracy and the Western model, as Francis Fukuyama claimed in the early 1990s. Politicians, public figures, philosophers, and commentators warn that this conflict can lead to a Third World War¹. Another dangerous crisis for the future of humanity is the climate crisis, which is accelerating, already having a visible and direct impact on our health, our quality of life, and the political and social stability of the world. The year 2022 was marked by the publication, for the first time since 2014, of an international report on climate change by the Intergovernmental Panel on Climate Change (IPCC). On the basis of 34,000 studies, the IPCC report reveals "widespread, pervasive impacts to ecosystems, people, settlements, and infrastructure," noting that "climate change has caused substantial damages, and increasingly irreversible losses, in terrestrial, freshwater and coastal and open ocean marine ecosystems. The extent and magnitude of climate change impacts are larger than estimated in previous assessments" (IPCC, 2022, p. 11²). The report indicates a clear state of emergency, with irreparable consequences for our common future. As expected, the world paid attention for a week; in the weeks after, the usual competition to extract even more fossil fuels went ahead, accelerated. Billions of people are highly vulnerable to the impact of climate change, over half the world's population will be affected in 2022 by severe water shortages,
and extreme climate events become too frequent to allow adaptation and correction. In mid-March 2022 both of Earth's poles recorded alarmingly abnormal temperatures: Antarctica reached 40°C higher than normal average temperatures for that time of the year, and the North Pole recorded 30°C above average temperatures. These astonishing developments in the world's climate should be extremely alarming for all politicians and decision makers across the world. In reality, it was all ignored. Education is vastly responsible for this severe failure of responsibility and wisdom.
These interconnected and compounding crises are directly linked to topics addressed in this book. The climate crisis proves that our systems are failing, beyond the usual dichotomies such as the West versus the Global South or America versus Russia. The reality is that the entire world adopted a unique model of cowboy capitalism, of extractive practices with sociopathic features informed by pseudo-thinkers such as Ayn Rand. This model was informed and shaped by the American dream, where education is trumped by wealth, wisdom is ridiculed, and a good ranking in the Forbes list of the abysmally rich shows all that really matters in this world. Russian oligarchs found a familiar and permeable system, common to their own kleptocracy, in the United Kingdom and Australia, in the United States and in Cyprus. A society where the rich set the rules and the poor are exploited to the most extreme limits is not unique to one country, or even to a continent; we have this model in various versions and local flavours across the world. This is why a character like Donald Trump is so similar to Russian oligarchs, in manners, taste, and ostentatious distaste for culture and education. It is a common platform of symbols, values, and written and implicit messages, in a truly globalised world; education was for decades under attack and we now see the final blows, which are coming from within. On the basis of these common cultural codes, on a "dream" that was relentlessly promoted and adapted to new interests, we built economic, environmental, political, and social systems, which are now crumbling around us.
It was impossible for education to remain unharmed; universities of the 21st century are in a severe crisis of identity, mission, and meaning. Arthur Levine and Scott J. Van Pelt identify in their book, The Great Upheaval, the main direction of these changes, with themes and solutions that are commonly found within most universities.³ According to these misguided ideas, higher education is not different from the music industry or newspapers – skidding fast over the fact that education is an infinitely more complex endeavour than printing a newspaper or selling records. Typically for the rhetoric of the last decades in this space, Levine and Van Pelt detail in their book a neoliberal future for the university, where we will have "the rise of anytime, anyplace, consumer-driven content and source agnostic, unbundled, personalized education paid for by subscription" (Levine & Van Pelt, 2021, p. 212⁴). This summarises well a good part of the literature on higher education and possible solutions for its future. It is, in other words, the world of a Netflix for education, an Amazon-like platform taking the name and
the pretence of Academia, with faculty playing the role of Amazon drivers or warehouse workers. Education is reduced to the work of supervising various blocks of training and assessments and the simple oversight of edtech software and learning management platforms. The two authors simply repeat a boring mantra in a book devoted, ironically, to innovation. According to them, students

are seeking the same kind of relationship with their colleges that they have
with their banks, supermarkets, and internet providers. They ask the same
four things of each: (1) convenience, (2) service, (3) a quality product, and
(4) low cost.

The proponents of this type of argument ignore the possibility that this is what students see as possible, choosing only from what universities already provide. It shouldn't be so difficult for researchers in this field to start from the fact that one cannot make choices that are not even imagined, choices that remain detached from what look like the only realistic alternatives. In other words, students indicate which parts of what is already familiar, offered by their university, they prefer. There is no option to ask whether they prefer the model of a university of the 1960s, when it was free of charge, or a university of the 21st century where the vast majority of students graduate with soul-crushing debt. Most probably, students' choices would look very different if researchers were less inclined to simply justify their ideological positions with misleading questions and flawed research. The argument, repeated and restated in various forms in writings on "innovation" that lack any originality, serves the agenda set by the WTO a few decades ago. Students are customers, and all – including universities and their academics – are part of the market.
The new industry of education, as imagined and set by GATS and the WTO, is mostly organised on myths and half-truths, on emotional advertisements and low-standard science. AI is adopted in a system that is vulnerable to hype and corporate-biased science, distorted by the "funding effect," where results are favourable to the source of sponsorship (Lundh et al., 2017⁵). The utilitarian view of education dominating American thinking is radicalised, and a university is reduced to a training institution with transactional relationships, a type of supermarket where students can get packages of information that are useful for finding jobs. There are too many examples illustrating this impoverishment of education, which would make not only a very long lecture, but a depressing one. Taking just one example, we can look at what Patrick McCrory, the Republican Governor of North Carolina, noted about funding for higher education: "It's not based on butts in seats but on how many of those butts can get jobs. . . . I don't want to subsidize [what is not] going to get someone a job." In less crude forms we find this argument dominating the aims of universities across the world: education is reduced to an institution of vocational training, and here is where AI is viewed as the new panacea. If universities are just places where students get
“education packages,” which are short, efcient, “personalised,” targeted blocks


of information-and-tests for credentials that open the door to graduates to take
jobs, then we can easily see why edtech is adopted without a critical refection
or a basic desire to retake control. Obscured by generous statements we can see
the irresistible temptation to use technologies that promise that with the right
number of clicks we can reach the aim of providing a credential to graduates,
employable and ready to join the market.
In essence, the entire project of education is technologised. This is not a problem in itself; we should be concerned, however, that the technological approach to the extremely complex project of a higher education is reduced to basic mechanics and oversimplified banalities. Heidegger noted of technology that it is as old as human civilisation. However, he found that the contemporary understanding of technology, which is an "instrumental and anthropological definition," stands as a different and new development for humanity: "The current conception of technology, according to which it is a means and a human activity, can therefore be called the instrumental and anthropological definition of technology" (Heidegger, 1977, p. 5⁶). Far from taking an anti-technological perspective, he warned us that the main risk of technological advancement is the alteration of our thinking and being, not the threat represented by a specific technological development, even if that is represented by applications of nuclear fusion or – more recently – of AI. The most devastating risk of technology is our ontological withering: "the approaching tide of technological revolution in the atomic age could so captivate, bewitch, dazzle, and beguile man that calculative thinking may someday come to be accepted and practiced as the only way of thinking" (Heidegger, 1969, p. 56⁷). Universities are among the last spaces where Heidegger's optimistic view on technology can be nurtured and achieved, but only if we take note of and seriously consider his warning that "the issue is the saving of man's essential nature," that the aim is to keep "thinking alive." Here is where universities fail and step further into their existential crisis of identity, scope, and future.
It escapes university administrators that the most significant and terrifying crisis is not technological: we have an existential climate crisis, a profound crisis of values, of sustainability, of living models, of equity and morality, and a profound intellectual crisis. In fact, technological innovation is accelerating and opening new possibilities in the 21st century. Our crises are intellectual, moral, and ontological: the extraordinarily dangerous exploitation of the Earth's resources that is accelerating climate change, and the political and cultural crises of the world order and stability that lead to wars and extreme violence. The pandemic of Covid-19 revealed that we have across the world, including across what were labelled "developed" countries best prepared for such events, a profound crisis of civility, of compassion, and of education. The pandemic revealed that politicians moved very fast from feigning basic civility to the extreme position of placing life on a second plane, and market profits and
the economy above anything else. Boris Johnson, the Prime Minister of the United Kingdom, declared on the BBC: "I've given you the most important metric, which is: never mind life expectancy, never mind, you know, cancer outcomes – look at wage growth."⁸
We have the scientific solutions, and a very concerning number of citizens who are not interested in – and not able to understand the implications of – how the findings of science can build a collective solution to our problems. Higher education is not pressured to find solutions to advance technology; it should be focused on thinking about the aims of higher education, contributing actively to a civil society, elevating a wiser citizenry, and contributing to the search for sustainable solutions for graduates' lives and our common futures.
It is naive to think that universities entered their identity and intellectual crisis just after the WTO pushed higher education to work exclusively for markets, with extraordinarily damaging managerial models. These developments just accelerated the cultivated mediocrity within universities, and the dissonance between seeking profits and market positions and the need to stay relevant for education, with a positive role for society. The trade agreements on education represent just a decisive push in the wrong direction, a fatal blow with effects that are becoming visible just now. In the late 1990s universities already had too many problems left unsolved. In 1949, Susan Sontag wrote in her journal about the possibility of being "taken" by academia:

I re-survey the life around me. Most particularly I become frightened to realize how close I came to letting myself slide into the academic life. It would have been effortless . . . stayed for a master's and a teaching assistantship, wrote a couple of papers on obscure subjects that nobody cares about, and, at the age of sixty, be ugly and respected and a full professor.
(Sontag & Rieff, 2008⁹)

This paragraph is especially relevant as it points to the projection of a good life as an academic: ugly, respected, and comfortable. In the long list of failures and challenges facing higher education, we have the most significant debacle: an aesthetic failure. We can easily find a large number of excellent books and papers on casualisation and the abuse of academics in higher education, on managerial fads and the commercialisation of higher education; in other words, we can find a solid and consistent interest in why academia is ugly, and in what can be done to improve it. At the same time, we can see that the concept of beauty is avoided in this field, as if it were a shameful, or irrelevant, or not serious enough dimension for higher education.
The main failure of education in the era of AI and technological progress is aesthetic and intellectual. The aesthetic deficiency is represented not only by the fact that higher education is uninterested in the concept of beauty, in learning spaces, in curriculum, and in campus ethos. The entire project of higher
education is marked by ugliness. Education is reduced to transactional relationships, to market mechanisms, and to peculiar interests. The ethos of the campus is defined by the dissolution of trust and mutual suspicions of abuse; students are suspected of cheating, and most universities now use software designed to catch plagiarists and to check whether their work steals texts and ideas from others. The adoption of these tools is completely normalised, to such an extent that we do not find anyone questioning the idea of placing learning in a context of profound mistrust, criminalised and defined by structures of power. Students find from their first educational experiences that everything will be checked to see if and how one is stealing, plagiarising. The educational relationship starts from the assumption that some – if not all – students cheat and steal, and lecturers are encouraged to use all possible means to hinder and sanction this. There is nothing inspiring or beautiful in this project.
Frank Furedi, professor of sociology at the University of Kent, observed in an article published in 2004 that "as the purpose of the university has become increasingly unclear, academic integrity itself has become compromised. That is why 'experts' on the subject of plagiarism appear more interested in explaining the problem away than in exposing its root causes," which are, according to him, a loss of scholarship ideals. He continues, noting that the real problem "is not that students cheat, but that they don't think that there is anything wrong with this behaviour. In an era where lecturers are encouraged to treat their students as customers, academic scholarship loses any inner meaning" (Furedi, 2004¹⁰). Furedi explores an important part of how and why plagiarism became normalised, looking at the loss of ideals and of the meaning of learning, a process that makes moral judgements on plagiarism naive and obsolete.
In 2017 an article in the same publication made the argument that the parallel industry of "cheating services," which sells students essays and other texts for their assessments, is thriving on students' and universities' lack of interest in learning. Its author notes that in our commercialised universities,

students may feel less ripped off by essay mills than by universities. Prospectuses promise a collegial atmosphere, an unforgettable "student experience" and unrivalled preparation for a rewarding career. In reality, university managers are running a no-frills, bums-on-seats business with costs pared to the bone and tight control imposed on academics by performance measures. Student satisfaction is purchased with lax academic standards.
(Macdonald, 2017¹¹)

A top executive of a leading Australian university explicitly underlined the fundamental suspicion we are expected to have towards students, using a metaphor that invites us to consider that some students may be similar to terrorists:

It's a bit like airport security – he noted on the need to use plagiarism detection software – it's a massive hassle to the vast number of people who
have nothing to do with terrorism, but they accept that it is something they have to do for the integrity of the system.
(Hare, 2016¹²)

This is not just an unfortunate metaphor but stands as a natural reflection within a system that has normalised fear and surveillance, oppressive structures of power, and neoliberal nonsense. That university, where students are imagined as customers and potential thieves, or "terrorists," rule breakers who secure the financial flows vital for good market positions and top rankings, cannot claim with credibility that learning, thinking, and discovery are what matter most. Treating anyone like a potential terrorist, or thief, is not conducive to nurturing a climate of mutual trust and collaboration. This is a key to many failures of universities in the post-WTO decades: dissolving the academic ethos of the campus in the cynical calculations of the market, where customers do their tricks to get a better price for what they want and sellers take precautions to catch thieves, leads to a mercantile, cynical, and ugly reality. This is part of an extraordinarily impoverished view of education, where humanity is reduced to transactional relations, and love and imagination are derided or ignored and stand separated from the university's concerns. In this managerialised and technologised paradigm, education is dehumanised and reduced to functions relevant to internal relations of power.
What is surprising is how mindlessly the process of claiming interest in what students learn is applied in the case of plagiarism. There is a profitable industry of software solutions for what is called "plagiarism detection," which is based on open distrust of the so-called customers of universities, who are now also called "partners" or "producers," in a clumsy effort to make it all sound less lucrative. The vast industry of "plagiarism detection" or "plagiarism deterrence" is just an implicit admission of a fundamental failure of teaching and learning, of education in universities. These software solutions, such as Turnitin, Cadmus, or SafeAssign, promise to find "cheats," but are reduced to text-matching capabilities that are often inferior to a Google search. In other words, only students so lazy and negligent that they just cut and paste texts are identified by the software. Other software is focused on surveillance, invading students' private lives and measuring behaviour and environments based on very troubling assumptions, leading to many erroneous results and intense stress for students who were dropped by lecturers and entire universities.
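To make concrete how limited "text-matching" is, consider a minimal sketch in Python of n-gram overlap detection. This is an illustration of the general technique, not the actual algorithm of Turnitin, Cadmus, or SafeAssign; the function names and example sentences are invented for the demonstration. A single honest paraphrase is enough to make a copied idea invisible to such a matcher:

# Minimal sketch of text-matching "plagiarism detection" via n-gram overlap.
# Illustrative only: not the algorithm of any real vendor's product.

def ngrams(text, n=5):
    """Return the set of n-word sequences in a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, source, n=5):
    """Share of the submission's n-grams that also appear in the source."""
    sub, src = ngrams(submission, n), ngrams(source, n)
    return len(sub & src) / len(sub) if sub else 0.0

source = "Surveillance is incompatible with trust in any educational relationship."
copied = "Surveillance is incompatible with trust in any educational relationship."
paraphrased = "In education, being watched all the time destroys mutual trust."

print(overlap_score(copied, source))       # 1.0 -> flagged as matching
print(overlap_score(paraphrased, source))  # 0.0 -> invisible to the matcher

The copied sentence is flagged with a perfect score, while the paraphrase, which borrows the idea wholesale, registers no match at all; this is the gap that AI-based writing assistants exploit effortlessly.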
Student cheating and the software used to hinder and identify plagiarism deserve separate chapters devoted entirely to the symbolism attached to them, and to the profoundly corrupt practice of using students' work, without explicit agreement, to create massive databases that enrich external corporate entities. There are important ethical and educational aspects related to an industry that is fundamentally failing to protect students from external surveillance, and many other important aspects related to the use of edtech without students' knowledge
and control over the implications of the data collected and its possible misuses. There is no doubt that these issues will become prominent in the future for any institution of education; in this chapter we will limit the analysis to the impact of AI on academic integrity and plagiarism, and to the symbolism associated with current solutions used across higher education. For example, it is surprising to see that Cadmus, a software solution that informs potential users on its website that "Cadmus takes an educative approach to academic integrity," takes the name of the Greek mythological slayer of monsters. It is not difficult to see this software as a "slayer" of plagiarism, which should leave open the educational question of whether a student who plagiarises is a monster or is creating one. Turnitin and SafeAssign are also open to different interpretations, but none can claim that their symbolism starts from a position of mutual trust and love for learning.
Developments in AI are already changing the range of possibilities for plagiarism and the scrutiny of academic integrity. AI-based writing assistants, free or widely affordable, already offer a wide range of possibilities to paraphrase, rephrase, and generate original texts in different languages, eliminating the possibility of being identified with the current software used by universities. Of course, in the near future we will have AI systems used to identify AI-generated texts, in a meaningless race to cheat and catch, to plagiarise and to sanction breaches of academic integrity rules. These developments should make visible a simple and self-evident fact about plagiarism: there is always a possibility to beat the system. This possibility is much more appealing if the system is seen as meaningless or absurd. Stating the obvious, we have to note again that the use of fear and threats to make students learn and produce assignments is a wrong approach for an educational project. Technological advancements and their applications in everyday life are associated in human evolution with progress and with new possibilities to solve important problems. There is a natural tendency to assume that technology is associated with solutions even when we need more thinking and infinitely more complex approaches. Heidegger warned us that technology helps only as long as we remain alert to the need to keep meditative thinking alive. There is a proper way of interacting with technology, one that not only preserves our humanity but also prepares us to see the risks associated with it. This is why "everything depends on our manipulating technology in the proper manner as a means" (Heidegger, 1977, p. 5¹³). Technology is associated with our evolution through its power to help humans, and the human capacity to create and use technology is the key to our progress and dominance on Earth. We survived and thrived thanks to our ability to invent and use technology. What we tend to ignore is that we had progress and a sustainable evolution only as long as we used technology as a tool serving our humanity. There is an important message in the ancient Greek myth of Icarus, who built himself waxen wings to fly, but soared too high and too close to the Sun; the heat melted his wonderful technology and he plunged to his death. The ideology of Silicon Valley is based on the opposite view, one that places humans in an unnatural, asocial, acultural, and aesthetically indifferent position,
where technology turns one into God. In the Whole Earth Catalog, the manifesto of Silicon Valley, which Steve Jobs labelled "one of the bibles of my generation," the relationship of humans with technology is clearly solved:

We are as gods and might as well get used to it. So far, remotely done power and glory – as via government, big business, formal education, church – has succeeded to the point where gross defects obscure actual gains. In response to this dilemma and to these gains a realm of intimate, personal power is developing – power of the individual to conduct his own education, find his own inspiration, shape his own environment, and share his adventure with whoever is interested. Tools that aid this process are sought and promoted by the WHOLE EARTH CATALOG.
(Brand, 1968, p. 3¹⁴)

AI is now the label attached to technology that promises, again, to give us "power and glory" and make individuals "gods" with the power to create their own education, environments, adventures, and knowledge. This is a dangerous illusion. Technology is not viewed just as a set of tools, but as an aim. This view reveals the importance of the first part of AI, the adjective "artificial." What is the significance of the word "artificial" in AI? Is it even a relevant question to ask?
The role of language is often minimised, so it may be useful here to restate that we exist and live in the language available to us. It shapes our understanding of the world, our emotions, and our way of being. This is why it is not a healthy position to assume that one word is irrelevant, or marginal to the understanding of technologies or any other aspect. "Artificial" is an intriguing choice, and this becomes clearer if we try to create an alternative to the AI formula; we can imagine that in the late 1950s it was possible to name this new field "computing intelligence" or "cyber intelligence." We have already found how important it is to understand all the implications of "intelligence" and the particular history of this term, and how it influences the ideological positions taken by Silicon Valley these days. The adjective "artificial" is synonymous with words such as "synthetic," "fake," "false," "imitation," "simulated," "manufactured," "unnatural," and "fabricated." In a different sense, "artificial" defines emotions that are "feigned," "false," "pretended," "hollow," or "insincere." It is a term rarely used with a positive connotation, which may explain the reticence of the group of thinkers and engineers to use the formula of AI in the years that followed the workshop organised in 1955 by John McCarthy. There are many ways to read the birth and evolution of AI under its current label, and most probably some are better than the one chosen for this book, which is that AI contains an unintentional warning for those who use it. It is a powerful form of intelligence, which has the potential to grow exponentially in the future, but it is leading to an "artificial" world. It is a disembodied world, immensely powerful on just some elements of
“intelligence,” based on a particular type of algebra, and – most importantly – on


a narrow way to see and understand the world. There is a risk of adopting calcula-
tive thinking as the only way of thinking, as Heidegger warned us at the middle
of the last century. The “artifcial” side of AI is also represented by the ability of
humans to see their own mistakes and change the course of action or thinking,
leading to diferent results or actions. A group of researchers from the University
of Cambridge and University of Oslo published in 2022 their fndings on AI’s
inherent limitations caused by a mathematical paradox identifed in the 20th cen-
tury by two mathematicians with an extraordinary infuence: Alan Turing and
Kurt Gödel. They identifed a paradox of mathematics, which reveals that

it is impossible to prove whether certain mathematical statements are true or false, and some computational problems cannot be tackled with algorithms. And, whenever a mathematical system is rich enough to describe the arithmetic we learn at school, it cannot prove its own consistency.
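The second claim in this quotation is Gödel's second incompleteness theorem. In a standard formulation (the notation here is mine, not the researchers'): for any consistent, recursively axiomatisable theory T strong enough to encode elementary arithmetic, such as Peano Arithmetic,

\[
  T \nvdash \mathrm{Con}(T)
\]

where Con(T) is the arithmetic sentence expressing "T is consistent"; such a theory, if consistent, cannot prove its own consistency.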

Researchers explored these limits in line with the findings of Stephen Smale, the mathematician who proposed a list of 18 unsolved mathematical problems for the 21st century, where the 18th problem concerned the limits of AI. One of the authors, Dr. Matthew Colbrook, explained:

The paradox first identified by Turing and Gödel has now been brought forward into the world of AI by Smale and others . . . There are fundamental limits inherent in mathematics and, similarly, AI algorithms can't exist for certain problems.¹⁵

The advancement of research on the limits of AI should be an integral part of any conversation related to the adoption of AI in education, in teaching and learning, and in academic governance. The researchers note that

[T]he strong optimism regarding the abilities of AI is comparable to the optimism surrounding mathematics in the early 20th century, led by D. Hilbert. Hilbert believed that mathematics could prove or disprove any statement and, moreover, that there were no restrictions on which problems could be solved by algorithms. Gödel and Turing turned Hilbert's optimism upside down by their foundational contributions establishing impossibility results on what mathematics and digital computers can achieve.
(Colbrook et al., 2022¹⁶)

We have boundless optimism in education about the possibilities of edtech, and of AI in particular, with a remarkable lack of curiosity about the limits of technology and about educators' responsibilities in using technology in education. This opens universities to new risks and represents a very superficial understanding of the duty of care for our students. We have to consider that research on bias and AI reveals that algorithms are most susceptible to errors and discrimination when someone or something steps outside the average. We cannot say that we want to cultivate independent thinking and strong, creative identities while ignoring that our tools are directly opposed to that, and while remaining indifferent to the fact that students navigate a world organised and restricted by edtech. Education is the field where we should actively hope to see an individual suddenly finding the resources and the environment to bloom, to contradict a history of average results with extraordinary new achievements. When that history becomes a label attached to a complex and often unpredictable mind, the result can be the limitation and disenchantment of students with learning. Historical data is not only the kind of data most used by algorithms; it is also identified by AI engineers as a main source of latent algorithmic bias. Bias and algorithmic forms of compartmentalisation invite mediocrity and apathy, or can give rise to protest and revolt. In a world of education where AI pigeonholes individuals based on their historical data, the majority of humanity's most significant figures in the history of science and culture would be stuck under a limiting label, which also selects only the type of content deemed suitable for a mediocre or poor student. This would be a world where a student like Einstein would receive the remedial content assumed appropriate to entertain such a student.
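The mechanism is easy to see in miniature. The following Python sketch is purely illustrative, not a model from any real learning-analytics product; the grades, the threshold, and the label are invented for the example. A predictor that scores students on their historical average keeps a late-blooming student under the "at risk" tag long after their work has changed:

# Toy illustration of how purely historical data pigeonholes a learner.
# Illustrative only: not a model from any actual learning-analytics product.

def risk_label(past_grades, threshold=60.0):
    """Label a student from the average of their past grades alone."""
    avg = sum(past_grades) / len(past_grades)
    return "at risk of failing" if avg < threshold else "on track"

# A student with a weak start who has clearly turned a corner:
grades = [35, 40, 42, 78, 85, 90]

for term in range(3, len(grades) + 1):
    history = grades[:term]
    print(term, history, "->", risk_label(history))
# Even after two strong terms (78 and 85), the early failures keep the
# average below the threshold and the "at risk" tag attached; the label
# only flips once the whole history is outweighed, at the final term.

A system built this way cannot see the turn in a trajectory, only the weight of the past; this is the latent bias of historical data in its simplest form.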
The current approach promoted by decision makers in education at local, national, and international levels is informed by the neoliberal models of managerialism and technocratic solutionism. This reduces the educational project of universities to quantitative and instrumental outcomes. It is important to "turn upside down" the optimism of edtech and the managerialised approach to teaching and learning, and to see the current limits and risks, the areas of discontent, and the possibilities for optimal use of technological advancements. Blind trust in and enthusiasm for AI can cost us our collective future. It is time to approach with healthy scepticism financial consultants' claims of expertise in education and corporate giants with significant interests in the edtech market offering the next optimal solutions for universities. Importantly, the current proposition in higher education leaves aside the most essential parts of humanity: love, beauty, imagination, passion, and inspiration are abandoned in a process that becomes ugly and commercial. It is limited to measurements, tests with extrinsic value for learning, grades and rankings, credentialisation, and the accreditation of empty products of instruction. That is indeed a hollow, simulated, unnatural, and artificial education.
We tend to forget that the relevance of education is not how it looks in reports, or what we decide to certify in a space where credentials are given in exchange for schooling fees, but what students learn and apply in their lives, at work, in their families, and in society. A book based on extensive research, published in 2013, makes this point:

The problem is that the learning achievement profile, the relationship between the number of years children attend school and what they actually learn, is too darn flat. Children learn too little each year, fall behind, and leave school unprepared . . . Schoolin’ just ain’t learnin’.
(Pritchett, 2013, p. 14)17

It is important to accept at this point that time spent in an institution of education, or even a certificate, does not necessarily lead to learning, knowledge, or wisdom. This conclusion is confirmed publicly by another extensive report18, published in 2018 by the World Bank, the institution that constantly pushed the neoliberal model in education as the only possible solution and claimed for decades that we have only progress in this field. Despite the over-optimistic analysis typical of international organisations such as the World Bank, the report cannot avoid the conclusion that for too many children in school, learning simply doesn’t happen. In the chapter titled “The many faces of the learning crisis,” we find the problem expressed clearly: “globally, 125 million children are not acquiring functional literacy or numeracy, even after spending at least four years in school.” When Pritchett was documenting his book “The Rebirth of Education,” a father of one of the students spoke at a meeting with school authorities, telling them:

You have betrayed us. I have worked like a brute my whole life because, without school, I had no skills other than those of a donkey. But you told us that if I sent my son to school, his life would be different from mine. For five years I have kept him from the fields and work and sent him to your school. Only now I find out that he is thirteen years old and doesn’t know anything. His life won’t be different. He will labor like a brute, just like me.
(Pritchett, 2013, p. 2)

Good grades that serve only to present statistics and “data-informed solutions” require solid interrogation; AI gives bad datasets immense power, and this requires constant critique and interrogation, as well as a radical change in data collection, evaluation, and the mechanisms designed to identify errors and bias. Most of all, it requires the intellectual openness to realise that sometimes data cannot capture the entire picture, and the most significant parts can be missed by quantitative measures.
Education cannot remain uninterested – as it is now – in the fact that our systems are crumbling and that we already live amid and contemplate dystopian disasters and realities. The time to build a new and more realistic, human project for education is running out. We can start from the fact that genuine education, the type of education that makes an impact on students’ lives, that can bring information and wisdom, that nurtures responsibility and civility, is intertwined with narratives and dialogues, with the birth of a love for learning based on mutual trust. If education ignores how humans make sense of information and life, we enter the whirlpool of self-comforting illusions about what students really learn and what academic integrity really is – the integrity of all involved in education, not just the current criminalising view of students and how they can be better sanctioned. For this, there are some basic facts that contribute to a more solid foundation of learning and teaching in higher education, using the advancements of AI and the emergence of new solutions. It is important to rethink the fact that humans make sense of their lives according to different temporal dimensions, which are determined by cultural variables. Lera Boroditsky is one of the most prominent academics exploring how languages and cultures construct our understanding of time, or how differently we understand time and spatiality in cultures shaped by different languages (Boroditsky, 2011)19. It is also crucial to understand that humanity is shaped by an aesthetic and emotional dimension before a cognitive construct is defined. This is why what we can call “the eros of learning” is vital for a realistic and positive view of education.
AI, and edtech in general, start from a disembodied, decontextualised, and atemporal view of how students learn. But the most concerning part is that the eros of learning is the forgotten ingredient of academic endeavours. The love for learning arises most often from a unique mix of tremendous efforts, discomfort, frustrations and discoveries, new perspectives, and wider understandings. Learning by heart, not as memorisation but as a deep love for new ideas, knowledge, and the epistemological spaces opened through learning and imagination, should be reconsidered in educational projects in the technological era. This is the part ignored in the process of industrialisation, commercialisation, and trivialisation of higher education. As we cannot simplistically measure imagination, or test the love for learning, or the beautiful nature of an educational experience, or how we reflect on the ideas of goodness or civility, or how much we truly nurture imagination through learning experiences, educators and policy makers are led to simplify and trivialise education, ignoring the human dimension of learning. This trivialisation makes it possible to have illiterate university graduates. There are voices claiming that these concerns are not based on realities, that universities work better than ever before by “selling” their “product.” We are told that management procedures optimise instruction and that new credentials are flawlessly aligned with the needs of the “market” and “employers.” It is almost convincing to listen to these opinions, but reality speaks in strident tones about the collective failures of higher education.

Consider the example of a medical doctor, Dr. Sherri Tenpenny, who was invited to give testimony at the Ohio House Health Committee meeting in June 2021 and said that metal objects stick to the bodies of vaccinated people. At the same time, a US Congressman (Rep. Louie Gohmert, R-Texas) was asking in the US Congress whether there was anything that the U.S. Forest Service could do “to change the course of the moon’s orbit or the Earth’s orbit around the Sun,” in order to combat climate change (Gregorian, 2021)20. How was it possible for them to pass their exams in university? What was measured in Gohmert’s education to make it possible for him to receive a Juris Doctor degree?
Speaking at the European University Association annual conference at the National University of Ireland (NUI) in Galway in 2016, the President of Ireland, Michael D. Higgins, warned that universities are facing an intellectual crisis over their role in society. He noted that institutions of higher education are under increasing pressure to produce graduates solely for the labour market. Most importantly, the president of Ireland observed that “fostering the capacity to dissent is another core function of the university” and that institutions of higher education have “a crucial role in creating a society in which the critical exploration of alternatives to any prevailing hegemony is encouraged.”21 Soft-marking is one of the problems affecting universities, and research shows that academics feel the pressure to push up grades. A survey conducted by the Guardian’s Higher Education Network in the United Kingdom revealed that 46% of academics confirmed that they have been under pressure to mark students’ work better than it deserved. Professor Tucker, a former academic at Queen’s University Belfast, was sued after he said that lecturers are under pressure to pass underperforming students, as their performance is assessed on the basis of the grades students obtain. He described the mechanism of chasing academic targets by UK university managers: “Like central planners in general, they have measured their success by the quantity of what they produce rather than by its quality.”
In Australia, a media investigation (the Four Corners programme) revealed that local universities have been forced to accept students with false academic records just to make their budgets, allowing mass cheating and promoting soft-marking and even bribery. An official inquiry by the NSW Independent Commission Against Corruption (ICAC) uncovered that academics can feel pressure “to intertwine compliance and profit rather than separating them, and to reward profit over compliance, [and this] can be conducive to questionable and corrupt behaviour.” The official report reveals that the pressure on budgets makes academics complicit in fraud, as universities cannot afford to fail all students who underperform and stay consistently substandard. It also notes that

in the search for international students, some universities in NSW are entering markets where document fraud and cheating on English-language proficiency tests are known to exist. Some universities are using up to 300 local intermediaries or agents to market to and recruit students, resulting in due diligence and control challenges. This has resulted in a gap emerging in some courses between the capabilities of many students and academic demands.

Some students are simply incapable of understanding the language of instruction, unable to comprehend anything that is part of their university experience, and yet they graduate and leave with a diploma. Meanwhile, their universities report how they balance budgets while ignoring the strident realities caused by marketisation.
There are multiple examples in this sense, in countries around the world, where a graduate diploma does not guarantee the level of literacy required to write a postcard properly. This is a direct effect of a general refusal to look at and openly admit significant failures and abnormalities. Taking a blindly positive view of what is happening in universities will not help anyone, and this obvious refusal of intellectual honesty rapidly erodes the pillars of Academia.
These failures accumulate and lead to an accelerated erosion of authority in education, a collapse of trust in what institutions of education certify and create. The authority of educated people, another lost dimension, was sourced in the Latin auctoritas, the attribute associated with the wisdom of the elders, the keepers of tradition, knowledge, wisdom, and virtue. That type of authority is distinct from power, which in the Roman world was held by potestas. “Auctoritas” is based on wisdom, an influence that went beyond legal or institutional rights. Socrates and Plato had knowledge and wisdom, and this is how the first Academia in the world was created. Replacing auctoritas and wisdom with the “market” left education blocked in hubris, with contradictory aims and demands, with a profound loss of identity and a self-imposed mediocrity. It is a model violently hostile to a vibrant intellectual life. The work of teachers is undermined by the current governance models and by the cultural arrangements inherently promoted with the neoliberal models. Authority is now conferred on the market and its successful players: capitalists, the rich, people accumulating wealth. In this world, teachers are perceived as individuals poorly integrated in the market, people who are unable to do something better with their lives and earn a decent wage. At the end of 2021, a grotesque show was briefly noted in the avalanche of stories presented by the media: teachers in the US state of South Dakota had to fight against each other in front of crowds gathered to watch a hockey game, scooping up as many dollar bills as they could so they could pay for school supplies22. In fact, research shows that being a teacher is not an appealing career; a report published in 2006 reveals that “the United States is facing nearly 200,000 teacher vacancies a year at a cost to the nation of $4.9 billion annually” (Levine, 2006, p. 11)23. In the following years this situation became even more serious. In 2019, a report published by the Economic Policy Institute found that the “teacher shortage is real, large and growing, and worse than we thought. When indicators of teacher quality (certification, relevant training, experience, etc.) are taken into account, the shortage is even more acute than currently estimated.”24

AI presents the possibility of automation, which is especially appealing for institutions interested in maximising profits and balancing their budgets. Research shows that automation is associated with the tendency to favour business owners over wage earners.25 In other words, we can expect that AI will increase the tendency to reduce the number of academics highly specialised in their fields and replace them with automated solutions. This will only accelerate the current trend of precarious employment arrangements for faculty. The dialectical structure of the system is shaped by market and profits, creating a crisis of identity, ideas, and solutions for education at all levels across the world. Ironically, AI and edtech
prove not only incapable of suggesting and building effective solutions to the crisis, but actually accelerate it. Since Aristotle – and probably before him – technology has been naturally associated with the potential for human emancipation and with solutions for our ethical problems. Aristotle noted that

if every instrument could accomplish its own work, obeying or anticipating the will of others, like the statues of Daedalus, or the tripods of Hephaestus, which, says the poet, “of their own accord entered the assembly of the Gods”; if, in like manner, the shuttle would weave and the plectrum touch the lyre without a hand to guide them, chief workmen would not want servants, nor masters slaves.26

Technology can make us gods, masters served by technological “slaves.” We have now reached the point of technological advancement at which we have to reconsider these relationships between humans and technology. This is especially important at a moment when the solutionist narrative of edtech represents a real and serious danger for liberal culture, democratic societies, and sustainable solutions for our futures. The opaque nature of AI, including big data collection and its uses, requires an in-depth exploration of the “edtech imaginary,” the narrative projections of how new technologies are used for learning and teaching.
When we think about education today, we can safely observe that Eros is not remotely considered in teaching and learning. It is important to underline that we are talking about a wider understanding of Eros, as it was presented by Plato in his dialogues. It stands not only for the concept as we understand it today, limited to erotic and sexual connotations; we also look at Eros as desire, as the sense of longing and the lure of mystical possibilities to achieve wisdom, to understand the gods. In this sense, Eros is also an integral part of human nature, a fundamental basis for our corporeal experience of the world, which is generally ignored or avoided in discussions about AI and its uses in education. Restricting Eros to its sexualised meanings is similar to the impoverishments and confusions caused when the concept of love is restricted to sexual desires. Eros, as it was explored and metaphysically defined by Plato, is one of the most important missing parts of schooling and instruction, and its absence is now making education collapse around the world. In the United States, a journalist exploring the collapse of teaching and learning across America notes: “Teachers describe swaths of kids nearly anaesthetised by technology, socially limited, and often displaying disruptive behaviour. It’s not only teaching them that’s hard – it’s reaching them on any level” (Gray, 2022)27. This applies to most systems of education across the world, after decades of OECD-isation of education, of looking at teaching and learning as mechanical processes that can be guided properly by economists and the new god, the “market.” In this mechanised view of education, which treats teaching and learning as engineering processes, it is natural to present and adopt various technologies that can bring efficiency to the well-aligned process of instruction.
The fundamental problem of this view is that it separates education from our human condition and nature. It is a decontextualised, narrow view that reduces the human nature of learning to technics, leading to alienation. It can create multiple aberrations.
In the dialogue Theages, Plato presents Socrates claiming that he knows nothing, except one subject of learning: “the things of Eros.” This places Eros at the core of the Socratic methods of teaching and learning. We have here a complex key left by Plato, placed as a central area of Socrates’ extraordinary expertise in teaching; for Plato’s master, Eros is the key to learning. Socrates is quoted in Phaedrus as saying that Eros is “a certain desire,” and later in the same dialogue he notes that Eros is related to “the nature of beauty,” which opens another serious topic ignored by education, that of “beauty.” A reading of Plato’s dialogues, and serious reflection on their meanings, reveals that there is no algorithm to create genuine Eros, and that its artificial surrogates lack power and depth. In-depth learning, the kind of learning experience that opens the desire to learn at all moments of one’s life, is the part related to the Eros of learning, with the passion and human desire to access new mysteries, to understand and touch inaccessible spaces. Love and beauty are perfectly captured much later by John Keats in his poem “Ode on a Grecian Urn,” where he writes:

“Beauty is truth, truth beauty, – that is all
Ye know on earth, and all ye need to know.”

Somehow, we find new ways to forget that all we need to know for our humanity is linked to truth and beauty, to the love of beauty. There is much more to learning and teaching than a mechanical device that can optimally facilitate passing information and knowledge to multiple recipients called students (or “customers,” or “producers” – a clumsy attempt by some universities to get out of the market paradigm, ignoring the fact that producers are fundamentally a basis for the extractive practices of capital). The Eros of learning is related to the desire to learn, to do what Socrates was doing thanks to his expertise in Eros: caring for and nurturing human souls, opening pathways to wisdom.

Edtech builds its narratives on the idea that the mechanics of teaching and learning are enhanced by engineering solutions and that the aims of education can be optimally achieved in this co-dependent dynamic. It obviously leaves out human passion, our desire for love and mystery, our fundamental corporeality, and our need for embodied experiences. The crude simplification of the effort of nurturing humanity into a process of manipulating information, delivering it, and testing its mastery serves neither the interests of educators nor the long-term interests of edtech companies. Designing instruction in schools and universities as a process disconnected from the Eros of learning – from love, inspiration, beauty, passion, happiness and friendship, mystery and hope – is building an alienated life, an impaired humanity.
There is nothing really new in finding that education has lost its meanings, identity, and aims. Giambattista Vico, the remarkable philosopher of the Italian Enlightenment, noted at the end of the 17th century that the meaning of education is recurrently forgotten, as the focus naturally shifts towards specialisation and technical aspects. We have the responsibility to pull ourselves out of the whirlpool of technicalities and technological progress and ask what the aim of education in the era of AI is. What does it mean to be an educated person? A graduate diploma, and other forms of credentials, cannot be an aim of education. They cannot answer the question of what it means to be an educated person. Acquiring knowledge does not equal an educated mind, as no amount of data is in itself close to wisdom. Vico suggested that education is vital for our progress and survival because good individuals create good communities and good societies. It is a view opposed to current paradigms of governance in education, as Vico’s theory of education revolves around the idea of the common good, a concept eliminated in neoliberal arrangements. Vico’s theory of education is interested in the effort to bring together information and emotions, mind and heart:

Individuals who complete their own human nature make good citizens. Without good citizens, there is no basis for a good society. Good citizens act for the common good. . . . Vico’s conception of education is based on the art of rhetoric directed by a vision of the Good that improves and promotes the ethos of the individual as a member of the human community.
(Bayer, 2009, p. 23)28

This is another way to look at the Eros of learning, where Eros is similar to the ancient Greek perspective of life and vitality, but linked directly to the idea of the good and the common good.
Academia is fundamentally opposed to the idea of Eros for two main reasons. First, it is a concept linked to sexual desire and sexuality in general, a minefield for universities. Second, universities have shifted their interest from the ideas of love, beauty, the Eros of learning, and passion for teaching to concepts that are easily covered by direct measurements. It is a field of economic transactions and market mechanisms where reputation is expressed algorithmically and academic life is reduced to quantitative measurements of all that can be measured: the number of students and the number of publications, the number of citations and the number of graduates, and so on. This is the space where software and algorithms permeate all aspects of academic life, colonising spaces that are defining for what we understand when we say human. In effect, talking about the Eros of learning and beauty in education, especially in higher education, requires courage. This is not part of universities’ research agendas or part of the ideological preferences common across higher education. The marginalisation of beauty in education is not limited to its scarcity in educational design; beauty is a dimension avoided even in the arts. Howard Gardner observed that “[t]oday, particularly in the contemporary West, the status of beauty in relation to the arts could scarcely be more different. Some observers eschew the term beauty altogether, while others use it in ways quite different than in the past” (Gardner, 2011)29.
The Socratic Eros is inherently connected to beauty – again, not necessarily an external, physical beauty. It is the beauty of knowing and thinking, of the way to discover the Eros of learning and open real pathways to lifelong learning. A beautiful education is built on the Eros of learning, the inherent desire and love for learning and wisdom. Learning by heart is understood here as learning something that speaks to the heart; it is so meaningful for the student that it becomes part of the “heart,” memory, and emotions. It is an educational project indifferent to tricks and hints for assessments and grades, one that takes the aesthetic nature of education as a foundation for teaching and learning. AI and other edtech applications can supplement and complete learning, but they cannot replace education if we choose to build it as a human project.

Notes
1. Herszenhorn, D. M. (2022, March 4). The fighting is in Ukraine, but risk of World War III is real. Politico. www.politico.eu/article/fight-ukraine-russia-world-war-risk-real/
2. IPCC. (2022). Summary for policymakers [H.-O. Pörtner, D. C. Roberts, E. S. Poloc-
zanska, K. Mintenbeck, M. Tignor, A. Alegría, M. Craig, S. Langsdorf, S. Löschke, V.
Möller, & A. Okem (Eds.)]. In H.-O. Pörtner, D. C. Roberts, M. Tignor, E. S. Poloc-
zanska, K. Mintenbeck, A. Alegría, M. Craig, S. Langsdorf, S. Löschke, V. Möller, A.
Okem, & B. Rama (Eds.), Climate change 2022: Impacts, adaptation, and vulnerability.
Contribution of working group II to the sixth assessment report of the intergovernmental panel
on climate change. Cambridge University Press.
3. It is remarkable to see how much universities in China or the USA, Russia or the UK, the EU or Latin America share the same views, aims, and inherent contradictions. There is a common quantitative measure for the amount of published research (quality is marginal), for the number of students, for the money secured for profits, etc.
4. Levine, A., & Van Pelt, S. (2021). The great upheaval: Higher education’s past, present, and
uncertain future. Johns Hopkins University Press.
5. Lundh, A., Lexchin, J., Mintzes, B., Schroll, J. B., & Bero, L. (2017). Industry spon-
sorship and research outcome. The Cochrane Database of Systematic Reviews, 2(2),
MR000033. https://doi.org/10.1002/14651858.MR000033.pub3
6. Heidegger, M. (1977). The question concerning technology, and other essays. Harper &
Row.
7. Heidegger, M. (1969). Discourse on thinking. A translation of gelassenheit. Harper & Row.
8. Nicholson, K. (2021, October 12). Timing of Boris Johnson’s holiday under fire after damning Covid report slams his handling of the pandemic. HuffPost. www.huffingtonpost.co.uk/entry/boris-johnson-holiday-covid-report_uk_61653846e4b0cc44c510372f
9. Sontag, S., & Rieff, D. (2008). Reborn: Journals and notebooks, 1947–1963. Farrar, Straus and Giroux.
10. Furedi, F. (2004, August 6). Plagiarism stems from a loss of scholarly ideals. Times Higher Education Supplement. www.timeshighereducation.com/features/plagiarism-stems-from-a-loss-of-scholarly-ideals/190541.article
11. Macdonald, S. (2017, May 25). It’s not essay mills that are doing the grinding. Times
Higher Education. www.timeshighereducation.com/opinion/its-not-essay-mills-that-
are-doing-the-grinding
12. Hare, J. (2016, April 13). University of Melbourne start-up Cadmus targets cheats. The Australian. www.theaustralian.com.au/higher-education/university-of-melbourne-startup-cadmus-targets-cheats/news-story/f5e2677aea4a90b54f5c5ee0e4d3eee7
13. Heidegger, M. (1977). The question concerning technology, and other essays. Garland
Publishing.
14. Brand, S. (1968, Fall). Purpose. In S. Brand (Ed.), Whole earth catalog. Portola Institute.
15. University of Cambridge. (2022, March 17). Mathematical paradoxes demonstrate the
limits of AI. ScienceDaily. Retrieved March 30, 2022, from www.sciencedaily.com/
releases/2022/03/220317120356.htm
16. Colbrook, M. J., Antun, V., & Hansen, A. C. (2022). The difficulty of computing stable and accurate neural networks: On the barriers of deep learning and Smale’s 18th problem. Proceedings of the National Academy of Sciences, 119(12). https://doi.org/10.1073/pnas.2107151119
17. Pritchett, L. (2013). The rebirth of education: Schooling ain’t learning. Center for Global Development.
18. World Bank. (2018). World development report 2018: Learning to realize education’s promise.
World Bank.
19. Boroditsky, L. (2011). How languages construct time. In S. Dehaene & E. Brannon
(Eds.), Space, time and number in the brain: Searching for the foundations of mathematical
thought (pp. 333–341). Elsevier Academic Press. https://doi.org/10.1016/B978-0-
12-385948-8.00020-7
20. Gregorian, D. (2021, June 10). Lunar new deal: GOP Rep. Gohmert suggests alter-
ing moon’s orbit to combat climate change. NBC News. www.nbcnews.com/politics/
congress/lunar-new-deal-gop-rep-gohmert-suggests-altering-moon-s-n1270219
21. President of Ireland (Media Library). (2016, April 7). Speech at the EUA annual confer-
ence. NUI Galway.
22. ABC News. (2021, December 15). United States teachers compete for cash for their classrooms and
critics liken it to Squid Game. www.abc.net.au/news/2021-12-14/south-dakota-dash-for-
cash-teachers-compete-money-squid-game/100699340
23. Levine, A. (2006). Educating school teachers. The Education Schools Project.
24. EPI. (2019). The teacher shortage is real, large and growing, and worse than we thought. Economic Policy Institute. https://files.epi.org/pdf/163651.pdf
25. Karabarbounis, L., & Neiman, B. (2014). Global decline of the labor share. The Quar-
terly Journal of Economics, 129(1), 61–103.
26. Aristotle & McKeon, R. (2001). The basic works of Aristotle. Modern Library.
27. Gray, R. (2022, April 2). Teachers in America were already facing collapse. COVID
only made it worse. BuzzFeed, Politics. www.buzzfeednews.com/article/rosiegray/
america-teaching-collapse-covid-education
28. Bayer, T. I. (2009). Vico’s pedagogy. New Vico Studies, 27, 39–56.
29. Gardner, H. (2011). Truth, beauty, and goodness reframed: Educating for the virtues in the twenty-first century. Basic Books.
SECTION III

The Future of Higher Education

This last section presents an in-depth analysis of the role imagination can play in education and explores the relationship between intelligence, imagination, and AI. Looking at the possible futures of education, this section demonstrates that the key challenges facing universities and open societies at this tumultuous start of the 21st century are not technological but political, educational, and cultural. Technological advancements, especially in the field of AI, open up new areas of knowledge, with possibilities not even explored today. In the context of extraordinary technological adoption and acceleration, we have a concerning rise across the world of authoritarian and fascist ideologies and movements and a widening gap of socioeconomic and cultural segregation. The most plausible scenario for the future of education includes AI as an integral part of future developments and challenges for universities and education in general. This possibility requires some key principles for the adoption and use of AI in higher education, which are presented with the intention of assisting educators and students in a constructive and ethical integration of AI systems in the complex endeavour of casting and creating a meaningful education.

7
IMAGINATION AND EDUCATION

In November 2008, at the inauguration of the New Academic Building of the London School of Economics, Queen Elizabeth II asked how it was possible to miss all the signs of the meltdown of international markets. In hindsight, there were obvious warnings of what became the global financial crisis (GFC), a fall that shattered financial markets and plunged banking systems into turmoil from mid-2007 to 2009. Luis Garicano, the Director of Research at the LSE’s Management Department, was actually asked by the Queen, “Why did nobody notice it?” His answer was that “at every stage someone was relying on somebody else and everyone thought they were doing the right thing.” In other words, it is a story of complacency, groupthink, and intellectual laziness that partly explains how the major incoming crisis was missed. It is also the Panglossian effort to turn everything into a positive narrative, often against common sense. Talking about the financial crisis, we can see now that, with very few and notable exceptions, academia also missed the signs. For the first decades of the 21st century, universities have failed to see or imagine possible solutions for a number of critical crises for humanity. There is the crisis of democracy, of dangerous levels of social and economic inequality, of power imbalances and dark developments, which all prove that universities are depleted of critical thinking. A result of decades where solutions for education and research come from accountants and financial consulting groups, economists and edtech corporations, is that universities lost the power to think for the common good, to speak truth to power, and to genuinely master critical thinking.

The narrative of a world with endless possibilities, functioning in the best possible circumstances experienced by humanity, was promoted relentlessly. It worked: in September 2016, the US President at that time, Barack Obama, addressed a group of youth leaders from the Association of Southeast Asian
Nations (ASEAN) member countries in Laos, making again his favourite argument that this was the best time in human history. Specifically, Obama noted that

just because we have so much information from all around the world on
our televisions, on our computers, on our phones, it seems as if the world
is falling apart. . . . But the truth is that when you look at all the measures
of wellbeing in the world, if you had a choice of when to be born and you
didn’t know ahead of time who you were going to be – what nationality,
whether you were male or female, what religion – but you had said, when
in human history would be the best time to be born, the time would be
now. The world has never been healthier, it’s never been wealthier, it’s
never been better educated. It’s never been less violent, more tolerant than
it is today1.

The President of the United States made the same point just a few months later, when he published, as a guest editor for Wired magazine, an opinion piece entirely structured around the idea that “now” is the best time to be alive in human history. Not only was that present the best, but the future looked just as good:

[T]omorrow’s Americans will be able to look back at what we did – the


diseases we conquered, the social problems we solved, the planet we pro-
tected for them – and when they see all that, they’ll plainly see that theirs is
the best time to be alive.
(Obama, 2016)2

We know how those young ASEAN leaders found their countries led by authoritarians such as Rodrigo Duterte in the Philippines, who publicly praised Hitler, noting that “Hitler massacred three million Jews. Now, there is three million drug addicts. I’d be happy to slaughter them3.” Duterte proved that his stated intentions were followed by real murders. If we take the example of Laos, we could hardly find a reason at that time to embrace this feverish optimism. In the same year when the US President found that we were living in the best possible times, Laos was far from being a free and democratic country, a fair society where young people could dream of a bright future. In 2019, Human Rights Watch noted that

Laos continues to be ruled through a one-party system. The formation of other political parties is subject to criminal prosecution. The Government of Laos has not taken significant steps to remedy its poor human rights record and severely restricts freedom of speech, association and peaceful assembly. The lack of fair trials of criminal suspects, especially those accused of political offences, widespread judicial corruption, and entrenched impunity for those responsible for human rights violations are continuing problems.4
The government of Laos at that time controlled television, radio, and publications, and all remain subject to governmental censorship. Some of Laos’ dissidents were found “disemboweled and stuffed with concrete.” Obama’s inspiring discourse looks now, after we know how ASEAN countries shaped their future – far from the rule of law, democracy, and civil society – closer to Voltaire’s caricatural character Dr. Pangloss, who says that “In this best of possible worlds . . . all is for the best5.” The region where the future was confidently predicted in such positive terms now includes extremely violent authoritarian regimes (e.g. Myanmar) and a fast decline of individual and collective freedoms.
The so-called best years to be alive in human history, announced as ideal times to prepare for a good future, proved to be far from the overly optimistic predictions, marking the opposite: authoritarian tendencies and extremist ideas and movements gained prominence and strength. Donald Trump, the surprise candidate with a long record of anti-democratic and hateful rhetoric, was elected despite his record of racist attitudes, sexist remarks, and pro-authoritarian and anti-democratic views. Some analysts say that this background actually helped the atypical candidate become the 45th President of the United States. His administration was marked by extremes that stand definitely far from the techno-utopia presented in 2016 at the ASEAN meeting of young leaders. Freedom House summarised in a public report published in 2018 the evolution of democracy across the world like this:

Political rights and civil liberties around the world deteriorated to their
lowest point in more than a decade in 2017, extending a period character-
ized by emboldened autocrats, beleaguered democracies, and the United
States’ withdrawal from its leadership role in the global struggle for human
freedom.
(Abramowitz, 2018)6

The report also documented at that time the 12th consecutive year of decline
in global freedom, and an accelerating decline of civil rights and freedom in the
United States.
In 2022, the Freedom House report documents the 16th consecutive year of
decline of global freedom, and describes a world where totalitarian ideologies and
authoritarians are on the rise. The report states:

Around the world, the enemies of liberal democracy – a form of self-government in which human rights are recognized and every individual is entitled to equal treatment under law – are accelerating their attacks. Authoritarian regimes have become more effective at co-opting or circumventing the norms and institutions meant to support basic liberties, and at providing aid to others who wish to do the same.
(Repucci & Slipowitz, 2022, p. 1)7
The comprehensive study provided by Freedom House notes another important development: the rise of illiberal tendencies and manifestations within democracies. In other words, we see the rise of fascism across the world. This is the area where we can say that AI development and adoption represents one of the most serious risks for our future. This is not because we will have AI so “smart” that humans will be dominated by supercomputers, as some authors naively predict with certainty and lax evidence. AI is increasingly associated with projects designed to manipulate people’s political preferences and votes. There is the well-documented case of Cambridge Analytica, the company that was discreetly working for the Trump campaign. This obscure company became responsible for the immense manipulation of votes through data collected via social media, later aggregated to create what was called “psychographic targeting” and to manipulate vote intentions. The narrative of a beneficial social media as a source of democratisation and world cooperation was not only false, but covered the serious risks of the malicious use of people’s data and the weaponisation of powerful algorithms. We know now that this power was left in the wrong, harmful hands.
The risk of malicious use of AI is serious and very real. Kate Crawford, a principal researcher at Microsoft Research, noted that AI is a “fascist’s dream,” warning that we should treat with suspicion any machine learning system that claims to operate free from bias “if it’s been trained on human-generated data. Our biases are built into that training data.” What the AI expert notes is that the adoption of AI is happening in parallel with a rise of authoritarian ideas and ideologies: “Just as we are seeing a step function increase in the spread of AI, something else is happening: the rise of ultra-nationalism, rightwing authoritarianism and fascism” (Crawford, 2017)8. There is also the fact, noted by Crawford, that history shows us that those who have control over information also hold the power; in fascist regimes this is associated with historical tragedies. The new fascism, tragically rising in Russia and fuelled across the world by Putin’s fascist dictatorship, stands as an inhuman example of what can become possible when we ignore it: wars, atrocities, genocide. Crawford notes that AI is often used as pseudo-science, supposedly identifying personality traits by reading and measuring facial features and human skulls. “These kinds of debunked scientific practices” – Crawford explains – “were used to justify the mass murdering of Jews and slavery in the U.S.” AI is weaponising bad ideas and holds the possibility of multiplying their energy and impact to a terrifying scale. What Crawford does not directly indicate in her brilliant presentation is that AI is inherently linked to the idea of eugenics, to phrenology and other pseudoscientific approaches used to justify and promote racism, slavery, and fascism. The “intelligence” part of AI is rooted deeply in eugenic thinking and the evolution of that concept, and there is no clear attempt to separate or clarify AI’s relationship with its toxic origins. The extraordinary potential for dehumanisation and authoritarian abuse of its powers should keep us all alert to the risks of AI.
Fascist movements have an interesting relationship with technology. We can just think of how the Nazis gained power with a project that fundamentally rejected liberalism and all democratic traditions, while German capitalism and entrepreneurship (presented as standing in opposition to a decadent “Jewish capitalism”) provided support for an extreme nationalistic agenda and a glorification of technology. The war machine was built on this religious view of technology, which was for the Nazis a perfect way to empower the German Volk. The Nazis had a specific view of technology, one close to the current deified presentation of technology in the context of AI developments. Technology was for them separated from Zivilisation (intellect, reason, and democracy) and inherently linked with the idea of German Kultur (authoritarianism, colonisation, emotion, intuition): it was “a coherent and meaningful set of metaphors, familiar words, and emotionally laden expressions that had the effect of converting technology from a component of alien, Western Zivilisation into an organic part of German Kultur” (Herf, 1984, p. 1)9. In the same book, focused on the analysis of culture and technology in Germany at the beginning of the 20th century, Jeffrey Herf uses a quote from Thomas Mann, who

captured the essence of reactionary modernism when he wrote that ‘the really characteristic and dangerous aspect of National Socialism was its mixture of robust modernity and an affirmative stance toward progress combined with dreams of the past: a highly technological romanticism.’
(p. 2)

The current view of technology, especially edtech, combines what Henry Giroux identified as “neoliberal fascism,” which revolves around the recurring tendency of technology to serve authoritarian impulses, integrating control and surveillance into antidemocratic projects. In his book “The Terror of the Unforeseen,” Giroux opens a discussion important for education, noting that

we have been living the lie of neoliberalism and white nationalism for over forty years and because of the refusal to face up to that lie, the United States has slipped into the abyss of an updated American version of fascism of which Trump is both a symptom and endpoint.
(Giroux & Casablancas, 2019, p. 19)10

Other contemporary fascisms, such as the crude version represented by Russian fascism, also favour a particular view of technology, one that serves ideological positions, nationalisms, and irrational actions. Especially now, in the context of the rapid advancement of AI and the increasing interest of administrators in adopting edtech to a greater extent, it is important to look at the role and meanings associated with “technology” to determine the type of education futures we can expect and want to build. It is naive and unrealistic to believe that the eugenic roots of AI and cybernetics are just a matter of the distant past, some bad dreams of scientists in the 19th and 20th centuries. We don’t have to look too hard to see them in political approaches and discourses, in technological manipulations and formative projects.
In ancient Greece, techne was a concept intertwined with the possibility of teaching, assigned to arts and manual crafts; it also included sciences, such as mathematics and medicine. In Metaphysics, Aristotle defines techne as the ability to acquire knowledge of universal principles and causes. The current use of technology is divorced from philosophy in academic disciplines, and engineering stands symbolically opposed to the humanities, as a “practical,” certain route to a successful career. There is a common trend – at least across the Anglosphere – of cutting budgets for the humanities and orienting funding towards STEM disciplines (science, technology, engineering, and mathematics). This is a recurring topic in public debates related to higher education, reflected in most reports that include funding priorities for universities and colleges. It is a futile and quixotic endeavour to try to change this oppositional view of STEM and the humanities, or even to repeat why it is a dangerous idea to favour funding for employability rather than secure a broader education, which can create not only good employees and managers but also responsible citizens and independent thinkers, with a wider understanding of politics, work, technology, and economics. This debate was lost and decided at least since WTO agreements pushed higher education into the area of tradable commodities. As noted, any scrutiny of funding allocation shows the disproportionate preference for funding STEM fields across higher education. What is important to note is that AI brings to the front not only a religious admiration and inflated expectations, but also a disconnect between our humanity and a particular view of technology. Engineers of AI represent a new type of priesthood, with its own jargon and codes, and the same certainty about their own superiority that was assigned to Church clerics in the past.
Two academic researchers, Diego Gambetta and Steffen Hertog, explored for years a striking and surprising overrepresentation of engineers in extremist movements. Their findings are presented in a fascinating book titled “Engineers of Jihad: The Curious Connection Between Violent Extremism and Education.” Data analysed by the two researchers confirms that there is a significant overrepresentation of engineering graduates in extremist movements. The correlation was a serendipitous discovery: as a professor of social theory at the University of Oxford, Gambetta was assigning students to investigate, with a scientific method, any random bit of trivia they selected as interesting. One group of students selected as a topic for their assignment the study of engineering graduates among extreme Islamist movements. Hertog was at that time a doctoral student at the same university as Gambetta and found the topic very intriguing. The first step of the research clearly found a frequency of engineers in jihadist movements far exceeding a normal statistical representation. When the research was extended
to include Russian fascist groups, white supremacist fascists in the United States, and other violent groups on the far right, Gambetta and Hertog again found widely disproportionate numbers of engineers. Data analysis also revealed that engineers were mostly absent from the political Left and its extreme movements. From here, the research is well nuanced and opens up fascinating possibilities and warnings for universities and policy makers. It dispels myths, such as that of deprivation, where it is assumed that individuals join violent groups as a result of deprivation and feelings of injustice based on a professional status inferior to that deserved with a higher qualification. What the research found was that graduates in engineering were not only overrepresented in violent terrorist groups; most individuals also came from affluent families and had personal possibilities for a successful and well-paid career. To explain these findings, the authors present as a case study the story of a leader of Al-Qaeda who moved to America as a child and had, as a graduate in computer engineering, real opportunities for a good career in the United States. Obviously, his choice was to use his skills and intelligence for nefarious purposes. The most important aspect of these findings is that Gambetta and Hertog pose a question that is crucially important for education’s future:

[W]hy are engineers not only proportionally more prone than all other graduates to join Islamist extremists but also to do so even where the economic situation is not so dire? And there is more – they note – we found evidence that engineers are more likely to join violent opposition groups than nonviolent ones, to prefer religious groups to secular groups, and to be less likely to defect once they join an Islamist group. None of these findings seems explicable in terms of relative deprivation.
(Gambetta & Hertog, 2016, p. 161)11

What the authors identify as a “perfect match between types of degree and types of extremists” is not consistently explained in the book. It is underlined that we can talk about tendencies rather than absolute causality, eliminating the possibility of seeing engineers as necessarily associated with forms of extremism. The overrepresentation is also associated with the counterintuitive fact that engineers join these movements mostly as ideologues, as leaders of violent groups.
The explanation for this tendency to find graduates of STEM, and in particular engineering, over-represented in extremist and violent movements is further explored in relation to the search for certainty and the reaction to the unknown. Arie Kruglanski, a professor of psychology at the University of Maryland, explored the phenomenon known as “certainty-seeking” and found that in

times of uncertainty, the need for closure is aroused, leading to a focus on one’s own perspectives and the rejection of the opinions of others. Moreover, the need for closure leads to a preference for one’s own groups, leading to the stereotyping, derogation, and support for violence against out-groups.
(Kruglanski & Orehek, 2011, p. 15)12

Thomas Metzinger, a German philosopher, Professor of Theoretical Philosophy at Johannes Gutenberg University of Mainz, and a member of the European Commission’s High-Level Expert Group on Artificial Intelligence, once noted that humans have “cognitive complexity, but without compassion and flexibility in our motivational structure” (Ananthaswamy, 2015)13. Metzinger argues that intelligence, our cognitive complexity, works on primitive structures and basic impulses. The entire conversation on AI in education, as it stands now dominated by sales pitches and marketing narratives, is missing the crucial point that empathy, compassion, and human values are not innate attributes and must be educated. When this is excised from the project of education in the glorification of STEM and the engineering possibilities opened by edtech, we have a distorted humanity. This approach blindly aims to create skilled STEM graduates who are ready for employment, leaving the power of the mind vulnerable to dehumanised temptations, new fascisms, cruelty, and greed. Cultivating intellectual laziness and the comfort of certainties, the Silicon Valley ideology conquered higher education as a way of thinking, shaping universities as businesses interested in profits, efficiencies, and “technological solutions,” but not in the meaning of learning and teaching. The applications of AI in education stand directly influenced by the engineers’ way of thinking and corporate business solutions. Decision makers and university leaders are in agreement to severely cut funding for the humanities and arts, which are perceived as useless for graduates’ job-readiness, and this can lead directly to the type of problems described by Metzinger. At a time when the scientific community, the United Nations, and international organisations openly warn that we face a torturous mass extinction if we don’t find solutions for climate change, higher education is moving further away from aims of education suitable to nurture global solidarity, compassion, and wisdom. The techno-utopia adopted by education promises that edtech will solve our problems, in a discourse where not even the problems are sanely articulated. There is within AI an assumption that algorithms always follow specific steps and conferred rules, and lead to clear and expected results that solve our problems. The narrative of AI presents a system built on various degrees of certainty, remaining structurally opposed to uncertainty and indifferent to contextual nuances and wisdom. But wisdom, and adherence to values and behaviours guided by compassion and interest in humanity’s future, requires much more than a few clicks, even on the most advanced AI systems. The conflict between this world of certainties and the real world of poly-crises, with social, environmental, and cultural systems placed under increased strain and permanent uncertainty, may directly fuel the fascist’s dream of AI, as it was labelled by the Microsoft expert. It doesn’t matter if
AI will be trained to simulate the real world’s uncertainties. AI is based on a certain type of algebra and a specific type of thinking about the world. Ultimately, AI is based on a logic of binary opposites that can become dangerous for humanity when it makes us think in simplified opposites and is applied to ideology. This model of organising the world with zero and one, good and bad, black and white, can fuel intolerance and stir hate and conflicts; when understanding is reduced to these binary opposites, humanity is lost, as has happened so often in human history. The reduction to binary structures is probably the shortest route to – and the most nourishing environment for – fascism. Horrifying war crimes committed by Russia in Ukraine in the first months of 2022, the ongoing dehumanisation of people living in Ukraine, and the terrifying extremes of Russian fascism strangely presented by Russian propaganda as manifestations of national pride show again that humanity’s most pressing problems are not technological, but cultural, civic, educational, and moral.
It is also a lesson presented by the outnumbered and outgunned Ukrainian army: their ingenuity and resilience proved once again that technological and military superiority is not sufficient for success.
Adopting constant and insightful scrutiny of what edtech proposes and what it can actually deliver, including an in-depth analysis of risks, is essential for education. Edtech’s tendency to present its products as the new and sufficient solution for teaching and learning was explicitly addressed in an article published in 1970. Two American academics, Theodore Sizer and David Kirp, observed even then how important it is to have ongoing control of educational technology, especially as edtech is designed and owned by corporations with interests fundamentally different from educational aims or lifelong learning. Their analysis notes that “the development of new technology for education raises the question of control. Large corporations have entered the education field. Certain factors inherent in the educational system tend to prevent a take-over by the educational business technologists” (Sizer & Kirp, 1970, p. 1)14. This astute advice was entirely ignored and missed by universities and their new neoliberal landlords.
Sizer and Kirp also note in the same paragraph that "the new educational technology is not an end in itself, worthy of encouragement for its own sake; it is a means of effectively carrying out educational ends independently fixed by those whose central concern is the education of [students]." Edtech became an end in itself in higher education, and some quotes from policy and guideline documents elaborated by the OECD stand as evidence that this is an intentional development. We can guess that the American researchers had a solid reason to warn us that this could happen, probably based on directions already visible at the end of the 1960s. In the context of a wider adoption of AI systems in education, this is an especially important point for universities, students, and faculty. It is a risk especially because one of the most powerful advantages of edtech corporations is
the successful colonisation of thinking about education. The colonising narrative of technology as a solution for all problems, suitable to accelerate learning and make it efficient, to facilitate "better" assessments and solve all problems, is widely accepted and believed by educators. As we noted in previous pages, Heidegger
was able to see this clearly, and detail it in his lecture on human thinking in what
stands as a perfect description of intellectual life in today’s academia:

man today is in flight from thinking. This flight-from-thought is the ground of thoughtlessness. But part of this flight is that man will neither see nor admit it. Man today will even flatly deny this flight from thinking. He will assert the opposite.
(Heidegger, 1969, p. 45)

In fact, the distaste for thinking and uncomfortable ideas, for reasoning and the exploration of knowledge, is cultivated and exploited. Technology completed its project of eliminating different ways of thinking about being and possibilities, and is now reinforcing narratives that serve the colonising aim. The "man today" is living under an avalanche of ephemeral information presented as knowledge, media gossip, and "stories of the day," an avalanche of meaningless information that is cultivating superficiality and ephemeral interests. Big Tech and its satellites manage the project of controlling our imaginations, and this is a possibility that should worry educators to a higher degree than the possibility of the colonisation of thinking contemplated by the German thinker in the middle of the last century.
The corporate world is actively engaged in colonising imaginations in education at all levels, and this partly explains why we see that concepts such as "innovation" semantically cover adherence to the mantra of techno-solutionism and neoliberal policies. In universities, much of what is stridently called "innovation" is neither creative nor imaginative, and can be immediately identified as a narrow limitation of what we understand by higher education, higher learning, and participation in civil society. The last aspect is even further reduced to graduate employability, and it remains unclear what universities see as their own contribution to the "civil" part of our societies in the 21st century. The way we imagine our communities, our responsibilities, and how students and faculty engage with society is becoming even more important at a point where various fascisms grow and tempt in spaces that were very recently considered safe from such attacks. In other words, the contradiction of technological advancement and extensive adoption of edtech in teaching and learning is now associated with the need to limit our imaginations to the "right" narratives, which exclude any introspective critique of technological projects and solutions. How is the limitation of imaginations even possible? Many will protest and "assert the opposite," claiming that education is now more open than it ever was before. The first argument is that it is impossible to ignore the responsibility of education in the rise of
fascisms across the world. The shift of focus from substance to management, from ideas to market positioning and "product" development for profits, led education towards intellectual laziness, groupthink, and indifference towards the challenge raised by narrow, localist, intolerant, and hateful ideas; unfortunately, this is what has happened in the last decades, and it cannot be denied or ignored anymore.
Dismissed or accepted, these developments make it even more evident that we have to explore the fascinating relationship between technology and imagination in education, challenging at the same time the indifference towards this enormously powerful faculty of the human mind openly manifested by higher education. Trying to understand the place of imagination in edtech can help us explore what can be useful to make the aims of higher education more relevant for students and for the intellectual life of the campus.
Imagination is the basic ingredient for AI, which was always part of what was labelled "sociotechnical imaginaries," defined by Sheila Jasanoff and Sang-Hyun Kim as "collectively held, institutionally stabilized, and publicly performed visions of desirable futures, animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in science and technology" (Jasanoff & Kim, 2015, p. 4). The narrative structures of AI imaginaries are extremely complex and powerful, playing most of the time a much more important role than the real technological possibilities of the AI systems proposed. In fact, all funding for the advancement of AI, and all research interest, is based on the ability to structure and present coherent and compelling narratives built on sociotechnical imaginaries. There is no part of these narratives that is irrelevant or redundant; their complexity requires a semiotic analysis. Meanings and connotations are changed and altered to act as powerful tools in the service of utopian visions and corporate agendas. The use of language is weaponised by edtech capitalists to occupy the imaginary of the campus with a certain narrative and to suppress alternative narratives or credible critiques. It is an application of the codes and beliefs associated with the new colonisations: the posthuman language of technological dominance and neoliberal dogmatism.
Especially in the case of AI, we can observe that a preferred rhetorical device for its stories is the overuse of catachresis, the device of narrative re-description and abusive misuse of meanings for key terms. Richard Nordquist defined catachresis as "a rhetorical term for the inappropriate use of one word for another, or for an extreme, strained, or mixed metaphor, often used deliberately" (Nordquist, 2020). Edtech is constantly using words in contexts that assign new or even opposite meanings, sometimes as part of a new jargon adopted by the initiated to communicate and signal their adherence to the set of beliefs specific to technologism. This is an intentional abuse, similar to the use of the concept of "freedom" by antidemocratic, authoritarian movements and leaders; the attraction created by the misuse of positive and attractive ideas is parasitically exploited to confer power to a different meaning, which is now suitable to serve
the new narrative. A visible case is the misuse of "artificial" in AI, which implicitly suggests that "artificial" is synonymous with "technological" or "cybernetic" or something placed in that range of meanings. Even if we admit that it may point to the more palatable meaning of "man-made," we see that it serves a misuse, or reflects a profound misunderstanding, of AI systems, machine learning, and current developments in the field.
Taking the example of a report on AI and education published in the United Kingdom by JISC, we find that "AI can transform students' education outcomes – for example, by providing a personalised learning experience that improves social mobility and student wellbeing" (JISC, 2021, p. 1). Here is another typical intentional misuse of what we commonly understand when we say that something is "personalised": bespoke for a person, built specifically for that individual. The personalisation provided by edtech does the opposite, as is brilliantly presented by Audrey Watters in her book Teaching Machines. There is no point in repeating or summarising the arguments presented so well by Watters on how individualisation and personalisation are used for political purposes, not educational ones, but it is important here to note an introductory paragraph:

Teaching Machines isn’t just a story about machines. It’s a story about peo-
ple, politics, systems, markets, and culture. It’s a story of the twentieth-
century education technologists and education psychologists and education
publishers and education reformers who built and sold (or at least tried
to build and sell) machines they claimed could automate self-instruction,
could engineer a more personalised – or as they were more likely to call it,
“individualized” – education system. It’s a story of how education became
a technocracy, and it’s a story about how education technology became big
business. It’s a story of how the science of teaching and learning, as well
as our imagination about teaching and learning came to be caught up in
mechanization, in efciency, and, to quote the French philosopher Jacques
Ellul, in “psychopedagogic technique.”
(Watters, 2021, pp. 9–1019)

The reality of "personalisation" in education, with its narrative well-funded and supported by tech-billionaires such as Bill Gates and Mark Zuckerberg, is an impoverishment of imaginations, a limitation of education to our own past preferences, to obscure algorithms and unknown responsibilities. It is an understanding of education that

incentivizes tribalization by nudging our ideas and interests, against our knowledge, into alignment with others supposedly like us, with sometimes alarming results (and here I am thinking of the rapid descent one experiences on platforms, such as YouTube, from innocent, everyday queries into underworlds of conspiracy theory and religious extremism).
(Dunn, 2020, p. 48)

The edtech use of "personalisation" is related to a narrow approach that profoundly hurts and alienates human imagination.
Another example that is relevant to understanding how imaginations are drained and manipulated is given by an OECD report on education, published in 2022. It opens the guidebook on policies for education with a strong statement:

AI changes how people learn, work, play, interact and live. As AI spreads across sectors, different types of AI systems deliver different benefits, risks and policy and regulatory challenges. Consider the differences between a virtual assistant, a self-driving vehicle and an algorithm that recommends videos for children.
(OECD, 2022, p. 6)

This report is useful for its structured framework for the critical examination and classification of AI systems, which can help developers and policymakers find and evaluate specific risks in AI systems, such as bias. At the same time, the report opens with the idea that AI is changing how people learn, a hypothesis that is not based on research, theoretical structures, or academic literature. In other words, there is no scientific argument to claim that AI is "changing how people learn." The OECD's text states the intention to help users pose critical questions for the adoption and development of AI systems, to "guide an innovative and trustworthy approach to AI as outlined in the OECD AI Principles." This makes it even more surprising to find the idea that AI is changing the way humans learn, in an ambiguous and misleading form. Focused on machine learning and how AI systems function, the report does not approach at all how AI is changing human learning. It leaves that important statement isolated, unexplained, and not backed by references or possible sources in favour of the claim. It is possible to admit that learning is changed by any new medium, even by the use of Bluetooth speakers and the Internet, but it is clear that this is not what is intentionally implied here. This is why the example is so relevant: it states a vague hypothesis as a fact, abusing the symbolic power of an international economic organisation, and opens narrative possibilities for exaggerations and mistakes. There is no scientific argument to prove that we have a new way of learning thanks to AI, but future research now has a seemingly solid source for this claim, as an idea apparently proven by the OECD. The example presented here also indicates something else relevant about AI: the power of technology to limit and engineer human imagination. It is implied that AI is changing how people learn, so we can let AI deal with this extraordinarily complex task. We have a solution; we can focus our interests and imaginations on other, more entertaining, areas of interest.
Edtech is narratively related to the effort to narrow human imagination, to subsume its force, and to colonise spaces where our imaginations can flourish. Imagination is our human power to transcend the immediate boundaries of senses and knowledge, to navigate across time and spaces, and to transcend present conditions. It is the attribute that allows us to build and explore endless possibilities before we can experience them in real life. Progress in neurology and psychology revealed that imagination is central in regulating mental health, in problem-solving, creativity, and learning. AI narratives in education have the power to trigger the emergence and adoption of new systems and AI solutions, translating into real life what was first imagined and narrated. At the same time, we can see that AI can build unreasonable expectations, and let edtech applications manage areas that cannot be properly supported by the specific system or app. Building false expectations can impact students' and the public's trust in the use of AI in education, and also mar the quality of students' experience and their education in general.
The ancient Greek term techne was interrelated with another key term for education, named in Plato's works poiesis: the act in which one brings into being something that did not exist before. In fact, AI exists only because this idea enlightened the imagination of mathematicians, thinkers, and engineers, and someone called this possibility "artificial intelligence." This is why it is even more surprising to see that imagination is in general ignored or considered mere "fluff" in academia, taking no space in studies on curriculum, in strategies, or in the administration of institutions of higher education. Philosophically, the concept of imagination and its crucial relation to education is an empty space for universities. A simple look at the vast area of policies and procedures of – let's say – "plagiarism" and "academic integrity" reveals that the semiotic message sent by academia is that we have to look only at skills and employability, and that "imagination" is not part of serious concerns in teaching and learning. This is important, as imagination is not only a source for learning and expanding horizons, but can also be the platform that accelerates misinformation, narrow views, errors, and ignorance, which can fuel fascist views and nefarious tendencies. Imagination is the main battlefield where we decide whether education or manipulations create the future.
The error perpetuated by current views on the role of higher education is that universities are concerned in teaching and learning only with the effort to nurture students' intelligence, provide knowledge, and create skills that make graduates employable. Employability is one of the aims of a higher education, but not the most important and definitely not the only one. If a graduate is not a good citizen, a moral person, one who can function well in a civil society, we have nothing other than a failure. The 20th century showed in horrifying examples what a highly technological society that is indifferent to a shared concept of humanity and social progress can cause. Nazi Germany opened the field of knowledge for space exploration with the advancement of studies on rockets, with scientists such as Wernher von Braun, but this cannot erase the fact that forced labour was used in the development of the V-2 rocket, and crimes against humanity are linked to that project. It is also a warning to all that technology is never value neutral, and misuses should never be dismissed. Knowledge is just a part of learning, and human intelligence is expanded through imagination. We have to remember also that only a certain type of imagination can be used for what we understand as a human-centred, well-rounded education: empathic imagination. Emotions are naturally awakened by imagination, and this particular power of imagination can be used to engage not only students' intelligence or capacity to memorise and complete assignments. It is a pathway that can be used for a more generous and integral view of what we understand when we say that we have, or we offer the chance to engage in, meaningful education.
Education is also tasked with solving the challenge of finding ways for students to avoid boredom and, equally important, to avoid being treated with contempt or condescension by the teacher or by the institution. This challenge may sound self-evident, common sense for anyone involved in education, but there are some unseen obstructions against translating this idea into practice. As we briefly detailed in previous pages, the space of imagination in higher education is already occupied, colonised by powerful narratives that make the effort to find alternatives look absurd or like naive misunderstanding. There is the neoliberal narrative, where students and parents are customers of a commodity named the credential, where higher education is the necessary passage towards a diploma that opens the door to new and better jobs. Of course, this is a quasi-simplified picture of what universities became in the first decades after the WTO event in 1999. Students may also appear under new labels, such as "producers," or "partners" or, rarely, "learners"; this does not change the fact that the new role assigned is that of customers who are a revenue resource for institutions engaged against each other to get a higher place on the most visible rankings. Increasingly, imaginations in education are also colonised entirely by edtech and its jargon, values, and priorities. We can take just one example from a newsletter distributed widely in higher education in the United Kingdom, Australia, and New Zealand: a manifesto op-ed unambiguously titled "Why hyper-personalization is critical for higher ed." The article states from the very beginning that
[W]e live in a hyper-personalised, Netflix, Uber, and DoorDash everything world – a universe of Amazon-like same-day everything.

All businesses – and especially colleges and universities – must leverage hyper-personalisation to remain competitive, to grow, and to deliver on their mission within their communities.
(Kibby, 2022)

This short quote reflects a set of some of the most overused arguments currently circulating in the field of higher education policies, governance, and theories of teaching and learning. The "innovative" part is a depressingly futile exercise of parroting in various forms the idea that universities are like Uber or any other "platform companies," that a Netflix-model for learning is the future, that there is no difference between a highly exploitative company with an unclear and debatable future and an institution of learning and research with so many hundreds of years of development and contributions to humanity. What really changed dramatically is that the field of educational imagination is colonised by greed and cynicism, completely occupied by a logic of unfettered markets and the glorification of ruthless and psychopathic solutions.
The privatisation of the public good and the colonisation of culture and imaginations with neoliberal examples of extreme exploitation of workers and cultural impoverishment stand directly associated with the intentional destruction of civil society and democracy. This is another example of catachresis as an intentional effort to twist semantics and rhetorically redescribe meanings (Parker, 1990). This project is much more complex than a simple appropriation of words against their meanings and signification: to suppress democratic citizenship and responsibility, and to use freedom to restrict, ban, and eliminate all that stands against the construction of extreme imbalances of power. Since Socrates, it became evident that only when we have an educated and responsible citizenry can we expect positive democratic choices. Consequently, the "poorly educated" loved by Donald Trump (Saul, 2016) remain vulnerable to manipulation and control, and adopt resistance to social and cultural changes even when it is evident that they would benefit from social evolution. History is filled with examples in this sense. We can remember that what is considered a key event for the civil rights movement in the United States, the violent clashes at the Chicago Democratic Convention in 1968, is marked by an apparent contradiction, noted by Sirius and Joy in their book Counterculture Through the Ages: "While the New Leftists were chanting 'Power to the people,' polls showed that the people approved of beating the hell out of them (Chicago Democratic Convention, 1968) and even shooting them dead (Kent State, Black Panther Party)" (Sirius & Joy, 2005, p. 57). This is just one of numerous examples that prove that democracy is impossible with an ignorant citizenry, and that education can create the desire to build and maintain civil and democratic societies. Progress requires an in-depth understanding of learning and education that nurtures compassionate imagination and intelligence. The compounded crisis of the world requires a reimagination of education in a much wider sense than a simple delivery of content limited by our previous choices, personal history, and algorithmic aggregation of data. Algorithms, as complex and fascinating as they are at work, cannot properly contextualise, adapt, and understand what is needed for such an education.
The relationship between imagination and AI is also developing in machine learning systems that are able to generate new content, art, and imagery. I remember a very interesting point made by a participant at a conference on learning spaces, on the kind of software used as an example of AI and human imagination and creativity. The participant was an architect involved in various projects with schools and children, who used edtech and online games to generate new solutions for learning spaces. He underlined that his practical experience stands against the assumption that these systems nurture and generate imaginative solutions and creative alternatives. In fact, he underlined, it was surprising for him to see that when children played online games such as Minecraft, where the limit to building new structures and castles is set mostly by one's imagination, children built their structures as Disney-like castles or other old, boring, quasi-similar structures. There was no creativity and imagination in a virtual reality with almost limitless possibilities. This should not surprise us much when we actually know how imagination is impoverished and exploited by corporate narratives. Educators, under the sum of administrative, economic, cultural, and social pressures, cannot afford to nurture a rich and generative imagination.
A comprehensive report on the evolution of AI published in 2021 by Stanford University notes that research on the performance of human-AI teams shows that AI-only teams currently outperform human-and-AI teams. It also notes that at this moment we can expect "several near-term opportunities for AI to improve human capabilities and vice versa" (Littman et al., 2021, p. 48). A similar report, published in 2022 by the Stanford Institute for Human-Centered AI at Stanford University, finds that AI "language models are more capable than ever, but also more biased" (Zhang et al., 2022) and reinforces again the crucial importance of large data sets with quality data for AI.
There is no doubt that we will witness an extensive use of AI in education, and this not only requires an application of AI functionalities to augment human capabilities, but also requires mechanisms able to check and address biases and errors in a field that is infinitely more complex and relevant for our societies than Netflix, Uber, or a shopping mall. The challenge to address this properly cannot be more clear and vital for educators and decision makers: human imagination and humanistic education can save us. If we fail our imaginations and choose to ignore the importance of the common good in education and across our societies, we can only accelerate our demise, as we are currently doing. Noam Chomsky noted in April 2022 that "we're approaching the most dangerous point in human history. . . . We are now facing the prospect of destruction of organised human life on Earth" (Eaton, 2022). The "grim cloud of fascism" across the world is getting thicker and more suffocating, and climate change is regarded as a serious problem only at a declarative level. Nuclear annihilation or climate disaster can lead fast to our demise. The UN report on climate change, which warns us all that we are now living the days of the last chance to address climate change and stop burning fossil fuels, is explicit in saying that our survival will be at risk. The report is grim, but we may have there the most optimistic version; the activist Greta Thunberg noted on Twitter on 5 April 2022 that in

reading the new #IPCC report, keep in mind that science is cautious and this has been watered down by nations in negotiations. Many seem more focused on giving false hope to those causing the problem rather than telling the blunt truth that would give us a chance to act.

We can add that national governments used lobbying and all available forms of influence to redact and change this final report in line with their own economic interests. These economic interests are at the moment also a leading cause of the climate emergency. The solution is to stop our failures of imagination and to try finding new solutions, new ways of thinking, and to drop neo-fascist and extremist neoliberalism for new projects that are capable of considering the common good. An article published in Scientific American makes an important point about one of the most meaningful lessons of the COVID-19 pandemic: extreme individualism and ongoing competition stand as a destructive approach to our interests. It notes that "A microbe revealed the lie of rugged individualism. We are not self-sufficient and independent; we never have been. Our fates are bound together. Taking care of others is taking care of ourselves" (Nelson, 2022, p. 33). As the article concludes, the most important lesson of this pandemic is that we have to design and adopt "national policies of communal care," and reimagine the common good, for our common future and survival.
As noted in previous pages, human life is shaped by what we imagine, and before direct experiences and living situations we make sense of the world and its possibilities with symbols and meanings and with our capacity to imagine. Imagination is one key attribute of our humanity, and stands as a faculty that made hominin evolution unique and dominant over other species. One of the main problems with the conversion of education into a profit-oriented industry is that imagination is not included in the overall narrative of higher education. Profits, market, managerialism, efficiency, competitive advantages, and other such terms stand relevant in the narrative structure of academia. Imagination, inspiration, and compassionate empathy are now fields reduced to simple statements in institutional strategies, usually without any action attached or any specific pathway indicated. In fact, the concept of learning is consistently and increasingly shadowed by a focus on efficiencies of the system, which makes credentials much more important than any intrinsic values and transformations for students. A simple and common model of privatisation was, especially since the 1990s, to reduce funding for the target of privatisation and inflict as much damage as possible through obviously inappropriate policies until the target is in a real debacle. This is the stage when the saviours come with the solution of privatisation, which was in fact the main aim undermining the system from the very beginning. There are books and fascinating case studies on these mechanisms, adopted by countries such as Russia, creating the new "oligarchs." In this chapter it is relevant only to accept an invitation to think about the similarities of this strategy with the last decades of development in higher education: a constant cut of public funding, and an aggressive push to adopt the neoliberal model of governance, even when it became visible that it is not suitable for universities. When the ethos of the campus became an example of institutionalised mediocrity and dysfunction, the new "saviours" from corporate structures come regularly to show what should be done next. The range of solutions is depressingly common and
unimaginative, standing reduced to very few ideas: more neoliberal governance, and a higher dependence on corporate structures for technologies, which are supposedly able on their own to solve all challenges in teaching and learning. The acceleration of AI developments should clearly underline the need to engage in a serious critique of its current approaches and of the solutions proposed. Most importantly, it needs to reconsider the importance of narrative imagination for students, learning, teaching, and faculty. One of the most visible examples of how this can be done is presented by media and the film industry.
There is already a solid tradition in the film industry of engaging imagination, myths, and cultural archetypes to create appealing narratives. For example, George Lucas, the creator of Star Wars, used the works of Joseph Campbell on mythological structures to build his narrative structure for the film that had an enormous impact on popular culture, and also on military projects. In every culture and at any time – from Antiquity through the Middle Ages and to the present – storytelling has been the most efficient method for moral education. Cultivating the power of imagination with meaningful stories and myths was the central didactic method for civic education in classical Greek culture. Stories enable us to imagine the others and to double our amount of information with the capacity to understand the others, empowering individuals to become part of the Polis and of the world.

Narrative imagination – explains Martha Nussbaum – is an essential preparation for moral interaction. Habits of empathy and conjecture conduce to a certain type of citizenship and a certain form of community: one that cultivates a sympathetic responsiveness to another's needs and understands the way circumstances shape those needs, while respecting separateness and privacy.
(Nussbaum, 1997, p. 90)

Mythological structure maintains the power to invite and engage our imaginations; as Plato noted in The Republic, "surely the myths are, as a whole, false, though there is truth in them too." The "truth" in myths is maintained in cultural motifs and archetypes that are currently ignored by higher education and vastly exploited by marketing, advertising, and other industries to attract and create connections with people. Lucas asked an academic what structures are inherently appealing for people, directly related to primordial stories and visions about time and being; the Hero, the embodiment of evil, and the universal balance were built in line with what can be found in Campbell's book The Hero with a Thousand Faces (Campbell, 1968). Education can learn from this example and find new and superior energies to create more engaging narratives, where the power of imagination goes well beyond the aim of employability or efficiencies of study time. These new models are much more important and, at the same time, more appropriate for a world of uncertainties and unprecedented
challenges, where unpredictability is part of our daily lives. A flexible, educated mind of responsible and active citizens, and a compassionate humanity, are what can turn unpredictability into new opportunities and address the sum of crises confronting us now, at the beginning of the 21st century.

Notes
1. Remarks by US President Obama at YSEALI Town Hall. Souphanouvong University, Luang Prabang, Laos, September 7, 2016. The White House, Office of the Press Secretary. https://obamawhitehouse.archives.gov/the-press-office/2016/09/07/remarks-president-obama-yseali-town-hall
2. Obama, B. (2016, December 10). Now is the greatest time to be alive. WIRED. www.wired.com/2016/10/president-obama-guest-edits-wired-essay/
3. BBC News. (2016, September 30). Jewish leaders react to Rodrigo Duterte Holocaust remarks. www.bbc.com/news/world-asia-37515642
4. Human Rights Watch. (2019, August 11). Australia: Press Laos to protect rights. Dialogue should address enforced disappearances. Free Speech. www.hrw.org/news/2019/08/11/australia-press-laos-protect-rights
5. Dr Pangloss is the foolish tutor and inept philosopher in Voltaire's classic tale of Candide (1759).
6. Abramowitz, M. J. (2018). Freedom in the world 2018. Democracy in Crisis. https://freedomhouse.org/report/freedom-world/2018/democracy-crisis
7. Repucci, S., & Slipowitz, A. (2022). The global expansion of authoritarian rule. Freedom in the world 2022. Freedom House. https://freedomhouse.org/sites/default/files/2022-02/FIW_2022_PDF_Booklet_Digital_Final_Web.pdf
8. Crawford, K. (2017, March 12). Dark days: AI and the rise of fascism. 2017 SXSW Conference. https://youtu.be/Dlr4O1aEJvI
9. Herf, J. (1984). Reactionary modernism: Technology, culture, and politics in Weimar and the Third Reich. Cambridge University Press.
10. Giroux, H. A., & Casablancas, J. (2019). The terror of the unforeseen. Los Angeles Review of Books.
11. Gambetta, D., & Hertog, S. (2016). Engineers of jihad: The curious connection between violent extremism and education. Princeton University Press.
12. Kruglanski, A. W., & Orehek, E. (2011). The need for certainty as a psychological nexus for individuals and society. In Extremism and the psychology of uncertainty (pp. 1–18). https://doi.org/10.1002/9781444344073.ch1
13. Ananthaswamy, A. (2015, August 5). What if . . . Intelligence is a dead end? New Scientist. www.newscientist.com/article/mg22730330-900-what-if-intelligence-is-a-dead-end/
14. Sizer, T. R., & Kirp, D. L. (1970). Technology and education: Who controls? Academy for Educational Development. https://eric.ed.gov/?id=ED039732
15. Heidegger, M. (1969). Discourse on thinking. Harper & Row.
16. Jasanoff, S., & Kim, S.-H. (2015). Dreamscapes of modernity: Sociotechnical imaginaries and the fabrication of power. The University of Chicago Press.
17. Nordquist, R. (2020, August 27). Catachresis (Rhetoric). www.thoughtco.com/what-is-catachresis-1689826
18. JISC. (2021). AI in tertiary education: A summary of the current state of play. JISC. https://repository.jisc.ac.uk/8360/1/ai-in-tertiary-education-report.pdf
19. Watters, A. (2021). Teaching machines. The MIT Press.
20. Dunn, T. (2020). Inside the Swarms: Personalization, gamification, and the networked public sphere. In J. Jones & M. Trice (Eds.), Platforms, protests, and the challenge of networked democracy. Rhetoric, politics and society. Palgrave Macmillan. https://doi.org/10.1007/978-3-030-36525-7_3
21. OECD. (2022). OECD framework for the classification of AI systems. https://doi.org/10.1787/cb6d9eca-en
22. Kibby, B. (2022, April 6). Why hyper-personalization is critical for higher ed. eCampus News. www.ecampusnews.com/2022/04/06/why-hyper-personalization-is-critical-for-higher-ed/
23. Parker, P. (1990). Metaphor and catachresis. In J. Bender & D. E. Wellbery (Eds.), The ends of rhetoric: History, theory, practice. Stanford University Press.
24. Sirius, R. U., & Joy, D. (2005). Counterculture through the ages: From Abraham to acid house. Villard.
25. Littman, M. L., Ajunwa, I., Berger, G., Boutilier, C., Currie, M., Doshi-Velez, F., Hadfield, G., Horowitz, M. C., Isbell, C., Kitano, H., Levy, K., Lyons, T., Mitchell, M., Shah, J., Sloman, S., Vallor, S., & Walsh, T. (2021). Gathering strength, gathering storms: The one hundred year study on artificial intelligence (AI100) 2021 study panel report. Stanford University. http://ai100.stanford.edu/2021-report
26. Zhang, D., Maslej, N., Brynjolfsson, E., Etchemendy, J., Lyons, T., Manyika, J., Ngo, H., Niebles, J. C., Sellitto, M., Sakhaee, E., Shoham, Y., Clark, J., & Perrault, R. (2022). The AI index 2022 annual report. AI Index Steering Committee, Stanford Institute for Human-Centered AI, Stanford University.
27. Eaton, G. (2022, April 6). Noam Chomsky: "We're approaching the most dangerous point in human history." New Statesman. www.newstatesman.com/encounter/2022/04/noam-chomsky-were-approaching-the-most-dangerous-point-in-human-history
28. Nelson, R. G. (2022). A microbe proved that individualism is a myth. Scientific American, 326(3), 32–33. https://doi.org/10.1038/scientificamerican0322-32
29. Nussbaum, M. C. (1997). Cultivating humanity: A classical defense of reform in liberal education. Harvard University Press.
30. Campbell, J. (1968). The hero with a thousand faces (2nd ed.). Princeton University Press.
8
SCENARIOS FOR HIGHER EDUCATION

When we talk about edtech and AI, we have to accept that at a time of compounding crises we need the courage to ask – after so many decades of intense use of learning analytics, LMSs, plagiarism detection software, and all other edtech applications – why education is getting worse if all these solutions work so well, as the edtech sector is claiming. A research report released by the University and College Union in March 2022 shows that in the United Kingdom two-thirds of university staff were considering leaving higher education, and that university staff are "demoralised, angry and anxious about the future of higher education itself" (UCU, 2022, p. 2). The report concludes that "the results of the survey simply reinforce what our members have been telling us for years – that they and their colleagues have reached breaking point" (p. 9). At the same time, in the United States we witness the same feeling of imminent collapse, with The Chronicle of Higher Education noting that faculty are facing "common challenges: Far fewer students show up to class. Those who do avoid speaking when possible. Many skip the readings or the homework. They have trouble remembering what they learned and struggle on tests." Faculty describe "a disconcerting level of disconnection among students, using words like 'defeated,' 'exhausted,' and 'overwhelmed'" (McMurtrie, 2022). This is part of a trend that goes back decades before the pandemic, which can be used as another convenient excuse. Research conducted in the United States by Richard Arum and Josipa Roksa concludes that

[a]n astounding proportion of students are progressing through higher education today without measurable gains in general skills as assessed by the CLA. While they may be acquiring subject-specific knowledge or greater self-awareness on their journey through college, many students are not improving their skills in critical thinking, complex reasoning, and writing.
(Arum & Roksa, 2011, p. 36)

Specifically, 45% of students did not demonstrate a significant improvement in learning during the first two years of college, and 36% registered no significant improvement over four years of college, proving that close to half of all students did not learn much in the first two years of their studies in higher education and over a third had not learned much by their graduation. Significantly, students showing no significant learning still had relatively good marks. The Chronicle of Higher Education noted that this extensive and well-documented study

didn't reveal anything that college leaders didn't know, in quiet rooms behind closed doors, all along. Academe was so slow to produce this research because it told the world things that those in academe would rather the world didn't know.
(Carey, 2012)

This reassertion was followed by another book, which shows that this percentage of poorly prepared students became what the authors called "aspiring adults adrift" (Arum & Roksa, 2014): underemployed, unemployed, and with poor social integration. It is a profound crisis for institutions that are incapable of, and uninterested in, learning, finding meaning in knowledge, and creating active and responsible citizens, or at least clarifying what a meaningful life in society, community, and the economy is, and why the difference between right and wrong still matters.
Higher education stands reduced to a transactional relationship where students learn that they'll become employable, which is a lie, an absurd claim, and a terrible depletion of any project of education. This reality, which is extensively documented in research, books, journal articles, and reports, does not affect edtech's enthusiasm in claiming that the new app, system, or platform will "unleash" the potential of our institutions of higher education. Learning analytics and predictive analytics have promised for decades that "personalised" education is the key to engage students, to make better teachers, and to facilitate an optimal design for curriculum. The OECD notes that "[t]he goal of learning analytics is to use the increasing amounts of data coming from education to better understand and make inferences on learners and the contexts which they learn from" and also clarifies the potential of predictive analytics as:

Intelligence augmentation systems, also called decision support systems, communicate information to stakeholders such as teachers and stakeholders in a way that supports decision-making. While they can simply provide raw data, they often provide information distilled through machine-learning models, predictions, or recommendations. Intelligence augmentation systems often leverage predictive analytics systems, which make predictions about students' potential future outcomes, and – ideally – also provide understandable reasons for these predictions.
(Baker, 2021, p. 46)
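
To make this mechanism less abstract, here is a minimal sketch of the kind of predictive model described above, written in Python with scikit-learn; the features, data, and model are entirely hypothetical, chosen only for illustration, while real institutional systems are vastly larger, trained on millions of records, and usually proprietary and opaque:

# A minimal, hypothetical sketch of a predictive learning-analytics model.
# Feature names and data are invented for illustration; real edtech systems
# are proprietary, far larger, and rarely this transparent.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-student features logged by an LMS:
# [logins_per_week, assignments_submitted, forum_posts, avg_quiz_score]
X_train = np.array([
    [5, 8, 12, 0.85],
    [1, 2,  0, 0.40],
    [3, 6,  4, 0.70],
    [0, 1,  1, 0.35],
])
y_train = np.array([1, 0, 1, 0])  # 1 = completed the course, 0 = dropped out

model = LogisticRegression().fit(X_train, y_train)

new_student = np.array([[2, 3, 1, 0.55]])
dropout_risk = model.predict_proba(new_student)[0, 0]  # probability of class 0
print(f"Predicted dropout risk: {dropout_risk:.0%}")

# The "understandable reasons" that such systems ideally supply reduce,
# in this simple model, to inspecting the learned coefficients:
features = ["logins_per_week", "assignments", "forum_posts", "quiz_score"]
for name, weight in zip(features, model.coef_[0]):
    print(f"{name}: weight {weight:+.2f}")

Even in this toy form, the design choice at the heart of such systems is visible: the student is reduced to a handful of logged behaviours, and the "understandable reason" for a prediction is a numeric weight on a proxy variable, not an understanding of the learner.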

If only the silver bullet of "analytics" and big data were the magical button that could solve curriculum, teaching, and learning. It was not, and there is no reason to believe that it will be. A new promise is now taking centre stage: the use of AI in education. A recent report of the Center for American Progress notes that "Artificial intelligence can help students learn better and faster when paired with high-quality learning materials and instruction" (Jimenez & Boser, 2021).
In the United Kingdom, the usual suspects opened a new national centre for AI in higher education:

The initiative – which has been welcomed by global technology companies including Amazon Web Services, Google, and Microsoft – is led by Jisc and supported by innovation-focused universities and colleges throughout the UK. . . . The national centre supports the government's AI strategy, which the digital secretary, Oliver Dowden, announced in March, saying: "Unleashing the power of AI is a top priority."
(JISC, 2021)

But AI in higher education is not at all a new idea, nor is the promise of "unleashing" this technological panacea. A book published in 1987 on AI in higher education opens with this paragraph:

For over twenty years Artificial Intelligence has been recognized as an established discipline interacting with computer science, engineering, human sciences and many other areas. The latest development proves that Artificial Intelligence offers methods which may be successfully used in the field of education.
(Marík et al., 1990, p. V)

The Prague conference in 1989 focused on three main topics: the teaching of AI in higher education, the uses of AI in higher education, and research and development in AI in higher education. We can easily imagine this conference dealing with exactly these issues, starting from the premise that AI may be "successfully used in the field of education" to unleash its magical power. From 1990 to the third decade of the 21st century, we have heard the same promise that AI will be the solution for universities and colleges across the world. What has happened since? The announcement of AI's blooming in education has been played on repeat for the last six or seven decades. Why do we hear the same words, and why are we told that this is "innovation"? More importantly, why do academics and university administrators fall for transparent narratives that clearly aim to colonise and control spaces once controlled by academics, and to extract as much data and money as possible? The perplexed and gullible posture is now taken as a visionary stance and an opening to progress. We have reached a point where we have to reconsider what is "progress" and who controls the results; the AI revolution may be a possible start in this new direction.
If AI is used mainly for assessment, learning analytics, and predictive analytics in the current forms, we can only expect some serious problems ahead, accelerated, multiplied, and intensified. The predictive function associated with personalisation is linked to examples of abusive and secretive practices that jeopardised the interests of users in insurance and financial services, social policies (e.g. the robodebt scandal in Australia), surveillance, and policing. Facebook used the "like" button to manipulate opinions or sell data collected to marketers. Amazon was fined by EU regulators for alleged breaches of EU privacy laws. The predictive functions sold as services under the label of Amazon Forecast are based on a vast collection of private data, as an article in The Guardian notes: "Those who have requested their data from Amazon are astonished by the vast amounts of information they are sent, including audio files from each time they speak to the company's voice assistant, Alexa" (O'Flaherty, 2022). AI is immensely multiplying the power to use technology to manipulate human behaviour. Research findings demonstrate how AI can be effectively used to weaponise our vulnerabilities and exploit our preferences and habits to find patterns and complex ways to manipulate our choices and behaviour (Dezfouli et al., 2020). The fact is that edtech is rarely, if ever, benevolent; massive corporations use data collected from students, without their explicit agreement, to monetise and extract maximum value for corporate stakeholders. Profits and market positions guide the action of corporate entities, and there is no reason to believe that in education this works somehow differently. This is why it is vital to find ways to use AI in education in transparent forms, within complex and ethical frameworks. If AI is used in education as it is currently designed and favoured by commercial entities, with no natural interest in the quality of learning, students' long-term interests, and well-being, then we leave institutions of higher education vulnerable for the future. Considering the interconnected crises now shaking higher education, we can expect that a class action for the misuse of students' data, or the accelerating erosion of trust in universities, can amount to the final and fatal blow. To avoid this, it is important to gain control over edtech applications, eliminate black boxes, establish mechanisms for an ongoing scrutiny of edtech's data practices, and provide algorithmic transparency, including the right to appeal AI's decisions.
The hubris of software engineers and the propagandistic persistence of tech investors are at the moment blurring the positive relationship we can have with technology. Since the early 1970s, it was explicitly stated that "as our understanding of the history of technology increases, it becomes clear that a new device merely opens a door; it does not compel one to enter" (White, 1974, p. 28). Technology is just a tool; a complex one, but just a tool that can be used to fix or to wreck. It can help us build foundations for complex future structures, or we can deceive ourselves that technology in itself is a solution for our problems. Human history is filled with examples of disasters created by a misunderstanding of technology. Not only is technology a tool for humans; we also have to remember that the tempting narrative stating that technology is neutral is absurd from a historical and even an engineering perspective. This falsehood, of neutral intelligent machines, is emerging as an easy way to hide AI's flaws and risks, the abuse of trust, the vast possibilities of manipulation, and the concentrations of power conferred to those who control the AI solutions. This myth is repeated and sustained by well-funded lobbying groups, by celebrities of Silicon Valley and investors, or by "expert" international organisations. The persistence in stating something so easily refuted by independent studies conducted by reputable researchers and organisations is not abating, but neither is it making the claim true. One of the most notable experts in the history of technology, Melvin Kranzberg, a Professor of the History of Technology at Georgia Institute of Technology, noted that human decisions are determined not solely by scientific findings and rational mechanisms, but also by our humanity: emotions, fears and hopes, preferences, and aversions. He noted that "only if information and software systems take into consideration human feelings and capabilities – the 'human hardware' – can they capitalize fully on the computer's growing informational and analytical capacity" (Kranzberg, 1990, p. 3). When we sacrifice meaning and subsume the aspirations of humanity to the cult and deification of technological solutions, the effect is an ongoing erosion of foundations and overlapping crises. Technology has long promised solutions for our ecological crises, but we have reached a point where the science tells us that parts of our planet will become uninhabitable and that we have one last chance to survive if we stop the emissions causing climate change. Technology is useless if the human mind is not prepared to control and steer it. If we wait for a new miraculous technological solution for our crumbling world order, for the rise of fascism, to stop the horrors of wars, for ecological and social systems that are crumbling into fragments, it should be obvious that we will fail. AI, an extraordinary development of technology, is also a tool that is shaped by humans, by what we decide to be part of the data collected, by our preferences and unconscious prejudices, by our emotions and cultural horizon. AI is also a technology that achieves what our minds imagine it can reach, and it can hinder or ruin when our expectations are unrealistic or ignore the real limitations of this complex tool.
In this complex and interconnected system of cultural systems, economic and demographic determinants, and political decisions, it is natural to expect that universities will be changed by the current turmoil of our systems and by technological advancement. AI is an integral and important part of the futures of higher education, which can take radically different routes. Here is where the history of technology can help us see the future of AI in universities, starting from the fact that technological progress is a history of extraordinary progress or of complete disasters, sometimes on the same technological advancement. Social and political systems are also directly affected within an interrelated system when one or more parts plunge into disarray. For example, we can look at the crisis of democracy across the world and the role of technology in this context. The enthusiasm for social media and new developments in computing and cybernetics fuelled the certainty that technology would promote democracy across the world, would break barriers between people and cultures, and would help us reach mutual understanding. Techno-democratic optimism was strident in the context of what was called the Arab Spring, in 2011. In 2012, Mark Zuckerberg noted in a public letter that

Today, our society has reached another tipping point. . . . There is a huge need and a huge opportunity to get everyone in the world connected, to give everyone a voice and to help transform society for the future. . . . At Facebook, we build tools to help people connect with the people they want and share what they want, and by doing this we are extending people's capacity to build and maintain relationships. People sharing more – even if just with their close friends or families – creates a more open culture and leads to a better understanding of the lives and perspectives of others.
(Zuckerberg, 2012)

In 2019, in a speech delivered at the Anti-Defamation League, Sacha Baron Cohen summarised how social media works in our democracies:

Facebook, YouTube and Google, Twitter and others – they reach billions of people. The algorithms these platforms depend on deliberately amplify the type of content that keeps users engaged – stories that appeal to our baser instincts and that trigger outrage and fear. It's why YouTube recommended videos by the conspiracist Alex Jones billions of times. It's why fake news outperforms real news, because studies show that lies spread faster than truth. And it's no surprise that the greatest propaganda machine in history has spread the oldest conspiracy theory in history – the lie that Jews are somehow dangerous. As one headline put it, "Just Think What Goebbels Could Have Done with Facebook."
(Cohen, 2019)

We know now that Facebook was instrumental in the organisation and initiation of mass killings and ethnic cleansing (e.g. Myanmar, Ethiopia). Social media is increasingly revealed to be responsible for the instigation of hate crimes, and it is working as a nursery platform for dangerous conspiracy theories, for fascist and extremist movements. Social media failed to deliver on all its generous promises; the Internet is not conducive to nurturing democratic ideas and a civil society. In an extensive report assessing 65 countries, Freedom House notes that

Disinformation and propaganda disseminated online have poisoned the public sphere. The unbridled collection of personal data has broken down traditional notions of privacy. And a cohort of countries is moving toward digital authoritarianism by embracing the Chinese model of extensive censorship and automated surveillance systems. As a result of these trends, global internet freedom declined for the eighth consecutive year in 2018.
(Shahbaz, 2018, p. 1)

This is part of a concerning trend that is documented by research and narratives of civil society.
In 2021, the same organisation provided extensive data and evidence to reveal that the antidemocratic trend is only increasing: "attacks on democratic institutions are spreading faster than ever in Europe and Eurasia, and coalescing into a challenge to democracy itself" and, for the past decade, "amid the erosion of the liberal democratic order and the rise of authoritarian powers, the idea of democracy as an aspirational end point has started to lose currency in many capitals" (Csaky, 2021, p. 1). The idea of democracy itself is questioned and undermined; the constant erosion of the idea of democratic citizenship and the rise of authoritarian and antidemocratic ideologies remain, in reality, far from the main areas of interest for universities. The abandonment of social and educational responsibilities by academia is a result of contradictions and tensions that were ignored for decades; regardless of motives, universities maintain their social responsibility simply at a declarative, rhetorical level, or simply ignore it and focus on other areas of interest, such as corporate management, university rankings, and institutional positioning on the market. In fact, the erosion of democracy is visible within academia, with universities structured on increasingly rigid and antidemocratic hierarchies of power, using fear and precarious forms of employment to quell dissent and independent thinking. It is not even necessary to explore the vast literature and evidence-based arguments in this sense: a simple look at the geographies of the campus reveals that executives are spatially isolated from students and faculty, occupying the most modern and comfortable buildings on campus, often far from the areas where teaching and learning happen. In general, these spaces reflect opposition to and isolation from the rest of the campus, working as a reminder that faculty and students step into a space of power and control; the architectural structure tells all that the leader controls all resources and futures in that space. Neoliberal management solutions and corporate fads changed universities to work as organisations where the ethos of the campus is not academic, focused on knowledge or academic ideals with direct relevance for students and community; academia is a business that is relevant as a part of the market, and education is a consumer commodity.
The constant erosion of intellectual life in academia and the excessive focus on
quantitative criteria and various forms of exploitation are the open secrets of
higher education. Public personalities draw attention to Big Tech's abuses and its
responsibility for the accelerating erosion of democracy, and pinpoint clear
responsibilities for the climate emergency, but the campus remains quiet, only
accidentally interested in scrutinising the complex impact of edtech monopolies
on education.
These developments make it even more important to think about AI in the context
of the futures of higher education, especially at a time when we can see that
probably the main failure of the university is a failure of imagination. As previously
noted, the aim of higher education is reduced to some of the most basic, crude,
and dull outcomes, unimaginative and uninspiring technicalities that invite
cynicism and disengagement. There is no power to inspire because there is no real
desire to awaken the love for learning and to fire up the imaginations of those
who learn and teach, students and faculty alike. In fact, the space of imagination
in higher education is left vacant for narratives that are fundamentally dangerous
for our future: conspiracy theories and anti-science delusions, fascist ideologies
and antidemocratic demagogues. The narrative context is not inviting for human
imagination: it is a story of crude commercial transactions, where graduates
purchase a certificate in a cynical and disheartening exchange, and where a lie is
necessary to maintain the facade presenting graduation as the result of higher
learning and consistent work, not simply as the purchase of a credential.
Imaginaries and narrative structures are relevant for universities merely as
marketing tools for promotional materials, within a puzzle of new public
management myths and technicalities, slogans and intentions, and instrumental
promises such as "employability." The love for learning, inspiration, and curiosity;
the narratives of students as human beings engaged in learning; and the expansion
of vision and understanding are crushed under the obsessive focus on big data,
surveillance and predictive analytics, edtech gadgets, and software applications.
Even academic integrity and plagiarism are approached from an instrumental
position, with a focus on punishments and institutional consequences, treating
students as inert capital that requires proper formation and can be managed as
needed. The solutions adopted by many universities are dystopian behaviourist
fixes, where eye-movement tracking software identifies students as supposedly
cheating when they do not stare at the camera; in some cases, students reported
that the eye tracker used in these proctoring solutions flagged them for cheating
as they were crying during the test. Looking to the side while thinking about a
question is enough to be flagged as a cheater – an academic term for a thief of
intellectual property. Blaming an innocent student for stealing is not even remotely
connected with the idea of education or higher learning, as it is obviously absurd
to use eye movements to measure learning, honesty, or focus during a test. These
absurd solutions are not only adopted without serious exploration of the
educational and ethical implications of extreme surveillance in the name of
academic integrity, but they remain in use even after testimonials and research
proved how flawed and dangerous for learning they are. Just think of the impact
of these proctoring technologies on students with a disability, especially as some
do not want to officially declare their conditions. AI proctoring solutions are, at the
moment, dystopian and misleading edtech applications that are responsible for
traumatic experiences for students and work as a clear endorsement of mutual
mistrust as the foundation of the educational project of higher education.
The constant increase of plagiarism cases in universities remains a largely
unexplained phenomenon: why is cheating rising in higher education? Why do
universities largely fail to stop it? Why has this become – over the past decades – a
common problem for universities and colleges, creating a market for edtech
companies specialised in "academic integrity software"? A possible explanation is
that the overall approach to plagiarism is itself the main cause, rather than the
generic student who is so readily blamed and suspected. This new relationship
invites a reaction against an oppressive force interested only in bureaucratic
arrangements, one that cheats on its delivery and offers while pretending to care
about what is learned. If "customers" are led to believe that payment is the
guarantee of a commodity and later discover that what was sold to them is far
from what was presented in glowing terms, the recourse to cheating (plagiarism)
against the vendor can be seen as a natural reaction to deceptive and abusive
practices. It is a way to react, consciously or just intuitively, to a fraud packaged as
higher learning; universities have lost interest in learning, placing the focus now
on rankings, profits, and market shares. It is immediately evident that institutions
of higher education do not even consider the need to clarify – and help students
see – why learning itself matters, rather than a certain diploma, credential, or
bureaucratic recognition. It is disingenuous to hear the complaints from academia
that students cheat, and highly offensive to see how students are imagined in this
common story: they have to learn that cheating is bad, or get an intensive course
on "academic integrity," designed especially for international students, who are
supposedly not familiar with standards of academic integrity. This can be explained
as racist contempt, looking at students as inert beings, educated in schools where
teachers apparently told them: "this is your assignment, go home and copy/paste
the answers." Higher education now imagines it can fix everything with student
training and intensive courses on academic plagiarism; an arrogant and
self-sufficient position. Students rarely protest, as resistance is obviously much less
effective than indifferent compliance. Some students can use these experiences to
learn how to cheat better, avoiding the obvious mistakes that get one caught.
Learning in higher education is not determined by fear of punishment, by customer
relations, or even by some new glitzy technology. Higher learning is determined by
the way students are imagined and addressed as human beings, with all that this
entails. Students can be sick and depressed, can be distracted or focused, can have
problems related to their own contexts that cannot be read at all by eye-tracking
technologies, surveillance, and analytics. Decontextualised software misses all
these crucial aspects.
A new narrative, dominated by tech solutions and disinterest in human attributes,
forms the background for the constant erosion of academic culture and the crisis
of higher education. The challenge is to see the possible futures of academia,
especially in the context of the rapid technological advancements recorded in AI.
The main benefit of thinking about possible scenarios for the future is that it
creates a narrative of a possible future, which can engage our imagination and
help us explore the implications of our decisions. Imagining the future can build
and test alternatives in a safe way. Thinking about the future is especially crucial
for an optimal integration and use of AI and other edtech solutions, and provides
opportunities to explore challenges and potential approaches for a new
architecture of ideas and practice. Our "stories about the future" play an important
role in building a better future, revealing possible new approaches while creating
possibilities to consider the challenges and obstacles ahead. Some scenarios are
more plausible than others, but it is important to explore the entire range for an
informed choice, or for a proper consideration of the implications for the future
as they are determined by our current preferences.
The scenario identified here is the most plausible, based on trends already set in
higher education across Western Anglophone countries and in other advanced
systems of higher education, such as those of Singapore, China, and the European
Union. These university governance models are widely shared and shape
managerial arrangements for universities across the world. This scenario will be
further shaped by the market-oriented university, as a natural development of
current trends and institutional choices. In this paradigm, universities find that
securing funding, budget allocation, optimal market position, and controlling costs
work as the most important and influential rationale for these institutions.
Permanent positions for faculty will be entirely eliminated, in favour of limited-term
contracts and casual work. AI systems will replace teaching and most of the other
functions formerly held by academics, creating a very different experience for
students, with a direct focus on assessment covered by multiple-choice questions
and other standardised tasks that fit the capacities of algorithms and clearly set
patterns. The role of academics on campus will be entirely different, with very
limited numbers working to augment AI systems rather than work with students,
taking the role of technical supervisors and academically advanced operators of
edtech, in some cases covering aspects that require human oversight or edtech
calibration.
In this scenario, teaching, tutoring, and assessment are controlled and delivered
by AI systems in intensive short courses, with a much larger adoption of
micro-credentials and short-term learning, for "intensive" courses designed to
serve the needs of employers and accreditation bodies. This evolution represents
just the consolidation and acceptance of a constant trend of the past decades,
shaped by a decline in study time for university students and an increase in leisure
time. The motivation for learning and student engagement further eroded, and
short, intensive, vocational courses became much more suitable for demand and
alignment with market needs. Research has documented this trend well, and for a
relatively long time; we can take the example of "Leisure College, USA: The Decline
in Student Study Time," a study completed by Philip Babcock and Mindy Marks on
study time for students in American universities:

In 1961, the average full-time student at a four-year college in the United
States studied about twenty-four hours per week, while his modern
counterpart puts in only fourteen hours per week. Students now study less
than half as much as universities claim to require. This dramatic decline in
study time occurred for students from all demographic subgroups, for
students who worked and those who did not, within every major, and at
four-year colleges of every type, degree structure, and level of selectivity.
Most of the decline predates the innovations in technology that are most
relevant to education and thus was not driven by such changes. The most
plausible explanation for these findings, we conclude, is that standards
have fallen at postsecondary institutions in the United States.
(Babcock & Marks, 2010, p. 1)

Researchers also found that the time allocated to leisure activities increased by an
average of nine hours per week between 1961 and the 2000s.
In fact, education fails to address the most obvious challenge related to
intelligence: at a time when AI is rapidly progressing, human intelligence is
declining, in a clearly defined trend. IQ levels, the most common measurement of
human intelligence, recorded a constant increase for most of the 20th century, a
phenomenon labelled the "Flynn effect" after James Flynn, the intelligence
researcher who discovered and documented this trend. The constant increase in
IQ scores slowed in the 1970s, and new research found a reversal of the trend; IQ
scores have been decreasing since. Peter Dockrill summarised this development in
an article for Science Alert: "An analysis of some 730,000 IQ test results by
researchers from the Ragnar Frisch Centre for Economic Research in Norway
reveals the Flynn effect hit its peak for people born during the mid-1970s, and has
significantly declined ever since" (Dockrill, 2018). The overall decline of IQ for the
world's population was reconfirmed in a study published in 2018 by James Flynn
himself, the scientist who gave the phenomenon its name. He notes in this study
that "The IQ gains of the 20th century have faltered," and warns that "during the
20th century, society escalated its skill demands and IQ rose. During the 21st
century, if society reduces its skill demands, IQ will fall" (Flynn & Shayer, 2018).
We can consider IQ scores a relatively limited perspective on human intelligence,
and consequently question the hypothesis of a decline in human intelligence.
However, new and extensive research conducted on other variables related to
human intelligence also reflects a certain decline. For example, research on two
key indicators, the flow of ideas and research productivity, also indicates a
constant decline, well documented in 2021 by researchers associated with ETH
Zurich, the University of Geneva in Switzerland, the Academy for Advanced
Interdisciplinary Studies in China, and the Tokyo Institute of Technology in Japan.
They conclude that regardless of

what time bias correction factor is used, it is clear that scientific knowledge
has been in decline since the early 1970s for the Flow of Ideas indices and
since the early 1950s for the Research Productivity indices until 1988, the
end of our database. Moreover, because we use the general population as a
proxy for the number of researchers, this decline is in fact underestimated
as mentioned above.
(Cauwels & Sornette, 2022, p. 10)

Other findings confirm that this trend has not reversed in recent decades. These
conclusions invite a more in-depth analysis; their complexity and implications
open a line of inquiry that can be properly covered only by other books. This is not
a recent discovery: it was noted by one of the founders of cybernetics, Norbert
Wiener, in the early 1950s:

I consider that the leaders of the present trend from individualistic research
to controlled industrial research are dominated, or at least seriously
touched, by a distrust of the individual which often amounts to a distrust
in the human. . . . The general statistical effect of an anti-intellectual policy
would be to encourage the existence of fewer intellectuals and fewer ideas.
(Wiener, 1994, p. 22)

It is a surprising prediction of higher education in the 21st century, and also a
warning that this is a dangerous direction.
Focusing on education futures and AI, we can see that we will most probably have
a strong trend of declining human intelligence. Research data proves that climate
change and its impact will negatively affect billions across the world, and that very
large numbers of people have poor nutrition, at a time when we also see a decline
in school education and educational values. These variables are all connected to a
rise or decline in intelligence and in our flexibility of mind. Universities stand at
the crossroads of these changes, and the current indifference to what is significant
for learning and teaching – to what students really learn and explore – can only
accelerate the further decline of quality and relevance across higher education.
There is the expectation and the promise that we are on the cusp of AI solutions
that will solve the task of teaching, learning, and innovation in higher education.
So far, this is a baseless promise and a moving target; the closer we think we are,
the further it recedes. Rather than focusing on nurturing innovative minds –
well-informed and responsible individuals with a well-rounded education – the
world of education is constantly waiting for the moment when computers, at the
push of a button, will solve what are treated as the marginal concerns of learning,
teaching, and innovation by institutions focused more on corporate designs,
business solutions, and the sale of their services as commodities. The fact that
these promises have not changed in essence since the 1960s does not bother
university administrators and academics; the next software will supposedly
provide just what we need. This approach is encouraged from various directions:
by corporations with key interests in the market represented by schools,
universities, and other tertiary education institutions; by international
organisations such as the OECD or the World Bank; and by some well-funded
academic groups. For example, a joint study between Oxford University and Yale
University does not explore whether AI will exceed human capabilities, but when
this will happen; the supreme confidence that AI will replace human abilities is not
even questioned. The study presents as inevitable that AI will replace humans in
virtually all fields within the next 120 years. In fact, the study presents the
opinions of engineers and researchers in the field of AI, underlining that the
"accurate forecasting of transformative AI would be invaluable" (Grace et al.,
2018). The study is merely a reflection of various opinions rather than of evidence
and data – a puzzle of expert views that start from the hypothesis that AI will
replace and exceed human intelligence. This type of literature, along with an
avalanche of "expert opinions" from business groups, consultancy firms, and
corporate think tanks, is accepted without serious reflection, academic
inquisitiveness, or intellectual curiosity.
Indeed, the irony of AI is that we witness an accelerated development of
technology at a moment in human history when we see human intelligence
declining, while the possibility for AI to exceed the human complexity of thinking,
emotion, contextualisation, and imagination is very far from being decided. The
creation of a culture of obedience, groupthink, and submissive mediocrity will
drastically impact universities over the next decade.
One of the most important implications of these trends is that we are at a moment
when we collectively face some of the most severe challenges humanity has ever
encountered. We have the rise of extremism and fascism across the world, and
authoritarian solutions directly linked to historical nightmares are becoming
attractive alternatives for an increasing number of people. We have new wars that
media and officials increasingly associate with the start of a Third World War. We
have the global climate crisis, and recent years have brought not only
unprecedented extremes in heat waves and aberrant weather events but also
direct tests of the limits of human survivability. All these major crises and
challenges come at a time when we have not only a constant decline in innovation
and intelligence, but also a fundamental existential crisis for universities. The
neoliberal model drastically changed universities; the President of Ireland once
noted that universities are now "market-driven," which has led to an attrition of
range and depth, pushing higher education towards ruination. The President of
the Republic of Ireland, Michael D. Higgins, made these observations at the All
European Academies Conference in June 2021, noting that it is not an
overstatement to say that

the very raison d'être of the university, I believe, is at stake. Academics all
over the world should be concerned that future generations may weep for
the destruction of the concept of the university that has occurred in so
many places, which has led to little less than the degradation and
debasement of learning, the substitution of information packaging for a
discursive engagement or search for knowledge.
(Higgins, 2021)

He also noted that university leaders

describe and introduce themselves as CEOs of multi-million euro enterprises
rather than as academics first and foremost whose main responsibility
might be to defend and cultivate the intellectual life of their academic
institutions, facilitating an enriching learning environment for staff and
students alike.

In a campus culture permeated by market interests, defined by narrow
utilitarianism and the new extremes of New Public Management, we must ask
how it is possible to have graduates with independent and critical thinking
abilities – engaged and responsible citizens with democratic values – when
teachers have lost, or cannot freely exercise, these attributes. The answer is that
we cannot realistically expect graduates with a well-rounded education and with
intellectual, civic, and moral responsibilities when academics work in a culture
intolerant of dissenting views, turned against different opinions and "useless
explorations" that are not economically relevant. Not that academics are innocent
victims: most made a strategic step back to protect their careers and futures in
institutions where contracts are used as weapons against deviations from current
dogmas. The future of civic culture is determined by higher education, especially
when the media is focused on dumbification and other social institutions have
lost interest in the value of education. This is why, most probably, our education
futures will be even more dominated by technology and technocratic utopianism,
and less suited to nurturing humanism, intellectual freedom, critical thinking, and
all the other abilities required at a time of extreme uncertainty and of crises that
affect individuals, communities, and the world.
Focused on vocational training and uninterested in nurturing students' abilities
and potential for independent thinking, growth, and scholarly engagement,
universities will further emphasise assessment, using AI to make assessment the
driver of the educational process. The marginal experiments in this area, such as
ungrading, will remain marginal and will reveal shortcomings that further
disadvantage students from low socio-economic and deprived areas. The Netflix
or Amazon model of higher education – in effect, the model of any highly
exploitative corporation that places users under extensive surveillance for
prediction, manipulation, and monetisation – will be further adopted by most
universities as the optimal model for organisation and organisational identity. In a
few words, education will be completely commodified, in a model that uses the
rapid advancements of AI not in relation to learning and higher-level cognitive
tasks, but as a means to maximise profits and reduce costs.
In this new paradigm, assessment is mostly reduced to standardised exams and
practical assessment tasks, under AI supervision. Multiple-choice questions make
up a significant part of assessment tasks, while other assignments are fine-tuned
with the help of technical staff managing the AI systems in coordination with
corporate vendors, rarely engaging part-time academics for curriculum alignment,
once or twice a year. This evolution was accelerated by a rethink of budget
allocation in higher education, with further cuts leading to a total elimination of
public funds for universities, all as part of a significant reform across higher
education promoted by a bipartisan group of politicians and Big Tech
representatives. Since employability is the stated aim of universities, curriculum is
limited to accreditation requirements and employers' requirements; a list of skills
and content will be reviewed every three or five years. The idea of higher learning,
with blue-sky research and in-depth explorations of various topics – considered a
poor investment for short-term returns – is mostly eliminated from curriculum
and research.
The most useful approach for seeing how AI will change our education futures is
to stop and consider what happened in the last decades and what we see now.
The rise of the extreme right and of fascist currents is vastly documented and
visible in our everyday lives. Authoritarianism and the general decline of
democracy are recorded across the world: the Bertelsmann Transformation Index
(BTI) recorded more autocratic than democratic states in 2022, and it is notable
that an extreme dictatorship like Russia is evaluated in this index as a "moderate
autocracy." This is a very optimistic evaluation for a country with a foreign minister
who publicly expresses some of the most vile themes of antisemitic discourse,
and where the free press has been replaced by an extreme state propaganda
machine. The "moderate" dictatorship ordered the arrest of opposition leaders
and dissenting journalists. In fact, public discourse has regressed to the point
where anti-fascism is a negative term, presented as such by Donald Trump and his
administration, in media and public discourse across the world. Big Tech
companies are directly linked to these antidemocratic developments, and these
key actors guiding the use of AI present an important warning for education and
the future of civil societies. The warning concerns not only the risks associated
with ubiquitous surveillance, which is employed at the core of the business model
of large tech and edtech companies, but also a monopoly on technologies of
immense power, held in a model that is fundamentally opposed to democracy
and civil society.
The most visible actors in the sector have a very problematic relationship with
democracy, expressed publicly in various contexts. Pamela Jones Harbour, a
lawyer for Microsoft and a former commissioner of the Federal Trade Commission,
wrote for the New York Times the article "The Emperor of All Identities," presenting
Google as the Internet "emperor" amassing "unbridled control over data
gathering, with grave consequences for privacy and for consumer choice"
(Harbour, 2012). More recent investigations and reports, such as "Google
Academics Inc.," show that Google engaged in practices opposed to transparency
and democracy: Google was discreetly "paying millions of dollars each year to
academics and scholars who produce papers that support its business and policy
goals" (Tech Transparency Project, 2017). In May 2022, Nathaniel Persily,
Professor of Law and Director of the Stanford Cyber Policy Center at Stanford Law
School, noted in his hearing before the US Senate Judiciary Subcommittee on
Privacy, Technology, and the Law that "we cannot live in a world where Facebook
and Google know everything about us, and we know next to nothing about
them." The implications of leaving this problem unaddressed are now more
visible, with extreme right and fascist ideas moving closer to the centre of public
discourse and events.
Peter Thiel, the rich and influential tech mogul, was once described by Politico as
"the Silicon Valley libertarian who spoke at Trump's convention, gave more than
$1 million in support of his campaign and is now a member of Trump's transition
team" (Purdy, 2016). Thiel, who became powerful beyond the field of investments
in new technologies, shaping and influencing political life in the United States,
clarified in one of his essays, "The Education of a Libertarian," his view that
democracy and capitalism are not compatible, while arguing for a libertarian
formula that "makes the world safe for capitalism" (Thiel, 2009).
If we scrutinise Facebook – rebranded as Meta – we can say without any doubt
that it acts as an unprecedented machine for misinformation, a platform for the
propagation of hate speech and undemocratic movements. Adrienne LaFrance
noted in The Atlantic, in 2021, that Facebook/Meta acts like a hostile entity, "the
largest autocracy on Earth," actively undermining and attacking liberal
democracies. She observed that

Facebook is a lie-disseminating instrument of civilisational collapse. It is
designed for blunt-force emotional reaction, reducing human interaction
to the clicking of buttons. The algorithm guides users inexorably toward
less nuanced, more extreme material, because that's what most efficiently
elicits a reaction. Users are implicitly trained to seek reactions to what
they post, which perpetuates the cycle. Facebook executives have tolerated
the promotion on their platform of propaganda, terrorist recruitment,
and genocide. They point to democratic virtues like free speech to defend
themselves, while dismantling democracy itself.
(LaFrance, 2021)
Facebook, LaFrance continues, is immune to the idea of civic obligation, has a
clear record of actions against the free press, and remains fundamentally
anti-democratic. In this context, it is important to consider that Facebook was
responsible for the most influential and consequential use of AI, as the
Facebook–Cambridge Analytica scandal revealed. In 2022, Meta (formerly
Facebook) released a new and more complex AI language model called OPT-175B,
along with a technical report, which finds that the new model "has a high
propensity to generate toxic language and reinforce harmful stereotypes, even
when provided with a relatively innocuous prompt" (Zhang et al., 2022). The
same report notes that this new AI language model comes with "a higher incidence
rate for stereotypes and discriminatory text" and "appears to exhibit more
stereotypical biases in almost all categories except for religion."
This example reveals how dangerous AI systems confer a power unprecedented
in the history of humanity to manipulate and influence people's values, opinions,
and political and ethical choices. The impact of AI on our daily lives is so entrenched
and intertwined with our mundane interactions that we have normalised it and
rendered it invisible. At this time, education is dominated by a dull and stupefying
enthusiasm for "technology" and by blindness to its own constant decline in
intellectual life, civic relevance, and educational value for students. Democracy is
in retreat and at risk around the world, with AI tools providing not only exponential
change for medicine, research, and various sectors of the economy but also
exponential growth and power for fascist ideas, rapidly building acceptance of
autocratic, anti-human, and extremist movements. Silicon Valley was not, at any
point, defined by values of democracy and open society. The Californian ideology
is a mix of techno-solutionism, libertarianism, and autocracy, where the "keepers
of truth" – software engineers and the owners of the most powerful AI systems –
lead as an initiated and superior class. AI is in itself directly associated with
antidemocratic tendencies and models; the unprecedented power to manipulate
individuals and populations and to achieve personal and secretive goals is
concentrated in the hands of just a few men. The Forbes ranking of the richest
people in the world shows that the top 15 positions are almost exclusively
occupied by people who control mass media empires, Internet platforms, and the
most powerful AI systems. The top 200 billionaires reflect an unprecedented
concentration of control over the world's nutrition, cultural preferences,
information, communication, and AI systems. This small group of owners controls
and makes political decisions on how AI is developed and applied, holding the
financial power to use and maintain AI and interconnected technologies under
their control, without basic transparency. The President of the European
Commission noted that the future of the Internet is also the future of democracy;
this may be the most important reason why education at all levels should take a
much more critical and analytical approach to the adoption and use of AI in
teaching and learning, and to university governance.
The future of education is determined now, and is almost decided, by the
development and functions of edtech solutions from corporate providers; current
arrangements move decisions and control to the corporate owners of edtech
software. As we have seen in the examples presented before, antidemocratic
instincts and trends have the most plausible chances of influencing the control
and application of AI in higher education. This will be complicated by a fact that is
currently under-researched, revealed by a group of researchers who used
"machine learning model predictions of bioactivity for the purpose of finding new
therapeutic inhibitors of targets for human diseases" and had the idea of exploring
"how AI could be used to design toxic molecules" – using the same AI in reverse,
to produce the poison rather than the cure. They note: "It was a thought exercise
we had not considered before that ultimately evolved into a computational proof
of concept for making biochemical weapons" (Urbina et al., 2022). Importantly,
the authors warn us that it is now "entirely possible that novel routes can be
predicted for chemical warfare agents, circumventing national and international
lists of watched or controlled precursor chemicals for known synthesis routes"
(p. 190). They suggest a solution directly aligned with a conclusion of this book:
the emergence of AI requires ethical guidelines. Education makes even more
important the effort to create and adopt a set of ethical guidelines for the adoption
and use of AI in teaching and learning. These guidelines should start from a realistic
perspective that makes users more aware of the potential for the dual use of AI.
The unprecedented increase of competition in higher education to attract
fee-paying students, intensified by the economic and demographic impacts of the
Covid-19 pandemic, creates the environment for a decline in quality. Institutions
of higher education focus intensely on becoming attractive to potential customers
on national and international markets, without any substantial effort to maintain
quality. Most probably, the ongoing decline in quality and the vocationalisation
and oversimplification of the higher education experience will become harder to
deny and to cover with manipulated data and sloganeering. A small group of
universities will re-evaluate edtech to separate hype from the positive potential of
various technological solutions, and will focus on creating a culture of creativity,
respect for learning and knowledge, personal curiosity, and unbounded
imagination, used to engage students and faculty in significant and in-depth
experiences of learning and knowledge creation. This small group of institutions
will become increasingly segregated from the massive group of "intensive"
universities, in a model loosely resembling the relationship between the organic
food and junk food industries. Unfortunately, the small group of "healthy"
universities, which can justify extraordinarily expensive fees through their prestige
and objective results, will be dominated by students from wealthy families, with
monocultural backgrounds and the same socio-economic horizons.
Future developments will make AI a key part of educational solutions, with
applications in curriculum design, teaching, and assessment. The best scenario is
that decision makers, politicians, and an active group of university administrators
will react to the critical threats represented by the rise of fascism and
antidemocratic influences, and will start by addressing with courage and realism
the ongoing decline in the campus ethos. Courage – as one of the foundational
values of the new current – will provide the momentum to look at higher
education, from its policies to its results, with a genuine interest in finding what is
really happening behind comfortable and misleading reporting. This type of
initiative will also stimulate a reconsideration of the aims of higher education for
the 21st century and set principles for the use of edtech and AI in future designs
of learning, student experience, and research.
The multifaceted impact of climate change on world populations and political
landscapes, and the new conflicts generated by the scarcity of resources, will add
new pressure on universities to provide knowledge and graduates with informed
and flexible minds, able to find new approaches and ingenious solutions to the
numerous crises that communities and countries will face. Increasingly ageing
populations and the need of all students to find meaning and substance on
campus bring back the importance of solutions for lifelong learning. Graduates
with a well-rounded education will take the lead in initiating structural changes to
the educational model of higher education. The extraordinary impact of wars,
climate extremes, and the multiplication of highly consequential misuses of AI will
soon demand new and courageous solutions, in a new paradigm, with new
priorities and a rejection of education as a commodity with students as customers
in a market.

Notes
1. UCU. (2022). UK higher education: A workforce in crisis. www.ucu.org.uk/media/12532/HEReport24March22/pdf/HEReport24March22.pdf
2. McMurtrie, B. (2022, April 5). A 'stunning' level of student disconnection. The Chronicle of Higher Education. www.chronicle.com/article/a-stunning-level-of-student-disconnection
3. Arum, R., & Roksa, J. (2011). Academically adrift: Limited learning on college campuses. University of Chicago Press.
4. Carey, K. (2012, February 12). "Academically adrift": The news gets worse and worse. The Chronicle of Higher Education. http://chronicle.com/article/Academically-Adrift-The/130743/
5. Arum, R., & Roksa, J. (2014). Aspiring adults adrift: Tentative transitions of college graduates. The University of Chicago Press.
6. Baker, R. S. (2021). Artificial intelligence in education: Bringing it all together. In OECD digital education outlook 2021: Pushing the frontiers with artificial intelligence, blockchain and robots. OECD Publishing. https://doi.org/10.1787/f54ea644-en
7. Jimenez, L., & Boser, U. (2021, September 16). Future of testing in education: Artificial intelligence. CAP. www.americanprogress.org/article/future-testing-education-artificial-intelligence/
8. JISC. (2021, April 27). New national centre will "unleash the power of AI" in education. JISC. www.jisc.ac.uk/news/new-national-centre-will-unleash-the-power-of-ai-in-education-27-apr-2021
9. Marík, V., Stepánková, O., & Zdráhal, Z. (1990, October 23–25). Artificial intelligence in higher education. CEPES-UNESCO International Symposium, Prague, CSFR, Proceedings.
10. O'Flaherty, K. (2022, February 27). The data game: What Amazon knows about you and how to stop it. The Guardian. www.theguardian.com/technology/2022/feb/27/the-data-game-what-amazon-knows-about-you-and-how-to-stop-it
11. Dezfouli, A., Nock, R., & Dayan, P. (2020). Adversarial vulnerabilities of human decision-making. Proceedings of the National Academy of Sciences, 117(46), 29221–29228. https://doi.org/10.1073/pnas.2016921117
12. White, L. (1974). Medieval technology and social change. Oxford University Press.
13. Kranzberg, M. (1990). Software for human hardware. In P. Zunde & D. Hocking (Eds.), Empirical foundations of information and software science V. Plenum Press.
14. Zuckerberg, M. (2012, February 2). Facebook's letter from Mark Zuckerberg – full text. The Guardian. www.theguardian.com/technology/2012/feb/01/facebook-letter-mark-zuckerberg-text
15. Cohen, S. B. (2019, November 23). Read Sacha Baron Cohen's scathing attack on Facebook in full: "Greatest propaganda machine in history." The Guardian. www.theguardian.com/technology/2019/nov/22/sacha-baron-cohen-facebook-propaganda
16. Shahbaz, A. (2018). The rise of digital authoritarianism. In Freedom on the net. Freedom House. https://freedomhouse.org/sites/default/files/FOTN_2018_Final.pdf
17. Csaky, Z. (2021). The antidemocratic turn. In Nations in transit 2021. Freedom House. https://freedomhouse.org/sites/default/files/2021-04/NIT_2021_final_042321.pdf
18. Babcock, P. S., & Marks, M. S. (2010). Leisure College, USA: The decline in student study time. American Enterprise Institute for Public Policy Research.
19. Dockrill, P. (2018, June 13). IQ scores are falling in "worrying" reversal of 20th century intelligence boom. Science Alert. www.sciencealert.com/iq-scores-falling-in-worrying-reversal-20th-century-intelligence-boom-flynn-effect-intelligence
20. Flynn, J. R., & Shayer, M. (2018). IQ decline and Piaget: Does the rot start at the top? Intelligence, 66, 112–121. https://doi.org/10.1016/j.intell.2017.11.010
21. Cauwels, P., & Sornette, D. (2022). Are "flow of ideas" and "research productivity" in secular decline? Technological Forecasting and Social Change, 174, 121267. https://doi.org/10.1016/j.techfore.2021.121267
22. Wiener, N. (1994). Invention: The care and feeding of ideas. The MIT Press.
23. Grace, K., Salvatier, J., Dafoe, A., Zhang, B., & Evans, O. (2018). Viewpoint: When will AI exceed human performance? Evidence from AI experts. Journal of Artificial Intelligence Research, 62, 729–754. https://doi.org/10.1613/jair.1.11222
24. Higgins, M. D. (2021, June 8). On academic freedom. Address at the Scholars at Risk Ireland/All European Academies Conference. President of Ireland: Speeches. https://president.ie/en/media-library/speeches/on-academic-freedom-address-at-the-scholars-at-risk-ireland-all-european-academies-conference
25. Harbour, P. J. (2012, December 19). The emperor of all identities. The New York Times. www.nytimes.com/2012/12/19/opinion/why-google-has-too-much-power-over-your-private-life.html
26. Tech Transparency Project. (2017, July 11). Google academics inc. Report. www.techtransparencyproject.org/articles/google-academics-inc
27. Purdy, J. (2016, November 30). The anti-democratic worldview of Steve Bannon and Peter Thiel. Politico. www.politico.com/magazine/story/2016/11/donald-trump-steve-bannon-peter-thiel-214490/
28. Thiel, P. (2009, April 13). The education of a libertarian. Cato Unbound. Cato Institute. www.cato-unbound.org/2009/04/13/peter-thiel/education-libertarian/
29. LaFrance, A. (2021, September 27). The largest autocracy on earth. The Atlantic. www.theatlantic.com/magazine/archive/2021/11/facebook-authoritarian-hostile-foreign-power/620168/
30. Zhang, S., Roller, S., Goyal, N., Artetxe, M., Chen, M., Chen, S., Dewan, C., Diab, M., Li, X., Lin, X. V., Mihaylov, T., Ott, M., Shleifer, S., Shuster, K., Simig, D., Koura, P. S., Sridhar, A., Wang, T., & Zettlemoyer, L. (2022). OPT: Open pre-trained transformer language models. arXiv, abs/2205.01068.
31. Urbina, F., Lentzos, F., Invernizzi, C., & Ekins, S. (2022). Dual use of artificial-intelligence-powered drug discovery. Nature Machine Intelligence, 4(3), 189–191. https://doi.org/10.1038/s42256-022-00465-9
9
RE-STORYING HIGHER LEARNING

There is no need to engage our imagination much to see the risks in the adoption
of AI solutions in education. China is using AI to impose a dystopian version of a
dictatorship that weaves together the most toxic ideas and practices of neoliberal
capitalism and communism. Social scores are assigned to all Chinese citizens,
determined by data collected through extensive surveillance and aggregated by
obscure algorithms; a negative social score can restrict the possibility of boarding
a train or a bus, travelling freely, accessing services, getting a job, and so on. In
China, AI is used to reinforce and secure the authoritarian control of citizens, from
continuous surveillance to predictive policing and immediate punishment for
breaches of rules, for dissent, or for the possibility of dissent. Forbes reported in
2022 that China is
already known for its use of AI for civilian repression. IPVM exposed the
chilling aims of the People’s Republic of China (PRC)’s AI-automated
racism to surveil Uighur Muslims. Its reporting has been corroborated and
published jointly with the Washington Post, NYTimes, and BBC. Huawei
partnered with leading PRC AI/facial recognition developer Megvii to
patent the so-called ‘Uighur alarm’ to identify Uighurs by face and track
their movement, turning the Xinjiang province in a de-facto “open-air
prison” for 25 million people.
(Layton, 2022)

China is one of the most active players in the development of AI, and some
analysts predict that China will be the most important actor in this field in less
than a decade. The applications of AI in Chinese classrooms are already developed
and at work. Media reported the case of Middle School No. 11 in Hangzhou,
where students are under permanent surveillance and an algorithm assigns them
a score on attention, engagement, and work.
If we want to see how AI is dangerously used in our daily lives, we can look at
other countries, not just China. Unfortunately, liberal democracies provide a long
list of examples of the misuse and abuse of AI and algorithms against citizens
living in free societies. In the Netherlands, the use of a self-learning algorithm to
create risk profiles for possible childcare benefit fraud resulted in the unlawful
ethnic profiling of childcare applicants with dual nationalities. The "childcare
benefits scandal," which is really a scandal about the abuse of AI capabilities for
profiling and control, led to the resignation of the entire Dutch government in
January 2021. Tens of thousands of families were pushed into poverty as the
Dutch tax agency imposed enormous debts based on a set of indicators used by
the algorithm. There were also cases of suicide recorded among people targeted
by this scheme, and over a thousand children were taken into foster care as a
result of this AI solution. In the end, the Dutch Tax and Customs Administration
(Belastingdienst) received a fine of €3.7 million, an unprecedented sanction. In
Australia we have the example of Robodebt, an unlawful use of AI algorithms for
debt assessment and recovery, which affected 443,000 people. In 2021 a federal
court approved a settlement worth $1.8 billion between the Australian
Government and the victims of Robodebt, which targeted financially vulnerable
people, most of them without any debt. It is known that this scheme led to
suicides, but an exact number of victims is impossible to estimate. In the United
States, an Associated Press report supported by the Pulitzer Center for Crisis
Reporting reveals how AI is used to identify possible cases of child abuse or
neglect, in a process so obscure that some people who are flagged do not even
know that the decision was taken by an algorithm. Recent research shows that
the algorithm used

showed a pattern of flagging a disproportionate number of Black children
for a 'mandatory' neglect investigation, when compared with white
children. The independent researchers, who received data from the county,
also found that social workers disagreed with the risk scores the algorithm
produced about one-third of the time.
(Ho & Burke, 2022)

If families accused of neglect or abuse go to court, not even the score given by the
algorithm can be obtained by them or their attorneys. The obscure nature of
similar AI systems is already the subject of extensive research and media reports,
leaving no doubt that the problem of faulty AI and secret algorithms is far from
solved.
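To make concrete what such an independent audit involves, consider a minimal sketch in Python. All data, group labels, and rates below are invented for illustration; real audits of this kind work on actual case records obtained from the agency:

# Illustrative audit (hypothetical data): the flag rate per group and the
# rate at which human case workers disagree with the algorithm's flags.
records = [  # (group, algorithm_flagged, worker_agreed) -- made-up cases
    ("Black", True, False), ("Black", True, True), ("Black", True, False),
    ("white", False, True), ("white", True, True), ("white", False, True),
]

def flag_rate(group):
    """Share of cases in `group` that the algorithm flagged."""
    flags = [flagged for g, flagged, _ in records if g == group]
    return sum(flags) / len(flags)

def disagreement_rate():
    """Share of flagged cases where the human worker disagreed."""
    agreed = [a for _, flagged, a in records if flagged]
    return sum(not a for a in agreed) / len(agreed)

print(f"flag rate (Black): {flag_rate('Black'):.2f}")
print(f"flag rate (white): {flag_rate('white'):.2f}")
print(f"worker disagreement on flagged cases: {disagreement_rate():.2f}")

Even this toy comparison reproduces, in miniature, the two findings reported above: a gap in flag rates between groups, and a substantial share of algorithmic decisions that human professionals reject.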
There is already widespread adoption of AI in various industries, legal systems,
and public administration, and all that these applications reflect is the desire to
use the capacity of AI to reinforce a certain managerial framework, focused
entirely on cutting costs and maximising profits by squeezing as much labour as
possible out of employees for a low income. This is evident when we look at the
visible part of AI, but overwhelming in the unseen or implicit part of AI, represented
by extremely poorly paid workers hired to add and label data. A BBC investigation
on this topic found that "Artificial intelligence and machine learning exist on the
back of a lot of hard work from humans," with extremely intense and low-paid
work that can be "anything from labelling images to help computer vision
algorithms improve, providing help for natural language processing, or even
acting as content moderators for YouTube or Twitter" (Wakefield, 2021). Extensive
invisible work makes AI possible, and universities should think carefully about
who is going to do this work, how it will be paid, and what other costs are involved,
especially considering that current AI systems depend on the extreme exploitation
of the planet's energy and mineral resources, as well as on cheap labour and large
amounts of data. An article in Tribune published at the same time opens with the
note that "the hype around artificial intelligence and its potential to liberate us
from work often misses a crucial fact – that AI in its current form depends on
low-paid human workers to function in the first place" (Slater, 2021). The article
reveals that some of the most prominent AI systems "rely on workers from
Southeast Asia and sub-Saharan Africa, where they tend to receive below or
barely a living wage." The environmental impact is another important aspect,
which will not be possible to ignore in the near future. A study published in Nature
notes that from 2016 to 2018 Bitcoin alone generated about as much carbon
dioxide as 1 million cars over the same 30 months, and this measure does not
include other carbon dioxide-generating activities such as computer cooling,
buildings, or operators (Krause & Tolaymat, 2018). Since 2018, the trend of energy
consumption for cryptocurrencies has only ascended, raising important challenges
not only for the environment but also for users interested in maintaining a socially
and environmentally responsible profile. At the same time, AI is opening new
possibilities for data modelling, essential for creating sustainable models for
climate action, climate prediction, and pollution reduction.
There are new and unprecedented forms of control and manipulation, which
become overpowering when AI multiplies their force exponentially. In this
context, universities find that most students are naturally comfortable with
technology, especially young students who grew up in an environment where
new technologies are an integral part of socialisation and communication. The
hype around AI may be as unprecedented as the powers unleashed by AI systems.
We could choose an example from a sea of promises, warnings, and predictions
about AI as miracle and panacea, but let us briefly stop at the report released in
May 2022 by the Special Committee on Artificial Intelligence in a Digital Age for
the European Parliament. One of the key drivers of the Report, presented also in
media releases, is the statement that "To be a global power means to be a leader
in AI." We can imagine how appealing this statement is for a group of politicians
dreaming of being part of a leading world power. The Report does not actually
explain how such a global power would navigate our most serious crises:
accelerating climate change and increasingly extreme weather events, the global
rise of fascism, and the constant erosion of democracy and democratic values,
including in some of the founding members of the EU. These existential crises,
which are very real and urgent, are entirely ignored by a document focused on
the tasks of building a "digital infrastructure," a regulatory framework, and "the
development of AI skills." There is no doubt that this is a very important area for
technological advancement, but it is getting late to realise that our main challenges
are not so much technological as determined by the erosion of democratic values,
the climate emergency, and impoverished imaginations easily colonised by
conspiracy theories and extremist ideologies. The most serious challenge is not to
join the contest for global leadership, but to find solutions for our educational and
imaginative crisis.
Data from Hawaii’s Mauna Loa Observatory, released by the U.S. National
Oceanic and Atmospheric Administration (NOAA), shows that in May 2022
the Earth’s CO2 level hit the highest recorded point in history. At the same
time, large parts of India and Pakistan recorded an extraordinarily abnormal and
also unprecedented heat waves, which melted fast glaciers and caused devastating
foods. These events were mostly lost on the avalanche of worrying news about
the war in Ukraine, with Russia regularly threatening with nuclear attacks and
possible annihilation of human race. It is hard to miss the irony that we hear
everywhere how we have this God-like powerful invention named artifcial intel-
ligence at a time when we lost almost everything from our control. The global
new rich, fnding their fortunes in the “Californian ideology,” consider that our
future is secure in the alternative reality of the metaverse or on the ridiculous
idea of extra-terrestrial (Mars) colonies, which will be set by the trillionaires who
were tired to live in a destroyed Earth. If their incomprehensible wealth wouldn’t
be so real, it would be probably easy to dismiss with a laugh the wacky ideas of
this class. Infuenced by these infantile delusions, higher education is caught in
utopian or naive tech-imaginaries related to these dystopian dreams. The cur-
rent context – of extremely dangerous military conficts, unprecedented climate
change and environmental destruction, social and cultural segregation, inequity,
and inequality – is not drawing much the attention of techno-utopians. The most
signifcant crisis facing now higher education is a crisis of meaning; higher edu-
cation needs as a matter of priority to become more human and meaningful for
students, which is much more than an entry-level job after graduation. This type
of priorities and concerns should be addressed, and this can start by considering
that the neoliberal paradigm imposed by WTO and OECD for education may
not be the most suitable model for a sustainable future for students, for universi-
ties, and for civil societies across the world.
Looking at a more detailed level, related directly to teaching, learning, and
governance in higher education, we can find guiding principles for a responsible
and constructive adoption of AI.
The first principle is that AI can be adopted by an institution of higher education
only after the budgetary and ethical aspects related to data collection and
aggregation are well considered. AI works only when vast amounts of data are
provided, and data collection, labelling, and allocation depend on the materiality
of human labour, costs, and workloads. If data collection and other
human-dependent processes involve prohibitive costs, it is better to find
alternative solutions, not necessarily AI; many universities have found – and will
find – that the cost of AI is too high for the selected applications. At the same
time, the collection of extensive data implies important ethical questions that
require serious consideration: how much data is it ethical to collect from students,
how is data protected in the short and long term, and how can these processes
affect students' and faculty's privacy? These questions may seem trivial at the
moment, especially when universities are making their first steps in the adoption
and use of AI. In the medium and long term, students and graduates may have
solid reasons to question how data was collected and protected, and future class
actions against universities are very possible. Considering the fast adoption of AI
in other fields and the increasing misuse of data, we can soon expect some
uncomfortable questions related to privacy and the ethical use of data. It is
important for new uses and users to realise that behind AI there is a large number
of people who make it work, from engineers to the numerous people who label
and add data to the systems; this is most often highly intensive and time-consuming
work.
As a part of ethical considerations on data, universities should consider their
duty of care and actively seek to protect students from the possible misuse of
data, from deceiving practices of surveillance and data collection and to involve
students in key processes required for the good functioning of AI systems adopted
in campus.
The second principle for the adoption of AI is that its positive use is determined by the acknowledgement of its impossible neutrality. AI is shaped by the code and preferences of its engineers, and an ongoing audit of possible bias, preferences, and errors is absolutely essential for its use in education. Universities should always be aware that those who control data and AI systems can profoundly change how students experience education, what and how they learn, and how they can shape their future. The first years of the pandemic, from 2020, proved that the most significant AI players accelerated their tendencies towards expansion, monopolisation, and control. This concentration of power and control already makes it possible for these giants to set and influence political and cultural agendas, and to take unilateral decisions on how future AI will be designed. Here again is the warning mentioned in previous chapters, made by Marcuse in his 1941 analysis of the German Nazi dictatorship; he found "a striking example of the ways in which a highly rationalised and mechanised economy with the utmost efficiency in production can also operate in the interest of totalitarian oppression" (Marcuse & Kellner, 1998, p. 41). In these first decades of the 21st century we have a highly rationalised and technocratic society, with what some argue is a "limbic capitalism," which exploits and extracts value from human vulnerability to addiction, harnessing the findings of psychology and psychiatry for profit (Courtwright, 2019). Zuboff notes that we have a surveillance capitalism, widely adopted by giant tech corporations and governments. It is clear that we have a new form of capitalism, obsessively exploitative and psychopathic. In 1941, Marcuse found that this type of society invites authoritarianism and is perfectly structured to serve dictatorships. Higher education should not remain exposed by relying on edtech owned and controlled by corporations, especially at a time when an anti-democratic ethos is part of their business model.
The narrative of the inevitability of edtech, obsessively cultivated and promoted along with the idea of a definitive technological determinism of the modern world, reflecting the structural dependence of human life on technological advances, should be treated with reserve by universities. Edtech and its mastery is not an aim of higher education, although most universities forget this simple fact. They place more effort and interest (and budget) on cultivating students' ability to use software and edtech platforms (the LMS) than on learning, student engagement, and the capacity to stir intellectual curiosity for independent study. Zuboff presents very well what stands behind technological determinism, noting that

Every doctrine of inevitability carries a weaponized virus of moral nihilism programmed to target human agency and delete resistance and creativity from the text of human possibility. Inevitability rhetoric is a cunning fraud designed to render us helpless and passive in the face of implacable forces that are and must always be indifferent to the merely human. This is the world of the robotized interface, where technologies work their will, resolutely protecting power from challenge.
(Zuboff, 2020, p. 225)

AI is also part of a narrative of the inevitable technologisation of education, bringing with it the promise of optimal personalisation, of making education more like Netflix and Amazon. AI will finally help select content and topics that are aligned with individuals' interests. Learning without challenge, on topics the student likes or already finds interesting, naturally leaves students where they are, in the same intellectual horizon, within the same knowledge boundaries. It is contempt for students' possibilities and capacity to discover the unknown through their own efforts, presented as care for students' interests. This position is based on the assumption that adults, AI engineers and administrators of AI systems, know better than students what can be interesting and what can spark new ideas and unexpected passion for discovery. It is an immensely disdainful position in education, which is in fact as old as we can imagine. Far from being inevitable, AI needs to be used or turned off by teachers and students, using the best possible judgement to serve educational intentions.
The third principle is that AI can be used in education as a tool, to enhance and facilitate access to information, to aggregate data, and to provide fast answers, but not as the educator. Education, especially higher learning, is too complex and malleable, too contextualised and enhanced by serendipity, to be properly experienced through algorithms. The kind of education that can be delivered and controlled by AI is a form of instruction that must be avoided: a dehumanised, information-focused, simplified, rationalised, and narrow process that can inform and disorient, train and disengage. We need a meaningful education more than ever before, and current challenges require an education that is able to nurture the ability to think independently, freely, and creatively, and to actively and autonomously discern value and meaning, and fake from real. AI can easily replace a Teaching Assistant (TA) in providing information about class schedules and various timelines, and in helping with administration and basic orientation for students in services such as those provided by libraries or student accommodation. It can do this even better than a human TA, if we reduce TAs' work to administrative tasks. The preponderant use of AI is not suitable if the aim is to engage students in meaningful education, one that creates not only skills relevant for employability but also meaning and direction for life choices: an education for responsible and active citizens who are aware of the importance of our civil societies.
The fourth principle is that AI use is framed by social, economic, cultural, and political contexts. These variables determine the acceptance and proper application of AI solutions for the uniquely complex endeavour of education and higher learning. Students from various socio-economic backgrounds may relate very differently to AI solutions, depending on their previous experiences and their perception of the fair application of AI in decision-making. Edtech results are determined by different contexts and cultural determinations; without in-depth understanding of and consideration for these contexts, AI can hinder education more than it helps to increase efficiency and the achievement of educational or organisational goals.
The fifth principle is related to one of the most obvious features of AI, which is that it stands as a label for a large variety of systems and applications: selecting the most suitable edtech/AI solution for the stated aims and outcomes can determine the ethos of the campus, the quality of learning, and the meaning of education and lifelong learning. This last principle is completed by the rule that technology should not distract educators from what is truly important for students: a meaningful education that prepares them to be good human beings, engaged in lifelong learning, and responsible members of society with agile minds.
The constant devaluation of a higher degree, the pressure of free-market ideology and neoliberal formulas on universities, and the ongoing degradation of academic life, within universities and as it is perceived by the public, will increase the pressure for change. We will not have the kind of "disruption" that has been parroted on repeat for the last 20 years, with "innovation" that is always reduced to an incremental change in edtech, packaged with the same old and clumsy
products. The real change will arise from the sum of tensions and crises that societies and humanity face these days, with universities still able to offer solutions of substance and the ability to reinvent themselves. In the early 1970s we were warned that "If we narrow the scope of education, we narrow our operative conception of civilisation, and we impoverish the meaning of participation in civilised community" (Scheffler, 1973, p. 60). We have entirely missed this message, and the self-proclaimed managers and accountants of educational products, able to empty higher education of meaning and substance, have moved academia far from the aim of building a civilised community. The last chance is to reimagine and re-story the aim of education. In this sense, the challenge is to entertain the hypothesis that education is an endeavour too complex to be organised and constructed by AI. Edtech can be a solution for administrative problems, or can even deliver instruction and training, but not a well-rounded education and meaningful educational experiences.
An important step in rethinking higher education in the age of AI is to imagine the impact of teaching on students' lives, on their values and ways of thinking, on creating the potential for them to expand their horizons and learn more, as independent thinkers. This sense of responsibility faded as universities adopted the absurd language of trade, with the educational experience named "the product" in policies adopted by some universities, and faculty's work labelled "customer care." Teaching carries the same type of life-changing responsibility that shapes the identity of medical doctors, which is rooted in the Hippocratic Oath. Similarly, we have to start thinking about an Educational Oath, symbolically taken by all who teach in higher education at the moment when the real and significant responsibility for students' futures is properly contemplated by those who will teach and by those who organise teaching. This oath can change the nature of teaching arrangements in universities, where teaching is too often treated as a marginal activity that can be covered by overworked and overwhelmed sessional employees, casual staff exploited in precarious and insecure work arrangements. The temptation to further reduce costs and use AI instead of this exploited and precarious class will soon arise, and it will undoubtedly open universities to grave errors, future misconduct, and public disapproval.
A code of ethics for academics, one that specifically addresses the need to place teaching and learning in an ethical and professional framework, is required if higher education is to invite faculty and students to understand the impact and importance of teaching. A Teacher's Ethical Pledge can also set out a clear set of commitments, rights, and expectations that contribute to a widely shared culture of quality. Students have the right to learn, access knowledge, and develop self-cultivation skills, and to benefit from teachers who are able and willing to help and guide them with expertise, compassion, commitment, and ethical responsibility. This involves the student's right to have access to equitable education, with engaging, relevant, and high-quality curriculum, assessment, and teaching practice, in a safe and suitable climate for learning, where the
intellectual space allows students to freely learn and to evaluate what they learn with an educated and independent mind. From a teaching perspective, faculty have the responsibility to create what Eric Ashby (1969) defined as a key attitude of the university teacher, which is "to teach in such a way that the pupil learns the discipline of dissent" (Ashby, 1969, p. 64). Students have the right to gain what A. N. Whitehead named the mastery of the art of using knowledge, and the ability to react against "inert ideas" and manipulations, which is increasingly important at a time when AI is opening possibilities for deepfakes, for vast manipulations and intrusions into people's lives.
A code setting out the main duties of teaching not only helps faculty understand the impact of their practice but also signals to students that their interests are de facto at the heart of the university's interests, no less important than research or financial arrangements. The role of the teacher is to enable students to make unhindered use of their access to higher learning and to educate them to seek future learning and enjoy the mastery of knowledge. Students have the right to learn in the context of a meaningful and engaging education, with knowledge and teaching solutions that are flexible and adaptive to complex changes, such as new developments in edtech and AI. We can briefly sketch here a possible framework for a Teaching Pledge, as a commitment to the student's right to quality education, in a space and place defined by mutual respect, curiosity to learn, and the free exploration of knowledge. It is an expression of the academic's allegiance to help each student achieve their maximum potential as a member of society, with respect for learning and an interest in actively contributing to the common good of society. In adopting AI in education, academics may take the pledge to:

• Observe students' right to learn by accessing high-quality, engaging, and formative education for higher learning. This principle is linked to the allegiance to secure an equitable education for all students;
• Demonstrate respect for students' agency and embrace a pedagogy of compassion, leaving the teacher's own vantage point and interests aside to see situations from the student's perspective. This refers to the practical ways of understanding and responding to students' needs in a specific context;
• Be caring, fair, and eager to design and apply educational solutions that serve the learning needs of students from a large variety of social, cultural, economic, and linguistic backgrounds;
• Encourage and guide the free and passionate pursuit of students' higher learning in all areas of study, emphasising the value of knowledge and well-informed and responsible intellectual dissent, as well as curiosity, creativity, academic scepticism, lifelong learning, and intellectual honesty;
• Adopt reflective professionalism and openness as an ongoing principle of evaluation and improvement of one's own practice, engaging students in designing new and/or alternative solutions for teaching, assessment, and curriculum design – including the selection of edtech solutions;
• Adopt respectfulness for learners as a virtue cultivated to create an open and safe environment for learning, where all students can participate unhindered in debates and discussions. Students should never be exposed by their educators to disparaging remarks or embarrassment, but celebrated when learning and engagement translate into quality results;
• Have the courage to adopt changes in teaching and curriculum design and deal with difficult decisions to reach solutions that serve the interests of students and maintain an optimal environment for higher learning;
• Never use the academic position for personal benefit from students, in any form;
• Protect students from intrusive practices, surveillance, and data-mining by third parties, and not use or share students' personal information, opinions, and work unless this is specifically required by law or by well-documented professional reasons;
• Aim to protect students' interests and give them the highest possible quality of learning, skills, and knowledge for their holistic development and future endeavours in life as functional members of the workforce and society;
• Remember that the role of all those who teach is to educate and open students' thinking for lifelong learning and wisdom. This involves the serious responsibility to guide the intellectual and moral development of students with compassion, care, and understanding, and with ongoing improvement in teaching and learning.

This teaching pledge can also serve as a guide that nurtures mutual trust and helps academics align their practice with the significant responsibilities of teaching and the long-term and complex impact of their important profession. In An Artificial Revolution, Ivana Bartoletti closes the book with a hopeful thought:

Amid the global turmoil, maybe, just maybe, the promise of the Artificial Intelligence will force us to confront our shared humanity and the physical and digital environments we inhabit. For many of us, disappointed yet optimistic, this is the time to dare to imagine.
(Bartoletti, 2020, p. 126)

Education is the space where we can start contemplating the power of our shared humanity, on a campus that has moved fast into superficially known digital environments. The rise of AI is just another reason to accept that this is the time when we must start to imagine.

Notes
1. Layton, R. (2022, April 23). Commerce's BIS can help stop China's quest for AI dominance. Forbes. www.forbes.com/sites/roslynlayton/2022/04/23/commerces-bis-can-help-stop-chinas-quest-for-ai-dominance/
2. Ho, S., & Burke, G. (2022, April 30). An algorithm that screens for child neglect raises concerns. Associated Press. https://apnews.com/article/child-welfare-algorithm-investigation-9497ee937e0053ad4144a86c68241ef1
3. Wakefield, J. (2021, March 28). AI: Ghost workers demand to be seen and heard. BBC. www.bbc.com/news/technology-56414491
4. Slater, A. (2021, May 18). How artificial intelligence depends on low-paid workers. Tribune. https://tribunemag.co.uk/2021/05/how-artificial-intelligence-depends-on-low-paid-workers
5. Krause, M. J., & Tolaymat, T. (2018). Quantification of energy and carbon costs for mining cryptocurrencies. Nature Sustainability, 1(11), 711–718. https://doi.org/10.1038/s41893-018-0152-7
6. Marcuse, H., & Kellner, D. (1998). Collected papers of Herbert Marcuse. Routledge.
7. Courtwright, D. T. (2019). The age of addiction: How bad habits became big business. The Belknap Press of Harvard University Press.
8. Zuboff, S. (2020). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile Books.
9. Scheffler, I. (1973). Reason and teaching. Routledge and Kegan Paul.
10. Ashby, E. (1969). A Hippocratic oath for the academic profession. Minerva, Reports and Documents, 8(1), 64–66.
11. Bartoletti, I. (2020). An artificial revolution: On power, politics and AI. Indigo Press.
REFERENCES

Abramowitz, M. J. (2018). Freedom in the world 2018. Democracy in Crisis. https://freedomhouse.org/report/freedom-world/2018/democracy-crisis
Adams, J. T. (1931). The epic of America. Little, Brown and Company.
Ananthaswamy, A. (2015, August 5). What if . . . Intelligence is a dead end? New Scientist.
www.newscientist.com/article/mg22730330-900-what-if-intelligence-is-a-dead-end/
Andrews, S., Bare, L., Bentley, P., Goedegebuure, L., Pugsley, C., & Rance, B. (2016).
Contingent academic employment in Australian universities. LH Martin Institute and
Australian Higher Education Industrial Association.
Aristotle, & McKeon, R. (2001). The basic works of Aristotle. Modern Library.
Arum, R., & Roksa, J. (2011). Academically adrift: Limited learning on college campuses. Uni-
versity of Chicago Press.
Arum, R., & Roksa, J. (2014). Aspiring adults adrift: Tentative transitions of college graduates.
The University of Chicago Press.
Ashby, W. R., Shannon, C. E., & McCarthy, J. (1956). Automata studies. Princeton Uni-
versity Press.
Azevedo, A. (2012, September 26). In colleges’ rush to try MOOC’s, faculty are not always
in the conversation. The Chronicle of Higher Education. http://chronicle.com/article/
In-Colleges-Rush-to-Try/134692/
Babcock, P. S., & Marks, M. S. (2010). Leisure college. The Decline in Student Study Time.
Ball, S. J. (2003). The teacher’s soul and the terrors of performativity. Journal of Education
Policy, 18(2), 215–228. https://doi.org/10.1080/0268093022000043065
Barber, J. (2013, October 16). The end of university campus life. ABC Radio National
Australia. www.abc.net.au/radionational/programs/ockhamsrazor/5012262
Barbrook, R., & Cameron, A. (1996). The Californian ideology. Science as Culture, 6(1),
44–72. https://doi.org/10.1080/09505439609526455
Baron, J. (2019, January 29). Classroom technology is indoctrinating students into a culture
of surveillance. Forbes. www.forbes.com/sites/jessicabaron/2019/01/29/classroom-
technology-is-indoctrinating-students-into-a-culture-of-surveillance/
Bartoletti, I. (2020). An artificial revolution: On power, politics and AI. Indigo Press.
Bayer, T. I. (2009). Vico's pedagogy. New Vico Studies, 27, 39–56.
Beer, D. (2016). Metric power. Palgrave Macmillan.
Bel, G. (2006). Retrospectives: The coining of “privatization” and Germany’s national
socialist party. Journal of Economic Perspectives, 20(3), 187–194. https://doi.org/10.1257/
jep.20.3.187
Bel, G. (2010). Against the mainstream: Nazi privatization in 1930s Germany. The Economic History Review, 63(1), 34–55. https://doi.org/10.1111/j.1468-0289.2009.00473.x
Bezos, J. (2021). 2020 Letter to shareholders. www.aboutamazon.com/news/company-news/
2020-letter-to-shareholders
Bishai, G. W., & Lee, D. (2018). Makeup of the class. The Harvard Crimson.
Black, E. (2001). IBM and the holocaust: The strategic alliance between Nazi Germany and
America’s most powerful corporation. Crown Publishers.
Bok, D. C. (2003). Universities in the marketplace: The commercialization of higher education.
Princeton University Press.
Borter, G., Ax, J., & Tanfani, J. (2022, February 15). Schools under siege. A Reuters Special
Report. www.reuters.com/investigates/special-report/usa-education-threats/
Brandist, C. (2014, May 29). A very Stalinist management model. Times Higher Education.
www.timeshighereducation.com/comment/opinion/a-very-stalinist-management-
model/2013616.article
Brandist, C. (2016, May 5). The risks of Soviet-style managerialism in UK universities. Times
Higher Education. www.timeshighereducation.com/comment/the-risks-of-soviet-style-
managerialism-in-united-kingdom-universities
Brooks, D. (2013, February 5). The philosophy of data. The New York Times, A, p. 23.
www.nytimes.com/2013/02/05/opinion/brooks-the-philosophy-of-data.html
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency. Proceedings of Machine Learning Research. https://proceedings.mlr.press/v81/buolamwini18a.html
Bürgi, R. (2016). The free world and the cult of expertise: The rise of OECD’s education-
alizing technocracy. International Journal for the Historiography of Education, 6(2), 159–175.
Burke, G., Mendoza, M., Linderman, J., & Tarm, M. (2022, March 6). How AI-
powered tech landed man in jail with scant evidence. The Associated Press. https://
apnews.com/article/artificial-intelligence-algorithm-technology-police-crime-
7e3345485aa668c97606d4b54f9b6220
Butz, M. V. (2021). Towards strong AI. KI – Künstliche Intelligenz, 35(1), 91–101. https://
doi.org/10.1007/s13218-021-00705-x
Carr, N. G. (2014). The glass cage: Automation and us. W.W. Norton & Company.
Cauwels, P., & Sornette, D. (2022). Are 'flow of ideas' and 'research productivity' in secular decline? Technological Forecasting and Social Change, 174, 121267. https://doi.org/10.1016/j.techfore.2021.121267
Chen, S. (2018, April 29). “Forget the Facebook leak”: China is mining data directly
from workers’ brains on an industrial scale. South China Morning Post. www.scmp.
com/news/china/society/article/2143899/forget-facebook-leak-china-mining-
data-directly-workers-brains
Chetty, R., Grusky, D., Hell, M., Hendren, N., Manduca, R., & Narang, J. (2017).
The fading American dream: Trends in absolute income mobility since 1940. Science,
356(6336), 398–406.
Chetty, R., Hendren, N., Jones, M. R., & Porter, S. R. (2020). Race and economic
opportunity in the United States: An intergenerational perspective. The Quarterly Jour-
nal of Economics, 135(2), 711–783.
Choi, B. C., & Pak, A. W. (2008). Multidisciplinarity, interdisciplinarity, and transdiscipli-
narity in health research, services, education and policy: 3. Discipline, inter-discipline
distance, and selection of discipline. Clinical and Investigative Medicine, 31(1), E41–E48.
https://doi.org/10.25011/cim.v31i1.3140
Chun, W. H. K., & Barnett, A. (2021). Discriminating data: Correlation, neighborhoods, and
the new politics of recognition. The MIT Press.
Cioran, E. M. (2012). A short history of decay. Arcade Publishing.
Clayton, A. (2021). Bernoulli’s fallacy: Statistical illogic and the crisis of modern science. Colum-
bia University Press.
Cohen, A. (2016). Imbeciles. The Supreme Court, American eugenics, and the sterilization of
Carrie Buck. Penguin Press.
Cohen, S. B. (2019, November 21). Sacha Baron Cohen’s keynote address at ADL’s 2019 never
is now summit on Anti-Semitism and Hate. Remarks by Sacha Baron Cohen, Recipient
of ADL’s International Leadership Award. www.adl.org/news/article/sacha-baron-
cohens-keynote-address-at-adls-2019-never-is-now-summit-on-anti-semitism
Colbrook, M. J., Antun, V., & Hansen, A. C. (2022). The difficulty of computing stable and accurate neural networks: On the barriers of deep learning and Smale's 18th problem. Proceedings of the National Academy of Sciences, 119(12), e2107151119. https://doi.org/10.1073/pnas.2107151119
Collini, S. (2012). What are universities for? Penguin.
Conway, F., & Siegelman, J. (2005). Dark hero of the information age: In search of Norbert
Wiener, the father of cybernetics. Basic Books.
Courtwright, D. T. (2019). The age of addiction: How bad habits became big business. The
Belknap Press of Harvard University Press.
Daunton, N. (2021, November 24). Why Prince William is wrong to blame habitat loss
on population growth in Africa. Euronews. www.euronews.com/green/2021/11/24/
why-prince-william-is-wrong-to-blame-habitat-loss-on-population-growth-in-africa
Delzell, D. A., & Poliak, C. D. (2013). Karl Pearson and eugenics: Personal opinions and scientific rigor. Science and Engineering Ethics, 19(3), 1057–1070. https://doi.org/10.1007/s11948-012-9415-2
Dezfouli, A., Nock, R., & Dayan, P. (2020). Adversarial vulnerabilities of human decision-making. Proceedings of the National Academy of Sciences, 117(46), 29221–29228. https://doi.org/10.1073/pnas.2016921117
Dockrill, P. (2018, June 13). IQ scores are falling in "worrying" reversal of 20th century intelligence boom. Science Alert. www.sciencealert.com/iq-scores-falling-in-worrying-reversal-20th-century-intelligence-boom-flynn-effect-intelligence
Draper, N. A., & Turow, J. (2019). The corporate cultivation of digital resignation. New
Media & Society, 21(8), 1824–1839. https://doi.org/10.1177/1461444819833331
Drucker, P. F. (1969). The age of discontinuity; guidelines to our changing society. Harper &
Row.
Dunn, T. (2020). Inside the swarms: Personalization, gamification, and the networked public sphere. In J. Jones & M. Trice (Eds.), Platforms, protests, and the challenge of networked democracy. Rhetoric, politics and society. Palgrave Macmillan. https://doi.org/10.1007/978-3-030-36525-7_3
Eaton, G. (2022, April 6). Noam Chomsky: “We’re approaching the most dangerous point
in human history”. New Statesman. www.newstatesman.com/encounter/2022/04/
noam-chomsky-were-approaching-the-most-dangerous-point-in-human-history
Ellis, J. A. (1985). Military contributions to instructional technology. Praeger.
Enyedy, N. (2014). Personalized instruction: New interest, old rhetoric, limited results, and the
need for a new direction for computer-mediated learning. National Education Policy Center.
http://nepc.colorado.edu/publication/personalized-instruction.
Ferguson, T., & Voth, H.-J. (2008). Betting on Hitler – The value of political connections
in Nazi Germany*. The Quarterly Journal of Economics, 123(1), 101–137. https://doi.
org/10.1162/qjec.2008.123.1.101
Fletcher, D. J., & Rockway, M. (1986). Computer based training in the military. In J. A.
Ellis (Ed.), Military contributions to instructional technology. Praeger.
Flynn, J. R., & Shayer, M. (2018). IQ decline and Piaget: Does the rot start at the top? Intelligence, 66, 112–121. https://doi.org/10.1016/j.intell.2017.11.010
Frey, B. B. (2018). The Sage encyclopedia of educational research, measurement, and evaluation.
Sage Reference. https://doi.org/10.4135/9781506326139
Friedman, M. (2007). The social responsibility of business is to increase its profits. In W. C. Zimmerli, M. Holzinger, & K. Richter (Eds.), Corporate ethics and corporate governance (pp. 173–178). Springer. https://doi.org/10.1007/978-3-540-70818-6_14
Fukuyama, F. (1992). The end of history and the last man. Free Press.
Furedi, F. (2004, August 6). Plagiarism stems from a loss of scholarly ideals. Times Higher Education Supplement. www.timeshighereducation.com/features/plagiarism-stems-from-a-loss-of-scholarly-ideals/190541.article
Galton, F. (1901, October 29). The second Huxley lecture of the anthropological institute, included
in the essays in eugenics. The Eugenics Education Society.
Galton, F. (1908). Memories of my life. Methuen & Co.
Galton, F. (1909). Essays in eugenics. The Eugenics Education Society.
Galton, F. (2012). Hereditary genius: An inquiry into its laws and consequences. Barnes & Noble.
Gambetta, D., & Hertog, S. (2016). Engineers of jihad: The curious connection between violent
extremism and education. Princeton University Press.
Gardner, H. (2011). Truth, beauty, and goodness reframed: Educating for the virtues in the twenty-first century. Basic Books.
Giroux, H. A., & Casablancas, J. (2019). The terror of the unforeseen. Los Angeles Review
of Books.
Gitelman, L. (2013). “Raw data” is an oxymoron. The MIT Press.
Golbeck, J. (2014, September). All eyes on you. Psychology Today. www.psychologytoday.
com/us/articles/201409/all-eyes-you
Gonzalez, G., & Gonzalez, G. (1979). The historical development of the concept of intel-
ligence. Review of Radical Political Economics, 11(2), 44–54. https://doi.org/10.1177/
048661347901100204
Gould, S. J. (1996). The mismeasure of man (Rev. and expanded ed.). W. W. Norton & Company.
Grace, K., Salvatier, J., Dafoe, A., Zhang, B., & Evans, O. (2018). Viewpoint: When will AI exceed human performance? Evidence from AI experts. Journal of Artificial Intelligence Research, 62, 729–754. https://doi.org/10.1613/jair.1.11222
Gregorian, D. (2021, June 10). Lunar new deal: GOP Rep. Gohmert suggests alter-
ing moon’s orbit to combat climate change. NBC News. www.nbcnews.com/
politics/congress/lunar-new-deal-gop-rep-gohmert-suggests-altering-moon-s-
n1270219
Guterres, A. (2021). Secretary-general’s address to the general assembly. United Nations.
Retrieved September 22, 2021, from www.un.org/sg/en/node/259241
Guthrie, S., Lichten, C., van Belle, J., Ball, S., Knack, A., & Hofman, J. (2017). Under-
standing mental health in the research environment. A rapid evidence assessment. Rand Europe.
Hankerson, D. M. (2021, September 21). CDT original research examines privacy implications
of school-issued devices and student activity monitoring software. https://cdt.org/insights/
cdt-original-research-examines-privacy-implications-of-school-issued-devices-and-
student-activity-monitoring-software/
Harari, Y. N. (2016). Homo deus: A brief history of tomorrow. Harvill Secker.
Harbour, P. J. (2012, December 19). The emperor of all identities. The New York Times.
www.nytimes.com/2012/12/19/opinion/why-google-has-too-much-power-over-
your-private-life.html
Hare, J. (2016, April 13). University of Melbourne start-up Cadmus targets cheats. The Aus-
tralian. www.theaustralian.com.au/higher-education/university-of-melbourne-startup-
cadmus-targets-cheats/news-story/f5e2677aea4a90b54f5c5ee0e4d3eee7
Hauser, C. (2021, June 2). Outrage greets report of Arizona plan to use ‘holocaust gas’
in executions. New York Times. www.nytimes.com/2021/06/02/us/arizona-zyklon-b-
gas-chamber.html
Hayles, N. K. (1987). Text out of context: Situating postmodernism within an information
society. Discourse, 9, 24–36. www.jstor.org/stable/41389085
Heidegger, M. (1969). Discourse on thinking. A translation of Gelassenheit. Harper & Row.
Heidegger, M. (1977). The question concerning technology, and other essays (1st ed.). Harper &
Row.
Heikkila, M. (2021, October 20). POLITICO AI: Decoded: AI goes to school – What
EU capitals think of the AI act – Facebook’s content moderation headache. Politico.
www.politico.eu/newsletter/ai-decoded/ai-goes-to-school-what-eu-capitals-think-
of-the-ai-act-facebooks-content-moderation-headache-2/
Herf, J. (1984). Reactionary modernism: Technology, culture, and politics in Weimar and the Third
Reich. Cambridge University Press.
Herszenhorn, D. M. (2022, March 4). The fighting is in Ukraine, but risk of World War III is real. Politico. www.politico.eu/article/fight-ukraine-russia-world-war-risk-real/
Higgins, M. D. (2021, June 8). ‘On academic freedom’ – Address at the scholars at risk
Ireland/all European academies conference: President of Ireland. Speeches. https://
president.ie/en/media-library/speeches/on-academic-freedom-address-at-the-schol
ars-at-risk-ireland-all-european-academies-conference
Ho, S., & Burke, G. (2022, April 30). An algorithm that screens for child neglect raises concerns.
Associated Press. https://apnews.com/article/child-welfare-algorithm-investigation-
9497ee937e0053ad4144a86c68241ef1
Hoffman, D. (1999, February 10). I had a funny feeling in my gut. Washington Post Foreign Service. www.washingtonpost.com/wp-srv/inatl/longterm/coldwar/shatter021099b.htm
Hofstadter, R. (1963). Anti-intellectualism in American life. Knopf.
Hollands, F. M., & Tirthali, D. (2014). MOOCs: Expectations and reality. Full report. Center for Benefit-Cost Studies of Education, Teachers College, Columbia University. https://files.eric.ed.gov/fulltext/ED547237.pdf
Hoover, H. (1927). Motion pictures, trade, and the welfare our western hemisphere. Advo-
cate of Peace through Justice, 89(5), 291–296. www.jstor.org/stable/20661595
Horan, C. (2021). Insurance era: Risk, governance, and the privatization of security in postwar
America. The University of Chicago Press.
Hoxhaj, R. (2015). Wage expectations of illegal immigrants: The role of networks and previous migration experience. International Economics, 142, 136–151. https://doi.org/10.1016/j.inteco.2014.10.002
Hutson, M. (2018). Has artificial intelligence become alchemy? Science, 360(6388), 478. https://doi.org/10.1126/science.360.6388.478
James, I. (2009). Claude Elwood Shannon 30 April 1916–24 February 2001. Biographical Memoirs of Fellows of the Royal Society, 55, 257–265. https://doi.org/10.1098/rsbm.2009.0015
James, W. (1983). The principles of psychology. Harvard University Press.
Jasanoff, S., & Kim, S.-H. (2015). Dreamscapes of modernity: Sociotechnical imaginaries and the fabrication of power. The University of Chicago Press.
Jensen, A. R. (1998). The g factor: The science of mental ability. Praeger.
Johnson, K. (2022, March 7). How wrongful arrests based on AI derailed 3 men’s lives. www.
wired.com/story/wrongful-arrests-ai-derailed-3-mens-lives/
Judt, T. (2005). Postwar. A History of Europe since 1945. The Penguin Press.
Kalfa, S., Wilkinson, A., & Gollan, P. J. (2018). The academic game: Compliance and
resistance in universities. Work, Employment and Society, 32(2), 274–291.
Katz, Y. (2020). Artificial whiteness: Politics and ideology in artificial intelligence. Columbia University Press.
Kell, H., & Wai, J. (2018). Terman study of the gifted. In B. Frey (Ed.), The Sage ency-
clopedia of educational research, measurement, and evaluation (Vol. 1, pp. 1665–1667). Sage
Publications, Inc. www.doi.org/10.4135/9781506326139.n691
Kevles, D. J. (1986). In the name of eugenics: Genetics and the uses of human heredity. University
of California Press.
Kharpal, A. (2018). A.I. is in a 'golden age' and solving problems that were once in the realm of sci-fi, Jeff Bezos says. Retrieved October 9, 2021, from www.cnbc.com/2017/05/08/amazon-jeff-bezos-artificial-intelligence-ai-golden-age.html
Kibby, B. (2022, April 6). Why hyper-personalization is critical for higher ed. eCampus
News. www.ecampusnews.com/2022/04/06/why-hyper-personalization-is-critical-
for-higher-ed/
Kim, T. (2018, April 11). Goldman Sachs asks in biotech research report: "Is curing patients a sustainable business model?" CNBC. www.cnbc.com/2018/04/11/goldman-asks-is-curing-patients-a-sustainable-business-model.html
Klingler, W. (2017). Silicon Valley's radical machine cult. Vice. www.vice.com/en/article/kz7jem/silicon-valley-digitalism-machine-religion-artificial-intelligence-christianity-singularity-google-facebook-cult
Kranzberg, M. (1990). Software for human hardware. In P. Zunde & D. Hocking (Eds.),
Empirical foundations of information and software science V. Plenum Press.
Krause, M. J., & Tolaymat, T. (2018). Quantification of energy and carbon costs for mining cryptocurrencies. Nature Sustainability, 1(11), 711–718. https://doi.org/10.1038/s41893-018-0152-7
Kruglanski, A. W., & Orehek, E. (2011). The need for certainty as a psychological nexus for individuals and society. In Extremism and the psychology of uncertainty (pp. 1–18). https://doi.org/10.1002/9781444344073.ch1
Kühl, S. (1994). The Nazi connection: Eugenics, American racism, and German national socialism.
Oxford University Press.
LaFrance, A. (2021, September 27). The largest autocracy on earth. The Atlantic.
www.theatlantic.com/magazine/archive/2021/11/facebook-authoritarian-hostile-
foreign-power/620168/
Lanier, J. (2013). Who owns the future? (First Simon & Schuster hardcover ed.). Simon & Schuster.
Layton, R. (2022, April 23). Commerce’s BIS can help stop China’s quest for AI dominance.
Forbes. www.forbes.com/sites/roslynlayton/2022/04/23/commerces-bis-can-help-
stop-chinas-quest-for-ai-dominance/
Legg, S., & Hutter, M. (2007). A collection of definitions of intelligence. Frontiers in Artificial Intelligence and Applications, 157, 17–24. arXiv:0706.3639 [cs.AI]
Leslie, M. (2000, July/August). The vexing legacy of Lewis Terman. Stanford Magazine.
https://stanfordmag.org/contents/the-vexing-legacy-of-lewis-terman
Levine, A. (2006). Educating school teachers. The Education Schools Project.
Levine, A., & Van Pelt, S. (2021). The great upheaval: Higher education’s past, present, and
uncertain future. Johns Hopkins University Press.
Linton, R. (1951). Review of Hollywood, the dream factory – an anthropologist looks at
the movie-makers, Hortense powdermaker. American Anthropologist, 53(2), 269–271.
www.jstor.org/stable/663894
Littman, M. L., Ajunwa, I., Berger, G., Boutilier, C., Currie, M., Doshi-Velez, F., Hadfield, G., Horowitz, M. C., Isbell, C., Kitano, H., Levy, K., Lyons, T., Mitchell, M., Shah, J., Sloman, S., Vallor, S., & Walsh, T. (2021). Gathering strength, gathering storms: The one hundred year study on artificial intelligence (AI100) 2021 study panel report. Stanford University. http://ai100.stanford.edu/2021-report
Lombardo, P. A. (2002). “The American breed”: Nazi eugenics and the origins of the
Pioneer fund. Albany Law Review, 65(3), 743–830.
Lombardo, P. A. (2011). A century of eugenics in America: From the Indiana experiment to the
human genome era. Indiana University Press.
Lorenz, C. (2012). If you’re so smart, why are you under surveillance? Universities, neo-
liberalism, and new public management. Critical Inquiry, 38(3), 599–629. https://doi.
org/10.1086/664553
Lundh, A., Lexchin, J., Mintzes, B., Schroll, J. B., & Bero, L. (2017). Industry sponsor-
ship and research outcome. Cochrane Database of Systematic Reviews, 2(2), Mr000033.
https://doi.org/10.1002/14651858.MR000033.pub3
Lynch, K. (2015). Control by numbers: New managerialism and ranking in higher educa-
tion. Critical Studies in Education, 56(2), 190–207. https://doi.org/10.1080/17508487
.2014.949811
Marcuse, H., & Kellner, D. (1998). Collected papers of Herbert Marcuse. Routledge.
Marks, R. (1974). Lewis M. Terman: Individual differences and the construction of social reality. Educational Theory, 24(4), 336–355. https://doi.org/10.1111/j.1741-5446.1974.tb00652.x
McCarthy, J. (1987). Generality in artifcial intelligence. Communications of the ACM,
30(12), 1030–1035. https://doi.org/10.1145/33447.33448
McCarthy, J. (1997). AI as sport. Science, 276(5318), 1518–1519. https://doi.org/10.1126/science.276.5318.1518
McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A proposal for
the Dartmouth summer research project on artifcial intelligence, August 31, 1955. AI
Magazine, 27(4), 12. https://doi.org/10.1609/aimag.v27i4.1904
McCorduck, P. (2004). Machines who think: A personal inquiry into the history and prospects of
artifcial intelligence. A.K. Peters.
McMahon, S. D., Anderman, E. M., Astor, R. A., Espelage, D. L., Martinez, A., Reddy,
L. A., & Worrell, F. C. (2022). Violence against educators and school personnel: Crisis during
COVID (Technical Report). American Psychological Association.
Meadows, D. H., Meadows, D. L., Randers, J., & Behrens III, W. W. (1972). The limits
to growth; A report for the Club of Rome’s project on the predicament of mankind. Universe
Books.
Millar, K. (2020, June 24). HAI Fellow Kate Vredenburgh: The right to an explanation. Human-Centered Artificial Intelligence, Stanford University. https://hai.stanford.edu/news/hai-fellow-kate-vredenburgh-right-explanation
Minsky, M. (1992). Alienable rights. The MIT Press. https://web.media.mit.edu/~minsky/
papers/Alienable%20Rights.html
Minsky, M. (1994). Will robots inherit the earth? Scientific American, 271(4), 108–113. https://doi.org/10.1038/scientificamerican1094-108
Mohamed, E. (2021, November 30). Experts critique Prince William’s ideas on Africa pop-
ulation. Al Jazeera. www.aljazeera.com/news/2021/11/30/experts-critique-prince-
williams-ideas-on-africa-population
Morozov, E. (2013). To save everything, click here: The folly of technological solutionism.
PublicAfairs.
Nagle, T., Redman, T. C., & Sammon, D. (2017, September 11). Only 3% of companies’
data meets basic quality standards. Harvard Business Review. https://hbr.org/2017/09/
only-3-of-companies-data-meets-basic-quality-standards
Neisser, U., Boodoo, G., Bouchard Jr, T. J., Boykin, A. W., Brody, N., Ceci, S. J., Halpern, D. F., Loehlin, J. C., Perloff, R., Sternberg, R. J., & Urbina, S. (1996). Intelligence: Knowns and unknowns. American Psychologist, 51(2), 77–101. https://doi.org/10.1037/0003-066X.51.2.77
Noelle-Neumann, E. (1974). The spiral of silence: A theory of public opinion. Journal of Communication, 24(2), 43–51. https://doi.org/10.1111/j.1460-2466.1974.tb00367.x
Nordquist, R. (2020, August 27). Catachresis (Rhetoric). www.thoughtco.com/what-is-
catachresis-1689826
NSCAI. (2021). Final report. The National Security Commission on Artificial Intelligence. www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf
Nussbaum, M. C. (1997). Cultivating humanity: A classical defense of reform in liberal education.
Harvard University Press.
Obama, B. (2016, December 10). Now is the greatest time to be alive. WIRED. www.
wired.com/2016/10/president-obama-guest-edits-wired-essay/
O’Connor, J., Eberle, C., Cotti, D., Hagenlocher, M., Hassel, J., Janzen, S., Narvaez, L.,
Newsom, A., Ortiz-Vargas, A., Schuetze, S., & Sebesvari, Z. (2021). Interconnected dis-
aster risks. Interconnected Disaster Risks. UNU-EHS. Bonn.
OECD. (2019). Artificial intelligence in society. OECD Publishing. https://doi.org/10.1787/eedfee77-en
OECD. (2021a). AI and the future of skills (Vol. 1). https://doi.org/10.1787/5ee71f34-en
OECD. (2021b). OECD digital education outlook 2021. https://doi.org/10.1787/589b283f-en
OECD. (2022). OECD framework for the classification of AI systems. https://doi.org/10.1787/cb6d9eca-en
Pasquale, F. (2015). The black box society: The secret algorithms that control money and informa-
tion. Harvard University Press.
Passell, P., Roberts, M., & Ross, L. (1972, April 2). The limits to growth. The New
York Times. https://www.nytimes.com/1972/04/02/archives/the-limits-to-growth-a-
report-for-the-club-of-romes-project-on-the.html
Pearson, K. (1911). The grammar of science. A. and C. Black.
Perna, L., Ruby, A., Boruch, R., Wang, N., Scull, J., Evans, C., & Ahmad, S. (2013). The
life cycle of a million MOOC users. The University of Pennsylvania Graduate School of
Education. www.gse.upenn.edu/pdf/ahead/perna_ruby_boruch_moocs_dec2013.pdf
Piketty, T., & Goldhammer, A. (2014). Capital in the twenty-frst century. The Belknap Press
of Harvard University Press.
Potter, J. (2008). Entrepreneurship and higher education. OECD Publishing.
Pritchett, L. (2013). The rebirth of education: Schooling Ain’t learning. Center for Global
Development.
Tech Transparency Project. (2017). Google academics inc. www.techtransparencyproject.org/articles/google-academics-inc
Purdy, J. (2016, November 30). The anti-democratic worldview of Steve Bannon and Peter Thiel.
Politico. www.politico.com/magazine/story/2016/11/donald-trump-steve-bannon-
peter-thiel-214490/
Reilly, P. R. (2015). Eugenics and involuntary sterilization: 1907–2015. Annual Review of Genom-
ics and Human Genetics, 16, 351–368. https://doi.org/10.1146/annurev-genom-090314-
024930
Repucci, S., & Slipowitz, A. (2022). The global expansion of authoritarian rule. Freedom in the world 2022. Freedom House. https://freedomhouse.org/sites/default/files/2022-02/FIW_2022_PDF_Booklet_Digital_Final_Web.pdf
Riesman, D. (1951). Review of Hollywood: The dream factory., Hortense Powdermaker.
American Journal of Sociology, 56(6), 589–592. www.jstor.org/stable/2772480
Roberts, M. E. (2018). Censored: Distraction and diversion inside China’s great frewall. Prince-
ton University Press.
Romo, D. D. (2005). Ringside seat to a revolution: An underground cultural history of El Paso
and Juárez, 1893–1923. Cinco Puntos Press.
Röösli, E., Bozkurt, S., & Hernandez-Boussard, T. (2022). Peeking into a black box, the fairness and generalizability of a MIMIC-III benchmarking model. Scientific Data, 9(1), 24. https://doi.org/10.1038/s41597-021-01110-7
Rushkoff, D. (2018, December 12). The anti-human religion of silicon valley. Medium. https://medium.com/team-human/the-anti-human-religion-of-silicon-valley-ac37d5528683
Russell, N. C., Reidenberg, J. R., Martin, E., et al. (2018). Transparency and the marketplace
for student data [Report]. Fordham Center on Law and Information Policy. https://apo.
org.au/node/175271
Santomauro, D. F., Herrera, A. M. M., Shadid, J., Zheng, P., Ashbaugh, C., Pigott, D. M.,
Abbafati, C., Adolph, C., Amlag, J. O., & Aravkin, A. Y. (2021). Global prevalence and
burden of depressive and anxiety disorders in 204 countries and territories in 2020 due
to the COVID-19 pandemic. The Lancet, 398(10312), 1700–1712.
Saul, H. (2016, February 24). Donald Trump declares “I love the poorly educated” as
he storms to victory in Nevada caucus. Independent. www.independent.co.uk/news/
people/donald-trump-declares-i-love-poorly-educated-he-storms-victory-nevada-
caucus-a6893106.html
Scheffler, I. (1973). Reason and teaching. Routledge and Kegan Paul.
Sejnowski, T. J. (2018). The deep learning revolution. The MIT Press.
Senior, J., & Gyarmathy, E. (2021). AI and developing human intelligence: Future learning and educational innovation. Routledge.
Shurkin, J. N. (2006). Broken genius: The rise and fall of William Shockley, creator of the elec-
tronic age. Macmillan.
Sirius, R. U., & Joy, D. (2005). Counterculture through the ages: From Abraham to acid house.
Villard.
Sivabalan, S. (2019, August 13). Argentina's massive sell-off had a 0.006% chance of happening. www.bloomberg.com/news/articles/2019-08-13/argentina-rout-was-4-sigma-event-beckoning-the-bravest-of-brave
Slater, A. (2021, May 18). How artificial intelligence depends on low-paid workers. Tribune. https://tribunemag.co.uk/2021/05/how-artificial-intelligence-depends-on-low-paid-workers
Smyth, J. (2017). The toxic university: Zombie leadership, academic rock stars and neoliberal ideol-
ogy. Palgrave Macmillan.
Sontag, S., & Rieff, D. (2008). Reborn: Journals and notebooks, 1947–1963. Farrar, Straus and Giroux.
Sternberg, R. J. (2020). The Cambridge handbook of intelligence. Cambridge University Press.
Stokel-Walker, C. (2021, November 25). AI has learned to read the time on an ana-
logue clock. New Scientist. www.newscientist.com/article/2298773-ai-has-learned-
to-read-the-time-on-an-analogue-clock/
Stonier, T. (1992). The evolution of machine intelligence. In Beyond information. Springer.
https://doi.org/10.1007/978-1-4471-1835-0_6
Stoycheff, E. (2016). Under surveillance: Examining Facebook's spiral of silence effects in the wake of NSA internet monitoring. Journalism & Mass Communication Quarterly, 93(2), 296–311. https://doi.org/10.1177/1077699016630255
Thiel, P. (2009, April 13). The education of a libertarian. Cato Unbound. Cato Institute.
www.cato-unbound.org/2009/04/13/peter-thiel/education-libertarian/
Urbina, F., Lentzos, F., Invernizzi, C., & Ekins, S. (2022). Dual use of artificial-intelligence-powered drug discovery. Nature Machine Intelligence, 4(3), 189–191. https://doi.org/10.1038/s42256-022-00465-9
Vostal, F. (2016). Introduction: The pulse of modern academia. In Accelerating academia:
The changing structure of academic time (pp. 1–10). Palgrave Macmillan. https://doi.
org/10.1057/9781137473608_1
Wakefield, J. (2021, March 28). AI: Ghost workers demand to be seen and heard. BBC. www.bbc.com/news/technology-56414491
Warofka, A. (2018, November 5). An independent assessment of the human rights impact of Facebook in Myanmar. Meta. https://about.fb.com/news/2018/11/myanmar-hria/
Watters, A. (2021). Teaching machines. The MIT Press.
White, L. T. (1974). Medieval technology and social change. Oxford University Press.
Whitman, J. Q. (2017). Hitler’s American model: The United States and the making of Nazi race
law. Princeton University Press.
Wiener, N. (1994). Invention: The care and feeding of ideas. MIT Press.
Wilson, E. O. (1998). Consilience: The unity of knowledge. Knopf: Distributed by Random
House.
Wolfe, P. (1991). On being woken up: The dreamtime in anthropology and in Australian
Settler culture. Comparative Studies in Society and History, 33(2), 197–224. https://doi.
org/10.1017/S0010417500017011
Zhang, S., Roller, S., Goyal, N., Artetxe, M., Chen, M., Chen, S., Dewan, C., Diab,
M., Li, X., Lin, X. V., Mihaylov, T., Ott, M., Shleifer, S., Shuster, K., Simig, D., Koura,
P. S., Sridhar, A., Wang, T., & Zettlemoyer, L. (2022). OPT: Open pre-trained transformer
language models. ArXiv, abs/2205.01068.
Zuboff, S. (2020). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile Books.
Zuckerberg, M. (2012, February 2). Facebook’s letter from Mark Zuckerberg – full text. The
Guardian. www.theguardian.com/technology/2012/feb/01/facebook-letter-mark-
zuckerberg-text
The Annals of Human Genetics. https://onlinelibrary.wiley.com/page/journal/14691809/
homepage/productinformation.html
Deakin University. (2014, October 8). Deakin and IBM unite to revolutionise the student experience. www.deakin.edu.au/about-deakin/news-and-media-releases/articles/deakin-and-ibm-unite-to-revolutionise-the-student-experience
Deakin University. (2015, November 25). IBM Watson helps Deakin drive the digital
frontier. Media Release. www.deakin.edu.au/about-deakin/news-and-media-releases/
articles/ibm-watson-helps-deakin-drive-the-digital-frontier
Eticas Foundation. (2022). The external audit of the VioGén. https://eticasfoundation.org/
wp-content/uploads/2022/03/ETICAS-FND-The-External-Audit-of-the-VioGen-
System.pdf
Human Rights Watch. (2019, August 11). Australia: Press Laos to protect rights dialogue
should address enforced disappearances, free speech. www.hrw.org/news/2019/08/11/
australia-press-laos-protect-rights
IPCC. (2022). Summary for policymakers [H.-O. Pörtner, D. C. Roberts, E. S. Poloc-
zanska, K. Mintenbeck, M. Tignor, A. Alegría, M. Craig, S. Langsdorf, S. Löschke, V.
Möller, A. Okem (Eds.)]. In H.-O. Pörtner, D. C. Roberts, M. Tignor, E. S. Poloc-
zanska, K. Mintenbeck, A. Alegría, M. Craig, S. Langsdorf, S. Löschke, V. Möller, A.
Okem, & B. Rama (Eds.), Climate change 2022: Impacts, adaptation, and vulnerability.
contribution of working group II to the sixth assessment report of the intergovernmental panel on
climate change. Cambridge University Press.
JISC. (2021). AI in tertiary education: A summary of the current state of play. JISC.
https://repository.jisc.ac.uk/8360/1/ai-in-tertiary-education-report.pdf
MIT Media Lab. (2016, January 25). Marvin Minsky, "father of artificial intelligence," dies at 88. https://news.mit.edu/2016/marvin-minsky-obituary-0125
President of Ireland (Media Library). (2016, April 7). Speech at the EUA annual conference.
NUI Galway, April 7, 2016.
Tech Transparency Project. (2012, July 11). Google academics inc. – Report. www.techtransparencyproject.org/articles/google-academics-inc
TUC. (2020). Technology managing people: The worker experience. Trades Union Congress. www.tuc.org.uk/sites/default/files/2020-11/Technology_Managing_People_Report_2020_AW_Optimised.pdf
UCU. (2022). UK higher education. A workforce in crisis. www.ucu.org.uk/media/12532/
HEReport24March22/pdf/HEReport24March22.pdf
WHO. (2021, June 17). Suicide. www.who.int/news-room/fact-sheets/detail/suicide
WHO. (2021, September 13). Depression. www.who.int/news-room/fact-sheets/detail/
depression
INDEX

1984 (Orwell) 64

abnormal behaviour, flagging 59
academia: intellectual life, erosion 173; pathological organisational dysfunction 83–84
academic body, inertia 83–84
academic dishonesty, flagging 59
academic ethos, dissolution 129
academic ideals, clarification (absence) 60
academic integrity 59, 101; approach 173–174; educative approach 130; examination 158; promotion, solutions 59–60; software, specialisation 174
academic reality/practice, problems 60
academics: ethics, code 194–195; extremes 90–91; role 175–176
accountability principle, validity 83
Adams, James Truslow 65
administration, AI adoption 7–8
“Age of Discontinuity, The” (Drucker) 78
AI Experts Group (AIGO), AI system definition 33
AI Genie, impact 97
AI Watson, adoption 96
Alexa (voice assistant), usage 169
algorithmic fairness 49
algorithmic profiling 114–115
algorithmic score, importance 115
algorithms: black box 34; nature 37; obscuring 156; problems 160; usage 188
“Alienable Rights” (Minsky) 43
“All Can Be Lost” (Carr) 46
alternative facts, false claims 64
Alvarez, Luis Walter 18
Amazon, content/topics selection 192
Amazon worker (management), AI surveillance (usage) 117
American anti-intellectualism 84–85
American cultural model, global impact 82
American culture, anti-intellectual positions 85
American Dream 31, 40, 65; cultivation 69; formula 65–66; foundational myth 66–67; impact 82
American eugenicists, German Nazis (exchanges) 16
American Eugenics Society, Terman (interaction) 19
American exceptionalism, importance 67
Americanisation, project (intentionality) 68–69
American universities, prestige (decline) 92–93
analytics, impact 168
Anderson, Malcolm 90
Annals of Eugenics 17
Annals of Human Genetics 17
Anti-Defamation League, Cohen speech 171
anti-human projects, history 44–45
“Anti-Intellectualism in American Life” (Hofstadter) 85
anti-intellectual policy, statistical effect 177
anti-religious summit, adoption 41
Arab Spring 171
Aristotle 137–138, 150
artificial intelligence (AI): adoption 7–8, 26, 114, 188–189; adoption, principles 191–193; advancement 120–121, 154; alchemy 38; algorithms, black box 50; algorithms, errors/discrimination 133; algorithms, sum 35; algorithms, usage 4; applications, implications 5; assistance 6; BBC investigation 189; big data, requirement 2; Center for American Progress report 168; challenge 100; change 191; concept, birth 85; consideration, Select Committee on Artificial Intelligence 53–54; content/topics selection 192; control/manipulations 189–190; dangers 188–189; deep learning AI, definition 33; definition, difficulty 9; development 148; development, acceleration 163; development, China (activity) 184–185; developments, project aim (involvement) 7; developments/risks/opportunities 4–5; evolution 57, 131–132, 161; fascism, relationship 148; field of fields 31; futures, focus 177–178; higher education context 173; human brain, relationship 45; imaginaries, determination 10–11; imagination, relationship 160–161; importance 170–171; initiative 32; invisible work 189; irony 178; limits, research (advancement) 132; manipulation, possibilities 170; narrative 152–153; narrative construction 53; positive publicity, maintenance 27; possibility 158; power 53, 168–169; public discourse 39; questioning 107–108; role 44; shaping, history (impact) 11; solutionism 42–43; solutions, application 96; study 26; system, AIGO definition 33; systems, adoption 153–154; systems, usage 10; term, coinage 25–26; trends, importance 178–179; usage 49, 161
Artificial Revolution, An (Bartoletti) 34
artificial, term (synonyms) 156
Artificial Whiteness (Katz) 26–27, 32
Arum, Richard 166
Ashby, Eric 195
assessment, AI usage 169
assignments, fine-tuning 180
atemporality, imposition 23
attention, allocation (detection) 110
auctoritas, basis 137
authoritarianism, ethos 43
authority: dissolution 88; erosion, acceleration 137
autocracy, combinations 182
Automata Studies 26
automation bias 33, 46–47
automation complacency 47

Babcock, Philip 176
Ball, Stephen 94
bankruptcy 92
Barbrook, Richard 46
Barnett, Alex 23
Bartoletti, Ivana 34, 37
Bath riots 21
beauty, nature 139
Beer, David 94
behavioural data, log file source 110
“Bernoulli’s fallacy” (Clayton) 13–14
Bertelsmann Transformation Index (BTI), autocracies (increase) 180
“Betting on Hitler” (Ferguson/Voth) 78
Betzig, Eric 92
Bezos, Jeff 117
bias correction factor, usage 177
Biden, Joe 23
Big Data: collection 138; complexity 36; database 36; higher education, relationship 101; impact/promise 35; importance/power, promotion 64; learning, relationship 37; requirement 2
Binet, Alfred 14
Binet-Simon test scales 15
Biological Computer Laboratory (BCL) (University of Illinois) 26–27; establishment 54
Bitcoin, usage (carbon dioxide generation) 189
black boxes, elimination 169
Black Box Society, The (Pasquale) 34, 114
Black, Edwin 44–45
blinking behaviour 110
blockchain, term (usage) 59
Bok, Derek 89
Boroditsky, Lera 135
Bourdieu, Pierre 81
brainwashing 62–63
Brandist, Craig 119
Broken Genius (Shurkin) 20
Brooks, David 58, 64
budgets, balancing 137–138
bullying, ethos 93
Buolamwini, Joy 49

Cadmus (software) 129–130
Californian Dream 69
Californian ideology 190
“Californian ideology” (Barbrook/Cameron) 46
Cambridge Analytica, scandal 108, 148, 182
Cambridge University Eugenics Society, founding 13
Cameron, Andy 46
Campbell, Joseph 163
campus ethos, defining 3–4
“Capital in the Twenty-First Century” (Piketty) 78
capitalism: American model 100–101; greed, relationship 89–90; limbic capitalism 192; perversion 76
carbon dioxide, level (peak) 190
Carlin, George 65
Carr, Nicholas 46–47
certainty-seeking 151–152
cheater, flagging 173–174
cheating: prevention 60; services, parallel industry 128
China, AI usage 187
choice, freedom 123
Chollet, François 38
Chomsky, Noam 161
Chronicle of Higher Education, The 3
Chun, Wendy Hui Kyong 23
Churchill, Winston 14
Cioran, Emil 9
“City upon the Hill” 66
civility, crisis 126–127
ClassDojo 118
class prejudice, pseudo-scientific justification 19
Clayton, Aubrey 13
climate change: emergency 162; IPCC report 123
climate crisis 126–127; existential threat 1–2
Clinton, Bill 66, 88
Club of Rome 41–42
Cohen, Adam 15
Cohen, Sacha Baron 39–40; Anti-Defamation League speech 171
Colbrook, Matthew 132
Cold War: cessation 81–82; tensions 47–48
common man, achievements 65
compliance, culture 84
computing, developments 171
consilience, etymology 5
contextual meanings, role/place (reconsideration) 57
controversy, establishment 87–88
Conway, Flo 54, 55
corporate entities, enrichment 129–130
Counterculture Through the Ages (Hoffman/Joy) 160
courage, importance 184
COVID-19, global pandemic: impact 1, 45, 126–127; lessons 162
Crawford, Kate 148
creativity: absence 161; nurturing 109
Cuban, Mark 38
Cultural Revolution (China) 86
culture, colonisation 160
cybernetics 54, 55, 57; developments 171; revolution 61

Darwin, Charles 14
data: collection, process 24–25; deception 111; ethical considerations 191; glorification/deification 112–113; imaginations 36; limitations 37; manipulation 89; systemic algorithmic analysis 108
data-informed solutions 134
dataism 112–113; promotion/justification 64
data quality (DQ) scores 111
Deakin University, AI Watson adoption 96
decision-making, AI application 193
decision support systems 167–168
decontextualisation, imposition 23
deep learning AI, definition 33
Degesch (chemical company) 21
dehumanisation 153
democracy: decline 23; idea, questions 172; promotion 171
democratic institutions, attacks 171
depression: global impact 61–62; treatment 63
deprivation, cause 151
Dewey, John 17
Diagnostic and Statistical Manual of Mental Disorders (DSM) 62–63
dialectical structure 137–138
digital infrastructure, building 190
Digitalism, redemption promise 39
digital resignation, corporate control, cultivation 118
digital technology, advancements 4
“Discriminating data” (Chun/Barnett) 23
discriminatory immigration policies, support 14
disinformation, impact 172
disruption 193–194
dissent/opinion expression (suppression), surveillance (impact) 114
distinctive intellectual superiority 17
diversion (interrelated rhetorical tactic) 118–119
Dockrill, Peter 176
Dorado Romo, David 20
doubt, product 87–88
Dowden, Oliver 168
“Dream Factory” (Hollywood) 67
Dreamtime, concept (complexity) 67–68
Drucker, Peter 78
Duterte, Rodrigo 146
dysfunction, ethos 93

economic-technocratic approach 106
education: AI, academics adoption 195–196; AI systems, usage 10; AI, tool (function) 193; Americanisation 76; commodity, perspective 88; computers, impact 178; courage, importance 184; failure 127–128; futures, focus 177–178, 183; imaginations, corporate colonisation 154–155; interest 134–135; liberalisation 107; limitation 156; marketisation 76; mediocrity 110; neoliberal ideology, promotion 106–107; neoliberal technocratic solutionism 76; OECD-isation 138–139; packages, usage 126; performance-based accountability model 88–89; performance-based governance, outcomes 89; personalisation, reality 156; personalisation, super-charging 107–108; privatisation 107; project, depletion 167; push-button education 103; reformed education, pathway 43; reform, horse trade 88–89; success, Americanisation (impact) 68–69; surveillance 119; technologised project 126; theory (Vico) 140; thinking, colonisation (success) 154
Educational Oath 194
educational technology (edtech) 36; importance 153–154; evolution 5; extension 149–150; imaginary 138; inevitability 192; narratives, building 139; narratives, relationship 157–158; premise 95–96; promise 107; root 101; solutionism 106; usage 5, 115–116; utopia 59; vendors, solutions purchase 37; words, usage 155–156
“Education of a Libertarian, The” (Thiel) 181
educators, aversion/hostility 87
employability, promises 173
“Engineers of Jihad” (Gambetta/Hertog) 150
English-language proficiency tests, fraud/cheating 136
entrepreneurial freedoms/skills, liberation 81
Enyedy, Noel 109
“Epic of America, The” (Adams) 65
epistemological spaces, impact 135
Eros, understanding 138–139
“Essays in Eugenics” (Galton) 12
Etzioni, Oren 96
eugenics: adherence 16; application 15; birth 12; concept 11–12; inspiration 17; projects, history 44–45; pseudo-science 22; solutions, history 67; studies omission 23; theories, evolution 15–16; theories, ideological sources 41
exceptionalism, importance 66–67
exclusive/elitist culture, foundation 43
extremism, rise 93

Facebook (Meta), analysis 181
Fairchild Semiconductor, creation 20
fake-news 1
fascism 153; rise 93, 148
fascist ideologies, evolution 15–16
feebleminded: cessation 16; label 19
fictitious rationality 120
First Nations, killings/genocide (justification) 13
Fisher, Ronald Aylmer 13, 16
flooding 108–109
“Flynn effect” 176
Flynn, James 176
forced sterilisation 19; laws 14, 15
freedom, concept 155–156
free-market theories, re-emergence 79
Friedman, Milton 78
Friedman, Thomas 58
Fromm, Erich 68
Fukuyama, Francis 65, 123

Galton, Francis 11–14, 16–17, 20, 22, 24
Gambetta, Diego 150–151
Gardner, Howard 140–141
Garicano, Luis 145
gas chamber, Arizona usage 23
Gates, Bill 65, 156
Gebru, Timnit 49
General Agreements on Trade in Services (GATS) 125; components 80–81
genetic algorithms 44
genocidal applications, adherence 16
“Genome Revolution, The” (Goldman Sachs report) 100
German Desinfektionskammern, Zyklon B (usage) 21
German Kultur 149
German Nazis: American eugenicists, exchanges 16; dictatorship, analysis 191
German Volk, empowerment 149
Giroux, Henry 149
Gitelman, Lisa 35–36
global financial crisis (GFC) 145
global freedom, decline (Freedom House report) 147–148
globalisation: American life, alignment 82; dynamic, creation 81; impact 80–81
Goddard, Henry H. 15
Gödel, Kurt 132
“God, Human, Animal, Machine” (O’Gieblyn) 107
Gohmert, Louie 135
Google Academic Inc., problem/impact 181
Gould, Stephen Jay 12
Gove, Michael 86
“Grammar of Science, The” (Pearson) 13
Great Financial Crisis (GFC) 84
Great Upheaval, The (Levine/Van Pelt) 124
greed, capitalism (relationship) 89–90
groupthink 155
Guardian’s Higher Education Network survey 136
Guterres, António 2
Gyarmathy, Éva 22–23

habitual criminals, group formation 12
Hangzhou Bureau of Education, emotional evaluation system 117–118
Harari, Yuval Noah 64–65
Harbour, Pamela Jones 181
Hayles, Katherine 56
“Hereditary Genius” (Galton) 14
Herf, Jeffrey 149
“Hero with a Thousand Faces, The” (Campbell) 163
Hertog, Steffen 150–151
Hewlett, William 19
Higgins, Michael D. 136, 179
Higgs, Peter 91–92
higher education: AI, impact 3; aims, achievement 104; change, World Bank neoliberal policies (impact) 77, 79; competition, increase 183; crisis 84–85; edtech, usage/evolution 5; employability, aim 158–159; future 143; intense courses, usage 77; learning, determination 174–175; managerial model, adoption 91; massification 69; narrative, change 175; neoliberal rationale 76; rethinking 194; scenarios 166, 175; system, dysfunction 93; transactional relationship 167; WTO push 127
higher learning 73; re-storying 187; surveillance, problem 60
Hilbert, D. 132
Hillman, James 2
H-index, proposal 20
Hippocratic Oath 194
Hitler, Adolf 15, 22, 45, 49, 146
Hitler’s American Model (Whitman) 15
Hoffman, David 47
Hofstadter, Richard 85
Hollands, Fiona M. 58
Hollywood, impact 67
“Hollywood: The Dream Factory” (Powdermaker) 68
Holocaust: atrocities, justification/organisation 12; eugenics, relationship 22
“Homo Deus” (Harari) 64
Hoover, Herbert 67, 82
“How Wrongful Arrests Based on AI Derailed 3 Men’s Lives” (Wired article) 113
human-based inputs, usage 33
human brain, AI (relationship) 45
human-centered proctoring policy (ProctorU) 60
human destruction, automation 45
human dignity 123
human genetics, statistics (applications) 13
human hardware 170
human intelligence: AI assistance, absence 112; defining 31–32; development 22; expansion, imagination (usage) 159; research 11; understanding 25
humanism, values 123
humanity: contributions 160; redundancy 46
humankind, problems 43–44
human learning, data collection 37
human life, evaluation 43

IBM and the Holocaust (Black) 44
IBM System 1500, usage 103
identity, loss (impact) 76
“If you’re so smart, why are you under surveillance?” (Lorenz) 120
I.G. Farben (conglomerate) 21
imaginaries, relevance 173
imagination: absence 161; AI, relationship 160–161; impoverishment 156; power, cultivation 163; usage 159
imaginations 31; neoliberal examples 160
imaginative experiments 109
Imbeciles (Cohen) 15
Immigration Restriction Act of 1924 22
imperial imagination, birth 67–68
Independent Commission Against Corruption (ICAC) 136
individualized education system, usage 156
indoctrination, normalisation 118
inequality, widening 1–2
inequities, rise 93
informatics, revolution 61
information: context/text/meaning/knowledge, relationship 57; finding, suppression 108–109; impact 146; quantification, Shannon solution 56; term, usage (von Foerster complaint) 60; theory 55; theory signal 61
innovation 125, 193–194
Inside Higher Ed 3
institutional entrepreneurship, university adoption 82
institutionalised racism, history 67
instruction, language (understanding) 136–137
intellect, denigration 85
intellectual curiosity, subsidization 86
intellectual effervescence, impact 3–4
intellectual laziness 155
intellectual life, erosion 173
intellectual property, theft 173–174
intelligence: attraction/ideological strength 26; augmentation systems 167–168; concept, danger 9–10; concept, historical roots 4–5; concept, ideological function 25; definitions 10, 25; efficiency 18–19; epistemological evolution 23; epistemological foundations 16; etymology 11; eugenistic approach 10; function 85; Galton studies 24; ideological concept 26; ideological determinant 24; ideological foundations 24; ideological roots 9; ideological structure, understanding 13–14; implications 131; psychological features 10; shaping, history (impact) 11; standardised measurements 15–16; statistics, applications 13; taxonomies 24
“Intelligence: Knowns and Unknowns” (APA group report) 33
intelligence quotient (IQ): assessment 25; concept 17–18; scores, increase (deceleration) 176; scores, measurements (considerations) 176–177; tests, attributes 31–32; tests, standard (setting) 18, 25
intelligent action, lesser capacities 17
intelligent, term (decision) 28
Interconnected Disaster Risks, United Nations University report 5–6
Intercontinental Ballistic Missile Programme 102
International Eugenics Conference 14
international institutions, American leverage (strength) 40
international trade agreements, impact 83
interrelated rhetorical tactics, usage 118–119
interrogation, requirement 134

James, William 17
jargon (interrelated rhetorical tactic) 118–119
Jasanoff, Sheila 53, 155
Jensen, Arthur 10
Jewish children, study 13
Jisc, impact 168
Jobs, Steve 131
Johnson, Boris 127
Jones, Alex (conspiracist videos) 171
Judt, Tony 79
Jurieux, Marie-José 62

Katz, Yarden 26–27, 32
Keats, John 139
Key Performance Indicators (KPIs) 82, 84; completion 88; performance, pressure 92
Kim, Sang-Hyun 53, 155
Kirp, David 153
Klingler, Wolfram 38–39, 41
knowledge 135; ephemeral information 154; learning, relationship 158–159; problem 34
Kruglanski, Arie 151

LaFrance, Adrienne 181–182
language: philosophy 55; role 131
Lanier, Jaron 44, 45
Laos, one-party system 146
latent algorithmic bias, source 133
learning: AI adoption 7–8; AI applications, implications 5; assumptions 110–111; automation 75; Big Data 37; determination 174–175; enhancement 104; Eros 135, 139–141; knowledge, relationship 158–159; love 123, 134–135, 173; market, value (creation) 93–94; micro learning 45
learning analytics 5, 108, 111–113; adoption 114; AI usage 169; usage 166
learning management system (LMS) 35, 37, 108, 192; compulsory use 116; data collection/aggregation 112–113; technology, role 76–77; university usage, absence 115; usage 166
“Leisure College, USA” (Babcock/Marks) 176
Levandowski, Anthony 41
Levine, Arthur 124
Lewinsky, Monica 66
libertarianism, combinations 182
limbic capitalism 192
Lim, Elvin T. 85
“Limits to Growth, The” (MIT) 41–42
linear reversion 24
Lippmann, Walter 18
literacy, results 105
“London Conference on Intelligence” 16
Lorenz, Chris 120
Lucas, George 163
Lynch, Kathleen 95

machine inputs, usage 33
machine learning (ML) 33; algorithms 49; BBC investigation 189; focus 157; model predictions 183
machines: human-like intelligence, presence 32; importance 46–47
management, neoliberal model (impact) 116–117
managerialism: neoliberal dogma 94–95; neoliberal models 133
Mann, Thomas 149
Manual for the Physical Inspection of Aliens, publication 21
market positioning, university adoption 82
Marks, Mindy 176
Marrakesh Agreement 79
Massive Open Online Courses (MOOCs) 104; creation 40; impact 57–58; revolution 58; solution 59
McCarthy, John 25–26, 131
McCrory, Patrick 125
Mead, Margaret 55
meaning: human life, separation (illusion) 63; information, separation 61
Mein Kampf (Hitler) 15, 22
Metaphysics (Aristotle) 150
metaverse, alternative reality 190
metrification, success 94
Metzinger, Thomas 152
micro learning 45
migration crisis 1
“Millennium Round” (WTO Conference) 79
Minecraft, imagination (usage) 161
Minsky, Marvin 27, 32, 43–44, 54
Mismeasure of Man, The (Gould) 12
misnaming (interrelated rhetorical tactic) 118–119
mistrust, culture 84
Mohamed, Edna 41
mongrelisation, American law 16
moral engagement, impact 3–4
multiple choice questions (MCQs): importance 180; teaching machine administration 104
myths, truth 163–164

nanotechnology, usage 44
narrative, change 175
narrative imagination 163
narrative structures, relevance 173
National Socialism, danger 149
national statistics, collection/analysis 24
natural inheritance 12
naturalization, races (exclusion) 22
Nazi Germany: capitalism, perception 78; economic policies 77–78; power, gaining 149
neoliberal dogmatism 155
neoliberal fascism 149
neoliberal ideology, promotion 106–107
neoliberalism, impact 86–87
neoliberal narrative 159
neoliberal technocratic solutionism 76
neoliberal utopia, myths 28–29
neoliberal management solutions, impact 172
Netflix: content/topics selection 192; model 76–77
neutral intelligent machines, falsehood 170
New Public Management (NPM) 120; extremes 179; university adoption 82
“New Technique of Education, The” (Ramo) 102
Noelle-Neumann, Elisabeth 114
numbification, normalisation 118
Nussbaum, Martha 163

Obama, Barack (ASEAN address) 145–147
objective assessments 14
OCEAN model 23–24
“Ode on a Grecian Urn” (Keats) 139
O’Gieblyn, Meghan 108
oligarchs, creation 162–163
online exams, proctoring solutions (adoption) 59
OPT-175B (AI language model) 182
Organisation for Economic Co-operation and Development (OECD): AI Principles 157; establishment 105; institutional power, functions 106
Original Sin 39
Orwellian surveillance 95
Owl Ventures 48

Packard, David 19
partners, role 129
Pasquale, Frank 34, 114
Pearson, Karl 13, 24
performance-based accountability model (education) 88–89
performance-based governance, outcomes 89
Perlmutter, Saul 91
Persily, Nathaniel 181
personalisation: edtech usage 157; impact 107–108
personalised education: education, idealised/ahistorical view (link) 109–110; student usage 103
personalised instruction, basis 109
Peters, Gerhard 21
Petrov, Stanislav 47
PIRLS 106
PISA 106
placation (interrelated rhetorical tactic) 118–119
plagiarism: AI-based deterrent, usage 61; approach 173–174; cases, increase 174; detection 5, 129; detection software, limitations 60; detection software, usage 166; deterrence, automated surveillance (usage) 95; deterrence software 37; examination 158; flagging 59; hindrance/identification 129–130; prediction/deterrence, artificial intelligence (usage) 4
Plato 137, 163
PLATO 103
“Points of Entry” (Smithsonian exhibition) 66
population control 41–42
postmodernism 56; pathway 63–64
Potemkin villages 90
PowerSchool 115–116
predictive analytics 112–113; adoption 114; AI usage 169
predictive patterns, idea 24
Pressey, Sidney L. 104
Prince William, Tusk Conservation Awards speech 41
private capitalism, faith (shaking) 78
private information, exploitation 119
private property rights, strength 81
privatisation, term (genesis) 77–78
ProctorU, human-centered proctoring policy 60
producers, role 129
profits, maximisation 137–138
propaganda, impact 171
pseudo-innovation, credibility 3
psychiatrists, brainwashing 62–63
psychiatry, crisis (analysis) 62
psychometric approach 32
psychopathic individualism 69
public good, privatisation 160
punishment, fear 174–175
pupil dilation 110
push-button education 103
push-button schools 102

quantum computing 28

race improvements 22
race prejudice, pseudo-scientific justification 19
race-scientists, impact 16
racial hygiene, pseudo-scientific theory 19
racism: entrenched racism 22; evolution 15–16; invention 12–13
racist principles, adherence 16
racist projects, history 44–45
Rahimi, Ali 38
Ramo, Simon 102–103
Rand, Ayn 69, 124
Raw Data, usage 35
reactionary modernism, essence 149
Reagan, Ronald 47, 85; neoliberal idea 88; neoliberalism, support 86
real world uncertainties, simulation 153
“Rebirth of Education, The” (Pritchett) 134
re-education camps 22
Rekognition software (Amazon) 49
reprivatisation, translation 78
reprivatisierung 77
Richter, Salveen 100
rightwing authoritarianism, rise 148
“Ringside Seat to a Revolution” (Dorado Romo) 20–21
Roberts, Margaret E. 108
robodebt, disaster 42
RoboDebt System, scandal 42, 188
Rockefeller Foundation, project proposal 26
Roksa, Josipa 166
Romney, Mitt 86
Rushkoff, Douglas 40
Russian fascism 149–151
Russian war crimes 153
Russia oligarchs, creation 162–163

SafeAssign (software) 129
Sandel, Michael 93
Schleicher, Andreas 48–50
scholarly ideals, ornamentation 80
school representatives, aversion/hostility 87
scientific foundations 105
scientific knowledge, decline 177
scientific practices, debunking 148
scientific racism 12–13
secrecy, critical mass 34
self-censorship, surveillance (impact) 114
semi-criminals, group formation 12
Senior, John 22–23
sensorimotor tests, development 11
Shallows, The (Carr) 47
Shannon, Claude 26, 55, 56; debates/ideas/solutions 65; impact 57; information/communication theory, application (problem) 63; revolution, problem 61
Shockley, William B. 18–20; H-index 20; management style, problems 20
Shurkin, Joel 20
Siegelman, Jim 54, 55
Silicon Six, power/faith (sharing) 40
Silicon Valley: antihuman ideology 64; ideological positions 131–132; ideology 130–131; manifesto 131
Simon, Theodore 14
Sizer, Theodore 153
Smale, Stephen 132
social justice 12–13
social media, enthusiasm 171
social mobility, improvement 156
“Social Responsibility of Business Is to Increase its Profits” (Friedman) 78
sociotechnical imaginaries 155
Socrates 160
Socratic Eros 141
soft-marking 136
software engineers, hubris 169–170
Sontag, Susan 127
Soviet context, neoliberal arrangements (difference) 120
Spanish flu, impact 21
“spiral of silence” 114
STASI, impact 118
STEM: disciplines 150; graduates, location/creation 151, 152
STEM results 105
Sternberg, Robert 22
Stiglitz, Joseph 81, 89
storytelling, imagination 163
students: abilities, nurturing (absence) 179–180; data marketplace (opening), edtech (impact) 119; intelligence, distinction 14; learning, self-comforting illusions 134–135; school choices 125; self-awareness, increase 166–167; subject-specific knowledge, acquisition 166–167; surveillance/indoctrination/numbification, normalisation 118; surveillance practices (Center for Democracy & Technology report) 113; well-being, improvement 156
subject-specific knowledge, student acquisition 166–167
Sullivan, Teresa A. 58
Summit for Democracy 23
Summit on Anti-Semitism and Hate 390
surveillance 5, 100, 110; adoption, problem 60, 174; advantages 101; dystopian use 117; impact 114; normalisation 118; problems 112, 116–117; ubiquity 102; usage, increase 118
sustained cash flow 100
symbolic power, abuse 157
symbolism, association 130
synthetic biology 28

teachers: quality, indicators 137–138; surveillance/indoctrination/numbification, normalisation 118
Teacher’s Ethical Pledge 194–195
teaching: AI adoption 7–8; AI applications, implications 5; automation 75; casualisation/de-professionalisation 76; improvement 104; machines 104; passion 140–141
Teaching Assistant (TA), AI replacement 193
Teaching Machines (Watters) 156
techne, concept (usage) 150, 158
technocracy, black box 34–35
technocratic-educationalizing networks, impact 105
technocratic solutionism, neoliberal models 133
technocratic utopianism, impact 179
techno-democratic optimism 171
technological advancements/applications 130–131
technological advancements, usage (optimum) 133
technological determinism/libertarian individualism (mixture) 46
technological dominance, posthuman language 155
technological revolution 2
technological solutionism 69, 94
technological solutions, adoption 33
technological utopianism 69
technology: advancement 127; enthusiasm 182; ethical neutrality 49; glorification 149; impact 179; innovations 176; investors, propagandistic persistence 169–170; limits, consideration 48; manipulation 130; overconfidence 47; temptation 48
techno-solutionism: combinations 182; myth 104
Tenpenny, Sherri 135
Terman, Fred 19
Terman, Lewis M. 17–20; American Eugenics Society questions 19; Lippmann debate 18
Terman Study of the Gifted 17–18
“Terror of the Unforeseen, The” (Giroux) 149
Thatcher, Margaret 88
Theages (dialogue) 139
theological motifs, presence 38–39
Thiel, Peter 181
thinking, flight 154
think tanks, impact 105–106
threatening violence, report 118
Thunberg, Greta 161
Times Higher Education 3
TIMSS 106
Torres, Carmelita 21
totalitarian tendencies, history 67
transactional relationship 167
Trojan code 48
Trump, Donald: anti-democratic/hateful rhetoric 147; anti-fascism, negative presentation 180; COVID control/preparedness problem 111–112; oligarchs, comparison 124; poorly educated, relationship 160
Trumpian ideology, problems 64
truth, reflection 112–113
Turing, Alan 132
Turnitin (software) 129
Twitter, AI chatbot (Tay) 49–50

Uighur Muslims, China surveillance 187
ultra-nationalism, rise 148
United Nations University, Interconnected Disaster Risks report 5–6
United States, economic opportunities (study) 69
United States National Security Commission on Artificial Intelligence report (2021) 31
universities: competitive advantage 89; complacency 83–84; creation 3–4; cultivated mediocrity 127; defunding 75; fall, contemplation 92–93; free courses, mass scale 58–59; intellectual crisis 136; leaders, self-description 179; managerial gibberish 83; market paradigm 139; mediocrity/short-termism 76; neoliberal management solutions, impact 172; paradox 75; product, sale 135; return on investments (ROIs) 89
Universities in the marketplace (Bok) 89–90
Upfront Summit 38
Uruguay Round of Multilateral Trade Negotiations 79
U.S. Immigration Law of 1917 21
U.S. schools, student surveillance practices (Center for Democracy & Technology report) 113

Van Pelt, Scott J. 124
Ventura, Michael 2
verbal violence, report 118
Vico, Giambattista 140
VioGén 115
violence, triggers 87
Vista Equity Partners 116
vocational training 125–126; focus 179–180
Vojtko, Margaret Mary 90–91
von Braun, Wernher 158–159
von Foerster, Heinz 26–27, 54; conferences (Josiah Macy Foundation sponsorship) 55; debates/ideas/solutions 65; information theory critique 56–57; meaning/information, separation 61
von Neumann, John 55
Vredenburgh, Kate 112

“Wall Street” (movie) 90
war crimes, dehumanisation 153
Watters, Audrey 3, 156
Way of the Future church 41
Weiwei, Ai 109
Western Zivilisation 149
Whewell, William 5
Whitehead, A.N. 195
white supremacist fascists, impact 151
Whitman, James Q. 15
Wiener, Norbert 54, 55, 177
“Will Robots Inherit the Earth?” (Minsky) 44
Wilson, Edward O. 5
Winthrop, John 66
“Without Consent” (report) 116
World Bank neoliberal policies, impact 77, 79
World Trade Organization: establishment 79; neoliberal ideas 3–4
world trade, stimulation 105

“Year of Books, A” (Harari) 64

Zivilisation 149
Zuboff, Shoshana 101
Zuckerberg, Mark 64, 156, 171
Zyklon B, usage 21, 23
