ARTIFICIAL INTELLIGENCE AND LEARNING FUTURES
Stefan Popenici
Designed cover image: © Getty Images
First published 2023
by Routledge
605 Third Avenue, New York, NY 10158
and by Routledge
4 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN
Routledge is an imprint of the Taylor & Francis Group, an informa business
© 2023 Stefan Popenici
The right of Stefan Popenici to be identified as author of this work has
been asserted in accordance with sections 77 and 78 of the Copyright,
Designs and Patents Act 1988.
All rights reserved. No part of this book may be reprinted or reproduced
or utilised in any form or by any electronic, mechanical, or other
means, now known or hereafter invented, including photocopying and
recording, or in any information storage or retrieval system, without
permission in writing from the publishers.
Trademark notice: Product or corporate names may be trademarks or
registered trademarks, and are used only for identification and explanation
without intent to infringe.
Library of Congress Cataloging-in-Publication Data
Names: Popenici, Stefan, author.
Title: Artificial intelligence and learning futures : critical narratives of
technology and imagination in higher education / Stefan Popenici.
Description: New York, NY : Routledge, 2023. | Includes
bibliographical references and index. | Identifiers: LCCN 2022026178
(print) | LCCN 2022026179 (ebook) | ISBN 9781032210636
(hardback) | ISBN 9781032208527 (paperback) | ISBN
9781003266563 (ebook)
Subjects: LCSH: Artificial intelligence—Educational applications. |
Education, Higher—Effect of technological innovations on.
Classification: LCC LB1028.43 .P66 2023 (print) | LCC LB1028.43
(ebook) | DDC 378.1/7344678—dc23/eng/20220718
LC record available at https://lccn.loc.gov/2022026178
LC ebook record available at https://lccn.loc.gov/2022026179
ISBN: 978-1-032-21063-6 (hbk)
ISBN: 978-1-032-20852-7 (pbk)
ISBN: 978-1-003-26656-3 (ebk)
DOI: 10.4324/9781003266563
Typeset in Bembo
by Apex CoVantage, LLC
To Nadia,
my wife and my best friend.
CONTENTS
Introduction 1
SECTION I
Education, Artificial Intelligence, and Ideology 7
SECTION II
Higher Learning 73
SECTION III
The Future of Higher Education 143
References 198
Index 209
INTRODUCTION
Every year some of the most popular and reputable dictionaries engage in the
ritual of selecting a word that encapsulates the most important ideas, cultural
trends, or opinions for that time. Words or phrases such as “fake news” were
selected in recent years, each emphasising what probably best defines that
specific period of our lives. It is interesting in this sense to note that in 2020,
Oxford Dictionaries found that current events were so complex and significant
that it was better to select several “Words of an Unprecedented Year.” This choice
serves to properly reflect the “ethos, mood, or preoccupations” of that year. This
goes beyond simple marketing strategies for publishing houses and dictionaries; it
is a good way to reconnect us with the importance of language for our identities
and cultural milieu.
We can all safely say that the word defining the first part of the 21st cen-
tury is “crisis.” The global pandemic of COVID-19 accelerated economic
and social crises, and revealed some unexpected truths about all of us. We have
an increasingly threatening climate crisis, which is endangering humanity and
our survival on Earth. We have a humanitarian crisis, where millions deal with
extreme poverty, famine, racism, and injustice, all disputing our commitments
to stated ideals and questioning the very idea of humanity. We have a migration
crisis with impacts across the world. We have a political crisis, with new fascist
regimes and wars arising in the last few years. We have an energy crisis and mas-
sive imbalances with impact across the world. We also have a worldwide crisis of
liberal democracy, a social crisis, a crisis of inequality, and a public health crisis.
Most importantly, we have a crisis of ideas.
Inequality is widening, the rich accumulate incomprehensible wealth and
stand disconnected from the rest of the world, while millions live in extreme
poverty in poor and developed countries. The climate crisis became a direct
existential threat for humanity, extreme political movements arise from evil
ideologies of the past, and the reality of complete global disaster is openly dis-
cussed by leaders of the world. The situation is unprecedented. In 2021, the
U.N. General Assembly high-level meeting for leaders of 193 countries opened
with an urgent call for the world to “wake up.” António Guterres, the Secretary-
General of the United Nations, opened the meeting observing, “We are on the
edge of an abyss – and moving in the wrong direction. I’m here to sound the
alarm. The world must wake up.” He warned that “we are facing the greatest
cascade of crises in our lifetime,” at a time when people “see billionaires joyrid-
ing to space while millions go hungry on Earth” (Guterres, 2021). The field of
education plays a crucial role in shaping how we react to these warnings. The
advancement of technology, and especially of AI, may help address these issues
in the contemporary curriculum and in the general aims of universities and col-
leges. It may also tempt us to leave these critical issues aside and let technology solve
our challenges, joining the techno-utopian narrative of a world that is optimally
managed by these advancements. The aim of this book is to explore some of the
key areas that were ignored or remain superficially investigated in the enthusiasm
for a technological revolution. It is important to underline here that this is not a
technical report on various forms of AI or on machine learning. The main aim is
to see how AI will impact and is now used in education and how current devel-
opments in this field will determine the future of education. It is an analysis of
some of the most influential variables that shape AI and an in-depth exploration
of how these recent developments in edtech impact the future of graduates,
universities, culture, societies, and our common futures. So this is not a book
written for or from the perspective of AI engineers; it is looking at the ideologi-
cal and technical roots of AI to understand the place of AI solutions in education,
especially on the impact on equity, the fair use of technology, the implications of
big data required for AI and impacts of surveillance, and the propensity of AI to
serve autocratic forces. Most importantly, it aims to provide ideas and warnings
to faculty and students who are exposed to data collection and various applica-
tions of AI in universities, and to serve as an open analysis for academics, policy
makers, and anyone interested in the complex field of learning and teaching in
higher education.
Probably a good way to describe why this book is necessary is by paraphras-
ing the title of a book written by James Hillman and Michael Ventura: we’ve had
“a hundred years” of edtech and the world is getting worse. Higher education is
also getting worse. We are part of a general failure on the moral level, and we can
see systems crumbling in very practical and real ways. We collectively
reached a point where the relevance of facts is disputed or entirely ignored and
ignorance is glorified and imposed with barbarian combativeness. As was the
case for the last one hundred years, America has again set a major trend for the rest of
the world. We are Americanised in subtle and complex forms, with implications
that are separately addressed in a subchapter of this book.
in contradiction with what we find in their glossy brochures and generous mis-
sion statements, is defined by the aim to secure profits, to stay competitive on
the market of commodified educational services, and to serve market demands for
“properly trained” workers. Universities are in an ongoing crisis of identity, being
pulled apart by managerial fads, anti-intellectualism, and contradictory demands
that undermine their function and ideals. In this context, edtech entrepreneurs
found billions in profits and took higher education even further away from
its meanings and foundational beliefs.
The book explores what stands behind the label of Artificial Intelligence (AI)
and how these roots may impact current applications in teaching and learn-
ing in universities. Advancements in digital technology, especially in the field of
AI, bring the promise of revolutionary solutions and assistance in mundane or
critically important tasks. Importantly, promises about AI open new pos-
sibilities for approaching the multifaceted crisis that impacts our lives and future.
AI is already shaping our lives in obvious or unseen ways: applications for jobs are
selected with the use of AI algorithms, which can change a career or someone’s
future; law enforcement agencies use AI for identification, profiling, and surveil-
lance; the law is applied as AI algorithms decide if a person is more inclined to
reoffend than others; armed forces in various countries are using AI for military
purposes, often on a very thin and unclear ethical line; teachers, schools, and uni-
versities use AI to predict and deter plagiarism, organise exams, set rankings, and
assign grades; mass media uses AI to decide what we like and when is the best
time to deliver that favoured content, and AI tells editors what type
of content we want to see, and this is all we can access. Succinctly, we can
say that our lives are significantly impacted by the use of AI, by algorithms and
technologies that aim and have the power to capture and shape what we know,
see, and how we can imagine the world. New technological solutions are used
to aggregate, synthesise, and apply complex forms of surveillance for data used or
sold by corporations, banks, insurance companies, law enforcement authorities,
investors, marketers, and others who can afford to play on this market.
It is important to have this in-depth analysis of developments, risks, and oppor-
tunities of AI in education, especially in universities, and investigate how this will
shape the future of education, civic culture, politics, and society. At the core of
this analysis is the interest in how AI is and can be used in education, in teach-
ing and learning, and how our imagination is enhanced or limited by edtech.
This is why we prefer to take in the following pages the perspective of users of
AI systems rather than that of the engineers who are developing them; research and
case studies are used from this perspective to see how and when these systems can enrich
education and when they can be a hindrance. This analysis starts from the idea that
AI is created by humans, which determines the way it works; it is also strongly
connected with the historical roots of the concept of intelligence. Inevitably, as
many studies reveal, the development and functioning of AI systems are laden
with values, preferences, biases, and limited perspectives that bring important
implications for AI applications in teaching and learning, and other academic
processes (such as plagiarism detection, surveillance, or learning analytics). This
analytical approach favours a human dimension that precedes and controls the
use of technology in open societies. It is a viewpoint that should guide regula-
tions in the field of AI developments and applications in education but remains
ignored by some of the most influential policy makers – as an example provided
by the OECD demonstrates.
To investigate these complex developments that involve economic, techno-
logical, social, and cultural dimensions, we need a multidisciplinary approach,
an ongoing intellectual effort to analyse with different lenses, from different per-
spectives, and using what information “jumps together” from various fields and
disciplines. The preferred approach for interpretation and discovery in topics at
the heart of this book integrates multidisciplinarity, which is building knowl-
edge from different disciplines that maintain their boundaries; interdisciplinarity,
which is using links between disciplines to create knowledge in a new and coher-
ent whole; and trans-disciplinarity, which integrates sciences and humanities for a
new context that transcends traditional boundaries of disciplines for a wider and
superior understanding (Choi & Pak, 2008, p. 359).
The use and evolution of edtech in higher education reveal probably best
how damaging the fragmentation of knowledge is, with a general adoption of
ahistorical theories, statements, and artificially limited interests and studies. The
fragmentation of knowledge is in itself a significant source of problems that
severely impacts the way we live and progress. This is why it is important to find
more than inter-, multi-, and trans-disciplinarity. The sociobiologist Edward O.
Wilson restored the importance of connecting principles of different disciplines
in his book on consilience, a term that was first used in 1840 by William Whewell
in The Philosophy of the Inductive Sciences. Consilience comes from two Latin
words: “con,” which can translate as “together,” and “siliens,” which means “jump-
ing.” Wilson adopts the definition of consilience as “a ‘jumping together’
of knowledge by the linking of facts and fact-based theory across disciplines to
create a common groundwork of explanation.” It is unified knowledge that inte-
grates and constructs a web of causal relations suitable to ultimately connect what
seem to stand as unrelated variables and phenomena. The analysis and findings of
this book use the approach suggested by consilience to understand various
developments and examples through the lens of the humanities and sciences, bring-
ing together explorations from fields such as sociology and semiotics, statistics
and hermeneutics. The general aim is to avoid what Wilson noted on the classical
approaches, which is that “the ongoing fragmentation of knowledge and resulting
chaos in philosophy are not reflections of the real world but artefacts of scholar-
ship” (Wilson, 1998, p. 84).
We are still learning how interconnected and interdependent our world is, and how
delicate the balance is that we can destroy through ignorance, hubris, and irre-
sponsibility. In 2021, comprehensive research was produced by the United Nations.
The report finds that climate catastrophes, pandemics, and various other crises are
much more connected than was previously considered and all have the same root
causes. In order to understand them and design appropriate solutions, it is crucial
to think in a more integrated way, allowing knowledge to “jump” and intercon-
nect freely and use this process with wisdom. This is how we can imagine and
build sustainable solutions for the future.
Re-storying the narratives and realities of higher education requires a clear under-
standing of what we dream and of what is shaping what we imagine. In this effort
we can see how and if AI can help universities and students to avoid a dystopian
future of continuous surveillance, control, and authoritarianism. This can help
to write a different future, based on an inspiring vision of an education that is
suitable to create a sustainable future with a complex and inspirational narrative.
Notes
1. Guterres, A. (2021). Secretary-general’s address to the general assembly. United Nations.
Retrieved September 22, 2021, from www.un.org/sg/en/node/259241
2. Watters, A. (2021). Teaching machines. The MIT Press.
3. Choi, B. C., & Pak, A. W. (2008). Multidisciplinarity, interdisciplinarity, and trans
disciplinarity in health research, services, education and policy: 3. Discipline, inter-
discipline distance, and selection of discipline. Clinical and Investigative Medicine, 31(1),
E41–E48. https://doi.org/10.25011/cim.v31i1.3140
4. Wilson, E. O. (1998). Consilience: The unity of knowledge. Knopf: Distributed by Ran-
dom House.
5. Sett, D., Hagenlocher, M., Cotti, D., Reith, J., Harb, M., Kreft, S., Kaiser, Zwick,
A., & Garschagen, M. (2021). Disaster risk, social protection and readiness for insurance solu-
tions. UNU-EHS, MCII, LMU, BMZ, GIZ.
SECTION I
Education, Artificial
Intelligence, and Ideology
1
THE IDEOLOGICAL ROOTS OF
INTELLIGENCE
neutral, scientific, and factual term leaves this concept susceptible to maintaining
blind spots that cause serious errors, with some of its implications explored in
this chapter. “Intelligence” has been used over time with different definitions, but one
certain view on what it is and how we can look at it shaped most theoretical
frameworks in this field. A paper published in 2007 brings together over 70 dif-
ferent definitions of intelligence (Legg & Hutter, 2007), which reflect
how economic, political, ideological, or cultural positions determine the way we
decide to see it. It also shows that these definitions are built on a few common ele-
ments. These definitions evolved from the eugenic approach adopted by Galton
and Binet, and claim that human intelligence is a measurable attribute. In fact, a
peculiar trick marks this concept: intelligence is defined by what tests can capture
and what is relevant for schooling, employment, or the entity conducting these
tests (e.g. the military). This specificity leads to a selection of certain psychological
features that are used to certify that one is highly intelligent, or less. This selection
actually leads to situations where some attributes, such as creativity, are marginal
in deciding the IQ level.
This concept is so malleable, so open to various forms and reasons for manip-
ulation, that some experts in the field decided that the best solution for this
extremely complex field is to avoid using this value-laden concept. For example,
Arthur Jensen, a prominent expert on intelligence research, went as far as
making the case for removing the term “intelligence” from all scientific analysis,
including psychology. He notes that the use of the concept of intelligence
It is now too late to take the advice of removing intelligence, and we can assume
that it was never a realistic solution for the confusion surrounding this term. It is
important to see how it became such an attractive term that fires the imagina-
tions of individuals, various groups, and governments that are funding almost
anything with an AI promise. We have to analyse succinctly how this concept
is relevant for AI, and the use of AI systems in education. This is not aiming
to be an all-embracing history of intelligence or a comprehensive analysis of its
evolution.
The main reason to look at the modern enquiries on human intelligence is
that the history of this concept stands directly linked with the evolution and
function of AI systems, and is vastly influencing the way advanced technologies
are designed and impact our lives. This relationship determined the AI imagi-
naries, projects for the future of AI, and promises of current solutions. In fact,
AI and intelligence are both shaped by a common history, have the same ideological
sources, and often serve common political interests. Leaving this common history
obscured may be comfortable and convenient for an industry with extraordinary
investments at stake, but it replaces the most relevant key for understanding AI with
magical thinking and slippery slogans. It is surprising in this sense to see that
many books and research papers on AI simply ignore the history of intelligence
and its reflection in AI systems, even when these are explicitly situated at the core
of our possible futures.
How and when can one say that someone is intelligent or not? Since the
19th century, the ability to define intelligence was ultimately a way to achieve
the power to determine destinies, the future life of people and groups of peo-
ple, and even influence the distribution of economic resources. As Jensen clearly
suggested, there is no generally accepted definition of intelligence, but its ety-
mology offers some important reference points. “Intelligence” comes from
the Latin “intellegere,” which can be translated as the capacity to understand, to
comprehend. The use of this concept goes back many centuries, to ancient
China, and we find it mentioned by Homer in the Odyssey and by Plato in the
Republic. The meaning covered is close to a gift from the Gods that is nurtured by
individuals with the love of learning and seeking of truth, to access virtue. The
most important point is that the history of humanity is closely aligned with ways
to understand intelligence. This is a significant component of this concept, which
is overlooked and remains obscured in our uses and understandings of AI: AI is
always much more than a series of algorithms used by computing systems. AI is
implicitly (or sometimes clearly) defining what it means to be human and what
we can accept to be defined as humanity, and what is “artificial,” the instrumental
part that is not relevant in our understanding of humanity.
In the 19th century, Sir Francis Galton, a British mathematician, opened a
new way to look at intelligence, as a quantitative concept that can be meas-
ured. He pioneered the use of tests to assess intelligence and used statistics to
scientifically measure it. The Cambridge Handbook of Intelligence notes that Sir
Francis Galton (1883) is “one of the earliest researchers on human intelligence,”
but strangely ignores what was at the core of Galton’s view on intelligence, the
concept of eugenics (Sternberg, 2020, p. 314). In fact, Francis Galton created
the term “eugenics,” which fundamentally influenced his studies and ideas about
human intelligence. Galton explored the reaction time and other physical and
sensory abilities of some English noblemen. The sensorimotor tests developed
by Galton are not predictive or relevant for scholastic or significant cognitive
performances and remain largely irrelevant for science. The most influential part
of his work is based on the idea of linking the concept of social class to what can be
considered the inception of scientific inquiries into intelligence.
The concept of eugenics is rooted in the Greek words of “eu” (‘well’) and
“genos” (‘birth’); this can be translated as “well born.” The idea of eugenics is
basically to improve the human race by breeding the elites and restricting the
inferior members of society from reproducing. Galton and his followers found that
intelligence and other noble qualities are hereditary. The inferior groups that
have to be controlled and eliminated are – since Galton – other races than whites,
“criminals and semi-criminals,” the poor and the unsuccessful. As Francis Galton
notes in the “Essays in Eugenics,” this new science is “the science which deals with
all influences that improve the inborn qualities of a race” (Galton, 1909, p. 35).
In his famous and influential book, he presents with clarity some basic ideas for
his new science, writing that “the average intellectual standard of the negro race
is some two grades below our own” or noting that “the Australian type is at least
one grade below the African negro” (Galton, 2012). Intelligence is part of what
he named “natural inheritance,” and height, hygiene, and external appearance
are all related to intelligence and have to be explored for a proper classifica-
tion required for “the possible improvement of the human race and nations”
(Galton, 1901, p. 17). Galton not only sets the direction for the study of human
intelligence, but he is a representative thinker for the conviction that all can be
quantified and measured. In The Mismeasure of Man, an excellent book for those
who try to understand the roots of what we call today “intelligence,” Stephen Jay
Gould notes that Galton was so convinced that everything can be quantified and
measured that he “even proposed and began to carry out a statistical inquiry into
the efficacy of prayer” (Gould, 1996, p. 107). In other writings, Galton indicates
how we can measure boredom; this interest in quantification also influenced his
views on what human intelligence is and how we can measure it.
This marks the birth of the pseudo-science of eugenics, the precursor of the
Nazi programs of “racial hygiene,” a perverted euphemism used in the 20th cen-
tury to justify and organise the atrocities of the Holocaust. These toxic roots stand
as the most influential source for most studies on intelligence, where eugenics still
plays an explicit or a subsumed role. It is important to consider how these theo-
ries influenced the history of the 20th century and what we imagine when we
talk about what humanity is when some of Galton’s central ideas revolve around
approaches such as this: “it would be an economy and a great benefit to the
country if all habitual criminals were resolutely segregated under merciful surveil-
lance and peremptorily denied opportunities for producing offspring” (Galton,
1909, p. 20). Decades later, Hitler noted that he studied “with great interest” the
eugenic studies provided by Francis Galton; this is the moment when the idea
that an ideology can let one decide who is part of the intelligent elite and what
groups are formed by “habitual criminals” or “semi-criminals” achieved the cover
of science.
There is no doubt that Francis Galton did not invent racism, “scientific rac-
ism” or social injustice; these ideas were also widely shared and popular among the
British aristocracy. His unique contribution is that he built a scientific argument
for racism and inequity. These scientific foundations for statistics, psychology, and
psychometrics determined the way intelligence was investigated and also how it
is currently understood. Scientific studies on intelligence are closely intertwined
with eugenics from the inception of psychometrics, the framework for what can be
considered part of human intelligence.
Another important source for this field is provided by Karl Pearson. He is
generally presented as the founding father of statistics. Pearson was a zealous fol-
lower of Galton, an extreme eugenist interested in the applications of statistics
to human genetics and intelligence. One of Pearson’s extensive studies on racial
differences in intelligence stands as a relevant example of his views. Completed
with one of his colleagues at the University College London (UCL), an extensive
study on Jewish children and their parents reached this conclusion:
In “The Grammar of Science,” an influential book published for the first time in
February 1892, Pearson justified widespread killings and genocide against First
Nations in America with a dehumanised rationality:
At the same time, Ronald Aylmer Fisher, a scholar who also made crucial
contributions to statistical theory, and who left us the idea of applying
statistical procedures to the design of scientific experiments, also found
inspiration in Galton’s ideas. He was the founder of the Cambridge University
Eugenics Society and later joined Pearson in his work at UCL. In 1933
Ronald Fisher became Galton Professor of Eugenics at the University College
London. Here is where their collaboration marks important steps for the progress
of statistics, psychology, and measurements of intelligence.
Aubrey Clayton observes in his book, “Bernoulli’s Fallacy,” that “Pearson and
Fisher were both incredibly influential and are jointly responsible for many of the
tools and techniques that are still used today – most notably, significance testing,
the backbone of modern statistical inference.” Their fundamental contribution to
statistics and advanced mathematics is universally accepted. If we want to under-
stand the ideological structure of intelligence and the role this concept plays in
AI, it is important to look at the fact that the birth of this concept is based on
scientific endeavours aiming to
You have made a convert of an opponent in one sense, for I have always
maintained that, excepting fools, men do not differ much in intellect,
only in zeal and hard work; and I still think this is an eminently important
difference.
(Galton, 1908, p. 290)
This important difference was mostly irrelevant for the American adopters of Gal-
ton’s approach, such as Henry H. Goddard, who used in 1908 the Binet-Simon
test scales in the United States. The study and understanding of intelligence in
the United States are intertwined with the history and principles of eugenics from
the very beginning. In fact, eugenics found extreme forms of application for the
first time in America, not in Europe or Germany. The first involuntary sterilisa-
tion law in the world, passed by the State of Indiana in 1907, and later adopted
by 29 other American states, is just one example in this sense. California was the
third state to adopt the law of forced sterilisations, which was expanded in 1913
to add among groups defined as dangerous for the racial hygiene of the nation
all people with a “mental disease, which may have been inherited and is likely
to be transmitted to descendants.” In just a few decades, the United States forcibly
sterilised approximately 60,000 persons (Reilly, 2015), and its eugenic laws
were unchanged “on the books for nearly 70 years” (Lombardo, 2011, p. 99).
There are well-documented studies and evidence on the strong relation between
American and German eugenists. One of the most relevant books on this topic
is James Whitman’s Hitler’s American Model, an extraordinary analysis based on the
evidence available in the archives. Whitman notes that “in Mein Kampf Hitler
praised America as nothing less than ‘the one state’ that had made progress toward
the creation of a healthy racist order of the kind the Nuremberg Laws were
intended to establish” (Whitman, 2017, p. 2). He also underlines that Hitler and
the Nazi party were not much interested in segregation, but in the shared com-
mitment to white supremacy and the American eugenic solutions (Whitman,
2017, pp. 4–7). Adam Cohen observes in his book, Imbeciles, that
[T]he United States in the 1920s was caught up in a mania: the drive to use
newly discovered scientifc laws of heredity to perfect humanity. Modern
eugenics, which had emerged in England among followers of Charles Dar-
win, had crossed the Atlantic and become a full-fledged intellectual craze.
(Cohen, 2016, p. 3)
[W]hen it came to the law of mongrelisation the Nazis were not ready to
import American law wholesale. This is not, however, because they found
American law too enlightened or egalitarian. The painful paradox, as we
shall see, is that Nazi lawyers, even radical ones, found American law on
mongrelisation too harsh to be embraced by the Third Reich. From the
Nazi point of view this was a domain in which American race law simply
went too far for Germany to follow.
(Whitman, 2017, p. 80)
The idea that the “feebleminded” should be stopped from reproducing and that the
progress of a nation and community can be achieved if solutions are found to
keep “the unintelligent” under control was widely adopted by American
politicians, industrialists, and academics, at least for the first part of the 20th cen-
tury. In the United States, Harvard, Columbia, Stanford, and hundreds of other
universities were teaching eugenics. The academic backing of these theories was
already considerable when Francis Galton left his substantial fortune, including his
personal collection and archive, to the University College London. This donation
allowed UCL to establish the Professorial Chair of Eugenics and a department
of eugenics, which determined the evolution of modern statistics and psychol-
ogy. This department later enticed Fisher to join UCL and led to the creation of
the first department of mathematical statistics in the world. The impact of these
scholars is extraordinarily significant not only to understand the evolution of
theories on intelligence, but also to unravel the roots of AI, a construct based on
advanced mathematics and a certain view on intelligence that still permeates cur-
rent computer applications.
The adherence to eugenics, racist principles, and genocidal applications was
largely ignored after the 1940s, or it was rebuilt with a different jargon that usually
hides the original intentions of this theory in defining and measuring “intel-
ligence.” It is evident that eugenics was not eliminated, and its epistemological
foundations for definitions, measurements, and applications of intelligence still
stand prominent. UCL decided to change the names of buildings and learning
spaces that were directly linked with this racist past only in June 2020. The new
name for the UCL Galton lecture theatre is “Lecture theatre 115.” The fact
that this space was renamed with a depersonalised, bureaucratic label, and not
the name of a less controversial scientist associated in the past with the university,
probably deserves more attention. Only a few years before the decision to rename
buildings honouring the name of eugenists it became public that “race-scientists”
and neo-Nazis held the eugenist “London Conference on Intelligence” at UCL
for the previous four years.
the tramp who lives from hour to hour; the bohemian whose engagements
are from day to day; the bachelor who builds but for a single life; the father
who acts for another generation; the patriot who thinks for a whole com-
munity and many generations; and finally, the philosopher and saint whose
cares are for humanity and for eternity.
(James, 1983, p. 107)
John Dewey found that individuals with “distinctive intellectual superiority” are
the leaders, while those with “lesser capacities for intelligent action” could only
be the followers (Gonzalez & Gonzalez, 1979). In fact, the simple translation
of this theory is that upper classes are more intelligent than lower socioeconomic
classes. This taxonomy is based on the idea that lower social classes have a low
level of intelligence while upper ranks have superior levels of intelligence and
social status.
The current definition of intelligence was definitively shaped by one of the
longest-running scientific studies ever conducted, the Terman Life-Cycle Study.
This long-running, complex research project profoundly influenced the way we see
how intelligence is defined and used today by computer engineers and edtech.
This research project also introduced the concept of IQ into modern science.
In 1921, Lewis M. Terman, professor of psychology at Stanford University, initi-
ated the study, and its sample comprised 1,528 children (11 years old, on average),
all with IQs of 135 or above – placing them in the top 1% of the population at
the time. The Sage Encyclopaedia of Educational Research, Measurement, and
Evaluation presents the Terman Study of the Gifted, which was initially
titled Genetic Studies of Genius, as “one of the most famous longitudinal stud-
ies in the history of psychology” (Frey, 2018). Its participants (labelled in time
[I]t is more important for man to acquire control over his biological evolu-
tion than to capture the energy of the atom – and it will probably be far
easier. The ordinary social and political issues which engross mankind are of
trivial importance in comparison with the issues which relate to eugenics.
(Marks, 1974, p. 351)
Terman was also an active member of eugenic societies and advocated for the
forced sterilisation of those labelled as “feeble-minded” in American society.
There is a vastly disproportionate representation of the very poor and minorities
that shows how social class impacted the use of “intelligence,” from the incep-
tion of its scientific explorations and definition. Recent studies reveal how much
Terman was influenced in his prominent work by eugenics: “While champi-
oning the intelligent, he pushed for the forced sterilization of thousands of ‘fee-
bleminded’ Americans. Later in life, Terman backed away from eugenics, but he
never publicly recanted his beliefs” (Leslie, 2000).
It is interesting to note that Lewis M. Terman left us the concept of IQ and
changed not only psychology but also our understanding of what intelligence is,
and his son, Frederick (Fred) Terman, definitively influenced the birth and future
of Silicon Valley. Fred Terman mentored two of his graduate students, William
Hewlett and David Packard, and guided them to create their own company,
which became known as Hewlett-Packard. Moreover, Terman managed to attract
William B. Shockley to Palo Alto and developed a close collaboration with him, help-
ing Shockley to find some brilliant scientists for his work on semiconductors.
Shockley, a winner of the Nobel Prize in Physics in 1956, is considered to
be the “founding father” of Silicon Valley. He was not only a brilliant scien-
tist in physics but also a white supremacist and an active promoter of eugenics.
Sometimes his ideas went as far as causing an emotional response and public pro-
tests. For example, in an interview published on 22 November 1965 by the U.S.
News & World Report magazine, Shockley expressed some of his views on
race and intelligence. His racism was so strident there that the Faculty of Genet-
ics at Stanford University issued an open letter of protest. It was signed by
some of the most reputable scientists in the field, such as Joshua Lederberg, the
winner of the Nobel Prize in medicine in 1958. The letter calls Shockley’s
ideas about genetics and race a “pseudo-scientific justification for class and race
prejudice.”
Shockley reacted to this letter and pushed the argument further, speaking on
17 October 1966 at a meeting of the National Academy of Sciences, at Duke
University. Joel Shurkin presents this moment in his biographical book, Broken
Genius: The Rise and Fall of William Shockley, Creator of the Electronic Age:
the birth notice of Silicon Valley. What happened to the eight is not a
digression in the story of Bill Shockley. It is the key to understanding the
rest of his life. They became known in the mythology of the valley as the
“Traitorous Eight.”
(p. 181)
This step would later lead to the emergence of companies that shaped Silicon Valley,
such as Microchip Technology, Intel, AMD, and others. In the following chapters
we will explore further the fascinating intertwining of engineering, the development
of the most advanced technologies, and extreme ideologies. Here we underline suc-
cinctly that Silicon Valley, the birthplace of AI, was from its beginning directly
influenced by a certain view on intelligence: elitist, inherited, and determined by
racial categories.
Since Galton and Terman, intelligence has proved to be one of the most power-
ful and dangerous concepts for humanity. It is disconcerting to see how much
the eugenic definition of intelligence determined not only research, but also
public policies, national and state laws, civil rights, and immigration principles.
For example, David Dorado Romo describes in the extremely well-documented
“Ringside Seat to a Revolution: An Underground History of El Paso and Juárez:
1893–1923” how the fear of alien infection was used by the eugenicists to create
sympathy for their movement and translate their ideas into laws and practice. This is
a well-grounded source for the harrowing fact that the infamous Zyklon B was used
for the first time in history against humans in the United States, in the early 1920s.
This extremely toxic gas was used in different concentrations in “chambers” built
to disinfect aliens at the Mexican border with the United States, in El Paso. David
D. Romo notes:
Years later, these solutions drew the attention of Nazis in Germany, and the same
toxic gas was used in a higher concentration in one of the most hideous projects
in the history of humanity. Soon after his nefarious study was published, Ger-
hard Peters became a managing director of the chemical company Degesch (The
Deutsche Gesellschaft für Schädlingsbekämpfung), an affiliate of the conglom-
erate I. G. Farben, which supplied Zyklon B to the Nazi death camps. This is
where the toxic product was used in a higher concentration to kill innocent men,
women and children who were declared inferior or dangerous for the health of
the nation. It is one of the most horrifying examples of how eugenics can dehu-
manise and lead to atrocities.
The U.S. Immigration Law of 1917, a result of the eugenist movement, was passed
at the same time as the Manual for the Physical Inspection of Aliens was
published by the United States Public Health Service. Here we find the list of
undesirable people, elaborated by what Romo describes as the most prominent
medical scientists, progressive politicians, and eugenists in America. This is a
list that reveals in itself the absurd and arbitrary nature of this theory; the list
includes “imbeciles, idiots, feeble-minded persons, persons of constitutional
psychopathic-inferiority [homosexuals], vagrants, physical defectives, chronic
alcoholics, polygamists, anarchists, persons afflicted with loathsome or danger-
ous contagious diseases, prostitutes, contract laborers, all aliens over 16 years old
who cannot read” (Romo, 2005, p. 229). Records show that the U.S. Public
Health Service agents “cleaned” 127,173 Mexicans that year at the bridge
linking the city of Juárez to the American city of El Paso. The newcomers
were affected by the extremely toxic effects of delousing and fumigations with
kerosene, gasoline, or Zyklon B. In 1917, the revolt against these inhuman treatments
started when Carmelita Torres resisted the abuse of migrants at the border;
it remains in history as the infamous Bath Riots. In fact, the fear that
Mexican newcomers would get very sick came true when the “Spanish flu,” which
originated in Haskell County, Kansas, infected a large number of Mexicans
then living in El Paso.
The Immigration Restriction Act of 1924 was a definitive success for eugen-
ists, and became an extremely influential document for immigration laws adopted
by countries such as Australia, New Zealand, or Canada. The adoption of the
1924 act was praised by Adolf Hitler in his Mein Kampf, where he notes in
laudatory terms that the United States was making an effort to impose reason on
its immigration policies by “simply excluding certain races from naturalization”
(Lombardo, 2002). These details and the connection with eugenics became
blurred and mostly lost after the 1940s, but the law remained unchanged for the
40 years following its adoption.
There is no doubt that the unconscionable tragedy of the Holocaust is linked
with the pseudo-science of eugenics and its solutions of controlling the existence
of those labelled as people with an inferior intelligence. The step to include other
features next to intelligence – such as race, social status, moral values, or political
preferences – was a natural and facile strategy for those who held the power to
evaluate these attributes and to assign their diagnostics. These ideological roots
of the concept of intelligence opened it to be reused in other ideological frame-
works revolving around the principle that society can be “cleaned” of that which is
unworthy, inferior, or dangerous for the general public or political elites. The
concept of intelligence was commonly used against dissidents in the extreme Left
and the extreme Right dictatorships, with rebels, protesters, or free thinkers sent
to mental health institutions, “re-education” camps or simply eliminated. Since
Terman coined the term and built the first version of IQ tests, the use of “intel-
ligence” has been the palpable result of a concept turned into an ideological weapon,
one most suitable to serve monopolies of social, economic, and political
power.
The concept of intelligence remains closely associated with eugenics in the 21st
century, with a strong influence on politics, technology, and what shapes public
life. It is tempting for computing and software engineers, AI firms, and program-
mers to separate the ideological determinants of a certain view on intelligence
and leave aside the history of people who created its tools and measures. Of
course, hiding these roots cannot reduce the powerful influence of this original
sin of “intelligence” and its inhumanity and entrenched racism. The same temp-
tation can be observed in the works of some notable psychologists. To take just
two examples, we can read how Robert Sternberg presents Francis Galton as
a forefather of the testing movement in intelligence. He observes that “the critical
thing to note about Galton is that he was the first to study intelligence in anything
that reasonably could be labeled a scientific way” (Sternberg, 2020, p. 32). Inex-
plicably, Galton’s views on “race improvements,” eugenics, and intelligence as a key
to understanding the improvement of the human race are not mentioned by Sternberg
in his long, comprehensive, and often-cited handbook.
In another extensive study, specifically devoted to the concept of intelligence
and AI, AI and Developing Human Intelligence: Future Learning and Educational
Innovation, John Senior and Éva Gyarmathy devote an entire chapter to pre-
sent “A brief history of intelligence – artificial and otherwise.” Inexplicably, the
association of eugenics with intelligence studies and the importance of this theory
for education and psychology are not even succinctly mentioned (Senior & Gyar-
mathy, 2021). These unexplainable blind spots are significant; a fact explored
by semiotics is that a missing sign can be a sign with meanings that are relevant
and decisive for a narrative. We can take the real example of the Summit for
Democracy, where US President Biden invited in December 2021 all mem-
ber countries of the European Union except Hungary. This is in itself the story of a
member country with an authoritarian, undemocratic system that is not aligned
with EU values. If one looks at the flags of participating countries, the missing
flag is significant for the narrative of a decline of democracy in a part of Europe.
The absence of these defining stages and of the epistemological evolution of intelligence
stands as a sign of indifference to toxic roots, or indicates a naive approach to and
understanding of AI.
In June 2021, the New York Times published an article about the initiative of
the state of Arizona to use “Holocaust gas” in executions, noting that
Arizona has refurbished and tested a gas chamber and purchased chemicals
used to make hydrogen cyanide, a recent report said, drawing a backlash
over its possible use on death row inmates. Headlines noting that the chem-
icals could form the same poison found in Zyklon B, a lethal gas used by
the Nazis, provoked fresh outrage, including among Auschwitz survivors
in Germany and Israel.
(Hauser, 2021)
The last time Arizona used a gas chamber for a similar execution was in 1999,
causing international protests and stupefaction. Ignorance of the history or sym-
bolic significance of these methods in 1999, and again in 2021, cannot be seri-
ously considered. Nevertheless, it is a story that should make us all cautious about the
very real risk of ignoring the horrors of the past and ideas that still have the power
to poison present and future solutions, even in open and democratic societies.
The constant omission of eugenics in studies presenting the history of the
research on intelligence, as well as efforts to impose atemporality and decontex-
tualisation of this concept, reveals that AI is placed firmly in an intellectual and
ideological minefield. Intentional or involuntary efforts to ignore the genesis of
intelligence do not change the fact that AI is tributary to histories that shaped this
concept across the 20th century.
It is simply impossible to separate the emergence of both AI and machine
learning, from eugenics. One reason is clearly presented by Wendy Hui Kyong
Chun and Alex Barnett in their book “Discriminating data: correlation, neigh-
borhoods, and the new politics of recognition,” which reveals how 20th-
century eugenics shapes 21st-century AI, machine learning, and data analytics:
least a century before the advent of big data. Although methods for linking
two variables preceded his work, Francis Galton is widely celebrated for
‘discovering’ correlation and linear regression, which he first called “linear
reversion.”
(Chun & Barnett, 2021, p. 5934)
These roots determine the framework for the development of AI and the use of
advanced technologies, which are glorified and sanitised by Big Tech in an
unprecedented avalanche of propaganda, marketing, and narrative constructions.
The ideological foundations of intelligence still shape AI. One key element for
the ideological function of this concept is that intelligence is presented as measurable in a fully
objective, precise, and value-neutral way. The collection of data stands as central as it
was for Francis Galton, Karl Pearson, and other eugenists, who stated in the very
early stages of research on intelligence that the future of eugenics is determined
by the capacity to collect and analyse “national statistics.” The idea of “predictive
patterns” is as familiar in the eugenic context as it is in the context of AI sys-
tems and machine learning solutions; the common ideological source for AI and
eugenics makes this identical use natural, discreet, and effortless.
The current understanding of intelligence is shaped by the fact that since its
birth, the science of intelligence and IQ tests were associated with social classes and
new tools built to measure and justify the need to protect elites, and to breed their
noble qualities. Since Galton’s pioneering studies, intelligence has remained socially
effective in finding, controlling, and eradicating what stands associated with those living
in poverty: low levels of intelligence, mental vulnerabilities, various dysfunctions,
immoral existence, and filth. The ideological determinant of intelligence cannot
be eliminated in any serious analysis of this concept and its role in the rise of
AI. We have to start by admitting that intelligence and IQ scores were from the
very beginning a symbolic certificate for the upper social class, a scientific proof
of superiority and special rights. Taxonomies of intelligence, which have since
the first attempts been organised into superior and inferior intellects and social classes,
opened the road to evaluating and associating intelligence with a given score and also
stand as a source of standardised tests in education.
Most common narratives of AI systems or machine learning share the
presumption that data is “just” data, an atemporal and value-neutral fact, shaped
only by cold, exact, and quasi-relevant evidence and representations of reality. We
analyse more extensively in a following chapter how deceiving, distorting, and
naive it is to accept that big data is unbiased, neutral, and completely relevant. Here it is
important to underline that data cannot be entirely objective, and biases corrupt
the process of data collection, especially when very large volumes of data require
a selection. Any selection is shaped consciously or not by the adopted values of
those who choose what data is going to be considered, by individual preferences,
ideological positions, assumptions, and personal bias. Importantly, AI cannot be
reduced to an atemporal, ahistorical system in any serious consideration of this
and cold” button for the Pentagon and its generous funding. Von Foerster noted:
“I talked with these people again and again, and I said, ‘Look, you misunderstand
the term [of AI].’ They said, ‘No, no, no. We know exactly what we are funding
here. It’s intelligence!’.” Soon, he notes, “I was told everywhere, ‘Look, Heinz, as
long as you are not interested in intelligence we can’t fund you’” (Conway &
Siegelman, 2005, p. 321). The American military complex poured money into
including AI in our futures and into the capacity to maintain continuous positive
publicity for AI. Securing a central role in the public imagination, AI developed
since its inception within a context of unrealistic expectations, absurd or fool-
ishly inflated promises, and a remarkable resistance to disappointments. Failure of
an AI system was immediately blurred by new narratives about AI potential and
possibilities. Yarden Katz describes the effort of opening the necessary critical
perspectives on AI as “crawling through a sewer. Reading the avalanche of ‘AI’
propaganda is a demoralizing experience” (Katz, 2020, p. 9).
It is demoralising as one cannot ignore how any legitimate call to analyse
the implications of AI’s centrality in our lives is smothered by an endless noise of
mediocre marketing, silly expectations, and shrewd marketing campaigns. Maybe
the most important fact that helps the effort to see beyond this “avalanche” of
nonsense is to remember that the concept of AI does not describe a cer-
tain system or a specific technological solution of advanced computing; AI is an
ideological invention that covers various technologies in advanced computing,
sometimes in an incoherent manner.
In fact, even now the effort to agree on one common definition that clearly
delimits what exactly AI is still reaches an impasse. For example, at the end of
2021, various countries within the European Union found it extremely difficult during
official talks to define AI in order to finalise the first attempt in history to enact an AI Act:
Notes
1. Cioran, E. M. (2012). A short history of decay. Arcade Publishing.
2. Legg, S., & Hutter, M. (2007). A collection of defnitions of intelligence. Frontiers in
Artifcial Intelligence and Applications, 157, 17–24. arXiv:0706.3639 [cs.AI]
3. Jensen, A. R. (1998). The g factor: The science of mental ability. Praeger.
4. Sternberg, R. (Ed.). (2020). The Cambridge handbook of intelligence (2nd ed., Cambridge
Handbooks in Psychology). Cambridge University Press. doi:10.1017/9781108770422
5. Galton, F. (1909). Essays in eugenics. The Eugenics Education Society.
6. Galton, F. (2012). Hereditary genius: An inquiry into its laws and consequences. Barnes &
Noble.
7. Galton, F. (1901, October 29). The second Huxley lecture of the anthropological institute,
included in the essays in eugenics. The Eugenics Education Society.
8. Gould, S. J. (1996). The mismeasure of man (Rev. and expanded. ed.). W. W. Norton &
Company.
9. Galton, F. (1909). Essays in eugenics. The Eugenics Education Society.
10. Delzell, D. A., & Poliak, C. D. (2013). Karl Pearson and eugenics: Personal opin-
ions and scientific rigor. Science and Engineering Ethics, 19(3), 1057–1070. https://doi.
org/10.1007/s11948-012-9415-2
11. Pearson, K. (1911). The grammar of science (3rd ed.). Adam and Charles Black.
12. Clayton, A. (2021). Bernoulli’s fallacy: Statistical illogic and the crisis of modern science.
Columbia University Press.
13. Kevles, D. J. (1986). In the name of eugenics: Genetics and the uses of human heredity. Uni-
versity of California Press.
14. Kühl, S. (1994). The Nazi connection: Eugenics, American racism, and German national
socialism. Oxford University Press.
15. Galton, F. (1908). Memories of my life. Methuen & Co.
16. Reilly, P. R. (2015). Eugenics and involuntary sterilization: 1907–2015. Annual Review
of Genomics and Human Genetics, 16, 351–368. https://doi.org/10.1146/annurev-
genom-090314-024930
17. Lombardo, P. A. (2011). A century of eugenics in America: From the Indiana experiment to
the human genome era. Indiana University Press.
18. Whitman, J. Q. (2017). Hitler’s American model. The United States and the making of Nazi
race law. Princeton University Press.
19. Cohen, A. (2016). Imbeciles. The supreme court, American eugenics, and the sterilization of
Carrie Buck. Penguin Press.
20. The Annals of Human Genetics. https://onlinelibrary.wiley.com/page/journal/
14691809/homepage/productinformation.html
21. James, W. (1983). The principles of psychology. Harvard University Press.
22. Gonzalez, G., & Gonzalez, G. (1979). The historical development of the concept of
intelligence. Review of Radical Political Economics, 11(2), 44–54. https://doi.org/10.1177/
048661347901100204
23. Frey, B. B. (2018). The Sage encyclopedia of educational research, measurement, and evalua-
tion. Sage Reference. https://doi.org/10.4135/9781506326139
24. Kell, H., & Wai, J. (2018). Terman study of the gifted. In B. Frey (Ed.), The Sage ency-
clopedia of educational research, measurement, and evaluation (Vol. 1, pp. 1665–1667). Sage
Publications, Inc. www.doi.org/10.4135/9781506326139.n691
25. Kevles, D. J. (1985). In the name of eugenics: Genetics and the uses of human heredity.
Knopf.
26. Marks, R. (1974). Lewis M. Terman: Individual differences and the construction of
social reality. Educational Theory, 24(4), 336–355. https://doi.org/10.1111/j.1741-5446.1974.tb00652.x
27. Leslie, M. (2000, July/August). The vexing legacy of Lewis Terman. Stanford Maga-
zine. https://stanfordmag.org/contents/the-vexing-legacy-of-lewis-terman
28. Shurkin, J. N. (2006). Broken genius: The rise and fall of William Shockley, creator of the
electronic age. Palgrave Macmillan.
29. Romo, D. D. (2005). Ringside seat to a revolution: An underground cultural history of El Paso
and Juárez, 1893–1923. Cinco Puntos Press.
30. Lombardo, P. A. (2002). “The American breed”: Nazi eugenics and the origins of the
Pioneer Fund. Albany Law Review, 65(3), 743–830.
31. Sternberg, R. J. (2020). The Cambridge handbook of intelligence. Cambridge University
Press.
32. Senior, J., & Gyarmathy, E. (2021). AI and developing human intelligence. Future learning
and educational innovation. Routledge.
33. Hauser, C. (2021, June 2). Outrage greets report of Arizona plan to use “holocaust
gas” in executions. New York Times. www.nytimes.com/2021/06/02/us/arizona-
zyklon-b-gas-chamber.html
34. Chun, W. H. K., & Barnett, A. (2021). Discriminating data: Correlation, neighborhoods,
and the new politics of recognition. The MIT Press.
35. McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A proposal for
the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI
Magazine, 27(4), 12. https://doi.org/10.1609/aimag.v27i4.1904
36. McCorduck, P. (2004). Machines who think: A personal inquiry into the history and prospects
of artificial intelligence. A.K. Peters.
37. Ashby, W. R., Shannon, C. E., & McCarthy, J. (1956). Automata studies. Princeton
University Press.
38. McCarthy, J. (1987). Generality in artificial intelligence. Communications of the ACM,
30(12), 1030–1035. https://doi.org/10.1145/33447.33448
39. Katz, Y. (2020). Artificial whiteness: Politics and ideology in artificial intelligence. Columbia
University Press.
40. Conway, F., & Siegelman, J. (2005). Dark hero of the information age: In search of Norbert
Wiener, the father of cybernetics. Basic Books.
41. Heikkila, M. (2021, October 20). Politico AI: Decoded: AI goes to school – What EU
capitals think of the AI Act – Facebook’s content moderation headache. Politico. www.
politico.eu/newsletter/ai-decoded/ai-goes-to-school-what-eu-capitals-think-of-
the-ai-act-facebooks-content-moderation-headache-2/
42. Stokel-Walker, C. (2021, November 25). AI has learned to read the time on an ana-
logue clock. New Scientist. www.newscientist.com/article/2298773-ai-has-learned-to-
read-the-time-on-an-analogue-clock/
2
IMAGINATIONS, EDUCATION, AND
THE AMERICAN DREAM
In the final report of the United States National Security Commission on Artificial
Intelligence, which was published in 2021, we find that
This may be inspiring and promising, but it is not clear. The main premise of such
an important document, written for an unprecedented military force, detailed in
16 chapters and over 700 pages, is that AI is not one thing, but "a field of fields."
This is not epistemological dishonesty, as we can't seriously say that only a specific
computing system is AI. However, it leaves open the question of what exactly
AI is. A working definition is included in the report, which defines AI as "the
ability of a computer system to solve problems and to perform tasks that have
traditionally required human intelligence to solve" (NSCAI, 2021, p. 602). This
explanation leaves the entire weight of the effort to clarify what AI is on the
hotly debated and complex field of human intelligence. We have already seen
in the previous pages how questionable and contentious it is to state precisely
what "human intelligence" is and how we can measure it. It seems that the intention
of this definition is to compare AI with the set of attributes measured by IQ
tests. If this is the case, then we have to admit that we place a flawed epistemological
framework at the foundation of a very important field of technology. Human
intelligence is much more than what can be measured by tools inspired by some
eugenic theories. Moreover, when we look at AI and try to understand how
such an obscure and hollow concept became so popular, universally attached to
anything – from vacuum cleaners to rockets, telephones and guided missiles, education
and medicine – we have to remember that "intelligence" is there because
it makes the AI label attractive and desirable. This concept points to an
enticing, human-like attribute of superior intellectual performance. AI was born
as a hollow concept, as Yarden Katz details in his extraordinary book on Artificial
Whiteness that
Some of the most respected engineers in this field warn us that deep learning can
work only if neural networks are continuously adjusted and improved, tweaked
and adapted. In this process, results run ahead of theoretical understanding and
we just trust the system to work. In the history of humanity, technology was
always associated with progress. History also gives us key lessons that show
when technology helps rather than destroys us. When technology is idolised,
we simply take the road to disaster. This is just one set of reasons to argue that human
oversight of AI is a possible solution only for a certain set of circumstances, but
never sufficient for important decisions, which must be taken only by humans.
It may be clear at this point that we need more than a technical paragraph and
academic jargon to understand what AI is. Especially when we think about the
immensely complex field of education, we have to understand not simply a short
definition of AI, but see how this technological construct works, and what
its use involves for students, teachers, and institutions adopting it. A good start
is provided by Ivana Bartoletti, a specialist in AI, who offers a more sensible and
insightful definition in her wise book, An Artificial Revolution:
In other words, AI is the generic name for computing systems that are able to
engage in human-like processes such as learning, adapting, synthesising, self-correcting,
and using data for complex processing tasks. However, it is essential
to keep in mind the key feature underlined by Bartoletti, which is that AI is
determined entirely by what humans decide to feed these systems, what
data they select in fields they decide are representative. This is a subjective
endeavour. AI does what humans ask and enable the computing systems to perform,
based on the amount of data we collect and the structure of the algorithms
used – which are also created and determined by humans.
In The Black Box Society, a seminal book written by Frank Pasquale, we find
why it is important to understand how the "knowledge problem" and secrecy
are intentionally cultivated in our lives by "our increasingly enigmatic technologies"
and their masters, with great implications for everyone's life. He notes that
"Secrecy is approaching critical mass, and we are in the dark about crucial decisions.
Greater openness is imperative" (Pasquale, 2015, p. 4). AI is developing
fast and the public is not invited to look at the extraordinarily complex "black
box" of algorithms fuelling AI systems, or allowed to understand its workings. The
most important corporations in new technologies, such as Google, Facebook
(Meta), Microsoft, Amazon, or Apple, refuse to provide access to their algorithms
and – as Frank Pasquale reflected – we can find in general what information goes
into these systems, and we can easily access what results from aggregating this data,
but we don't know what happens in the middle of this process, in the "black
box" of technocracy. In general, with the exception of a very limited number of
engineers, we have no idea how data is aggregated, manipulated, and analysed by
algorithms. The rapid advancement of AI, and its adoption in some of the most
personal areas of our lives, makes the need for transparency on algorithms and
data practices even more acute; this is why we see in Europe an acceleration of
legislative initiatives aiming to regulate AI and limit the possibilities of misuse. It is a
commendable direction that needs to be seriously analysed in this book to find
out whether legislative regulations represent a sufficient measure against abuse and
wrongful applications of AI.
There are already a significant number of excellent books and studies on AI
that detail what AI is, and some are used in this analysis. The basic element we
need to remember is that AI is a sum of algorithms that work with data, the fuel
of the AI engine. Algorithms are mathematical rules or steps involved in solving
a problem, and there are fundamentally two ways that algorithms are designed
by engineers in AI: explicit (using direct computations on data that describes
known quantities) or implicit (which is applied in machine learning, where the
algorithm is derived from present and future data).
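As a minimal illustration of this distinction – a hypothetical sketch, not an example drawn from any of the works discussed here – an explicit algorithm encodes a rule the engineer already knows, while an implicit one derives its rule from whatever data it happens to be given:

```python
# Hypothetical sketch of the explicit/implicit distinction.
# Explicit: the rule is written by the engineer and never changes.
def fahrenheit_explicit(celsius):
    return celsius * 9 / 5 + 32

# Implicit: the "rule" is estimated from data, so it is only as good as
# the examples we choose to collect and feed to the fitting procedure.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return lambda x: slope * x + intercept

# If the training data were biased or noisy, the learned rule would inherit it.
celsius_samples = [0, 10, 20, 30, 40]
fahrenheit_samples = [32, 50, 68, 86, 104]
fahrenheit_implicit = fit_line(celsius_samples, fahrenheit_samples)

print(fahrenheit_explicit(25))   # 77.0, by construction
print(fahrenheit_implicit(25))   # ~77.0, but only because the data was clean
```

The contrast matters because the second function is only as trustworthy as the samples fed into it, which is precisely where the discussion of data that follows becomes decisive.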
It is important to remember that AI is only as good as its data, and the
collection, labelling, and entry of data determine the quality of the expected results.
One of the most surprising illusions cultivated by media, academics, and centres
of narrative influence is that data is just factual, clean, objective, unprejudiced,
and neutral; data is glorified as the miraculous fuel of the most advanced scientific
projects. The narrative of the almighty data permeates academic culture
and practice; it is one of the most frequent invitations I have heard in my work at
university. Big data and data analytics are, in these contexts, the magical key to
understanding all that one may want to understand: student interests and student
engagement in learning, student needs and optimal teaching approaches, learning
challenges and students' futures. It all looks good and innocuous until one
examines more closely how all this happens. For example, only very rarely are students
informed about all the data collected about them. Hardly ever are they asked
to express their consent to these practices or to refuse data collection. Data collected
on the basis of arbitrary criteria is often used to reach conclusions about
students' interest in learning, based on silly variables such as the number of clicks
recorded or the amount of time spent logged in to the learning management system
(LMS). If this data is going to be used in AI solutions in higher education, we
can expect to see significant errors and damaging results.
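To make the concern concrete, here is a purely hypothetical sketch – the variables, weights, and threshold are invented for illustration and do not come from any real LMS or vendor – of how such a naive "engagement" proxy might be computed and then used to flag students:

```python
# Hypothetical sketch: an "engagement score" built from the kind of
# arbitrary LMS variables criticised above. Features, weights, and the
# threshold are all invented for illustration only.
def engagement_score(clicks, minutes_logged_in):
    # Someone decided that clicks and minutes count as "engagement",
    # and someone decided how to weight them. Both choices are arbitrary.
    return 0.6 * clicks + 0.4 * (minutes_logged_in / 10)

students = {
    "student_a": {"clicks": 320, "minutes_logged_in": 90},  # clicks a lot, learns little?
    "student_b": {"clicks": 40,  "minutes_logged_in": 25},  # reads offline, learns a lot?
}

for name, activity in students.items():
    score = engagement_score(**activity)
    # A cut-off like this may end up flagging the "wrong" students.
    flag = "at risk" if score < 50 else "engaged"
    print(name, round(score, 1), flag)
```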
Big Data is wonderful, we hear, because it offers us a God-like view on everything: personal profiles,
past performances, possible results, and future preferences or accomplishments.
The promise associated with Big Data as a way to do big and wonderful things
is an intentionally false representation. The first reason is that we do
not have that clinical, virgin, neutral, and completely unbiased data to work with,
simply because such data is an impossible concept. Lisa Gitelman explains in the Introduction
of "Raw Data" Is an Oxymoron, a book opening different perspectives
on why the common phrase "raw data" leads to an impossible situation. It is
impossible – we find in the book – because we always have data that is "cooked"
by those who collect or select what is interpreted. Gitelman draws attention
to imaginations of data and how this makes it impossible to have data
that is entirely neutral:
Like events imagined and enunciated against the continuity of time, data
are imagined and enunciated against the seamlessness of phenomena. . . .
Every discipline and disciplinary institution has its own norms and stand-
ards for the imagination of data, just as every field has its accepted method-
ologies and its evolved structures of practice.
(Gitelman, 2013, p. 3)
Someone decides what data is collected, and this limits what we collect,
regardless of quantity. No matter how "big" the data is, it is impossible to imagine
a situation where we have all data on a phenomenon, all of it objective and
unrelated to what we think counts as data, what we decide to collect as data, and what
we decide to include in the "big data" database. In reality, the story of Big Data
is much more problematic than it looks in the common narratives shared by media
and some academics self-appointed as experts in edtech. A study published in
Nature, investigating whether data-driven clinical decisions are biased and benefit
some populations differently, reaches the conclusion that "there are negative repercussions
of disregarding class performance disparities in the context of skewed data
distributions – a challenge still largely neglected but impacting many areas of AI
research" and that these models should "systematically undergo fairness assessments
to break the vicious cycle of inadvertently perpetuating the systemic biases
found in society and healthcare under the mantle of AI" (Röösli et al., 2022).
In other words, databases and AI models discriminate against some populations
and, although not addressed as a matter of validity, clinical decisions are distorted
by AI solutions to serve preferred groups. A study published by Harvard Busi-
ness Review in 2017 reveals that only 3% of companies’ data meets basic quality
standards, and concludes:
These results should scare all managers everywhere. Even if you don't care
about data per se, you still must do your work effectively and efficiently.
Bad data is a lens into bad work, and our results provide powerful evidence
that most data is bad. Unless you have strong evidence to the contrary,
managers must conclude that bad data is adversely affecting their work.
(Nagle et al., 2017)
“Big Data” is a much more complex story than this label suggests, and contains
also some equally big problems with its use. Students and the general public are
not informed on how much low-quality data that is collected on inadequate cri-
teria even though these models often shape decisions in education and the future
of students and graduates.
Big Data on learning is also associated with a problem that goes beyond
the theoretical possibility of reaching a highly accurate and complete database for our
AI models. Learning is contextual, influenced by cultural, psychological, physical,
environmental, developmental, and sociological variables (to name just a few). It
is emotionally charged; quantity and the predictability of sequences in learning
do not reflect much of the intensity or impact of a learning segment. To collect
all data on a phenomenon as complex as human learning is virtually impossible.
Even if we try – as an exercise of imagination – to start from the premise that this
can be possible, we have to keep in mind that just one random event can change
the weight and influence of key data, and bring new variables that entirely change
everything. There is no doubt that we can find patterns and create profiles and pathways,
but if we lose perspective on the limits of this process, we risk losing relevance
and dehumanising the entire process. Ivana Bartoletti notes in her in-depth
analysis of data violence that "data has a huge flaw, a flaw that is widely ignored
or wilfully disguised: data is not neutral. It is inherently political. What to collect
and what to disregard is an act of political choice" (Bartoletti, 2020, p. 16).
The limitations of data and the nature of algorithms also determine the areas where
AI can excel and will advance fast in the next decades, as well as the limitations that
make its use misleading or damaging. AI works optimally in areas where it is
possible to create algorithms aligned with clear sequencing and measurable, structured
patterns. AI works very well in fields such as medicine, where we have
clear DNA patterns and applications where large volumes of data can optimally
identify models and new possible applications; in learning and teaching, we have
a very different field. There are political choices that limit and decide what data is
collected, how it is collected, who can have access to data, and how it is interpreted;
these limitations stand as crucial elements that should be open for scrutiny and
properly investigated if universities want to have agency over their project of higher
education. Universities – as we will see in some examples – show an inexplicable
detachment and indifference towards some extraordinarily important solutions
purchased from educational technology (edtech) vendors. Plagiarism deterrence
software, online solutions found under the term LMS, surveillance, and many
other edtech applications are not considered in light of data concerns. These
applications come with serious implications for students' learning and futures, but
are simply ignored. If this looks like an unfair or exaggerated claim, I suggest a
simple experiment: consider yourself a prospective student and search online for how
the university you might choose openly presents details on data management,
collection, and aggregation, and on the uses of data by third parties, the corporate entities
that can access and use data collected from students.
Definitions and understandings of AI are largely determined by the ideological
and emotional position of those who describe it. AI is more of a set of claims and
disparate software and hardware solutions. For enthusiastic computing engineers,
passionate about coding and all the possibilities opened by public interest in this
field, AI is approached as a magic power, and mastering it is reserved for an elite
of a few creators and initiates. We also have the group of investors, naturally motivated
to inflate the promise of AI and maximise their profits on the stock market,
and not interested in engaging in conceptual delimitations. We can look here at
just one example, Mark Cuban, a billionaire active in tech investments who is
frequently quoted by magazines and newspapers comfortable with promoting
the idea that wealth is synonymous with wisdom. This tech tycoon noted in 2017
at the Upfront Summit that "Artificial Intelligence, deep learning, machine
learning . . . whatever you're doing if you don't understand it . . . learn it!
Because otherwise you're going to be a dinosaur within 3 years." Over five years
later, we can see that many investors in AI still don't understand it, and it is ridiculous to
look at them as "dinosaurs." Here we find a common theme associated with AI
in public spaces: most of those who talk about AI, including in higher education,
start from the assumption that they clearly know what AI, deep learning, and
machine learning are, and that those who don't will face extinction. The idea that all successful
investors on the stock market know what AI is should be taken with great
reserve, as the most prominent engineers in this field note how much is unknown
about the way it functions and delivers results. For example, in an article published
in Science in 2018, we find the enquiry of an AI expert, Ali Rahimi, a researcher in
AI at Google, who suggests that we have reached the point where AI is taking the
form of alchemy:
"Researchers do not know why some algorithms work and others don't,
nor do they have rigorous criteria for choosing one AI architecture over
another. . . . I'm trying to draw a distinction between a machine learning
system that's a black box and an entire field that's become a black box."
Without deep understanding of the basic tools needed to build and train
new algorithms, he says, researchers creating AIs resort to hearsay, like
medieval alchemists.
(Hutson, 2018)
Top experts working for tech giants reveal recurrent difficulties in explaining AI's
reproducibility problem or its "interpretability" – the complicated efforts and
impasses in explaining how a particular AI system or machine learning model has
come to its conclusions. The same article from Science quotes François Chollet,
a computer scientist at Google: "People gravitate around cargo-cult practices,"
relying on "folklore and magic spells."
There is much more than "folklore and magic spells" that needs to be scrutinised
to understand how the ideological foundations of AI structure a specific
approach to education and its futures. Theological motifs present in the
public discourse on AI have stronger roots than the public can see, and they structure far
more than a simple enthusiasm for technology or motivations limited to profits
and greed. What we have in AI is not a new alchemy, but a new cult. Wolfram
Klingler, founder and CEO of two technology firms based in Switzerland, noted
in an article he authored that Silicon Valley is at a stage where it is institutionalising
its religious beliefs:
The Silicon Six – all billionaires, all Americans – who care more about
boosting their share price than about protecting democracy. This is ideo-
logical imperialism – six unelected individuals in Silicon Valley imposing
their vision on the rest of the world, unaccountable to any government and
acting like they’re above the reach of law.
Academia used to keep alive, with pride, the space of ideas and debates about the challenges
and risks facing society, culture, and our common future; it was never
perfect, but the ambition was there. The serious warnings – delivered publicly
by an actor and activist with international exposure – failed to attract the attention
of universities, academics, and the vast administrative structures in higher
education. When some edtech start-ups came up with the story of Massive Open
Online Courses (MOOCs), academics and administrators became extremely, absurdly
enthusiastic about the impossible promises of tech-solutionism. Strangely, a proven,
obvious, and active threat to learning and the advancement of knowledge, to
democracy and civil society, was and still is largely ignored. Universities are more
focused on the new shiny trick that can balance profits and "secure markets." The
following chapters provide a closer look at the lost lessons of MOOCs in higher
education; this story reveals the fervour of faith associated with edtech in higher
education.
There is a larger group than "the Silicon Six" sharing the power and faith in
Silicon Valley. Douglas Rushkoff presented a few clear characteristics of this cult,
writing in Medium that
[T]here is a Silicon Valley religion, and it’s one that doesn’t particularly
care for people – at least not in our present form. Technologists may pre-
tend to be led by a utilitarian, computational logic devoid of superstition,
but make no mistake: There is a prophetic belief system embedded in the
technologies and business plans coming out of Google, Uber, Facebook,
and Amazon, among others. It is a techno-utopian and deeply anti-human
sensibility.
(Rushkoff, 2018)
The ideology has racist connotations – in short, Black, Brown and margin-
alised people are blamed for overpopulation and consequently the environ-
ment’s demise. The idea’s origins can be traced to an essay by the English
18th-century economist Thomas Robert Malthus entitled “The Principle
of Population,” which lays the foundation for eugenics in the arena of cli-
mate change.
(Mohamed, 2021)
It is true that Silicon Valley does not make these beliefs as visible and direct as a
British royal, but it is relatively easy for anyone with some time and curiosity to
see them expressed in the writings of the founding fathers of AI. For example, we can
look at other interesting developments at M.I.T. that are relevant for the context
in which Minsky introduced AI to the Pentagon – and to the world. A team of
experts working at M.I.T. published at that time "The Limits to Growth," the
first "report for the Club of Rome's project on the predicament of mankind."
In this report we find the eugenic call for "population control," explicitly
aimed at addressing rapid population growth and finding ways to have fewer people.
A conclusion of this report sheds new light on the favoured model created
by the group of experts at the Massachusetts Institute of Technology:
some considered the model too "technocratic," observing that it did not include
critical social factors, such as the effects of the adoption of different value systems.
The chairman of the Moscow meeting summed up this point when he said,
Technological utopianism has greatly evolved since the 1970s, but some parts of that first
report for the Club of Rome have a surprising fulfilment today. Every part of our
lives is now "too technocratic," and social factors are addressed with the application
of new technologies. To take just one example, we can look at the story of
what is now known in Australia as the RoboDebt scandal, the name given
to the automated process of matching data on welfare recipients with their total
income data. This "intelligent" process of identifying overpayments triggered an
avalanche of debt notices, with the most vulnerable people wrongly targeted by a
flawed algorithm. Soon, the AI system of control was sending 20,000 letters a
week, targeting welfare recipients identified with debt. The system was wrong,
and there are reports of people committing suicide after being targeted by the AI
solution. A class action was launched against these decisions and the courts found the
entire RoboDebt system unlawful; consequently, 373,000 people were to
receive refunds, with $112 million awarded in compensation and $398 million in debts
cancelled. There is no doubt that "robodebt" was a social, cultural,
and civic disaster; it probably went further than what "The Limits to Growth"
suggested in the early 1970s.
For the last decade we have seen AI move to the heart of hopes for new solutions
to make a better world. It is a story entrenched in what Evgeny Morozov
called "solutionism," which is "recasting all complex social situations either as
neatly defined problems with definite, computable solutions or as transparent and
self-evident processes that can be easily optimized – if only the right algorithms
are in place!" (Morozov, 2013, p. 5). AI solutionism explicitly aims to
replace humans with something "super," most often with a "super brain." The
new AI narrative is not simply looking at the world as a mess that will be changed
and "solved" with technology. It is a promise to change the world and humanity;
all the mortal flaws, problems, and imperfections of humankind now have a solution
in the new, miraculous, clinically perfect and flawless formula distilled in the
algorithms of advanced AI. The point made at that meeting in Moscow is not
valid in the new context: man can be a biocybernetic device, and if "bio-" stands
against this project, we can remove it in the near future with the superior solution
of advanced AI.
Marvin Minsky was seduced by the eugenic ideas of population control, and
he publicly shared and supported these views; for example, his literary writings
present a mix of science fiction and manifesto for a world with eugenic principles
in place. We can take the example of "Alienable Rights," a story published by
Minsky in July 1993 in Discover Magazine, and publicly accessible on a website
maintained by M.I.T. It is a story where two aliens evaluate human life on Earth.
The opening scene presents the alien named Surveyor teaching the one called the
"Apprentice," who finds humans primitive and disappointing. He is dismayed to
see that "evolution on Earth is still mainly based on the competition of separate
genes."
“APPRENTICE: Their genetic systems can’t yet share their records of accomplish-
ments? How unbelievably primitive! I suppose that keeps them from being
concerned with time scales longer than individual lives.
SURVEYOR: We ought to consider fxing this – but perhaps they will do it
themselves. Some of their computer scientists are already simulating ‘genetic
algorithms’ that incorporate acquired characteristics. But speaking of evolu-
tion, I hope that you appreciate this unique opportunity. it was pure luck to
discover this planet now. We have long believed that all intelligent machines
evolved from biologicals, but we have never before observed the actual tran-
sition. Soon, these people will replace themselves by machines – or destroy
themselves entirely.
APPRENTICE: What a tragic waste that would be!
SURVEYOR: Not when you consider the alternative. All machine civilizations like
ours have learned to know and fear the exponential spread of uncontrolled
self-reproduction.” (Minsky, 199225)
Here we can see that Minsky, the father of AI, is also a high priest of the new
technocratic religion of Silicon Valley. It is the foundation of a theology that
brings together a posthuman ethos of authoritarianism with a revised plan for
eugenics, manifested in the belief that technology will improve the qualities
of humankind with optimal solutions of population control, the replacement of
human agency, and technological imperialism. It is the foundation of an exclusive
and elitist culture with vast colonising plans, where the adoration of technology
makes ecological, social, and cultural disasters quasi-irrelevant for the overall aims
of the new project. It is a fascist utopia. This ideological context places education
at the top of the fields that require an overall change. A "reformed" education
is the practical pathway for the change indicated decades ago by Marvin Minsky
in Silicon Valley.
The technocratic elite looks at humankind as messy, chaotic, ephemeral,
and too often unpredictable; it is a world in which what Surveyor describes makes sense.
We, the technical elite, seek some way of thinking that gives us an answer to
death. . . . The influential Silicon Valley institution preaches a story that goes
like this: One day in the not-so-distant future, the Internet will suddenly
coalesce into a superintelligent AI, infinitely smarter than any of us individually
and all of us combined; it will become alive in the blink of an eye,
and take over the world before humans even realize what's happening. . . .
All thoughts about consciousness, souls, and the like are bound up equally
in faith, which suggests something remarkable: What we are seeing is a new
religion, expressed through an engineering culture.
(Lanier, 2013, p. 193)
In fact, Marvin Minsky clearly presented the narrative of this new religion decades
ago. He published in Scientific American an article whose title, "Will
Robots Inherit the Earth?", is answered clearly and succinctly in the subtitle: "Yes,
as we engineer replacement bodies and brains using nanotechnology. We will
then live longer, possess greater wisdom and enjoy capabilities as yet unimagined"
(Minsky, 1994).
It is no coincidence that AI is based on these fascist views.
The history of cybernetics and of corporate groups in technology is intertwined
with the history of racist, anti-human, and eugenic projects, including solid collaborations
with the Nazis. In the book IBM and the Holocaust: The Strategic
Alliance between Nazi Germany and America's Most Powerful Corporation,
Edwin Black presents a well-researched and documented story of the alliance
between computing pioneers and one of the most abominable regimes in the history
of humanity, the Nazis. It describes how the precursor of the computer, the
IBM punch card and its card-sorting system, was used to organise the Holocaust.
With the knowledge of IBM's New York headquarters, Edwin Black notes, "IBM
Germany, using its own staff and equipment, designed, executed, and supplied
the indispensable technologic assistance Hitler's Third Reich needed to accomplish
what had never been done before – the automation of human destruction"
(Black, 2001).
A key tenet of the AI cult is that the human brain works as a computer,
and that the interchangeable nature of AI solutions and the human brain is beyond doubt.
We will soon have – these narratives inform us – technological solutions that
will make it possible to transfer a human brain to a machine or to insert a machine to
work as part of someone's brain. Humans can be "enhanced" with AI, which
works as a human brain without all the imperfections and conditions imposed
by the embodied condition of human life. In effect, the elite able to create intelligent
systems for computers look at themselves as demigods. We can see this in
the spectacular hubris exposed by the belief that some engineers specialised in
information-processing machines know exactly how a human brain works, and
how thinking, memories, emotions, creativity, imagination, and metaphors mix
and emerge from someone's mind. This is why we have so many – obviously useless
– apps to "fix" depression, relax, fix stress, enhance memory, and so on. This
view, along with what Lanier details in his book, reflects the basics of a certain
view of what education is and how it needs to be organised. It becomes clear
why we so often see education presented as a mechanical, industrial process,
where "skills" are gained by applying sequences of instruction and assessment.
Learning is a matter of "micro" learning, which can be optimally covered by
micro-credentials. The new project of learning looks like a puzzle that is fixed by
some optimal software. It still requires some operators, but soon all the data collected
on students' patterns, teaching models, and assessment sequences will be smartly
managed entirely by AI systems. No more teachers, no more waste and useless
explorations: like a 3D printer, education is on the way to working as a technological
process advanced enough to put together all the steps required to produce a graduate.
It may sound very appealing if one's only experience of how learning happens was as
a student; we have all been students, and it is very appealing to believe that we
know exactly how education works. The anti-democratic project to debase and
devalue experts and hard-earned expertise was all too visible in the COVID-19 pandemic.
If good education takes a relatively long time to show results, in a pandemic
it was quite clear what would happen if someone injected people with disinfectants
or exposed infected patients' bodies to UV light, as Trump suggested in
a White House public briefing in 2020. Despite life-threatening risks, a relatively
large number of people confused bizarre opinions with expert solutions, and displayed
in general a remarkable confidence in claiming that their musings are better
than what experts recommend.
It is evident at this point that we have to see how open higher education is
to this theological proposal and the proselytising passion surrounding what was also
called "the Californian ideology." Richard Barbrook and Andy Cameron coined
this formula and expressed their surprise at its developments:
computers sent wrong data and flight pilots blindly trusted the computers: tragedies
where the uncritical belief that machines cannot be mistaken led to the crash of
planes, with hundreds of innocent people killed. Carr, the New York Times
best-selling author of The Shallows, presents with vivid examples how automation
is changing both the task and us, the users. He warns that "Automation
remakes both work and worker. When people tackle a task with the aid of
computers, they often fall victim to a pair of cognitive ailments, automation complacency
and automation bias" (Carr, 2014, p. 67). "Automation complacency"
drives us towards a false sense of security and comfort, based on the belief that
computing systems can deal with tasks that should be completed by us. Too many
disasters reveal how important it is to remain concerned about automation bias
and complacency: aircraft crashed, ships ran aground, and many innocent people
lost their lives. It is important to realise that overconfidence in technology –
no matter how advanced it is and how aggressively it is presented as "superior to
humans" – should also be looked at from the perspective of disasters that did not
happen. Unthinkable tragedies were avoided when one operator decided that
computers might be wrong and technology is not always perfect. Such is the
case of Stanislav Petrov, "the man who saved the world."
His story should be more widely known, as it is a lesson on the complex relationship
of humans with technology. Stanislav Petrov was an officer in the Soviet
Air Defence Forces, and in the early morning of 26 September 1983 he was on
duty at Serpukhov-15, a secret command centre outside Moscow
that housed the Soviet Union's missile attack early warning system. A few hours into
his shift, Soviet Lt. Col. Stanislav Petrov had computer screens indicating that an
American intercontinental ballistic missile (ICBM) had been launched and was
about to hit the Soviet Union. One after another, the computers identified a total of
five missiles launched against the Soviet Union. This was happening at an especially
tense time in the Cold War, shortly after the Soviets had mistakenly shot down a Korean
Airlines commercial flight, killing 269 people, including an American congressman.
It was the time when Ronald Reagan labelled the Soviet Union "an evil empire."
David Hoffman reported this extraordinary event in an article published in 1999
by the Washington Post:
Petrov’s role was to evaluate the incoming data. At frst, the satellite reported
that one missile had been launched – then another, and another. Soon, the
system was “roaring” . . . Despite the electronic evidence, Petrov decided –
and advised the others – that the satellite alert was a false alarm, a call that
may have averted a nuclear holocaust.
(Hofman, 1999, p. A1932)
The world was lucky. A decision was made against the machine by an officer
with a mind educated to evaluate and carefully consider the information, even
when it came from the most advanced technology used by the military of his
time. It is an illusion to believe that advances in technology make a similar situation
impossible; we are more exposed than ever before and we have new
existential risks at stake.
In a world in crisis we have many reasons to emulate the type of education
that builds the capacity to constantly consider the possible limits of technology.
Unfortunately, as we will see in the following pages, the most influential
forces in education stand far from this position, adopting the Silicon Valley ideology
and mythology with little or no inquiry. The story of "the man who saved
the world" is also proof that total trust in technology is a dangerous comfort: the
Trojans projected their wishes onto the huge, coarsely built wooden horse and had
no curiosity to carefully examine and think about it before opening the gates to
take it in. It may be a coincidence that programmers call a "Trojan (horse)" the
tricky malware designed to look like commonly used, harmless software to gain
access to a computer to control, damage, or steal information. The power of this
metaphor was not missed in the programming world, but it was diluted to a shallow
reading, missing the importance of keeping an alert and critical mind. For
the engineers, the solution for a "Trojan" code is just more code: protecting
the system against malware with another piece of software. For the forgotten hero of the
Cold War, the solution was to think about whether America would really send only five missiles
in an attack against the Soviet Union, and his thinking saved the world; the machine
came close to obliterating us. There is no reason to be dramatic – beyond contemplating
what might have happened if a devout technophile had been in charge on that day of September
1983 at the secret military centre – as we simply have to consider the lesson
of these events. AI is part of human progress only if we continuously maintain
the aim to critique and understand how it works, who controls it, and what it
involves. Moreover, we have to find ways to avoid the deliberate tendency of Silicon
Valley to find solutions for problems that do not exist, especially when they
create real-life debacles and human misery.
In education, the technological temptation to replace a difficult and lengthy
process with a trifling technological solution is very attractive for teachers, and even
more for those who sell these solutions. The global edtech market was valued at around
US$85 billion in 2021, and it is estimated that it will reach more than US$200 billion
in 2027. At this moment, edtech venture capital firms are rapidly raising billions in
funding; in 2021, a Silicon Valley venture capital firm in the edtech market
named Owl Ventures secured US$1 billion in new funds. There are many similar
examples showing that there is a very high and understandable motivation to
secure a part of the edtech market. However, there are also strong ideological reasons
that make schools and universities very attractive for corporations and investors
focused on AI solutions.
It is not just media, technological corporations, or business groups cultivating
a naive view of data and of how AI actually works in education and society.
A significant example is provided by Andreas Schleicher, the Director of the
OECD Directorate for Education and Skills, who underlined in the Introduction
that the only problem with AI is that humans ruin it. The imperfect, flawed, and biased
human teachers impair the perfect AI systems.
It is clear that human beings are always susceptible to making mistakes and being
biased, or even prejudiced. The difference is that when a human bias (which
knowingly or unknowingly shapes the choices of engineers) moves into AI, the
effects are completely different: it can potentially affect a much larger group of
people and stay hidden for a long time in the "black box" of AI algorithms. AI
can also gain bias when new data is taken in for machine-learning processes, and
it can move fast to extreme positions of bias and prejudice (such as Tay, the
chatbot). There is the solution of improving algorithms, which is well covered
by research and is also full of traps; the argument here is that in some symbolic spaces
where important decisions are made – including education – AI should not be
considered a replacement for human decisions and presence. Sometimes we
may even go as far as leaving technology aside for some human, not necessarily
efficient, moments.
There are some intriguing details about what the Director of the
OECD Directorate for Education and Skills said in his opening essay in 2021.
First, OECD is an acronym standing for the Organisation for Economic Co-operation
and Development, an intergovernmental economic organisation
with 38 member countries. Why is an economic organisation acting like an
expert in education – not only in economics, but on all aspects of education?
How did we get here? Understanding this helps us see how AI will determine
education and learning futures, and what choices stand ahead. For this, it is
important to look back at some key moments in recent history and also to see how
the American perspective on the economy, society, and education influenced the
rest of the world.
Notes
1. NSCAI. (2021). Final report. The National Security Commission on Artificial Intelligence.
www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf
2. Katz, Y. (2020). Artificial whiteness: Politics and ideology in artificial intelligence. Columbia
University Press.
3. MIT Media Lab. (25 January 2016). Marvin Minsky, "father of artificial intelligence," dies at
88. https://news.mit.edu/2016/marvin-minsky-obituary-0125
4. Stonier, T. (1992). The evolution of machine intelligence. In Beyond information.
Springer. https://doi.org/10.1007/978-1-4471-1835-0_6
5. Butz, M. V. (2021). Towards strong AI. KI – Künstliche Intelligenz, 35(1), 91–101.
https://doi.org/10.1007/s13218-021-00705-x
6. Neisser, U., Boodoo, G., Bouchard Jr, T. J., Boykin, A. W., Brody, N., Ceci, S. J.,
Halpern, D. F., Loehlin, J. C., Perloff, R., Sternberg, R. J., & Urbina, S. (1996).
Intelligence: Knowns and unknowns. American Psychologist, 51(2), 77–101. https://
doi.org/10.1037/0003-066X.51.2.77
7. OECD. (2019). Artificial intelligence in society. OECD Publishing. https://dx.doi.
org/10.1787/eedfee77-en.
8. Sejnowski, T. J. (2018). The deep learning revolution. The MIT Press.
9. Bartoletti, I. (2020). An artificial revolution: On power, politics and AI. The Indigo Press.
10. Pasquale, F. (2015). The black box society. The secret algorithms that control money and infor-
mation. Harvard University Press.
11. Gitelman, L. (2013). “Raw data” is an oxymoron. The MIT Press.
12. Röösli, E., Bozkurt, S., & Hernandez-Boussard, T. (2022). Peeking into a black box,
the fairness and generalizability of a MIMIC-III benchmarking model. Scientific Data,
9(1), 24. https://doi.org/10.1038/s41597-021-01110-7
13. Nagle, T., Redman, T., & Sammon, D. (2017, September 11). Only 3% of companies’
data meets basic quality standards. Harvard Business Review. https://hbr.org/2017/09/
only-3-of-companies-data-meets-basic-quality-standards
14. The Upfront Summit 2017 was a prominent technology event for hundreds of top
American investors, representatives of startups, and corporate executives. The event
started on 31 January at the GRAMMY Museum in Los Angeles.
15. Hutson, M. (2018). Has artificial intelligence become alchemy? Science, 360(6388),
478. https://doi.org/10.1126/science.360.6388.478
16. Klingler, W. (2017). Silicon Valley’s radical machine cult. Vice. www.vice.com/en/
article/kz7jem/silicon-valley-digitalism-machine-religion-artificial-intelligence-
christianity-singularity-google-facebook-cult
17. Warofka, A. (2018, November 5). An independent assessment of the human rights
impact of Facebook in Myanmar. Meta. https://about.fb.com/news/2018/11/
myanmar-hria/
18. Cohen, S. B. (2019, November 21). Sacha Baron Cohen’s keynote address at ADL’s 2019
never is now summit on Anti-Semitism and hate. Remarks by Sacha Baron Cohen, Recipi-
ent of ADL’s International Leadership Award. www.adl.org/news/article/sacha-baron-
cohens-keynote-address-at-adls-2019-never-is-now-summit-on-anti-semitism
19. Rushkoff, D. (2018, December 12). The anti-human religion of Silicon Valley.
Medium. https://medium.com/team-human/the-anti-human-religion-of-silicon-
valley-ac37d5528683
20. Daunton, N. (2021, November 24). Why Prince William is wrong to blame habitat loss
on population growth in Africa. Euronews. www.euronews.com/green/2021/11/24/
why-prince-william-is-wrong-to-blame-habitat-loss-on-population-growth-in-africa
21. Mohamed, E. (2021, November 30). Experts critique Prince William’s ideas on Africa
population. AlJazeera. www.aljazeera.com/news/2021/11/30/experts-critique-
prince-williams-ideas-on-africa-population
22. Meadows, D. H., Meadows, D. L., Randers, J., & Behrens III, W. W. (1972). The limits
to growth; A report for the club of Rome’s project on the predicament of mankind. Universe
Books.
23. Passell, P., Roberts, M., & Ross, L. (1972, April 2). The limits to growth. The New
York Times.
24. Morozov, E. (2013). To save everything, click here: The folly of technological solutionism.
PublicAffairs.
25. Minsky, M. (1992). Alienable rights. The MIT Press. https://web.media.mit.
edu/~minsky/papers/Alienable%20Rights.html
26. Lanier, J. (2013). Who owns the future? Simon & Schuster.
27. Minsky, M. (1994). Will robots inherit the earth? Scientific American, 271(4), 108–113.
https://doi.org/10.1038/scientificamerican1094-108
28. Black, E. (2001). IBM and the Holocaust: The strategic alliance between Nazi Germany and
America’s most powerful corporation. Crown Publishers.
29. Barbrook, R., & Cameron, A. (1996). The Californian ideology. Science as Culture,
6(1), 44–72. https://doi.org/10.1080/09505439609526455
30. Carr, N. (2013, November). All can be lost: The risk of putting our knowledge in the
hands of machines. The Atlantic. www.theatlantic.com/magazine/archive/2013/11/
the-great-forgetting/309516/
31. Carr, N. G. (2014). The glass cage: Automation and us. W.W. Norton & Company.
32. Hoffman, D. (10 February 1999). I had a funny feeling in my gut. Washington Post
Foreign Service. www.washingtonpost.com/wp-srv/inatl/longterm/coldwar/shatter
021099b.htm
33. OECD. (2021). OECD digital education outlook 2021. https://doi.org/10.1787/589b283f-en
34. Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in
commercial gender classification. Proceedings of the 1st Conference on Fairness, Accountability
and Transparency, Proceedings of Machine Learning Research. https://proceedings.mlr.
press/v81/buolamwini18a.html
3
THE NARRATIVE CONSTRUCTION
OF AI
For example, in 2018 CNBC published an article titled "A.I. will be 'billions
of times' smarter than humans and man needs to merge with it, expert says"
(Kharpal, 2018); it ends with the note that "AI will turn us into 'superhuman
workers.'"
The power to crystallise new visions, to seduce our imagination, and to blur
the boundaries of real possibilities has been clear since AI was born. The story
of the birth of AI is retold in Dark Hero of the Information Age: In Search
of Norbert Wiener, the Father of Cybernetics, a book published in 2005 by Flo
Conway and Jim Siegelman. Those moments are narrated by Heinz von Foerster,
a reputable specialist in physics and one of the most influential pioneers of cybernetics.
He established the Biological Computer Laboratory at the
University of Illinois in the early 1960s and was actively looking to secure funding for research at the
same time as Marvin Minsky attracted the attention of the US military with
his new AI Lab at MIT. Work on AI soon found how generous funding
from the Pentagon can be, especially as AI became "the new buzzword of America's
scientific bureaucracy." Heinz von Foerster tried to clarify that both labs worked
in cybernetics, remembering: "I talked with these people again and again, and
I said, 'Look, you misunderstand the term,'" he said of AI.
They said, "No, no, no. We know exactly what we are funding here. It's
intelligence!" . . . At the University of Illinois we had a big program which
was not called artificial intelligence. It was partly called cybernetics, partly
cognitive studies, but I was told everywhere, "Look, Heinz, as long as you
are not interested in intelligence we can't fund you."
(Conway & Siegelman, 2005, p. 261)
There may be a funny side to this story, showing how the military were convinced
that someone had found a safe source of intelligence they could buy and use. It
is also concerning to see that the best-funded military force in the history of the
world had such a simplistic view of technology and of what computers can – and
cannot – do. There is also the aspect of dissonance that was at the core of von
Foerster's failed attempts to help donors understand some basic facts about "intelligence"
and computing systems. It is probably ironic that von Foerster managed
to open his Biological Computer Laboratory (BCL) at Illinois with funding from
the US Air Force. In fact, this common source of funding for research in new
technologies is not a coincidence, and we should remember that new technologies,
especially AI, were all born as military research and applications; this has an impact
on their nature and on how they shape our world and imaginations. As we will
see in the following chapter, these roots may explain the prevalence of authoritarian,
dystopian, or anti-human imaginaries within Silicon Valley and edtech.
After he moved from Austria to the United States, von Foerster was invited to
participate in a series of conferences sponsored by the Josiah Macy Foundation,
where cybernetics and information theory were being explored by scientists like
Norbert Wiener, Margaret Mead, John von Neumann, and Claude Shannon.
Here he engaged in what is possibly one of the most important debates of our
century, on what information is and how we can define it. Heinz von Foerster
was a profound thinker and a scientist, familiar with the philosophy of language;
he was not only inspired by the intellectual life of Vienna but was also related
to Wittgenstein, the most significant philosopher of language and meaning. This is
probably why von Foerster disagreed with Claude Shannon about what can be
defined as information. Conway and Siegelman note that he argued that
information, even in the technical sense, could not be severed from meaning
without horrendous consequences for human understanding. "I complained
about the use of the word 'information' in situations where there
was no information at all, where they were just passing on signals," von
Foerster remembered. "I wanted to call the whole of what they called
information theory signal theory, because information was not yet there.
There were 'beep beeps' but that was all, no information. The moment
one transforms that set of signals into other signals our brains can make an
understanding of, then information is born! It's not in the beeps."
(Conway & Siegelman, 2005, p. 262)
However, the solution for the "beep beeps" came from within the same group of scholars.
It was defined by one of von Foerster's colleagues, Claude Shannon,
who is considered "the father of information theory." His revolutionary
contributions in the field of cybernetics and information theory go beyond computing,
information theory, and cybernetics. It is not only that his ideas greatly
influenced our intellectual and social lives; they changed the world. Marking his
death, an anonymous obituary published in The Times on 12 March 2001 notes
that Claude Shannon was
a playful genius who invented the bit, separated the medium from the
message, and laid the foundations for all digital communications. . . . [He]
single-handedly laid down the general rules of modern information theory,
creating the mathematical foundations for a technical revolution. Without
his clarity of thought and sustained ability to work his way through intractable
problems, such advances as e-mail and the World Wide Web would
not have been possible.
(James, 2009)
In this succinct note, we see briefly suggested the extraordinary revolution brought
about by Shannon, who had the idea of solving the problem of the different meanings
that can determine how information is measured and read. Katherine Hayles
offers a succinct and clear explanation of Shannon's revolutionary idea, which
opened the door to the information society and to new philosophical currents such
as postmodernism. Hayles notes that Claude Shannon realised that the main
problem of a new information theory and technological advancement was to find
reliable forms of quantification: the problem was that a new theoretical framework
that
Shannon found the solution in separating the medium from the message, the text
from the context, the information from meaning; he found that information can be
evaluated and read in separation from meaning, within a determined system that
remains constant. This changed everything.
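As a brief gloss (my own illustration, not a formula quoted by Hayles), Shannon's measure quantifies information purely from the probabilities of the possible signals, with no reference whatsoever to what those signals mean:

```latex
% Shannon's measure of information: the entropy of a source X, in bits,
% depends only on the probabilities p_i of its possible signals.
\[
H(X) = -\sum_{i} p_i \log_2 p_i
\]
% A fair coin toss carries H = -(0.5\log_2 0.5 + 0.5\log_2 0.5) = 1 bit,
% whether the outcome settles a trivial bet or something momentous:
% the meaning of the signal plays no part in the calculation.
```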
Heinz von Foerster was completely against this new definition of information
as an entity separated from meaning, and against this form of decontextualisation.
He argued that when we separate information from meaning, we may solve a
technical problem, but we will inflict severe damage on human understanding, with
terrifying consequences for our idea of humanity. Decades later, Claude Shannon
noted that what he created at that time was a theory of communication rather
than a theory of information, admitting that he "thought that communication
is a matter of getting bits from here to here, whether they're part of the Bible or
just which way a coin is tossed." Shannon basically confirmed in the late 1980s that
von Foerster was right in his critique of the new theory of information. However,
by the 1980s there was already too little interest in these nuances, and cybernetics
already used the approach proposed by Claude Shannon.
There are significant implications to this radical reconsideration of the role
and place of contextual meanings, and to the unprecedented separation of information
from meaning, of text from context. This change opened new possibilities
for information technology, including the evolution of AI, but it remains a
subversive energy that alters human values, culture and social arrangements, our
views of the world, and our imaginations. Hayles notes that the impact of decontextualisation
was immediate and dramatic, and uses the real-life example of
genetic engineering. Humanity was for millennia defined within a coherent set of
relationships, keeping genetic sources and children in a continuum and a clear context.
In vitro fertilisation (IVF) techniques decontextualise human reproduction
in a process where eggs can be collected from a woman, frozen, transported
over great distances, and implanted in a different person who is unrelated and can be
completely unknown to the donor, and where the following stages leading to the birth
and development of another human being are situated in a new and possibly
entirely different culture, society, and economy. In this new contextual relationship,
the birth can be separated from the biological origins: "As the formerly
integral connection between the genetic text contained in an unfertilized egg and
its biological context in the mother is disrupted, traditional definitions of 'birth,'
'child,' and 'mother' all have to be re-examined" (Hayles, 1987, p. 27). Birth,
child, mother, and father can stand decontextualised from the original biological
context, in new personal, legal, and social definitions. This involves an immediate
and radical rethink of the laws governing human reproduction and parenthood, and a
new landscape of values and political discourse related to this process.
In this radical reconsideration of the relationship between information and context, text and meaning, knowledge and the need for coherence, we can start to see how Claude Shannon changed the world. It is a revolution that is not only associated with wonderful advances in science and technology; it also leads to extraordinarily toxic effects in areas such as education and medicine, psychology, and cultural production. These areas are less explored and, as with any other major reform, Shannon's revolution created dark spaces where seeds of destruction find ideal environments for malignant growth.
Higher education was dramatically changed by the moment of this schism between text and context. There are numerous examples where the solution is defined and relevant internally, within its own system, while the external context is ignored. We can take the example of massive open online courses (MOOCs), a case of ridiculous technosolutionism that engaged universities in a frantic competition to spend all possible resources on the new fad. The appeal of MOOCs was that these online courses "appealed to broader narratives such as 'education is broken' and the dominant Silicon Valley narrative" (Weller, 2015). More specifically, MOOCs were built on the Silicon Valley narrative that education is broken and only technology can fix it, and on generous ideas such as the claim
that these courses offer "free higher education for all." Prominent journals around the world competed to publish editorials on the new utopia prepared by edtech for universities. The Economist published in 2012 an op-ed which claimed not only that this is the future of higher education, but that this future is now open to all, "especially in poor countries" (The Economist, 2012). The New York Times published in the same year an editorial titled "The Year of the MOOC" (Pappano, 2012), while David Brooks and Thomas Friedman wrote enthusiastically about the MOOC revolution and the "tsunami" that would dramatically change (and fix) universities (Brooks, 2012; Friedman, 2013). Academics and university leaders reluctant to share the enthusiasm for the new panacea were marginalised or excluded; The Chronicle of Higher Education observed in September 2012 that "the University of Virginia board's decision to dismiss Teresa A. Sullivan as president in June illustrated the pressure on universities to strike MOOC deals quickly to keep up with peer institutions" (Azevedo, 2012). In Australia, the fad went a step further; at that time, a vice chancellor of a regional university presented the future of his university:

MOOCs merely confirm what we've known for years – that the most basic
currency of universities, information, is now more or less valueless, so universities
might as well give it away. . . . The freemium strategy is particularly
well suited to the developing world where small financial margins can be
combined with mass scale.
(Barber, 2013)
After years of gullible belief that MOOCs were the silver bullet for higher education, it became clear that these "open" (meaning free) courses have a much less important role in the life, and budgets, of universities. Most students taking MOOCs are already graduates, and the majority are not interested in enrolling in new courses (Perna et al., 2013).

The people who have taken up these opportunities are not the needy of
the world – noted Fiona M. Hollands of Columbia University's Teachers
College on the margins of extensive research and reports – also noting
that [MOOCs] are not democratizing education. They are making courses
widely available, but the wrong crowd is showing up.
(Hollands & Tirthali, 2014)
The idea that universities will create free courses that will be taken up at a "mass scale" by poor people living in the slums of Manila, the favelas of Brazil, or even poor neighbourhoods in cities like Detroit or Washington, DC, reveals much more than a disconnect from reality. These utopian expectations do not just show that many decision makers in education have no idea about the life of the poor; they reflect the adoption of what Claude Shannon proposed and how
and all students failed that exam. The author of that letter had a nervous breakdown and risked becoming one more possible drop-out, leaving their university studies for "personal reasons." I left that university before it became clear whether the student was able to overcome the impact of a clear and absurd injustice. This exemplifies again how fragmented and decontextualised academic reality and practice are. It also reveals the power of the machine: if the software shows a score, we believe it without a thought about the quality of the data used or the quality of the analysis within the machine.
Edtech companies such as ProctorU claim that the solution is a "human-centered proctoring policy," where a trained proctor works with the AI solution to "truly prevent cheating." This is not happening in real universities, where real people operate. In courses with huge numbers of students (very appealing for university budgets), there is no "working with the AI solution." Academics see that they have access to AI, which is presented as the perfect solution for short and fast answers, and this is how it will be used. In a field where workloads increase to absurd levels, it is fanciful to believe that universities will pay "experts" to work with AI and analyse in depth instances where software flags possible cheating. The reality is that a score is determined and two options are realistically adopted: students above the threshold fail, or the lecturer looks the other way, pretending that nothing happened. It is devastating for an innocent student to be wrongly accused of stealing (or of breaching academic integrity rules, if we use the academic jargon). It can be disastrous and irreparable for students living in conditions that are not optimal for the invasive surveillance of an AI system. Moreover, this use artificially adds pressure to an online exam.
There are multiple arguments for banning this practice in universities, and I subscribe unreservedly to at least one point, which is directly related to the relationship between text and context: the adoption of ubiquitous surveillance as an integral part of higher learning is poisonous for any educational project. If a university is genuinely interested in solutions against plagiarism, a serious look at the root causes is much more important than software. Looking at students as potential thieves, criminals who must be watched and frightened at all times, is an absurd approach for an educational endeavour. At the core of plagiarism deterrence we also find a lack of clarification of academic ideals and no interest in clarifying scholarly destinations for students. Academics and university administrators often fall into this trap and forget that plagiarism is also a testimony of poor teaching, or of a poor relationship between students and universities. Ultimately, surveillance and threatening approaches are just stupid: the students most at risk are those acting honestly. Those interested in plagiarising can simply use AI solutions themselves to paraphrase their essays to the point that no plagiarism detection software currently used by universities will be able to flag the cheating. After many decades, universities have lost the lesson that intelligent students interested in plagiarising are always a step ahead of their institutions; the solution is to build trust and explain why cheating ultimately works against the cheater, even when not caught.
There is medical treatment for mild, moderate, and severe depression, but there is a new problem: the Americanisation of psychiatry across the world. This is where the revolution of decontextualisation can be analysed in its corrosive and devastating impact. The impact of separating text from context on the approach to, and treatment of, mental health is surprisingly clear and worrying.
Psychologists and psychiatrists across the world classify and treat depression based on an American manual, the "Diagnostic and Statistical Manual of Mental Disorders," or DSM. This manual, in its different updated editions, lists psychiatric disorders and places them in clear taxonomies. These classifications are linked to a list of possible medication and therapy approaches for doctors in Berlin, Germany, or Vancouver, Canada; in Sydney, Australia, or in Gdansk, Poland, and so on. Regardless of where doctors, therapists, and patients are, the same manual is used, regardless of cultural and socio-cultural contexts or level of education. In October 2021, France24 published an interesting and rare analysis of the profound crisis of psychiatry in a medical system, focusing on the French case. Marie-José Durieux, a children's psychiatrist at a Paris hospital, explains in this article that this important field of medicine was once complex and flexible in France, with depth and original approaches, until the American model became dominant and erased all other perspectives:
that dream of a land in which life should be better and richer and fuller for
every man, with opportunity for each according to his ability or achieve-
ment. It is a difficult dream for the European upper classes to interpret
adequately, and too many of us ourselves have grown weary and mistrustful
of it. It is not a dream of motor cars and high wages merely, but a dream of
a social order in which each man and each woman shall be able to attain to
the fullest stature of which they are innately capable, and be recognized by
others for what they are, regardless of the fortuitous circumstances of birth
or position.
(Adams, 1931, p. 404)
It was from the beginning an attractive utopia, luring immigrants from all over the world to a land where the "common man" can achieve everything with hard work and ingenuity, regardless of religion, race, nationality, or wealth. Millions were lured by the narrative of a country where all opportunities are open to those who want a better life. It is an irresistible utopia, and Adams knew that his magic formula of the American Dream was not amenable to rational evaluation; he noted in the same book that
The American dream – the belief in the value of the common man, and the
hope of opening every avenue of opportunity to him – was not a logical
concept of thought. Like every great thought that has stirred and advanced
humanity, it was a religious emotion, a great act of faith, a courageous leap
into the dark unknown.
(Adams, 1931, p. 198)
I came to America because I heard the streets were paved with gold. When
I got here, I found out three things: first, the streets weren't paved with gold;
second, they weren't paved at all; and third, I was expected to pave them.
(Cited in Hoxhaj, 2015)
This is a great note to explain the experience of displacement and the gap between favourite narratives about a place and its reality.
I wasn’t as much impressed by my experiences on the East Coast of America as
I was moved by the state of decay from the poor, of intentional dehumanisation
and marginalisation of homeless and people living in extreme poverty there. I will
always remember the feeling that – beyond the ubiquitous and bizarre fascination
with Clinton’s afair with Monica Lewinsky – I was living in a place that was like
the ancient Rome, at the height of the Roman Empire. The feeling that people
living and working there cannot imagine something else to care about than the
imposing walls of the political centre of Washington, DC. Since then, spend-
ing there a hot and humid summer, a beautiful autumn, and an extremely cold
winter, I often remembered that feeling that America fnds normal to lead and
infuence the world. This is most often a well-intentioned impulse of Americans,
as the only possible model for humanity is the American model, with everything
that is associated with this concept. Exceptionalism is a keystone of the American
culture. The utopian metaphor of a “City upon the Hill,” as it was articulated by
John Winthrop on his 1630 sermon aboard the ship Arbella to the Massachusetts
Bay colonists, is part of the American Dream narrative. America was presented
by Winthrop as “a city upon a hill,” watched by the world as a guiding beacon
for a good future. Winthrop’s sermon planted the seeds to the widespread belief
in American imagination that the United States of America is God’s country
that is shining upon a hill. Far from being lost, the idea to “recapture,” “rebuild,”
or “fnd” the American Dream is an integral part of political and cultural dis-
course of presents America. The American Dream is a foundational myth that
will be always as powerful as it was when it was suggested in the 17th century and
the movies and the lives of their makers caricatures of American patterns
in general: the same emphasis on power, the same anxiety, the same busi-
ness values and gambling spirit, the same “escapes”-only more so. Here she
draws on the Lynds, on Erich Fromm, and on other writers and suggests
that Hollywood, in its fundamental attitudes, tends toward totalitarianism.
(Riesman, 1951, p. 591)
In a different review we find another important aspect of the debates around in-depth analyses of the "dream factory," whether Hollywood or the new one, Silicon Valley: "The picture of Hollywood which emerges from this book is a far from pretty one but certainly worth having. It is internally consistent, and the vigorous protests which have followed its publication have often naively confirmed its author's observations" (Linton, 1951, p. 270). Just as Hollywood's critics were received with furious protests, those taking a critical position on Americanisation found an equal counter-reaction. It is no surprise to find that Americanisation remains a contested term, counterbalanced with the point that globally adopted items and symbols specific to American culture – such as jeans, Mickey Mouse, McDonald's, and Hollywood movies – simply come to work very differently in their new environments. It is argued that the new cultural context makes everything different. It is a false explanation. First and foremost, Americanisation is much more than the simple consumption of a Big Mac or the watching of American cartoons. The adoption of all these items and values definitely changes the context of the adopters. Secondly, Americanisation was for many decades an intentional project. If we take only the example of the movie industry, we see that America has had a clear policy of expanding its cultural and axiological codes, and its economic influence, across the world since the 1920s. Since then, even when European countries tried to impose quotas on US movies and music as a practical way to protect their own cultural identities, American institutions and embassies pushed strongly against all forms of resistance. Education is a perfect example of the success of the project of Americanisation, bringing together scholars and mass media, economic mechanisms and institutions, subtle use of new technologies, and direct propagandistic solutions. The United States of America is still influencing the rest of the world, and any fad, cultural trend, or economic model widely popular in
America soon becomes a common occurrence across the world. The imperialist imagination is – at least for now – extraordinarily successful, and immensely damaging.
A comprehensive study of economic opportunities in the United States (Chetty et al., 2017) looked at the promise of the American Dream, exploring whether children will live a better life than their parents. Analysing the evolution of data since 1940, the authors of the study (from the US Census Bureau and Stanford University) found that income mobility had fallen sharply "primarily because of the growth in inequality." For example, access to the best universities in the United States is limited to a privileged minority: at 38 of the best US colleges, including five in the Ivy League, more students come from the top 1% of the income scale than from the entire bottom 60% (Chetty et al., 2020). The report on Harvard's class of 2021 concludes that "like in previous years, the surveyed members of Harvard's incoming class are largely white, straight, and wealthy" and only 58.8% of respondents said that they did not know of relatives who had attended their university (Bishai & Lee, 2018). More than one in six students at Harvard reported that one or both parents attended their university. The imbalance is painfully visible at a time when all celebrate the "massification of higher education," without saying which parts of the system opened for the less privileged.
The Californian ideology found a way to attach itself to the long and carefully cultivated American Dream. We have a Californian Dream, defined by technological utopianism and inspired by neoliberalism and a form of psychopathic individualism that emerged from the pseudo-philosophical writings of Ayn Rand. It is built on the old structure of eugenic theories, a libertarian model of maximum exploitation of anything that can be used as a resource, people included, and technological solutionism. The high priests of the Californian cult genuinely believe that we can ruthlessly exploit our environments, accelerate the climate crisis, and move to another planet or a space colony when life becomes impossible on Earth.
Having succinctly clarified some of the aspects that shape education, AI, and cultural context, we can see how an institution such as the OECD, specialised in economics, can practically shape the agenda for educational systems and universities around the world.
Notes
1. Jasanoff, S., & Kim, S.-H. (2015). Dreamscapes of modernity: Sociotechnical imaginaries and the fabrication of power. The University of Chicago Press.
2. Select Committee on Artificial Intelligence. (2018). AI in the UK: Ready, willing, and able? HL Paper 100, 2017–19. Authority of the House of Lords.
3. Kharpal, A. (2018). A.I. will be “billions of times” smarter than humans and man
needs to merge with it, expert says. CNBC. www.cnbc.com/2018/02/13/a-i-will-be-
billions-of-times-smarter-than-humans-man-and-machine-need-to-merge.html
4. Conway, F., & Siegelman, J. (2005). Dark hero of the information age: In search of Norbert
Wiener, the father of cybernetics. Basic Books.
5. James, I. (2009). Claude Elwood Shannon 30 April 1916–24 February 2001. Biographical Memoirs of Fellows of the Royal Society, 55, 257–265. https://doi.org/10.1098/rsbm.2009.0015
6. Hayles, N. K. (1987). Text out of context: Situating postmodernism within an infor-
mation society. Discourse, 9, 24–36.
7. Hayles, N. K. (1987). Text out of context: Situating postmodernism within an infor-
mation society. Discourse, 9, 24–36.
8. Weller, M. (2015). MOOCs and the Silicon Valley narrative. Journal of Interactive Media
in Education, 2015(1), Art. 5. http://doi.org/10.5334/jime.am
9. The Economist. (2012, December 22). Free education. Learning New Lessons. www.
economist.com/international/2012/12/22/learning-new-lessons
10. Pappano, L. (2012, November 2). The year of the MOOC. The New York Times.
www.nytimes.com/2012/11/04/education/edlife/massive-open-online-courses-are-
multiplying-at-a-rapid-pace.html
11. Brooks, D. (2012, May 3). The campus tsunami. The New York Times. www.nytimes.
com/2012/05/04/opinion/brooks-the-campus-tsunami.html
12. Friedman, T. L. (26 January 2013). Revolution hits the universities. The New York Times.
www.nytimes.com/2013/01/27/opinion/sunday/friedman-revolution-hits-the-
universities.html
13. Azevedo, A. (2012, September 26). In colleges’ rush to try MOOC’s, faculty are not
always in the conversation. The Chronicle of Higher Education. http://chronicle.com/
article/In-Colleges-Rush-to-Try/134692/
14. Barber, J. (2013, October 16). The end of university campus life. ABC Radio National
Australia. www.abc.net.au/radionational/programs/ockhamsrazor/5012262
15. Perna, L., Ruby, A., Boruch, R., Wang, N., Scull, J., Evans, C., & Ahmad, S. (2013).
The life cycle of a million MOOC users. The University of Pennsylvania Graduate
School of Education. www.gse.upenn.edu/pdf/ahead/perna_ruby_boruch_moocs_
dec2013.pdf
16. Hollands, F. M., & Tirthali, D. (2014). MOOCs: Expectations and reality. Full report. Center for Benefit-Cost Studies of Education, Teachers College, Columbia University. https://files.eric.ed.gov/fulltext/ED547237.pdf
17. To take just one example, Apple was offering iTunes U, a platform where anyone could access a large variety of free courses, lectures and materials offered by universities.
18. Informatics is a field of study focused on the representation, structure, processing and communication of information in natural and artificial systems. It is a discipline that encompasses various fields of computing and processing of information, such as Artificial Intelligence, Cognitive Science and Computer Sciences.
19. The etymological source of cybernetics is found in the Greek word "kybernetes", which means pilot, rudder, or a tool/device used to steer. Plato used this term in Alcibiades to discuss the governance of people. Norbert Wiener defined cybernetics as "the study of control and communication in the animal and the machine."
20. Conway, F., & Siegelman, J. (2005). Dark hero of the information age: In search of Norbert
Wiener, the father of cybernetics. Basic Books.
21. WHO. (2021, September 13). Depression. www.who.int/news-room/fact-sheets/
detail/depression
22. WHO. (2021, June 17). Suicide. www.who.int/news-room/fact-sheets/detail/suicide
23. COVID-19 Mental Disorders Collaborators. (2021). Global prevalence and burden of
depressive and anxiety disorders in 204 countries and territories in 2020 due to the
COVID-19 pandemic. Lancet (London, England), 398(10312), 1700–1712. https://doi.
org/10.1016/S0140-6736(21)02143-7
24. Mazoue, A. (2021, October 3). "French psychiatry has gone downhill in part because of American influence." France24. www.france24.com/en/france/20211003-french-psychiatry-has-gone-downhill-in-part-because-of-american-influence
25. "Alternative facts" was a formula used on 22 January 2017 during an NBC interview by Kellyanne Conway, Counselor to the US President, to describe a lie.
26. Gajanan, M. (2018, July 24). "What you're seeing . . . is not what's happening." People are comparing this Trump quote to George Orwell. Time. https://time.com/5347737/trump-quote-george-orwell-vfw-speech/
27. Brooks, D. (2013, February 5). The philosophy of data. The New York Times, A, p. 23.
www.nytimes.com/2013/02/05/opinion/brooks-the-philosophy-of-data.html
28. George Carlin. Life is Worth Losing, HBO, 2005
29. Adams, J. T. (1931). The epic of America. Little, Brown, and Company.
30. Hoxhaj, R. (2015). Wage expectations of illegal immigrants: The role of networks and previous migration experience. International Economics, 142, 136–151. https://doi.org/10.1016/j.inteco.2014.10.002
31. Hoover, H. (1927). Motion pictures, trade, and the welfare of our western hemisphere. Advocate of Peace through Justice, 89(5), 291–296. www.jstor.org/stable/20661595
32. Wolfe, P. (1991). On being woken up: The dreamtime in anthropology and in Austral-
ian settler culture. Comparative Studies in Society and History, 33(2), 197–224. https://
doi.org/10.1017/S0010417500017011
33. Riesman, D. (1951). Review of Hollywood: The dream factory, by Hortense Powdermaker. American Journal of Sociology, 56(6), 589–592. www.jstor.org/stable/2772480
34. Linton, R. (1951). Review of Hollywood, the dream factory – An anthropologist
looks at the movie-makers, by H. Powdermaker. American Anthropologist, 53(2), 269–
271. www.jstor.org/stable/663894
35. An example of this approach is provided by 'A Mickey Mouse Approach to Globalization', written by Jeffrey N. Wasserstrom for Yale University's Yale Center for the Study of Globalization.
36. Chetty, R., Grusky, D., Hell, M., Hendren, N., Manduca, R., & Narang, J. (2017).
The fading American dream: Trends in absolute income mobility since 1940. Science,
356(6336), 398–406.
37. Chetty, R., Hendren, N., Jones, M. R., & Porter, S. R. (2020). Race and economic
opportunity in the United States: An intergenerational perspective. The Quarterly Jour-
nal of Economics, 135(2), 711–783.
38. Bishai, G. W., & Lee, D. (2018). Makeup of the class. The Harvard Crimson.
SECTION II
Higher Learning
This section looks at higher education and its profound crisis of identity, placing a special focus on higher learning in the new millennium. This analysis also involves an enquiry into the results of the adoption of neoliberal ideas in higher education. The adoption of anti-democratic practices in universities that place profits ahead of educational aims is also investigated, looking at the intense surveillance of students and faculty and at other practices enhanced by the advancement of technology. The Americanisation of higher education, along with the incoherent adoption of market mechanisms in academia, adds much more than the pressure of the audit culture and the metrification of academic life. We now witness a direct impact on intellectual life, which is withering under the sum of pressures on academics. The last chapter of this section aims to place the use of AI in the context of educational aims and in relation to human values, such as the love of learning, beauty, and passion.
DOI: 10.4324/9781003266563-6
4
AUTOMATION OF TEACHING AND
LEARNING
Universities across the world in the early twenty-frst century fnd them-
selves in a paradoxical position. Never before in human history have they
been so numerous or so important, yet never before have they sufered from
such a disabling lack of confdence and loss of identity.
(Collini, 2012, p. 51)
DOI: 10.4324/9781003266563-7
in countries around the world, faith in private capitalism was greatly shaken
by the economic crisis of the 1930s and the cataclysms that followed. . . .
The traditional doctrine of “laissez faire,” or nonintervention by the state in
the economy, to which all countries adhered in the nineteenth century and
to a large extent until the early 1930s, was durably discredited.
(Piketty, 2014, p. 136)
The view changed in the 1970s, with one key moment marked by Milton Friedman when he published in The New York Times Magazine his essay manifesto, unambiguously titled "The Social Responsibility of Business Is to Increase Its Profits" (Friedman, 2007, pp. 173–178). In the United States, and in the part of Europe
that was left outside the grip of Soviet communism, this was seen as a favourable moment to launch a concerted attack on the role of government, of state-led planning, of taxation, and, most of all, on the idea of the common good. This new trend emerged in the early 1970s, finding new strength in the 1980s; Tony Judt notes that since 1973
The World Bank has promoted policies to change higher education from a domain guided by the idea of the common good into a part of the market, a commodity, since the early 1990s, but the first key step towards this objective is represented by an international event in 1994: the Marrakesh Ministerial Meeting. It was here that representatives of the 124 governments and the European Communities participating in the Uruguay Round of Multilateral Trade Negotiations met at Marrakesh, Morocco, from 12 to 15 April 1994. The Marrakesh Agreement established the World Trade Organization (WTO) and defined the basic framework for trade relations among all WTO members, under market-oriented policies. This is how the WTO describes this moment: "The 'Final Act' signed in Marrakesh in 1994 is like a cover note. Everything else is attached to this. Foremost is the Agreement Establishing the WTO (or the WTO Agreement), which serves as an umbrella agreement."
This agreement also guides what were called "education services." It is the moment when, for the first time in an international treaty, education is included in the list of sectors that are subject to trade in international markets. In 1999, in Seattle, the WTO conference adopted the conclusion of the "Millennium Round," where education was legalised as part of the market, regulated by the same rules of trade used for commercial entities. Education is now part of the market just like other sectors, such as financial services, insurance and banking services, and construction. The academic world is at this point dramatically changed. The nature of education and of universities becomes marginal in the new commercial field of higher education. Universities are considered, ranked, and evaluated with commercial concepts, procedures, and measurements; in fact, just a few years after the adoption of the Millennium Round, we see the first international academic rankings, with the use and impact they have today. The "product" had to be measured and ranked for a proper cost; the impact of the new field of university rankings is extraordinary for such a recent invention.
These steps partly explain why we find, in the second decade of the 21st century, the field of higher education entirely engulfed in an ecosystem that normalised commercial rankings and the economic rationale for university governance and educational aims. The lowest common denominators of higher education and learning, such as reducing life to employability and economic success, represent the lighthouse guiding learning. The aims of education, the ideals of a truly civil society, with a balanced appreciation of equity and fairness, humanism, social justice, and economic prosperity, are all drifting towards dissolution in control and conformism, the technologisation of spaces that must be left for humans to define and build, and surveillance.
The shift from learning and research to a field governed by WTO agreements and commercial considerations happened independently of academia, in a process that did not include representatives of student bodies, academics, or universities. The relevance of this omission became clear when ministers of economy, trade, and commerce became the key decision makers for entire systems of higher education, universities, and academics. Scholarly ideals became ornaments for a different flavour; the entire raison d'être of universities is now structured by commercial considerations. The very aims of education changed in reality, regardless of what some mission statements claim or what we see in glossy strategies adopted by universities. In commerce the ultimate reason is to make a profit, and this greatly influenced the evolution of academia over the following decades.
Academia was not the only field changed by these decisions; all sectors that were governed according to the aim of the common good and social progress, such as education or healthcare, became open to the new regulating principles of trade and commercial value. The WTO framework sets the function of the General Agreement on Trade in Services (GATS) in three main components:
• the framework of regulations that set the general obligations governing trade in services (including market access);
• details on specific services sectors; and
• timelines of liberalisation commitments for all WTO members.
The internal logic, the ethos, and the aims of all sectors included in GATS were altered or entirely changed under this new system of commercial agreements. The process of globalisation came with generous promises, but the main principles guiding it revolved around the interests of the United States. Consequently, globalisation accelerated the colonising process carried out in the past by the American Dream; unknowingly, even declared enemies of America adopted the American model. Beyond caricatural representations of this process, the new wave of colonisation with the American model goes much deeper than the adoption of jeans or pop culture. We don't need the level of expertise and brilliance of thinkers such as Stiglitz or Piketty to see that when the United States adopted neoliberalism, the rest of the world followed the example. It is interesting to see how international organisations, such as the IMF, the World Bank, or the OECD, use their power and influence to channel other countries, even those outside the American areas of political influence, towards adopting neoliberalism for their economies,
public policies, and cultural development. David Harvey defined neoliberalism as "a theory of political economic practices that proposes that human well-being can best be advanced by liberating individual entrepreneurial freedoms and skills within an institutional framework characterized by strong private property rights, free markets, and free trade" (Harvey, 2005, p. 2). Pierre Bourdieu once noted that neoliberalism is in essence just a programme of the methodical destruction of collectives, of the idea of the common good. The American model became synonymous with the neoliberal project starting in the 1970s and was promoted to the rest of the world with the economic power of international organisations, multinational corporations, and political influence. Of course, this is a very complex and convoluted process that can be explored extensively; however, Joseph Stiglitz, a Nobel Prize winner in Economic Sciences in 2001, summarised in an article for The Guardian the key dynamic of globalisation and how it was forged in the second part of the 20th century:
The US basically wrote the rules and created the institutions of globalisa-
tion. In some of these institutions – for example, the International Mon-
etary Fund – the US still has veto power, despite America’s diminished role
in the global economy. . . . To someone like me, who has watched trade
negotiations closely for more than a quarter-century, it is clear that US
trade negotiators got most of what they wanted. The problem was with
what they wanted. Their agenda was set, behind closed doors, by corpora-
tions. It was an agenda written by, and for, large multinational companies,
at the expense of workers and ordinary citizens everywhere.
(Stiglitz, 2017)
International organisations such as the WTO, the World Bank, and the OECD had this genesis and evolution, as tools of an American globalisation. There is a myth that the Soviet system lost the Cold War when its economies collapsed, but this is
a superficial judgement. That war was lost as soon as the American Dream, the bright lights of Hollywood and the charisma of movie stars, began to light up the imaginations of people living in a system that was offering a very different dream, one that stands as a barren land where imaginations wither. This is where the competition was truly lost; the anti-communist revolutions came at a point when even communist leaders knew that their system was a farce and that the only things keeping it alive were fear and control. When these last foundations developed vulnerabilities, it was all lost. America wrote the rules for all countries, created institutions for globalisation, and attached to them a credible and seductive project, lit by the magical brightness of the screen, where idealised heroes told stories about a perfect and beautiful life, always with a happy ending. The important point made by the US Commerce Secretary Herbert Hoover in 1927 became a powerful tool in the ideological competition of the 20th century, and globalisation was aligned with the idealised model of American life. The effort to design protocols and mechanisms suitable to properly serve the interests of various American corporations was the easy part.
The American cultural model infiltrated economic and public life across the world, and the transition to the 21st century is marked by the colonisation of the last spaces of public good with the "logic of the market," and with the jargon, aims, ideas, and utopias that shape the neoliberal project. The World Trade Organization changed academia into a higher education market guided by the aim of getting "value for money," where competition, the commodification of learning and teaching, and the goals of efficiency define the ethos of universities. Professors became commodities and service providers, and students were recast as customers, with potential students as resources in target markets. Universities adopted New Public Management and institutional entrepreneurship, and market positioning became the measure of judgement for various stakeholders. Leaders in academia are managers, responsible for efficiency, alignment with market demands, partnerships with industry and other corporate structures, and the achievement of Key Performance Indicators (KPIs). Rankings of what some unfortunate universities openly call "the product" – a ridiculous caricature of what is left of the aims of education, devoid of coherence in learning and of belief in the power of good teaching – guide institutions of higher education across the world. It is a competition to imitate and get as close as possible to Harvard University, the ultimate model of prosperity and astute managerial practice. The aim became to get as high as possible in the international rankings on the market. For universities unable to imitate Harvard, with mediocre research and poor indicators, new rankings were created to suit all (and to create another market, of convenient rankings that can be manipulated). This is how we have hilarious examples of universities proudly posting their Number 1 position in the country or the world for green campuses, or for sport facilities and results in gymnastics, and so on. Pseudo-rankings and deceptive practices became tools for universities to attract students, in a dynamic of connotations that is not even remotely related to an educative intention.
and experts. Virtues such as education, wisdom, erudition, and in-depth knowledge are not associated with American heroes, in pop culture or in real life. Richard Hofstadter noted as early as the 1960s, in his Pulitzer-winning book "Anti-Intellectualism in American Life," the definitive success of anti-intellectualism in American life. This success was sold to and adopted by the rest of the world, and education was an integral part of it. Hofstadter detailed in 1963, around the same time the concept of AI was born, the difference in popular perception and preference between intelligence and intellect. He noted that America was able, since the beginning of the last century, to present the archetype that was – consciously or not – followed and adopted by the rest of the world. It is relevant to see that the typical American hero is an efficient primitive, a "down-to-earth" character who can deal with real problems in life, as opposed to ridiculous educated people. It is somehow amusing to see that Superman's alter ego is a clumsy nerd, incapable of functioning properly; this alter ego is the opposite of the ultimate hero as proposed by the American model. In the introductory chapter of his book "The Anti-Intellectual Presidency," Elvin T. Lim notes that "the denigration of the intellect, the intellectual, and intellectual opinions has, to a degree not yet acknowledged, become a routine presidential rhetorical stance. Indeed, intellectuals have become among the most assailable piñatas of American politics" (Lim, 2008, p. 316). American anti-intellectual positions found new strength to colonise all spaces, from the economy to political discourse, when a mediocre actor was elected President of the United States in the 1980s. Of course, we cannot say that Ronald Reagan was solely responsible for anti-intellectual positions in American culture, as suspicion of and distaste for intellect and intellectual life are well rooted in American culture and are an integral part of the American Dream. When Reagan declared that taxpayers should not be asked "to subsidize intellectual curiosity," he opened a new perspective with a devastating impact on education.
[L]ocal school officials across the United States are being inundated with
threats of violence and other hostile messages from anonymous harassers
nationwide, fuelled by anger over culture-war issues. Reuters found 220
examples of such intimidation in a sampling of districts.
(Borter et al., 2022)
This special report presents examples found in a sample of school districts across America, which reveal a climate of aversion and extreme hostility against school representatives and educators. Various forms of violence are triggered by issues such as what students can learn, students' health and safety, or simply disagreements over what should be in the school curriculum. Teachers and experts in education are not trusted to make these decisions, and are treated with the most extreme contempt. The level of violence and aggression is terrifying. Journalists use the example of audio recordings, letters, and messages that include death threats against school officials and their families, including their children. In one of these instances, for example, we find that the
The role of education is subverted, and the effects of this radical shift reflect how dehumanised our society can become in the dystopian realities caused by neoliberalism and colonial thinking. In fact, neoliberalism not only justified and normalised contempt for expertise and education, but also cultivated ignorance and distrust of the educated "elite." For example, a 1969 internal memo of a corporate executive in the tobacco industry reflects the incentive to cultivate ignorance and respect for ignorance: "Doubt is our product since it is the best means of competing with the 'body of fact' that exists in the mind of the general public. It is also the means of establishing a controversy" (Gee, 2008, p. 474). The aim of educative influences fuelled by corporate structures is not simply to let people remain ignorant; the aim is to make people think that they are educated and know better than experts. This is the point where doubt can be successfully cultivated and profits collected. The multifaceted stories of the Covid-19 pandemic have
was not captured with the set of indicators relevant for immediately measurable outcomes. This profoundly changed what we understand today as "education" and "learning" in schools and universities.
Another effect of the "horse trade" was related to the fact that students from disadvantaged backgrounds performed poorly; this is how schools – especially those serving disadvantaged communities – lost funds. The quick solution was to play the system: students who typically performed poorly in tests were asked by school administrators to stay at home during evaluations, and only "good" students were included in tests. While some schools improved their results and secured funding, the most vulnerable students were stigmatised and deprived of a relevant education. It quickly became a perverted system. Other forms of manipulating data became the norm, and more complex and subtle solutions were designed, all at the cost of learning and real education. The idea of metrics and measurement now stands at the centre of the governance of higher education, directly shaping what we understand by teaching and learning. Universities created space to "answer" metrics that provide "a competitive advantage" in the international rankings. Good positioning in rankings translates into funding and budgets for universities, and entire programs perceived as unable to prove an immediate profit, such as the humanities and arts, increasingly lose the financial sources needed to remain a part of academia.
The shift of focus from learning to test performance and results, metrics, and the directly measurable outcomes of performance-based governance is the most significant step towards draining education of substance and relevance. It is not that students do not learn anymore, but even the best students, genuinely interested in achieving as much as possible from their academic careers, learn for the test, as the system pushes everyone towards this end. The love of learning, the interest in in-depth thinking, alternative solutions, or real foundations for a well-rounded education stand replaced by crude instrumentalism and sloganeering, in a process drained of significance, joy, and vigour. At the declarative level, critical thinking, "excellence," and "performance" are secured by all "commodity providers"; a simple honest look reveals that real education is now happening in spite of this system, not nurtured by it.
Since Clinton, the American model of education has been obsessively fixated on performance-based accountability and universities' return on investment (ROI). The international organisations, especially the OECD and the World Bank, aggressively promoted the new model as an integral part of the Americanisation of the world. From the structure of academic degrees to governance and administration, publications, and the exchange of ideas, higher education cannot now be imagined beyond the Anglo-Saxon model represented by American universities and some British policies and influences. Unbridled greed, the key feature of American capitalism identified by Stiglitz, is now proudly adopted by universities in the 21st century as a main value and raison d'être. Derek Bok, the former President of Harvard University, astutely observed in his book Universities in the Marketplace
that "what is new about today's commercial practices is not their existence but their unprecedented size and scope" (Bok, 2003, p. 221). "Greed is good," the speech delivered by the fictional character Gordon Gekko in the movie "Wall Street," is what led the world to the current crises, the slow implosion of our systems: environment, democracies, social balance and civility, health and climate, and our education and culture. Greed was the force that constantly eroded the meaning of education and changed universities into "Potemkin villages" where incoherence, poor results, the dissolution of scholarly ideals, and the profound malaise at the heart of the academic ethos are hidden behind corporatist gibberish and sloganeering.
The joy of learning – and teaching – was removed from higher education
with instrumental aims, crude metrics, and neoliberal managerialism. A report
published in 2017 found that
data indicate that the majority of university staff find their job stressful. Levels
of burnout appear higher among university staff than in general working
populations and are comparable to "high-risk" groups. . . . The proportions
of both university staff and postgraduate students with a risk of having or
developing a mental health problem, based on self-reported evidence, were
generally higher than for other working populations.
(Guthrie et al., 2017)
Research constantly finds that the level of stress for academics is leading to a significant increase in mental disorders. The results are often tragic and irreparable; for example, Malcolm Anderson, a deputy head of section and a personal tutor in accounting at Cardiff University's Cardiff Business School, collapsed under the stress and unreasonable requirements. He committed suicide and left notes to his family and to his university; it was revealed that he had to mark 418 exam papers in just 20 days. This translates to nine hours of work per day without any breaks (including food or toilet breaks). This is only for the assessment and marking of students' work, excluding his other work duties and the expectations associated with his role of Deputy Head of Section. Despite the fact that he complained to management about his allocation of work, he was ignored. The police detective investigating this case noted that e-mails on Malcolm Anderson's work computer "refer to work expectations not being manageable and the number of students going through the roof but there's been cuts."
In the United States, there are numerous stories of academics pushed to the extremes. We have the story of Margaret Mary Vojtko, an adjunct professor of French, who died of cardiac arrest after finding out that she had lost her poorly paid job at Duquesne University. Margaret's insecure, poorly paid job could not cover her cancer treatment. She was forced to become homeless over the winter as heating and basic maintenance of her house became impossible. In British universities, research reveals that academics work under the pressure of precarious
Similarly, Peter Higgs – the famous physicist after whom the Higgs boson, at the core of research conducted at CERN's Large Hadron Collider, is named – noted in 2013 that no university would hire him these days, as he would not be considered "productive" enough:

Today I wouldn't get an academic job – he said to The Guardian on his way
to Stockholm to receive his 2013 Nobel prize in physics – It's as simple as
that. I don't think I would be regarded as productive enough.
(Vostal, 2016, p. 126)
He expressed in the same interview doubts about the capacity of our current academic culture to lead to a similar scientific breakthrough; the focus now is on the relentless push to secure grants and publish for quantitative targets, not on depth and creativity.
There are many other examples of "unproductive" scholars who had the long-lost privilege of solitude and time to think and discuss their findings, revolutionising their fields of study with incalculable benefits for society and economies. We can consider another example in this sense; it is the story, from the 1970s, of a famously unproductive academic in the Arts Faculty at Harvard University. Universities at that time, including Harvard, were much less focused on quantitative targets and the pressure to "perform" for KPIs. Even so, his colleagues looked at him as an oddity, an idle academic with no significant results. One day this colleague finally published a manuscript: "A Theory of Justice," by John Rawls. It is one of the most cited works in his field, with tens of thousands of studies drawing on it. The most prestigious awards in the field are associated with this work, published in 1971 after a long period of preparation. Eric Betzig, the 2014 Nobel laureate in chemistry, revealed that the key to his success was "Just being left alone and allowed to focus 100 per cent," adding that "not being in academia for me has been the key." The model for these mutations follows a relatively simple pattern: lobbying and paying for influence over decisions, imposing ideas that lead the entire system to bankruptcy and then, at the moment when the system is already in ruins, offering the opportune salvation of privatisation. In education and learning, this dynamic was accelerated by the relentless work of international organisations in finance and the economy. It was normalised that the best expertise is in the hands of money-makers: economists, bankers, and technologists. This is how we reached the point where significant discoveries and ideas can be found outside the walls of academia, far from the wild marketisation of everything.
We now collectively contemplate the fall of universities and their complete transformation into institutions of professional training, which serve corporations and prepare the workforce for employers. There is no vision and no capacity left to let students choose the complex and difficult path of what we used to call a higher education. Significant research, new ideas, and exploratory analysis are now much more possible outside the walls of academia, since inside them the destructive ideological fundamentalism of neoliberalism in education has programmatically eroded teaching, learning, and research. In an article published by the National Review on the decline of prestige of American universities, a scholar from Stanford presents with courage the new reality of higher education:
$600 billion in endowments. Yet just 20 elite universities account for half
that total. And just four – Harvard, Yale, Stanford, and Princeton – account
for almost a quarter of all endowment funds.
(Hanson, 2021)
This is the American model for education: a prosperous elite of corporate entities, where learning is a marginal issue, and a fight for survival for the rest. The irony is that we have known that education has been in free fall for decades, and we all see that universities are in a moment of identity crisis with no solutions in sight. It was convenient to ignore the grave dysfunctions of the system, even to pretend that it is not part of reality that there is "no guarantee that a graduate can speak, write, or communicate coherently or think inductively." The result of the new managerialism and of the economic models adopted for universities is an ethos of bullying and dysfunction, where resistance is futile, sanctioned, and ruthlessly eliminated. Academics became complicit in this system, for survival, playing the academic game of production and competition against each other. A study on this topic tragically reveals that genuine critical thinking is excised and forms of resistance are limited to secret grunts and whispers. Managerial demands and the ridiculous language that betray the insecurity of some failed economists or entrepreneurs are met with obedience and public enthusiasm for the exploitation of others and of themselves (Kalfa et al., 2018).
We witness the end of the illusions that grave dysfunctions are not critical for our existence and that aggressive mediocrity can be a comfortable substitute for wisdom, knowledge, creativity, and educated imaginations. The pandemic revealed that we have collectively built foundations so weak that our vital systems implode when we look at the world only in pecuniary, commercial terms. The crisis of climate change raises the very real possibility of catastrophic changes and irreparable disasters; we see a continuous rise of extreme inequities, with economic systems serving a minuscule minority of extremely wealthy profiteers and producing desperate poverty for the majority. The rise of extremism and fascism is generalised and too strident to be ignored; it is normalised and already part of our political systems. We may think that, to use the title of a book written on this important topic by Michael Sandel, there are very few things left, if any, that money can't buy. We see now that we can ruthlessly exploit everything, squeeze a profit from any part of life, but there is a cost for the amoral choices made by the financial landlords and corporate megastructures. There may be almost nothing left that "money can't buy," but we cannot buy a sustainable future; we have to re-learn how to build one.
The new lexicon of academia now reflects the new values guiding universities: "benchmarks," "performance indicators," "business acumen," "customer focus," "talent management," "rankings," "outcomes," "product," "customers," "outputs," and other terms commonly used in the neoliberal, managerialist jargon. We exist in and are shaped by the language we use; creating value on the
academics, and students alike, in a sadistic and amoral culture of "efficiency" and suspicion, which leads to rising rates of stress, anxiety, and suicide in academia. Kathleen Lynch observed in an article published in 2015 that this focus on rankings
It is one of the most powerful tools our species has created. It helps doctors fight disease. It can predict global weather patterns. It improves education for children everywhere. And now, we unleash it . . . on your taxes (n. 32).
The enthusiasm was kept alive for some years, but at one point, without media releases or even academic debate, Deakin University quietly stopped the magical 24/7/365 help provided by Watson. There is no study on the reasons for this decision taken by the Australian university, and no press release. It looks like it is actually more suitable, from a financial and organisational point of view, to have humans in charge of student administration and of solutions for students' needs. Maybe the 24/7/365 flow of information about administrative arrangements is not the most pressing need for a university student.
The AI genie was returned to its bottle, but there is no significant discussion about what happened to lead to this divorce. We have no research, no press release, and no academic debate about a university that made so much noise about the adoption of AI and then discreetly and abruptly stopped its AI application. This is a story that should tell university administrators, teachers, students, and anyone interested in education and the evolution of our societies that the siren songs of edtech marketers need to be treated with healthy and objective scepticism. Teaching, learning, and higher education take a dangerous path when we simplify everything to fit the functioning of computing algorithms. Here we reach again the important discussion about context and communication, information and technology, and – most importantly – we have to find what it means to be human. When systems start crumbling and crises become impossible to ignore or to hide behind Potemkin screens, we have to understand what it means to be educated: not just well informed, or employable, or intelligent, but truly well-educated.
Notes
1. Collini, S. (2012). What are universities for? Penguin.
2. Especially in higher education systems in Canada, the USA, the UK, Australia, and New Zealand, but not necessarily restricted to these countries.
3. TUC. (2020). Technology managing people: The worker experience. Trades Union Congress. www.tuc.org.uk/sites/default/files/2020-11/Technology_Managing_People_Report_2020_AW_Optimised.pdf
4. Bel, G. (2010). Against the mainstream: Nazi privatization in 1930s Germany. The Economic History Review, 63(1), 34–55. https://doi.org/10.1111/j.1468-0289.2009.00473.x
5. Ferguson, T., & Voth, H.-J. (2008). Betting on Hitler – The value of political connec-
tions in Nazi Germany*. The Quarterly Journal of Economics, 123(1), 101–137. https://
doi.org/10.1162/qjec.2008.123.1.101
6. Drucker, P. F. (1969). The age of discontinuity; Guidelines to our changing society. Harper &
Row.
7. Piketty, T. (2014). Capital in the twenty-first century. The Belknap Press of Harvard Uni-
versity Press.
8. Friedman, M. (2007). The social responsibility of business is to increase its profits.
In W. C. Zimmerli, M. Holzinger, & K. Richter (Eds.), Corporate ethics and corpo-
rate governance (pp. 173–178). Springer Berlin Heidelberg. https://doi.org/10.1007/
978-3-540-70818-6_14
9. Judt, T. (2005). Postwar. A history of Europe since 1945. The Penguin Press.
10. Moore, M. (1999, July 1). The WTO: The challenge ahead. WTO News: Speeches,
Address to The New Zealand Institute of International Affairs. www.wto.org/english/
news_e/spmm_e/spmm01_e.htm
11. Harvey, D. (2005). A brief history of neoliberalism. Oxford University Press.
12. Stiglitz, J. (2017, December 6). Globalisation: Time to look at historic mistakes
to plot the future. The Guardian. www.theguardian.com/business/2017/dec/05/
globalisation-time-look-at-past-plot-the-future-joseph-stiglitz
13. Smyth, J. (2017). The toxic university: Zombie leadership, academic rock stars and neoliberal
ideology. Palgrave Macmillan.
14. Gewin, V. (2021). How to blow the whistle on an academic bully. Nature, 593, 299–
301. https://doi.org/10.1038/d41586-021-01252-z
15. Hofstadter, R. (1963). Anti-intellectualism in American life. Knopf.
16. Lim, E. T. (2008). The anti-intellectual presidency. The decline of presidential rhetoric from
George Washington to George W. Bush. Oxford University Press.
17. This is a quote from a TV interview on June 3, 2016, between Faisal Islam of Sky News and the Conservative politician and leader of the campaign for Brexit, Michael Gove.
The full sentence stated by Michael Gove is: “I think that the people of this country have
had enough of experts from organisations with acronyms saying that they know what is best and
getting it consistently wrong, because these people are the same ones who got consistently wrong”
(Gove is interrupted by the interviewer). Retrieved 6 February 2022, from https://
youtu.be/GGgiGtJk7MA
18. Romney, M. (2012, February 10). Mitt Romney – Remarks to the conservative political
action conference. Online by Gerhard Peters and John T. Woolley, The American Presi-
dency Project www.presidency.ucsb.edu/node/300160
19. Borter, G., Ax, J., & Tanfani, J. (15 February 2022). Schools under siege. A Reuters
Special Report. www.reuters.com/investigates/special-report/usa-education-threats/
20. Gee, D. (2008). [Review of doubt is their product: How industry’s assault on science
threatens your health, by D. Michaels]. Journal of Public Health Policy, 29(4), 474–476.
www.jstor.org/stable/40207213
21. Bok, D. (2003). Universities in the marketplace: The commercialization of higher education.
Princeton University Press.
22. Guthrie, S., Lichten, C., van Belle, J., Ball, S., Knack, A., & Hofman, J. (2017). Under-
standing mental health in the research environment. A Rapid Evidence Assessment. Rand
Europe.
23. UCU. (2016). Precarious work in higher education. Insecure contracts and how they have changed
over time. University and College Union www.ucu.org.uk/media/10899/Precarious-
work-in-higher-education-May-20/pdf/ucu_he-precarity-report_may20.pdf
24. Andrews, S., Bare, L., Bentley, P., Goedegebuure, L., Pugsley, C., & Rance, B. (2016).
Contingent academic Employment in Australian Universities. LH Martin Institute and Aus-
tralian Higher Education Industrial Association.
25. Lamb, H. (2017, January 12). Saul Perlmutter: "Scientific discoveries aren't made to order." Times Higher Education. www.timeshighereducation.com/features/saul-perlmutter-scientific-discoveries-arent-made-order
26. Vostal, F. (2016). Introduction: The pulse of modern Academia. In Accelerating aca-
demia: The changing structure of academic time (pp. 1–10). Palgrave Macmillan. https://
doi.org/10.1057/9781137473608_1
27. Hanson, V. D. (2021, April 29). American universities have lost their prestige. National Review. www.nationalreview.com/2021/04/american-universities-have-lost-their-prestige/
28. Kalfa, S., Wilkinson, A., & Gollan, P. J. (2018). The academic game: Compliance and
resistance in universities. Work, Employment and Society, 32(2), 274–291.
29. Ball, S. J. (2003). The teacher’s soul and the terrors of performativity. Journal of Educa-
tion Policy, 18(2), 215–228. https://doi.org/10.1080/0268093022000043065
Public service and educational campaigns had other uses too. Many public
service and educational initiatives included a component of surveillance,
generating data that insurance companies used to refine the risk-rating and classification structures used to price and determine availability of insurance coverage. Some of these surveillance efforts collected data without consent.
(Horan, 2021, p. 44, n. 2)
Insurance Era is a book that also reveals that corporate efforts to gather and control private data are not a recent phenomenon, a new idea of corporate giants to create what Shoshana Zuboff called "surveillance capitalism." Horan's book shows how insurance companies used their power for surveillance at the beginning of the 20th century, mainly using data to condition and control who could buy a car or a house, who could start a business or take out a loan. It was a carefully maintained illusion that these decisions were determined simply by access to insurance or the capacity to pay more for it; data masters had ultimate control before the Internet was even invented. The lesson of the advantages offered by surveillance was kept and built on. New tech and edtech corporations did not invent surveillance and the collection of data without consent; they simply built on a solid tradition of American capitalism in doing this. It was clear from the first steps of new tech that those who collect data would secure and maintain power.
On the other hand, we have another important root for educational technologies: AI, like the Internet, was born as a military project. It maintains in its structural design the tendencies of surveillance and control, manipulation and strategic advantage – and power. To ignore these roots is like trying to understand a newly discovered species without a basic understanding of biology. We are all determined by our roots, and AI is no exception. It is important to note that the educational project proposed by edtech and the AI revolution inconspicuously leads higher education towards a model of education close to that of the military schools of the 19th century. It is a controlling, authoritarian model that removes students' agency, based on surveillance and control, on pedagogical myths, and on a common disregard for scientific findings that impact on the official narrative. No one presented military schools in the 19th century as places where identity is broken and minds are forcefully led to mediocrity; they were presented as places where heroes, new leaders, and brilliant tacticians are created. Big data and learning analytics are both associated in higher education with the practice of collecting significant amounts of private data without student consent, using extremely intrusive software that is euphemistically labelled with noble words that suggest good intentions, such as "academic integrity," again for surveillance and control. Sensors, security measures, and Internet use surveillance are now part of a complex system that collects an immense amount of data; research on this topic consistently reveals that most students have no idea what information is collected, how, and how it can be used and misused.
It is not a coincidence that scientists with significant contributions to the development of AI and edtech intersect, within their careers, with projects for the military. We can take the 1950s case of what was called at that time "the push-button schools," a proposal for a future educational system that looks today like a blueprint for education in our contemporary schools and
universities. Simon Ramo presented his manifesto for the “push-button schools”
in an article titled “A New Technique of Education,” which was published in
Engineering and Science Monthly, in October 1957. Dr. Simon Ramo was the
chief scientist of the Intercontinental Ballistic Missile Programme from 1954
until 1958, and a faculty member at the California Institute of Technology. Ramo's contributions to the US military are so significant that he is often referred to as "the father of the intercontinental ballistic missile" (ICBM), developed by the Pentagon. He noted that in the school of the future all students should be registered, with all relevant details recorded, and only then can they engage in the course of study that has been determined for them. At this point, Ramo notes, "the student receives a specially stamped small plate about the size of a 'Charga-Plate,'" which identifies both him and his programme; an alternative system would allow "the fingerprint system" to be used to access all data relevant to a student:
When this plate is introduced at any time into an appropriate large data and
analysis machine near the principal's office, and if the right levers are pulled
by its operator, the entire record and progress of this student will immedi-
ately be made available.
Ramo also details that students should be monitored, which is just a more palat-
able word for ubiquitous surveillance – and
after completing his registration, the student introduces his plate into one
machine on the way out, which quickly prints out some tailored informa-
tion so that he knows where he should go at various times of the day and
anything else that is expected of him.
(p. 19)
[A] typical school day will consist of a number of sessions, some of which
are spent, as now, in rooms with other students and a teacher and some of
which are spent with a machine. Sometimes a human operator is present
with the machine and sometimes not.
The teacher is not always required, and it is noted that the development of new
technologies can find solutions to entirely replace the teacher. Simon Ramo even
describes the birth of a “new industry,” with an “industrial team” that will work
with the teaching staff, which will
Ramo presents not only a techno-utopian vision for education, where machines will be humanised in the near future and students will have more free time and study less, but also a surprisingly accurate picture of education as we know it today. We now have the new industry where experts in machines, with no interest in or discernible knowledge of ideas in education, pedagogical possibilities, or educational theories, have important roles in curriculum development and the "delivery" of courses; we also have what Ramo named the new job of "teaching engineer," only that now we call them "educational technologists," or we find this group under other techno-capitalist labels (n. 4). We have a huge industry built around the function of learning analytics, as it was imagined by Ramo, where machines are used for student surveillance and the collected data leads to pathways of study or helps "discover the special problems that need special attention."
In a few words, we now have "push-button" education, with push-button classes, where machines are used as teachers, data aggregators, educational solutions, and technological mentors. Technicians guide and tinker with the programmes of these machines when needed, and students have a "personalised" education. This new project, where machines are teachers and teachers are technicians, was presented as necessary for technological and educational advancement and, most importantly, to secure the national interests of the United States. This aim stands in perfect alignment with later developments and the intertwined evolution of military projects in the United States. A report for the military published in the mid-1980s succinctly explains how military applications sparked an "electronic revolution" in education:
Since the 1950s we have heard the same tempting promise, common in education in its current form: that we will commission teaching to a machine and, once this is secured, we will have better classes, better education, and better learning.
The promise of a “system that makes possible more education for more people
with fewer skilled teachers being wasted in the more routine tasks that a machine
should do for them” (Ramo, p. 22) is reframed for commercial reasons or restated
in new forms, as MOOCs, as learning analytics, and so on, for almost a century.
Sidney L. Pressey designed teaching machines in the mid-1920s. His inventions were presented for the first time at a conference of the American Psychological Association (APA) in 1924. His proposal was slightly improved the next year, and his primitive forms of "teaching machines" were able to administer multiple-choice questions (MCQs). Similar to the current uses of MCQs, these first machines for assessment were presented as "teaching" machines, which is obviously based on a fundamental confusion about what teaching is and what the role of the teacher is in stimulating, guiding, and facilitating learning. These inventions also compromised, maybe forever, the idea of assessment in schools and universities. The promise of "teaching machines" has since then been pushed to an immediate future, and then again to the next few years. Currently, we are using
the same solutions and principles in a new context, with much more advanced technologies, without evidence that push-button classes and "teaching machines" actually enhance learning, improve teaching, and open new ways to achieve the aims of higher education. This statement will automatically irritate the zealot followers of edtech, most of them making a career using technology to mask ignorance in education. However, we can back this statement with an extensive study provided, surprisingly, by one of the most aggressive and influential actors promoting the idea that "teaching machines" will improve learning and teaching: the OECD. The extensive report was compiled and published by the OECD in 2015, with a surprising reflection on the myth at the core of educational reforms promoted across the world in the last decades. It dispels the opinion that technology is in itself a solution to our educational problems, and provides data and evidence that paint a much more nuanced and different reality. The evidence shows that the myth of techno-solutionism in education is not based on scientific studies and data, and reveals that it was a wrong and misguided solution for education. Specifically, the report concludes that:
• “Resources invested in ICT for education are not linked to improved student
achievement in reading, mathematics or science.
• In countries where it is less common for students to use the Internet at
school for schoolwork, students’ performance in reading improved more rap-
idly than in countries where such use is more common, on average.
• Overall, the relationship between computer use at school and performance
is graphically illustrated by a hill shape, which suggests that limited use of
computers at school may be better than no use at all, but levels of computer
use above the current OECD average are associated with significantly poorer results" (OECD, 2015, p. 146, n. 6).
The American advisors of the armed forces during the Second World War re-
invented their work as think tanks and “technocratic-educationalizing” networks
that used strategies developed for the war in the new civilian context. Bürgi pre-
sents the unseen story of influences that shaped the current reality of education, in schools and universities. It also presents the constant influence of the US Department of Defence on educational systems and, later, on the inner workings of schools and universities, on administration and funding, and on curricula and teaching. Starting with the late 1950s, the OECD used its power to change the way we understand teaching and learning and the aims of education: "education was
subjectivity, error, bias, prejudices, partiality, and limitations were all left behind, as old and inferior ways of understanding reality. As we briefly noted before, data involves not only a certain selection that makes it limited and skewed by the intentions of those who designed the methods of collection, but is also inescapably linked to the past, to features, actions, information, and events that have already happened. All these elements captured in "data" can be changed by one significant event, which can make it all irrelevant for the new context. In the case of AI we can look at the way personalisation works in education and think about how many troubled students became great innovators, extraordinary minds that shaped fields of knowledge, and how easy it would have been to block all of them using what was the evident truth of their times. Many brilliant students had a time when they were disengaged and uninterested in their studies, and personalisation at that time would actually have blocked their positive evolution. Personalisation is also historical: it is recent in our history that it was considered that only men could be scientists, or that access to education was restricted for the Black population in America. It is a great error to ignore the fact that what is considered reliable and accurate data is constantly changing, in line with specific places and times.
If we imagine schools applying "learning analytics" in the past, to all students, most exemplary cases of extraordinary artists, writers, or scientists would disappear into vocationalised pathways, filled with lower-level information that would be more aligned with their interests and potential as these appeared at a particular time and place. Learning analytics, the systematic algorithmic analysis of data collected on students, is an extraordinary instrument of surveillance that is integrated in LMSs, which are also surveillance traps for students.
One important lesson of the last two decades is that big technological companies such as Google, Facebook/Meta, or Netflix use personalisation to reward and entrench intellectual laziness, misinformation, and confirmation of biases, in a spinning whirlpool of superficial, irrelevant, and low-quality information. The "recommendation algorithms" (or curation algorithms) stand at the
centre of public scandals of unethical use of data for political manipulation, such
as the Cambridge Analytica scandal. Data analytics and curated materials are self-
limiting and build a chaotic digital universe where individuals’ thinking, analysis,
and clear judgement are constantly hindered or suppressed. In a book devoted to
new methods of manipulation and censorship, Margaret E. Roberts details one particular method used to suppress people's ability to find important information: flooding. This approach was reportedly used in China, when censors realised that not all information can be suppressed when citizens use social media tools, and that some of this information can be dangerous for the stability of the regime. The solution is simple: rather than suppressing all inconvenient information, it is much more efficient to "flood" the digital universe with a sea of stories for fast consumption. Flooding is "the coordinated production of information by an authority with the intent of competing with or distracting from information the authority would rather consumers not access" (Roberts, 2018, p. 80, n. 11). In other
despite the advances in both hardware and software, recent studies show
little evidence for the effectiveness of this form of Personalized Instruction. This is due in large part to the incredible diversity of systems that are lumped together under the label of Personalized Instruction. . . . In fact, there is so much variability in features and models for implementation that it is impossible to make reasonable claims about the efficacy of Personalized Instruction as a whole.
(Enyedy, 2014, pp. i–5, n. 12)
student, maximising strengths and using individual interests to open new areas
of knowledge. A common aspect of utopian projects is the impulse to suppress moral values and principles in order to reach the ideal place; and this is how they all fail and end in terrible tragedies. In this case, the utopian project of AI-personalised instruction asks us to suppress moral and ethical considerations about the systematic use of surveillance, the present and future vulnerabilisation of students, and the standardisation of an education defined by mediocrity. Bizarrely, the most recent OECD reports on AI and learning include a model of automation of personalised learning based on the evolution and promises of self-driving cars (OECD, 2021b, p. 60, n. 13), finding that AI in education will adjust individual tasks
based on students’ knowledge and will personalise the order in which students
“work through curriculum.” In the same publication we fnd a dystopian version
of instruction and schooling, with extraordinary levels of surveillance and com-
plete indiference to students’ privacy. Part of the “optimisation” of education
through the advancement of edtech and AI, the authors recommend the use of
“behavioural data” to collect information on “students’ behaviour during learn-
ing,” noting that “one important source of behavioural data are log fles. This
data lists the sequences of learner-technology interactions at the millisecond level
leaving a trail of activities with learning technologies.” Another source of behav-
ioural data are cursor movements and keyboard entries; the more one moves the
most engaged they look in the fnal reports. Eye movements are also captured, to
indicate what students look at during learning, which is used to detect allocation
of attention: “Wearable eye trackers also assess students’ interaction with physical
objects and social interaction during learning. In addition, specifc eye-tracking
data such as pupil dilation and blinking behaviour have been correlated to cogni-
tive load and afective states” (OECD, 2021a, p. 60).
The number of assumptions about learning in this case is staggering, but what may be the most serious implication of these suggestions is that they reveal a way of understanding the student as a fixed and singular generic being, a trainable creature that reacts, learns, and delivers only measurable and standardised outcomes. It is a post-human student, and education is designed in a post-human paradigm, pretending to use a hybrid model where humans use AI to enhance their efficiency. The most disappointing and troubling part is the impoverishment of education, and of what it means to be an educated person. The relentless surveillance of students starts from the same type of assumptions that lead to the idea that inmates under permanent surveillance behave better. In general, it does not and cannot escape students that they are watched, but not necessarily seen. How can we imagine an education where students are always under surveillance, and every act, movement, or intention – including their eye movements – is recorded, analysed, and reported in forms that will alter their future learning and, ultimately, their lives? What are the implications of continuous and ubiquitous surveillance on students' mental health, their motivation for learning, or how schooling is perceived? What if the AI report is wrong? What if a student stares at a spot not because they
are cheating, but just because that is how thinking and concentration happen for
that individual?
AI and learning analytics require us to reconsider what type of data is collected, how relevant it is for what it is supposedly reflecting, and what the quality of that data is. As mentioned before, any AI-powered system is only as good as the data provided to the algorithm. If we take the example of businesses, obsessively indicated (directly or in a subtle form) by various consultancy firms or the OECD as the model that should be followed by higher education, the quality of data used for reporting and analytics is not encouraging. In 2017, Harvard
Business Review reported that only 3% (three!) of companies met basic standards
required for data. The report found that:
It is simply delusional to think that universities are in a much better situation, and the problem of reproducibility in academic research is just one factor that supports this doubt. In effect, we can say that data-driven analytics and predictive solutions require at least serious interrogation, if not a complete refusal to reduce an important part of higher education to elements that are so susceptible to error.
Data can be extremely deceiving even in the most professional reports. For example, we have the case of the COVID-19 pandemic. In early 2020, when the number of infections was rising and people were becoming worried about the future, the US administration had complete confidence. On 26 February 2020, in the White House briefing room, Trump made the clear point that everything
was under control: “We’re very, very ready for this – Trump said – for any-
thing.” He also said that in a report co-produced by the Johns Hopkins Center
for Health Security, which was ranking 195 countries on their readiness to con-
front a pandemic, “The United States, is rated number one most prepared.” Data
and evidence aggregated in the Global Health Security (GHS) Index placed the
United States as the most prepared country in the world to deal with a pandemic.
The reality of the following months proved that the United States was coping much worse than most countries affected by COVID-19. The reality was lost somewhere between data reports and narrow indicators; factors that were extremely relevant were simply not identified and measured (e.g. the level of trust in science and expertise, trust in the government, and others). If we take an example from finance and markets, the new gods of the contemporary world, we can easily see
how data and predictive analytics can go wrong. In 2019, Argentina suddenly recorded a market crash, in an event reported by Bloomberg as almost completely implausible: "there was a 99.994% probability that an event like Monday's sell-off in Argentina wouldn't happen" (Sivabalan, 2019, n. 15). The chance of that happening was 0.006%, but it happened. AI was not helping human intelligence
and many people lost a lot of money. The lesson for education should be that data
can be partial, not including some variables that can be essential for a trend or a
report; the analytics report may be wrong because the indicators used to collect
data are biased, partial, or inconsistent. The possibilities of collecting data through
surveillance are skewed towards aspects that are in fact irrelevant for the student’s
interest and motivation for learning.
Surveillance is a constant reminder of power structures, but it is not conducive to mutual trust, intrinsic motivation for learning, or even a positive collaborative relationship between students and their teachers and administrators. Probably the most toxic part of ongoing surveillance and of learning and predictive analytics is that students are not involved in reporting. Once data is collected, selected, aggregated, and interpreted, a report is created, but the student cannot influence the conclusions of these reports. Most commonly, students cannot even see these reports, and the conclusion, right or wrong, impacts their academic pathways and experience. This is unfair, wrong, and corrosive for the normal relationship required for educational experiences. In an interview published in 2020 by the Institute for Human-Centered AI (HAI) at Stanford University, Kate Vredenburgh uses a well-placed metaphor to explain how AI's scores and reports can take a terrifying form. She makes the observation that Kafka's "The Trial" is
Management Systems (LMS) in higher education is not shared with the students,
and learning analytics reports remain open only to the instructor and the institu-
tion. This is especially concerning when we see that errors of advanced AI sys-
tems led to wrongful arrests and derailed the lives of innocent people. For example, Wired published in March 2022 an article titled "How Wrongful Arrests Based on AI Derailed 3 Men's Lives," which details how destructive it was for people and their families to be wrongfully identified by facial recognition software and arrested for crimes they did not commit (Johnson, 2022, n. 17). The AI software leading to the arrest of these people was used in Detroit, where the Police Chief admitted that it misidentified people in 96% of cases. If AI was wrongfully used
to send people to prison, despite their obvious innocence, we can safely imagine
that in schools and universities, learning analytics reports can be erroneous and
irrelevant for students’ interests and potential.
This is far from being an isolated case: in 2022, the Associated Press published
a report about a man who was sent to prison, accused of murder, without any
other evidence than AI algorithms: “Prosecutors said technology powered by a
secret algorithm that analyzed noises detected by the sensors indicated Williams
shot and killed the man” (Burke et al., 202218). If the system of justice is opened
in some rare and fortunate cases to open enquiry and dispute, edtech is hermeti-
cally shut for students, and the “conviction” remains as it was set by the system.
Despite the wave of evidence that algorithms, created by humans who transmit
their own preferences, are open to endlessly used and reinforced biases, edtech is
unchanged and uninterested in limits and risks. The most infuential centres of
power, such as OECD, or corporations with a presence in edtech, international
consultancies frms, and various think tanks insist to present the advancements
in surveillance in education as a positive development for students, teachers, and
institutions of education. This is an unfortunate position not only because it is
treating AI as a perfect solution, without faws, in an uncritical and unscientifc
manner, but also because it leaves aside the fact that schools and universities use
proprietary software for surveillance and analysis. In efect, education is based on
black box algorithms, with no idea about how exactly information is processed
and what are potential risks and downsides.
A report on student surveillance practices in US schools, released by the
Center for Democracy & Technology in 2021, reveals that
watched and any action outside what is permitted will have consequences; it is
a constant reminder that those holding power are watching. It is an obvious fact
that a regime of surveillance hinders personal expression, creativity and independ-
ence, imagination and courageous experimentation. In other words, the current
arrangements within education and the uncritical adoption of AI “learning ana-
lytics,” “predictive analytics,” and extensive surveillance enhanced by AI obstruct
and eliminate students’ creativity, spontaneity, and free and independent thinking.
It should not escape us that this is what education needs to nurture; it is important to see the part of mission statements and institutional strategies where this is mentioned translated clearly into action. It is a profound inconsistency to declare the need to nurture and elevate individual differences, creativity, and self-expression while using AI for surveillance. Ignoring these contradictions does not serve anyone and creates, in the longer term, the context for internal dissonance and conflict; for students it generates an ethos where love for learning, creativity, and engagement can happen in spite of the system, not because of it.
A study published in 2016 shows that surveillance causes self-censorship and the suppression of dissent or of the expression of opinions that may look different from what is accepted by the majority (Stoycheff, 2016, n. 20). While the authors acknowledge that researchers "have consistently showed that perception of hostile opinion climates – or when individuals believe their views differ from the majority – significantly chills one's willingness to publicly disclose political views" (p. 296), the study shows directly that online surveillance enables a culture of conformity and self-censorship, which works against minority groups and opinions. This conclusion, on the impact of surveillance in online environments, is just another confirmation of research on self-censorship, such as the "spiral of silence" presented by Elisabeth Noelle-Neumann in 1974 (Noelle-Neumann, 1974, n. 21).
Universities, schools, and edtech corporations are using the model of Big Tech, the large online platforms such as Google, Netflix, YouTube, and Amazon, which use various forms of data collection to provide personalised services at a large scale. These companies collect any kind of user data in discreet and hidden forms, sometimes covered by a general user agreement designed as a long, jargon-filled, and technical text that most users can't – or won't – read. The data collected is, as is the case with edtech and institutions of education, seemingly unobjectionable, including websites visited, applications used, time and duration of use, and location. The problem for users starts when all this data is aggregated with data collected by other companies, such as financial services; these personal data packages reveal where and when a credit card was used and why, how this may impact purchasing preferences and possibilities in the future, or what personal health and financial risks are most probable for a certain individual. There are well-documented and important books revealing this type of information and these personal risks, such as "The Black Box Society" by Frank Pasquale. His book provides maybe the best analysis of the "one-way mirror" used by tech lords and numerous real-life examples where algorithmic profiling is affecting or
devastating people’s lives (Pasquale, 201522). Here is one important problem for
education: when the black box of AI is profling students, deciding what they can
and cannot do, placing them in arbitrary categories such as “at risk of failing,”
“isolated,” or “uninterested,” there is no possibility of recourse. Students cannot
appeal these decisions and most often not even know that a label was attached to
them to shape in an invisible and powerful way their academic life. The algorithm
decides and not even teachers can do much about it. Learning analytics and pre-
dictive analytics are not accountable forms of management of data or education;
students do not have their say in what label is attached to them, and schools or
universities simply don’t know how the AI algorithms used are working, and
cannot even fnd this. It is a strange fact, but universities, with schools of engi-
neering and top specialists in programming, with engineers who are educating
the engineers of the future, do not use their own platforms for online education,
or LMS. In Australia, for example, there is no university using an LMS created
within, with proprietary software owned by the university and people who can
actually take responsibility when something goes wrong, and can explain what
exactly was wrong in the algorithms. In efect, decision on students’ education is
very much infuenced by corporate entities with neither expertise nor interest in
educating people per se.
An algorithmic score can be of vital importance; we can take the example of an AI system used for over a decade by the police in Spain, called VioGén. This system was used by the Spanish police to assess the risk levels for women who filed a complaint of abuse. An external audit, presented in March 2022, reveals that it has severe flaws that lead to women's risk being ranked too low. The use of this system is associated with the disturbing finding that the VioGén system "discards" most cases "by giving them an 'unappreciated' risk score"; "only 3% of the women who are victims of gender violence receive a risk score of 'medium' or above and, therefore, 'effective police protection'." For example, "women who were killed by their partners and did not have children were systematically assigned lower risk scores than those who did, with a recall difference between groups of 44%" (Eticas Foundation, 2022, p. 32, n. 23). The audit of VioGén finds, among its troubling conclusions, that "VioGén is, in practice, an automated system with minimal and inconsistent human oversight" (p. 32).
Edtech is massively used in universities as a technology of domination, where AI pushes students and teachers, with unprecedented efficiency, to submit to and accept its manipulative authority and surveillance with apathy and resignation. The New Management of universities actively employs, without much scrutiny, surveillance and inadequate educational solutions partly because it keeps students and teaching staff docile and controllable, preserving the illusion of stability.
The most common assumption is that data collected in education is somehow secured and kept far from amoral data brokers who monetise people's vulnerabilities, preferences, and lives. The truth is that this assumption is far from what
is really happening. We can take just one example: PowerSchool, an edtech
are not necessarily specific to education; schools and universities are using against students what is already available to corporations and various companies to watch and control employees. In this sense, we can say without doubt that the trend is not to humanise the workplace but to make employees work like robots. Uber uses AI to monitor and rate workers' performance and rank them on a five-star basis, and a certain number of low ratings leads to workers' termination (blocking their access to the app). In 2021, Amazon's CEO Jeff Bezos sent a letter to shareholders announcing that Amazon workers would be managed through the use of AI surveillance, specifically watching which muscles are engaged in their work, noting that: "We're developing new automated staffing schedules that use sophisticated algorithms to rotate employees among jobs that use different muscle-tendon groups to decrease repetitive motion and help protect employees from MSD risks" (Bezos, 2021, n. 26). The hellish conditions of Amazon's employees are
well documented, including the dystopian use of surveillance on Amazon drivers
and warehouse workers. Schooling is developing along the same trend. In 2018 the Hangzhou No. 11 High School in China introduced an AI system that uses facial recognition to evaluate students' level of engagement, collecting data that is aggregated to report whether a student is engaged or daydreaming, angry or bored, engaged in reading or writing, actively listening to the teacher, or happy or surprised. This system, labelled "the smart classroom behavioural analysis system," scans the entire classroom every 30 seconds to capture every movement and facial expression of all students in the classroom. It is just one part of the "smart campus," which integrates surveillance and facial recognition features in other areas, such as the high school canteen, vending machines, or the library. All this information is aggregated and the teacher can read it for "a better classroom management." The Hangzhou Bureau of Education made public its intention to extend the emotional evaluation system to over 190 schools and kindergartens. China is also using technologies that are mining extraordinarily large volumes of data from workers' brains: "Government-backed surveillance projects are deploying brain-reading technology to detect changes in emotional states in employees on the production line, the military and at the helm of high-speed trains" (Chen, 2018, n. 27). It is tempting to believe that this level
of surveillance is common only in dictatorial regimes, but this is very far from the truth. AI-powered surveillance is already used on university campuses, in police profiling and surveillance, and in the everyday life of every citizen. Policing the classroom is grotesquely enhanced by AI systems, offering the possibility of extensive surveillance and control in the name of efficiency and personalisation, as a key to student engagement; the results are the opposite of these promises. Education at all levels is in a state of crisis, from an identity crisis of universities and schooling in general, to the rapid decline of prestige and social respect for education and teachers, to extreme reactions against schools and universities. A report released by the American Psychological Association, based on a survey of over 15,000 teachers and other school staff across America, found that
Surveillance has increased in schools across the world, with results that now speak more about the impact of naive, uninformed, and toxically positive ideas about education. Violence against, and distrust of, institutions of education is another indicator that we are following the wrong model.
Surveillance and monitoring of individual performance, along with the adoption of the bizarre concept of "key performance indicators" to measure faculty "outputs," are much more suitable for a car engine than for any well-intentioned attempt to organise education. The language in higher education governance
and reporting reveals an industrialised vision for education, reducing the aims
of education to the lowest common denominators. This language creates and
supports a culture of audit, hierarchical control, and distrust in higher education,
normalising surveillance and control. The normalisation of surveillance, indoctrination, and numbification of students and teachers is noted and researched in media and academic studies. We can take just one example provided by Forbes in 2019, where we find the case of ClassDojo, an edtech product used by schools: "ClassDojo, one of the most ubiquitous tools used to manage classrooms and students, not only indoctrinates students into a surveillance culture, but is also susceptible to security breaches that put student data at risk" (Baron, 2019, n. 29). In
the normalisation of surveillance we often hear that it is a necessary compromise, as surveillance is required for the beneficial solutions made available to those who are watched. This is exactly what the Stasi, the famously cruel secret police in the times of East Germany (GDR), and other dictatorial regimes argued: that surveillance is in the interest of those who are surveilled, for the good functioning of their society. Research reveals that we have extensive invasions of privacy favoured by the efficiency of what is called "the corporate cultivation of digital resignation," a complex strategy designed to suppress resistance against surveillance and numb those who are permanently watched. A study published in 2019 finds four main approaches used by corporations to normalise and foster digital resignation, which consist of four "interrelated rhetorical tactics":
The British academic ended his analysis with a note that underlined the major difference between the Soviet context and our current neoliberal arrangements, where academics have the freedom to openly critique the system, organise, and fight for their working conditions. The irony is that, as a consequence of the publication of that analysis, Brandist was called in by his university's department of human resources to be warned; he details in a subsequent article published by Times Higher Education that he
noted the important difference – that academics in the UK, unlike those in the Soviet Union of the 1930s, do not face routine censorship and repres-
sion for voicing critical views. But a few days later, I received a formal letter
from human resources suggesting that I should desist from publishing such
material and instead raise concerns internally.
(Brandist, 2016, n. 33)
Notes
1. Kim, T. (2018, April 11). Goldman Sachs asks in biotech research report: ‘Is curing
patients a sustainable business model?' CNBC. www.cnbc.com/2018/04/11/goldman-
asks-is-curing-patients-a-sustainable-business-model.html
2. Horan, C. (2021). Insurance era: Risk, governance, and the privatization of security in postwar
America. The University of Chicago Press.
3. Ramo, S. (1957). A new technique of education. Engineering and Science, 21, 17–22.
4. such as “product specialists.”
5. Fletcher, D. J., & Rockway, M. (1986). Computer-based training in the military. In J.
A. Ellis (Ed.), Military contributions to instructional technology. Praeger.
6. OECD. (2015). Students, computers and learning: Making the connection. OECD Publishing.
7. Bürgi, R. (2016). The free world and the cult of expertise: The rise of OECD’s edu-
cationalizing technocracy. International Journal for the Historiography of Education, 6(2),
159–175.
8. PISA is the OECD’s Programme for International Student Assessment
9. TIMSS and PIRLS are OECD’s international assessments of outcomes and trends in
student achievement in mathematics, science, and reading.
10. Potter, J. (2008). Entrepreneurship and higher education. OECD Publishing.
11. Roberts, M. E. (2018). Censored: Distraction and diversion inside China's great firewall.
Princeton University Press.
12. Enyedy, N. (2014). Personalized instruction: New interest, old rhetoric, limited results, and the
need for a new direction for computer-mediated learning. National Education Policy Center.
http://nepc.colorado.edu/publication/personalized-instruction.
13. OECD. (2021). OECD digital education outlook 2021: Pushing the frontiers with artificial intel-
ligence, blockchain and robots. OECD Publishing. https://doi.org/10.1787/589b283f-en.
14. Nagle, T., Redman, T. C., & Sammon, D. (2017, September 11). Only 3% of
companies’ data meets basic quality standards. Harvard Business Review. https://hbr.
org/2017/09/only-3-of-companies-data-meets-basic-quality-standards
15. Sivabalan, S. (2019, August 13). Argentina's massive sell-off had a 0.006% chance of happen-
ing. www.bloomberg.com/news/articles/2019-08-13/argentina-rout-was-4-sigma-
event-beckoning-the-bravest-of-brave
16. Millar, K. (2020, June 24). HAI Fellow Kate Vredenburgh: The right to an explanation.
Human-Centered Artificial Intelligence, Stanford University. https://hai.stanford.
edu/news/hai-fellow-kate-vredenburgh-right-explanation
17. Johnson, K. (2022, March 7). How wrongful arrests based on AI derailed 3 men’s lives.
www.wired.com/story/wrongful-arrests-ai-derailed-3-mens-lives/
18. Burke, G., Mendoza, M., Linderman, J., & Tarm, M. (2022, March 6). How AI-
powered tech landed man in jail with scant evidence. The Associated Press. https://apnews.
com/article/artificial-intelligence-algorithm-technology-police-crime-7e3345485
aa668c97606d4b54f9b6220
19. Hankerson, D. M. (2021, September 21). CDT original research examines privacy implica-
tions of school-issued devices and student activity monitoring software. https://cdt.org/insights/
cdt-original-research-examines-privacy-implications-of-school-issued-devices-and-
student-activity-monitoring-software/
20. Stoycheff, E. (2016). Under surveillance: Examining Facebook's spiral of silence effects
in the wake of NSA internet monitoring. Journalism & Mass Communication Quarterly,
93(2), 296–311. https://doi.org/10.1177/1077699016630255
21. Noelle-Neumann, E. (1974). The spiral of silence: A theory of public opinion. Journal
of Communication, 24, 43–51. https://doi.org/10.1111/j.1460-2466.1974.tb00367.x
22. Pasquale, F. (2015). The black box society: The secret algorithms that control money and infor-
mation. Harvard University Press.
23. Eticas Foundation. (2022). The external audit of the VioGén system. https://eticasfoundation.org/
wp-content/uploads/2022/03/ETICAS-FND-The-External-Audit-of-the-VioGen-
System.pdf
24. Dixon, P. (2020). Without consent: An analysis of student directory information practices in
U.S. schools, and impacts on privacy. World Privacy Forum. www.worldprivacyforum.
org/wp-content/uploads/2020/04/ferpa/without_consent_2020.pdf
25. Golbeck, J. (2014, September). All eyes on you. Psychology Today. www.psychologyto
day.com/us/articles/201409/all-eyes-you
26. Bezos, J. (2021). 2020 Letter to shareholders. www.aboutamazon.com/news/company-
news/2020-letter-to-shareholders
27. Chen, S. (2018, April 29). “Forget the Facebook leak”: China is mining data
directly from workers’ brains on an industrial scale. South China Morning Post. www.
scmp.com/news/china/society/article/2143899/forget-facebook-leak-china-
mining-data-directly-workers-brains
28. McMahon, S. D., Anderman, E. M., Astor, R. A., Espelage, D. L., Martinez, A.,
Reddy, L. A., & Worrell, F. C. (2022). Violence against educators and school personnel:
Crisis during COVID (Technical Report). American Psychological Association.
29. Baron, J. (29 January 2019). Classroom technology is indoctrinating students into
a culture of surveillance. Forbes. www.forbes.com/sites/jessicabaron/2019/01/29/
classroom-technology-is-indoctrinating-students-into-a-culture-of-surveillance/
30. Draper, N. A., & Turow, J. (2019). The corporate cultivation of digital resignation.
New Media & Society, 21(8), 1824–1839. https://doi.org/10.1177/1461444819833331
31. Hankerson, D. L., Venzke, C., Laird, E., Grant-Chapman, H., & Thakur, D. (2021).
Online and observed: Student privacy implications of school-issued devices and student activity
monitoring software. Center for Democracy & Technology. https://cdt.org/wp-content/
uploads/2021/09/Online-and-Observed-Student-Privacy-Implications-of-School-
Issued-Devices-and-Student-Activity-Monitoring-Software.pdf
32. Brandist, C. (2014, May 29). A very Stalinist management model. Times Higher Edu-
cation. www.timeshighereducation.com/comment/opinion/a-very-stalinist-manage
ment-model/2013616.article
33. Brandist, C. (2016, May 5). The risks of Soviet-style managerialism in UK universi-
ties. Times Higher Education. www.timeshighereducation.com/comment/the-risks-of-
soviet-style-managerialism-in-united-kingdom-universities
34. Lorenz, C. (2012). If you’re so smart, why are you under surveillance? Universities,
neoliberalism, and new public management. Critical Inquiry, 38(3), 599–629. https://
doi.org/10.1086/664553
6
BEAUTY AND THE LOVE FOR
LEARNING
We enter the third decade of this new century facing, across the globe, multiple interconnected crises: a devastating war started by Russia against Ukraine and against the values of humanism, freedom of choice, and human dignity. We have another genocide in the heart of Europe and, for months, the world has looked at new atrocities with the sense that we are failing to stop them. There are various reasons that can be identified for the impotence of the world to stop the carnage and obliteration of civilians in Ukraine; probably the most important is the arrogance of thinking that history ended with the universal acceptance of liberal democracy and the Western model, as Francis Fukuyama claimed in the early 1990s. Politicians, public figures, philosophers, and commentators warn that this
conflict can lead to a Third World War (n. 1). Another dangerous crisis for the future of humanity is the climate crisis, which is accelerating, already having a visible and direct impact on our health and quality of life, and on the political and social stability of the world. The year 2022 is marked by the publication, for the first time since 2014, of an international report on climate change by the Intergovernmental
Panel on Climate Change (IPCC). On the basis of 34,000 studies, the IPCC
report reveals “widespread, pervasive impacts to ecosystems, people, settlements,
and infrastructure,” noting that “climate change has caused substantial damages,
and increasingly irreversible losses, in terrestrial, freshwater and coastal and open
ocean marine ecosystems. The extent and magnitude of climate change impacts
are larger than estimated in previous assessments" (IPCC, 2022, p. 11, n. 2). The
report indicates a clear state of emergency, with irreparable consequences for our common future. As expected, the world paid attention for a week and, weeks after, the usual competition to extract even more fossil fuels went ahead, accelerated. Billions of people are highly vulnerable to the impact of climate change, over half the world's population will be affected in 2022 by severe water shortages,
and extreme climate events have become too frequent to allow adaptation and correction. In mid-March 2022 both of Earth's poles recorded alarmingly abnormal
temperatures: Antarctica reached 40°C higher than normal average temperatures
for that time of the year, and the North Pole recorded 30°C above the average
temperatures. These astonishing developments in the world’s climate should be
extremely alarming for all politicians and decision makers across the world. In reality, it was all ignored. Education is largely responsible for this severe failure of responsibility and wisdom.
These interconnected and compounding crises are directly linked to topics
addressed in this book. The climate crisis proves that our systems are failing,
beyond the usual dichotomies such as the West versus the Global South and America versus Russia. The reality is that the entire world adopted a unique model of cowboy capitalism, of extractive practices with sociopathic features, informed by pseudo-thinkers such as Ayn Rand. This model was informed and shaped by the American dream, where education is trumped by wealth, wisdom is ridiculed, and a good ranking in the Forbes list of the abysmally rich shows all that really matters in this world. Russian oligarchs found a familiar and permeable system, akin to their own kleptocracy, in the United Kingdom and Australia, in the
United States and in Cyprus. A society where the rich set the rules and the poor
are exploited to the most extreme limits is not unique for one country, or even
for a continent; we have this model in various versions and local favours across
the world. This is why a character like Donald Trump is so similar to Russian
oligarchs, their manners, taste, and ostentatious distaste for culture and education.
It is a common platform of symbols, values, written and implicit messages, in a
truly globalised world; education was for decades under attack and we see now
the fnal blows, which are coming now from within. On the basis of these com-
mon cultural codes, on a “dream” that was relentlessly promoted and adapted to
new interests, we built economic, environmental, political, and social systems,
which are now crumbling around us.
It was impossible for education to remain unharmed; universities of the 21st century are in a severe crisis of identity, mission, and meaning. Arthur Levine and Scott J. Van Pelt identify in their book, The Great Upheaval, the main direction of these changes, with themes and solutions that are commonly found within most universities.3 According to these misguided ideas, higher education is no different from the music industry, or newspapers – skidding fast over the fact that education is an infinitely more complex endeavour than printing a newspaper or selling records. Typical of the rhetoric of the last decades in this space, Levine and Van Pelt detail in their book a neoliberal future for the university, where we will have "the rise of anytime, anyplace, consumer-driven content and source agnostic, unbundled, personalized education paid for by subscription" (Levine & Van Pelt, 2021, p. 21).4 This summarises well a good part of the literature on higher education and possible solutions for its future. It is, in other words, the world of a Netflix for education, an Amazon-like platform taking the name and
the pretence of Academia, with faculty playing the role of Amazon drivers or warehouse workers. Education is reduced to the work of supervising various blocks of training and assessments and the simple oversight of edtech software and learning management platforms. The two authors merely repeat here a boring mantra in a book devoted, ironically, to innovation. According to them, students
are seeking the same kind of relationship with their colleges that they have
with their banks, supermarkets, and internet providers. They ask the same
four things of each: (1) convenience, (2) service, (3) a quality product, and
(4) low cost.
The proponents of this type of argument ignore the possibility that this is simply what students see as possible, choosing only from what universities already provide. It shouldn't be so difficult for researchers in this field to start from the fact that one cannot make choices that are not even imagined, choices detached from what look like the only realistic alternatives. In other words, students indicate which parts of what is already familiar, offered by their university, they prefer. There is no option to ask if they prefer the model of a university of the 1960s, when it was free of charge, or a university of the 21st century where the vast majority of students graduate with soul-crushing debt. Most probably, students' choices would look very different if researchers were less inclined to simply justify their ideological positions with misleading questions and flawed research. The argument, repeated and restated in various forms in writings on "innovation" that lack any originality, serves the agenda set by the WTO a few decades ago. Students are customers, and all – including universities and their academics – are part of the market.
The new industry of education, as imagined and set by GATS and the WTO, is mostly organised on myths and half-truths, on emotional advertisements and low-standard science. AI is adopted in a system that is vulnerable to hype and corporate-biased science, distorted by the "funding effect," where results are favourable to the source of sponsorship (Lundh et al., 2017).5 The utilitarian view of education dominating American thinking is radicalised, and the university is reduced to a training institution with transactional relationships, a type of supermarket where students can get packages of information that are useful to find jobs. There are too many examples to illustrate this impoverishment of education, and listing them would make not only a very long but also a depressing reading. Taking just one example, we can look at what Patrick McCrory, the Republican Governor of North Carolina, said about funding for higher education: "It's not based on butts in seats but on how many of those butts can get jobs. . . . I don't want to subsidize [what is not] going to get someone a job." In less crude forms we find this argument dominating the aims of universities across the world: education is reduced to vocational training, and here is where AI is viewed as the new panacea. If universities are just places where students get
economy above anything else. Boris Johnson, the Prime Minister of the United Kingdom, declared on the BBC: "I've given you the most important metric, which is: never mind life expectancy, never mind, you know, cancer outcomes – look at wage growth."8
We have the scientific solutions, and a very concerning number of citizens who are neither interested in, nor able to understand the implications of, how the findings of science can build a collective solution to our problems. Higher education is not pressured to find solutions to advance technology; it should be focused on thinking about its own aims, on contributing actively to a civil society, on elevating a wiser citizenry, and on contributing to the search for sustainable solutions for graduates' lives and our common futures.
It is naive to think that universities fell into their crisis of identity and intellect only after the WTO pushed higher education to work exclusively for markets, with extraordinarily damaging managerial models. These developments only accelerated the cultivated mediocrity within universities, and the dissonance between seeking profits and market positions and the need to stay relevant for education, with a positive role for society. The trade agreements on education represent just a decisive push in the wrong direction, a fatal blow with effects that become visible only now. In the late 1990s universities already had too many problems left unsolved. In 1949, Susan Sontag wrote in her journal about the possibility of being "taken" by academia:
students may feel less ripped off by essay mills than by universities. Prospectuses promise a collegial atmosphere, an unforgettable "student experience" and unrivalled preparation for a rewarding career. In reality, university managers are running a no-frills, bums-on-seats business with costs pared to the bone and tight control imposed on academics by performance measures. Student satisfaction is purchased with lax academic standards.
(Macdonald, 2017)11
It’s a bit like airport security – he noted on the need to use plagiarism
detection software – it’s a massive hassle to the vast number of people who
have nothing to do with terrorism, but they accept that it is something they have to do for the integrity of the system.
(Hare, 2016)12
This is not just an unfortunate metaphor but stands as a natural reflection within a system that has normalised fear and surveillance, oppressive structures of power, and neoliberal nonsense. That university, where students are imagined as customers and potential thieves, or "terrorists," rule breakers who secure the financial flows vital for good market positions and top rankings, cannot claim with credibility that learning, thinking, and discovery are what matter most. Treating anyone like a potential terrorist, or thief, is not conducive to nurturing a climate of mutual trust and collaboration. This is a key to many failures of universities in the post-WTO decades: dissolving the academic ethos of the campus in the cynical calculations of the market, where customers use tricks to get a better price for what they want and sellers take precautions to catch thieves, leads to a mercantile, cynical, and ugly reality. This is part of an extraordinarily impoverished view of education, where humanity is reduced to transactional relations, and love and imagination are derided or ignored and stand separated from the university's concerns. In this managerialised and technologised paradigm, education is dehumanised and reduced to functions relevant for internal relations of power.
What is surprising is how mindlessly the claimed interest in what students learn is applied in the case of plagiarism. There is a profitable industry of software solutions for what is called "plagiarism detection," which is based on open distrust of the so-called customers of universities, who are now also called "partners," or "producers," in a clumsy effort to make it sound less lucrative. The vast industry of "plagiarism detection" or "plagiarism deterrence" is just an implicit admission of a fundamental failure of teaching and learning, of education in universities. These software solutions, such as Turnitin, Cadmus, or SafeAssign, promise to find "cheats," but are reduced to text-matching capabilities that are often inferior to a Google search. In other words, only students so lazy and negligent that they just cut and paste texts are identified by the software. Other software is focused on surveillance, invading students' private lives and measuring behaviour and environments based on very troubling assumptions, leading to many erroneous results and intense stress for students who were dropped by lecturers and entire universities.
Student cheating and the software used to hinder and identify plagiarism deserve separate chapters devoted entirely to the symbolism attached to them, and to the profoundly corrupt practice of using students' work, without explicit agreement, to create massive databases that enrich external corporate entities. There are important ethical and educational aspects related to an industry that is fundamentally failing to protect students from external surveillance, and many other important aspects related to the use of edtech without students' knowledge
of, and control over, the data collected and its possible misuses. There is no doubt that these issues will become prominent in the future for any institution of education; in this chapter we will limit the analysis to the impact of AI on academic integrity and plagiarism, and to the symbolism associated with current solutions used across higher education. For example, it is surprising to see that Cadmus, a software solution that informs potential users on its website that "Cadmus takes an educative approach to academic integrity," takes the name of the Greek mythological slayer of monsters. It is not difficult to see that this software is a "slayer" of plagiarism, which should leave open an educational question: is the student who plagiarises a monster, or creating one? Turnitin and SafeAssign are also open to different interpretations, but none can claim that their symbolism starts from a position of mutual trust and love for learning.
Developments in AI are already changing the range of possibilities for plagiarism and the scrutiny of academic integrity. AI-based writing assistants, free or widely affordable, already offer a wide range of possibilities to paraphrase, rephrase, and generate original texts in different languages, eliminating the possibility of being identified by the software currently used by universities. Of course, in the near future we will have AI systems used to identify AI-generated texts, in a meaningless race to cheat and catch, to plagiarise and sanction breaches of academic integrity rules. These developments should make visible a simple and self-evident fact about plagiarism: there is always a possibility to beat the system. This possibility is much more appealing if the system is seen as meaningless or absurd. Stating the obvious, we have to note again that the use of fear and threats to make students learn and produce assignments is a wrong approach for an educational project. Technological advancements and their applications in everyday life are associated in human evolution with progress and with new possibilities to solve important problems. There is a natural tendency to assume that technology brings solutions even when we need more thinking and infinitely more complex approaches. Heidegger warned us that technology helps only as long as we remain alert to the need to keep meditative thinking alive. There is a proper way of interacting with technology, one that not only preserves our humanity but also prepares us to see the risks associated with it. This is why "everything depends on our manipulating technology in the proper manner as a means" (Heidegger, 1977, p. 5).13 Technology is associated with our evolution through its power of helping humans, and the human capacity to create and use technology is the key to our progress and dominance on Earth. We survived and thrived thanks to our ability to invent and use technology. What we tend to ignore is that we had progress and a sustainable evolution only as long as we used technology as a tool serving our humanity. There is an important message in the ancient Greek myth of Icarus, who built himself waxen wings to fly, but soared too high and too close to the Sun; the heat melted his wonderful technology and he plunged to his death. The ideology of Silicon Valley is based on an opposite view, one that places humans in an unnatural, asocial, and culturally and aesthetically indifferent position,
where technology turns one into a god. In the Whole Earth Catalog, the manifesto of Silicon Valley, which was labelled by Steve Jobs as "one of the bibles of my generation," the relationship of humans with technology is clearly settled:
We are as gods and might as well get used to it. So far, remotely done power and glory – as via government, big business, formal education, church – has succeeded to the point where gross defects obscure actual gains. In response to this dilemma and to these gains a realm of intimate, personal power is developing – power of the individual to conduct his own education, find his own inspiration, shape his own environment, and share his adventure with whoever is interested. Tools that aid this process are sought and promoted by the WHOLE EARTH CATALOG.
(Brand, 1968, p. 3)14
AI is now the label attached to technology that promises, once again, to give us "the power and glory" and make individuals "gods" with the power to create their own education, environments, adventures, and knowledge. This is a dangerous illusion. Technology is not viewed just as a set of tools, but as an aim. This view reveals the importance of the first part of AI, the adjective "artificial." What is the significance of the word "artificial" in AI? Is it even a relevant question to ask?
The role of language is often minimised, so it may be useful here to restate that we exist and live in the language available to us. It shapes our understanding of the world, our emotions, and our way of being. This is why it is not a healthy position to assume that one word is irrelevant or marginal to understanding technologies or anything else. "Artificial" is an intriguing choice, and this becomes clearer if we try to create an alternative to the AI formula; we can imagine that in the late 1950s it was possible to name this new field "computing intelligence" or "cyber intelligence." We have already seen how important it is to understand all the implications of "intelligence" and the particular history of this term, and how it influences the ideological positions taken by Silicon Valley these days. The adjective "artificial" is synonymous with words such as "synthetic," "fake," "false," "imitation," "simulated," "manufactured," "unnatural," and "fabricated." In a different sense, "artificial" describes emotions as "feigned," "false," "pretended," "hollow," or "insincere." It is a term rarely used with a positive connotation, which may explain the reticence of the group of thinkers and engineers to use the formula of AI in the years that followed the workshop proposed in 1955 by John McCarthy. There are many ways to read the birth and the evolution of AI under its current label, and most probably some are better than the one chosen for this book, which is that AI contains an unintentional warning for those who use it. It is a powerful form of intelligence, which has the potential to grow exponentially in the future, but it leads to an "artificial" world. It is a disembodied world, immensely powerful on just some elements of
Researchers explored these limits in line with the findings of Stephen Smale, the mathematician who proposed a list of 18 unsolved mathematical problems for the 21st century, where the 18th problem relates to the limits of AI. One of the authors, Dr. Matthew Colbrook, explained:
The paradox first identified by Turing and Gödel has now been brought forward into the world of AI by Smale and others . . . There are fundamental limits inherent in mathematics and, similarly, AI algorithms can't exist for certain problems.15
and AI reveal that its algorithms are most susceptible to errors and discrimination when someone or something steps out of the average. We cannot say that we want to cultivate independent thinking, and strong and creative identities, while ignoring at the same time that our tools are directly opposed to that, and remaining indifferent to the fact that students navigate a world organised and restricted by edtech. Education is the field where we should actively make sure that an individual can suddenly find the resources and the environment to bloom, to contradict a history of average results with extraordinary new achievements. When that history becomes a label attached to a complex and often unpredictable mind, the result can be limitation and students' disenchantment with learning. Historical data is not only the kind of data mostly used by algorithms; it is also identified by AI engineers as a main source of latent algorithmic bias. Bias and algorithmic forms of compartmentalisation invite mediocrity and apathy, or can give rise to protest and revolt. In a world of education where AI pigeonholes individuals based on their historical data, the majority of humanity's most significant figures in the history of science and culture would be stuck under a limiting label, which also selects only the type of content suitable for a mediocre or poor student. This would be a world where a student like Einstein would be served remedial content assumed to be suitable for such a student.
The current approach promoted by decision makers in education at local, national, and international levels is informed by the neoliberal models of managerialism and technocratic solutionism. This reduces the educational project of universities to quantitative and instrumental outcomes. It is important to "turn upside down" the optimism of edtech and the managerialised approach to teaching and learning, and to see the current limits and risks, areas of discontent, and possibilities for an optimal use of technological advancements. Blind trust in and enthusiasm for AI can cost us our collective future. It is time to approach with healthy scepticism financial consultants' claims of expertise in education, and corporate giants with significant interests in the edtech market offering the next optimal solutions for universities. Importantly, the current proposition in higher education leaves aside the most essential parts of humanity: love, beauty, imagination, passion, and inspiration are abandoned in a process that turns ugly and commercial. It is limited to measurements, tests with extrinsic value for learning, grades and rankings, credentialisation, and the accreditation of empty products of instruction. That is indeed a hollow, simulated, unnatural, and artificial education.
We tend to forget that the relevance of education is not how it looks in reports, or what we decide to certify in a space where credentials are given in exchange for schooling fees, but what students learn and apply in their lives, at work, in their families, and in society. A book based on extensive research published in 2013 makes this point:
learn is too darn flat. Children learn too little each year, fall behind, and leave school unprepared . . . Schoolin' just ain't learnin'.
(Pritchett, 2013, p. 14)17
You have betrayed us. I have worked like a brute my whole life because, without school, I had no skills other than those of a donkey. But you told us that if I sent my son to school, his life would be different from mine. For five years I have kept him from the fields and work and sent him to your school. Only now I find out that he is thirteen years old and doesn't know anything. His life won't be different. He will labor like a brute, just like me.
(Pritchett, 2013, p. 2)
Good grades that serve only to present statistics and "data-informed solutions" require solid interrogation; AI gives bad datasets immense power, and this requires constant critique and interrogation, as well as a radical change in data collection and evaluation and in the mechanisms designed to identify errors and bias. Most of all, it requires the intellectual openness to realise that sometimes data cannot capture the entire picture, and the most significant parts can be missed by quantitative measures.
Education cannot remain uninterested – as it is now – in the fact that our systems are crumbling and that we already live in and contemplate dystopian disasters and realities. The time to build a new and more realistic, human project for education is running out. We can start from the fact that genuine education, the type of education that makes an impact on students' lives, that can bring information and wisdom, that nurtures responsibility and civility, is intertwined with narratives and dialogues, with the birth of a love for learning, based on mutual trust. If education ignores how humans make sense of information and life, we enter the whirlpool of self-comforting illusions about what students really learn,
and about what academic integrity really is, including the integrity of all involved in education, not just the current criminalising view of students and how they can be better sanctioned. For this there are some basic facts that contribute to a more solid foundation of learning and teaching in higher education, using the advancements of AI and the emergence of new solutions. It is important to reconsider the fact that humans make sense of their lives according to different temporal dimensions, which are determined by cultural variables. Lera Boroditsky is one of the most prominent academics exploring how languages and cultures construct our understanding of time, or how differently we understand time and spatiality in cultures shaped by different languages (Boroditsky, 2011).19 It is also crucial to understand that humanity is shaped by an aesthetic and emotional dimension before a cognitive construct is defined. This is why what we can call "the eros of learning" is vital for a realistic and positive view of education.
AI, and edtech in general, starts from a disembodied, decontextualised, and atemporal view of how students learn. But the most concerning part is that the eros of learning is the forgotten ingredient of academic endeavours. The love for learning arises most often from a unique mix of tremendous effort, discomfort, frustrations and discoveries, new perspectives, and wider understandings. Learning by heart, not as memorisation, but as a deep love for new ideas, knowledge, and the epistemological spaces opened through learning and imagination, should be reconsidered in educational projects in the technological era. This is the part ignored in the process of industrialisation, commercialisation, and trivialisation of higher education. As we cannot simplistically measure imagination, or test the love for learning, or the beautiful nature of an educational experience, or how we reflect on the ideas of goodness or civility, or how much we truly nurture imagination through learning experiences, educators and policy makers are led to simplify and trivialise education, ignoring the human dimension of learning.
This trivialisation makes it possible to have illiterate university graduates. There are voices claiming that these concerns are not based on realities, that universities work better than ever before by "selling" their "product." We are told that management procedures optimise instruction and that new credentials are flawlessly aligned with the needs of the "market" and "employers." It is almost convincing to listen to these opinions, but reality speaks in strident tones about the collective failures of higher education. We can look, for example, at a medical doctor, Dr. Sherri Tenpenny, who was invited to give testimony at the Ohio House Health Committee meeting in June 2021 and said that metal objects are sticking to the bodies of vaccinated people. At the same time, a US Congressman (Rep. Louie Gohmert, R-Texas) was asking in the US Congress whether there was anything the U.S. Forest Service could do "to change the course of the moon's orbit or the Earth's orbit around the Sun" in order to combat climate change (Gregorian, 2021).20 How was it possible for them to pass their exams in university? What was measured in Gohmert's education to make it possible for him to receive a Juris Doctor degree?
world, where a graduate diploma does not guarantee the level of literacy required to write a postcard properly. This is a direct effect of a general refusal to look at and openly admit significant failures and abnormalities. Taking a blindly positive approach to what is happening in universities will not help anyone, and this obvious refusal of intellectual honesty rapidly erodes the pillars of Academia.
These failures accumulate and lead to an accelerated erosion of authority in education, a collapse of trust in what institutions of education certify and create. The authority of educated people, another lost dimension, was sourced in the Latin auctoritas, the attribute associated with the wisdom of the elders, the keepers of tradition, knowledge, wisdom, and virtue. That type of authority is distinct from power, which in the Roman world was expressed by potestas. "Auctoritas" is based on wisdom, and this power went beyond legal or institutional rights. Socrates and Plato had knowledge and wisdom, and this is how the first Academy in the world was created. Replacing auctoritas and wisdom with the "market" left education blocked in hubris, with contradictory aims and demands, with a profound loss of identity and a self-imposed mediocrity. It is a model that is violently hostile to a vibrant intellectual life. The work of teachers is undermined by the current governance models and by the cultural arrangements inherently promoted with the neoliberal models. Authority is conferred on the market and its successful players: capitalists, the rich members of the economy, people accumulating wealth. In this world teachers are perceived as individuals poorly integrated into the market, people who are unable to do something better with their lives and earn a decent wage. At the end of 2021, a grotesque show was briefly noted in the avalanche of stories presented by the media: teachers in the US state of South Dakota had to fight against each other, in front of crowds gathered to watch a hockey game, to scoop up as many dollar bills as they could, so they could pay for school supplies.22 In fact, research shows that being a teacher is not an appealing career; a report published in 2006 reveals that "the United States is facing nearly 200,000 teacher vacancies a year at a cost to the nation of $4.9 billion annually" (Levine, 2006, p. 11).23 In the following years this situation became even more serious. In 2019, a report published by the Economic Policy Institute found that the "teacher shortage is real, large and growing, and worse than we thought. When indicators of teacher quality (certification, relevant training, experience, etc.) are taken into account, the shortage is even more acute than currently estimated."24
AI presents the possibility of automation, which is especially appealing for institutions interested in maximising profits and balancing their budgets. Research shows that automation is associated with the tendency to favour business owners over wage earners.25 In other words, we can expect that AI will increase the tendency to reduce the number of academics highly specialised in their fields and replace them with automated solutions. This will only accelerate the current trend of precarious employment arrangements for faculty. The dialectical structure of the system is shaped by markets and profits, creating a crisis of identity, ideas, and solutions for education at all levels across the world. Ironically, AI and edtech
prove not only incapable of suggesting and building effective solutions to the crisis, but actually accelerate it. Since Aristotle – and probably before him – technology has naturally been associated with the potential of human emancipation and with solutions for our ethical problems. Aristotle noted that
The fundamental problem of this view is that it separates education from our human condition and nature. It is a decontextualised, narrow view that reduces the human nature of learning to technics, leading to alienation. It can create multiple aberrations.
In the dialogue Theages, Plato presents Socrates claiming that he knows nothing, except one subject of learning: "the things of Eros." This places Eros at the core of Socratic methods of teaching and learning. We have here a complex key left by Plato, placed as the central area of Socrates' extraordinary expertise in teaching; for Plato's master, Eros is the key to learning. Socrates is quoted in Phaedrus as saying that Eros is "a certain desire," and later in the same dialogue he notes that Eros is related to "the nature of beauty," which opens another serious topic ignored by education, that of "beauty." A reading of Plato's dialogues, and a serious reflection on their meanings, reveals that there is no algorithm to create genuine Eros, and that its artificial surrogates lack power and depth. In-depth learning, the kind of learning experience that opens the desire to learn at all moments of one's life, is the part related to the Eros of learning, to the passion and human desire to access new mysteries, to understand and touch inaccessible spaces. Love and beauty are perfectly explained much later by John Keats in his poem "Ode on a Grecian Urn," where he writes:
Beauty is truth, truth beauty, – that is all
Ye know on earth, and all ye need to know.
Somehow, we find new ways to forget that all we need to know for our humanity is linked to truth and beauty, to the love of beauty. There is much more in learning and teaching than a mechanical device that can optimally facilitate passing information and knowledge to multiple recipients called students (or "customers," or "producers" – a clumsy attempt by some universities to get out of the market paradigm, ignoring the fact that producers are fundamentally a basis for the extractive practices of capital). The Eros of learning is related to the desire to learn, to do what Socrates was doing thanks to his expertise in Eros: caring for and nurturing human souls, opening pathways to wisdom.
Edtech builds its narratives on the idea that the mechanics of teaching and learning are enhanced by engineering solutions and that the aims of education can be optimally achieved in this co-dependent dynamic. It obviously leaves out human passion, our desire for love and mystery, our fundamental corporeality, and our need for embodied experiences. The crude simplification of the effort of nurturing humanity into a process of manipulating information, delivering it, and testing its mastery serves neither the interests of educators nor the long-term interests of edtech companies. Designing instruction in schools and universities as a process disconnected from the Eros of learning – from love, inspiration, beauty, passion, happiness and friendship, mystery and hope – builds an alienated life, an impaired humanity.
There is nothing really new in finding that education has lost its meanings, identity, and aims. Giambattista Vico, the remarkable philosopher of the Italian Enlightenment, noted at the end of the 17th century that the meaning of education is recurrently forgotten, as the focus naturally shifts towards specialisation and technical aspects. We have the responsibility to pull ourselves out of the whirlpool of technicalities and technological progress and ask what the aim of education in the era of AI is. What does it mean to be an educated person? A graduate diploma, and other forms of credentials, cannot be an aim of education. This cannot answer the question of what it means to be an educated person. Acquiring knowledge does not equal an educated mind, as no amount of data is in itself close to wisdom. Vico suggested that education is vital for our progress and survival because good individuals create good communities and good societies. It is a view opposed to current paradigms of governance in education, as Vico's theory of education revolves around the idea of the common good, a concept eliminated in neoliberal arrangements. Vico's theory of education is interested in the effort to bring together information and emotions, mind and heart:
Individuals who complete their own human nature make good citizens. Without good citizens, there is no basis for a good society. Good citizens act for the common good. . . . Vico's conception of education is based on the art of rhetoric directed by a vision of the Good that improves and promotes the ethos of the individual as a member of the human community.
(Bayer, 2009, p. 23)28
This is another way to look at the Eros of learning, where Eros is similar to the ancient Greek perspective of life and vitality, but linked directly to the idea of the good and the common good.
Academia is fundamentally opposed to the idea of Eros for two main reasons. On the one hand, it is a concept linked to sexual desire and sexuality in general, which is a minefield for universities. On the other hand, universities have shifted their interest from the ideas of love, beauty, the Eros of learning, and the passion for teaching to concepts that are easily covered by direct measurements. It is a field of economic transactions and market mechanisms where reputation is expressed algorithmically, and academic life is reduced to quantitative measurements of all that can be measured: the number of students and the number of publications, the number of citations and the number of graduates, and so on. Here is the space where software and algorithms permeate all aspects of academic life, colonising spaces that are defining for what we understand when we say "human." In effect, talking about the Eros of learning and beauty in education, especially in higher education, requires courage. This is not part of universities' research agenda or part of the ideological preferences common across higher education. The marginalisation of beauty in education is not limited to its scarcity in educational design; it is a dimension avoided even in the arts. Howard Gardner observed that "[t]oday, particularly in the contemporary
West, the status of beauty in relation to the arts could scarcely be more different. Some observers eschew the term beauty altogether, while others use it in ways quite different than in the past" (Gardner, 2011).29
The Socratic Eros is inherently connected to beauty; again, not necessarily an external, physical beauty. It is the beauty of knowing and thinking, of the way to discover the Eros of learning and open real pathways to lifelong learning. A beautiful education is built on the Eros of learning, the inherent desire and love for learning and wisdom. Learning by heart is understood here as learning something that speaks to the heart; it is so meaningful for the student that it becomes part of the "heart," of memory and emotions. It is an educational project indifferent to tricks and hints for assessments and grades, one that takes the aesthetic nature of education as a foundation for teaching and learning. AI and other edtech applications can supplement and complete learning, but not replace education, if we choose to build it as a human project.
Notes
1. Herszenhorn, D. M. (2022, March 4). The fighting is in Ukraine, but risk of World War III is real. Politico. www.politico.eu/article/fight-ukraine-russia-world-war-risk-real/
2. IPCC. (2022). Summary for policymakers [H.-O. Pörtner, D. C. Roberts, E. S. Poloc-
zanska, K. Mintenbeck, M. Tignor, A. Alegría, M. Craig, S. Langsdorf, S. Löschke, V.
Möller, & A. Okem (Eds.)]. In H.-O. Pörtner, D. C. Roberts, M. Tignor, E. S. Poloc-
zanska, K. Mintenbeck, A. Alegría, M. Craig, S. Langsdorf, S. Löschke, V. Möller, A.
Okem, & B. Rama (Eds.), Climate change 2022: Impacts, adaptation, and vulnerability.
Contribution of working group II to the sixth assessment report of the intergovernmental panel
on climate change. Cambridge University Press.
3. It is remarkable to see how much universities in China or the USA, Russia or the UK, the EU or Latin America share the same views, aims, and inherent contradictions. There is a common quantitative measure for the number of published research outputs (quality is marginal), for the number of students and the money secured for profits, etc.
4. Levine, A., & Van Pelt, S. (2021). The great upheaval: Higher education’s past, present, and
uncertain future. Johns Hopkins University Press.
5. Lundh, A., Lexchin, J., Mintzes, B., Schroll, J. B., & Bero, L. (2017). Industry spon-
sorship and research outcome. The Cochrane Database of Systematic Reviews, 2(2),
MR000033. https://doi.org/10.1002/14651858.MR000033.pub3
6. Heidegger, M. (1977). The question concerning technology, and other essays. Harper &
Row.
7. Heidegger, M. (1969). Discourse on thinking. A translation of gelassenheit. Harper & Row.
8. Nicholson, K. (2021, October 12). Timing of Boris Johnson's holiday under fire after damning Covid report slams his handling of the pandemic. HuffPost. www.huffingtonpost.co.uk/entry/boris-johnson-holiday-covid-report_uk_61653846e4b0cc44c510372f
9. Sontag, S., & Rief, D. (2008). Reborn: Journals and notebooks, 1947–1963. Farrar,
Straus and Giroux.
10. Furedi, F. (2004, August 6). Plagiarism stems from a loss of scholarly ideals. Times Higher Education Supplement. www.timeshighereducation.com/features/plagiarism-stems-from-a-loss-of-scholarly-ideals/190541.article
11. Macdonald, S. (2017, May 25). It’s not essay mills that are doing the grinding. Times
Higher Education. www.timeshighereducation.com/opinion/its-not-essay-mills-that-
are-doing-the-grinding
12. Hare, J. (2016, April 13). University of Melbourne start-up Cadmus targets cheats. The Australian. www.theaustralian.com.au/higher-education/university-of-melbourne-startup-cadmus-targets-cheats/news-story/f5e2677aea4a90b54f5c5ee0e4d3eee7
13. Heidegger, M. (1977). The question concerning technology, and other essays. Garland
Publishing.
14. Brand, S. (1968, Fall). Purpose. In S. Brand (Ed.), Whole earth catalog. Portola Institute.
15. University of Cambridge. (2022, March 17). Mathematical paradoxes demonstrate the
limits of AI. ScienceDaily. Retrieved March 30, 2022, from www.sciencedaily.com/
releases/2022/03/220317120356.htm
16. Colbrook, M. J., Antun, V., & Hansen, A. C. (2022). The difficulty of computing stable and accurate neural networks: On the barriers of deep learning and Smale's 18th problem. Proceedings of the National Academy of Sciences, 119(12). https://doi.org/10.1073/pnas.2107151119
17. Pritchett, L. (2013). The rebirth of education: Schooling ain't learning. Center for Global Development.
18. World Bank. (2018). World development report 2018: Learning to realize education’s promise.
World Bank.
19. Boroditsky, L. (2011). How languages construct time. In S. Dehaene & E. Brannon
(Eds.), Space, time and number in the brain: Searching for the foundations of mathematical
thought (pp. 333–341). Elsevier Academic Press. https://doi.org/10.1016/B978-0-
12-385948-8.00020-7
20. Gregorian, D. (2021, June 10). Lunar new deal: GOP Rep. Gohmert suggests alter-
ing moon’s orbit to combat climate change. NBC News. www.nbcnews.com/politics/
congress/lunar-new-deal-gop-rep-gohmert-suggests-altering-moon-s-n1270219
21. President of Ireland (Media Library). (2016, April 7). Speech at the EUA annual confer-
ence. NUI Galway.
22. ABC News. (2021, December 15). United States teachers compete for cash for their classrooms and
critics liken it to Squid Game. www.abc.net.au/news/2021-12-14/south-dakota-dash-for-
cash-teachers-compete-money-squid-game/100699340
23. Levine, A. (2006). Educating school teachers. The Education Schools Project.
24. EPI. (2019). The teacher shortage is real, large and growing, and worse than we thought. Economic Policy Institute. https://files.epi.org/pdf/163651.pdf
25. Karabarbounis, L., & Neiman, B. (2014). Global decline of the labor share. The Quar-
terly Journal of Economics, 129(1), 61–103.
26. Aristotle & McKeon, R. (2001). The basic works of Aristotle. Modern Library.
27. Gray, R. (2022, April 2). Teachers in America were already facing collapse. COVID
only made it worse. BuzzFeed, Politics. www.buzzfeednews.com/article/rosiegray/
america-teaching-collapse-covid-education
28. Bayer, T. I. (2009). Vico’s pedagogy. New Vico Studies, 27, 39–56.
29. Gardner, H. (2011). Truth, beauty, and goodness reframed: Educating for the virtues in the twenty-first century. Basic Books.
SECTION III
The Future of Higher Education
This last section presents an in-depth analysis of the role imagination can play in education and explores the relationship between intelligence, imagination, and AI. Looking at the possible futures of education, this section demonstrates that the key challenges facing universities and open societies in this tumultuous start of the 21st century are not technological but political, educational, and cultural. Technological advancements, especially in the field of AI, open up new areas of knowledge, with possibilities not even explored today. In the context of extraordinary technological adoption and acceleration, we have a concerning rise across the world of authoritarian and fascist ideologies and movements, and a widening gap of socioeconomic and cultural segregation. The most plausible scenario for the future of education includes AI as an integral part of future developments and challenges for universities and education in general. This possibility requires some key principles for the adoption and use of AI in higher education, which are presented with the intention of assisting educators and students in a constructive and ethical integration of AI systems in the complex endeavour of casting and creating a meaningful education.
7
IMAGINATION AND EDUCATION
Nations (ASEAN) member countries in Laos, making again his favourite argument that that was the best time in human history. Specifically, Obama noted that
just because we have so much information from all around the world on
our televisions, on our computers, on our phones, it seems as if the world
is falling apart. . . . But the truth is that when you look at all the measures
of wellbeing in the world, if you had a choice of when to be born and you
didn’t know ahead of time who you were going to be – what nationality,
whether you were male or female, what religion – but you had said, when
in human history would be the best time to be born, the time would be
now. The world has never been healthier, it’s never been wealthier, it’s
never been better educated. It’s never been less violent, more tolerant than
it is today.1
The President of the United States at that time made the same point just a few months later, when he published, as guest editor of Wired magazine, an opinion piece entirely structured around the idea that "now" is the best time to be alive in human history. Not only was that present the best, but the future looked just as good:
We know how those young ASEAN leaders found their countries led by authoritarians such as Rodrigo Duterte in the Philippines, who publicly praised Hitler, noting that "Hitler massacred three million Jews. Now, there is three million drug addicts. I'd be happy to slaughter them."3 Duterte proved that his stated intentions were followed by real murders. If we take the example of Laos, we hardly find a reason at that time to embrace this feverish optimism. In the same year when the US President found that we were living the best possible times, Laos was far from being a free and democratic country, a fair society where young people can dream of a bright future. In 2019, Human Rights Watch noted that
The government of Laos controlled at that time television, radio, and publications, and all are still subject to governmental censorship. Some of Laos' dissidents were found "disemboweled and stuffed with concrete." Obama's inspiring discourse looks now, after we know how ASEAN countries shaped their future, far from the rule of law, democracy, and a civil society, and closer to Voltaire's caricatural character Dr. Pangloss, who says that "In this best of possible worlds . . . all is for the best."5 The region where the future was confidently predicted in such positive terms now includes extremely violent authoritarian regimes (e.g. Myanmar) and a fast decline of individual and collective freedoms.
The so-called best years to be alive in human history, announced as ideal times to prepare for a good future, proved to be far from the overly optimistic predictions, marking the opposite: authoritarian tendencies, extremist ideas, and movements gained prominence and strength. Donald Trump, the surprise candidate with a long record of anti-democratic and hateful rhetoric, was elected despite his history of racist attitudes, sexist remarks, and pro-authoritarian and anti-democratic views. Some analysts say that this background actually helped the atypical candidate become the 45th President of the United States. His administration was marked by extremes that stand definitely far from the techno-utopia presented in 2016 at the ASEAN meeting of young leaders. Freedom House summarised in a public report published in 2018 the evolution of democracy across the world like this:
Political rights and civil liberties around the world deteriorated to their lowest point in more than a decade in 2017, extending a period characterized by emboldened autocrats, beleaguered democracies, and the United States' withdrawal from its leadership role in the global struggle for human freedom.
(Abramowitz, 2018)6
The report also documented at that time the 12th consecutive year of decline in global freedom, and an accelerating decline of civil rights and freedom in the United States. In 2022, the Freedom House report documented the 16th consecutive year of decline in global freedom, and described a world where totalitarian ideologies and authoritarians are on the rise. The report states:
The current view on technology, especially edtech, aligns with what Henry Giroux identified as "neoliberal fascism," which revolves around the recurring tendency of technology to serve authoritarian impulses, integrating control and surveillance for antidemocratic projects. In his book The Terror of the Unforeseen, Henry Giroux opens an important discussion for education, noting that
we have been living the lie of neoliberalism and white nationalism for over forty years and because of the refusal to face up to that lie, the United States has slipped into the abyss of an updated American version of fascism of which Trump is both a symptom and endpoint.
(Giroux & Casablancas, 2019, p. 19)10
Other contemporary fascisms, such as the crude version represented by Russian fascism, also favour a particular view of technology, one that serves ideological positions, nationalisms, and irrational actions. Especially now, in the context of the rapid advancement of AI and the increasing interest of administrators in adopting edtech to a greater extent, it is important to look at the role and meanings associated with "technology" to determine the type of education
futures we can expect and want to build. It is naive and unrealistic to believe that the eugenic roots of AI and cybernetics are just a matter of the distant past, some bad dreams of scientists in the 19th and 20th centuries. We don't have to look too hard to see them in political approaches and discourses, in technological manipulations and formative projects.
In ancient Greece, techne was a concept intertwined with the possibility of teaching, assigned to arts and manual crafts; it also included sciences, such as mathematics and medicine. In Metaphysics, Aristotle defines techne as the ability to acquire knowledge of universal principles and causes. The current use of technology is divorced from philosophy in academic disciplines, and engineering stands symbolically opposed to the humanities, as a "practical," certain route to a successful career. There is a common trend – at least across the Anglosphere – of cutting budgets for the humanities and orienting funding towards STEM disciplines (science, technology, engineering, and mathematics). This is a recurring topic in public debates related to higher education, and it stands reflected in most reports that set funding priorities for universities and colleges. It is a futile and quixotic endeavour to try to change this opposition of STEM and the humanities, or even to repeat why it is a dangerous idea to favour funding for employability rather than secure a broader education, one that can create not only good employees and managers but also responsible citizens and independent thinkers, with a wider understanding of politics, work, technology, and economics. This debate was lost and decided at least since WTO agreements pushed higher education into the area of tradable commodities. As noted, any scrutiny of funding allocation shows the disproportionate preference for funding STEM fields across higher education. What is important to note is that AI brings to the fore not only religious admiration and inflated expectations, but also a disconnect between our humanity and a particular view of technology. Engineers of AI represent a new type of priesthood, with its own jargon and codes, and the same certainty about their own superiority that was assigned to Church clerics in the past.
Two academic researchers, Diego Gambetta and Steffen Hertog, explored for years a striking and surprising overrepresentation of engineers in extremist movements. Their findings are presented in a fascinating book titled "Engineers of Jihad: The Curious Connection between Violent Extremism and Education." Data analysed by the two researchers confirms that there is a significant overrepresentation of engineering graduates in extremist movements. The correlation was a serendipitous discovery: as a professor of social theory at the University of Oxford, Gambetta would assign students to investigate, with a scientific method, any random bit of trivia they selected as interesting. One group of students selected as a topic for their assignment the study of engineering graduates among extreme Islamist movements. Hertog was at that time a doctoral student at the same university as Gambetta and found this topic very intriguing. The first step of the research clearly found a frequency of engineers in jihadist movements far exceeding a normal statistical representation. When the research was extended
to include Russian fascist groups, white supremacist fascists in the United States, and other violent groups on the far right, Gambetta and Hertog again found widely disproportionate numbers of engineers. Data analysis also revealed that engineers were mostly absent from the political Left and its extreme movements. From here, the research is well nuanced and opens onto fascinating possibilities and warnings for universities and policy makers. It dispels myths, such as the deprivation thesis, where it is assumed that individuals join violent groups as a result of deprivation and feelings of injustice based on a professional status inferior to that deserved with a higher qualification. What the research found was that graduates in engineering were not only overrepresented in violent terrorist groups, but that most individuals came from affluent families and had personal possibilities for a successful and well-paid career. To explain these findings the authors present as a case study the story of a leader of Al-Qaeda who moved to America as a child and had, as a graduate in computer engineering, real opportunities for a good career in the United States. Obviously, his choice was to use his skills and intelligence for nefarious purposes. The most important aspect of these findings is that Gambetta and Hertog pose a question that is crucially important for education's future:
[W]hy are engineers not only proportionally more prone than all other graduates to join Islamist extremists but also to do so even where the economic situation is not so dire? And there is more – they note – we found evidence that engineers are more likely to join violent opposition groups than nonviolent ones, to prefer religious groups to secular groups, and to be less likely to defect once they join an Islamist group. None of these findings seems explicable in terms of relative deprivation.
(Gambetta & Hertog, 2016, p. 161)11
What the authors identify as a "perfect match between types of degree and types of extremists" is not consistently explained in the book. It is underlined that we can talk about tendencies rather than absolute causality, eliminating the possibility of seeing engineers as necessarily associated with forms of extremism. The overrepresentation is also associated with the counterintuitive fact that engineers join these movements mostly as ideologues, as leaders of violent groups.
The explanation for this tendency to find graduates of STEM, and in particular engineering, overrepresented in extremist and violent movements is further explored in relation to the search for certainty and the reaction to the unknown. Arie Kruglanski, a professor of psychology at the University of Maryland, explored the phenomenon known as "certainty-seeking," and found that
Moreover, the need for closure leads to a preference for one's own groups, leading to the stereotyping, derogation, and support for violence against out-groups.
(Kruglanski & Orehek, 2011, p. 15)12
In fact, the distaste for thinking and uncomfortable ideas, for reasoning and the exploration of knowledge, is cultivated and exploited. Technology completed its project of eliminating different ways of thinking about being and possibilities, and now reinforces narratives that serve the colonising aim. The "man of today" lives under an avalanche of ephemeral information presented as knowledge, media gossip, and "stories of the day," an avalanche of meaningless information that cultivates superficiality and ephemeral interests. Big Tech and its satellites manage the project of controlling our imaginations, and this is a possibility that should worry educators to a higher degree than the colonisation of thinking contemplated by the German thinker in the middle of the last century.
The corporate world is actively engaged in colonising imaginations in education at all levels, and this partly explains why concepts such as "innovation" semantically cover adherence to the mantra of techno-solutionism and neoliberal policies. Universities stridently make the point of "innovation," yet much of what is called "innovation" is not creative and imaginative, and can be immediately identified as a narrow limitation of what we understand by higher education, higher learning, and participation in civil society. The last aspect is further reduced to graduate employability, and it remains unclear what universities see as their own contribution to the "civil" part of our societies in the 21st century. The way we imagine our communities and our responsibilities, and how students and faculty engage with society, is becoming even more important at a point where various fascisms grow and tempt in spaces that were very recently considered safe from such attacks. In other words, the contradiction of technological advancement and the extensive adoption of edtech in teaching and learning is now associated with the need to limit our imaginations to the "right" narratives, which exclude any introspective critique of technological projects and solutions. How is the limitation of imaginations even possible? Many will protest and "assert the opposite," claiming that education is now more open than it ever was before. The first argument is that it is impossible to ignore the responsibility of education in the rise of
fascisms across the world. The shift of focus from substance to management, from ideas to market positioning and “product” development for profit led education towards intellectual laziness, groupthink, and indifference towards the challenge raised by narrow, localist, intolerant, and hateful ideas; unfortunately, this is what has happened in the last decades and it cannot be denied or ignored anymore. Dismissed or accepted, these developments make it even more evident that we have to explore the fascinating relationship between technology and imagination in education, challenging at the same time the indifference openly manifested by higher education towards this enormously powerful faculty of the human mind. Trying to understand the place of imagination in edtech can help us explore what can make the aims of higher education more relevant for students and for the intellectual life of the campus.
Imagination is the basic ingredient for AI, which was always part of what was labelled “sociotechnical imaginaries,” defined by Sheila Jasanoff and Sang-Hyun Kim as “collectively held, institutionally stabilized, and publicly performed visions of desirable futures, animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in science and technology” (Jasanoff & Kim, 2015, p. 4). The narrative structures of AI imaginaries are extremely complex and powerful, playing most of the time a much more important role than the real technological possibilities of the AI systems proposed. In fact, all funding for the advancement of AI, and research interest in it, is based on the ability to structure and present coherent and compelling narratives built on sociotechnical imaginaries. No part of these narratives is irrelevant or redundant; their complexity requires a semiotic analysis. Meanings and connotations are changed and altered to act as powerful tools used for utopian visions and corporate agendas. Language is weaponised by edtech capitalists to occupy the imaginary of the campus with a certain narrative and to suppress alternative narratives or credible critiques. It is an application of the codes and beliefs associated with the new colonisations, the posthuman language of technological dominance and neoliberal dogmatism.
Especially in the case of AI we can observe that a preferred rhetorical device for its stories is the overuse of catachresis, the device of narrative re-description and the abusive misuse of the meanings of key terms. Richard Nordquist defined catachresis as “a rhetorical term for the inappropriate use of one word for another, or for an extreme, strained, or mixed metaphor, often used deliberately” (Nordquist, 2020). Edtech constantly uses words in contexts that assign new or even opposite meanings, sometimes as part of a new jargon adopted by the initiated to communicate and signal their adherence to the set of beliefs specific to technologism. This is an intentional abuse, similar to the use of the concept of “freedom” by antidemocratic, authoritarian movements and leaders; the attraction created by the misuse of positive and attractive ideas is parasitically exploited to confer power on a different meaning, which is now suitable to serve the new narrative. A visible case is represented by the misuse of “artificial” in the
Teaching Machines isn’t just a story about machines. It’s a story about people, politics, systems, markets, and culture. It’s a story of the twentieth-century education technologists and education psychologists and education publishers and education reformers who built and sold (or at least tried to build and sell) machines they claimed could automate self-instruction, could engineer a more personalised – or as they were more likely to call it, “individualized” – education system. It’s a story of how education became a technocracy, and it’s a story about how education technology became big business. It’s a story of how the science of teaching and learning, as well as our imagination about teaching and learning came to be caught up in mechanization, in efficiency, and, to quote the French philosopher Jacques Ellul, in “psychopedagogic technique.”
(Watters, 2021, pp. 9–10)
AI changes how people learn, work, play, interact and live. As AI spreads across sectors, different types of AI systems deliver different benefits, risks and policy and regulatory challenges. Consider the differences between a virtual assistant, a self-driving vehicle and an algorithm that recommends videos for children.
(OECD, 2022, p. 6)
This report is useful as a structured framework for the critical examination and classification of AI systems, which can help developers and policymakers find and evaluate specific risks in AI systems, such as bias. At the same time, the report opens with the idea that AI is changing how people learn, a hypothesis that is not based on research, theoretical structures, or academic literature. In other words, there is no scientific argument to claim that AI is “changing how people learn.” The OECD’s text states the intention to help users pose critical questions for the adoption and development of AI systems, to “guide an innovative and trustworthy approach to AI as outlined in the OECD AI Principles.” This makes it even more surprising to find the idea that AI is changing the way humans learn, in an ambiguous and misleading form. Focused on machine learning and how AI systems function, the report does not approach at all how AI is changing human learning. It leaves that important statement isolated, unexplained, and not backed by references or possible sources in favour of this claim. One can admit that learning is changed by any new medium, even by the use of Bluetooth speakers and the Internet, but it is clear that this is not what is intentionally implied here. This is why the example is so relevant: it states a vague hypothesis as a fact, abusing the symbolic power of an international economic organisation, and opens narrative possibilities for exaggerations and mistakes. There is no scientific argument to prove that we have a new way of learning thanks to AI, but the report provides a solid source for this claim in future research, as an idea seemingly proven by the OECD. The example presented here also indicates something else relevant about AI: the power of technology to limit and engineer human imagination. It is implied that AI is changing how people learn, so we can let AI deal with this extraordinarily complex task. We have a solution; we can focus our interests and imaginations on other, more entertaining, areas of interest.
Edtech is narratively related to the effort to narrow human imagination, to subsume its force, and to colonise the spaces where our imaginations can flourish. Imagination is our human power to transcend the immediate boundaries of senses and knowledge, to navigate across time and spaces, and to transcend
present conditions. It is the attribute that allows us to build and explore endless possibilities before we can experience them in real life. Progress in neurology and psychology revealed that imagination is central to regulating mental health, and to problem-solving, creativity, and learning. AI narratives in education have the power to trigger the emergence and adoption of new systems and AI solutions, translating into real life what was first imagined and narrated. At the same time, we can see that AI can build unreasonable expectations, and let edtech applications manage areas that cannot be properly supported by the specific system or app. Building false expectations can impact students’ and the public’s trust in the use of AI in education, and also mar the quality of students’ experience and their education in general.
The ancient Greek term techne was interrelated with another key term for education, named in Plato’s works poiesis, the act in which one brings into being something that did not exist before. In fact, AI exists only because this idea enlightened the imagination of mathematicians, thinkers, and engineers, and someone called this possibility “artificial intelligence.” This is why it is even more surprising to see that imagination is in general ignored, or considered mere “fluff,” in academia, taking no space in studies on curriculum, in strategies, or in the administration of institutions of higher education. Philosophically, the concept of imagination and its crucial relation to education is an empty space for universities. A simple look at the vast area of policies and procedures on – let’s say – “plagiarism” and “academic integrity” reveals that the semiotic message sent by academia is that we have to look only at skills and employability, and that “imagination” is not part of serious concerns in teaching and learning. This is important, as imagination is not only a source for learning and expanding horizons; it can also be the platform that accelerates misinformation, narrow views, errors, and ignorance, which can fuel fascist views and nefarious tendencies. Imagination is the main battlefield where we decide whether education or manipulation creates the future.
The error perpetuated by current views on the role of higher education is that universities are concerned in teaching and learning only with the effort to nurture students’ intelligence, provide knowledge, and create skills that make graduates employable. Employability is one of the aims of a higher education, but not the most important and definitely not the only one. If a graduate is not a good citizen, a moral person, one who can function well in a civil society, we have nothing but a failure. The 20th century showed in horrifying examples what a highly technological society that is indifferent to a shared concept of humanity and social progress can cause. Nazi Germany opened the field of knowledge for space exploration with the advancement of studies on rockets, with scientists such as Wernher von Braun, but this cannot erase the fact that forced labour was used in the development of the V-2 rocket, and that crimes against humanity are linked to that project. It is also a warning to all that technology is never value neutral, and misuses should never be dismissed. Knowledge is just a
This short quote reflects a set of some of the most overused arguments currently circulating in the field of higher education policies, governance, and theories of teaching and learning. The “innovative” part is a depressingly futile exercise of parroting in various forms the idea that universities are like Uber or any other “platform companies,” that a Netflix model for learning is the future, that there is no difference between a highly exploitative company with an unclear and debatable
future and an institution of learning and research with so many hundreds of years of development and contributions to humanity. What really changed dramatically is that the field of educational imagination is colonised by greed and cynicism, completely occupied by the logic of unfettered markets and the glorification of ruthless and psychopathic solutions.
The privatisation of the public good and the colonisation of culture and imaginations with neoliberal examples of extreme exploitation of workers and cultural impoverishment stand directly associated with the intentional destruction of civil society and democracy. This is another example of catachresis as an intentional effort to twist semantics and rhetorically redescribe meanings (Parker, 1990). This project is much more complex than a simple appropriation of words against their meanings and signification: it works to suppress democratic citizenship and responsibility, and to use freedom to restrict, ban, and eliminate all that stands against the construction of extreme imbalances of power. Since Socrates, it has been evident that only with an educated and responsible citizenry can we expect positive democratic choices. Consequently, the “poorly educated” loved by Donald Trump (Saul, 2016) remain vulnerable to manipulation and control, and adopt resistance to social and cultural changes even when it is evident that they would benefit from
social evolution. History is filled with examples in this sense. We can remember that in what is considered a key event for the civil rights movement in the United States, the violent clashes at the Chicago Democratic Convention in 1968, there was an apparent contradiction, observed by R. U. Sirius and Dan Joy in their book Counterculture Through the Ages: “While the New Leftists were chanting ‘Power to the people,’ polls showed that the people approved of beating the hell out of them (Chicago Democratic Convention, 1968) and even shooting them dead (Kent State, Black Panther Party)” (Sirius & Joy, 2005, p. 57). This is just one of numerous examples that prove that democracy is impossible with an ignorant citizenry, and that education can create the desire to build and maintain civil and democratic societies. Progress requires an in-depth understanding of learning and education that nurtures compassionate imagination and intelligence. The compounded crisis of the world requires a reimagination of education in a much wider sense than the simple delivery of content limited by our previous choices, personal history, and the algorithmic aggregation of data. Algorithms, as complex and fascinating as they are at work, cannot properly contextualise, adapt, and understand what is needed for such an education.
The relationship between imagination and AI is also developing in machine learning systems that are able to generate new content, art, and imagery. I remember a very interesting point made by a participant at a conference on learning spaces, about the kind of software used as an example of AI and human imagination and creativity. The participant was an architect involved in various projects with schools and children who used edtech and online games to generate new solutions for learning spaces. He underlined that his practical experience stands against the assumption that these systems nurture and generate imaginative solutions and
creative alternatives. In fact, he underlined, it was surprising for him to see that when children played online games such as Minecraft, where the limit to building new structures and castles is set mostly by one’s imagination, children built their structures as Disney-like castles or other old, boring, quasi-similar structures. There was no creativity and imagination in a virtual reality with almost limitless possibilities. This should not surprise us much when we actually know how imagination is impoverished and exploited by corporate narratives. Educators cannot afford, under the sum of administrative, economic, cultural, and social pressures, to nurture a rich and generative imagination.
A comprehensive report on the evolution of AI published in 2021 by Stanford University notes that research on the performance of human-AI teams shows that AI-only teams currently outperform human-and-AI teams. It also notes that at this moment we can expect “several near-term opportunities for AI to improve human capabilities and vice versa” (Littman et al., 2021, p. 48). A similar report, published in 2022 by the Stanford Institute for Human-Centered AI at Stanford University, finds that AI “language models are more capable than ever, but also more biased” (Zhang et al., 2022) and reinforces again the crucial importance of large data sets with quality data for AI.
There is no doubt that we will witness an extensive use of AI in education, and this not only requires an application of AI functionalities to augment human capabilities, but also requires mechanisms able to check and address biases and errors in a field that is infinitely more complex and relevant for our societies than Netflix, Uber, or a shopping mall. The challenge to address this properly could not be clearer or more vital for educators and decision makers: human imagination and humanistic education can save us. If we fail our imaginations and choose to ignore the importance of the common good in education and across our societies, we can only accelerate our demise, as we are currently doing. Noam Chomsky noted in April 2022 that “we’re approaching the most dangerous point in human history. . . . We are now facing the prospect of destruction of organised human life on Earth” (Eaton, 2022). The “grim cloud of fascism” across the world is getting thicker and more suffocating, and climate change is regarded as a serious problem only at a declarative level. Nuclear annihilation or climate disaster can lead fast to our demise. The UN report on climate change, which warns us all that we are living the days of the last chance to address climate change and stop burning fossil fuels, is explicit in saying that our survival will be at risk. The report is grim, but it may be the most optimistic version; the activist Greta Thunberg noted on Twitter on 5 April 2022 that in
reading the new #IPCC report, keep in mind that science is cautious and this has been watered down by nations in negotiations. Many seem more focused on giving false hope to those causing the problem rather than telling the blunt truth that would give us a chance to act.
We can add that national governments used lobbying and all available forms of influence to redact and change this final report in line with their own economic interests. These economic interests are at the moment also a leading cause of the climate change emergency. The solution is to stop our failures of imagination and try to find new solutions and new ways of thinking, and to drop neo-fascist and extremist neoliberalism for new projects that are capable of considering the common good. An article published in Scientific American makes an important point about one of the most meaningful lessons of the COVID-19 pandemic: extreme individualism and ongoing competition stand as a destructive approach to our own interests. It notes that “A microbe revealed the lie of rugged individualism. We are not self-sufficient and independent; we never have been. Our fates are bound together. Taking care of others is taking care of ourselves” (Nelson, 2022, p. 33). As the article concludes, the most important lesson of this pandemic is that we have to design and adopt “national policies of communal care,” and reimagine the common good, for our common future and survival.
As noted in previous pages, human life is shaped by what we imagine, and before direct experiences and lived situations we make sense of the world and its possibilities with symbols and meanings and with our capacity to imagine. Imagination is one key attribute of our humanity, and stands as a faculty that made hominin evolution unique and dominant over other species. One of the main problems with the conversion of education into a profit-oriented industry is that imagination is not included in the overall narrative of higher education. Profits, markets, managerialism, efficiency, competitive advantages, and other such terms stand relevant in the narrative structure of academia. Imagination, inspiration, and compassionate empathy are now fields reduced to simple statements in institutional strategies, usually without any action attached or any specific pathway indicated. In fact, the concept of learning is consistently and increasingly shadowed by a focus on efficiencies of the system, which makes credentials much more important than any intrinsic values and transformations
for students. A simple and common model of privatisation, especially since the 1990s, was to reduce funding for the target of privatisation and inflict as much damage as possible through obviously inappropriate policies until the target is in a real debacle. This is the stage when the saviours come with the solution of privatisation, which was in fact the main aim undermining the system from the very beginning. There are books and fascinating case studies on these mechanisms, adopted by countries such as Russia, creating the new “oligarchs.” In this chapter it is relevant only to accept an invitation to think about the similarities between this strategy and the last decades of development in higher education: a constant cut of public funding, and an aggressive push to adopt the neoliberal model of governance, even when it became visible that it is not suitable for universities. When the ethos of the campus became an example of institutionalised mediocrity and dysfunction, the new “saviours” from corporate structures came regularly to show what should be done next. The range of solutions is depressingly common and
Mythological structures maintain the power to invite and engage our imaginations; as Plato noted in The Republic, “surely the myths are, as a whole, false, though there is truth in them too.” The “truth” in myths is maintained in cultural motifs and archetypes that are currently ignored by higher education and vastly exploited by marketing, advertising, and other industries to attract and create connection with people. Lucas asked an academic what structures are inherently appealing to people, directly related to primordial stories and visions about time and being; the Hero, the embodiment of evil, and the universal balance were built in line with what can be found in Campbell’s book “The Hero with a Thousand Faces” (Campbell, 1968). Education can learn from this example and find new and superior energies to create more engaging narratives, where the power of imagination goes well beyond the aim of employability or efficiencies of study time. These new models are much more important and, at the same time, more appropriate for a world of uncertainties and unprecedented
Notes
1. Remarks by US President Obama at YSEALI Town Hall. Souphanouvong University, Luang Prabang, Laos, September 7, 2016. The White House, Office of the Press Secretary. https://obamawhitehouse.archives.gov/the-press-office/2016/09/07/remarks-president-obama-yseali-town-hall
2. Obama, B. (2016, December 10). Now is the greatest time to be alive. WIRED. www.
wired.com/2016/10/president-obama-guest-edits-wired-essay/
3. BBC News. (2016, September 30). Jewish leaders react to Rodrigo Duterte Holocaust
remarks. www.bbc.com/news/world-asia-37515642
4. Human Rights Watch. (2019, August 11). Australia: Press Laos to protect rights
dialogue should address enforced disappearances. Free Speech. www.hrw.org/
news/2019/08/11/australia-press-laos-protect-rights
5. Dr Pangloss is the foolish tutor and inept philosopher in Voltaire’s classic tale of Can-
dide (1759).
6. Abramowitz, M. J. (2018). Freedom in the world 2018. Democracy in Crisis. https://
freedomhouse.org/report/freedom-world/2018/democracy-crisis
7. Repucci, S., & Slipowitz, A. (2022). The global expansion of authoritarian rule. Freedom in the World 2022. Freedom House. https://freedomhouse.org/sites/default/files/2022-02/FIW_2022_PDF_Booklet_Digital_Final_Web.pdf
8. Crawford, K. (2017, March 12). Dark days: AI and the rise of fascism. 2017 SXSW Con-
ference. https://youtu.be/Dlr4O1aEJvI
9. Herf, J. (1984). Reactionary modernism: Technology, culture, and politics in Weimar and the
Third Reich. Cambridge University Press.
10. Giroux, H. A., & Casablancas, J. (2019). The terror of the unforeseen. Los Angeles Review
of Books.
11. Gambetta, D., & Hertog, S. (2016). Engineers of jihad: The curious connection between
violent extremism and education. Princeton University Press.
12. Kruglanski, A. W., & Orehek, E. (2011). The need for certainty as a psychological nexus for individuals and society. In Extremism and the psychology of uncertainty (pp. 1–18). https://doi.org/10.1002/9781444344073.ch1
13. Ananthaswamy, A. (2015, August 5). What if . . . Intelligence is a dead end? New Scientist. www.newscientist.com/article/mg22730330-900-what-if-intelligence-is-a-dead-end/
14. Sizer, T. R., & Kirk, L. D. (1970). Technology and education: Who controls? Academy for
Educational Development. https://eric.ed.gov/?id=ED039732
15. Heidegger, M. (1969). Discourse on thinking. Harper & Row.
16. Jasanoff, S., & Kim, S.-H. (2015). Dreamscapes of modernity: Sociotechnical imaginaries and the fabrication of power. The University of Chicago Press.
17. Nordquist, R. (2020, August 27). Catachresis (Rhetoric). www.thoughtco.com/what-is-
catachresis-1689826
18. JISC. (2021). AI in tertiary education: A summary of the current state of play. JISC.
https://repository.jisc.ac.uk/8360/1/ai-in-tertiary-education-report.pdf
19. Watters, A. (2021). Teaching machines. The MIT Press.
20. Dunn, T. (2020). Inside the Swarms: Personalization, gamification, and the networked public sphere. In J. Jones & M. Trice (Eds.), Platforms, protests, and the challenge of networked democracy. Rhetoric, politics and society. Palgrave Macmillan. https://doi.org/10.1007/978-3-030-36525-7_3
21. OECD. (2022). OECD framework for the classification of AI systems. https://doi.org/10.1787/cb6d9eca-en
22. Kibby, B. (2022, April 6). Why hyper-personalization is critical for higher ed.
eCampus News. www.ecampusnews.com/2022/04/06/why-hyper-personalization-
is-critical-for-higher-ed/
23. Parker, P. (1990). Metaphor and catachresis. In J. Bender & D. E. Wellbery (Eds.), The
ends of rhetoric: History, theory, practice. Stanford University Press.
24. Sirius, R. U., & Joy, D. (2005). Counterculture through the ages: From Abraham to acid
house. Villard.
25. Littman, M. L., Ajunwa, I., Berger, G., Boutilier, C., Currie, M., Doshi-Velez, F., Hadfield, G., Horowitz, M. C., Isbell, C., Kitano, H., Levy, K., Lyons, T., Mitchell, M., Shah, J., Sloman, S., Vallor, S., & Walsh, T. (2021). Gathering strength, gathering storms: The one hundred year study on artificial intelligence (AI100) 2021 study panel report. Stanford University. http://ai100.stanford.edu/2021-report.
26. Zhang, D., Maslej, N., Brynjolfsson, E., Etchemendy, J., Lyons, T., Manyika, J., Ngo,
H., Niebles, J. C., Sellitto, M., Sakhaee, E., Shoham, Y., Clark, J., & Perrault, R.
(2022). The AI index 2022 annual report. AI Index Steering Committee, Stanford Insti-
tute for Human-Centered AI, Stanford University.
27. Eaton, G. (2022, April 6). Noam Chomsky: “We’re approaching the most dangerous point in human history.” New Statesman. www.newstatesman.com/encounter/2022/04/noam-chomsky-were-approaching-the-most-dangerous-point-in-human-history
28. Nelson, R. G. (2022). A microbe proved that individualism is a myth. Scientific American, 326(3), 32–33. doi:10.1038/scientificamerican0322-32
29. Nussbaum, M. C. (1997). Cultivating humanity: A classical defense of reform in liberal educa-
tion. Harvard University Press.
30. Campbell, J. (1968). The hero with a thousand faces (2nd ed.). Princeton University Press.
8
SCENARIOS FOR HIGHER
EDUCATION
When we talk about edtech and AI, we have to accept that at a time of compounding crises we need the courage to ask – after so many decades of intense use of learning analytics, LMSs, plagiarism detection software, and all other edtech applications – why education is getting worse if all these solutions work so well, as the edtech sector claims. A research report released by the University and College Union in March 2022 shows that in the United Kingdom two-thirds of university staff were considering leaving higher education, and that university staff are “demoralised, angry and anxious about the future of higher education itself” (UCU, 2022, p. 2). The report concludes that “the results of the survey simply reinforce what our members have been telling us for years – that they and their colleagues have reached breaking point” (p. 9). At the same time, in the United States we witness the same feeling of imminent collapse, with The Chronicle of Higher Education noting that faculty are facing “common challenges: Far fewer students show up to class. Those who do avoid speaking when possible. Many skip the readings or the homework. They have trouble remembering what they learned and struggle on tests.” Faculty describe “a disconcerting level of disconnection among students, using words like ‘defeated,’ ‘exhausted,’ and ‘overwhelmed’” (McMurtrie, 2022). This is part of a trend that goes back
decades before the pandemic could be used as another convenient excuse. Research conducted in the United States by Richard Arum and Josipa Roksa concludes that
are not improving their skills in critical thinking, complex reasoning, and writing.
(Arum & Roksa, 2011, p. 36)
didn’t reveal anything that college leaders didn’t know, in quiet rooms
behind closed doors, all along. Academe was so slow to produce this
research because it told the world things that those in academe would rather
the world didn’t know.
(Carey, 2012)
This reassertion was followed by another book, which shows that this percentage of poorly prepared students equals what the authors called “aspiring adults adrift” (Arum & Roksa, 2014): underemployed, unemployed, and with poor social integration. It is a profound crisis for institutions that are incapable of and uninterested in learning, in finding meaning in knowledge, in creating active and responsible citizens, or at least in clarifying what a meaningful life in society, community, and economy is, and why the difference between right and wrong still matters.
Higher education stands reduced to a transactional relationship in which students learn that they will become employable, which is a lie, an absurd claim, and a terrible depletion of any project of education. This reality, which is extensively documented in research, books, journal articles, and reports, does not affect the enthusiasm of edtech to claim that the new app, system, or platform will “unleash” the potential of our institutions of higher education. Learning analytics and predictive analytics have promised for decades that “personalised” education is the key to engaging students, to making better teachers, and to facilitating an optimal design for curriculum. The OECD notes that “[t]he goal of learning analytics is to use the increasing amounts of data coming from education to better understand and make inferences on learners and the contexts which they learn from” and also clarifies the potential of predictive analytics as:
If only the silver bullet of “analytics” and big data were the magical button that could solve curriculum, teaching, and learning. It was not, and there are no reasons to believe that it will be. A new promise is now taking centre stage: the use of AI in education. A recent report of the Center for American Progress notes that “Artificial intelligence can help students learn better and faster when paired with high-quality learning materials and instruction” (Jimenez & Boser, 2021).
In the United Kingdom, the usual suspects opened a new national centre for artificial intelligence (AI) in higher education:
But AI in higher education is not at all a new idea, nor is the promise of “unleashing” this technological panacea. A book published in 1987 on AI in higher education opens with this paragraph:
The Prague conference in 1989 focused on three main topics: the teaching of AI in higher education; the uses of AI in higher education; and research and development in AI in higher education. We can easily imagine this conference dealing with exactly these issues, starting from the premise that AI may be “successfully used in the field of education” to unleash its magical power. From 1990 to the third decade of the 21st century, we have heard the same promise that AI will be the solution for universities and colleges across the world. What has happened since the announcement of AI’s blooming in education has been played on repeat for the last six or seven decades. Why do we hear the same words, and why are we told that
merely opens a door; it does not compel one to enter” (White, 1974, p. 28). Technology is just a tool; a complex one, but just a tool that can be used to fix or to wreck. It can help us build foundations for complex future structures, or we can deceive ourselves that technology is in itself a solution to our problems. Human history is filled with examples of disasters created by a misunderstanding of technology. Not only is technology a tool for humans; we also have to remember that the tempting narrative stating that technology is neutral is absurd from a historical and even an engineering perspective. This falsehood, of neutral intelligent machines, is emerging as an easy way to hide AI’s flaws and risks, the abuse of trust, the vast possibilities of manipulation, and the concentrations of power conferred on those who control the AI solutions. This myth is repeated and sustained by well-funded lobbying groups, by celebrities of Silicon Valley and investors, or by “expert” international organisations. The persistence in stating something so easily refuted by independent studies conducted by reputable researchers and organisations is not abating, but repetition does not make it true. One of the most notable experts in the history of technology, Melvin Kranzberg, a Professor of the History of Technology at the Georgia Institute of Technology, noted that human decisions are determined not solely by scientific findings and rational mechanisms, but also by our humanity: emotions, fears and hopes, preferences, and aversions. He noted that “only if information and software systems take into consideration human feelings and capabilities – the ‘human hardware’ – can they capitalize fully on the computer’s growing informational and analytical capacity” (Kranzberg, 1990, p. 3). When we sacrifice
meaning and subsume the aspirations of humanity to the cult and deification of technological solutions, the effect is an ongoing erosion of foundations and overlapping crises. Technology has been promising solutions for our ecological crises for a long time, but we have reached a point where the science tells us that parts of our planet will become uninhabitable, and that we have a last chance to survive if we stop the emissions causing climate change. Technology is useless if the human mind is not prepared to control and steer it. If we wait for a new miraculous technological solution for our crumbling world order, for the rise of fascism, to stop the horrors of wars, for ecological and social systems that are crumbling into fragments, it should be obvious that we will fail. AI, an extraordinary development of technology, is also a tool that is shaped by humans, by what we decide becomes part of the data collected, by our preferences and unconscious prejudices, by our emotions and cultural horizon. AI is also a technology that achieves what our minds imagine it can reach, and it can hinder or ruin when our expectations are not realistic or ignore the real limitations of this complex tool. In this complex and interconnected system of cultural systems, economic and demographic determinants, and political decisions, it is natural to expect that universities will be changed by the current turmoil of our systems and by technological advancement. AI is an integral and important part of the futures of higher education, which can take radically different routes. Here is where the
Today, our society has reached another tipping point. . . . There is a huge
need and a huge opportunity to get everyone in the world connected, to
give everyone a voice and to help transform society for the future. . . . At
Facebook, we build tools to help people connect with the people they want
and share what they want, and by doing this we are extending people’s
capacity to build and maintain relationships. People sharing more – even if
just with their close friends or families – creates a more open culture and
leads to a better understanding of the lives and perspectives of others.
(Zuckerberg, 2012)
Facebook, YouTube and Google, Twitter and others – they reach billions of
people. The algorithms these platforms depend on deliberately amplify the
type of content that keeps users engaged – stories that appeal to our baser
instincts and that trigger outrage and fear. It’s why YouTube recommended
videos by the conspiracist Alex Jones billions of times. It’s why fake news
outperforms real news, because studies show that lies spread faster than
truth. And it’s no surprise that the greatest propaganda machine in history
has spread the oldest conspiracy theory in history – the lie that Jews are
somehow dangerous. As one headline put it, “Just Think What Goebbels
Could Have Done with Facebook.”
(Cohen, 2019)
We know now that Facebook was instrumental in the organisation and initiation
of mass killings and ethnic cleansing (e.g. Myanmar, Ethiopia). Social media is
increasingly revealed to be responsible for the instigation of hate crimes, and it is
working as a nursery platform for dangerous conspiracy theories, for fascist and
extremist movements. Social media failed to deliver on all its generous promises;
the extreme surveillance adopted in the name of academic integrity, but are also used even after testimonials and research proved how flawed and dangerous for learning they are. Just think how these proctoring technologies impact students with a disability, especially as some do not want to officially declare their conditions. AI proctoring solutions are at the moment dystopian, misleading edtech applications that are responsible for traumatic experiences for students and work as a clear endorsement of mutual mistrust as the foundation of the educational project of higher education.
The constant increase in plagiarism cases in universities remains a largely unexplained phenomenon: why is cheating rising in higher education? Why do universities largely fail to stop it? Why has this become – for the past decades – a common problem for universities and colleges, creating a market for edtech companies specialised in “academic integrity software”? A possible explanation is that the overall approach to plagiarism is itself the main cause, not so much the generic student who is blamed and suspected so much. This new relationship invites a reaction against an oppressive force interested only in bureaucratic arrangements, cheating on delivery and offers while pretending to care about what is learned. If “customers” are led to believe that payment is the guarantee of a commodity and later discover that what was sold to them is far from what was presented in glowing terms, the recourse to cheating (plagiarism) the vendor can be seen as a natural reaction against deceptive and abusive practices. It is a way to react, consciously or just intuitively, to a fraud packaged as higher learning; universities have lost interest in learning, placing the focus now on various rankings, profits, and
market shares. It is a matter of immediate evidence to see that institutions of higher education do not even consider the need to clarify, and to help students see, why learning matters, not a certain diploma, credential, or bureaucratic recognition. It is disingenuous to hear the complaints from academia that students cheat, and highly offensive to see how students are imagined in this common story: they have to learn that cheating is bad, or get an intensive course on “academic integrity,” designed especially for international students, who are supposedly not familiar with standards of academic integrity. This can be explained as racist contempt, looking at students as inert beings, educated in schools where teachers apparently told them: “this is your assignment, go home and copy/paste the answers.” Higher education now looks at itself as fixing it all with student training and intensive courses on academic plagiarism; an arrogant and self-sufficient position. Students rarely protest, as resistance is obviously much less effective than indifferent compliance. Some students can use these experiences to learn how to cheat better, to avoid making the obvious mistakes that get one caught. Learning in higher education is not determined by fear of punishment, customer relations, or even some new glitzy technology. Higher learning is determined by the way students are imagined and addressed as human beings, with all that this entails. Students can be sick and depressed, can be distracted or focused, can have problems related to their own contexts that cannot be read at all by eye-tracking
leisure time. The motivation for learning and student engagement further eroded, and short, intensive, vocational courses became much more suitable for demand and alignment with market needs. Research has documented this trend well, for a relatively long time; we can take the example of “Leisure College, USA: The Decline in Student Study Time,” a study completed by Philip Babcock and Mindy Marks on study time for students in American universities:
Researchers also found that the time allocated to leisure activities increased by an average of nine hours per week between 1961 and the 2000s.
In fact, education fails to address the most obvious challenge related to intelligence: at a time when AI is rapidly progressing, human intelligence is declining, in a clearly defined trend. IQ levels, the most common measurement of human intelligence, recorded a constant increase for most of the 20th century, in a phenomenon labelled the “Flynn effect,” after James Flynn, the intelligence researcher who discovered and documented this trend. The constant increase in IQ scores slowed in the 1970s, and new research found a reversal of the increasing trend. IQ scores have been decreasing since then. Peter Dockrill summarised this development in an article for Science Alert: “An analysis of some 730,000 IQ test results by researchers from the Ragnar Frisch Centre for Economic Research in Norway reveals the Flynn effect hit its peak for people born during the mid-1970s, and has significantly declined ever since” (Dockrill, 2018). In fact, the overall decline of IQ for the world’s population was reconfirmed in a study published in 2018 by James Flynn, the scientist who gave his name to this process. He notes in this study that “The IQ gains of the 20th century have faltered,” and warns that “during the 20th century, society escalated its skill demands and IQ rose. During the 21st century, if society reduces its skill demands, IQ will fall” (Flynn & Shayer, 2018).
We can consider the measurements of IQ scores as a relatively limited perspec-
tive on human intelligence, and consequently question the hypothesis of decline
of human intelligence. New and extensive research conducted on other variables
what time bias correction factor is used, it is clear that scientific knowledge has been in decline since the early 1970s for the Flow of Ideas indices and since the early 1950s for the Research Productivity indices until 1988, the end of our database. Moreover, because we use the general population as a proxy for the number of researchers, this decline is in fact underestimated as mentioned above.
(Cauwels & Sornette, 2022, p. 10)
Other findings show that this trend has not been reversed in the last decades. These conclusions invite a more in-depth analysis; their complexity and implications open an inquiry that can be covered properly only by other books. This is not a recent discovery; it was previously noted by one of the founders of cybernetics, Norbert Wiener, in the early 1950s:
I consider that the leaders of the present trend from individualistic research to controlled industrial research are dominated, or at least seriously touched, by a distrust of the individual which often amounts to a distrust in the human. . . . The general statistical effect of an anti-intellectual policy would be to encourage the existence of fewer intellectuals and fewer ideas.
(Wiener, 1994, p. 22)
the very raison d’être of the university, I believe, is at stake. Academics all
over the world should be concerned that future generations may weep for
the destruction of the concept of the university that has occurred in so
many places, which has led to little less than the degradation and debase-
ment of learning, the substitution of information packaging for a discursive
engagement or search for knowledge.
(Higgins, 2021)
students from low socio-economic and deprived areas. The Netflix or Amazon model of higher education, virtually that of any highly exploitative corporation that places users under extensive surveillance for prediction, manipulation, and monetisation, will be further adopted by most universities as the optimal model for organisation and organisational identity. In a few words, education will be completely commodified in a model that will use the rapid advancements of AI not in relation to learning and higher levels of cognitive tasks, but as a means to maximise profits and reduce costs.
In this new paradigm, assessment is mostly reduced to standardised exams and practical assessment tasks, under AI supervision. Multiple-choice questions take a significant part of assessment tasks, while other assignments are fine-tuned with the help of technical staff managing the AI systems in coordination with corporate vendors, rarely engaging part-time academics for curriculum alignment, once or twice a year. This evolution was accelerated by a rethink of budget allocation in higher education, with further cuts leading to a total elimination of public funds for universities, all as part of a significant reform across higher education promoted by a bipartisan group of politicians and Big Tech representatives. Since employability is the stated aim of universities, curriculum is limited to accreditation requirements and employers’ requirements; a list of skills and content will be reviewed every three or five years. The idea of higher learning, with blue-sky research and in-depth explorations of various topics, considered a poor investment for short-term returns, is mostly eliminated from curriculum and research.
The most useful approach to see how AI will change our education futures is to stop and consider what happened in the last decades and what we see now. The rise of the extreme right and of fascist currents is vastly documented and visible in our everyday lives. Authoritarianism and the general decline of democracy are recorded across the world: the Bertelsmann Transformation Index (BTI) recorded more autocratic than democratic states in 2022, and it is notable that an extreme dictatorship like Russia is evaluated in this index as a “moderate autocracy.” This is a very optimistic evaluation for a country with a foreign minister who publicly expresses some of the most vile themes of antisemitic discourse, and where free press is replaced by an extreme state propaganda machine. The “moderate” dictatorship ordered the arrest of opposition leaders and dissenting journalists. In fact, public discourse regressed to the point that anti-fascism became a negative term, presented as such by Donald Trump and his administration, in media and public discourse across the world. Big Tech companies are directly linked to these antidemocratic developments, and these key actors guiding the use of AI present an important warning for education and the future of civil societies. The message is that the danger is not only the risks associated with ubiquitous surveillance, which is employed at the core of the business model of large tech and edtech companies, but also a monopoly on technologies of immense power in a model that is fundamentally opposed to democracy and civil society.
The most visible actors in the sector have a very problematic relationship with democracy, expressed publicly in various contexts. Pamela Jones Harbour, a lawyer for Microsoft and a former commissioner of the Federal Trade Commission, wrote for the New York Times the article “The Emperor of All Identities,” presenting Google as the Internet “emperor” that is amassing “unbridled control over data gathering, with grave consequences for privacy and for consumer choice” (Harbour, 2012). More recent investigations and reports, such as “Google Academics Inc.,” show that Google engaged in practices opposed to transparency and democratic practices: Google was discreetly “paying millions of dollars each year to academics and scholars who produce papers that support its business and policy goals” (Tech Transparency Project, 2017). In May 2022, Nathaniel Persily, Professor of Law and Director of the Stanford Cyber Policy Center, Stanford Law School, noted in his hearing before the US Senate Judiciary Subcommittee on Privacy, Technology, and the Law that “we cannot live in a world where Facebook and Google know everything about us, and we know next to nothing about them.” The implications of leaving this problem unaddressed are now becoming more visible, with extreme right and fascist ideas moving closer to the centre of public discourse and events.
Peter Thiel, the rich and influential tech mogul, was once described by Politico as “the Silicon Valley libertarian who spoke at Trump’s convention, gave more than $1 million in support of his campaign and is now a member of Trump’s transition team” (Purdy, 2016). Thiel, who became powerful beyond the field of investments in new technologies, shaping and influencing political life in the United States, clarified in one of his essays, “The Education of a Libertarian,” his view that democracy and capitalism are not compatible, while arguing for a libertarian formula that “makes the world safe for capitalism” (Thiel, 2009). If we scrutinise Facebook – rebranded as Meta – we can say without any doubt that it acts as an unprecedented machine for misinformation, a platform for the propagation of hate speech and undemocratic movements. Adrienne LaFrance noted in The Atlantic, in 2021, that Facebook/Meta is acting like a hostile entity, as “the largest autocracy on Earth” that is actively undermining and attacking liberal democracies. She observed that
Notes
1. UCU. (2022). UK higher education. A workforce in crisis. www.ucu.org.uk/media/12532/
HEReport24March22/pdf/HEReport24March22.pdf
2. McMurtrie, B. (2022, April 5). A ‘stunning’ level of student disconnection. The
Chronicle of Higher Education. www.chronicle.com/article/a-stunning-level-of-student-
disconnection
3. Arum, R., & Roksa, J. (2011). Academically adrift: Limited learning on college campuses.
University of Chicago Press
4. Carey, K. (2012, February 12). “Academically adrift”: The news gets worse and
worse. The Chronicle of Higher Education. http://chronicle.com/article/Academically-
Adrift-The/130743/
5. Arum, R., & Roksa, J. (2014). Aspiring adults adrift: Tentative transitions of college gradu-
ates. The University of Chicago Press.
6. Baker, R. S. (2021). Artificial intelligence in education: Bringing it all together. In OECD digital education outlook 2021: Pushing the frontiers with artificial intelligence, blockchain and robots. OECD Publishing. https://doi.org/10.1787/f54ea644-en
7. Jimenez, L., & Boser, U. (2021, September 16). Future of testing in education: Artificial intelligence. CAP. www.americanprogress.org/article/future-testing-education-artificial-intelligence/
8. JISC. (2021, April 27). New national Centre will “unleash the power of AI” in educa-
tion. JISC. www.jisc.ac.uk/news/new-national-centre-will-unleash-the-power-of-ai-
in-education-27-apr-2021
9. Marík, V., Stepánková, O., & Zdráhal, Z. (1990, October 23–25). Artificial intelligence in higher education. CEPES-UNESCO International Symposium, Prague, CSFR, Proceedings.
10. O’Flaherty, K. (27 February 2022). The data game: What Amazon knows about you and
how to stop it. www.theguardian.com/technology/2022/feb/27/the-data-game-what-
amazon-knows-about-you-and-how-to-stop-it
11. Dezfouli, A., Nock, R., & Dayan, P. (2020). Adversarial vulnerabilities of human decision-making. Proceedings of the National Academy of Sciences, 117(46), 29221–29228. https://doi.org/10.1073/pnas.2016921117
12. White, L. (1974). Medieval technology and social change. Oxford University Press.
13. Kranzberg, M. (1990). Software for human hardware. In P. Zunde & D. Hocking
(Eds.), Empirical foundations of information and software science V. Plenum Press.
14. Zuckerberg, M. (2 February 2012). Facebook’s letter from Mark Zuckerberg – full
text. The Guardian. www.theguardian.com/technology/2012/feb/01/facebook-letter-
mark-zuckerberg-text
15. Cohen, S. B. (2019, November 23). Read Sacha Baron Cohen’s scathing attack on
Facebook in full: “Greatest propaganda machine in history.” The Guardian. www.the
guardian.com/technology/2019/nov/22/sacha-baron-cohen-facebook-propaganda
16. Shahbaz, A. (2018). The rise of digital authoritarianism. In Freedom on the net. Freedom House. https://freedomhouse.org/sites/default/files/FOTN_2018_Final.pdf
17. Csaky, Z. (2021). The antidemocratic turn. In Nations in transit 2021. Freedom House. https://freedomhouse.org/sites/default/files/2021-04/NIT_2021_final_042321.pdf
18. Babcock, P. S., & Marks, M. S. (2010). Leisure College, USA: The decline in student study
time. American Enterprise Institute for Public Policy Research.
19. Dockrill, P. (2018, June 13). IQ scores are falling in “worrying” reversal of 20th century intelligence boom. Science Alert. www.sciencealert.com/iq-scores-falling-in-worrying-reversal-20th-century-intelligence-boom-flynn-effect-intelligence
20. Flynn, J. R., & Shayer, M. (2018). IQ decline and Piaget: Does the rot start at the top? Intelligence, 66, 112–121. https://doi.org/10.1016/j.intell.2017.11.010
21. Cauwels, P., & Sornette, D. (2022). Are “flow of ideas” and “research productivity” in secular decline? Technological Forecasting and Social Change, 174, 121267. https://doi.org/10.1016/j.techfore.2021.121267
22. Wiener, N. (1994). Invention: The care and feeding of ideas. The MIT Press.
23. Grace, K., Salvatier, J., Dafoe, A., Zhang, B., & Evans, O. (2018). Viewpoint: When
will AI exceed human performance? Evidence from AI experts. Journal of Artifcial
Intelligence Research, 62, 729–754. https://doi.org/10.1613/jair.1.11222
24. Higgins, M. D. (2021, June 8). On academic freedom. Address at the Scholars at Risk
Ireland/All European Academies Conference: President of Ireland. Speeches. https://
president.ie/en/media-library/speeches/on-academic-freedom-address-at-the-schol
ars-at-risk-ireland-all-european-academies-conference
25. Harbour, P. J. (2012, December 19). The emperor of all identities. The New York
Times. www.nytimes.com/2012/12/19/opinion/why-google-has-too-much-power-
over-your-private-life.html
26. Tech Transparency Project. (2017, July 11). Google academics inc. Report. www.techtransparencyproject.org/articles/google-academics-inc
27. Purdy, J. (2016, November 30). The anti-democratic worldview of Steve Bannon
and Peter Thiel. Politico. www.politico.com/magazine/story/2016/11/donald-trump-
steve-bannon-peter-thiel-214490/
28. Thiel, P. (2009, April 13). The education of a libertarian. Cato Unbound. Cato Institute.
www.cato-unbound.org/2009/04/13/peter-thiel/education-libertarian/
29. LaFrance, A. (2021, September 27). The largest autocracy on earth. The Atlantic.
www.theatlantic.com/magazine/archive/2021/11/facebook-authoritarian-hostile-
foreign-power/620168/
30. Zhang, S., Roller, S., Goyal, N., Artetxe, M., Chen, M., Chen, S., Dewan, C., Diab,
M., Li, X., Lin, X. V., Mihaylov, T., Ott, M., Shleifer, S., Shuster, K., Simig, D.,
Koura, P. S., Sridhar, A., Wang, T., & Zettlemoyer, L. (2022). OPT: Open pre-trained
transformer language models. ArXiv, abs/2205.01068.
31. Urbina, F., Lentzos, F., Invernizzi, C., & Ekins, S. (2022). Dual use of artificial-intelligence-powered drug discovery. Nature Machine Intelligence, 4(3), 189–191. https://doi.org/10.1038/s42256-022-00465-9
9
RE-STORYING HIGHER LEARNING
There is no need to engage our imagination much to see the risks in the adoption of AI solutions in education. China is using AI to impose a dystopian version of a dictatorship that weaves together the most toxic ideas and practices of neoliberal capitalism and communism. Social scores are assigned to all Chinese citizens; these are determined by data collected through extensive surveillance and aggregated by obscure algorithms, and a negative social score can restrict the possibility to board a train or a bus, to travel freely, to access services or get a job, and so on.
In China AI is used to reinforce and secure the authoritarian control of citizens,
from continuous surveillance to predictive policing and immediate punishment
for breaches of rules, dissent, or the possibility of dissent. Forbes reported in 2022
that China is
already known for its use of AI for civilian repression. IPVM exposed the
chilling aims of the People’s Republic of China (PRC)’s AI-automated
racism to surveil Uighur Muslims. Its reporting has been corroborated and
published jointly with the Washington Post, NYTimes, and BBC. Huawei
partnered with leading PRC AI/facial recognition developer Megvii to
patent the so-called ‘Uighur alarm’ to identify Uighurs by face and track
their movement, turning the Xinjiang province in a de-facto “open-air
prison” for 25 million people.
(Layton, 2022)¹
China is one of the most active players in the development of AI, and some analysts predict that China will be the most important actor in this field in less than a decade. The applications of AI in Chinese classrooms are already developed
and at work. Media have reported the case of the Middle School No. 11 in Hangzhou, where students are under permanent surveillance and an algorithm assigns them a score on attention, engagement, and work.
If we want to see how AI is dangerously used in our daily lives, we can look to other countries, not just China. Unfortunately, liberal democracies provide a long list of examples of misuse and abuse of AI and algorithms against citizens living in free societies. In the Netherlands, the use of a self-learning algorithm to create risk profiles for possible childcare benefit fraud resulted in the unlawful and ethnic profiling of childcare applicants with dual nationalities. The "childcare benefits scandal," which is more a scandal about the abuse of AI capabilities for profiling and control, led to the resignation of the entire Dutch government in January 2021. Tens of thousands of families were pushed into poverty as the Dutch tax agency imposed enormous debts based on a set of indicators used by the algorithm. There were also cases of suicide recorded among people targeted by this scheme. Over a thousand children were taken into foster care as a result of this AI solution. In the end, the Dutch Tax and Customs Administration (Belastingdienst) received an unprecedented fine of €3.7 million. In Australia we have the example of Robodebt, an unlawful use of AI algorithms for debt assessment and recovery, which affected 443,000 people. A federal court approved in 2021 a settlement worth $1.8 billion between the Australian Government and the victims of Robodebt, which targeted financially vulnerable people, most of them without any debts. It is known that this scheme led to suicides, but an exact number of victims is impossible to estimate. In the United States, an Associated Press report supported by the Pulitzer Center for Crisis Reporting reveals how AI is used to identify possible cases of child abuse or neglect through a process so obscure that some of the people flagged do not even know that the decision was made by an algorithm.² Recent research also raises concerns about the algorithm used. If families accused of neglect or abuse go to court, not even the score assigned by the algorithm can be obtained by them or their attorneys. The obscure nature of similar AI systems is already the subject of extensive research and media reports, leaving no doubt that the problem of faulty AI and secret algorithms is far from being solved.
There is already widespread adoption of AI in various industries, legal systems, and public administration, and what these uses reflect is the desire to harness the capacity of AI to reinforce a certain managerial framework, focused entirely on cutting costs and maximising profits by squeezing out of employees as much labour
as possible for a low income. This is more evident when we look at the visible part of AI, but it is overwhelming in the unseen part, or the implicit part of AI, which is represented by extremely poorly paid workers hired to add and label data. A BBC investigation on this topic found that "Artificial intelligence and machine learning exist on the back of a lot of hard work from humans," with extremely intense and low-paid work, which can be "anything from labelling images to help computer vision algorithms improve, providing help for natural language processing, or even acting as content moderators for YouTube or Twitter" (Wakefield, 2021)³. Extensive invisible work makes AI possible, and universities should think carefully about who is going to do this work, how it will be paid, and what other costs are involved, especially considering that current AI systems depend on the extreme exploitation of energy and mineral resources from the planet, as well as cheap labour and large amounts of data. An article in Tribune published at the same time starts with the note that "the hype around artificial intelligence and its potential to liberate us from work often misses a crucial fact – that AI in its current form depends on low-paid human workers to function in the first place" (Slater, 2021)⁴. The article reveals that some of the most prominent AI systems "rely on workers from Southeast Asia and sub-Saharan Africa, where they tend to receive below or barely a living wage." The environmental impact is also an important aspect, which it will not be possible to ignore in the near future. A study published in Nature Sustainability notes that in the 30 months from 2016 to 2018, Bitcoin alone generated about as much carbon dioxide as 1 million cars over the same period, a measure that does not include other carbon-dioxide-generating activities such as computer cooling, buildings, or operators (Krause & Tolaymat, 2018)⁵. Since 2018, the trend of energy consumption for cryptocurrencies has only been ascending, raising important challenges not only for the environment but also for users who are interested in maintaining a socially and environmentally responsible profile. AI is opening new possibilities for data modelling, essential for creating sustainable models for climate action, climate prediction, and pollution reduction.
There are new and unprecedented forms of control and manipulation, which become overpowering when AI multiplies their force exponentially. In this context, universities find that most students are naturally comfortable with technology, especially young students who grew up in an environment where new technologies are an integral part of socialisation and communication. The hype around AI may be as unprecedented as the powers unleashed by AI systems. We can choose an example from a sea of promises, warnings, and predictions about the miracle and panacea of AI; but we can briefly stop at the example of the report released in May 2022 by the Special Committee on Artificial Intelligence in a Digital Age for the European Parliament. One of the key drivers of the Report, presented also in media releases, is the statement that "To be a global power means to be a leader in AI." We can imagine how appealing this statement is for a group of politicians dreaming of being part of a leading power in the world. The Report
does not actually explain how such a global power is to navigate our most serious crises: accelerated climate change and increasingly extreme weather events, the global rise of fascism, and the constant erosion of democracy and democratic values, including in some of the founding members of the EU. These existential crises, which are very real and urgent, are entirely ignored by a document focused on the tasks of building a "digital infrastructure," a regulatory framework, and "the development of AI skills." There is no doubt that this is a very important area for technological advancement, but it is already late to realise that our main challenges are not so much technological as they are determined by the erosion of democratic values, the climate emergency, and impoverished imaginations that are easily colonised by conspiracy theories and extremist ideologies. The most serious challenge is not to join the contest for global leadership, but to find solutions for our educational and imaginative crisis.
Data from Hawaii's Mauna Loa Observatory, released by the U.S. National Oceanic and Atmospheric Administration (NOAA), shows that in May 2022 the Earth's CO2 level hit the highest recorded point in history. At the same time, large parts of India and Pakistan recorded extraordinarily abnormal and unprecedented heat waves, which rapidly melted glaciers and caused devastating floods. These events were mostly lost in the avalanche of worrying news about the war in Ukraine, with Russia regularly threatening nuclear attacks and the possible annihilation of the human race. It is hard to miss the irony that we hear everywhere how we have this God-like powerful invention named artificial intelligence at a time when we have lost control over almost everything. The new global rich, finding their fortunes in the "Californian ideology," consider that our future is secured in the alternative reality of the metaverse or in the ridiculous idea of extra-terrestrial (Mars) colonies, to be set up by trillionaires tired of living on a destroyed Earth. If their incomprehensible wealth were not so real, it would probably be easy to dismiss the wacky ideas of this class with a laugh. Influenced by these infantile delusions, higher education is caught in utopian or naive tech-imaginaries related to these dystopian dreams. The current context of extremely dangerous military conflicts, unprecedented climate change and environmental destruction, social and cultural segregation, inequity, and inequality is not drawing much attention from techno-utopians. The most significant crisis now facing higher education is a crisis of meaning; higher education needs, as a matter of priority, to become more human and meaningful for students, which is much more than an entry-level job after graduation. These priorities and concerns should be addressed, and this can start by considering that the neoliberal paradigm imposed on education by the WTO and the OECD may not be the most suitable model for a sustainable future for students, for universities, and for civil societies across the world.
Looking at a more detailed level, related directly to teaching, learning, and governance in higher education, we can find guiding principles for a responsible and constructive adoption of AI.
Courtwright describes a "limbic capitalism," which exploits and extracts value from the human vulnerability to addictions, maximising the findings of psychology and psychiatry for profits (Courtwright, 2019)⁷. Zuboff notes that we have a surveillance capitalism, adopted widely by giant tech corporations and governments. It is clear that we have a new form of capitalism, obsessively exploitative and psychopathic. In 1941, Marcuse found that this type of society invites authoritarianism and is perfectly structured to serve dictatorships. Higher education should not remain exposed by relying on edtech owned and controlled by corporations, especially at a time when a common anti-democratic agenda is part of their business model.
The inevitability of edtech, obsessively cultivated and promoted along with the idea of a definitive technological determinism of the modern world, which reflects the structural dependence of human life on technological advances, should be treated with reserve by universities. Edtech and its mastery are not an aim of higher education, although most universities forget this simple fact. They place more effort and interest (and budgets) on cultivating students' ability to use software and edtech platforms (LMS) than on learning, student engagement, and the capacity to stir intellectual curiosity for independent study. Zuboff presents very well what stands behind technological determinism, noting that
products. The real change will arise from the sum of tensions and crises that societies and humanity face these days, with universities still able to offer solutions of substance and the ability to reinvent themselves. In the early 1970s we were warned that "If we narrow the scope of education, we narrow our operative conception of civilisation, and we impoverish the meaning of participation in civilised community" (Scheffler, 1973, p. 60)⁹. We have entirely missed this message, and the self-proclaimed managers and accountants of educational products, able to empty higher education of meaning and substance, have moved academia far from the aim of building a civilised community. The last chance is to reimagine and re-story the aim of education. In this sense, the challenge is to advance the hypothesis that education is an endeavour that is too complex to be organised and constructed by AI. Edtech can be a solution for administrative problems, and can even deliver instruction and training, but not a well-rounded education and meaningful educational experiences.
An important step in rethinking higher education in the age of AI is to imagine the impact of teaching on students' lives, on their values and ways of thinking, and on creating the potential for them to expand their horizons and learn more as independent thinkers. This sense of responsibility faded as universities adopted the absurd language of trade, with the educational experience named "the product" in policies adopted by some universities, and faculty's work framed as "customer care." Teaching carries the same type of life-changing responsibility that shapes the identity of medical doctors, which is rooted in the Hippocratic Oath. Similarly, we have to start thinking about an Educational Oath, symbolically taken by all who teach in higher education at the moment when the real and significant responsibility for students' futures is properly contemplated by those who will teach and by those who organise teaching. This oath can change the nature of teaching arrangements in universities, where teaching is too often treated as a marginal activity that can be covered by overworked and overwhelmed sessional employees, casual staff exploited in precarious and insecure work arrangements. The temptation to further reduce costs and use AI instead of this exploited and precarious class will soon arise, and it will undoubtedly expose universities to grave errors, future misconduct, and public disapproval.
A code of ethics for academics, specifically addressing the need to place teaching and learning in a specific ethical and professional framework, is required if higher education is to invite faculty and students to understand the impact and the importance of teaching. A Teacher's Ethical Pledge can also set a clear set of commitments, rights, and expectations that contribute to a widely shared culture of quality. Students have the right to learn, access knowledge, and develop self-cultivation skills, and to benefit from teachers who are able and willing to help and guide them with expertise, compassion, commitment, and ethical responsibility. This involves the student's right to have access to equitable education, with engaging, relevant, and high-quality curriculum, assessment, and teaching practice, in a safe and suitable climate for learning, where the
intellectual space allows students to freely learn and evaluate what they learn with an educated and independent mind. From a teaching perspective, faculty have the responsibility to create what Eric Ashby (1969) defined as a key attitude of the university teacher, which is "to teach in such a way that the pupil learns the discipline of dissent" (Ashby, 1969, p. 64)¹⁰. Students have the right to gain what A. N. Whitehead named the mastery of the art of using knowledge, and the ability to react against "inert ideas" and manipulations, which is increasingly important at a time when AI is opening possibilities for deepfakes, for vast manipulations and intrusions into people's lives.
A code setting out the main duties in teaching not only helps faculty understand the impact of their practice but also signals to students that their interests are de facto at the heart of the university's interests, no less important than research or financial arrangements. The role of the teacher is to enable students to make unhindered use of access to higher learning and to educate them to seek future learning and enjoy the mastery of knowledge. Students have the right to learn in the context of a meaningful and engaging education, with knowledge and teaching solutions that are flexible and adaptive to complex changes, such as new developments in edtech and AI. We can briefly sketch here a possible framework for a Teaching Pledge, as a commitment to the student's right to quality education, in a space and place defined by mutual respect, curiosity to learn, and the free exploration of knowledge. It is an expression of the academic's allegiance to help each student achieve maximum potential as a member of society, with respect for learning and an interest in actively contributing to the common good of society. In adopting AI in education, academics may take the pledge to:
This teaching pledge can also serve as a guide that nurtures mutual trust and helps academics align their practice with the significant responsibilities of teaching and the long-term and complex impact of their important profession. In the book An Artificial Revolution, Ivana Bartoletti writes the final lines of the book as a hopeful thought:

Amid the global turmoil, maybe, just maybe, the promise of the Artificial Intelligence will force us to confront our shared humanity and the physical and digital environments we inhabit. For many of us, disappointed yet optimistic, this is the time to dare to imagine.
(Bartoletti, 2020, p. 126)¹¹
Education is the space where we can start contemplating the power of our shared humanity, on a campus that has moved fast into superficially known digital environments. The rise of AI is just another reason to accept that this is the time when we must start to imagine.
Notes
1. Layton, R. (2022, April 23). Commerce’s BIS can help stop China’s quest for AI dom-
inance. Forbes. www.forbes.com/sites/roslynlayton/2022/04/23/commerces-bis-can-
help-stop-chinas-quest-for-ai-dominance/
2. Ho, S., & Burke, G. (2022, April 30). An algorithm that screens for child neglect raises concerns.
Associated Press. https://apnews.com/article/child-welfare-algorithm-investigation-
9497ee937e0053ad4144a86c68241ef1
3. Wakefield, J. (2021, March 28). AI: Ghost workers demand to be seen and heard. BBC. www.bbc.com/news/technology-56414491
4. Slater, A. (2021, May 18). How artificial intelligence depends on low-paid workers. Tribune. https://tribunemag.co.uk/2021/05/how-artificial-intelligence-depends-on-low-paid-workers
5. Krause, M. J., & Tolaymat, T. (2018). Quantification of energy and carbon costs for mining cryptocurrencies. Nature Sustainability, 1(11), 711–718. https://doi.org/10.1038/s41893-018-0152-7
6. Marcuse, H., & Kellner, D. (1998). Collected papers of Herbert Marcuse. Routledge.
7. Courtwright, D. T. (2019). The age of addiction: How bad habits became big business. The
Belknap Press of Harvard University Press.
8. Zuboff, S. (2020). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile Books.
9. Scheffler, I. (1973). Reason and teaching. Routledge and Kegan Paul.
10. Ashby, E. (1969). A Hippocratic oath for the academic profession. Minerva, Reports and
Documents, 8(1), 64–66.
11. Bartoletti, I. (2020). An artifcial revolution: On power, politics and AI. Indigo Press.
REFERENCES
Chetty, R., Hendren, N., Jones, M. R., & Porter, S. R. (2020). Race and economic
opportunity in the United States: An intergenerational perspective. The Quarterly Jour-
nal of Economics, 135(2), 711–783.
Choi, B. C., & Pak, A. W. (2008). Multidisciplinarity, interdisciplinarity, and transdiscipli-
narity in health research, services, education and policy: 3. Discipline, inter-discipline
distance, and selection of discipline. Clinical and Investigative Medicine, 31(1), E41–E48.
https://doi.org/10.25011/cim.v31i1.3140
Chun, W. H. K., & Barnett, A. (2021). Discriminating data: Correlation, neighborhoods, and
the new politics of recognition. The MIT Press.
Cioran, E. M. (2012). A short history of decay. Arcade Publishing.
Clayton, A. (2021). Bernoulli’s fallacy: Statistical illogic and the crisis of modern science. Colum-
bia University Press.
Cohen, A. (2016). Imbeciles. The Supreme Court, American eugenics, and the sterilization of
Carrie Buck. Penguin Press.
Cohen, S. B. (2019, November 21). Sacha Baron Cohen’s keynote address at ADL’s 2019 never
is now summit on Anti-Semitism and Hate. Remarks by Sacha Baron Cohen, Recipient
of ADL’s International Leadership Award. www.adl.org/news/article/sacha-baron-
cohens-keynote-address-at-adls-2019-never-is-now-summit-on-anti-semitism
Colbrook, M. J., Antun, V., & Hansen, A. C. (2022). The difficulty of computing stable and accurate neural networks: On the barriers of deep learning and Smale's 18th problem. Proceedings of the National Academy of Sciences, 119(12), e2107151119. https://doi.org/10.1073/pnas.2107151119
Collini, S. (2012). What are universities for? Penguin.
Conway, F., & Siegelman, J. (2005). Dark hero of the information age: In search of Norbert
Wiener, the father of cybernetics. Basic Books.
Courtwright, D. T. (2019). The age of addiction: How bad habits became big business. The
Belknap Press of Harvard University Press.
Daunton, N. (2021, November 24). Why Prince William is wrong to blame habitat loss
on population growth in Africa. Euronews. www.euronews.com/green/2021/11/24/
why-prince-william-is-wrong-to-blame-habitat-loss-on-population-growth-in-africa
Delzell, D. A., & Poliak, C. D. (2013). Karl Pearson and eugenics: Personal opinions and scientific rigor. Science and Engineering Ethics, 19(3), 1057–1070. https://doi.org/10.1007/s11948-012-9415-2
Dezfouli, A., Nock, R., & Dayan, P. (2020). Adversarial vulnerabilities of human decision-making. Proceedings of the National Academy of Sciences, 117(46), 29221–29228. https://doi.org/10.1073/pnas.2016921117
Dockrill, P. (2018, June 13). IQ scores are falling in "worrying" reversal of 20th century intelligence boom. Science Alert. www.sciencealert.com/iq-scores-falling-in-worrying-reversal-20th-century-intelligence-boom-flynn-effect-intelligence
Draper, N. A., & Turow, J. (2019). The corporate cultivation of digital resignation. New
Media & Society, 21(8), 1824–1839. https://doi.org/10.1177/1461444819833331
Drucker, P. F. (1969). The age of discontinuity; guidelines to our changing society. Harper &
Row.
Dunn, T. (2020). Inside the swarms: Personalization, gamification, and the networked public sphere. In J. Jones & M. Trice (Eds.), Platforms, protests, and the challenge of networked democracy. Rhetoric, politics and society. Palgrave Macmillan. https://doi.org/10.1007/978-3-030-36525-7_3
Eaton, G. (2022, April 6). Noam Chomsky: “We’re approaching the most dangerous point
in human history”. New Statesman. www.newstatesman.com/encounter/2022/04/
noam-chomsky-were-approaching-the-most-dangerous-point-in-human-history
Hankerson, D. M. (2021, September 21). CDT original research examines privacy implications
of school-issued devices and student activity monitoring software. https://cdt.org/insights/
cdt-original-research-examines-privacy-implications-of-school-issued-devices-and-
student-activity-monitoring-software/
Harari, Y. N. (2016). Homo deus: A brief history of tomorrow. Harvill Secker.
Harbour, P. J. (2012, December 19). The emperor of all identities. The New York Times.
www.nytimes.com/2012/12/19/opinion/why-google-has-too-much-power-over-
your-private-life.html
Hare, J. (2016, April 13). University of Melbourne start-up Cadmus targets cheats. The Aus-
tralian. www.theaustralian.com.au/higher-education/university-of-melbourne-startup-
cadmus-targets-cheats/news-story/f5e2677aea4a90b54f5c5ee0e4d3eee7
Hauser, C. (2021, June 2). Outrage greets report of Arizona plan to use ‘holocaust gas’
in executions. New York Times. www.nytimes.com/2021/06/02/us/arizona-zyklon-b-
gas-chamber.html
Hayles, N. K. (1987). Text out of context: Situating postmodernism within an information
society. Discourse, 9, 24–36. www.jstor.org/stable/41389085
Heidegger, M. (1969). Discourse on thinking. A translation of Gelassenheit. Harper & Row.
Heidegger, M. (1977). The question concerning technology, and other essays (1st ed.). Harper &
Row.
Heikkila, M. (2021, October 20). POLITICO AI: Decoded: AI goes to school – What
EU capitals think of the AI act – Facebook’s content moderation headache. Politico.
www.politico.eu/newsletter/ai-decoded/ai-goes-to-school-what-eu-capitals-think-
of-the-ai-act-facebooks-content-moderation-headache-2/
Herf, J. (1984). Reactionary modernism: Technology, culture, and politics in Weimar and the Third
Reich. Cambridge University Press.
Herszenhorn, D. M. (2022, March 4). The fighting is in Ukraine, but risk of World War III is real. Politico. www.politico.eu/article/fight-ukraine-russia-world-war-risk-real/
Higgins, M. D. (2021, June 8). ‘On academic freedom’ – Address at the scholars at risk
Ireland/all European academies conference: President of Ireland. Speeches. https://
president.ie/en/media-library/speeches/on-academic-freedom-address-at-the-schol
ars-at-risk-ireland-all-european-academies-conference
Ho, S., & Burke, G. (2022, April 30). An algorithm that screens for child neglect raises concerns.
Associated Press. https://apnews.com/article/child-welfare-algorithm-investigation-
9497ee937e0053ad4144a86c68241ef1
Hoffman, D. (1999, February 10). I had a funny feeling in my gut. Washington Post Foreign Service. www.washingtonpost.com/wp-srv/inatl/longterm/coldwar/shatter021099b.htm
Hofstadter, R. (1963). Anti-intellectualism in American life. Knopf.
Hollands, F. M., & Tirthali, D. (2014). MOOCs: Expectations and reality. Full report. Center for Benefit-Cost Studies of Education, Teachers College, Columbia University. https://files.eric.ed.gov/fulltext/ED547237.pdf
Hoover, H. (1927). Motion pictures, trade, and the welfare our western hemisphere. Advo-
cate of Peace through Justice, 89(5), 291–296. www.jstor.org/stable/20661595
Horan, C. (2021). Insurance era: Risk, governance, and the privatization of security in postwar
America. The University of Chicago Press.
Hoxhaj, R. (2015). Wage expectations of illegal immigrants: The role of networks and previous migration experience. International Economics, 142, 136–151. https://doi.org/10.1016/j.inteco.2014.10.002
Hutson, M. (2018). Has artificial intelligence become alchemy? Science, 360(6388), 478. https://doi.org/10.1126/science.360.6388.478
James, I. (2009). Claude Elwood Shannon 30 April 1916–24 February 2001. Biographical Memoirs of Fellows of the Royal Society, 55, 257–265. https://doi.org/10.1098/rsbm.2009.0015
James, W. (1983). The principles of psychology. Harvard University Press.
Jasanoff, S., & Kim, S.-H. (2015). Dreamscapes of modernity: Sociotechnical imaginaries and the fabrication of power. The University of Chicago Press.
Jensen, A. R. (1998). The g factor: The science of mental ability. Praeger.
Johnson, K. (2022, March 7). How wrongful arrests based on AI derailed 3 men’s lives. www.
wired.com/story/wrongful-arrests-ai-derailed-3-mens-lives/
Judt, T. (2005). Postwar. A History of Europe since 1945. The Penguin Press.
Kalfa, S., Wilkinson, A., & Gollan, P. J. (2018). The academic game: Compliance and
resistance in universities. Work, Employment and Society, 32(2), 274–291.
Katz, Y. (2020). Artificial whiteness: Politics and ideology in artificial intelligence. Columbia University Press.
Kell, H., & Wai, J. (2018). Terman study of the gifted. In B. Frey (Ed.), The Sage ency-
clopedia of educational research, measurement, and evaluation (Vol. 1, pp. 1665–1667). Sage
Publications, Inc. www.doi.org/10.4135/9781506326139.n691
Kevles, D. J. (1986). In the name of eugenics: Genetics and the uses of human heredity. University
of California Press.
Kharpal, A. (2018). A.I. is in a 'golden age' and solving problems that were once in the realm of sci-fi, Jeff Bezos says. Retrieved October 9, 2021, from www.cnbc.com/2017/05/08/amazon-jeff-bezos-artificial-intelligence-ai-golden-age.html
Kibby, B. (2022, April 6). Why hyper-personalization is critical for higher ed. eCampus
News. www.ecampusnews.com/2022/04/06/why-hyper-personalization-is-critical-
for-higher-ed/
Kim, T. (2018, April 11). Goldman Sachs asks in biotech research report: “Is curing
patients a sustainable business model?” CNBC. www.cnbc.com/2018/04/11/gold
man-asks-is-curing-patients-a-sustainable-business-model.html
Klingler, W. (2017). Silicon Valley's radical machine cult. Vice. www.vice.com/en/article/kz7jem/silicon-valley-digitalism-machine-religion-artificial-intelligence-christianity-singularity-google-facebook-cult
Kranzberg, M. (1990). Software for human hardware. In P. Zunde & D. Hocking (Eds.),
Empirical foundations of information and software science V. Plenum Press.
Krause, M. J., & Tolaymat, T. (2018). Quantification of energy and carbon costs for mining cryptocurrencies. Nature Sustainability, 1(11), 711–718. https://doi.org/10.1038/s41893-018-0152-7
Kruglanski, A. W., & Orehek, E. (2011). The need for certainty as a psychological nexus for individuals and society. In Extremism and the psychology of uncertainty (pp. 1–18). https://doi.org/10.1002/9781444344073.ch1
Kühl, S. (1994). The Nazi connection: Eugenics, American racism, and German national socialism.
Oxford University Press.
LaFrance, A. (2021, September 27). The largest autocracy on earth. The Atlantic.
www.theatlantic.com/magazine/archive/2021/11/facebook-authoritarian-hostile-
foreign-power/620168/
Lanier, J. (2013). Who owns the future? (First Simon & Schuster hardcover edition. ed.).
Simon & Schuster.
Layton, R. (2022, April 23). Commerce’s BIS can help stop China’s quest for AI dominance.
Forbes. www.forbes.com/sites/roslynlayton/2022/04/23/commerces-bis-can-help-
stop-chinas-quest-for-ai-dominance/
Legg, S., & Hutter, M. (2007). A collection of definitions of intelligence. Frontiers in Artificial Intelligence and Applications, 157, 17–24. arXiv:0706.3639 [cs.AI]
Leslie, M. (2000, July/August). The vexing legacy of Lewis Terman. Stanford Magazine.
https://stanfordmag.org/contents/the-vexing-legacy-of-lewis-terman
Levine, A. (2006). Educating school teachers. The Education Schools Project.
Levine, A., & Van Pelt, S. (2021). The great upheaval: Higher education’s past, present, and
uncertain future. Johns Hopkins University Press.
Linton, R. (1951). Review of Hollywood, the dream factory – an anthropologist looks at the movie-makers, Hortense Powdermaker. American Anthropologist, 53(2), 269–271. www.jstor.org/stable/663894
Littman, M. L., Ajunwa, I., Berger, G., Boutilier, C., Currie, M., Doshi-Velez, F., Hadfield, G., Horowitz, M. C., Isbell, C., Kitano, H., Levy, K., Lyons, T., Mitchell, M., Shah, J., Sloman, S., Vallor, S., & Walsh, T. (2021). Gathering strength, gathering storms: The one hundred year study on artificial intelligence (AI100) 2021 study panel report. Stanford University. http://ai100.stanford.edu/2021-report.
Lombardo, P. A. (2002). “The American breed”: Nazi eugenics and the origins of the
Pioneer fund. Albany Law Review, 65(3), 743–830.
Lombardo, P. A. (2011). A century of eugenics in America: From the Indiana experiment to the
human genome era. Indiana University Press.
Lorenz, C. (2012). If you’re so smart, why are you under surveillance? Universities, neo-
liberalism, and new public management. Critical Inquiry, 38(3), 599–629. https://doi.
org/10.1086/664553
Lundh, A., Lexchin, J., Mintzes, B., Schroll, J. B., & Bero, L. (2017). Industry sponsor-
ship and research outcome. Cochrane Database of Systematic Reviews, 2(2), Mr000033.
https://doi.org/10.1002/14651858.MR000033.pub3
Lynch, K. (2015). Control by numbers: New managerialism and ranking in higher educa-
tion. Critical Studies in Education, 56(2), 190–207. https://doi.org/10.1080/17508487
.2014.949811
Marcuse, H., & Kellner, D. (1998). Collected papers of Herbert Marcuse. Routledge.
Marks, R. (1974). Lewis M. Terman: Individual differences and the construction of social reality. Educational Theory, 24(4), 336–355. https://doi.org/10.1111/j.1741-5446.1974.tb00652.x
McCarthy, J. (1987). Generality in artificial intelligence. Communications of the ACM, 30(12), 1030–1035. https://doi.org/10.1145/33447.33448
McCarthy, J. (1997). AI as sport. Science, 276(5318), 1518–1519. https://doi.org/10.1126/science.276.5318.1518
McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Magazine, 27(4), 12. https://doi.org/10.1609/aimag.v27i4.1904
McCorduck, P. (2004). Machines who think: A personal inquiry into the history and prospects of artificial intelligence. A.K. Peters.
McMahon, S. D., Anderman, E. M., Astor, R. A., Espelage, D. L., Martinez, A., Reddy,
L. A., & Worrell, F. C. (2022). Violence against educators and school personnel: Crisis during
COVID (Technical Report). American Psychological Association.
Meadows, D. H., Meadows, D. L., Randers, J., & Behrens III, W. W. (1972). The limits
to growth; A report for the Club of Rome’s project on the predicament of mankind. Universe
Books.
Millar, K. (2020, June 24). HAI Fellow Kate Vredenburgh: The right to an explanation. Human-Centered Artificial Intelligence, Stanford University. https://hai.stanford.edu/news/hai-fellow-kate-vredenburgh-right-explanation
Minsky, M. (1992). Alienable rights. The MIT Press. https://web.media.mit.edu/~minsky/
papers/Alienable%20Rights.html
Minsky, M. (1994). Will robots inherit the earth? Scientific American, 271(4), 108–113. https://doi.org/10.1038/scientificamerican1094-108
Mohamed, E. (2021, November 30). Experts critique Prince William’s ideas on Africa pop-
ulation. Al Jazeera. www.aljazeera.com/news/2021/11/30/experts-critique-prince-
williams-ideas-on-africa-population
Morozov, E. (2013). To save everything, click here: The folly of technological solutionism. PublicAffairs.
Nagle, T., Redman, T. C., & Sammon, D. (2017, September 11). Only 3% of companies’
data meets basic quality standards. Harvard Business Review. https://hbr.org/2017/09/
only-3-of-companies-data-meets-basic-quality-standards
Neisser, U., Boodoo, G., Bouchard Jr, T. J., Boykin, A. W., Brody, N., Ceci, S. J., Halpern, D. F., Loehlin, J. C., Perloff, R., Sternberg, R. J., & Urbina, S. (1996). Intelligence: Knowns and unknowns. American Psychologist, 51(2), 77–101. https://doi.org/10.1037/0003-066X.51.2.77
Noelle-Neumann, E. (1974). The spiral of silence: A theory of public opinion. Journal of Communication, 24(2), 43–51. https://doi.org/10.1111/j.1460-2466.1974.tb00367.x
Nordquist, R. (2020, August 27). Catachresis (Rhetoric). www.thoughtco.com/what-is-
catachresis-1689826
NSCAI. (2021). Final report. The National Security Commission on Artificial Intelligence. www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf
Nussbaum, M. C. (1997). Cultivating humanity: A classical defense of reform in liberal education.
Harvard University Press.
Obama, B. (2016, December 10). Now is the greatest time to be alive. WIRED. www.
wired.com/2016/10/president-obama-guest-edits-wired-essay/
O’Connor, J., Eberle, C., Cotti, D., Hagenlocher, M., Hassel, J., Janzen, S., Narvaez, L.,
Newsom, A., Ortiz-Vargas, A., Schuetze, S., & Sebesvari, Z. (2021). Interconnected dis-
aster risks. Interconnected Disaster Risks. UNU-EHS. Bonn.
OECD. (2019). Artificial intelligence in society. OECD Publishing. https://dx.doi.org/10.1787/eedfee77-en
OECD. (2021a). AI and the future of skills (Vol. 1). https://doi.org/10.1787/5ee71f34-en
OECD. (2021b). OECD digital education outlook 2021. https://doi.org/10.1787/589b283f-en
OECD. (2022). OECD framework for the classification of AI systems. https://doi.org/10.1787/cb6d9eca-en
Pasquale, F. (2015). The black box society: The secret algorithms that control money and informa-
tion. Harvard University Press.
Passell, P., Roberts, M., & Ross, L. (1972, April 2). The limits to growth. The New
York Times. https://www.nytimes.com/1972/04/02/archives/the-limits-to-growth-a-
report-for-the-club-of-romes-project-on-the.html
Pearson, K. (1911). The grammar of science. A. and C. Black.
Perna, L., Ruby, A., Boruch, R., Wang, N., Scull, J., Evans, C., & Ahmad, S. (2013). The
life cycle of a million MOOC users. The University of Pennsylvania Graduate School of
Education. www.gse.upenn.edu/pdf/ahead/perna_ruby_boruch_moocs_dec2013.pdf
Piketty, T., & Goldhammer, A. (2014). Capital in the twenty-first century. The Belknap Press of Harvard University Press.
Potter, J. (2008). Entrepreneurship and higher education. OECD Publishing.
Pritchett, L. (2013). The rebirth of education: Schooling ain't learning. Center for Global Development.
Project, T. T. (2017). Google academics inc. T. T. Project. www.techtransparencyproject.org/
articles/google-academics-inc
Purdy, J. (2016, November 30). The anti-democratic worldview of Steve Bannon and Peter Thiel.
Politico. www.politico.com/magazine/story/2016/11/donald-trump-steve-bannon-
peter-thiel-214490/
Reilly, P. R. (2015). Eugenics and involuntary sterilization: 1907–2015. Annual Review of Genom-
ics and Human Genetics, 16, 351–368. https://doi.org/10.1146/annurev-genom-090314-
024930
Repucci, S., & Slipowitz, A. (2022). The global expansion of authoritarian rule. Freedom in the world 2022. Freedom House. https://freedomhouse.org/sites/default/files/2022-02/FIW_2022_PDF_Booklet_Digital_Final_Web.pdf
Riesman, D. (1951). Review of Hollywood: The dream factory, Hortense Powdermaker. American Journal of Sociology, 56(6), 589–592. www.jstor.org/stable/2772480
Roberts, M. E. (2018). Censored: Distraction and diversion inside China's great firewall. Princeton University Press.
Romo, D. D. (2005). Ringside seat to a revolution: An underground cultural history of El Paso
and Juárez, 1893–1923. Cinco Puntos Press.
Röösli, E., Bozkurt, S., & Hernandez-Boussard, T. (2022). Peeking into a black box, the fairness and generalizability of a MIMIC-III benchmarking model. Scientific Data, 9(1), 24. https://doi.org/10.1038/s41597-021-01110-7
Rushkoff, D. (2018, December 12). The anti-human religion of silicon valley. Medium. https://medium.com/team-human/the-anti-human-religion-of-silicon-valley-ac37d5528683
Russell, N. C., Reidenberg, J. R., Martin, E., et al. (2018). Transparency and the marketplace
for student data [Report]. Fordham Center on Law and Information Policy. https://apo.
org.au/node/175271
Santomauro, D. F., Herrera, A. M. M., Shadid, J., Zheng, P., Ashbaugh, C., Pigott, D. M.,
Abbafati, C., Adolph, C., Amlag, J. O., & Aravkin, A. Y. (2021). Global prevalence and
burden of depressive and anxiety disorders in 204 countries and territories in 2020 due
to the COVID-19 pandemic. The Lancet, 398(10312), 1700–1712.
Saul, H. (2016, February 24). Donald Trump declares “I love the poorly educated” as
he storms to victory in Nevada caucus. Independent. www.independent.co.uk/news/
people/donald-trump-declares-i-love-poorly-educated-he-storms-victory-nevada-
caucus-a6893106.html
Scheffler, I. (1973). Reason and teaching. Routledge and Kegan Paul.
Sejnowski, T. J. (2018). The deep learning revolution. The MIT Press.
Senior, J., & Gyarmathy, E. (2021). AI and developing human intelligence: Future learning and educational innovation. Routledge.
Shurkin, J. N. (2006). Broken genius: The rise and fall of William Shockley, creator of the elec-
tronic age. Macmillan.
Sirius, R. U., & Joy, D. (2005). Counterculture through the ages: From Abraham to acid house.
Villard.
Sivabalan, S. (2019, August 13). Argentina's massive sell-off had a 0.006% chance of happening. www.bloomberg.com/news/articles/2019-08-13/argentina-rout-was-4-sigma-event-beckoning-the-bravest-of-brave
Slater, A. (2021, May 18). How artificial intelligence depends on low-paid workers. Tribune. https://tribunemag.co.uk/2021/05/how-artificial-intelligence-depends-on-low-paid-workers
Smyth, J. (2017). The toxic university: Zombie leadership, academic rock stars and neoliberal ideol-
ogy. Palgrave Macmillan.
Sontag, S., & Rieff, D. (2008). Reborn: Journals and notebooks, 1947–1963. Farrar, Straus and Giroux.
Sternberg, R. J. (2020). The Cambridge handbook of intelligence. Cambridge University Press.
Stokel-Walker, C. (2021, November 25). AI has learned to read the time on an ana-
logue clock. New Scientist. www.newscientist.com/article/2298773-ai-has-learned-
to-read-the-time-on-an-analogue-clock/
Stonier, T. (1992). The evolution of machine intelligence. In Beyond information. Springer.
https://doi.org/10.1007/978-1-4471-1835-0_6
Stoycheff, E. (2016). Under surveillance: Examining Facebook's spiral of silence effects in the wake of NSA internet monitoring. Journalism & Mass Communication Quarterly, 93(2), 296–311. https://doi.org/10.1177/1077699016630255
Thiel, P. (2009, April 13). The education of a libertarian. Cato Unbound. Cato Institute.
www.cato-unbound.org/2009/04/13/peter-thiel/education-libertarian/
Urbina, F., Lentzos, F., Invernizzi, C., & Ekins, S. (2022). Dual use of artificial-intelligence-powered drug discovery. Nature Machine Intelligence, 4(3), 189–191. https://doi.org/10.1038/s42256-022-00465-9
Vostal, F. (2016). Introduction: The pulse of modern academia. In Accelerating academia:
The changing structure of academic time (pp. 1–10). Palgrave Macmillan. https://doi.
org/10.1057/9781137473608_1
Wakefield, J. (2021, March 28). AI: Ghost workers demand to be seen and heard. BBC. www.bbc.com/news/technology-56414491
Warofka, A. (2018, November 5). An Independent assessment of the human rights impact
of Facebook in Myanmar. Meta. https://about.fb.com/news/2018/11/myanmar-hria/
Watters, A. (2021). Teaching machines. The MIT Press.
White, L. T. (1974). Medieval technology and social change. Oxford University Press.
Whitman, J. Q. (2017). Hitler’s American model: The United States and the making of Nazi race
law. Princeton University Press.
Wiener, N. (1994). Invention: The care and feeding of ideas. MIT Press.
Wilson, E. O. (1998). Consilience: The unity of knowledge. Knopf: Distributed by Random
House.
Wolfe, P. (1991). On being woken up: The dreamtime in anthropology and in Australian
Settler culture. Comparative Studies in Society and History, 33(2), 197–224. https://doi.
org/10.1017/S0010417500017011
Zhang, S., Roller, S., Goyal, N., Artetxe, M., Chen, M., Chen, S., Dewan, C., Diab,
M., Li, X., Lin, X. V., Mihaylov, T., Ott, M., Shleifer, S., Shuster, K., Simig, D., Koura,
P. S., Sridhar, A., Wang, T., & Zettlemoyer, L. (2022). OPT: Open pre-trained transformer
language models. ArXiv, abs/2205.01068.
Zuboff, S. (2020). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile Books.
Zuckerberg, M. (2012, February 2). Facebook’s letter from Mark Zuckerberg – full text. The
Guardian. www.theguardian.com/technology/2012/feb/01/facebook-letter-mark-
zuckerberg-text
The Annals of Human Genetics. https://onlinelibrary.wiley.com/page/journal/14691809/
homepage/productinformation.html
Deakin University. (2014, October 8). Deakin and IBM unite to revolutionise the student experience. www.deakin.edu.au/about-deakin/news-and-media-releases/articles/deakin-and-ibm-unite-to-revolutionise-the-student-experience
Deakin University. (2015, November 25). IBM Watson helps Deakin drive the digital
frontier. Media Release. www.deakin.edu.au/about-deakin/news-and-media-releases/
articles/ibm-watson-helps-deakin-drive-the-digital-frontier
Eticas Foundation. (2022). The external audit of the VioGén. https://eticasfoundation.org/
wp-content/uploads/2022/03/ETICAS-FND-The-External-Audit-of-the-VioGen-
System.pdf
Human Rights Watch. (2019, August 11). Australia: Press Laos to protect rights dialogue
should address enforced disappearances, free speech. www.hrw.org/news/2019/08/11/
australia-press-laos-protect-rights
IPCC. (2022). Summary for policymakers [H.-O. Pörtner, D. C. Roberts, E. S. Poloc-
zanska, K. Mintenbeck, M. Tignor, A. Alegría, M. Craig, S. Langsdorf, S. Löschke, V.
Möller, A. Okem (Eds.)]. In H.-O. Pörtner, D. C. Roberts, M. Tignor, E. S. Poloc-
zanska, K. Mintenbeck, A. Alegría, M. Craig, S. Langsdorf, S. Löschke, V. Möller, A.
Okem, & B. Rama (Eds.), Climate change 2022: Impacts, adaptation, and vulnerability.
contribution of working group II to the sixth assessment report of the intergovernmental panel on
climate change. Cambridge University Press.
JISC. (2021). AI in tertiary education: A summary of the current state of play. JISC.
https://repository.jisc.ac.uk/8360/1/ai-in-tertiary-education-report.pdf
MIT Media Lab. (2016, January 25). Marvin Minsky, "father of artificial intelligence," dies at 88. https://news.mit.edu/2016/marvin-minsky-obituary-0125
NSCAI. (2021). Final report. The National Security Commission on Artifcial Intelligence.
www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf
President of Ireland (Media Library). (2016, April 7). Speech at the EUA annual conference.
NUI Galway, April 7, 2016.
Tech Transparency Project. (2012, July 11). Google academics inc. – Report. www.tech
transparencyproject.org/articles/google-academics-inc
TUC. (2020). Technology managing people: The worker experience. Trades Union Congress. www.tuc.org.uk/sites/default/files/2020-11/Technology_Managing_People_Report_2020_AW_Optimised.pdf
UCU. (2022). UK higher education. A workforce in crisis. www.ucu.org.uk/media/12532/
HEReport24March22/pdf/HEReport24March22.pdf
WHO. (2021, June 17). Suicide. www.who.int/news-room/fact-sheets/detail/suicide
WHO. (2021, September 13). Depression. www.who.int/news-room/fact-sheets/detail/
depression
INDEX