
2NC

K---Innovation
2NC---OV
Claims that AGI is a “benign good” that will produce a “virtuous
cycle of innovation” are products of a neoliberal tech industry
that authorizes technofascism.
AI advocates like the 1AC assume technology exists in a neutral space and vastly
overstate its effectiveness to procure greater investments and to solidify their hegemony
over the data ecosystem. Millions of workers are sentenced to permanent data mining
operations, where they annotate billions of webpages under 14-hour workdays – and
these generalized AI models are then re-purposed toward racialized ends, including ICE
raids and immigration targeting. This is not a mere externality to the system. The 1AC’s
analogy between humans and computers re-narrates society’s development as a path
toward ultra-rationalism. It sanctions a society conditioned on a sociopathic
commitment to Bayesian statistics, which self-justifies racism by suggesting that
“statistics” are the progenitor of social values – this frame is deployed in the service of
race science, which cares only about the prior distribution of people and justifies
inequalities through numerical abstractions.
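
To make the mechanism concrete (our own formalization, not language from the 1AC): standard Bayesian updating computes $P(\text{risk} \mid \text{group}) = \frac{P(\text{group} \mid \text{risk})\,P(\text{risk})}{P(\text{group})}$. If the prior $P(\text{risk})$ is fitted to a historically unequal record, the posterior reproduces that inequality for every member of the group regardless of the individual, which is exactly the "numerical abstraction" the overview indicts.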

Turns the innovation ecosystem.


The whole AI economy is premised on labor theft. Only 10% of the US workforce is
labeled as “tech workers”, yet ML companies harvest data from every internet user on the
books. That hollows out the innovation ecosystem from the bottom up, producing brittle
innovations: AI models that automate activities but require constant human input to stay
functional, leaving them unusable within a decade. That’s Lanier.

The alternative is humanist technology.


It’s a refusal to analogize the human with the computer, breaking the Silicon Valley
narrative that sees citizens as mere information-processing agents. This re-imagination
is wholly possible. The internet is not a “material” institution, but a social sphere. Google
relies on spreading narratives of humans as computers because doing so deprives us of
our agency to make affirmative choices in our orientation to technology. The modern
internet as we know it has not existed for even two decades, so any naturalization of this
institution speaks from authoritarian ideology, not science. Our Lanier evidence says
that Taiwan successfully created a system of data cooperatives that enrolled over 50% of
its population, so you should assume that this is well within the realm of feasibility.
2NC---FW
1. Debate is a question of “should” not “would” – “should”
implies a set of value assumptions that must be interrogated.
That’s predictable – the 1AC was an AGI advocacy piece, and the
plan is a subset of that ideology. There’s no way to rationalize
the plan without endorsing the “humanness” of machine
learning algorithms, which is the starting condition of our
critique.

2. No offense. Normal means arguments are never entirely
about the plan, and no policy proposal is ever self-contained
from an external interpretation of the world. What’s
“predictable” for them is totally arbitrary anyway – and it’s
always entangled with substance since our links demonstrate
that you should not take the 1AC as a serious court brief but
should see it instead as a tech novella.

3. Technocracy. AI personhood advocates thrive on the
aggressive naturalization of non-agency. Any AFF justification to
bracket out critique dominoes back into our critique.
Bourne 19 – Clea Bourne is Senior Lecturer and convenor of the MA Promotional
Media: Public Relations, Advertising and Marketing at Goldsmiths, University of
London, UK, 2019 (“AI cheerleaders: Public relations, neoliberalism and artificial
intelligence”, Public Relations Inquiry, Available online via USC Libraries, Accessed 10-
21-2022)
AI and neoliberalism: Machina economicus
The term ‘Artificial Intelligence’ or AI was coined in the 1950s by John McCarthy, who
defined it as ‘the endeavor to develop a machine that could reason like a human’
(Dignum, 2018). Sixty years later, this endeavour is not yet reality. AI includes a host of
activities, including cognitive robotics and human-agent–robot interaction (Dignum,
2018). However, much of what we currently call AI is ‘machine learning’, where
machines are taught through complex algorithms, enabled by greater 21st-century
computing power. Machines gather and learn information from the world’s biggest
‘school book’ – the avalanche of ‘big data’ shared by humans online.
The PR industry has much to gain from promoting AI, because of the sheer range of
economic sectors currently profiting from AI technologies. The healthcare sector uses AI
in wearable tech, facial recognition and other diagnostic tools designed to detect vital
signs and physical wellness. The financial sector uses AI to trade securities, offer
‘roboadvice’ to investors and track consumer data for insurance policies. The travel,
leisure and retail sectors all use AI to take customer orders, redirect customer queries
and respond to customer complaints. The defence sector has invested billions in AI-
assisted surveillance, target and decoy technologies.
However, while the PR industry benefits directly from the expansion of AI enterprise, PR
industry bodies have been forced to respond to professional uncertainty about PR jobs
and status, where threatened by AI technologies (Valin, 2018). These industry responses
have taken the form of fact-finding missions, for example, identifying the number of AI
tools now used in PR – estimated to be at least 150, according to one UK study (Slee,
2018). Industry reports generally assess AI tools as a ‘good thing’, enabling PR
professionals to act ‘more quickly and with greater intelligence’ (Weiner and Kochhar,
2016: 5). AI tools are even portrayed as the ‘genie in the lamp’ – able, at long last, to
bestow the respect of company directors on their PR and marketing functions (Tan,
2018). Ultimately, PR industry research cheerily predicts that AI technologies can never
replace PR’s human touch with client-organisations (Davis, 2018).
While industry research is designed to comfort PR professionals, a more unsettling
industry tone has surfaced. Writing in November 2018, PR consultant and former
political adviser, Guto Hari (2018) warned the PR industry that taking a reactive stance
to AI amounted to standing in the way of progress. The correct way for the PR industry
to approach AI, urged Hari, was with anticipation and excitement:
Artificial Intelligence (AI) poses little threat to our industry – but it provides
plenty of opportunities, not least because it will wreak havoc in some sectors and
bring mind-blowing breakthroughs in others.
[…] As communicators, we have a responsibility to talk about AI in a positive way,
to help ease the way for its assimilation into everyday life. Harnessing AI will
allow us to focus on the more human aspects of jobs. Society needs to embrace
machines, seeing them as friends, not foes. (Hari, 2018)

Hari further identified a strategic role for PR in AI discourses, stating that companies
best able to capitalise on AI were ‘the ones with communicators driving the
debate’. Hari’s observations about AI wreaking ‘havoc’ in some sectors, while
bringing ‘breakthroughs’ in others, establishes a direct link between PR and the
current phase of neoliberalism, which venerates disruption and winner-take-all
(Davies, 2014). From the PR industry’s perspective, disruption guarantees PR
consultants a place on corporate speed-dials. This is an important consideration at a time
when the global PR industry has experienced slower rates of annual growth (Sudhaman,
2018). Unfortunately, the PR industry’s quest for growth has meant that cautionary tales
about questionable AI ethics and practices have been largely overlooked (Gregory, 2018).
Legitimising neoliberal futurity through discourse work
Contemporary neoliberalism thrives on a pattern of disruption, followed by growth.
Hence, neoliberal futurity never arrives passively (Canavan, 2015). Sector disruption
results in high-stakes battles to establish and reinforce particular truths (Motion and
Leitch, 2009; Roper, 2005). To this end, neoliberal discourses are typified by
disparities in voice and hegemonic struggles favouring economically powerful
institutions (Roper, 2005). In the case of AI discourses, dominant voices include
governments, think tanks, technology firms, AI investors, global management
consultancies, as well as multinational corporations able to purchase AI technologies.
PR’s strategic objective is to position the latest neoliberal disruption as inevitable and
‘common-sense’ and consequently a ‘public good’ (Roper, 2005).

As with previous phases of neoliberalism, the emerging AI economy is being
aggressively naturalised as the common-sense way of life (Pueyo, 2017) – and a
‘public good’ – through persuasive doctrines enabled by PR. Unfettered
support for 21st-century AI technologies goes right up the food chain, from
multinational businesses to national and regional governments, all keen to
compete in the AI ‘space race’. AI spending is forecast to grow from US$640 million in
2016 to US$375 billion by 2025 (Peet and Wilde, 2017). Some scholars argue that the
advent of a so-called ‘superintelligent’ economy will enable neoliberalism to enter a
transformative phase, in which machina economicus will replace homo economicus as
the ultimate rational economic actor (Pueyo, 2017). AI is humanity’s inevitable future –
or so we are told. Just as with previous phases of neoliberalism, it would seem ‘There is
no alternative’.1
Neoliberalism’s crisis of voice
Neoliberalism is perhaps best associated with Friedrich von Hayek; although the broader
neoliberal intellectual project took on many forms as it evolved in London, New York,
Chicago, Freiburg and Vienna (Davies, 2014). Today, neoliberalism encompasses an
ideology, a mode of governance, and a policy package (Steger and Roy, 2010). Neoliberal
ideology has been acutely instrumental in shaping contemporary global capitalism,
through its focus on a nation’s capacity to generate wealth (Davies, 2014: 114).
Neoliberalism’s common mantra is to organise society by an economistic market logic.
This has clear implications, not just for nations, but for individuals in all walks of life.
Whether we are at school, work or play, the project of neoliberalism is to judge us and
measure us as if we were acting in a market (Davies, 2014).
Media sociologist, Nick Couldry (2010) argues that neoliberalism’s excessive valuation of
markets and market logic has created a crisis of ‘voice’ in democratic societies.
Couldry distinguishes ‘voice’ on two levels: voice as process and voice as value. Voice as
process is a basic feature of human action; it is how we give account of our lives and
conditions through our own stories and narratives. To deny anyone her capacity for
narrative, her potential for voice, ‘is to deny a basic dimension of human life’ (Couldry,
2010: 7). Couldry (2010) argues that, under neoliberalism, voice as a process is not
valued precisely because neoliberalism imposes a reductive view of economic life as
‘market’ onto the political (p. 135). Couldry goes further, contending that
neoliberalism denies voice altogether, by operating with a view of human life that is
incoherent. This is why voice as value is the more significant concept, because choosing
to value voice means discriminating in favour of frameworks for organising human life
and resources that put the value of voice into practice.
Couldry (writing with Powell, 2014) later turns his attention to voice and agency in an
era of algorithmic-driven automation, advising scholars to rethink how voice might
operate in digital environments. Couldry and Powell (2014) argue that the value of voice
is ‘not immediately compatible with a world saturated with the automated aggregation of
analytic mechanisms that are not, even in principle, open to any continuous human
interpretation or review’ (p. 4). The suggestion here is that AI and automation, with its
more automatic sensing and calculative logic, eliminates the accountability of voice as a
subjective form of expression. Couldry and Powell (2014) maintain, however, that
‘something similar to “voice” is required in this new world’ (p. 4). They argue that, at
present
the proxy for voice in the algorithmic domain is the notion that data gathering
processes ought to be transparent, and the logic of calculation revealed. A focus
on transparency could begin to foreground notions of accountability in data
calculation, ownership and use. (Couldry and Powell, 2014: 4)
Couldry and Powell suggest that a refined concept of transparency, ‘sensitive to the
meaning that data trails might form’ would begin to address the concerns of voice in the
automated age. This has implications for how an ethical PR might emerge in an age of AI
and automation.
PR, corporate voice and market logic
PR has a complex relationship with voice as value, as Anne Cronin argues in her 2018
book, Public Relations Capitalism, in which she draws heavily on the work of
philosopher, Hannah Arendt (1998 [1958]). Just as neoliberal democracies appear
evacuated of true democratic content, PR promises voice to the public ‘to engage as
consumers or stakeholders in debates and to impact on decisions and issues they
consider significant … PR promises that the public will be heard’ (Cronin, 2018: 54). In
other words, says Cronin (2018), PR has come to inhabit and exploit neoliberalism’s
democratic gap, ‘speaking the language of democracy and offering to both publics and
organisations modes of engagement, agency and voice’ (p. 2). This Arendtian promise
is an important pillar of normative PR scholarship, which stipulates that ‘excellent’ PR
ought to give voice to publics in management decisions affecting them (Grunig et al.,
2002). This laudable perspective is honoured more in the breach than the observance.
Even the most communicative organisation may only create the impression of providing
the public with voice, a fantasy further perpetuated as more organisations communicate
via social media (Moore, 2018).
Attempting to position PR as a source of public voice is deeply problematic, considering
that PR has been a chief advocate of neoliberal capitalism for nearly a century (Steger
and Roy, 2010). Even before neoliberalism emerged, Logan (2014) traces a very early
connection between market ideology and PR to the 1800s, when PR worked in
tandem with other occupations to bestow ‘personhood’ on US corporations.
Corporations were once deemed ‘artificial persons’ under the law. But, after costly,
sustained bombardment of the US legal system, corporations successfully co-opted the
Fourteenth Amendment2 to acquire legal ‘personhood’, securing greater rights over
individuals (Logan, 2018; Mark, 1987). This corroborates Couldry’s (2010) view that
having a voice requires practical resources and symbolic status in order to be recognised
by others. Logan (2014) goes on to demonstrate how US big business, ably supported by
PR and advertising, successfully erased the 19th-century image of the ‘ruthless,
corporate giant’, to create a more familiar, accessible ‘economic man’; specifically,
the efficient ‘American business man’. With newfound legal rights, corporations
expanded aggressively across the US landscape into the 20th century.
Surma and Demetrious (2018) add insight from the 1930s and 1940s, tracing PR’s
involvement in the emergence of the neoliberal project, and ‘unabashed marshalling of
public relations as the voice of business, the free market and free enterprise’ (p. 93).
Drawing on Poerksen (1995: 4), Surma and Demetrious identify one of PR’s
contributions to neoliberal rhetoric, through disseminating hollow, portable, ‘plastic
words’, which enter everyday language to become accepted as ‘common sense’.
Surma and Demetrious point to PR’s role in propagating plastic words such as
‘management’, ‘solution’ and ‘progress’, which connote positive, futuristic notes,
underscored by apocalyptic imperative. Reading Surma and Demetrious’s (2018)
work in conjunction with Logan’s establishes convincingly how often PR provides voice –
not to the public – but to corporations, professional lobbyists, think tanks and big
business interests (Miller and Dinan, 2008). Twentieth-century alignment between PR
and neoliberalism is further cemented in the work of J. Grunig’s excellence theory, which
traces its roots back to his 1966 thesis on the role of information in decision making
(Verčič and Grunig, 2000: 10). Grunig’s subsequent body of work acknowledged that
stakeholders’ lack of information could produce detrimental asymmetries. Grunig’s
symmetrical communication framework was eventually devised as a means of providing
voice to all stakeholders. Symmetrical communication is characterised by an
organisation’s willingness to listen and respond substantively to stakeholder concerns
and interests (Roper, 2005). At the other end of the spectrum lies two-way asymmetrical
communication, which Grunig describes as organisations gleaning information from
stakeholders, using the information to allay stakeholder concerns, but failing to alter
behaviour accordingly (Roper, 2005).
The 1980s was another pivotal phase for neoliberalism, when many developed nations
deregulated their economies and opened market borders. The PR industry experienced
its greatest expansion during this period. Much of this expansion was due to PR’s
successful rhetorical backing of neoliberal market logic over other ideas of how
economies might be run (Edwards, 2018, citing Dinan and Miller, 2007). For example,
UK policymaking since deregulation has primarily benefitted corporate elites.
Throughout the 1980s and 1990s, corporate arguments and ideas were given greater
share of voice across mass media. PR expertise was regularly summoned to achieve
public support for neoliberal policy-making and pro-business political parties (Davis,
2002). By the 2000s, it became increasingly common to find books like Argenti and
Forman’s (2002) The Power of Corporate Communication: Crafting the Voice and Image
of Your Business. This treatise asserts that the real value of PR is providing corporations
with a ‘coherent, consistent voice’ in a noisy, chaotic business environment (p. 13). The
book suggests that the PR profession sees voice in a manner that is antithetical to either
Couldry’s (2010) or Cronin’s (2018) view of voice and democracy, but that it also
propagates flawed Grunigian logic.
AI, neoliberalism, non-human voice
Despite sustained informational disparities, the public temporarily found new voice after
the 2008 financial crisis, through organised grassroots protest including the high-profile
Occupy movement, which made sophisticated use of PR (Kavada, 2015). But Occupy, and
other amorphous collectives, eventually fizzled out. Their vain hope was that market
failures exposed by the crisis would spell neoliberalism’s demise. Instead, neoliberalism
entered a ‘state of exception’, as governments ignored usual rules of competitive
economic activity in order to rescue the financial system and preserve the status quo
(Davies, 2014). Resolving this post-crisis mode would require a single cognitive
apparatus to gain dominance, providing a shared reality for political, business and expert
actors to inhabit (Davies, 2014). Populist ideologies such as US protectionism, or
Britain’s ‘Brexit’ from Europe, have proved too divisive for business interests to deliver a
successful, shared narrative of the future. This is because different economic sectors –
agriculture, manufacturing, retail, financial services – view international trade and
labour markets in different ways. By contrast, AI and automation offers the entire
business sector a shared narrative, a demonstrable vision of the future and a promise
to ‘jump start’ the global economy through advanced technologies. Newness and
change are precisely the messages PR is so well-equipped to sell (Moore, 2018), and it
has set about doing so. In the process, PR has helped reinvigorate some of
neoliberalism’s most portable, hollow plastic words; ‘efficiency’, ‘innovation’ and
‘progress’. Whereas neoliberalism’s homo economicus has supposedly proved himself
both demanding and perennially inefficient, AI is the coveted machina economicus, able
to process information rationally as homo economicus never could, while remaining
unaware of, and unaffected by, political uncertainty and societal malaise.

According to Mirowski and Nik-Khah (2017), an obsession with computing


machines over humans can be traced back to neoliberal thinker , Friedrich von
Hayek, and his preoccupation with rational choice as an organising principle for
understanding the political economy. Hayek’s vision of the ultimate rational actor was
paralleled in the US military, which served as the incubator for the modern
computer (Mirowski and Nik-Khah, 2017). Through a marriage of engineering and
mainstream economic thought, markets were reconceptualised as information
processors. As for humans, we were modelled less as thinkers and more as
inefficient, low-powered processors, a means of information circulation, rather
than thinking subjects (Mirowski and Nik-Khah, 2017). With the advent of AI, the
economy supposedly has the ultimate capital-labour hybrid, the ‘perfect’ model of
efficiency. Yet what is too often sidestepped in AI discourses, is that each phase of
neoliberalism has its winners and losers. Not everyone can be successful in this new,
information-driven system (Kember, 2002). Furthermore, an AI-led economy must
inevitably incorporate non-human agents into existing human social
exchanges. Hence, just as the 20th century legislated corporate personhood, the 21st
century could eventually legislate non-human personhood for AI. While this may
be a long way off, AI has already further deepened neoliberalism’s crisis of voice, as I
discuss in the next section.
Promoting AI’s national competitiveness
PR activity in support of AI’s national competitiveness is driven by assorted business
interests. Yet AI business interests are often more ably promoted by governments, which
provide the unifying voice for ‘seizing and reimagining’ countries and regions as
competitive actors in a global contest (Davies, 2014). Whichever state can prove its
economy and society to be most ‘adaptable, networked and future-oriented’ will emerge
victorious in neoliberalism’s latest global game (Davies, 2014: 118). The United States
started out as the clear leader in AI. China quickly joined the United States as a
frontrunner. The United Kingdom then signalled its intent to harness AI innovation to
‘underpin future prosperity’, through an ‘AI Sector deal’, widely promoted by respective
government PR teams. A UK government policy paper, co-authored by Facebook’s Vice
President of AI, equated the transformative power of AI to that of the medieval printing
press (UK Government, 2018):
The huge global opportunity AI presents is why the industrial strategy white
paper identified AI and data as 1 of 4 Grand Challenges – in which the UK can
lead the world for years to come. […] The UK is well placed to do this. We are
already home to some of the biggest names in the business […] We need to be
strategic … focusing on the areas where we can compete globally. […] A
revolution in AI is already emerging. If we act now, we can lead it from the front.
(UK Government, 2018)
Through its co-authorship of the policy paper, Facebook received the UK government’s
tacit endorsement of its AI business interests, as well as a platform to voice particular
truths about AI technologies. European countries have also displayed AI ambitions, as
have cities and regions. The EU pledged €1.5 billion to AI during 2018–2020; France
pledged the same amount to French businesses until 2022 (Sloane, 2018). Germany has
invested in a new ‘Cyber Valley’, while the Mayor of London has set out the city’s stall as
‘the AI capital of Europe’ (Mayor of London, 2018: 5).
Numerous vested interests – institutional investors, investment banks, tech companies,
management consultancies and advisory firms – have employed PR to campaign for a
future shaped by AI. Expert status, ample resources and geographic reach give
these firms a booming corporate voice in the AI race. Many are global organisations
with large, well-resourced communications teams, able to invest hundreds of
millions in sustained AI thought leadership activity, including white papers,
surveys and reports. Global firms are able to generate their ‘big ideas’ about AI
internally. These big ideas are developed and distilled by analysts and technical
writers, distributed and curated by PR specialists and sales people, and endorsed by
company spokespeople at senior levels (Bourne, 2017). The largest of these firms
can also tailor their thought leadership to different economic sectors and geographic
regions, helping ideas about AI to circulate and gain traction with policymakers
and business leaders around the world (Bourne, 2017).
2NC---AT: Extinction Outweighs
“Extinction outweighs” assumes that society should make decisions in accordance with
linear utility optimization. It mirrors the ultra-rationalism we critiqued earlier and
justifies hyper-eugenics. You should instead place exponential value on marginalized
groups when assessing impacts.
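
Put as math (our gloss, not carded evidence): the AFF’s framing ranks outcomes by linear expected utility, $EU = \sum_i p_i u_i$, which treats every unit of harm as fungible. The weighting we defend is closer to $EU' = \sum_i p_i w_i u_i$, where the weight $w_i$ grows exponentially with a group’s marginalization, so low-probability extinction math no longer automatically trumps ongoing structural violence.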
2NC---AT: Perm
Not sure how complex this has to be. They said that AGI is
possible and good, and they said that we should see AI as a
person. We are axiomatically opposed to that description – all
our offense above assumes this point. Believing the claims in the
1AC are true eviscerates the alternative, so we’re mutually
exclusive.

Just for the record, they did read an AGI advantage. Inserting
their two pieces of Turner evidence below.

Jacob 1AC Turner 19. MA, Law, Oxford; LLM, International Law and Legal Studies,
Harvard Law; Barrister, Fountain Court Chambers; not John. “Unique Features of AI.”
Chapter 2 in Robot Rules: Regulating Artificial Intelligence. Palgrave MacMillan. 2019.
https://doi.org/10.1007/978-3-319-96235-1

AlphaGo Zero is an excellent example of the capability for independent development in
AI. Though the other versions of AlphaGo were able to create novel strategies unlike
those used by human players, the program did so on the basis of data provided by
humans. Through learning entirely from first principles, AlphaGo Zero shows that
humans can be taken out of the loop altogether soon after a program’s inception. The
causal link between the initial human input and the ultimate output is weakened yet
further.
DeepMind say of AlphaGo Zero’s unexpected moves and strategies: “These moments of creativity give us confidence that
AI will be a multiplier for human ingenuity, helping us with our mission to solve some of the most important challenges
humanity is facing”.131 This may be so, but with such creativity and unpredictability
comes attendant dangers for humans, and challenges for our legal system.

Jacob 1AC Turner 19. MA, Law, Oxford; LLM, International Law and Legal Studies,
Harvard Law; Barrister, Fountain Court Chambers; not John. “Unique Features of AI.”
Chapter 2 in Robot Rules: Regulating Artificial Intelligence. Palgrave MacMillan. 2019.
https://doi.org/10.1007/978-3-319-96235-1
Thirdly, applying human moral requirements to AI may be inappropriate given that AI does
not function in the same manner as a human mind. Laws set certain minimum moral standards for
humanity but also take into account human frailties. Many of the benefits of AI result from it
operating differently from humans, avoiding unconscious heuristics and biases that
might cloud our judgement.102 The director of the Digital Ethics Lab at Oxford University, Luciano Floridi,
has written:

AI is the continuation of intelligence by other means. ... It is thanks to this decoupling
that AI can colonise tasks whenever this can be achieved without understanding,
awareness, sensitivity, hunches, experience or even wisdom. In short, it is precisely
when we stop trying to reproduce human intelligence that we can successfully replace
it. Otherwise, AlphaGo would have never become so much better than anyone at
playing Go.103
2NC---AT: No Intrinsic Link
Mischaracterization. The 1AC’s critical narrative thread is a blanket endorsement of the
application of AI to society. They have no reason why their scenario is insulated from our
critique. The supposed “neutrality” of technology is a self-justifying deferral from social
context, manifesting in drone chip manufacturers absolving themselves of genocide
because the chips “weren’t meant to be used like that”. You should reject appeals to “not
intrinsic to our AI”. While ML models in the abstract might not be inherently racially
motivated, any suggestion that innovation is a benign good is more than likely to amplify
and cement social inequality.
2NC---AT: AI Innovation
1. Artificial, not intelligent. AFF authors overstate the use-
case of the technology, ignoring that most advancements in
the field have been made through brittle supervised neural
networks that require constant human annotation. Google
Search would be useless without billions of manually
indexed Wikipedia articles. Google Translate must have a
corpus that’s updated daily with newly translated words. To
claim AGI, they will need to procure evidence that
rigorously demonstrates an AI system that can label its
own inductive samples across any real-world context.
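
A minimal sketch of the dependence described above (our illustration; the corpus and function are hypothetical, not anything from the AFF evidence): a lookup-style “translator” works only on pairs a human has already annotated, and silently fails the moment the world adds a word.

```python
# Illustrative sketch only: a "model" that is a thin wrapper around
# human-annotated labels fails on anything outside its corpus.
corpus = {"hello": "bonjour", "world": "monde"}  # human-annotated pairs

def translate(sentence: str) -> str:
    # Every out-of-corpus word needs a new round of human annotation.
    return " ".join(corpus.get(w, f"<unknown:{w}>") for w in sentence.split())

print(translate("hello world"))       # -> bonjour monde
print(translate("hello metaverse"))   # -> bonjour <unknown:metaverse>
```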

2. Unfalsifiable. Any shiny AFF study or hyperbolic claim
about the merits of AI should be treated with intense
skepticism. AI papers do not publish datasets,
hyperparameters, or computational packages, so the whole
field is closer to alchemy than science – which is proven by
its ongoing replication crisis where medical classifiers are
deployed and then result in a worse-than-random
performance, causing thousands of deaths. That’s Horgan.
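
To see how a deployed classifier can score worse than random, here is a minimal sketch (our construction, not the Horgan study): a model that learns a spurious shortcut in its training data collapses once the correlation flips in the clinic.

```python
# Illustrative sketch: shortcut learning makes a classifier worse than random
# under distribution shift -- one mechanism behind failed medical classifiers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shortcut_corr):
    y = rng.integers(0, 2, n)
    signal = y + rng.normal(0, 3.0, n)                            # weak true signal
    shortcut = np.where(rng.random(n) < shortcut_corr, y, 1 - y)  # e.g., scanner ID
    return np.column_stack([signal, shortcut]), y

X_train, y_train = make_data(2000, 0.95)    # shortcut tracks the label in training
X_clinic, y_clinic = make_data(2000, 0.05)  # correlation flips at deployment

model = LogisticRegression().fit(X_train, y_train)
print("in-distribution accuracy:", model.score(X_train, y_train))    # ~0.95
print("deployment accuracy:     ", model.score(X_clinic, y_clinic))  # well below 0.5
```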

3. Overwhelming data. AI fails for pandemic prevention.


Chakravorti 22 – Bhaskar Chakravorti is the Dean of Global Business at The
Fletcher School at Tufts University and founding Executive Director of Fletcher’s
Institute for Business in the Global Context. He is the author of The Slow Pace of Fast
Change; March 17th (“Why AI Failed to Live Up to Its Potential During the Pandemic”,
Harvard Business Review, Available online at https://hbr.org/2022/03/why-ai-failed-
to-live-up-to-its-potential-during-the-pandemic, Accessed 11-08-2022)

The Covid-19 pandemic was the perfect moment for AI to, literally, save the world. There was
an unprecedented convergence of the need for fast, evidence-based decisions and large-
scale problem solving with datasets spilling out of every country in the world. For health
care systems facing a brand new, rapidly spreading disease, AI was — in theory — the ideal tool. AI could
be deployed to make predictions, enhance efficiencies, and free up staff through automation; it could help rapidly process
vast amounts of information and make lifesaving decisions.

Or, that was the idea at least. But what actually happened is that AI mostly failed.
There were scattered successes, no doubt. Adoption of automation picked up in retail warehouses and airports; chatbots
took over customer service as workers were in lockdown; AI-aided decisions helped narrow down site selections for
vaccine trials or helped speed up border crossings in Greece.

In general, however, in diagnosing Covid, predicting its course through a population,
and managing the care of those with symptoms, AI-based decision tools failed to
deliver. Now that some of the confusion of the pandemic’s early days has settled, it’s time to reflect on how AI
performed on its own “Covid test.” While this was a missed opportunity, the experience provides clues for how AI systems
must evolve to realize the elevated expectations for what was the most talked about technology of the past year.

Where AI Failed

At the outset, things looked promising. Machines beat humans in raising the early alert about a mysterious new virus out
of Wuhan, China. Boston Children’s Hospital’s HealthMap system, which scrapes online news and social media for early
signals of diseases, along with a Canadian health news scraper, BlueDot, picked up warning signs. BlueDot’s algorithm
even predicted cities most at risk if infected people were to travel, all days before the WHO and weeks before the rest of
the world caught up.

As the world officially went into lockdown in 2020, it was clear that AI’s game-changing contribution would be in rapid
prediction — diagnosis, prognosis, and forecasting the spread of an emergent unknown disease, with no easy way to test
for it in a timely way.

Numerous AI-enabled teams mobilized to seize the opportunity. At New York’s Mount
Sinai hospital, for example, a team designed an AI system to quickly diagnose Covid-19 using
algorithms trained on lung CT scan data from China. Another group at MIT created a
diagnostic using algorithms trained on coughing sounds. A third team, an NYU and
Chinese collaboration, used AI tools to predict which Covid-19 patients would develop
severe respiratory disease. We had heard for years about AI’s transformative potential, and suddenly there was
an opportunity to see it in action.

So, how did these AI-powered Covid predictors work out? Put bluntly, they landed with a
thud. A systematic review in The BMJ of tools for diagnosis and prognosis of
Covid-19 found that the predictive performance was weak in real-world clinical
settings. Another study at the University of Cambridge of over 400 tools using deep-
learning models for diagnosing Covid-19 applied to chest x-ray and CT scan data
found them entirely unusable. A third study reported in the journal, Nature,
considered a wide range of applications, including predictions, outbreak detection,
real-time monitoring of adherence to public health recommendations, and
response to treatments and found them to be of little practical use.
We can learn from these disappointments as we gear up to build back a better AI, however. There are four places where
the fault lines appeared: bad datasets, automated discrimination, human failures, and a complex global context. While
they relate to Covid-19 decisions, the lessons are widely applicable.

The Danger of Bad Datasets

AI decision-making tools are only as good as the data used to train the underlying
algorithms. If the datasets are bad, the algorithms make poor decisions. In the context of Covid,
there are many barriers to assembling “good” datasets.

First, the breadth of Covid symptoms underscored the challenge of assembling
comprehensive datasets. The data had to be pulled from multiple disparate
electronic health records, which were typically locked away within different
institutional systems and their corresponding siloes. Not only was each system separate,
they also had different data governance standards with incompatible consent and
confidentiality policies. These issues were amplified by health care systems
spanning different countries, with incompatible patient privacy, data governance,
and localization rules that limited the wholesale blending of such datasets.
The ultimate impact of such incomplete and poor-quality data was that it resulted in poor
predictions, making the AI decision tools unreliable and untrustworthy.

A second problem arose from the way data was collected and stored in clinical settings. Aggregated
case counts are easier to assemble, but they may omit key details about a patient’s history
and other demographic, personal, and social attributes. Even finer details around when
the patient was exposed, exhibited symptoms, and got tested and the nature of the
symptoms, which variant they had been infected with, the medical interventions and
their outcomes, etc., are all important for predicting how the virus might propagate. To
compound the problems, some datasets were spliced together from multiple sources,
introducing inconsistencies and redundancies.

Third, a comprehensive dataset with clues regarding Covid symptoms, how the disease might spread, who is more or less
susceptible, and how to manage the disease ought to draw from multiple sources, given its newness. In addition to data
from the formal health care settings, there are other critical information sources, datasets, and analyses relevant for
predicting the pathways of a novel and emergent disease. Such additional data may be drawn from multiple repositories,
effectively tapping into the experiences of people grappling with the disease. Such repositories could include Twitter,
professional message boards, analyses done by professionals and amateurs on “open-source” platforms, medical journals,
blogs, and news outlets. Of course, once you account for so many disparate sources of relevant data, the process of
integration, correcting for wrong or misinformation, fixing inconsistencies, and training algorithms increased the
complexity of creating a full dataset.

Automated Discrimination

Even when there were data available, the predictions and decisions recommended by
health care management algorithms led to potentially highly discriminatory
decisions — and concerns that some patients received worse care. This is because the
datasets used to train the algorithms reflected a record of historical anomalies and
inequities: lower levels of access to quality healthcare; incorrect and incomplete
records; and deep-seated distrust in the health care system that led some groups to avoid
it.
There are broad concerns about the negative impacts of AI bias, but during the pandemic,
the consequences of such bias were severe. For example, consider a pre-Covid study in Science that found
that Black patients were assigned the same risk level by an algorithm as white
patients, even though the latter were not as sick — leading to inadequate medical
care for the Black patients. Looking ahead, as Black and Hispanic Covid-19 patients suffered higher
mortality rates than white patients, algorithms trained on such data could
recommend that hospitals redirect their scarce resources away from Black and Hispanic
patients.

The ultimate impact of such automated discrimination is even more distortionary when we consider that these
disadvantaged groups have also been disproportionately affected by the most severe cases of Covid-19 — in the U.S., Black,
Hispanic, and Native Americans were about twice as likely to die from the disease as white patients.

Human Error

The quality of any AI system cannot be decoupled from people and organizations.
Behaviors, from choosing which applications and datasets are used to interpreting the decisions, are shaped by
incentives and organizational contexts.

The wrong incentives can be a big problem. Managers overseeing health care systems often had few
incentives to share data on patients — data may have been tied to revenues, or
sharing it may raise concerns over patient confidentiality. For researchers,
rewards were often aligned with sharing data with some select parties but not everyone.
Moreover, there were few career incentives to validating existing results, as there is
greater glory in producing new findings rather than replicating or validating other
studies. This means that study results may not have applied in a wide enough variety of
settings, making them unreliable or unusable and causing caregivers to hesitate to use tools that had not been proven in
multiple settings. It is particularly risky to experiment with human health.

Then, there’s the issue of data entry errors. Much of the data accumulated on Covid-19
involved environments in which health care workers were operating under pressure
and extraordinarily heavy caseloads. This may have contributed to mislabeled and
incomplete datasets — with mistakes showing up even in death certificates. In many countries, health
care systems were underreporting Covid-19 cases, either because they were encouraged to do so by the
authorities, because of unclear guidelines, or simply because staff were overwhelmed.

Even with AI tools on hand, the humans responsible for making decisions often lacked
critical interpretive capabilities — from language to context awareness or the ability to spot biases and
mistakes. There isn’t, as yet, a uniformly accepted code of ethics, or a checklist, that gives caregivers a sense of when to
apply AI tools versus mitigating harms by using judgment. This could lead to inconsistent use or misuse of the AI tools
and eventually undermine trust in them.

Complex and Uneven Global Context

A pandemic, by definition, cuts across different political, economic, and sociocultural systems. This complicates the
process of assembling a comprehensive dataset that aggregates across different countries with widely applicable lessons.
The pandemic underscored the challenge of deriving universally applicable decision tools to manage human health across
all health care settings regardless of geographic location. Appropriate medical interventions depend on many factors, from
biology to institutional, sociopolitical, and cultural forces to the local environment. Even if many facets of human biology
are common across the world, the other factors vary widely.

For one, there are differences across countries in terms of their policies regarding data
governance. Many countries have data localization laws that prevent the data from
being transported across borders. There is no international consensus on how
being transported across borders . There is no international consensus on how
health care data should be shared. While the preexisting international network for the sharing of influenza
genome sequence data was extended to the sharing of sequences for Covid-19, deeper data-sharing collaborations between
countries could have helped with ongoing management of the disease. The absence of broader sharing agreements and
governance was a critical barrier.

Second, there were differences between developed and developing countries regarding sharing of health care data. Some
researchers argue that genome sequences should be shared on open databases to allow large-scale analyses. Others worry
about exploitation; they are concerned that researchers and institutions from poorer countries weren’t given adequate
credit and the benefits of the data sharing would be limited to rich countries.

Third, history and the sociopolitical contexts of countries and their ethical frameworks for data sharing even within their
own citizenry are different, giving rise to differences in the willingness to have personal data collected, analyzed, and
shared for public use. Consider the varied experiences with AI-aided exposure identification and contact tracing apps.

South Korea presented an extreme example of intrusive data collection. The country deployed contact tracing technology
together with widespread testing. Its tracking apps were paired with CCTV footage, travel and medical records, and credit
card transaction information. Koreans’ willingness to tolerate this level of intrusion can be traced to the country’s history.
The previous administration had botched its response to the 2015 MERS outbreak, when it shared no information about
hospitals visited by infected citizens. This led to public support for legislation giving health authorities access to data on
infected citizens and the right to issue alerts. In contrast, the German government’s contact tracing app was rejected by
the public once a highly critical open letter from experts raised fears of state surveillance. As a result, Germany abandoned
the centralized model for a decentralized alternative. Again, history provides an explanation. Germans have lived through
two notorious surveillance regimes: the Gestapo during the Nazi era and the Stasi during the Cold War. Centrally
controlled state data collection was not destined to be popular.

Finally, the
data on patients from one country may not be good predictors in other
countries. A variety of other factors from race, demographics, socioeconomic
circumstances, quality of health care, immunity levels, co-morbidities, etc., make a
difference.

1. Pharmaceutical innovation has peaked – proteins are
genetically limited, and orphan drugs dominate investment.
Smith 18 – Drew Smith, Former Director of Research and Development at
MicroPhage, PhD in Molecular Biology from the University of Colorado, 2018 (“We Have
Reached Peak Pharma. There’s Nowhere to Go But Down”, Undark Magazine, January
2nd, Available Online at https://undark.org/2018/01/02/peak-pharma-drug-discovery/,
Accessed 04-01-2022)
Drugs are the agents that accomplish these goals. To a first approximation, drugs are
small molecules that bind to specific large molecules. This is the one-disease,
one-protein, one-drug paradigm, and it is the essential value proposition of the
pharmaceutical industry. Companies identify protein targets and make drugs that alter
target behavior. They are very, very good at this. So good that the supply of new drugs
largely depends on the supply of new drug targets.

The number of protein molecules that are plausible drug targets is large, but far from
infinite. Each of these proteins is encoded by a gene; one of the surprises of human
genomics is just how few protein-coding genes there are. Pre-genome estimates assumed
that creatures as complicated and exquisite as humans could not possibly be specified by
less than a hundred thousand genes. The true number is closer to 19,000, a bit fewer
than small worms that live in the soil.
The number of proteins encoded by these genes that have anything to do with disease is
much smaller, amounting to perhaps a thousand in total. Of these, more than half
have already been “mined” by pharma: a current estimate is that our pharmacopeia
targets 555 proteins in total. If we knew nothing else about drug discovery and
development, we would know that the pace of new drug introduction is bound to decline.
But we do know a good deal more. We know that the rate of new drug approval (about 27
per year) has held steady for the past two decades, with no sign of a bump from
genomics. And we know that the clinical value of these new drugs is shrinking,
even as the search and exploitation of new targets intensifies.

We know that the cost of bringing each of these new drugs to market increases at an
exponential rate. Indeed this rate, 9 percent per year, is so steady, persisting
unchanged through different regulatory regimes and new technological advances, that
it has been given a name: “Eroom’s Law” (Moore’s law in reverse). Nine percent may
not sound like much, but it means that costs double every 8 years. In less than a
decade, the cost of a new drug approval, now $2.6 billion, will be at $5 billion. In 16
years, it will be $10 billion.
The dynamics of this decline are precisely those of a gold mine. The fist-sized nuggets
have all been found, the gravel and sand is getting more expensive to recover, and
soon there will be nothing but dust.
Knowing this, you don’t have to have a degree in economics to figure out that the day will
come when the average new drug candidate is a money loser. That day has already
arrived for some drugs. Britain’s Office of Health Economics calculates that the value of
new antibiotic candidates is negative $45 million. Pharmaceutical companies
figured this out several years ago and most have eliminated their antibiotic R&D
programs. Other R&D programs are on the chopping block.

Increasingly, the new drugs that are approved share two characteristics: (1) they are
“orphan” drugs that treat limited patient populations and (2) they are obscenely
expensive. Prior to the Orphan Drug Act of 1983, U.S. drug companies introduced
about one orphan drug (defined as treating fewer than 200,000 patients) per year.
Orphans — which include “personalized” medicines — now constitute about 40
percent of all new drug approvals and will soon be a majority. Their median price? A
cool $83,000 per patient per year.

Get used to this. Moderately-priced mass-market drugs will disappear. Or rather,
they will go off-patent and become generics. With no R&D expenses to recoup, they
will become cheap commodities, costing a few dozen or hundred dollars per treatment.
New drugs, especially those protected from competition by the Orphan Drug Act, will
cost hundreds of thousands of dollars per year. Don’t be surprised when the first
million-dollar treatment hits the market.

These developments are no kind of tragedy — drugs cannot do much more to lengthen
human lifespan. Nor are these developments the result of a conspiracy or of unusual
levels of greed. They are just the end-stage of the depletion of a resource.

The one-disease, one-protein, one-drug paradigm has served us well, but is
approaching the limit of its usefulness. We have already found most of the safe
and effective drugs that we will ever find. In time, we will no longer look for salvation
in a pill, but will learn to engineer our internal environments — particularly our immune
systems and microbiomes — and detoxify our external environments. Some drug
companies will adapt and thrive, others will die. None of them will reliably continue to
make money by cranking out small molecules that bind to big molecules. I’m not alone in
thinking that we may have reached peak pharma, and there is nowhere to go but
down.
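
The Eroom’s Law arithmetic in the Smith evidence can be verified directly (our sanity check, not part of the card): 9 percent annual growth doubles costs roughly every 8 years and reproduces the card’s $5 billion and $10 billion figures.

```python
# Sanity check of the Smith 18 figures: 9% annual cost growth doubles
# drug-approval costs roughly every 8 years.
import math

rate = 0.09
print(f"doubling time: {math.log(2) / math.log(1 + rate):.2f} years")  # ~8.04

cost_now = 2.6e9  # current cost of a new drug approval, per the card
print(f"in 8 years:  ${cost_now * (1 + rate) ** 8 / 1e9:.1f}B")   # ~$5.2B
print(f"in 16 years: ${cost_now * (1 + rate) ** 16 / 1e9:.1f}B")  # ~$10.3B
```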

2. The math doesn’t add up. We have 3 billion nucleotides in our
genome. Enumerating every possible ATCG combination would
require checking 4 to the 3 billion sequences, which even on the
fastest supercomputer possible would take longer than the heat
death of the universe. That was cross-x.
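
The cross-x arithmetic holds up under a quick back-of-envelope check (our sketch; the exaflop speed and heat-death horizon are rough assumptions, not carded figures):

```python
# Back-of-envelope: enumerating 4^(3 billion) nucleotide sequences vs. the
# total operations available before the heat death of the universe.
import math

genome_length = 3_000_000_000                     # ~3 billion base pairs
log10_sequences = genome_length * math.log10(4)   # log10(4^(3 billion)) ~ 1.8e9

ops_per_second = 1e18                             # generous exascale supercomputer
heat_death_years = 1e100                          # common rough upper bound
log10_ops = math.log10(ops_per_second) + math.log10(heat_death_years * 3.15e7)

print(f"log10(sequences to enumerate): {log10_sequences:.2e}")  # ~1.81e9
print(f"log10(ops before heat death):  {log10_ops:.1f}")        # ~125.5
# An exponent gap of ~1.8 billion vs. ~126: enumeration is physically impossible.
```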
