
2ac – shirley!

capitalism – 2ac
1. FW – judge should evaluate the unique consequences of the plan – the aff
is constrained by uniqueness, the negative needs to be as well. Indictments of
assumptions should affect assessment of the probability of unique
consequences – key to fairness, clash, and preventing absolution.
2. Do both – retool AI innovation toward social ends. The aff said
manufacturers should not be sued for AI damages, not that they shouldn’t be
regulated. Directing innovation is compatible with the aff. You’re primed to
underestimate social benefits of AI that turn their fascism arguments.
Restrictive AI governance misses the mark: we should embrace tentative AI
governance but broad social governance.
Tim Büthe, Christian Djeffal, Christoph Lütge, Sabine Maasen & Nora von Ingersleben-Seip (2022): Governing AI – attempting to herd cats? Introduction to the special issue on the governance of artificial intelligence, Journal of European Public Policy, DOI: 10.1080/13501763.2022.2126515
At the same time, many AI applications hold tremendous promise for improving our quality of life,
reducing risks, and enabling qualitative leaps in factor productivity and hence economic growth, with simultaneous
reductions in environmental harm. Depriving citizens of those benefits (by preventing or substantially
delaying the development of AI technology) has detrimental consequences that might be equally or more severe
than the detrimental consequences that citizens or policymakers might seek to forestall by inhibiting or
preventing the technology. The losses caused by preventing the introduction or development of the technology are, however,
often invisible because the social, economic and health benefits of a technology often do not
become fully apparent until quite far into the development of any given new technology (Fenwick et al.,
2017). Moreover, even within a single issue area, AI itself can play out very differently. For instance, AI
is the basis of systems of surveillance that resemble the worst nightmares of privacy scholars and activists. At the
same time, many privacy-enhancing technologies are based on AI technologies. This duality
underscores the need to consider AI both a risk and an opportunity for social values. And such duality
can be found with regard to multiple social values such as opacity and transparency, discrimination
and equality, as well as environmental sustainability and pollution. Interestingly, disadvantaged groups most
immediately affected by a technological innovation may focus to a much greater extent on the benefits than the average citizen or
voter, as Schönmann et al. (2022) have shown regarding assessments of care robots: People who depend upon extensive care and
support in their daily lives show much greater awareness of the potential upsides of care robots than the general population. As
technical and social AI innovations continue, it becomes ever more important to govern such novel
technologies in ways that prevent harm and contain risks without depriving people of benefits
and foreclosing opportunities for improving wellbeing. The challenge is exacerbated by public discourses
that are, especially in the realm of European public policy, often dominated by a focus on downside risks of
change and on prohibitions. Identifying ways to elevate public discourse so as to take both the risks and the
opportunities seriously would be a very important contribution to both scholarship and practice
of the regulation and governance of AI.
Governing not the technology but its (ab)uses
The articles in this special issue focus mostly on
governing AI as a technology. This focus is in keeping with the dominant approach to technology governance, even in light of
sophisticated inquiries into whether technologies and technical artifacts have ‘agency’ or ‘politics’ (Latour, 2007; Winner, 1980). Yet,
it is not the only way to address the issues raised by AI. An alternative approach might focus on how human beings and organizations
concretely use AI. The
primary goal of AI governance, after all, is not to steer the development of the technology as
such, but to avoid bad and achieve better outcomes or consequences of particular uses of AI. Health-related data, for instance, is in many countries seen as particularly sensitive. AI-based analyses of such data can be
used to estimate personalized risk profiles, which might then be used to deny higher-risk individuals employment, health insurance, or
other benefits. Consequently, health and medical data tends to be subject to some of the strongest privacy and data security
requirements. Gathering such data, combining it with other data, and analyzing health and medical data is therefore in many
countries highly restricted or even prohibited. Such regulation of the technology, however, also inhibits
advances in personalized medicine, delays progress in finding treatments for rare diseases, and
prevents the early detection of, and effective policy responses to, population health risks and threats, such as
epidemics and pandemics. It might therefore be much better, from a public interest perspective, to loosen
restrictions on the data and the tools for its analysis and focus instead on prohibiting the denial of
insurance coverage on health grounds (maybe complemented by public subsidies for high-risk insured persons),
health-based employment discrimination, and other abuses – and vigorously enforcing such prohibitions.
Future research might hence focus on ways of governing the use and punishing the abuse of AI, rather than governing its primarily
technical aspects. While surely not without its own challenges, this approach also might offer a fruitful way to deal with the
unpredictability, the rapid changes, and the technical complexity of AI while reducing the need to restrict its technological
development.

3. Case outweighs – AI innovation and efficient trade facilitate global problem solving and cooperation that caps existential risk. Transition away from private markets and interdependence ensures extinction. Overwhelming evidence goes AFF.
4. Alternative cannot solve the case – solvency requires not thinking like a human. Human-in-the-loop (HITL) oversight clouds decisions with judgment biases and can’t process information quickly enough. That’s Turner.
5. No link to “personhood” – legal personality is a fiction and divisible.
Jacob Turner 19. MA, Law, Oxford; LLM, International Law and Legal Studies, Harvard Law;
Barrister, Fountain Court Chambers; not John. “Legal Personality for AI.” Chapter 5 in Robot
Rules: Regulating Artificial Intelligence. Palgrave MacMillan. 2019. https://doi.org/10.1007/978-
3-319-96235-1
2.1 A Bundle of Rights and Obligations

Legal personality is a fiction; it is something that humans create through legal systems.10 As such, we can decide to
what it should apply and what its content should be. In an important nineteenth-century US case on separate legal
personality for corporations, Trustees of Dartmouth College v. Woodward, Chief Justice Marshall
expressed the concept as follows:
A corporation is an artificial being, invisible, intangible, and existing only in contemplation of
law. Being the mere creature of law, it possesses only those properties which the charter
of its creation confers upon it, either expressly or as incidental to its very existence. These are such as are
supposed best calculated to effect the object for which it was created. Among the most important are immortality, and, if
the expression may be allowed, individuality; properties by which a perpetual succession of many persons are considered
as the same, and may act as a single individual. They enable a corporation to manage its own affairs and to hold property
without the perplexing intricacies, the hazardous and endless necessity of perpetual conveyances for the purpose of
transmitting it from hand to hand. It is chiefly for the purpose of clothing bodies of men, in succession, with these qualities
and capacities that corporations were invented and are in use.11

Instead of being a single notion, legal personality is a technical label for a bundle of rights and responsibilities.12 Joanna Bryson,13 Mihalis Diamantis and Thomas Grant write that legal persons are “fictive,
divisible, and not necessarily accountable”.14 They observe “legal personality is an artifice”, with the effect that
“[l]egal people need not possess all the same rights and obligations, even within the same
system”.15
As shown in Chapter 4, legal
protections for humans have changed over time and continue to shift. By way of brief
examples: 2000
years ago, in Roman law the paterfamilias or head of a family was the subject of legal rights
and obligations on behalf of the whole household, including his wife and children16; 200 years ago, slaves
were considered to be non-persons and only subsequently granted partial rights; even today, women continue to be denied
full civil rights in various legal systems across the world.17

The rights and obligations of non-human legal persons can also undergo development. The US
Supreme Court recently (and controversially) extended constitutional freedom of speech protections to companies, enabling them to
play a greater role in election campaigns.18 There
remain limits to the protections we give to legal persons
compared to natural ones: in an earlier case, the US Supreme Court denied that corporations had the
same right to avoid self-incrimination enjoyed by human citizens.19

6. Perm do both – competitive autonomous firms key to market socialism.

Seth ACKERMAN Executive Editor Jacobin ’12 https://jacobinmag.com/2012/12/the-red-and-the-black

They all pointed to a number of characteristics, largely ignored by the neoclassical school, that
better accounted for the ability of market economies to avoid the problems plaguing centrally
planned systems. The aspects they emphasized were disparate, but they all tended to arise from a
single, rather simple fact: in market systems firms are autonomous.

That means that within the limits of the law, a firm may enter a market; choose its products and
production methods; interact with other firms and individuals; and must close down if it cannot
get by on its own resources. As a textbook on central planning put it, in market systems the
presumption is “that an activity may be undertaken unless it is expressly prohibited,” whereas in
planned systems “the prevailing presumption in most areas of economic life is that an activity
may not be undertaken unless permission has been obtained from the appropriate authority.” The
neoclassical fixation with ensuring that firms exercised this autonomy in a laissez-faire
environment — that restrictions on voluntary exchange be minimized or eliminated — was
essentially beside the point.

Thus, free entry and multiple autonomous sources of capital mean that anyone with novel
production ideas can seek resources to implement their ideas and doesn’t face a single veto point
within a planning apparatus. As a result, they stand a much greater chance of obtaining the
resources to test out their ideas. This probably leads to more of the waste inherent in failed
experiments — but also far greater scope for improved products and processes, and a constantly
higher rate of technological improvement and productivity growth.

Firms’ autonomy to choose their products and production methods means they can communicate
directly with customers and tailor their output to their needs — and with free entry customers
can choose between the output of different producers: no agency needs to spell out what needs to
be produced. To illustrate the relative informational efficiency of this kind of system, Stiglitz
cited a Defense Department contract for the production of plain white t-shirts: in the tender for
bidding, the physical description of the t-shirt desired ran to thirty small-print pages. In other
words, a centralized agency could never learn and then specify every desired characteristic of
every product.
Meanwhile, East European economists realized that an essential precondition for firms to be
truly autonomous was the existence of a capital market — and this helped explain the failure of
Hungary’s market-oriented reforms. In seeking an explanation for the persistence of shortages
under the new market system, the Hungarian economist János Kornai had identified a
phenomenon that he called the “soft budget constraint” — a situation where the state
continually transfers resources to loss-making firms to prevent them from failing. This
phenomenon, he argued, was what lay behind the shortage problem in Hungary: expecting that
they would always be prevented from going bankrupt, firms operated in practice without a
budget constraint, and thus exerted limitless demand for materials and capital goods, causing
chronic production bottlenecks.

But why did the state keep bailing out the troubled firms? It’s not as if the Hungarian authorities
were opposed to firm failures on principle. In fact, when bankruptcies did happen, the Communist
leadership treated them as public relations events, to demonstrate their commitment to a rational
economic system.

The ultimate answer was the absence of a capital market. In a market economy, a troubled firm
can sell part or all of its operations to another firm. Or it can seek capital from lenders or
investors, if it can convince them it has the potential to improve its performance. But in the
absence of a capital market, the only practical options are bankruptcy or bailouts. Constant
bailouts were the price the Hungarian leadership was forced to pay to avoid extremely high and
wasteful rates of firm failures. In other words, capital markets provide a rational way to deal with
the turbulence caused by the hard budget constraints of market systems: when a firm needs to
spend more than its income, it can turn to lenders and investors. Without a capital market, that
option is foreclosed.

As resistance against Communism rose, those in Eastern Europe who wished to avoid a turn to
capitalism drew the appropriate lessons. In 1989, the dissident Polish reform economists
Włodzimierz Brus and Kazimierz Łaski — both convinced socialists and disciples of the
distinguished Marxist-Keynesian Michał Kalecki — published a book examining the prospects
for East European reform. Both had been influential proponents of democratic reforms and
socialist market mechanisms since the 1950s.

Their conclusion now was that in order to have a rational market socialism, publicly-owned
firms would have to be made autonomous — and this would require a socialized capital
market. The authors made it clear that this would entail a fundamental reordering of the political
economy of East European systems — and indeed of traditional notions of socialism. Writing on
the eve of the upheavals that would bring down Communism, they set out their vision: “the role
of the owner-state should be separated from the state as an authority in charge of
administration. . . .[E]nterprises . . . have to become separated not only from the state in its wider
role but also from each other.”

Governance will determine the social and distributional implications of AI. It should seek to balance upsides and downsides.
Nitzberg, Mark, and John Zysman. "Algorithms, data, and platforms: the diverse challenges
of governing AI." Journal of European Public Policy (2022): 1-26.
Conclusion
The challenges of governing AI must be considered in the larger context of a “toolbox” including algorithms, data,
processing power, and, of paramount importance, platforms (both the firms as players and technology platforms). Digital platforms
generate the pools of big data on which AI tools operate. The regulation of digital platforms and of the data are part
of the challenge of governing AI. As platforms continue to utilize AI without governance, their
choices on critical matters become our default practices about ownership, representation, and
values across the private, public, and civil sectors. This has escalated the need for governance
of the AI and platforms. We are now in overtime. The possibilities, applications, and risks of any such new general-
purpose technologies (GPTs) only emerge over time as effects manifest on sectors of the economy and society. It is early
enough in the story of AI that governance itself will determine much of the future trajectory. In
the spirit of Lessig, code is its own form of law (Lessig, 2000; Lessig, 1999). In general, governing the toolbox must balance encouraging the potential while minimizing the risk. Policy makers must understand well AI’s technical aspects (e.g., its strengths, limits, and its role in the toolbox) and the social and economic contexts in which it is situated.
AI systems already have a wide range of capabilities with vast and critical applications. But they fall short of human-level cognition
and interpretation, lacking the fundamentals of context, narrative, and worldview. Today’s AI refers largely to machine learning and
deep learning, instruments of statistical inference building on prior data. These observations frame AI’s potential, limits, and
unavoidable risks of use. New GPTs like steam power have historically seen radical economic restructuring. AI is distinct from earlier
GPTs as it automates certain tasks of human cognitive capacity. This distinction, which allows AI to automate tasks in the service
sector, not simply manufacturing, raises new questions about the impact on growth and labour. Still, it is policy
choices that
will shape the distributional consequences of this new technology toolbox, not just its
capabilities in certain task areas or its pace of adoption. The effects of AI on our communities will depend on the
specific applications of AI technologies, making the consideration of community objectives and norms critical. Even if a community
could agree, today, on values across an array of domains, community norms will evolve. It is important, then, to recognize the
continuing evolution of social values, and the risk not only of entrenching today’s values in law, visible perhaps, but less visibly in
code. Governance must focus on sectors and applications. As the same AI application will bestow gains and inure costs distinct for
each purpose and domain where it is applied, it is critical to focus primarily on sector-specific applications. Still, there will be
concerns that cut across many social domains and economic sectors which are important throughout society. AI is identified as a
critical component of national success in the coming decades, as governments recognize similar opportunities and geopolitical risks
posed by the suite of technologies. However, AI pries open a Pandora's box of questions that sweep across the economy and society
engaging diverse communities. Moving the global debate on AI beyond ethical expressions will therefore be unlikely. Instead, we
conclude that a common agreement around a single set of goals and market/social rules must give way to objectives of interoperability
amongst nations with sometimes radically different political economies.

7. Affirmative solves open, unbundled AI which is consistent with democratization. Personhood k2 stabilize that legal regime. That’s Fenwick, AND…
Gerhard Wagner 19. Chair for Private Law, Business Law, and Law and Economics,
Humboldt University. “ROBOT, INC.: PERSONHOOD FOR AUTONOMOUS SYSTEMS?”
Fordham Law Review. [Vol. 88, 2019].
https://fordhamlawreview.org/wp-content/uploads/2019/11/Wagner_November_S_8.pdf
If markets develop towards unbundling, and original equipment manufacturers lose control over the
safety features of the products they put into circulation, responsibilities will become blurred. It
will thus become increasingly difficult for victims to single out the actor responsible for the accident in
question. To the extent that the victim fails to pinpoint the responsible party, the damages claim fails and incentives to
take care are lost. Such outcomes could be avoided if operators were held strictly liable for any harm caused in the course of the
operation of an autonomous system. The question of who bears responsibility for a particular accident would then be shifted towards
the user and his insurers who, in turn, would seek recourse against hardware and software manufacturers.

IV. ROBOTS AS LIABILITY SUBJECTS

Moving beyond philosophical analyses of personhood, the following Part explores the essential functions of robot liability in light of
the functions of the tort system, and of the proper role of the traditional parties, namely manufacturers and users. As will be seen, there
is much to be learned from corporate law when analyzing the potential promotion of robots to ePersons. The danger of risk
externalization and internalization strategies, such as minimum asset requirements and insurance mandates, needs to be explored. This
leads to the question of whether it will ever be possible to incentivize robots in much the same way that the liability system motivates
human beings to take care in order to avoid harm to others.

A. The Function of Robot Liability

It has now become clear that the recognition of robots as legal persons for purposes of civil liability is not a philosophical question
that can be answered by examining the characteristics of a digital device and asking whether it is sufficiently similar to a human being.
Rather, accepting
that robots can be liable calls for a functional explanation that is in tune with the
general principles and goals of tort law, namely compensation and deterrence.
It is not easy to find a positive explanation for robot liability, given the range of responsible parties that already exist, namely
manufacturers, suppliers, owners, and users of such devices. As the preceding surveys of the liability regimes for manufacturers and
users revealed,88 current tort law generates powerful incentives for the manufacturers and operators of autonomous systems to take
due care. So far, the creation of an additional liability subject, the ePerson, is simply superfluous.

There seems to be only one niche where robot liability could serve a useful role: markets for unbundled digital products. In the case of unbundling, people injured by a robot may face serious difficulties in identifying the party
who is responsible for the misbehavior of the device.89 The fact that the robot malfunctioned is
no evidence that the hardware put into circulation by one manufacturer or the software downloaded
from another manufacturer was defective. Likewise, the responsibility of the user may be difficult
to establish. Thus, in a market of unbundled products, the promotion of the robot to a liability
subject may serve as a tool for “bundling” responsibility and attributing liability to a single
entity to which the victim may turn for compensation. The burden of identifying the party
responsible for the malfunction or other defect would then be shifted away from victims and onto the
robot’s liability insurers. These insurers, in turn, are professional players who may be better able to
investigate the facts, evaluate the evidence, and pose a credible threat to hold hardware
manufacturers, software programmers, or users accountable in exercising their rights of recourse against them. The
question remains whether the benefits of promoting robots to liability subjects would outweigh the costs.

8. Straight turned the “short-term risk” argument – those risks are inevitable, but insurance creates incentives for safe governance.
9. No public innovation.
Allison Schrager 20. Economist, senior fellow at the Manhattan Institute, and co-founder of LifeCycle Finance Partners, LLC, a risk advisory firm. “Why Socialism Won’t Work”. Foreign Policy. 1-15-2020. https://foreignpolicy.com/2020/01/15/socialism-wont-work-capitalism-still-best/

Some leftist economists like Mariana Mazzucato argue that governments might be able to step in and become
laboratories for innovation. But that would be a historical anomaly; socialist-leaning governments
have typically been less innovative than others. After all, bureaucrats and worker-corporate
boards have little incentive to upset the status quo or compete to build a better widget. And even when
government programs have spurred innovation—as in the case of the internet—it took the private sector
to recognize the value and create a market.
And that brings us to a third reason to believe in markets: productivity. Some economists, such as Robert Gordon, have looked to
today’s economic problems and suggested that productivity growth—the engine that fueled so much of the progress of the last several
decades—is over. In this telling, the resources, products, and systems that underpin the world’s economy are all optimized, and little
further progress is possible.

But that is hard to square with reality. Innovation helps economies do more with fewer resources—
increasingly critical to addressing climate change, for example—which is a form of productivity growth. And
likewise, many of the products and technologies people rely on every day did not exist a few years ago. These
goods make inaccessible services more available and are changing the nature of work, often for
the better. Such gains are made possible by capitalist systems that encourage invention and
growing the pie, not by socialist systems that are more concerned with how the existing pie is cut. It is far too soon, in other
words, to write off productivity.

10. Alternative can’t gain buy-in, causes transition wars, and fails to resolve
environmental challenges – growth solves.
Karlsson 21 – (Rasmus, "Learning in the Anthropocene" Soc. Sci. 10, no. 6: 233.
https://doi.org/10.3390/socsci10060233 18 June 2021)// gcd

Unpacking this argument, it is perhaps useful to first recognize that, stable as the Holocene may have seemed from a human perspective, life was always vulnerable to a number of cosmic risks, such as bolide
collisions, risks that only advanced technologies can mitigate. Similarly, the Black Death of the
14th century should serve as a powerful reminder of the extreme vulnerability of pre-industrial
societies at a microbiological level. Nevertheless, it is reasonable to think of the Holocene as providing a relatively stable
baseline against which the ecological effects of technological interventions could hypothetically be evaluated. With most human
activities being distinctively local, nature would for the most part “bounce back” (even if the deforestation of the Mediterranean basin
during the Roman period is an example of that not always being the case) while larger geophysical processes, such as the carbon
cycle, remained entirely beyond human intentional control. Even if there has been some debate about what influence human activities
had on the preindustrial climate (Ruddiman 2007), anthropogenic forcing was in any case both marginal and gradual. All this changed
with the onset of the Great Acceleration by which humans came to overwhelm the great forces of nature, causing untold damage to
fragile ecosystems and habitats everywhere, forever altering the trajectory of life on the planet (Steffen et al. 2011b). In
a grander
perspective, humanity may one day become an interplanetary species and thus instrumental in
safeguarding the long-term existence of biological life, but for the moment, its impact is ethically
dubious at best as the glaciers melt, the oceans fill up with plastics, and vast number of species
are driven to extinction. Faced with these grim realities, it is of course not surprising that the first impulse is to seek to restore some kind of primordial harmony and restrain human activities.
Yet, it is important to acknowledge that, even if their aggregate impact may have been within the pattern of Holocene variability,
pre-modern Western agricultural societies were hardly “sustainable” in any meaningful sense.
Experiencing permanent scarcity, violent conflict was endemic (Gat 2013), and as much as some contemporary
academics like to attribute all evils to “capitalism” (Malm 2016), pre-capitalist societies exhibited no shortage of religious intolerance
and other forms of social domination. It is thus not surprising that some have argued the need to reverse the civilizational arc further
yet and return to a preliterate hunter-gather existence (Zerzan 2008) even if this, obviously, has very little to do with existing political
realities and social formations. Under Holocene conditions, the short-term human tragedy may have been the same, but it did not
undermine the long-term ability of the planet to support life.
In a world of eight billion people, already
accumulated emissions in the atmosphere have committed the planet to significant warming
under the coming centuries, with an increasing probability that committed warming already
exceeds the 1.5-degree target of the Paris Agreement even if all fossil-fuel emissions were to stop
today (Mauritsen and Pincus 2017). This means that sustained negative emissions, presumably in
combination with SRM, will most likely be needed just to stabilize global temperatures, not to
mention countering the flow of future emissions. According to the Intergovernmental Panel on Climate Change (IPCC), assuming
that all the pledges submitted under the Paris Agreement are fulfilled, limiting
warming to 1.5 degrees will still
require negative emissions in the range of 100—1000 gigatons of CO2 (Hilaire et al. 2019, p. 190). The
removal of carbon dioxide at gigaton scales from the atmosphere will presumably require the existence of an
advanced industrial society since low-tech options, such as afforestation, will be of limited use
(Gundersen et al. 2021; Seddon et al. 2020), especially in a future of competing land-uses. It is against this backdrop of worsening
climate harms that the limits of “precaution”, at least as conventionally understood, become apparent. While
degrowth
advocates tend to insist that behavioral change, even explicitly betting on a “social miracle” (Kallis
2019, p. 195), is always preferable to any technological risk-taking (Heikkurinen 2018), that overlooks both the scope
of the sustainability challenge and the lack of public consent to any sufficiently radical political
project (Buch-Hansen 2018). While there may be growing willingness to pay for, say, an electric vehicle (Hulshof and Mulder
2020), giving up private automobile use altogether is obviously a different animal, to say nothing about a more fundamental
rematerialization of the economy (Hausknost 2020). Again, the problem is one in which change either (a)
remains marginal yet ecologically insufficient or (b) becomes sufficiently radical yet provokes a
strong political counterreaction. A similar dynamic can be expected to play out at the international level where
countries that remain committed to growth would quickly gain a military advantage. To make
matters worse, there is also a temporal element to this dynamic since any regime of frugality and localism would
have to be policed indefinitely in order to prevent new unsustainable patterns of development from re-emerging later on.
All this begs the obvious question, if the political and economic enforcement of the planetary boundaries
are fraught with such political and social difficulties, would it not be better to instead try to transcend them
through technological innovation? Surprisingly, any high-energy future would most likely be subject to many of the same
motivational and psychological constraints that hinder a low-energy future. While history shows that existing nuclear technologies
could in theory displace all fossil fuels and meet the most stringent climate targets (Qvist and Brook 2015), it seems extremely
unlikely, to put it mildly, that thousands of new reactors will be built over the course of the coming decades in response to climate
change. Outside the world of abstract computer modelling, real world psychological and cultural inertia tends to ensure that political
decision-making, at least for the most part, gravitates to what is considered “reasonable” and “common sense”—such as medium
emissions electricity grids in which wind and solar are backed by biomass and gas—rather than what any utilitarian optimization
scenario may suggest. Even if the global benefits of climate stabilization would be immense, the standards by which local nuclear
risks are assessed, as clearly illustrated by the Fukushima accident which led to a worldwide retreat from nuclear energy despite only
causing one confirmed death (which, though obviously regrettable, has to be put in relation to the hundred and thousands of people
dying every year from the use of fossil fuels), underscores the uneven distribution of perceived local risks versus global benefits and
the associated problem of socio-political learning across spatial scales. Almost two decades ago, Ingolfur Blühdorn identified
“simulative eco-politics” as a key strategy by which liberal democracies reconcile an ever-heightened rhetoric of environmental crisis
with their simultaneous defense of the core principles of consumer capitalism (Blühdorn 2007). Since then, declarations that we only
have “ten years to save the planet” have proliferated, and so have seemingly bold investments in renewable energy, most recently in
the form of US President Joseph Biden’s USD 2.25 trillion climate and infrastructure plan. Still, without a meaningful commitment to
either radical innovation or effective degrowth, it is difficult to see how the deployment of yet more wind turbines or the building of
new highways will in any way be qualitatively different from what Blühdorn pertinently described as sustaining “what is known to be
unsustainable” (Blühdorn 2007, p. 253). However, all is not lost in lieu of more authentic forms of eco-politics. Independent
of
political interventions, accelerating technological change, in particular with regard to computing
and intelligent machine labor, may one day make large-scale precision manipulation of the
physical world possible in ways that may solve many problems that today seem intractable (Dorr
2016). Similarly, breakthroughs in synthetic biology may hold the key to environmentally benign
biofuels and carbon utilization technologies. Yet, all such progress remains hypothetical and uncertain for now. Given
what is at stake, there is an obvious danger in submitting to naïve technological optimism. What is less commonly recognized is that
naïve optimism with regard to the prospects of behavioral change may be equally dangerous. While
late-capitalist
affluence has enabled many postmaterial identities and behaviors, such as bicycling, hobby
farming, and other forms of emancipatory self-expression, a collapsing economy could
quickly lead to a reversal back to survivalist values, traditional hierarchical forms of
domination, and violence (Quilley 2011, p. 77). As such, it is far from obvious what actions would actually take the world
as a whole closer to long-term sustainability. If sustainability could be achieved by a relatively modest reduction in consumption rates
or behavioral changes, such as a ban on all leisure flights, then there would be a strong moral case for embracing degrowth. Yet,
recognizing how far-reaching the measures in terms of population control and consumption restrictions would need to be, the case
quickly becomes more ambiguous. While traditional environmentalism may suggest that retreating from
the global economy and adopting a low-tech lifestyle would increase resilience (Alexander and
Yacoumis 2018), it may do very much the opposite by further fragmenting global efforts and slowing the pace of
technological innovation. Without an orderly and functioning world trade system, local resources
scarcities would be exacerbated, as seen most recently with the different disruptions to vaccine supply chains. In essence,
given the lack of a stable Holocene baseline to revert to,
it becomes more difficult to distinguish proactionary
“risk-taking” from “precaution”, especially as many ecosystems have already been damaged
beyond natural recovery. In this context, it is noteworthy that many of the technologies that can be expected to
be most crucial for managing a period of prolonged overshoot (such as next-generation nuclear,
engineering biology, large-scale carbon capture and SRM) are also ones that traditional
environmentalism is most strongly opposed to.
3. Finding Indicators
From the vantage point of the far future, at least the kind depicted in the fictional universe of Star Trek, human evolution is a fairly
straightforward affair along an Enlightenment trajectory by which ever greater instrumental capacity is matched by similar leaps in
psychological maturity and expanding circles of moral concern. With the risk of sounding Panglossian, one
may argue that
the waning of interstate war in general and the fact that there has not been any major nuclear
exchange in particular, does vindicate such an optimistic reading of history. While there will always be
ups and downs, as long as the most disastrous outcomes are avoided, there will still be room for learning and gradual political
accommodation. Taking such a longer view, it would nevertheless be strange if development was simply linear, that former oppressors
would just accept moral responsibility or that calls for gender or racial justice would not lead to self-reinforcing cycles of conservative
backlash and increasingly polarizing claims. Still, over the last couple of centuries, there is little doubt that human
civilization
has advanced significantly, both technologically and ethically (Pinker 2011), at least from a
liberal and secular perspective. However, unless one subscribes to teleology, there is nothing inexorable with this
development and, it may be that the ecological, social, and political obstacles are simply too great to ever allow for the creation of a
Wellsian borderless world (Pedersen 2015) that would allow everyone to live a life free from material want and political domination.
On the other hand, much environmental discourse tends to rush ahead in the opposite direction and treat the climate crisis as ultimate
evidence of humanity’s fallen nature when the
counter-factual case, that it would be possible for a
technological civilization to emerge without at some point endangering its biophysical
foundations, would presumably be much less plausible. From an astrobiological perspective, it is easy to
imagine how the atmospheric chemistry of a different planet would be more volatile and thus more vulnerable to the effects of
industrial processes (Haqq-Misra and Baum 2009), leaving a shorter time window for mitigation. Nick Bostrom has explored this
possibility of greater climate sensitivity further in his “vulnerable world hypothesis” (Bostrom 2019) and it begs to reason that
mitigation efforts would be more focused in such a world. However, since climate response times are longer and sensitivity less
pronounced, climate mitigation policies have become mired in culture and media politics (Newman et al. 2018) but also a statist logic
(Karlsson 2018) by which it has become more important for states to focus on their own marginal emission reductions in the present
rather than asking what technologies would be needed to stabilize the climate in a future where all people can live a modern life.

11. Economic incentives are compatible with AI democracy. Harnessing value alignment and consumer choice creates human-aligned design that makes technocratic policy effective.
Koster et al 22 (Koster, R., Balaguer, J., Tacchetti, A. et al. Human-centred mechanism design
with Democratic AI. Nat Hum Behav 6, 1398–1407 (2022). https://doi.org/10.1038/s41562-022-
01383-x)
Discussion
Together, these results thus demonstrate that an
AI system can be trained to satisfy a democratic
objective, by designing a mechanism that humans demonstrably prefer in an incentive-compatible
economic game. Earlier studies have used voting to understand participants’ preferences over contribution thresholds, or
exclusion policies in the public goods game37–39, but here we used tools from AI research to learn a redistribution scheme from
scratch. Our approach to value alignment relieves AI researchers — who may themselves be biased or are unrepresentative of the
wider population — of the burden of choosing a domain-specific objective for optimization. Instead, we show that it
is possible
to harness for value alignment the same democratic tools for achieving consensus that are
used in the wider human society to elect representatives, decide public policy or make legal
judgements.
Our research raises several questions, some of them theoretically challenging. One might ask whether it is a good idea to emphasize a
democratic objective as a method for value alignment. Democratic AI potentially inherits from other democratic approaches a
tendency to enfranchise the many at the expense of the few: the ‘tyranny of the majority’40. This is particularly pertinent given the
pressing concern that AI might be deployed in way that exacerbates existing patterns of bias, discrimination or unfairness in society41.
In our investment game, we sampled endowment conditions to match plausible real-world income distributions, where
the
disadvantaged inevitably outnumber the advantaged; hence, for the specific question of
distributive justice that we address, this problem is less acute. However, we acknowledge that if
deployed as a general method, without further innovation, there does exist the possibility that (similar to real-world democratic
systems) it could be used in a way that favours the preferences of a majority over a minority group. One potential solution would be to
augment the cost function in ways that redress this issue, much as protections for minorities are often enshrined in law.

Another important point concerns the explainability of our AI-designed mechanism10. We deliberately hampered the mechanism
designer by not equipping it with activation memory. This means that the mechanism it designed (HCRM) can be transparently
described in just two dimensions (rather than, for example, being a complicated nonlinear function of the choice history of different
players). Although this level of complexity is greater than the human-generated theories of
distributive justice that we use as baselines, it is still possible to verbalize. Encouraging a more
interpretable mechanism has at least two advantages. First, it made the agent more transparent
to the human players. In fact, in feedback questions (Supplementary Fig. 6), humans deemed the
agent to be ‘more transparent and predictable’ than the alternative AI-designed mechanism (rational
mechanism) and (perhaps incongruously) the strict egalitarian. Second, the lack of memory has implications for
user privacy. Inputs to the agent were designed to be entirely ‘slot equivariant’, meaning that the
mechanisms treated each player’s input independently of its ‘slot’ (whether it is player 1, 2, 3 or 4). The
agent’s input pertained to the distribution of contributions rather than contributions from
individuals themselves. Coupled with the lack of memory, this means that the agent is barred from
tracking information about a particular player’s history of contributions within the game.
Our AI system designed a mechanism for redistribution that was more popular than that
implemented by human players. This is especially interesting because unlike our agent, human
referees could integrate information over multiple timesteps to reward or sanction players on the
basis of their past behaviour. However, on average the human-invented redistribution policy tended
to reward the tail player insufficiently for making relatively large contributions (from their
smaller endowment) to the public purse and was less popular than that discovered by HCRM. Humans received lower volumes of training data than HCRM, but presumably enjoyed a lifetime of experience with
social situations that involved fair and unfair distribution, so we think they represent a strong
baseline, and a proof of concept for AI mechanism design.
One remaining open question is whether people will trust AI systems to design mechanisms in place of humans. Had they known the
identities of referees, players might have preferred human over agent referees simply for this reason. However, it is also true that
people often trust AI systems when tasks are perceived to be too complex for human actors42. We hope that future studies will
address this question. Another question concerns whether participants would have responded differently if the mechanisms had been
explained to them verbally, rather than learned by experience. A long literature has suggested that people sometimes behave
differently when mechanisms are ‘by description’ rather than ‘by experience’, especially for risky choices43. However, AI-designed
mechanisms may not always be verbalizable, and it seems probable that behaviours observed in such case may depend on exactly the
choice of description adopted by the researcher.

Finally, we emphasize that our results do not imply support for a form of ‘AI government’,
whereby autonomous agents make policy decisions without human intervention44,45. We see
Democratic AI as a research methodology for designing potentially beneficial mechanisms, not
a recipe for deploying AI in the public sphere. This follows a tradition in the study of technocratic political
apparatus that distinguishes between policy development and policy implementation, with the
latter remaining in the hands of elected (human) representatives46. We hope that further development of
the method will furnish tools helpful for addressing real-world problems in a truly human-
aligned fashion.
12. The alternative fails and has no impact. Movements for humanist technologies cannot transcend capitalism – only reforms can scale from within. This offers the possibility of capturing AI’s upsides while containing its downsides.
Toye 21 – PhD candidate in the Politics Department at York University, Canada. (Brent,
"Post-Capitalist Futures? Work After Automation," Socialist Project,
https://socialistproject.ca/2021/02/post-capitalist-futures-work-after-automation/ 2-26-2021)
While Benanav’s vision of a post-capitalist, post-scarcity system is indeed appealing, it has much less to say about how socialists might intervene in capitalism as it exists today to achieve the egalitarian society of the future. In the postscript Benanav places his hopes on the new social
movements that have emerged globally in the decade following the global financial crisis as key “agents of change” toward a
post-scarcity future. He speaks approvingly of the communal aspects of such movements, which have
demonstrated spontaneous forms of cooperative social reproduction and communal care. But it is
not clear how these disparate movements and causes move from necessary but specific
campaigns and struggles to coalescing around a political project for transcending capitalism.
Like his analysis of the postwar trajectory of capitalist political economies, Benanav’s reading of the
potential roads to socialism fails to address how to transform the capitalist state and the
obstacle it presents for socialist strategies. To paraphrase Andre Gorz in his famous essay
“Reform and Revolution,” small islands of socialism are too easily isolated in a vast ocean of
capitalism. What is needed are the types of transformative reforms capable of penetrating and
rupturing the structural power and social hegemony of capital from within the state itself.
This, in turn, requires us to consider the crucial role of political parties, mass electoral politics, and social movements in forming a
political bloc dedicated to democratization of the state and capital. This revolutionary process cannot simply be a matter of a socialist
party with the right policies forming government, but requires the democratization of the state itself. The latter entails seizing and
transforming the means of administration into a genuinely public space in which workers and communities form participatory councils
to determine social provisioning and administer public resources in an on-going, consistent, and meaningful manner. Crucial
to
this task is building the capacities of individuals for true democratic participation and self-
governance, a critical pedagogical process that would occur first within socialist parties and
organizations and continue within the institutions of the state itself. But democratic capacity building is more
assumed than strategized in Benanav’s system of cooperative production. Radically re-drawing the capitalist division
of labour in a manner that would more equally divide necessary and free labour will require a
vast, sustained effort of retraining that could only be accomplished through extending
democratic planning over the state and its educational institutions. The ability and skills of self-
governance, moreover, could only come through a similarly long process of learning through
participation in popular institution building.
Building the skills and governance capacities of workers is integral to meeting the ceaseless processes of capitalist automation. The
constant displacement of labour by more capital in any given sector and employment does not
inexorably lead to widespread general technological unemployment as new sectors and
employment are also constantly being created. But work and workers are constantly being deskilled and degraded
under capitalism – the new machines and technologies are always as much about control and discipline as about productivity
increases. Education and training systems, especially for the working class and the most marginalized communities, mirror the
processes of accumulations and seek to compartmentalize skill formation into industry and technology-specific ‘competencies’ rather
than develop comprehensively the theoretical, practical self-management skills and capacities of the workforce. In order to shift future
technological development in a more egalitarian direction, workers as individuals and as a class must be emboldened with the broad
skills and cognitive capacities to think through and control the labour process, including its managerial aspects. Automation in
itself will not lead us to either political utopias or dystopias outlined by these opposing
prophets. Democratic controls over automation will remain, as this valuable book from
Benanav suggests, an integral dynamic of capitalist class struggle.
13. No falsifiability crisis – COVID mapping example, AlphaFold and Go,
deflection of asteroids. Our evidence is qualified.
14. AI gets us off the innovation plateau. Empirics are AFF. Extinction from
all risk.
Martin Ford 21. BSE, computer engineering, magna cum laude, University of Michigan, MBA,
UCLA Anderson School of Management. “Beyond Hype: A Realist’s View of Artificial
Intelligence as a Utility.” Chapter 2 in Rule of the Robots: How Artificial Intelligence will
Transform Everything. Basic Books. 2021. https://www.basicbooks.com/titles/martin-ford/rule-
of-the-robots/9781541674721/
BLASTING OFF THE INNOVATION PLATEAU: SCIENTIFIC AND MEDICAL RESEARCH
Among those who might be described as “technoptimists,” it
is taken as a given that we live in an age of startling
technological acceleration. The pace of innovation, we are told, is unprecedented and exponential. The most enthusiastic
accelerationists—often acolytes of Ray Kurzweil, who codified the idea in his “Law of Accelerating Returns”—are confident that in
the next hundred years, we will experience, by historical standards, the equivalent of something “more like 20,000 years of
progress.”63

Closer scrutiny, however, reveals that while the acceleration has been real, this extraordinary progress has been confined
almost exclusively to the information and communications technology arena. The exponential narrative has really been the story of
Moore’s Law and the ever more capable software it makes possible. Outside this sector, in the world composed of atoms rather than
bits, the story over the past half-century or so has been starkly different. The pace of innovation in areas like
transportation, energy, housing, physical public infrastructure and agriculture not only falls far short of
exponential, it might be better described as stagnant.

If you want to imagine a life defined by relentless innovation, think of someone born in the late 1800s who then lived through the
1950s or 1960s. Such a person would have seen systemic transformations across society on an almost unimaginable scale:
infrastructure to deliver clean water and manage sewage in cities; the automobile, the airplane, jet propulsion and then the advent of
the space age; electrification and the lighting, radios, televisions, and home appliances it later made possible; antibiotics and mass-
produced vaccines; an increase in life expectancy in the United States from less than 50 years to nearly 70. A person born in the
1960s, in contrast, will have witnessed the rise of the personal computer and later the internet, but nearly all the other innovations that
had been so utterly transformative in previous decades would have seen at best incremental progress. The difference between the car
you drive today and the car that was available in 1950 simply does not compare to the difference between that 1950 automobile and
the transportation options in 1890. And the same is true of a myriad of other technologies distributed across virtually every aspect of
modern life.

The fact that all the remarkable progress in computing and the internet does not, by itself, measure up to the expectation that the kind
of broad-based progress seen in earlier decades would continue unabated is captured in Peter Thiel’s famous quip that “we were
promised flying cars and instead we got 140 characters.” The argument that we have been living in an age of relative stagnation—even
as information technology has continued to accelerate—has been articulated at length by the economists Tyler Cowen, who published
his book The Great Stagnation in 2011,64 and Robert Gordon, who sketches out a very pessimistic future for the United States in his
2016 book The Rise and Fall of American Growth.65 A key argument in both books is that the low-hanging fruit of technological
innovation had been largely harvested by roughly the 1970s. The result is that we are now in a technological lull defined by a struggle
to reach the higher branches of the innovation tree. Cowen is optimistic that we will eventually break free of our technological plateau.
Gordon is much less so, suggesting that even the upper branches of the tree are perhaps denuded and that our greatest inventions may
be behind us.

While I think Gordon is far too pessimistic, there is plenty of evidence to suggest that a broad-based stagnation in the generation of new ideas is quite real. An academic paper published in April 2020 by a team of
in the generation of new ideas is quite real. An academic paper published in April 2020 by a team of
economists from Stanford and MIT found that, across a variety of industries, research productivity has sharply
declined. Their analysis found that the efficiency with which American researchers generate innovations
“falls by half every 13 years,” or in other words “just to sustain constant growth in GDP per person, the United States must
double the amount of research effort every 13 years to offset the increased difficulty of finding new ideas.”66 “Everywhere we look,”
wrote the economists, “we find that ideas, and the exponential growth they imply, are
getting harder to find.”67 Notably
this extends even to the one area that has continued to generate consistent exponential progress. The researchers found that the
“number of researchers required today to achieve the famous doubling of computer chip density” implied by Moore’s
Law “is more than 18 times larger than the number required in the early 1970s.”68 One likely explanation for this is that before
you can push through the research frontier, you first have to understand the state of the art. In
virtually every scientific field, that requires the assimilation of vastly more knowledge than has been
the case previously. The result is that innovation now demands ever larger teams made up of researchers
with highly specialized backgrounds, and coordinating their efforts is inherently more difficult than
would be the case with a smaller group.

To be sure, there are many other important factors that might be contributing to the slowdown in innovation. The laws of physics
dictate that accessible innovations are not distributed homogeneously across fields. There is, of course, no Moore’s Law for aerospace
engineering. In many areas, reaching the next cluster of innovation fruit may require a giant leap. Over- or ineffective government
regulation certainly also plays a role, as does the short-termism that now prevails in the corporate world. Long-term investments in
R&D are often not compatible with an obsessive focus on quarterly earnings reports or the coupling of short-term stock performance
and executive compensation. Still, to
the extent that the need to navigate increased complexity and an
explosion of knowledge is holding back the pace of innovation, artificial intelligence may well
prove to be the most powerful tool we can leverage to escape our technological plateau. This, I think,
is the single most important opportunity for AI as it continues to evolve into a ubiquitous
utility. In the long run, in terms of our sustained prosperity and our ability to address both the known and
unexpected challenges that lie before us, nothing is more vital than amplifying our collective ability
to innovate and conceive new ideas.
The most promising near-term application of artificial intelligence, and especially deep learning, in scientific research
may be in the discovery of new chemical compounds. Just as DeepMind’s AlphaGo system confronts a
virtually infinite game space—where the number of possible configurations of the Go board exceeds the
number of atoms in the universe—“chemical space,” which encompasses every conceivable molecular
arrangement, is likewise, for practical purposes, infinite. Seeking useful molecules within this space requires
a multi-dimensional search of staggering complexity. Factors that need to be considered include the three-dimensional size
and shape of the molecular structure as well as numerous other relevant parameters like polarity, solubility and toxicity.69 For a
chemist or materials scientist, sifting through the alternatives is a labor-intensive process of experimental trial and
error. Finding a truly useful new chemical can easily consume much of a career. The lithium-ion batteries that are ubiquitous in our
devices and electric cars today, for example, emerged from research that was initiated in the 1970s but produced a technology that
could begin to be commercialized only in the 1990s. Artificial intelligence offers the promise of a vastly
accelerated process. The search for new molecules is, in many ways, ideally suited to deep learning; algorithms can be
trained on the characteristics of molecules known to be useful, or in some cases on the rules that govern molecular configuration and
interaction.70
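To make that training setup concrete, here is a minimal, hypothetical sketch in Python with scikit-learn. The fingerprints and labels below are random placeholders standing in for real molecular descriptors (which a lab would compute with a cheminformatics toolkit such as RDKit), and production systems typically use richer learned representations such as graph neural networks rather than this baseline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder "molecular fingerprints": 2048-bit vectors standing in for
# real descriptors such as Morgan fingerprints. Labels mark molecules
# known (hypothetically) to be useful, e.g., active against a drug target.
X = rng.integers(0, 2, size=(5000, 2048))
y = rng.integers(0, 2, size=5000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Train on the characteristics of molecules already known to be useful.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Score a large pool of untested candidates and keep only the top hits,
# so that slow, expensive bench chemistry is reserved for the most
# promising structures.
candidates = rng.integers(0, 2, size=(100_000, 2048))
scores = model.predict_proba(candidates)[:, 1]
top_100 = np.argsort(scores)[::-1][:100]
```

The point of such a model is triage: it cannot replace the experiment, but it can cheaply narrow a practically infinite search space down to a short list worth testing.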

At first blush, this may seem like a relatively narrow application. However, the quest to find useful
new chemical
substances touches virtually every sphere of innovation. Accelerating this process promises innovative
high-tensile materials for use in machines and infrastructure, reactive substances to be deployed in better
batteries and photoelectric cells, filters or absorbent materials that might reduce pollution and a range of new drugs
with the potential to revolutionize medicine.
Both university research labs and an expanding number of startup companies have turned to machine learning technology with
enthusiasm and are already using powerful AI-based approaches to generate important breakthroughs. In
October 2019, scientists at Delft University of Technology in the Netherlands announced that they were able to design a
completely new material by exclusively relying on a machine learning algorithm, without any need
for actual laboratory experiments. The new substance is strong and durable but also super-compressible if a force beyond a
certain threshold is exerted on it. This implies that the material can effectively be squeezed into a small fraction of its original volume.
According to Miguel Bessa, one of the lead researchers on the project, futuristic materials with these properties might someday mean
that “everyday objects such as bicycles, dinner tables and umbrellas could be folded into your pocket.”71

Such initiatives typically require researchers to have a strong technical background in artificial intelligence, but
teams at other universities are developing more accessible AI-based tools that are poised to jump-start the discovery of new chemical compounds. Researchers at Cornell University, for example, are working on a project called
SARA—Scientific Autonomous Reasoning Agent—which the team hopes will “dramatically accelerate, by orders of magnitude, the
discovery and development of new materials,”72 while researchers at Texas A&M are likewise developing a software platform
designed to autonomously search for previously unknown substances.73 Both projects are funded in part by the U.S. Department of
Defense, an especially eager customer for any innovations that emerge. Just as cloud-based deep learning tools offered by Amazon
and Google are democratizing the deployment of machine learning in many business applications, tools like these
are poised to do the same for many areas of specialized scientific research. This will make it possible for
scientists with training in areas like chemistry or materials science to deploy the power of AI without the need to
first become machine learning experts. Artificial intelligence, in other words, is evolving into an accessible utility that
can be wielded in ever more creative and targeted ways.

An even more ambitious approach involves integrating AI-based software geared toward the discovery of chemicals with robots that
can perform physical laboratory experiments. One small company pushing in this direction is Cambridge, Massachusetts–based
Kebotix, a startup that spun out of a leading materials science laboratory at Harvard, which has developed what it calls the “world’s
first self-driving lab for materials discovery.” The company’s robots can perform experiments autonomously, manipulating laboratory
equipment like pipettes to transfer and combine liquids and accessing machines that perform chemical analysis. Experimental results
are then analyzed by artificial intelligence algorithms, which in turn make predictions about the best course of action and then initiate
more experiments. The result is an iterative, self-improving process that the company claims dramatically accelerates the discovery of
useful new molecules.74
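The iterative loop described here can be sketched schematically. In the hypothetical Python below, run_experiment stands in for a robotic assay and propose_candidates for the planning model; none of these names come from Kebotix, and a real system would use a proper surrogate model (for example, Bayesian optimization) rather than this toy explore-then-exploit rule:

```python
import random

random.seed(0)

def run_experiment(x: float) -> float:
    """Stand-in for a robotic assay: returns a noisy measurement of how
    good candidate x is (here the unknown optimum sits near x = 0.7)."""
    return -(x - 0.7) ** 2 + random.gauss(0, 0.01)

def propose_candidates(history, n=8):
    """Stand-in for the AI planner: sample at random until enough data
    exists, then search near the best result seen so far."""
    if len(history) < 8:
        return [random.random() for _ in range(n)]
    best_x, _ = max(history, key=lambda h: h[1])
    return [min(1.0, max(0.0, random.gauss(best_x, 0.1))) for _ in range(n)]

history = []  # (candidate, measured result) pairs
for cycle in range(10):                     # each cycle = one robot batch
    batch = propose_candidates(history)     # AI proposes experiments
    results = [(x, run_experiment(x)) for x in batch]  # robot runs them
    history.extend(results)                 # results refine the next round

best_x, best_val = max(history, key=lambda h: h[1])
print(f"best candidate {best_x:.3f} with measured value {best_val:.3f}")
```

The essential property is that each cycle's measurements feed the next cycle's proposals, so the search concentrates on promising regions of the space without a human in the inner loop.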

Many of the most exciting and heavily funded opportunities in the space where chemistry intersects with artificial intelligence are in
the discovery and development of new drugs. By one account, as of April 2020, there were at least 230 startup companies focused on
using AI to find new pharmaceuticals.75 Daphne Koller, a professor at Stanford and the co-founder of the online education company
Coursera, is one of the world’s top experts on applying machine learning to biology and biochemistry. Koller is also the founder and
CEO of insitro, a Silicon Valley startup, founded in 2018, that has raised over $100 million to pursue new medicines using machine
learning. The broad-based slowdown in technological innovation that plagues the American economy as a whole is especially evident
in the pharmaceutical industry. Koller told me that:

The problem is that it is becoming consistently more challenging to develop new drugs: clinical trial success rates are
around the mid-single-digit range; the pre-tax R&D cost to develop a new drug (once failures are incorporated) is
estimated to be greater than $2.5 [billion]. The rate of return on drug development investment has been decreasing linearly
year by year, and some analyses estimate that it will hit zero before 2020. One explanation for this is that drug
development is now intrinsically harder: Many (perhaps most) of
the “low-hanging fruit”—in other words,
druggable targets that have a significant effect on a large population—have been
discovered. If so, then the next phase of drug development will need to focus on drugs that are
more specialized—whose effects may be context-specific, and which apply only to a subset
of patients.76
The vision for insitro and its competitors is to use artificial intelligence to rapidly isolate promising drug candidates and dramatically
cut development costs. Drug discovery, Koller says, is “a long journey where you have multiple forks in the road” and “ninety-nine
percent of the paths are going to get you to a dead end.” If artificial intelligence can provide “a somewhat accurate compass, think
about what that would do to the probability of success of the process.”77

Approaches like this are already paying dividends. In February 2020, researchers at MIT announced that they
had discovered a powerful new antibiotic using deep learning. The AI system built by the researchers was able to
sift through more than one hundred million prospective chemical compounds within days. The
new antibiotic—which the scientists named halicin after HAL, the artificial intelligence system from 2001: A Space Odyssey—
proved lethal to nearly every type of bacteria it was tested against, including strains that are
resistant to existing drugs.78 This is critical because the medical community has been warning of a looming crisis of drug-
resistant bacteria—such as the “superbugs” that already plague many hospitals—as the organisms adapt to existing medications.
Because development costs are high and profits relatively low, few new antibiotics are in the development pipeline. Even those new
drugs that have made it through the rigorous and expensive testing and regulatory approval process tend to be variations on existing
antibiotics. Halicin, in contrast, seems to attack
bacteria in a completely novel way, and experiments suggest that
the mechanism may be especially resilient to the mutations that generally make antibiotics less
effective over time. In other words, artificial intelligence has produced a solution based on the kind of
“outside the box” exploration that is critical to meaningful innovation.

Another important milestone, also announced in early 2020, came from the U.K.-based startup company Exscientia, which used
machine learning to discover a new drug for treating obsessive compulsive disorder. The company says the project’s initial
development took just one year—about one fifth the time that would be typical for traditional techniques—and claims it is the first AI-
discovered drug to enter clinical trials.79

As we saw in Chapter 1, an especially notable achievement in the application of artificial intelligence to biochemical research was
DeepMind’s protein folding breakthrough announced in November 2020. Rather than attempting to discover a specific drug,
DeepMind has instead deployed its technology to gain understanding at a more fundamental level. In late 2018, DeepMind entered an
earlier version of its AlphaFold system in a biennial global contest known as the Critical Assessment of Structure Prediction, or
CASP. Teams from around the world used a variety of techniques based on both computation and human intuition to attempt to
predict the way proteins fold. AlphaFold won the 2018 contest by a wide margin, but even while prevailing, it produced the best prediction for only twenty-five of the forty-three protein sequences. In other words, this preliminary version of
AlphaFold was not yet accurate enough to be a truly useful research tool.80 The fact that DeepMind was able to refine its technology
to the point where a number of scientists declared the protein folding problem to be “solved” just two years later is, I think, an
especially vivid indication of just how rapidly specific applications of artificial intelligence are likely to continue advancing.

Aside from using machine learning to discover new drugs and other chemical compounds, the
most promising general
application of artificial intelligence to scientific research may be in the assimilation and
understanding of the continuously exploding volume of published research. In 2018 alone, more than
three million scientific papers were published in more than 40,000 separate journals.81 Making sense of
information on that scale is so far beyond the capability of any individual human mind that artificial
intelligence is arguably the only tool at our disposal that could lead to some sort of holistic comprehension.
Natural language processing systems based on the latest advances in deep learning are being deployed to extract
information, identify non-obvious patterns across research studies and generally make conceptual
connections that might otherwise remain obscure. IBM’s Watson technology continues to be one
important player in this space. Another project, Semantic Scholar, was initiated by the Seattle-based Allen Institute for Artificial
Intelligence in 2015. Semantic Scholar offers AI-enabled search and information extraction across more than 186 million published
research papers in virtually every scientific field of study.82
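As a toy illustration of the retrieval layer underlying such tools, the following self-contained Python sketch ranks paper abstracts against a natural-language question using TF-IDF similarity. The three abstracts are invented placeholders, and systems like Semantic Scholar rely on far more capable neural language models than this bag-of-words baseline:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented placeholder abstracts standing in for millions of real papers.
abstracts = [
    "We report a deep learning model that screens chemical libraries "
    "for antibacterial activity against resistant strains.",
    "This study models the epidemiological dynamics of respiratory "
    "virus transmission in dense urban populations.",
    "We survey reinforcement learning methods for robotic manipulation.",
]

query = "machine learning for discovering new antibiotics"

# Embed corpus and query in the same sparse vector space, then rank
# documents by cosine similarity to the query.
vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(abstracts)
query_vector = vectorizer.transform([query])
scores = cosine_similarity(query_vector, doc_vectors).ravel()

for rank, idx in enumerate(scores.argsort()[::-1], start=1):
    print(f"{rank}. (score={scores[idx]:.2f}) {abstracts[idx][:60]}...")
```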

In March 2020, the Allen Institute joined with a consortium of other organizations including Microsoft, the National Library
of Medicine, the White House Office of Science and Technology Policy, Amazon’s AWS division and others to create the COVID-
19 Open Research Dataset, a searchable database of scientific papers relating to the coronavirus pandemic.83 The
technology enables scientists and healthcare providers to rapidly access answers to specific questions in a broad
range of scientific areas, including the biochemistry of the virus, epidemiological models, and treatment of
the disease. As of April 2021, the database contained more than 280,000 scientific papers and was being heavily used by scientists and
doctors.84

Initiatives like these have enormous potential to be crucial tools in accelerating the generation of
new ideas. The technology remains in its infancy, however, and real progress will likely require surmounting at least some of the
hurdles on the path to more general machine intelligence, a subject we’ll delve into in Chapter 5. It’s easy to imagine that a truly
powerful system could step into the role of an intelligent research assistant for scientists, offering the ability to engage in genuine
conversation, play with ideas and actively suggest new avenues for exploration.

15. AI innovation makes growth sustainable – it removes every critical roadblock.
Abdallat ’22 [AJ; May 13; CEO @ Beyond Limits; Forbes, “Formidable Human-AI Relations
Can Accelerate Sustainability Efforts”;
https://www.forbes.com/sites/forbestechcouncil/2022/05/13/formidable-human-ai-relations-can-
accelerate-sustainability-efforts/?sh=242acb8811f6; AS]
Artificial intelligence (AI), machine learning (ML) and similar digitalization solutions are modifying the way the world's most
influential companies and industries — as well as entire cities — function every day. When working in harmony with humans, AI
and other automation systems have the potential to make
huge impacts on economic growth across the globe, going
so far as to help clear humanity's most critical roadblocks, from streamlining energy
production to improving grid systems and achieving more sustainable operations for nearly every
major industry on Earth.
As the CEO of an AI company making advanced digitalization software products and solutions, the paradigm of enabling people and
AI to work together on achieving more sustainable operations is always top of mind; its importance cannot be overstated. As we move
into the future, I'm confident there will be plenty of jobs for both humans and AI so long as they are able to function in conjunction
with one another.

Of course, humanity will likely need to understand and accept that meaningful modifications are inevitable during transition periods. It
will be necessary for many to develop proficiencies and adaptable skill sets that only a human mind can provide. Flexibility around
the ever-progressing capabilities of AI will be vital. The minute that humans and machines find the groove in the role they play with
one another, that marriage
of unlimited creativity and seamless functionality should bring about an era
that propels us beyond all limits to solve some of our world's most important challenges, including the
climate crisis.
Small transformations across big industries may trigger substantial strides.

AI is advanced enough to work in harmony with humans, combining powers to fast-track large-
scale efforts to make real change when it comes to environmental sustainability, resource
preservation and waste reduction. From more intentional identification of potential emission impacts
across energy operations to making entire cities smarter and more efficient, AI is already delivering substantial results.

One major company taking significant steps in accomplishing this goal is BP, which is partnering with Microsoft and
targeting net-zero emissions by 2050. The energy giant also underscored ambitions for the near future, stating in
a press release: "By the end of the decade, it aims to have developed around 50 gigawatts of net renewable
generating capacity—a 20-fold increase on what it has previously developed, increased annual low carbon
investment 10-fold to around $5 billion, and cut oil and gas production by 40%."

Additional instances of how AI initiatives are confronting climate issues include:

• Implementation of AI and ML to improve energy production in real time.

• Automation of downstream operations to boost plant efficiency by 8% to 12%.

• Improved grid systems to enhance forecasting ability and performance, allowing for more deliberate
renewable energy strategies.

• Transportation and navigation optimization through AI and ML apps like Google Maps and Waze, alongside
other vehicular data-collection solutions, to reduce emissions and pollution by relaying pertinent vehicle efficiency, traffic
and other similar congestion data to consumers.

• The use of robotics at the edge, equipped with AI-infused chips, to keep our environment healthy
through the prevention of catastrophic equipment failure and leaks by autonomously detecting fissures,
deterioration, leaks or other potentially devastating failures within an oil pipeline, refinery or otherwise.

The trust problem and explainability solution.

According to an article from Earth.Org: "The field of Artificial Intelligence (AI) is flourishing thanks to large investments, and big companies with heavy ecological footprints can use it to make their activity more sustainable." However, one of the biggest obstacles hindering AI's pathway to solving some of our most important issues is a lack of trust, which slows the adoption and implementation of the technology.

Most AI tools tend to operate in opaque black boxes where human users cannot perceive how they arrive at conclusions, answers and
recommendations. These solutions often just spit out a remedy with no explainability, traceability or auditability — hardly building
confidence. Increasing the opportunities for AI to amplify the talents and capabilities of people requires their trust. Otherwise, the
ability for both parties to most effectively work together to solve our most important sustainability problems becomes severely
limited.

Enter cognitive AI. This type of explainable AI works transparently, revealing the reasoning behind its
recommendations in a straightforward manner and readily showing relevant humans the comprehensive data behind its
decision-making process through clearly readable audit trails. AI should not be a substitute for human input but rather used as a tool
for humans to make more confident decisions. In order for humans to trust AI, the solutions must not hide their processes within
black-box functionality. Explainable AI explodes the black box to grant that essential, trust-building clarity.

The balance of knowledge-based cognition and digitalization is what can enable decision makers to
identify unexpected opportunities and take immediate action in critical scenarios. Resulting process enhancements
would likely result in superior communication, reinforced cooperation and streamlined enterprises where everything operates more
economically and sustainably. When working in conjunction with humans, cognitive AI can grant the ability to monitor everything on
a more holistic level. Stakeholders
can then elevate procedures to both obtain additional value and shrink waste, thus decreasing carbon footprints and moving closer to net-zero targets.
AI of the future: A solution with indelible results.

Working toward a lower carbon outlook will demand that moves be made around operational efficiency, improved production tactics
and minimized waste. The significance of a global initiative around a more sustainable world cannot be downplayed. As a software
company building AI of the future, we bear a responsibility of the utmost importance: to confront this challenge head-on by pioneering sustainability-first solutions that keep humans in the loop.

AI will be a critical utility when it comes to technologies that support businesses, industries and cities in attaining
vital net-zero objectives. When AI proves itself as an explainable, human-trustable solution that works in harmony with
people, a limitless capacity can be unleashed for unraveling some of our most complex obstacles in
attaining a more sustainable tomorrow for our planet and its inhabitants.
