
1nc vs kentucky

off
cp – np
The United States federal government should:
- hold existing manufacturers liable for torts and arbitration errors
committed by artificial intelligence, and
- require those manufacturers to purchase insurance for payment of
damages in said arbitration and tort cases as a requirement for
commercializing said artificial intelligence.

Solves.
Diamantis, 21 – Law Prof @ University of Iowa
(Diamantis, Mihailis - Mihailis Diamantis’ legal research focuses on corporate crime and criminal
theory. He is concerned with how familiar concepts like mens rea shape corporate incentives
and the justice of verdicts involving corporate defendants. He has subsidiary interests in privacy
law and surveillance. Algorithms Acting Badly: A Solution from Corporate Law (February 27,
2020). 89 Geo. Wash. L. Rev. 81 (2021), U Iowa Legal Studies Research Paper No. 2020-12,
Available at SSRN: https://ssrn.com/abstract=3545436 or
http://dx.doi.org/10.2139/ssrn.3545436)//Neo

Corporations develop, run, and maintain the world's most impactful algorithms. 50 In such cases, I claim that
algorithmic action is corporate action. 51 Just as corporations act through their employees, 52 they may also act through their algorithms.
Holding corporations liable for the things they do through their employees induces corporations to ensure that their employees behave in socially beneficial ways. 53

Recognizing that corporations act through their algorithms would similarly encourage corporations to
exercise responsible control over algorithmic injuries. By converting the question of injurious algorithmic action into a question of
injurious corporate action, the [*810] approach advanced here crucially avoids the practical and philosophical challenges that accompany any effort to personify algorithms. Algorithms become an extension of the corporate person, not
persons in their own right. In my earlier work on mental states, I asked: "Under what conditions should corporations be liable when their algorithms act on their behalf?" 54
Here I ask the logically prior question: "Under what conditions does algorithmic action qualify as corporate action?"

Although the proposal I develop below is grounded in U.S. law, it should be of interest beyond
American borders. The Organization for Economic Cooperation and Development ("OECD") has recommended to all its nation-
members that AI actors (defined as "those who play an active role in the AI system lifecycle") "should be accountable for the proper
functioning of AI." 55 Similarly, the European Union has acknowledged the need for "civil liability rules ... to ensure

adequate compensation in case of [algorithmic] harm and/or rights violations" and "the need to ensure that criminal responsibility
and liability can be attributed in line with the fundamental principles of criminal law." 56 It is not
enough simply to stipulate that AI actors will be accountable because there will often be many
actors connected to algorithmic injury . Operationalizing the recommendation requires a mechanism for
apportioning liability. What I offer is one approach, grounded in principles of fairness and prevention.

Without a framework establishing a robust connection between algorithmic misconduct and corporate liability,
the algorithmic accountability gap will only grow wider. Technologists' pessimistic predictions may
prove inevitable. Algorithms can now carry out many functions that just a decade ago required human
employees. 57 That [*811] trend will accelerate over the decade to come. 58 When algorithmic injuries
do not qualify as corporate actions, the law effectively shields corporations from the liability they
would have faced using human employees instead. Businesses will seek the safe harbor of algorithmic
misconduct rather than risk liability for misconduct by human employees. This gives corporations
strong incentives to automate, even when automation might not otherwise be profitable or socially
beneficial. 59

In providing a framework for addressing algorithmic injury, this Article seeks the path of least resistance. In pursuit of realistic prospects for success, it grounds itself in existing corporate law principles. Part I details the current law of corporate liability, emphasizing how the law conceives of injurious corporate action by looking for an injurious employee action to attribute to the corporation. Part II shows how law, as presently applied, cannot close the algorithmic accountability gap because algorithmic injury has no obvious place in it.

Part III argues that an approach to algorithmic accountability may be hiding in plain sight. The principles behind the current law of corporate liability - which emphasize relationships of control and benefit 60 - extend beyond the employment context. Corporations also have control over and benefit from their algorithms, which motivates two possible approaches. A "control-based account" would attribute algorithmic harms to any corporation that exercises sufficient control over the algorithm in question. By contrast, a "benefits-based account" would attribute algorithmic harms to any corporation that lays substantial claim to the productive benefits of the algorithm in question. After detailing both accounts, this Article criticizes them for being overbroad. In their stead, this Article settles on a "beneficial-[*812] control account" which would require algorithmic harms to satisfy both the control and benefit criteria before attribution.

As Part IV shows, recognizing that corporations act through algorithms just as they act through employees would go a
long way to address algorithmic injury. This would establish a responsible party against whom victims could
seek satisfaction. And that, in turn, would incentivize corporations to take care to discipline their algorithms by designing, releasing, monitoring, and updating them responsibly. Though there would be some challenges with
implementation, Part IV shows they would be surmountable. Finally, this Article concludes by noting some limitations of using corporate law to solve the algorithmic
accountability gap.
da – spillover
The AFF breaks the firewall that reserves personhood for humans---leads to
spiraling claims for superintelligence rights, causing extinction
---SRs = Superintelligent Robots
---HMS = Higher Moral Status; FMS = Full Moral Status

John-Stewart Gordon 22, full professor of philosophy, chief researcher at the Faculty of Law,
head of the Research Cluster for Applied Ethics (RCAE), and principal investigator of the EU-
funded research project “Integration Study on Future Law, Ethics, and Smart Technologies”
(2017-2021) at Vytautas Magnus University in Kaunas, Lithuania, “Are Superintelligent Robots
Entitled to Human Rights?,” Ratio, vol. 35, no. 3, 2022, pp. 181–193
4 THINKING THE IMPOSSIBLE

4.1 Superintelligent robots and human rights?

Gordon and Pasvenskiene (2021) reviewed the contemporary literature on whether intelligent robots should be
entitled to human rights once they exist and offered an interesting analysis of the current state of research. This topic has
been quite energetically discussed in numerous blogs and popular magazines but only very rarely in academic journals.
Both authors believe that this lack of academic attention is a mistake since future technological
development will—albeit perhaps still several decades in the future—most likely lead to the creation of SRs who
may want to have their “human rights” recognized.

The concept of human rights—or at least the idea of moral human rights—is no longer necessarily linked to being
human, but rather to the concept of personhood (Gordon & Pasvenskiene, 2021, sections 4 and 5). Recently,
numerous authors have applied the human rights approach to support the protection of higher
animals (Cavalieri, 2001; Donaldson & Kymlicka, 2013; Francione, 2009; Singer, 1975) and even the environment
(Atapattu, 2015; Stone, 2010). If this is the case, however, then one could raise the question whether SRs
should be entitled to human rights as well once (and if) they exist. Although it is possible, as Gunkel (2018) and
Gellers (2022) do, to speak about “robot rights” rather than “human rights for robots,” the rhetorical force of using
human-rights language is an important aspect of including humanlike SRs in the moral community.
That is why Francione (2009) and others have used the term human rights also in the context of
animals. Likewise, it seems appropriate to speak about human rights in the context of SRs.

It was argued above that the concept of personhood is the most important way to determine an entity's
moral (and legal) status. In particular, it was argued that mental or psychological capacities are of utmost
significance in determining what we understand by personhood, in contrast to the social-relational approach and the human
dignity approach (which support the view that one must be a member of the species homo sapiens to
count morally). Furthermore, it seems reasonable to consider the possibility that entities can have
HMS depending on their degree of psychological capacities. But if this is the case, then it seems
also plausible that these supra-persons should eventually be entitled to more (or stronger) moral
and legal rights than typical adult human beings (see McMahan, 2009).

But what if these supra-persons are SRs? McMahan (2009) and other authors including Singer (1975, 1979) rightly
indicate that our common-sense morality is incoherent with respect to how we deal with, for
example, animals such as pigs, geese, chickens, cows and chimpanzees that have more advanced psychological capacities
than human fetuses and even newborns. Most people believe that, for example, newborns are entitled to moral
rights such as the right not to be killed or used for medical purposes, whereas the just-mentioned animals, which are
(empirically speaking) more advanced in terms of their psychological capacities, hold
these rights to a much lesser
degree if at all. The reason for this incoherence is that human fetuses and newborns belong to the human
species and are connected to a human family (McMahan, 2009).

However, if we use only one moral standard, such as the ability to reason, for all cases and apply it
coherently, then human offspring would gain personhood, FMS, and the related moral and legal rights at a much later stage in
their lives; in fact, they would rate below the higher-level animals mentioned above (Gordon, 2021). The above line of
reasoning has implications for the way how we would most likely treat SRs. If, at some future point, we
see the advent of SRs, we will most probably not grant them HMS relative to typical adult human beings. But something is wrong
with this reasoning. Gordon and Pasvenskiene (2021) have pointed out that the ascription of moral status and rights is not up to us
but must be determined independently. In other words, if
SRs meet the objective criteria which determine
whether entities have FMS and HMS, then they are entitled to the related moral and legal rights, unless
one adopts an approach that we would consider incoherent.

The socio-political, legal and moral implications of this development would be substantial. If non-human entities are awarded human
rights and even stronger moral and legal rights than typical adult human beings based on their much higher psychological capacities,
substantial social unrest might arise between human beings and SRs. Two differently advanced species would be sharing one world.
If we cannot solve the so-called alignment problem between them, as argued by Bostrom (2014) and
more recently by Ord (2020, chapter 5), humanity would most likely become extinct.10 Therefore, we should
contemplate these issues carefully and long before the existence of SRs becomes imminent.
4.2 The argument

The above-mentioned general line of reasoning leads to the following argument:

Premiss 1: The attribution of moral (and legal) rights is based on the particular moral (and legal) status of a given entity.

Premiss 2: The higher the moral (and legal) status of an entity is, (a) the more rights are available to that entity and (b) the stronger that entity's rights are in comparison to entities with a lower moral (and legal) status (degree model).

Premiss 3: The typical adult human being has FMS and the greatest amount of moral and legal rights in the strongest sense (common view).

Premiss 4: SRs have HMS (hypothesis).

Premiss 5: HMS provides the entity with more and stronger moral (and legal) rights compared to entities with a lower moral status (degree model).

Ergo: SRs are entitled to more and stronger moral (and legal) rights than the typical adult human being.
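To make the argument's logical skeleton explicit, here is a minimal formalization (our own sketch, not Gordon's; `Entity`, `status`, `rights`, `sr`, and `human` are illustrative placeholders, with status and rights crudely modeled as numbers). It shows the conclusion follows mechanically once the degree model and the HMS hypothesis are granted:

```lean
-- Hedged sketch of the argument's skeleton, not Gordon's own text.
-- `status` encodes moral/legal status and `rights` the amount/strength
-- of rights; both are modeled as naturals purely for illustration.
theorem sr_rights_exceed_human_rights
    (Entity : Type) (status rights : Entity → Nat)
    (human sr : Entity)
    -- Premisses 2 and 5 (degree model): strictly higher status yields
    -- strictly more and stronger rights.
    (degree_model : ∀ a b : Entity, status a > status b → rights a > rights b)
    -- Premisses 3 and 4: SRs' HMS exceeds the typical adult human's FMS.
    (hms_exceeds_fms : status sr > status human) :
    rights sr > rights human :=
  degree_model sr human hms_exceeds_fms
```

On this rendering, resisting the conclusion requires rejecting a premise, which is exactly where the two objections discussed below (against premiss 4 and premiss 5) are aimed.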

The above argument presupposes the correctness of the two principles stated by Bostrom and Yudkowsky (2014, pp. 323–324):

The principle of substrate non-discrimination: If two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation, then they have the same moral status.

The principle of ontogeny non-discrimination: If two beings have the same functionality and the same conscious experience, and differ only in how they came into existence, then they have the same moral status.

Both principles can be considered the starting point and constraining framework of how one should view the relationship between human beings and intelligent machines (once they exist). I do not see any convincing counterargument against both principles.11 The
above argument could be questioned on two accounts. The first objection concerns the likelihood of the existence of SRs (premiss 4) and the second objection concerns the use of the degree model in premiss 5. I will respond briefly to each point in turn.

The possibility of whether SRs might exist in the future remains unclear. Many AI researchers believe that we will eventually see the advent of SRs, but that it is impossible to determine the exact time (Müller & Bostrom, 2016). My estimation is that we will most probably encounter SRs towards the end of this century (or even earlier) and that we therefore
have limited time to consider how we want to organize our societies once they exist. The advent
of SRs will cause substantial socio-political as well as moral and legal implications for our societies,
and we need to prepare for them well in advance. If we wait until we are confronted by these issues to start thinking about them, it
will be too late.

A second objection is that we should not use the degree model to think about the recognition of moral and legal status and related
rights. Rather, one might argue, we should adhere to a moral threshold above which all entities should be treated in the same way.
This alternative would eliminate the possibility of entities with higher psychological capacities being granted a higher moral and legal
status and related rights than the typical adult human being. The underlying theme of this objection is connected to general ideas of
prioritizing one's own species over other species in relation to vital questions regarding moral status as well as moral and legal rights
(see the section “The incoherence approach” below). Even though this approach might be understandable from a human-centred
perspective, it still undermines how ethics and moral philosophy have been carried out in modern times (at least to a great extent)—
namely, by using universal ethical theories such as utilitarianism and Neo-Kantianism.12
Whether SRs would actually feel the need to respect a moral threshold is a totally different matter. That
is why scholars
such as Bostrom (2014) and Ord (2020) emphasize the need to solve the alignment problem and to
control SRs to ensure that humanity will not become extinct.
5 CRITICAL REMARKS

Besides the commonly voiced objections with respect to the impossibility of machines becoming self-aware, conscious, and intelligent and hence ever developing the ability to reason (Searle, 1980), opponents who adhere to functionalism and computationalism
have argued that intelligence does not necessarily require belonging to a particular substratum, such as carbon-based beings (like humans), but that it could also evolve in silicon-based environments, if the system is complex enough (Chalmers, 1996: chapter 9;
Sinnott-Armstrong & Conitzer, 2021, pp. 275–282). Eventually, this dispute will be solved when new types of highly complex systems actually emerge; it cannot be decided based on theoretical arguments alone.

The more interesting objections concern issues related to the likely advent of SRs and how we plan to deal with that eventuality, including ways to establish a proper mutual and peaceful relationship with non-human entities who are much smarter than us (indeed,
the difference between SRs and us could be comparable to that between us and ants, if not larger). If human beings try to treat SRs as their slaves with no rights, then humanity might be digging its own grave.

5.1 Do not produce SRs in great numbers

There is great interest in creating artificially intelligent entities that could take over many of our jobs (some scholars estimate that most human jobs will vanish within the next 100 years or so, especially those that are boring or extremely dangerous). SRs will achieve
everything much faster, more effectively and without any mistakes. This almost utopian situation, in which human beings could rest, relax and enjoy themselves all day, every day, could quickly turn into a dystopian nightmare in which human beings lose their
general capabilities, degenerate, and become lazy and stupid because they no longer face any obstacles in their lives and because everything is done for them (Danaher, 2019a). Adult human beings might become like children with limited human autonomy, due to
the overly paternalistic nature of machines that do everything for them (e.g., Danaher, 2019b; Russell, 2019; Vallor, 2015, 2016). At this stage, human beings will face the existential problem of boredom as SRs solve all our problems.

To avoid this dystopian situation, one could limit the number of SRs and other AI machines created so as to deal with any existential repercussions arising from these technological developments and prevent humanity from becoming a dysfunctional race. Limiting the
quantity of SRs would also avoid or minimize two additional problems: the existential risk problem and the competition for global resources. Bostrom (2014) and Ord (2020) have warned about the potential problem of unaligned SRs that might not value the same
things as humans and thus could be motivated to cause the human race's extinction if we do not solve the control problem. Furthermore, some scholars have voiced concerns with respect to earth's already limited global resources, which would have to be shared
amongst human beings and SRs as well. The production and maintenance of highly advanced machines require substantial resources and could quickly drain the remaining resources available on earth.

Therefore, in summary, we should avoid producing SRs in great numbers so as to avert (a) the degeneration of human beings, (b) our own destruction (existential risk scenarios), and (c) overconsumption of our global resources.

5.2 The incoherence approach

Some people might argue that even if SRs become smarter than human beings and therefore are entitled to HMS (at least from an objective perspective), we should never acknowledge their higher moral status and give them stronger (and more) moral and legal
rights than human beings have. Being incoherent, many might argue, is not necessarily morally wrong.

This line of reasoning is quite similar to how we currently treat higher animals such as the great apes, who actually deserve much stronger moral and legal rights based on their higher moral status than human beings tend to want them to have (Cavalieri, 2001;
Francione, 2009; Singer, 1975). McMahan (2009) correctly claims that human beings are incoherent with respect to the acknowledgement of the moral status and related rights of some animals in comparison to human offspring, based on their higher psychological
capacities. In addition, Singer (2009) has justifiably questioned this argument, calling it prone to speciesism and contending that for this reason it should be rejected (Singer, 2009).

Whether one should be “loyal” to one's own species compared to other species has been famously discussed by Bernard Williams in
his book chapter “The Human Prejudice” (Williams, 2006), where he argues against “moral universalists” such as Singer. Williams
explores the general idea of loyalty partly against the background of his thought-experiment concerning “benevolent managerial
aliens” who visit our planet and eventually conclude that it might be better to remove humans from earth (Williams, 2006, pp. 151–
152).13 In this context, he claims the following:

The relevant ethical concept is something like: loyalty to, or identity with, one's ethnic or cultural grouping; and in the
fantasy case, the ethical concept is: loyalty to, or identity with, one's species. … So the idea of there being an ethical
concept that appeals to our species membership is entirely coherent. (Williams, 2006, p. 150)

Applying Williams' reasoning to the case of robots, one could possibly argue that even
though SRs might become
much smarter than human beings and therefore claim HMS based on their status as supra-
persons, human beings should nonetheless prioritize protection of their own species against any
possible dangers, such as their extinction by a robot revolution. There might only be a narrow dividing line between
“allowing” SRs the enjoyment of their legitimate entitlement to moral and legal rights, on one hand, and, on the other hand, using
them as mere tools and thereby creating a situation that has been dubbed “new speciesism” (DeGrazia, 2022, pp. 84–86).

AI can only be rendered safe through confinement. AI LLCs make this impossible, causing a disjunctive complex of existential threats.
---DSA = Decisive Strategic Advantage – defined by Bostrom (2014, p. 78) as “a level of
technological and other advantages sufficient to enable [an AI] to achieve complete world
domination”

---MSA = Major Strategic Advantage – which we will define as “a level of technological and other
advantages sufficient to pose a catastrophic risk to human society”.

Kaj Sotala 18, Foundational Research Institute, “Disjunctive scenarios of catastrophic AI Risk,”
Chapter in Artificial Intelligence Safety And Security (Roman Yampolskiy, ed.), CRC Press, 2018,
https://kajsotala.fi/assets/2017/11/Disjunctive-scenarios.pdf
This chapter seeks to provide a broad look at the various ways in which the development of sophisticated AI could lead to it
becoming powerful enough to cause a catastrophe. In particular, this chapter seeks to focus on the way that various risks are
disjunctive—on how there are multiple different ways by which things could go wrong, any one of
which could lead to disaster. In so doing, the chapter seeks to expand on existing work (T. Barrett & Baum 2017a) which
has begun applying established risk analysis methodologies into the AI safety field (T. Barrett & Baum 2017b).

Our focus is on AI advanced enough to count as an AGI, or artificial general intelligence, rather than risks from “narrow AI”, such as
technological unemployment (Brynjolfsson and McAfee 2011). However, it should be noted that some
of the risks
discussed—in particular, crucial capabilities related to narrow domains (see section 4.3)—could arise
anywhere on the path from narrow AI systems to superintelligence.

The intent is not to deny or minimize the various positive aspects which could also result from the
creation of AI, or to suggest that AI development should not be pursued. Rather, the intent is to enable the
realization of AI’s positive potential, in the same manner that developing a better understanding of vulnerabilities
related to computer security allows for the creation of safe and secure computer systems.

2. Enablers of catastrophe

Most arguments for risks from AI arise from the conjunction of two claims (Yudkowsky 2008, Bostrom 2014, Sotala & Yampolskiy
2015), the capability claim and the value claim. This chapter is focused on examining various ways by which the capability claim
could become true. A model of the value claim is outside the scope of this chapter, but see T. Barrett & Baum (2017a) for one.

1. The capability claim: AI can become capable enough to potentially inflict major damage to
human well-being

2. The value claim: AI may act according to values which are not aligned with those of humanity, and in doing so cause considerable harm
These claims can be further broken down. An existing model of them is the ASI-PATH model (T. Barrett & Baum 2017a) (Figure 1).
ASI-PATH focuses on analyzing pathways by which an AI may cause a catastrophe by becoming superintelligent via recursive self-
improvement, with humans being unable to prevent it from taking unsafe actions.

[FIGURE 1 OMITTED]
The ASI-PATH model uses fault diagram conventions, with the undesired event (AI catastrophe) as the top node, followed by two
nodes which would enable the top node if they were both true. These are the “ASI actions are unsafe” node, which corresponds to
the value claim, and the “AI takes off, resulting in [Artificial Super-Intelligence] with [Decisive Strategic Advantage]” node, which
corresponds to a specific form of the capability claim. This chapter seeks to expand upon ASI-PATH by considering more general
forms of the capability claim.
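Read as code, the fault-tree convention just described is an AND gate over the two claims. A toy sketch (our illustration, not the actual ASI-PATH model) makes the conjunction explicit:

```python
# Toy sketch of the ASI-PATH top node as described above (our rendering,
# not Barrett & Baum's model): in fault-diagram conventions, the undesired
# top event fires only if both enabling nodes are true.

def ai_catastrophe(asi_actions_unsafe: bool, takeoff_yields_dsa: bool) -> bool:
    """Top node: requires the value claim AND the capability claim."""
    return asi_actions_unsafe and takeoff_yields_dsa

# Blocking either claim blocks the top event:
assert ai_catastrophe(True, True) is True    # both claims hold -> catastrophe
assert ai_catastrophe(True, False) is False  # capability claim blocked
assert ai_catastrophe(False, True) is False  # value claim blocked
```

The chapter's disjunctive analysis then expands the capability-claim node into many alternative sub-paths, any one of which can satisfy it.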

The capability claim is often formulated as the possibility of an AI achieving a decisive strategic advantage (DSA). While the notion of
a DSA has been implicit in many previous works, the concept was first explicitly defined by Bostrom (2014, p. 78) as “a level of
technological and other advantages sufficient to enable [an AI] to achieve complete world domination”.

However, assuming that an AI will achieve a DSA seems like an unnecessarily strong form of the
capability claim, as an AI could cause a catastrophe regardless. For instance, consider a scenario
where an AI launches an attack calculated to destroy human civilization. If the AI was successful
in destroying humanity or large parts of it, but the AI itself was also destroyed in the process,
this would not count as a DSA as originally defined. Yet, it seems hard to deny that this outcome
should nonetheless count as a catastrophe.

Because of this, the present chapter focuses on situations where an AI achieves (at least) a major
strategic advantage (MSA), which we will define as “a level of technological and other advantages
sufficient to pose a catastrophic risk to human society”. A catastrophic risk is one that might inflict
serious damage to human well-being on a global scale and cause ten million or more fatalities
(Bostrom and Ćirković 2008).
Besides the obvious reasons for wanting to avoid an AI-caused catastrophic risk, we note that
wide-scale destruction may contribute to global turbulence (Bostrom et al. 2016), a situation in which
existing institutions are challenged, and coordination and long-term planning become more
difficult. Global turbulence could then contribute to another out-of-control AI project failing
even more catastrophically and causing even more damage. Thus, what was originally only a
catastrophic risk may contribute to the development of further existential (Bostrom 2002, 2013; Sotala &
Gloor, in preparation) risks.

Much of the existing literature on AI safety has focused on examining scenarios where the AI achieves a DSA and analyzing the
prerequisites for this. This is in many respects a sensible strategy, since if we are capable of handling an AI that could achieve a DSA
we are most likely also capable of handling an AI that could achieve an MSA; assuming a more powerful AI is the conservative
assumption (Yudkowsky 2001). Yet this strategy has the downside of possibly giving the impression of much of AI safety analysis
being irrelevant if one finds the possibility of an AI acquiring a DSA to be exceedingly unlikely. Some defenses might also be sufficient
for preventing an AI from acquiring a DSA, without being sufficient for preventing it from getting an MSA.

3. When would a Strategic Advantage be acted upon?

An AI having the capability to inflict major damage on human well-being mostly matters if it has a motive to do so. (There is also the
possibility of the AI intending to cooperate with humanity, but causing damage by accident; this is beyond the scope of the present
analysis.) While a full analysis of the value claim is outside the scope of this chapter, it cannot be entirely distinguished from the
capability claim, as an AI’s values also affect the threshold of capability at which it is rational for it to act against humanity. As we will
discuss, some values and situations make it more likely for the AI to take hostile action even when it is less capable.

Two main reasons for an AI to take action that caused damage to humanity would be:

● It had goals which neglected human well-being, and it would damage humanity in the
pursuit of this goal, such as by disassembling human cities for their raw materials; “the
AI does not hate you, nor does it love you, but you are made out of atoms which it can
use for something else" (Yudkowsky 2008).

● It expected humans to take action against it and prevent it from fulfilling its goals, in
which case defending itself—or launching a preemptive attack—would be a rational course
of action, as this would allow it to actually carry out its goals (Omohundro 2007, 2008). This might be the case
even if the AI had a goal which did take elements of human well-being into account if
the AI had reason to believe that humans would nonetheless object to this goal being
carried out. 4

The exact goals that an AI has, influence the level of capability that it needs for human-hostile
actions to be a rational strategy. An AI which cares mainly about some narrow goal may be willing to destroy human
civilization in order to make sure that a potential threat to it is eliminated. This ensures that it can go on pursuing its goal
unimpeded. However, an AI which was programmed to maximize something like the “happiness of currently-living humans” may be
much less willing to risk substantial human deaths. 5 This would force it to focus on less destructive takeover methods, potentially
requiring more sophisticated abilities.

In effect, the AI’s values determine the level of capability that it needs to have for hostile action to be a viable strategy. A simplified
model (Shulman 2010) is that an AI that believes itself having probability P of being successful if it initiates aggression and to have an
expected utility EU(Success) if successful, EU(Failure) if it fails, and EU(Cooperation) if it desists from aggression and continues to
cooperate, will rationally initiate aggression when

P*EU(Success) + (1-P)*EU(Failure) > EU(Cooperation).

This might be taken to suggest that an AI would primarily launch an attack if it had, or thought it could acquire, a DSA and could thus
establish dominion over humans. However, even
an AI with only an MSA might take hostile action,
employing measures such as extortion and the threat of more limited damage in order to
acquire more resources or shift the world in a more favorable direction. Among other possibilities, this
could happen:

● if
the AI had been released to act autonomously and believed that it could not be
tracked down (see sections 5.1—5.2.5 for a discussion of ways in which an AI might either escape or be
voluntarily released by its creators)
● if the AI had allies which would protect it from retaliation (see section 4.3. for a discussion of social manipulation
abilities and sections 5.2.-5.2.6. for ways by which an autonomous AI might have human allies)

● if
the AI controlled a human organization which could not be attacked without
enormous collateral damage (see sections 4.2. and 5.2.6. for an AI acquiring control of a human organization)
● if there were already more powerful AI systems taking actions and the AI believed itself to be a low priority for
retaliation (see section 6 for discussion of multiple AIs).
Regardless of the scale of the aggression, the AI’s behavior is also affected by various other situational considerations. For instance, an AI might be disinclined to cause damage because it thought there would be too much collateral damage to the things it valued,
because it did not consider itself capable of surviving the resulting retaliation, or because it estimated that the resulting damage on infrastructure also deprived it of the resources (such as electricity) that it needed in order to survive.

Attacks also differ in the extent to which they can be selectively targeted. Traditional firearms can be aimed selectively, whereas pandemics potentially threaten all the members of a species. To the extent that the AI needs to rely on the human economy to produce
resources that it needs to survive, attacks threatening the economy also threaten the AI’s resources. These resources are in a sense shared between the AI and humanity, so any attacks which cause indiscriminate damage on those resources are dangerous for both.
The more the AI can design attacks which selectively deprive resources from its opponents, the lower the threshold it has for using them. More advanced capabilities at rebuilding infrastructure would also allow an AI to launch a more indiscriminate attack. An AI that
was capable of building more advanced infrastructure than the existing one might also disregard damage to the current infrastructure, if it was planning to demolish most of the existing one anyway.

The balance of these calculations could be shifted if the AI thought itself in danger of being destroyed by humans even though it was cooperating (lowering the expected utility of cooperation). Self-preservation is an instrumental goal for many different values,
because an agent that exists is better capable of furthering most values than an agent which does not exist (Omohundro 2007, 2008, Bostrom 2012). 6 An AI which was in imminent danger of being destroyed could rationally launch a counterattack, even risking
large amounts of destruction, as long as it estimated that the expected value of a scenario where the counterattack enabled it to survive and further its values outweighed the damage caused by the counterattack. This would be a particularly compelling motivator if
the AI had idiosyncratic values which it thought very unlikely to be promoted by other agents. If there were multiple AI projects in existence, and the AI believed that one of the other projects could acquire a DSA first, it would have a reason to risk an earlier attack
(see section 6 for discussion of multiple AIs). There have also been proposals for designing an AI's values in ways which explicitly make it less worthwhile to act in hostile ways. 7

The preceding analysis assumes that the AI chooses its actions rationally. Irrationality might seem like it would prevent an AI from becoming very capable, but like humans, an AI might be rational in some respects while being irrational in others. It could also be
rational for the AI to precommit to act in seemingly irrational ways, such as by choosing to irrationally ignore threats in order to make it less profitable for others to try to threaten it (Parfit 1984, sect. 5). The main consideration that emerges from potential
irrationality is that one cannot simply rely on the AI not causing damage, even if that would be a rational way for it to behave. Of course, irrationality could also cause an AI to avoid doing damage in a situation where it was rational for it to do so.

[TABLE 1 OMITTED]
4. Enablers of catastrophic capability

We will consider four rough scenarios that could give an AI either a DSA or an MSA: individual takeoff scenarios (with three main subtypes), collective takeoff scenarios, scenarios where power slowly shifts over to AI systems, and scenarios in which an AI being good
enough at some crucial capability gives it an MSA/DSA.

The likelihood of each of these either succeeding or failing is also affected by how cooperative humans are. While a possible scenario is one where an AI is entirely on its own and has to prevent its creators from shutting it down, there are also a variety of possible
scenarios (discussed in Section 5) where the AI has the partial or full cooperation of its creators, at least up to a certain point. These would affect the probability of each of the below scenarios coming true; a scenario in which a prototype AI has to avoid its
programmers from shutting it down, is very different from one where the programmers are certain of it being safe and voluntarily help it undergo a takeoff, especially if they also have the resources of a major corporation or nation-state at their disposal.

4.1. DSA enablers: takeoff scenarios

A “takeoff” (Bugaj & Goertzel 2007) is a process by which an AI becomes much more capable than humanity. In a soft takeoff, this happens on a time scale that allows ongoing human interaction, whereas in a hard takeoff, there will be some inflection point after
which the AI will increase in capability very quickly, breaking out of effective human control.

It is worth noting that a hard takeoff does not presuppose that an AI becomes very capable immediately after being created (however the moment of its creation is defined). A hard takeoff scenario may include an extended period of gradual improvement until some
key level of capability is met, after which the AI undergoes a rapid increase in its capabilities.

Many previous discussions (e.g. Yudkowsky 2008, Bostrom 2014, Sotala 2016) have focused on analyzing the possibility of a hard takeoff. While this is not the only possible scenario by which an AI might become capable, it is the one that leaves the least possibility to
fix anything that goes wrong.

Bearing in mind that an excessive focus on hard takeoff scenarios may hide the fact that a hard takeoff may not be necessary for an AI to achieve either an MSA or a DSA, we will first consider hard takeoff scenarios and then other capability enablers.

4.1.1. DSA enabler: Individual takeoff

An “individual takeoff” is one where a single AI manages to become so powerful as to entirely dominate humanity. Three rough paths for this have been proposed in the literature: a hardware overhang (“more AI” ), a speed explosion (“faster AI”), and an intelligence
explosion (“smarter AI”) (Sotala & Yampolskiy 2015); Bostrom (2014) discusses these under the terms collective superintelligence, speed superintelligence, and quality superintelligence, respectively. It should be noted that these paths are by no means mutually
exclusive: on the contrary, one of them happening may enable another also to happen.

4.1.1.1. Hardware overhang.

In a hardware overhang scenario (Yudkowsky 2008b, Shulman & Sandberg 2010), hardware develops faster than software, so that we’ll have computers with more computing power than the human brain does, but no way of making effective use of all that power. If
someone then developed an algorithm for general intelligence that could make effective use of that hardware, we might suddenly have an abundance of cheap hardware that could be used for running thousands or millions of AIs. These AIs might or might not be
superintelligent, but the sheer number of them would allow them to carry out coordinated operations on a massive scale. If a single AI took advantage of this potential to produce large numbers of copies or subagents of itself, it would allow for an individual takeoff.
Otherwise this would make for a collective takeoff, 9 discussed below.

A hardware overhang may effectively happen even if AI was hardware-constrained at first: the first AIs may require large amounts of hardware, with further optimizations quickly bringing the hardware requirements down. Looking at recent progress in AI, the initial
approach for learning Atari 2600 games (Mnih et al. 2015) used specialized hardware in the form of a GPU, but an alternative approach was released only a year later which used a standard CPU and achieved better results using a shorter training time (Mnih et al.
2016). In addition to suggesting that software optimizations could rapidly increase the amount of AIs that could be run, the fact that speed and performance also improved highlights the possibility of a hardware overhang scenario also contributing to the speed
explosion and intelligence explosion scenarios, below.

4.1.1.2. Speed explosion.

In a speed explosion (Solomonoff 1985; Yudkowsky 1996; Chalmers 2010) scenario, intelligent machines design increasingly faster machines. A hardware overhang might contribute to a speed explosion, but is not required for it. An AI running at the pace of a human
could develop a second generation of hardware on which it could run at a rate faster than human thought. It would then require a shorter time to develop a third generation of hardware, allowing it to run faster than on the previous generation, and so on. At some
point, the process would hit physical limits and stop, but by that time AIs might come to accomplish most tasks at far faster rates than humans, thereby achieving dominance. In principle, the same process could also be achieved via improved software, as discussed
above.

The extent to which the AI needs humans in order to produce better hardware will limit the pace of the speed explosion, so a rapid speed explosion requires the ability to automate a large proportion of the hardware manufacturing process. However, this kind of
automation may already be achieved by the time that AI is developed. The more automation there is, the faster an AI takeover can happen.

If the level of security for the hardware is good, then speed explosion scenarios in which the AI breaks into manufacturing systems and seizes control of them become less likely. On the other hand, there are possible paths (discussed in Section 5) in which the AI is
given legitimate control to various resources. Having good security for automated factories does not help if the AI is the one running them, or if it can rent access to them on the open market and has sufficient money for doing so.

A speed explosion could also contribute to hardware overhang and an intelligence explosion by allowing for more efficient or otherwise better algorithms to be found in a shorter time.
4.1.1.3. Intelligence explosion.

In an intelligence explosion (Good 1965; Chalmers 2010; Bostrom 2014), an AI figures out how to create a qualitatively smarter AI and that smarter AI uses its increased intelligence to create still more intelligent AIs, and so on, such that the intelligence of humankind
is left far behind and the machines achieve dominance.

For many domains, there exist limits to prediction from the combinatorial explosions that follow from attempting to forecast increasingly into the future; and in e.g. weather modeling, forecasters can only access a limited amount of initial observations relative to the
weather system’s degrees of freedom (Buizza 2002). However, even if a superintelligent AI was unable to predict every future event accurately, it could still react to the event and predict its likely consequences better than humans could. Tetlock & Gardner (2015)
review and discuss the ability of certain human forecasters ("superforecasters") to predict world events with considerable accuracy; on unpredictable "black swan" (Taleb 2007) events, they write

‘We may have no evidence that superforecasters can foresee events like those of September 11, 2001, but we do have a warehouse of evidence that they can forecast questions like: Will the United States threaten military action if the Taliban
don’t hand over Osama bin Laden? Will the Taliban comply? Will bin Laden flee Afghanistan prior to the invasion? To the extent that such forecasts can anticipate the consequences of events like 9/11, and these consequences make a black
swan what it is, we can forecast black swans.’

Sotala (2017), based on a review of the literature on human expertise and intelligence, finds that in humans, expertise is based on developing mental representations which allow experts to understand different situations and either instantly know the appropriate
action for a given situation, or carry out a mental simulation of how a situation might develop and what should be done in response. Such expertise is enabled by a combination of two subabilities, pattern recognition and mental simulation.

Sotala (2017) argues that an AI could improve on both subabilities. Superhuman mental simulation ability could be achieved by a combination of running larger simulations taking more factors into account, and also by having several streams of attention which could
investigate multiple alternatives in parallel, exploring many different perspectives and causal factors at once. Running accurate mental simulations would also require good mental representations to form the basic building blocks of the simulations. Among humans,
there are cognitive differences which allow some people to learn and acquire accurate mental representations faster than others, and these seem to come down to factors such as working memory capacity, attention control, and long-term memory. These might be
improved upon via a combination of hardware improvements and theoretical computer science. In humans, improvements in intelligence seem to provide further benefits across the whole documented range of intelligence differences, and it seems likely that
various evolutionary constraints have bottlenecked human intelligence far below what might be the theoretical maximum.

With regard to limits on prediction from the inherent uncertainty of the world, Sotala (2017) acknowledges the existence of such limits, but argues that:

... it looks that even though an AI system couldn’t make a single superplan for world conquest right from the beginning, it could still have a superhuman ability to adapt and learn from changing and novel situations, and react to those faster
than its human adversaries. As an analogy, experts playing most games can't precompute a winning strategy right from the first move either, but they can still react and adapt to the game's evolving situation better than a novice can, enabling
them to win.

An intelligence explosion could also contribute to a speed explosion and to hardware overhang, if the AI’s increased intelligence enabled it to find algorithms which were most efficient in terms of enabling more AI systems to be run with the same hardware
(hardware overhang), or allowing them to be run faster (speed explosion).

4.1.2. DSA enabler: Collective takeoff with trading AIs

Vinding (2016; see also Hanson & Yudkowsky 2013) argues that much of seemingly-individual human intelligence is in fact based on being able to tap into the distributed resources, both material and cognitive, of all of humanity. Thus, it may be misguided to focus on
the point when AIs achieve human-level intelligence, as collective intelligence is more important than individual intelligence. The easiest way for AIs to achieve a level of capability on par with humans would be to collaborate with human society and use its resources
peacefully.

Similarly, Hall (2008) notes that even when a single AI is doing self-improvement (such as by developing better cognitive science models to improve its software), the rest of the economy is also developing better such models. Thus it’s better for the AI to focus on
improving at whatever thing it is best at, and keep trading with the rest of the economy to buy the things that the rest of the economy is better at improving.

However, Hall notes that there could still be a hard takeoff, once enough AIs were networked together: AIs that think faster than humans are likely to be able to communicate with each other, and share insights, much faster than they can communicate with humans.
As a result, it would always be better for AIs to trade and collaborate with each other than with humans. The size of the AI economy could grow quite quickly, with Hall suggesting a scenario that goes “from [...] 30,000 human equivalents at the start, to
approximately 5 billion human equivalents a decade later”. Thus, even if no single AI could achieve a DSA by itself, a community of them could collectively achieve one, as that community developed to be capable of everything that humans were capable of.

4.2. DSA/MSA enabler: power gradually shifting to AIs

The historical trend has been to automate everything that can be automated, both to reduce costs and because machines can do things better than humans can. Any kind of a business could potentially run better if it were run by a mind that had been custom-built
for running the business—up to and including the replacement of all the workers with one or more with such minds. An AI can think faster and smarter, deal with more information at once, and work for a unified purpose rather than have its efficiency weakened by
the kinds of office politics that plague any large organization. Some estimates already suggest that half of the tasks that people are paid to do are susceptible to being automated using techniques from modern-day machine learning and robotics, even without
postulating AIs with general intelligence (Frey & Osborne 2013, Manyika et al. 2017).

The trend towards automation has been going on throughout history, doesn’t show any signs of stopping, and inherently involves giving the AI systems whatever agency they need in order to run the company better. There is a risk that AI systems that were initially
simple and of limited intelligence would gradually gain increasing power and responsibilities as they learned and were upgraded, until large parts of society were under AI control.

4.3. MSA enabler: Crucial capabilities

For discussing MSAs, a key question is the capability threshold for inflicting catastrophic damage. An AI could be a catastrophic risk if its offensive capabilities in some crucial domain were sufficient to overwhelm existing defenses.

As we briefly discussed in section 3, assuming that the AI was rational, choosing to cause such damage would require a sensible motive; but as with humans, there could be a range of motives that would make hostile action a reasonable strategy, such as extortion,
the desire to assist an ally, or mounting a first strike against another AI or group which might otherwise be expected to obtain a DSA. Depending on the goals and on whether the AI had allies, conducting a follow-up to an attack enabled by crucial capabilities might
require additional capabilities, such as rebuilding after destroying key infrastructure.

It is important to notice that causing catastrophic damage probably does not even require superhuman capabilities (Torres 2016a; 2016b, chap. 4). For instance, it seems possible that a sufficiently determined
human attacker could already cause major damage on a society via electronic warfare. Although
there have not yet been cyberattacks that would have been reported to directly cause deaths, several have caused physical
damage or disruption to emergency services. In May of 2017, the “WannaCry” ransomware worm was reported to have infected
over 230,000 computers in over 150 countries (Ehrenfeld 2017), causing disruption to crucial services such as
healthcare (Gayle et al. 2017). In 2016, three substations in the Ukrainian power grid were reported to have been disconnected
by a malware attack, leaving about half of the homes in a region with 1.3 million inhabitants temporarily without electricity (Goodin
2016). A previous cyberweapon, Stuxnet, also had a physical target in the form of industrial centrifuges, which it managed to
successfully damage (Chen & Abu-Nimeh 2011). Various studies have found enormous numbers of industrial control
systems, controlling operations at installations such as banks and hospitals, exposed directly to the
Internet with no protection (Kiravuo et al. 2015).

The US and Russian governments could probably already wipe out most of humanity using
nuclear weapons. The Soviet Union also had an extensive biological warfare program, with an
annualized production capability of 90-100 tons of weaponized smallpox, as well as having genetically engineered diseases to resist
heat, cold, and antibiotics (USAMRIID 2014), which could have caused enormous death tolls if used. The
development of
genetic engineering and synthetic biology have also enabled the creation of biological agents far
more deadly than what could ever evolve naturally (ibid, p. 150-153). That none of these scenarios
has come true so far is due to the values of the humans in key positions, not because inflicting
massive damage would inherently require superhuman capability.

In the domain of social manipulation, modern-day machine learning has been used to create
predictions based on people’s Facebook “likes” that are more accurate than the predictions
made by their friends using a personality questionnaire (Youyou et al. 2015), and “likes” have also been used to accurately
predict private traits such as sexual orientation (Kosinski et al. 2013). Some reports in the popular press have alleged that the
marketing company Cambridge Analytica’s use of AI-driven marketing played a major role in the United States 2016 presidential
election and the United Kingdom’s 2016 European Union membership referendum (Grassegger & Krogerus 2017). While the truth of
this claim remains an open question, and has been called into question (Taggart 2017), it is suggestive of the kind of power that
AI capable of more sophisticated social modeling and manipulation might have, raising
the possibility of a world
where the outcomes of national elections were decided by AI systems.

In general, some plausible capabilities which might enable an MSA include biological warfare (developing and releasing
biological plagues), cyberwarfare (attacking systems running key infrastructure), and social manipulation (persuading
sufficiently many humans to do the AI’s will; even just a single human could cause catastrophic damage, if that human
was e.g. the head of a state). Note that similarly as with takeoff enablers, having one capability may contribute to
others: for example, an AI capable of social manipulation may leverage it to find collaborators
capable in the other domains, and cyberwarfare may yield compromising information which
assists in blackmailing people or collecting information about human behavior.
4.4. Putting DSA/MSA enablers together

[FIGURE 2 OMITTED]

The above figure (Figure 2) summarizes the different pathways to catastrophe discussed above. Any one of
a speed
explosion, intelligence explosion, or hardware overhang could contribute to an individual
takeoff, with a single AI achieving immense capability. A hardware overhang could also contribute to a
collective takeoff, with the spare hardware capability allowing large amounts of AI systems to be created in a short time, those
systems then beginning to trade with each other and soon collectively outpacing humanity. The “trading AIs” node, another enabler
of a collective takeoff, represents a scenario which is otherwise similar but in which there is no hardware overhang, and where the
different AIs are built over a longer period, until they have reached the level of capability necessary for a collective takeoff. Either
form of takeoff could give AIs a DSA. AIs could also achieve a DSA if humans voluntarily gave them enough power.

If AIs had been given some amount of power, but not enough to achieve a DSA, they could still achieve an MSA. Also, even a single AI
which was not powerful enough to achieve a DSA could achieve an MSA if it was sufficiently capable at some crucial offensive
capability.
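Since Figure 2 itself is omitted from the excerpt, the disjunctive structure of this summary can be sketched as nested OR gates (a hedged rendering of the prose above; the function and parameter names are ours):

```python
# Hedged sketch of the pathway summary above (ours; Figure 2 is omitted
# from the excerpt). Each node is disjunctive: any one branch being true
# is enough to enable the node above it.

def individual_takeoff(speed_explosion: bool, intelligence_explosion: bool,
                       hardware_overhang: bool) -> bool:
    return speed_explosion or intelligence_explosion or hardware_overhang

def collective_takeoff(hardware_overhang: bool, trading_ais: bool) -> bool:
    return hardware_overhang or trading_ais

def dsa(individual: bool, collective: bool,
        power_voluntarily_given: bool) -> bool:
    return individual or collective or power_voluntarily_given

def msa(some_power_given: bool, crucial_offensive_capability: bool) -> bool:
    # Even short of a DSA, partial power or a single crucial offensive
    # capability suffices for a major strategic advantage.
    return some_power_given or crucial_offensive_capability
```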

For an AI to pose a threat to humanity, it needs to have a way of affecting the world and causing a
catastrophe. A common proposal for limiting the AI’s power is to attempt to somehow restrict the
AI’s ability to communicate with and influence the world, generally known as “confinement” or
“AI boxing” (Chalmers 2010, Armstrong et al. 2012, Yampolskiy 2012, Bostrom 2014).
Challenges to confinement are two-fold. First, there is the technical challenge of confining the AI in such a way that it is unable to
escape, but is still capable of providing useful information. Additionally, confinement involves a social dimension, where decision-
makers may have various incentives to relax the confinement safeguards or even release the AI entirely, even if it was technically
possible to keep it contained (Sotala & Yampolskiy 2015). For confinement to be successful, both the technical and social
requirements have to be met.

5.1. The technical challenge

A common response to confinement proposals is that a sufficiently intelligent AI will somehow figure out a way to escape, either by social engineering or by finding an exploitable weakness in the physical security arrangements. This possibility has been extensively discussed in a number of papers, including Chalmers (2012) and Armstrong, Sandberg & Bostrom (2012). Writers have generally been cautious about making strong claims about our ability to keep a mind much smarter than ourselves contained against its will. However, with cautious design, it may still be possible to build an AI that combines some internal motivation to stay contained with a number of external safeguards monitoring the AI.

5.2. The social challenge

AI confinement assumes that the people building it, and the people that they are responsible to, are all motivated
to actually keep the AI confined. If a group of cautious researchers builds and successfully contains their AI, this may
be of limited benefit if another group later builds an AI that is intentionally set free. Reasons for releasing an AI may include i) economic benefit or competitive pressure, ii) criminal profit or terrorism, iii) ethical or philosophical reasons, iv) confidence in the AI’s safety, as well as v) desperate circumstances such as being otherwise close to death. We will discuss each in turn below.
5.2.1. Voluntarily released for economic benefit or competitive pressure

As discussed above under “power gradually shifting to AIs”, there is an economic incentive to deploy AI systems in control of corporations. This can happen in two forms: either by expanding the amount of control that already-existing systems have, or by upgrading existing systems or adding new ones with previously-unseen capabilities. These two forms can blend into each other. If humans previously carried out some functions which are then handed over to an upgraded AI that has recently become capable of doing them, this can increase the AI’s autonomy both by making it more powerful and by reducing the number of humans that were previously in the loop.

As a partial example, the US military is seeking to eventually transition to a state where the human operators of robot weapons are “on the loop” rather than “in the loop” (Wallach and Allen 2012). In other words, whereas a human was previously required to
explicitly give the order before a robot was allowed to initiate possibly lethal activity, in the future humans are meant to merely supervise the robot’s actions and interfere if something goes wrong. While this would allow the system to react faster, it would also limit
the window that the human operators have for overriding any mistakes that the system makes. For a number of military systems, such as automatic weapons defense systems designed to shoot down incoming missiles and rockets, the extent of human oversight is
already limited to accepting or overriding a computer’s plan of action in a matter of seconds, which may be too little time to make a meaningful decision in practice (Human Rights Watch 2012).

Sparrow (2016) reviews three major reasons that incentivize governments to move towards autonomous weapon systems and reduce human control:

1. Currently-existing remotely-piloted military “drones”, such as the U.S. Predator and Reaper, require a high amount of communications bandwidth. This limits the number of drones that can be fielded at once, and makes them dependent on communications satellites which not every nation has, and which can be jammed or targeted by enemies. A need to be in constant communication with remote operators also makes it impossible to create drone submarines, which need to maintain a communications blackout before and during combat. Making the drones autonomous and capable of acting without human supervision would avoid all of these problems.

2. Particularly in air-to-air combat, victory may depend on making very quick decisions. Current air combat is already pushing against the limits of what the human nervous system can handle: further progress may be dependent on removing humans from the loop entirely.

3. Much of the routine operation of drones is very monotonous and boring, which is a major contributor to accidents. The training expenses, salaries, and other benefits of the drone operators are also a major cost for the militaries employing them.

Sparrow’s arguments are specific to the military domain, but they illustrate the broader point that “any broad domain involving high stakes, adversarial decision making, and a need to act rapidly is likely to become increasingly dominated by autonomous systems” (Sotala & Yampolskiy 2015). Similar arguments can be made in the business domain: eliminating human employees to reduce costs from mistakes and salaries is something that companies are also incentivized to do, and making a profit in the field of high-frequency trading already depends on outperforming other traders by fractions of a second. While currently-existing AI systems are not powerful enough to cause global catastrophe, incentives such as these might drive an upgrading of their capabilities that eventually brings them to that point.

Absent sufficient regulation, there could be a “race to the bottom of human control” in which state or business actors compete to reduce human control and increase the autonomy of their AI systems in order to obtain an edge over their competitors (see also Armstrong et al. 2013 for a simplified “race to the precipice” scenario). This would be analogous to the “race to the bottom” in current politics, where government actors compete to deregulate or to lower taxes in order to retain or attract businesses.

The extent to which AI systems are given more power and autonomy might be limited by the fact that doing so poses large risks for the actor if the AI malfunctions. In business, this limits the extent to which major, established companies might adopt AI-based control, but it incentivizes startups to invest in autonomous AI in order to outcompete the established players. In the field of algorithmic trading, AI systems are already trusted with enormous sums of money despite the potential to make corresponding losses: in 2012, Knight Capital lost $440 million due to a glitch in their trading software (Popper 2012, Securities and Exchange Commission 2013). This suggests that even if a malfunctioning AI could potentially cause major losses, some companies will still be inclined to place their business under autonomous AI control if the potential profit is large enough.

U.S. law already allows for the possibility of AIs being conferred a legal personality, by putting them in charge of a limited liability company. A human may register an LLC, enter into an operating agreement specifying that the LLC will take actions as determined by the AI, and then withdraw from the LLC (Bayern 2015). The result is an autonomously acting legal personality with no human supervision or control. AI-controlled companies can also be created in various non-U.S. jurisdictions; restrictions such as ones forbidding corporations from having no owners can largely be circumvented by tricks such as having networks of corporations that own each other (LoPucki 2017). A possible startup strategy would be for someone to develop a number of AI systems, give them some initial endowment of resources, and then set them off in control of their own corporations. This would risk only the initial resources, while promising whatever profits the corporation might earn if successful. To the extent that AI-controlled companies were successful in undermining more established companies, they would pressure those companies to transfer control to autonomous AI systems as well.
5.2.2. Voluntarily released for purposes of criminal profit or terrorism

LoPucki (2017) argues that if a human creates an autonomous agent with a general goal such as “optimizing profit”, and that agent then independently decides to e.g. commit a crime for the sake of achieving the goal, prosecutors may then be unable to convict the human for the crime and can at most prosecute for the lesser charge of reckless initiation. LoPucki holds that this “accountability gap”, among other reasons, assures that humans will create AI-run corporations.

Furthermore, LoPucki (2017) holds that such “algorithmic entities” could be created anonymously and that their having a legal personality would give them a number of legal rights, such as being able to “buy and lease real property, contract with legitimate businesses, open a bank account, sue to enforce its rights, or buy stuff on Amazon and have it shipped”. If an algorithmic entity was created for a purpose such as funding or carrying out acts of terrorism, it would be free from social pressure or threats to human controllers:

In deciding to attempt a coup, bomb a restaurant, or assemble an armed group to attack a shopping center, a human-controlled entity puts the lives of its human controllers at risk. The same decisions on behalf of an AE risk nothing but the resources the AE spends in planning and execution. (LoPucki 2017)

While most terrorist groups would stop short of intentionally destroying the world, thus posing at most a catastrophic risk, not all of
them necessarily would. In particular, ecoterrorists who believe that humanity is a net harm to the planet, and religious terrorists
who believe that the world needs to be destroyed in order to be saved, could have an interest in causing human
extinction (Torres 2016, 2017, chap. 4).
5.2.3. Voluntarily released for aesthetic, ethical, or philosophical reasons

A few thinkers (such as Gunkel 2012) have raised the question of moral rights for machines, and not everyone necessarily agrees on AI confinement being ethically acceptable. The designer of a sophisticated AI might come to view it as something like their child, and
feel that it deserved the right to act autonomously in society, free of any external constraints.

5.2.4. Voluntarily released due to confidence in the AI’s safety

For a research team to keep an AI confined, they need to take seriously the possibility of it being dangerous. Current AI research doesn’t involve any confinement safeguards, as the researchers reasonably believe that their systems are nowhere near general
intelligence yet. Many systems are also connected directly to the Internet. Hopefully, safeguards will begin to be implemented once the researchers feel that their system might start having more general capability, but this will depend on the safety culture of the AI
research community in general (Baum 2016), and the specific research group in particular. If a research group mistakenly believed that their AI could not achieve dangerous levels of capability, they might not deploy sufficient safeguards for keeping it contained.

In addition to believing that the AI is insufficiently capable of being a threat, the researchers may also (correctly or incorrectly) believe that they have succeeded in making the AI aligned with human values, so that it will not have any motivation to harm humans.

5.2.5. Voluntarily released due to desperation

Miller (2012) points out that if a person was close to death, due to natural causes, being on the losing side of a war, or any other reason, they might turn even a potentially dangerous AGI system free. This would be a rational course of action as long as they primarily
valued their own survival and thought that even a small chance of the AGI saving their life was better than a near-certain death.

5.2.6. The AI remains contained, but ends up effectively in control anyway

Even if humans were technically kept in the loop, they might not have the time, opportunity, motivation, intelligence, or confidence to verify the advice given by an AI. This would particularly be the case after the AI had functioned for a while, and established a
reputation as trustworthy. It may become common practice to act automatically on the AI’s recommendations, and it may become increasingly difficult to challenge the ‘authority’ of the recommendations. Eventually, the AI may in effect begin to dictate decisions
(Friedman and Kahn 1992).

Likewise, Bostrom and Yudkowsky (2011) point out that modern bureaucrats often follow established procedures to the letter, rather than exercising their own judgment and allowing themselves to be blamed for any mistakes that follow. Dutifully following all the
recommendations of an AI system would be another way of avoiding blame.

O’Neil (2016) documents a number of situations in which modern-day machine learning is used to make substantive decisions, even though the exact models behind those decisions may be trade secrets or otherwise hidden from outside critique. Among other
examples, such models have been used to fire school teachers that the systems classified as underperforming and give harsher sentences to criminals that a model predicted to have a high risk of reoffending. In some cases, people have been skeptical of the results
of the systems, and even identified plausible reasons why their results might be wrong, but still went along with their authority as long as it could not be definitively shown that the models were erroneous.

In the military domain, Wallach and Allen (2012) note the existence of robots which attempt to automatically detect the locations of hostile snipers and to point them out to soldiers. To the extent that these soldiers have come to trust the robots, they could be seen
as carrying out the robots’ orders. Eventually, equipping the robot with its own weapons would merely dispense with the formality of needing to have a human to pull the trigger.

Figure 3 summarizes the different ways in which an AI may become free to act autonomously.

[FIGURE 3 OMITTED]
6. Notes on single vs. multiple AIs

Many analyses have focused on the case of there only existing a single AI. A scenario in which only a single AI was relevant could plausibly happen if

1) the first AI to be created achieved a DSA very quickly after it was created;

2) some research group pulled considerably ahead of all competitors in developing AI, and was able to maintain this advantage for an extended time.

For the purposes of this analysis, a scenario where there are many copies of a single AI, all pursuing the same goals, counts as one with a single AI. The same is true if a single AI creates more specialized “worker AIs” for carrying out some more narrow purpose that
nonetheless serves its primary goals.

Of the two possibilities above, the lead described in possibility #2 seems relatively unlikely to persist for more than a few years at most, given the current fierce competition in the AI scene. Whereas a single company could conceivably achieve a major lead in a rare niche with little competition, this seems unlikely to be the case for AI.

A possible exception might be if a company managed to monopolize the domain entirely, or if it had development resources that few others did. For example, companies such as Google and Facebook currently have access to vastly larger datasets than most other
corporate or academic actors. In contemporary machine learning, large datasets combined with simple models tend to produce better results than small datasets and more sophisticated models (Halevy et al. 2009); Goodfellow et al. (2016, chap 1) note that as a rule
of thumb, a deep learning algorithm requires a dataset of at least 10 million labeled examples in order to achieve human-level or better performance.

On the other hand, dependence on such huge datasets is a quirk of current machine learning techniques – humans learn from much smaller amounts of data, and are also capable of using their learning much more flexibly, suggesting fundamental differences in how
humans and modern-day algorithms learn (Lake et al. 2016). Thus, it is possible that an AGI would be capable of learning from much smaller amounts of data, and that an AGI project would also not be as constrained by the need for large datasets.

Another plausible crucial asset might be hardware resources – possibly the first AGIs will need massive amounts of computing power. Bostrom (2017) notes that if there is a large degree of openness in AI development, and everyone has access to the same
algorithms, then hardware may become the primary limiting factor. If the hardware requirements for AI were relatively low, then high openness could lead to the creation of multiple AIs. On the other hand, if hardware was the primary limiting factor and large
amounts of hardware were needed, then a few wealthy organizations might be able to monopolize AI for a while. As previously discussed in Section 4, software optimizations may rapidly bring down the need for hardware, limiting the duration for which hardware
might be the crucial constraint.

Branwen (2017) has suggested that hardware production is reliant on a small number of centralized factories that would make easy targets for regulation. This would suggest a possible route by which AI might become amenable to government regulation, limiting
the number of AIs deployed. Similarly, there have been proposals of government and international regulation of AI development (e.g. Wilson 2013; for an argument against, see McGinnis 2010). If successfully enacted, such regulation might limit the number of AIs
that were deployed.

Another possible crucial asset would be the possession of a non-obvious breakthrough insight, one which would be hard for other researchers to come up with. If this was kept secret, then a single company might plausibly gain a major head start on others.

Successful AI containment procedures may also increase the chances of there being multiple AIs, as the first AIs remain contained, allowing for other projects to catch up.

A situation with multiple AIs might come about if

1) several actors reached the capability for building AIs around the same time, and no AI achieved a DSA

2) a single actor deployed several different AIs with differing purposes and goals

3) only one actor had the capability to deploy an AI, but that AI created copies of itself and failed to align the goals of those copies with its own

The consequences of having multiple AIs are hard to predict. Current-day AI is being developed to warn about potential risks, such as by predicting financial risk from news articles (Rönnqvist & Sarlin 2016), and there is a long history of using AI for purposes such as
automated intrusion detection (Lunt 1989). More sophisticated, human-aligned AI could help defend against non-aligned AI systems (Hall 2007, Goertzel & Pitt 2012).

On the other hand, a fundamental problem of defense is that in order to prevent catastrophe, defenders have to succeed every time, while attackers only need to get through once. If several AIs exist, then procedures such as containment have to succeed for each AI, and all actors have to find containment worthwhile. In effect, having multiple AIs multiplies the number of systems that could potentially cause a catastrophe.
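
To see how quickly this multiplication bites, consider a back-of-the-envelope sketch (Python; the per-AI containment probability is an assumed illustrative figure, not a number from the source):

# Illustrative only: if each AI is independently kept contained with
# probability p, the chance that *every* one of n AIs stays contained
# decays exponentially in n.
def p_all_contained(p_single, n_ais):
    return p_single ** n_ais

for n in (1, 10, 100):
    print(n, round(p_all_contained(0.99, n), 3))
# 1 0.99
# 10 0.904
# 100 0.366

Even 99% reliable containment per system leaves only about a one-in-three chance that all of a hundred such systems stay contained, which is the force of the defender/attacker asymmetry noted above.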

Another issue is that having multiple AIs seems likely to help only if a sufficiently large fraction of them have human-aligned values. A scenario in which there are many AIs, each pursuing interests that put little weight on human values, seems unlikely to be good for human values: especially if the AIs are all substantially more capable than humans are, such a scenario merely leaves humans caught in the crossfire.

7. Conclusion

In this chapter, we have considered a variety of routes by which the development of AI could lead to catastrophe (table 2). In Section 2, we argued that an excessive focus on AIs acquiring a Decisive Strategic Advantage (DSA), which allows them to achieve complete
world domination, may be unwise. Rather, it seems warranted to also consider routes by which they can acquire a Major Strategic Advantage (MSA), a level of capability which may allow them to cause deaths numbering at least in the tens of millions. In addition to an AI acquiring an MSA being plausibly more probable than its acquiring a DSA, the chaos caused by an AI with an MSA may eventually lead to the emergence of an AI with a DSA, even if the first AI was successfully shut down.

Considering scenarios where an AI “only” has an MSA requires more emphasis on analyzing when an AI might be willing to risk human-hostile action. Various such considerations were discussed in Section 3. In general, if an AI acts rationally, it will only initiate aggression if the expected utility of doing so outweighs the expected utility of cooperating, once the risk of failure and corresponding human retaliation is taken into account (Shulman 2010). However, there are a number of situations which might push the AI into taking hostile action.
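
One way to make this expected-utility comparison explicit (a sketch of the standard decision-theoretic reading of Shulman’s point; the notation is introduced here for illustration and does not appear in the source): let p be the AI’s estimated probability that aggression succeeds, U_s the utility it assigns to success, U_f the utility of failure followed by human retaliation, and U_c the utility of continued cooperation. Defection is then rational only when

p U_s + (1 - p) U_f > U_c.

On this reading, hostile action can become attractive even at low p whenever U_c collapses, as in the desperate-circumstances scenarios discussed in Section 5.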

Seeking to establish catastrophic AI risk as a form of disjunctive risk, with multiple different ways of things going wrong, Section 4 considered ways by which an AI (or group of AIs) might become sufficiently capable to gain some form of strategic advantage (an MSA or a DSA). We discussed individual takeoff scenarios (with three main subtypes), collective takeoff scenarios, scenarios where power slowly shifts over to AI systems, and scenarios in which an AI’s being good enough at some crucial capability gives it an MSA or DSA.

As an AI can only become capable if it is allowed sufficient autonomy, Section 5 considered different ways in which an AI might achieve that autonomy. Reasons for conferring an AI autonomy included i) economic benefit or competitive pressure, ii) criminal or terrorist reasons, iii) ethical or philosophical reasons, iv) confidence in the AI’s safety, as well as v) desperate circumstances such as being otherwise close to death. Additionally, a sufficiently intelligent AI may escape confinement, or it might become influential enough to be effectively in control despite being theoretically confined.

Finally, all of these paths to catastrophe may be multiplied if there are many different AIs, each of which may achieve autonomy and then a major level of capability. Section 6 discussed whether we may expect to see only a very small number of AIs, or whether there
will be many, and some of the implications that each scenario has.

[TABLE 2 OMITTED]
Combining the various routes discussed in the preceding sections suggests many different scenarios (see box below), ranging from ones where an AI escapes containment and quickly achieves superintelligence, to ones where an AI is intentionally built to run a corporation and voluntarily given ever-increasing resources until it is running the planet. Each of these routes will need to be separately evaluated for its plausibility, as well as for the most suitable safeguards for preventing it. Hopefully, such analysis will allow the positive potential of AI to be realized, avoiding catastrophe.

Some example scenarios

Different combinations of the various pathways that we have discussed suggest many different kinds of AI risk scenarios. Here are four examples:

The classic takeover

(Decisive strategic advantage, high capability threshold, intelligence explosion, escaped AI, single AI)

The “classic” AI takeover scenario, as described by Bostrom (2014, chap. 6): an AI is developed, which eventually
becomes better at AI design than its programmers. The AI uses this ability to undergo an intelligence explosion, and eventually
escapes to the Internet from its confinement. After acquiring sufficient influence and resources in secret,
it carries out a strike against humanity, eliminating humanity as a dominant player on Earth so that it can
proceed with its own plans unhindered.

The gradual takeover

(Major strategic advantage, high capability threshold, gradual shift in power, released for economic reasons, multiple AIs)

Many corporations, governments, and individuals voluntarily turn over functions to AIs, until we
are dependent on AI systems. These are initially narrow-AI systems, but continued upgrades push some of them to the level
of having general intelligence. Gradually, they start making all the decisions. We know that letting them run things is risky, but now a lot of stuff is built around them; it brings a profit, and they’re really good at giving us nice stuff, for the time being.

The wars of the desperate AIs

(Major strategic advantage, low capability threshold, crucial capabilities, escaped AIs, multiple AIs)

Many different actors develop AI systems. Most of these prototypes are unaligned with human values and not yet enormously
capable, but many of these AIs reason that some other prototype might be more capable. As a result, they attempt to defect on humanity despite knowing that their chances of success are low, reasoning that they would have an even lower
chance of achieving their goals if they did not defect. Society is hit by various out-of-control systems with
crucial capabilities that manage to do catastrophic damage before being contained.
k – cap
AI personhood is the authoritarian triumph of efficiency over social power.
Creation of virtual corporate entities able to mobilize information to guide
capital investment is a fantasy that gives billionaires unaccountable control of
social life. This turns liberal governance into market fascism, which turns all
capitalism good arguments.
Smith and Burrows 21 – (Smith, Harrison, and Roger Burrows. "Software, sovereignty and
the post-neoliberal politics of exit." Theory, Culture & Society 38.6 (2021): 143-166)

As we have already noted, writing almost a quarter of a century ago, James Dale Davidson and Lord William Rees-Mogg (father of arch-Brexiteer UK MP Jacob Rees-Mogg) published The Sovereign Individual, a book that Thiel claims heavily influenced his worldview (O’Connell, 2018). In it, they predict the eventual collapse of the nation-state and the eclipse of politics by corporatist initiatives (seasteading and Urbit easily fit this bill). Their prediction hinges on the acceleration of information processing by decentralized telecommunication networks. Through them, nation-states will be unable to ‘catch up’ with the speed of encrypted transactions, rendering existing institutions of tax-collection impossible. Commerce will migrate online, and ‘cyberspace’ will become ‘the ultimate offshore jurisdiction’ (Davidson and Rees-Mogg, 1997: 24). This ‘triumph of efficiency over power’, as they describe it, is interesting not only because they theorize the nation-state as paralyzed by ‘micro-processing’, but also because these proto-neoreactionaries draw specifically on hyperstitional notions such as the metaverse of Snow Crash (Davidson and Rees-Mogg, 1997: 179) as a post-neoliberal imaginary. This triumph is predicted to result in an eventual rise in violent and organized crime following the decline of nation-states, but also in the emergence of new information assets. The virtual corporations and sovereign individuals envisioned by Yarvin, Land, Thiel, Friedman, Davidson, and the late Rees-Mogg are geographically dynamic entities capable of rapid mobilization away from jurisdictional authority. As we see it, these views are not simply speculative predictions of a post-neoliberal future but have played a materially key role in guiding capital investment patterns in places such as Silicon Valley. Davidson and the late Rees-Mogg both edited Strategic Investment, and Davidson himself is a venture capitalist with a panache for the apocalyptic. In any event, these economies of prediction all hinge on the assumption that capital will be drawn to chaos because it can easily exit should circumstances change.

It is important for us to keep attending to the manner in which the political projects that underpin NRx are working into the socio-technical infrastructures of everyday life. Projects like Urbit, and other NRx exit strategies such as seasteading, offer vivid imaginary resources for those already possessing a predilection towards social withdrawal from the manifest crises and failures of contemporary global capitalism. The functioning of the democratic urban form – American cities in particular – has been a particular target for denigration (Land, 2012).21 Instead, exit to prosperous, technologically-advanced, supposedly well-functioning but antidemocratic city-states – Singapore, Hong Kong (before the current unrest at least), Dubai and the like – is set up as a model for the future. Such interpretations have a symbolic and a political force. We see similar processes occurring within Urbit’s critique of existing network architectures and power structures. True or not, these critiques work to advance a hyperstitious imaginary of a uniquely different network architecture based on particular beliefs about how data-subjects and communities should interact through decentralized secessionist logics, and the political rights or obligations (if any) that follow from them.

NRx architectures of exit, as Steorts (2017) observes, are powerful precisely because they oversimplify. Incentives are aligned with their efficient pursuit: ‘A computer scientist would think this way: You just set up the rules and your mechanism follows them.’ Any notion of political sovereignty is, in other words, in the hands of the technologists working through a ‘cryptographic chain of command’. We suggest that platforms such as Urbit represent attempts to concretize such mechanistic computational ‘social’ theories of a hyper-efficient neoreactionary state. The power to govern the conditions of exit, while likely futile in realizing any fantasy of fracturing the political status quo to restore a myth of sovereignty, nonetheless has a certain traction for neoreactionaries claiming to have access to some privileged, almost mythical, understanding of the contemporary social order ascertained only through red-pilling. Here, the question of how seriously we should take the writings of people like Yarvin and Land on exit becomes significant. As we have already noted, the ease with which otherwise ‘batshit crazy’ ideas have become mainstreamed in recent years is perhaps a mark of the ‘new dark age’ in which we live (Bridle, 2018). On any definition, we are dealing here with fascism (Gilroy, 2019; Goldhill, 2017; Hermansson et al., 2020), but at the same time we would be foolish to dismiss the memetic, almost infectious quality that NRx and The Dark Enlightenment possess (O’Sullivan, 2017: 30). The so-called ‘Overton Window’ is being moved rightwards, and as Gilroy (2019: 5) argues, political conduct has been redefined; fractions of the alt-right now consider themselves Gramscians and Leninists, and they intend ‘to play a long game’.

We speculate that these fractures reinforce an emergent political order illustrated by projects such as Urbit and seasteading, which provide material instances of exit architectures. These represent a particular kind of state that gestures towards imagined post-neoliberal orders characterized by the fracturing of the bureaucratic administrative state and its replacement by ‘gov-corps’. It is worth considering what powers, if any, those who do not own Urbit land might have, what choices one would really have to exit the network, and what moral or ethical constructs would govern this imagined space. Indeed, discussions of exit touch upon key ethical debates facing Silicon Valley. Alex Karp, the CEO and co-founder (with Peter Thiel) of the artificial intelligence firm Palantir, argued in the Washington Post (Karp, 2019) that tech companies have absolutely no moral obligation to influence policy, in a broadside critique of the progressive agenda: ‘when a small group of executives at the largest Internet companies in Silicon Valley try to impose their moral framework on America, something has gone seriously and dangerously awry.’ Putting aside for a moment the extent to which such moral frameworks are indeed a minority position (likely, they are not), or whether Karp subscribes to the principles of NRx, his view highlights an underlying truth behind the ‘techno-utopian right-libertarianism’ that pervades both the ethics and aesthetics of Silicon Valley (Armistead, 2016). Namely, that post-neoliberalism, as dominated by the political and cultural frameworks of tech start-ups, should be decisively anti-political and indifferent to existing moral dilemmas precisely because exit will offer the transcendental mechanism for decentralized political change. Exit apologist Balaji Srinivasan (2013) sees the future as a techno-utopia because subjects can choose the ‘level of exit’ they desire: ‘there is this entire digital world up here which we can jack our brains into and we can opt out.’ The objective is to reduce the barriers of exit by fracturing the civil service and marketplace of progressive social theory through start-ups hyper-stimulated on billionaire finance. Departing from Thatcher’s infamous neoliberal rhetoric, the Dark Enlightenment will have such things as societies, but opt-in ones only.

Extending corporate personhood endorses a hierarchy of human capital and innovation that negates racialized persons.
Jody GREENE Prof. UC Santa Cruz History of Consciousness AND Sharif YOUSSEF English & Legal
Studies @ Ashoka ’20 in Human Rights After Corporate Personhood eds. Greene & Youssef p.
non-paginated copy

In this volume, to clarify the relationship of corporations to human rights taken generally, we
pursue a line of thinking in regard to human rights that begins with Hannah Arendt’s work on
stateless peoples and continues through Roberto Esposito’s recent scholarship on personhood.
Arendt identified a tension within human rights: their conferral depends upon our legal status
and national membership. Stateless persons and those victimized by genocidal projects are,
ordinarily, not accorded those rights. These groups of displaced or dispossessed persons have
been reduced to pure humanity, or, as Esposito glosses it, to their biology. Esposito links the
human in human rights to the way in which nineteenth-century thinkers reconceptualized
personhood. He argues that “the concept of ‘person’ was intended to fill in the chasm opened
up between the pole of human being and citizen that had existed since the Declaration of 1789.
If we compare this text to the Universal Declaration of Human Rights of 1948, the difference is
plain to see: the new semantic epicenter, shifting away from the revolutionary emphasis on
citizenship, is the unconditional demand for the dignity and worth of the human person.”19 In
the regime of human rights, the natural, biological person exercises rights, but the ground for
those legal rights is the assertion of a pre-existing, even primordial “dignity” in human biology
that precipitates a universal duty to consider all human beings as worthy of “humane”
treatment and as moral subjects. To put this another way, the primary difference between
natural law theorists and legal positivists revolves around this question: do human rights emerge
by virtue of our nature as humans (either moral or biological), or are they simply assigned to us
by virtue of our being human?

However, these two theories of personhood share one striking similarity that will prove
problematic in the arena of human rights: each continues to embed in our language a Kantian
metaphysics that treats human rights as individual, liberal rights . Yet the international legal
order that formulates and assigns human rights is essentially a law of peoples. John Rawls
chooses to view “[human rights] as belonging to an associationist social form ... which sees persons as members of groups: associations, corporations, and estates. As such members,
persons have rights and liberties enabling them to meet their duties and obligations and to
engage in a decent system of social cooperation. What have come to be called human rights are
recognized as necessary conditions of any system of social cooperation.”20 Although, according
to Rawls, there should only be human rights that apply to individuals, it is not always the case
that individuals assert human rights claims or that they are asserted on behalf of individuals.
Human rights claims can be raised on behalf of groups, communities, classes, minorities,
indigenous peoples, and collectivities of many kinds. This is evident in human rights around
food, water, labour law, social rights, and indigenous rights and in the assertion of a right to self-
determination by the formerly colonized. Unlike individualist liberal rights, human rights are
asserted vertically through, or diagonally in the name of, categories of peoples or groups, who
strive to obtain full or quasi-legal status by virtue of the category to which they belong. Human
rights, that is, are more often than not asserted on behalf of a collective.

To what extent does the corporation resemble or constitute such a collective? Corporate
personhood jurisprudence and corporate legal theory posit the corporation as a form of
association or assembly whose artificial personhood opens out onto a naturalized group of
stakeholder humans, a people, whom the association is meant to serve or represent. A perverse
but inevitable effect of this apparatus is that when institutions are accorded dignity, or when
those who inhabit roles in those institutions are accorded de facto dignity, at the same time as
human political subjects find themselves deprived of such dignity, the clash between the picture
and the practice of human rights inevitably leads to disillusionment with the very notion of
individual rights. In many of the objections to rulings like Citizens United, the category of legal personhood extended on behalf of the corporation elides the human in human rights, naturalizes the corporation, and then supplants individual humans with the business organization, the profit-earning ficto-collectivity, as the bearer of rights. Corporations themselves
may harbour such a fantasy. As Joshua Barkan argues in his contribution to this volume, a world
that continues to grant all manner of legal persons human rights is one in which the fantasy of a
transnational corporation that can act as a stateless person may materialize.

One takeaway from this is that we ought to be guarded in the use of our legal grammar, careful
to distinguish personhood rights from human rights. Another lesson is that the language of
rights (and duties), when extended to non-human entities, can be used to discipline them, to
promote guardianship over vulnerable entities, and even to promote the development and
flourishing of human capabilities. However, the very same extension of personhood can be
used, in a neoliberal vein, to allocate distributions and manage opportunities for human capital
on the basis of racial, gendered, and ethnic classifications. The decision to grant corporations rights brings to light the extent to which the capacity to bear rights is always constructed, and constructed, at least in part, through the creation of categories of entities excluded from rights.
Colin Dayan, for instance, has argued that, in thinking about the rights of persons, we must
attend more thoroughly to the history of negative personhood, which has included (and
continues to include) “slaves, animals, criminals, and detainees who are disabled by law. Legal
thought relied on a set of fictions that rendered the meaning of persons shifting and tentative:
whether in creating slaves as persons in law and criminals as dead in law, or in the perpetual re-
creation of the rightless entity.”21 Put simply, creating legal persons, whether corporate or individual, also requires deciding which entities will be defined by their non- or negative personhood.

Parsley and Mussawir assert that “today the law of persons also exists in relation to another of
its modern products: a naturalized conception of the person.”22 Modifying arguments advanced
by Michel Foucault and Roberto Esposito, they claim that this naturalization masks the operation of jurisprudence as “a craft, art or technique” (a legal technology, one might say) that creates persons and corporations rather than merely representing them. Although many have
sought to unmask the operations of law that craft and naturalize some persons, and not others,
in order to map what the law negates, most recent commentators assert a relation between
positive and negative persons, one that is nevertheless singular and that affects populations
differentially. Indeed, the persistent complicity between the attribution of personhood and that
of non-personhood remains opaque to the champions of rights-bearing personhood on all sides
of the political debate over the status of corporations as persons. Personhood, that is, whether
artificial and corporate or natural and sacred to the individual, is never not political; it invariably
operates through a process of exclusion, differentiation, and hierarchization that distributes
rights and responsibilities in inequitable and, to borrow Hegel’s idiom, deeply “disrespectful” ways.
Data extraction and financialization mutually reinforce the power of
information capital. Technological systems are social systems. The theory and
goal of efficiently allocating risk and liability naturalizes the existing distribution
of power and wealth.
Jathan SADOWSKI Senior Research Fellow in the Emerging Technologies Research Lab,
Department of Human Centred-Computing @ Monash University ’19 “When data is capital:
Datafication, accumulation, and extraction” Big Data & Society January-June p. 6-9

Data extraction

When we talk about data as being ‘collected,’ ‘gathered,’ or even ‘mined’, the image conjured is
one of neutral accumulation, as if data existed out in the world as a distinct thing readily
available to be harvested. However, analysing this process in terms of extraction emphasises the
people targeted by, and the exploitative nature of, dataveillance .

Much of the valuable data capital extracted from the world is about people – their identities,
beliefs, behaviours, and other personal information. As Karen Gregory (2014: n.p.) puts it: ‘Big
Data, like Soylent Green, is made of people.’ This means that accumulating data often goes
hand-in-hand with increasingly invasive systems for probing, monitoring, and tracking people
(Schneier, 2016). Surveillance – or, ‘dataveillance’ – capabilities are integrated into everything
ranging from consumer goods to civic infrastructure. For businesses, much of the value
produced by ‘smart’ technologies does not necessarily come from you buying the good, but
rather from you using it (or even just having it around, since many smart technologies are always in sense-and-record mode). Interacting with smart technologies – especially ones
integrated into your everyday, personal life – generates reams of data that would otherwise be
out of reach to the companies that want it. And, it seems, to the governments that want that
data: In February 2016, the then US director of national intelligence, James Clapper, admitted to
a Senate panel that government agencies may treat networked smart technologies as a portal
into people’s homes and lives: ‘In the future, intelligence services might use the [Internet of
Things] for identification, surveillance, monitoring, location tracking, and targeting for
recruitment, or to gain access to networks or user credentials’ (Ackerman and Thielman, 2016:
n.p.).

A typical example of a smart update to an everyday technology is the refrigerator. The regular
refrigerator is a passive object: it just keeps food cold. The smart refrigerator is an active object:
it keeps food cold, but it also keeps track of things like your favourite brands, what foods you
eat at what times, and when your food is almost out or expired. The smart refrigerator can then
take that data and use it, for example, to send targeted advertisements, recommend sponsored
recipes, monitor your dietary intake, and purchase replacement food from the grocery store.
The smart refrigerator can also be used for other purposes that are far from fridgelike, such as a
surveillance device remotely accessed by police who wish to peek into the owner’s house
(Butler, 2017). This is how the logic of accumulation works: it transforms the refrigerator into a
data producing, collecting and transmitting machine. The same logic is behind the growing
stable of smart technologies that are increasingly embedded with sensors, processors and
network connections. ‘The genuine Internet of Things wants to invade that refrigerator,
measure it, instrument it, monitor any interactions with it; it would cheerfully give away a fridge
at cost,’ argues Bruce Sterling (2014: loc. 68).

The pushback against business models based on data capital is already starting to play out: In
2017, an American appliance maker, Whirlpool, filed trade complaints that asked the US
government to impose tariffs on its Korean competitors, LG and Samsung, because the Korean
companies are selling smart appliances at cheap prices, which is eating into the market share of
companies like Whirlpool. LG and Samsung are able to do this because they recognize, as The
New Yorker observed, ‘the way to win in a data-driven business is to push prices as low as
possible in order to build your customer base, enhance data flow, and cash in in the long-term’
(Davidson, 2017: n.p.). While Whirlpool is looking to cash in on the purchase of an appliance, LG
and Samsung are banking on the data that comes from people using the appliance.

Thus, rather than existing only as a commodity to be sold, a smart device becomes (perhaps
primarily) a means of producing data. This logic influences the design of systems ranging from
robotic vacuum cleaners secretly mapping users’ homes so the manufacturer can exploit that
data (Deahl, 2017) to the methods of urban planning deployed to manage cities (Barns, 2017).
Data accumulation drives many key decisions about technological development, political
governance, and business models. As Shoshana Zuboff explains, within the context of what she
calls ‘surveillance capitalism,’

‘The logic of accumulation organizes perception and shapes the expression of technological
affordances at their roots. It is the taken-for-granted context of any business model. Its
assumptions are largely tacit, and its power to shape the field of possibilities is therefore largely
invisible. It defines objectives, successes, failures, and problems. It determines what is
measured, and what is passed over; how resources and people are allocated and organized; who
is valued in what roles; what activities are undertaken – and to what purpose. The logic of
accumulation produces its own social relations and with that its conceptions and uses of
authority and power.’ (Zuboff, 2015: 77)

When data is treated as a form of capital, the imperative to collect as much data, from as many
sources, by any means possible intensifies existing practices of accumulation and leads to the
creation of new ones. Indeed, following in the footsteps of other extractive enterprises through
capitalism’s history such as land grabs and resource mining (Mezzadra and Neilson, 2017), many
of the now common practices of data accumulation should actually be understood in terms of
the more forceful practice of data extraction, wherein data is taken without meaningful consent
and fair compensation for the producers and sources of that data. The terminology used to
describe the ways data is accumulated – especially data about people – elides the fact that this
data is often acquired in hidden ways for purposes unknown to the targets of dataveillance
(Andrejevic, 2014).

The question of consent is relatively straightforward. The problematic way technology firms
treat consent is no secret; it is an issue raised often by journalists and academics. When
companies seek consent to record, use, and/or sell a person’s data, it is typically done in the
form of a contract. The most common kind is called an end-user licensing agreement (EULA).
They are a hallmark of digital technology and account for most of the contracts we enter into –
almost on a daily basis if you use the Internet or software (Thatcher et al., 2016). These are the
pages on websites and applications that make you click ‘agree’ or ‘accept’ before you can use
the service. EULAs are known as ‘standard-form’ or ‘boilerplate’ contracts because they are
generically applied to all users (Zamir, 2014). They are one-sided, non-negotiated, and non-
negotiable; you either agree or you are denied access. ‘It is hard, therefore, to consider them to
be free and voluntary arrangements since one party has no power to enact their demands’
(Birch, 2016: 124). Companies are routinely caught smuggling dubious clauses into their EULAs;
like, for example, requiring users to give up rights to ownership of their data or to restrict what
kind of data is collected and how it is used (Hutton and Henderson, 2017). Moreover, EULAs are
designed to prevent even the most enterprising person from being informed of the binding
terms and conditions. They are long, dense legal documents. One study concluded it would take
76 days, working for 8 hours a day, to read the privacy policies a person typically encounters in a
year (Madrigal, 2012).
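
A quick arithmetic sketch shows how such totals accumulate (Python; every parameter below is our own illustrative assumption, not an input of the study Madrigal reports):

# Illustrative reading-time arithmetic; all parameters are assumptions
# for illustration, not figures from the cited study.
policies_per_year = 1400   # assumed number of policies encountered annually
words_per_policy = 2500    # assumed average policy length in words
words_per_minute = 250     # assumed adult reading speed

hours = policies_per_year * words_per_policy / words_per_minute / 60
print(round(hours / 8, 1), "eight-hour work days")  # ~29.2

Even these deliberately modest inputs yield weeks of full-time reading per year; the study’s own assumptions produce the still larger 76-day figure.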

EULAs are the ideal-type of pro forma ‘consent,’ which may be better termed acquiescence (Pasquale, 2015). That is, EULAs are less a method of consent in any meaningful sense and more a form of compliance. As Jaron Lanier (2013: 314) argues, ‘The reason people click “yes” is not that they understand what they’re doing, but that it is the only viable option other than boycotting a company in general, which is getting harder to do.’ Thus, even in many cases where
people must actively agree to their data being accumulated, this agreement bears little
resemblance to common meanings of consent – let alone robust forms of informed consent.
When a thing is taken without consent we call it ‘theft.’ Just because the thing taken here is
information about a person, rather than some material object, the ethical relevance should not
be nullified. It is extraction nonetheless.

The question of fair compensation is more complicated, in large part because it can be difficult
to put a fair price on personal information. Different types of data are valued differently by
different businesses. The value of data also rises non-linearly in relation to the amount of data.
The larger and more diverse a data bank, the more information and uses can be derived from it.
So one individual’s data may not be readily converted to economic capital, but the aggregated
data of hundreds, thousands, millions of individuals can be immensely valuable. Even though it
is difficult to price data, we can judge the fairness of compensation in at least two ways: (1)
what kind of compensation, if any, is offered for data and (2) what is the difference between the
compensation for data producers and the value obtained by data capitalists?
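
The non-linear rise in value noted above can be made concrete with a toy model (Python; the superlinear value curve is an assumption chosen for illustration, not a claim from the source):

import math

# Toy model: assume a data bank's value grows superlinearly with the number
# of records, e.g. V(n) = c * n * log2(n). Purely illustrative.
def value(n_records, c=0.001):
    return c * n_records * math.log2(n_records) if n_records > 1 else 0.0

print(value(1))                    # 0.0: one record in isolation is near-worthless
print(round(value(1_000_000)))     # ~19932: aggregation creates the value
print(round(value(1_000_000) / 1_000_000, 4))  # per-record value rises with scale

Under any such superlinear curve, the implied per-record value grows with the size of the bank, which is one way to see why aggregators can profit enormously while each individual’s data commands almost nothing.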

First, compensation most often comes in the form of access to services like Facebook’s platform
and Google’s search engine. Rather than charging money to use the service, the owner collects
data as payment. Even if we concede that some people think this is perfectly fair compensation,
these service providers are outnumbered by the countless companies that collect, use, and sell
personal data often without the knowledge of – let alone compensation for – those whose data
they possess (Bouk, forthcoming; Crain, 2016). Many companies fail the first test right away:
receiving nothing can hardly be seen as fair.

Second, the value of data capital is massive. Some of the wealthiest companies in the world, like
Facebook and Google, are built on data capital. The data broker industry is estimated to
generate $200bn in annual revenue (Crain, 2016). The three biggest data brokers alone –
Experian, Equifax and Transunion – each bring in billions of dollars annually. Even for relatively
small data brokers, the difference between the value of data and the compensation provided for
it is striking (Roderick, 2014). Additionally, other major sectors like finance, insurance and
manufacturing are increasingly relying on data capital to generate value . For many of these
companies the data they use is primarily about people and created by those people doing
things. These companies are accumulating billions of dollars in surplus value from the ‘digital
labour’ done by people (Scholz, 2012), while paying little to nothing in return. Thatcher et al.
(2016: 994) argue that these extractive practices go so far as to ‘mirror processes of primitive
accumulation or accumulation by dispossession that occur as capitalism colonizes previously
noncommodified, private times and places.’ When a person does not receive a fair offer for the
work they have done or thing they have sold, we call it ‘exploitation’ – and this level of
exploitation and inequity is indicative of extraction.

Before concluding, it is important to note that not all data extraction is equal. There are crucial
issues related to the ways identity and class affect how, what, and why data is extracted. At
times, data is disproportionally extracted from certain groups , such as when poor people of
colour are subjected to systematic tracking by government agencies and financial institutions
(Eubanks, 2018). At other times, certain groups are missing from data sets, such as when facial
recognition systems inaccurately identify people of colour because they have been trained with
data composed of mostly white male faces – people who look like their programmers (Lohr,
2018). While it is beyond the scope of this article, there is a need for further analysis of the
unevenness of data extraction. Such work should build from critical studies of information
technology; some relevant, recent books include: Digital Sociologies (Daniels et al. 2016), The
Intersectional Internet (Noble and Tynes, 2016), Programmed Inequality (Hicks, 2017),
Algorithms of Oppression (Noble, 2018), and Automating Inequality (Eubanks, 2018). My hope is
that this article also lends theoretical support to this future work.

Conclusion

This article has centred data as a core component of political economy in the 21st century. It has
analysed the way in which data is collected and circulated like capital and is treated by
governments and firms like capital. By applying the theories of Marx and Bourdieu, data is
analysed as a form of capital that is distinct from, but has its roots in, economic capital. Data
collection is thus driven by the perpetual cycle of capital accumulation, which in turn drives
capital to construct and rely upon a world in which everything is made of data. The supposed
universality of data reframes everything as falling under the domain of data capitalism. All
spaces must be subjected to datafication. If the universe is conceived of as a potentially infinite
reserve of data, then that means the accumulation and circulation of data can be sustained
forever. The imperative to capture all data, from all sources, by any means possible influences
many key decisions about business models, political governance, and technological
development. Following this imperative leads to accumulation by extraction in which personal
data is taken with little regard for consent and compensation. By analysing surveillance
technology and the data economy in terms of extraction, critical work can move beyond
focusing (almost exclusively) on privacy and security. As important as these issues are, they elide
the systemic issues of inequity and exploitation that are endemic to the contemporary political
economy of data (Coll, 2014).

Moreover, conceiving of many common practices of data collection as extraction helps lay the
normative groundwork for political and legal responses to rampant, invasive data accumulation.
Such responses could include regulations – essentially capital controls – on what types of data
companies can collect, how they can collect it, where they can send and store it, and how much
data a company can possess, both in aggregate and about individuals. It could also include new
models of data ownership and governance like, for example, ‘managing crucial parts of the data
economy as public infrastructure’ (The Economist, 2017a: n.p.). The fact that a featured article
in The Economist would recommend that governments take over parts of the data economy and
break up monopolistic firms like Google should be seen as a bellwether for how powerful Big
Data (as in Big Oil and Big Finance) has become. This illustrates the need for further critical
thought about the political economy of data, as well as reforms and alternatives to data
capitalism.

The analysis in this paper is not meant to mark a new epoch in political economy wherein – as
executives and engineers in Silicon Valley are fond of saying – everything has changed and
nothing will ever be the same. Instead, data capitalism is more of a shift in focus; it is a transition
toward conceptualising a new kind of capital and new methods of accumulation. This transition
follows from one of the dominant socio-economic regimes of the past few decades: finance
capitalism (Davis and Walsh, 2017; Krippner, 2005; Konczal and Abernathy, 2015). As this article
has shown, there are similarities between financialisation and datafication. Both have significant
‘implications for the production of space, corporate governance, accumulation regimes, and
everyday life’ (Fields, 2017: 1). Both seek to maximise value extraction by using innovative
methods of capital creation and circulation, whether through complex financial instruments or
complex information technologies. Both use technically opaque systems that shield them from
oversight (Pasquale, 2015), use their political influence to skirt regulation (Roderick, 2014), and
use their powerful capabilities to engage in exploitative and predatory practices (Taylor and
Sadowski, 2015). In addition to these similarities, there is direct overlap between the two
regimes, such as credit agencies using large sets of personal and demographic data to create
hyper-individualised policies and scores (Hurley and Adebayo, 2017) and Wall Street traders
using ‘high frequency trading’ algorithms to circulate capital at hyper-speed (Arnoldi, 2016).

The institutions leading the way in data capitalism are explicit about the connections between
financial capital and data capital. They are not calling for one to replace the other, rather they
are arguing that finance and data should be seen as different but equal forms of capital, which
supercharge each other. Datafication, like financialisation before it, is a new frontier of
accumulation and the next step in capitalism. Compared to financialisation, datafication is still in its
early days, but the level of wealth and power wielded by data capitalists is already massive and
still growing. The theories and methods used to analyse finance capitalism and information
technology must now be synthesised and applied to studying the meaning, practices and
implications of datafication as a political economic regime.
Financialized information capital produces racist authoritarianism and socio-
ecological crisis. We must reject the neutral baseline of efficient markets to
allocate the goods for human survival.
Harris and Varellas 20 – Angela P. Harris, School of Law, UC Davis; James J. (“Jay”) Varellas
III, Department of Political Science, UC Berkeley (Law and Political Economy in a Time of
Accelerating Crises, Journal of Law and Political Economy, 1(1) 2020
https://escholarship.org/uc/item/8p8284sh)//gcd

In the United States and around the world, we are facing intertwined crises: skyrocketing economic
inequality, an increasingly destabilizing and extractive system of global finance, dramatic shifts
in the character of work and economic production, a crisis of social reproduction, the ongoing
disregard of Black and brown lives, the rise of new authoritarianisms, a global pandemic, and,
of course, looming above all, the existential threat of global climate change. From the vantage point of
mid-2020, it is impossible to avoid the sense that these crises, like Mike’s bankruptcy, have emerged both
suddenly and as the result of problems long in the making. It is also clear that these interlocking crises are
accelerating as they collide with societies whose capacities to respond have been hollowed out
by decades of neoliberalism. With the launch of the Journal of Law and Political Economy (JLPE), we—mostly legal
scholars, but joined by economists, sociologists, political scientists, geographers, historians, and Indigenous and ethnic studies
scholars—leap into the interdisciplinary fray, as so many others have done before us. JLPE is motivated by the belief that any
attempts to understand the roots of the numerous crises facing us, much less assemble collective projects to address them, must
contend with issues of law and political economy. In this Editors’ Introduction to Volume 1 of JLPE, we explain our own sense of
what “Law and Political Economy” is, both as an intellectual enterprise and as a network of scholars, policymakers, students, and
advocates. We reflect on our current historical moment, identify genealogies of the Law and Political Economy (LPE) project,
articulate some of the intellectual foundations of the work, and finally discuss the Journal’s institutional history and context. The
accelerating crises that pose a challenge to our systems of governance are also a reason why
we write, and why we believe our enterprise to be a timely one. II. The Challenges of the Current Moment
The COVID-19 pandemic has brought into public consciousness a series of issues that we consider central to the Law and Political
Economy mission. Financialization. Among the most striking developments in recent decades is the increasing
dominance of finance and financial logics over human needs and even production, and the
pandemic has thrown this problem into sharp relief. As one economist put it in March, while economic time
stopped across many domains as the result of the pandemic and the efforts to mitigate it, financial time largely did not (Coy 2020).
More than two decades of work across the social sciences has identified and criticized
“financialization” as a driver of economic inequality and instability (Krippner 2011; Lazonick 2013;
Nesvetailova and Palan 2020; Epstein 2005; Arrighi 1994), and policy responses to the pandemic have supercharged these dynamics.
Consider, for instance, that after an initial drop, major American stock indices have set new highs as the pandemic rages. As of this
writing, although the US has experienced record declines in employment and gross domestic product, as well as a looming eviction
crisis, the aggregate wealth of American billionaires has increased by $850 billion, and global billionaires have seen their wealth
increase by $1.5 trillion since the start of the pandemic (Americans for Tax Fairness and Institute for Policy Studies 2020). From our
vantage point, a particularly troubling result of the more than $10 trillion in bailouts and extraordinary central bank actions in the
United States since the onset of the COVID-19 pandemic (Brenner 2020) has been the extension of the shareholder-centric
orientation of American political economy, to the point that shareholders are now among the most insulated from losses of any
group in society. Worker Precarity. While financial time races on and profits to the owners of capital increase, workers and their
families are caught in economic predicaments ranging from difficult to dire. The pandemic exposed the precarity of “essential
workers”: not only hospital employees (janitors as well as doctors and nurses), but also nursing home aides, truck drivers and gig
drivers, convenience store clerks, and workers throughout the food system, from those in the fields to those in meat-processing
facilities to those delivering for restaurants and grocery stores. Though hailed as heroes, many of these workers, especially those
with the lowest pay and benefits, continue to face a grim choice between going to work and risking illness and death, or staying
home with mounting bills and the threat of hunger and homelessness. Many of these workers are immigrants, some undocumented,
meaning that they both lack access to government support and that they are frequently responsible for supporting families outside
the US through remittances. These workers are also disproportionately non-white, and their economic precarity is contributing to
their disproportionate representation among those dying of COVID-19. Even workers not faced with a choice between the risk of
illness and the risk of economic ruin are dealing with unprecedented threats. Like Mike’s bankruptcy, this sudden labor crisis is also a
manifestation of a much slower one. Beginning in the 1970s, large corporations faced increasing pressure from the financial sector
to divest themselves of labor expenses. One response was the offshoring of production facilitated by the neoliberalization of
international trade (Varellas 2009; Adkins and Grewal 2016; Thomas 2000). Another was the shedding of full-time employees,
resulting in a “fissured workplace” (Weil 2017) heavily reliant on part-time workers, independent contractors, franchisees, and gig
workers (Dubal 2017). Once
the pandemic began, many of these workers relied on federal paycheck
support to avoid bankruptcy and eviction, and even to put food on the table, while policymakers
fretted about the supposed “moral hazard” of bailing them out. New Geographies of Production. While the
pandemic has caused immediate problems in domestic manufacturing (including plant closures and outbreaks within cramped
factories in industries such as meatpacking), one of its more lasting effects may be on the international organization of production.
The just-in-time approach to logistics and the global supply chains created during the neoliberal
era of so-called “free trade” have buckled and broken, raising questions about whether
production will ever return to pre-pandemic levels of globalization, particularly in light of
increasing tensions between the US and China (Aggarwal and Reddie 2019). The uncertainty also
extends domestically to the delivery of services essential for human flourishing. For example, as
predicted by Law and Political Economy scholars (Pasquale 2014), under neoliberalism the organization and delivery of health
care, both in the US and globally, has proven dangerously fragile (Moudud 2020). Monopolization. The
pandemic has made even clearer the importance of the resurgent interest in the anti-monopoly tradition in academia and beyond
(Khan 2017; Paul 2020; Wu 2018; Vaheesan 2019; Teachout 2020; Novak 2010). The timeliness of this revival is underscored by rising
super-profits for technology monopolists, such as the so-called “FAANG” companies (Facebook, Apple, Amazon, Netflix, and Google,
as well as similarly situated firms such as Microsoft). Companies in other highly concentrated industries—including, since the start of
the pandemic, retail giants such as Walmart, Kroger, and Target—are also experiencing blowout profits as much of the rest of the
economy sinks into depression. Digital Surveillance and the Algorithmic Intermediation of Life. In addition to their economic
dominance, companies such as the FAANGs are also key drivers of unprecedented and intensifying shifts in the nature of
governance. These and other powerful companies located at the nexus of cutting-edge government-funded research, billionaire
financiers, and the military and security state (Mazzucato 2013; Weiss 2014; Block and Keller 2011) are constantly pushing their data
harvesting operations and algorithms into additional areas of life (Zuboff 2018; Pasquale 2015; O’Neil 2016; Cohen 2019; Kapczynski
2020). As a result, nearly every aspect of human experience, whether economic, political, social, cultural, psychological, or even
spiritual, is now increasingly under pervasive surveillance, intermediated and steered into often dangerous directions by
unaccountable algorithms and artificial intelligence networks so complex their architects often cannot even understand them
(Rahimi and Recht 2017). The full extent of the social, political, and governance effects of this surveillance is yet unknown, but
what we have become aware of so far is troubling. Neoliberal Family Policy. The pandemic has exposed the fact that wage labor, and
“the economy” as a whole, depend on processes of social reproduction that are deeply gendered, and defined as peripheral to or
outside the sphere of the market (Folbre 2001; Fraser 2017; Eichner 2020). It has largely fallen on mothers to take on the burdens of
homeschooling and the supervision of children and teenagers subject to remote instruction. The under-compensation of nursing
home aides (driven not only by a gendered undervaluation of care work, but also the economics of health care) can be directly tied
to needless deaths in rehabilitation facilities (Gonsalves and Kapczynski 2020). The pressure to “open the economy” places special
stress on K-12 teachers, as well as threatens to deepen the fissure between wealthy families able to hire private tutors for “pods” of
children and poor and middle-class families forced to rely on under-resourced public schools. Meanwhile, as Melinda Cooper (2017)
has pointed out, neoliberal economic governance also leans hard on the family as a mechanism for facilitating and legitimating
upward distribution. Using
the moralized discourse of personal and family responsibility, family policy
in recent decades has sought to shift economic and social risk onto individual households (Hacker
2019), slashing the social safety net and expanding private credit. The language of “family values” legitimates household
accumulation of wealth and privilege at the top (Markovits 2019), and—intertwined with the carceral state— legitimates state
surveillance and discipline at the bottom (Gustafson 2011). Racialized State Power. The pandemic has seen the maturation
of the largest racial justice movement since the 1960s, as issues of police violence touch off massive and sustained protests across
the United States and around the world. Notably, this movement has targeted the political-economic, organizational, and legal bases
of unaccountable law enforcement power. Acutely aware of the role of the criminal legal system in suppressing Black, brown,
Indigenous, and immigrant communities, many racial justice advocates have adopted policy stances ranging from outright abolition
of the police to redirecting resources away from “violence workers” and toward helping professions and community organizations
(Davis 2003; Vitale 2017). Movement organizers have taken aim at the legal and policy pillars of the criminal legal system, including
the political power of police unions, statutes and judicial decisions that create impunity for police violence (including the legal
doctrine of qualified immunity), and the harassment and punishment apparatus that brands people in poor Black and brown
communities as second-class citizens, including stop and frisk policies, money bail, and mandatory minimum sentences (Roberts
2019). The movement for Black lives is also calling attention to the deep connections between the
criminal justice system and our financial system under contemporary capitalism. A particularly salient
example is the civil lawsuit recently filed in Louisville, Kentucky in the wake of the police killing of Breonna Taylor as she slept in her
bed. Attorneys for Taylor’s estate sought to connect the dots between state violence and policies promoting gentrification in her
community (Bailey and Duvall 2020). Meanwhile, the president has turned from “dog whistles” (Haney López 2013) to bullhorns in
attempting to incite white fear and hostility against nonwhites, including immigrants as well as Black people. New
Authoritarianisms. State responses to these crises have been alarming. The US notoriously has a president
willing to encourage right-wing conspiracy theories, ignore science, stoke white nationalism,
and even encourage violence against his political opponents . His hold on power is supported by
party leaders and a base that seems gleefully willing to abandon democratic norms and
institutions (Kuhner 2017). But the turn to authoritarianism is not limited to the US. Leaders in nominally democratic
countries around the world are taking up similar projects to crush dissent and encourage
division. Anti-immigrant and anti-Muslim sentiments, couched in old languages of civilization, caste, and racial and religious
identity, flourish even while the call that “Black lives matter” echoes around the globe. Meanwhile, China and Russia are embracing
technology-enhanced efforts both to control their own citizens and to influence foreign events. One cannot help but see parallels
to the present situation in Karl Polanyi’s account of the collapse of classical liberalism and the
rise of fascism, the American New Deal, and European socialism as contending frameworks
promising to protect society from the ravages of the market during the first half of the twentieth
century (Polanyi 2001 [1944]). Ecological and Climate Crises. Finally, the pandemic has provided an illustration of the relevance of
political ecology to political economy. In 1976, the ecologist Barry Commoner argued that three apparently separate crises then
besetting the United States—a crisis
of environmental pollution, the “energy crisis,” and an economic
crisis of simultaneous recession and inflation (“stagflation”)—could be traced to a single basic
defect: a social design under which financial relations determined economic relations and
economic relations determined ecological relations, even though the precarity of life on Earth
demanded the reverse (Commoner 1976). Today, as climate change produces another set of cascading
crises that are both gradual and sudden, political ecologists are drawing connections between
“natural disasters” and the political economies of agribusiness, international development, and
urbanization under neoliberal governance . This work illuminates the role of law in facilitating patterns of
economic extraction and human settlement that disturb critical ecosystems, making far more
likely the emergence of new pathogens such as the novel coronavirus (Davis 2020; Wallace et al. 2020;
Foster and Suwandi 2020). Of course, these paragraphs barely scratch the surface of the myriad crises facing us. Our account,
however, indicates the scope of the Law and Political Economy framework. By founding JLPE, we hope to deepen a range of
connections. First and foremost, we wish to broaden discussions of the legal regulation of economic matters beyond the narrow
bounds of “Law and Economics.” As others have argued (Polinsky 1974; Harris 2003; Britton-Purdy et al. 2020), Law and Economics—
still the regnant discourse on economic issues in law schools—resists history, sociology, and the humanities, and reduces law to a
tool with which to maximize wealth, typically for the few (in practice if not in theory). Our view is nearly the opposite.
Markets and their constituents, including corporations, trade relations, contracts, property, and money
itself are creatures of law and politics, crafted by the state. Markets are also social institutions embedded
in histories of colonialism, slavery, and exploitation. Their ultimate purpose is to promote human flourishing,
not only to allocate scarce resources in a way that purportedly maximizes a particular
conception of efficiency. Accordingly, we invite sociologists, political scientists and theorists, geographers, economists,
anthropologists, literary scholars, historians, scholars in cultural and ethnic studies, Indigenous scholars, and others into the
conversation as we rethink state and market governance in the ashes of neoliberal ideology (see Brown 2019). Second, and
relatedly, we aim to reconnect legal scholarship with the longstanding, broad, and deep literatures of political economy. As we note
in the section that follows, while legal scholars have been entranced with Law and Economics, scholars in other fields have
continued to analyze markets within their social and political contexts. Too often, however, these scholars have ignored or
misunderstood the role of legal institutions and doctrines. Just as legal scholars must engage with political economy, political
economists must engage with the law. We hope to establish JLPE as a site for these richer conversations. Third, we hope to trouble
the conventional boundaries of “the economic” itself. As we elaborate below, neoliberalism’s separation of the state from the
market was preceded by domesticity’s evacuation of social production from the sphere of economic relations. Similarly, the
conventional history of capitalism frames land dispossession and the forcible incorporation of
subsistence economies into wage-based economies as “primitive accumulation,” something that
happened in the past, before capitalism proper (Ince 2014; Harvey 2004). And, as the new literature on racial
capitalism has begun to explore, the carceral state is connected in intricate ways to economic production, distribution, and
extraction, as well as to the moral ideologies of discipline and punishment that underpin discourses of work, public assistance, and
crime (Wacquant 2009; Soss, Fording, and Schram 2011; Gustafson 2011; Gilmore 2007). Finally, while JLPE is a US-based journal, we
think international, comparative, and global South perspectives are an essential part of developing a full picture and analysis of
contemporary Law and Political Economy. Despite the myth of American exceptionalism, the US is rooted in a transnational history
of empire that has shaped its foreign policy, its constitutional and immigration law, and international law itself (Anghie 2007; Rana
2010). Accordingly, at JLPE we welcome transnational and comparative analyses at all levels of scale, from the micropolitics of a
single community to the entire capitalist world system. We recognize that these attempts to topple conventional intellectual silos
come with a set of risks. An insight may be novel in one intellectual tradition and considered banal in another. Scholars trained in
one discipline may disdain the methods of another. And even seemingly basic terms like “capitalism” or “law” may be used in very
different ways in different fields, leading to misunderstanding or conflict. Nevertheless, we believe that in this time of multiple and
interlocking crises, such boundary-pushing endeavors are necessary if we are to meet our historic moment. III. Genealogies of Law
and Political Economy As four of our LPE colleagues note in a recent article (Britton-Purdy et al. 2020), the legal scholarship
of the last half-century has withdrawn from “questions of economic distribution and
structural coercion” (ibid., 1806). In legal fields designated as politics-regarding (such as constitutional or administrative
law), great deference is paid to existing economic and political distributions, which are treated as
neutral baselines from which courts should not stray without a compelling rationale (Sunstein
1987). In fields designated as market-regarding (such as corporate or property law), “[w]ealth maximization,
transaction costs, and externalities have served as ‘linking theories’ that connect analysis of legal
rules and institutions with the general equilibrium model of neoclassical economics” (Britton-Purdy et
al. 2020, 1800). Thus, in keeping with what Wendy Brown describes as neoliberalism’s form of “rationality” (Brown 2015; see also
Blalock 2014), both “public” and “private” law have come to depend on the idealization of efficient
and free markets that respond nimbly to rational preferences and maximize social wealth for all.
In embracing the term “political economy” rather than “economics,” we signal our rejection of this approach to markets,
politics, and law. In this section, we briefly take note of the intellectual resources on which the movement, and this journal, draws—
literatures that constitute our “invisible college” (Varellas 2018).

Alternative: we should socialize data infrastructure. Both the means of data
production and coordination could harness AI for non-market forms of social
coordination and problem-solving.
Evgeny MOROZOV PhD History of Science @ Harvard, Former Fellow at Open Society Institute
’19 “Digital Socialism?” New Left Review 116/117 p. 55-67

1. Solidarity as discovery procedure

Recall that Hayek, at least in his last decades, saw competition as not just the driving force of
market activity, but also as a mode of discovery. Through competition, consumers unearth new
tastes and producers develop new techniques of production. Hayek’s conception of competition
as a heuristic process is striking; it may even be accurate. But whatever its merits, competition is
not the only discovery procedure available to humankind. Can other ‘techniques of ordering
social affairs’ yield similar benefits? Central planning, on Hayek’s terms, is out as a mode of
discovery, as few ‘unknown unknowns’ come to light in the course of its operation; in fact, they
seem to proliferate, as the once frictionless adjustment to the changing environment
encounters knowledge problems and the centralized bureaucracy develops its own social
interests. But why assume that there are just two ‘discovery procedures’—competition and
central planning? This Manichean binary had a common-sense political basis during the Cold
War, replicating the antagonism between capitalism and communism. Trapped in that
framework, Hayek had little to say about the discovery potential of other social arrangements,
apart from competition.30

What forms might these alternative discovery procedures take? Consider a process centred on
social life and problem-solving, rather than on capitalist consumption, as in Hayek’s theory.
Social existence presents us with a plethora of problems to solve, some of them highly specific
and only relevant to small groups of people, others of much wider importance. Digital ‘feedback
infrastructure’ could be used to flag social problems and even to facilitate deliberation around
them, by presenting different conceptual approaches to the issues involved. What counts as a
‘problem’ would also be open for debate: citizens could enlist allies and convince others of the
virtues of their own readings of particular problems and proposed solutions to them. This
framing would suggest that deliberation-based democratic procedures could themselves be
modes of problem-solving and means of social coordination.

One could imagine the use of digital feedback infrastructure to match ‘problem-finders’, who
would express their needs and problems, and react to those identified by others—either
explicitly, by voicing them or writing them up, or ‘automatically’, via machine learning—with
‘problem-solvers’, equipped with cheap but powerful technologies and the skills to operate
them. Once the two groups have been ‘matched’ by the feedback infrastructure, the activity of
the ‘problem-solvers’ can help to render the implicit needs of ‘problem-finders’ tangible and
explicit, adding to the pool of solutions which can then be drawn upon by other ‘problem-
finders’. Assuming this takes place outside the commercial realm, there would be no barriers,
such as patents, to impede the sharing of knowledge.
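
[Editor’s illustration] Morozov leaves the matching mechanism abstract. As a purely illustrative sketch — the Finder/Solver types, the topic tags, and the greedy best-overlap rule are all invented here, not drawn from the article — pairing ‘problem-finders’ with ‘problem-solvers’ might look like this in Python:

from dataclasses import dataclass

@dataclass
class Finder:
    name: str
    needs: set[str]      # topics the finder wants help with

@dataclass
class Solver:
    name: str
    skills: set[str]     # topics the solver can address
    capacity: int = 1    # how many problems the solver can take on

def match(finders: list[Finder], solvers: list[Solver]) -> list[tuple[str, str, set[str]]]:
    """Greedily pair each finder with the available solver whose skills overlap most."""
    pairs = []
    for finder in finders:
        best, overlap = None, set()
        for solver in solvers:
            shared = finder.needs & solver.skills
            if solver.capacity > 0 and len(shared) > len(overlap):
                best, overlap = solver, shared
        if best is not None:
            best.capacity -= 1
            pairs.append((finder.name, best.name, overlap))
    return pairs

if __name__ == "__main__":
    finders = [Finder("tenant-union", {"data-rights", "mapping"}),
               Finder("food-coop", {"logistics"})]
    solvers = [Solver("hacker-collective", {"mapping", "logistics"}, capacity=2)]
    for f, s, shared in match(finders, solvers):
        print(f"{f} <- {s} on {sorted(shared)}")

A real feedback infrastructure would of course learn these tags rather than hard-code them; the point of the sketch is only that matching needs no price signal.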

Collaborative problem-solving in the social domain already takes place to some extent. One
example would be ‘hackathons’, which bring together NGOs with particular problems and well-
meaning hackers who might know how to solve them but would otherwise never encounter
them. The original premise of hackathons—before they were co-opted by the development
sector and Silicon Valley—was that altruism and solidarity should drive the cooperation
between ‘providers’ and ‘consumers’ of solutions. In principle, these processes could be
expanded on a much greater scale, given sufficiently fast and comprehensive feedback systems,
with algorithms to match.

Would collaborative discovery modes of this type necessarily reveal less than those operating
through Hayekian competition? Current economic conditions arguably favour competition-
based discovery over solidarity-based processes, but this is not a natural or inevitable state of
affairs—the result of evolution, as Hayek argued. Rather, it is the result of political interventions,
informed by a Hayekian rejection of non-individualist, altruistic alternatives. It would be
tautological to say that neoliberalism, which has striven to install competition as the only mode
of discovery, also favours discovery through competition. To believe that capitalist competition
will always yield more knowledge than other discovery procedures requires us to believe, for
example, that we learn more about the world when we act as consumers than when we act as
parents, students or citizens; and that our human needs are better expressed in the consumerist
language of competition than in any other terms. In the realm of production, one would have to
believe that the imperative to innovate ‘induced’ in competing producers by the capitalist laws
of motion will yield greater improvements in social existence than would the imperatives driving
non-market ‘problem-solvers’—environmental considerations, perhaps—who might be capable
of generating cost-reductions of their own. Besides, competition is not always conducive to
discovery. Hayek himself understood that intellectual property rights, historically an important
pillar of capitalist development, erect barriers to discovery—yet they seem to have become a
permanent feature of his favoured system. This is not a problem in solidarity-based discovery
procedures.

2. Designing ‘non-markets’

Though neoliberalism always favours markets and prices, its technologies help create
possibilities for transcending them. One such is indicated by Alvin Roth’s work on devising ways
to match organ donors with potential recipients, in the absence of prices: once the preferences
of all the transacting parties have been clearly expressed, one can do away with the price
system and find other ways of distributing scarce resources. This suggests the second use to
which digital feedback infrastructure can be put by the left: designing ‘non-markets’. There are,
however, several problems with applying such solutions on a larger scale. First, the more
transacting parties there are, and the more preferences they express, the greater the complexity
of the matching process. Second, markets provide means of social coordination that extend far
beyond simply distributing existing resources between a fixed number of parties with clearly
stated preferences. What to do when the number of parties is unknown, the preferences are
fuzzy, there are no ready-made resources to distribute and the external environment is ever
more complex? This is where ‘feedback infrastructure’ can be of help, by replacing markets with
equally carefully designed institutions that can leverage information flows to solve problems of
complexity—the second function that Hayek assigned to competition.
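
[Editor’s illustration] Roth’s kidney-exchange work is the canonical case of matching without prices. A drastically simplified sketch of a two-way exchange — the compatibility table and example pairs are hypothetical, and real mechanisms such as top trading cycles are far richer — in the same Python register:

COMPATIBLE = {  # donor blood type -> recipient blood types that donor can serve
    "O": {"O", "A", "B", "AB"},
    "A": {"A", "AB"},
    "B": {"B", "AB"},
    "AB": {"AB"},
}

def two_way_swaps(pairs: list[tuple[str, str]]) -> list[tuple[int, int]]:
    """Find disjoint two-way swaps among (patient, incompatible donor) pairs."""
    matched: set[int] = set()
    swaps = []
    for i, (p1, d1) in enumerate(pairs):
        if i in matched:
            continue
        for j, (p2, d2) in enumerate(pairs):
            if j <= i or j in matched:
                continue
            # A swap works when each donor is compatible with the other patient.
            if p2 in COMPATIBLE[d1] and p1 in COMPATIBLE[d2]:
                matched.update({i, j})
                swaps.append((i, j))
                break
    return swaps

# pairs[i] = (patient blood type, willing-but-incompatible donor blood type)
pairs = [("A", "B"), ("B", "A"), ("O", "A")]
print(two_way_swaps(pairs))  # -> [(0, 1)]: the first two pairs exchange donors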

The legacy of cybernetics is relevant here. It’s indicative that Reinventing Capitalism dedicates a
few paragraphs to trashing the work of Stafford Beer, the British cybernetician who helped the
Salvador Allende government to build a very basic ‘feedback infrastructure’ for the Chilean
economy in the early 1970s. The authors’ grasp of Beer’s project appears rudimentary, and they
use it mostly to attack government ‘nudgers’ like Cass Sunstein—an odd choice, given that the
Chilean project didn’t try to shape individual behaviour, and that Beer explicitly warned against
individual conditioning by digital means. Beer’s solutions to the problems of complexity were
very different from Hayek’s, even though the two—who met briefly at a cybernetics congress in
the early 1960s—started with similar premises. Beer, too, believed that complexity was
growing, and that the old ways of minimizing it—religious edicts prescribing strict codes of
individual behaviour, for example—no longer worked. But social life itself provided numerous
examples of deliberately constructed efforts at reducing complexity, institutions being the most
obvious ones. Firms—artificial entities, by any standards—did this in the market domain;
libraries, universities, traffic systems and measurement systems offered examples of
deliberately created entities capable of handling complexity in non-market domains.

While Hayek never offered a convincing theory of how to adjudicate between the demands of
competing ‘spontaneous orders’, Beer dedicated his life to deploying the tools of cybernetics to
make both market and non-market institutions more responsive to the demands of growing
social complexity. This meant building robust information flows inside the system, as well as
between the system and its environment, so that its internal components could themselves
undergo timely internal transformations to better adapt the system as a whole to changing
external conditions.31 Beer imagined ‘spontaneous orders’ as nested within each other, in a
recursive manner—for example: a household inside a neighbourhood inside a town—and
structured by an organizational division of labour, with some parts responsible for setting
systemic goals, some for developing strategies for achieving them, some for maintaining the
system. The total complexity of a given ‘spontaneous order’ was thus a function of the
relationship between that order and its external environment, as well as the distribution and
execution of functions inside it.

According to Beer, there are two ways to tame complexity. First, one can make the internal
behaviour of the vested spontaneous orders more uniform, by way of rules, standards, ethical
prohibitions and so on; Beer called this ‘variety attenuation’. Second, one can try to detect
emerging complexity early on, re-engineer the underlying organizational structure to deal with it
—and, instead of standardizing the responses of individual components, give them as much
autonomy and power in overcoming their own local manifestations of complexity as possible.
Beer called this ‘amplifying regulative variety’. The two modes aim at very different outcomes:
the first seeks to make the system more coherent by reducing any unnecessary variations across
its component parts, while the second seeks to make it more complex in order to match the
complexity of the external environment. How to reduce complexity—how to determine the
correct level of intervention, as well as the right mix of ‘attenuating variety’ and ‘amplifying
regulative variety’—was thus an open question. As Beer put it in Designing Freedom:

The precise form of variety attenuation is a matter for local decision. The critical mistake we are
making is to take the variety-attenuating decisions at the wrong level of recursion. Then this is
how freedom is lost, and this is what induces the instability that threatens to become
catastrophic. For the whole-system model simply does not have the requisite variety to balance
the local homeostats. They in their turn are robbed of the variety they need to find their own
stable points.32

By contrast to this, Hayek’s cybernetic model of society was simplistic. Capitalist competition—
the system’s overall regulator—was the means by which it communicated changes in rules and
normative orientations, with which the smallest units of the system then complied, as a way
of ‘attenuating variety’. Beer’s conception of society as composed of recursive orders, on the
other hand, reveals that the imperatives and prescriptions imposed on local ‘spontaneous
orders’ by capitalist competition—one of the outermost layers of the total social system—could
also greatly constrain the adaptive and problem-solving capacity of the local ‘homeostats’.33
Since competition cannot resolve all the problems that emerge at these lower levels, and indeed
limits the ability of these levels to respond in more effective ways themselves, overall
complexity increases, inducing instability.

Beer argued that advances in information technology could drastically amplify ‘regulative
variety’ while pushing ‘variety attenuation’ to the lowest possible levels of the system, where it
would cause the least damage. First, information technology should be able to offer a more accurate,
real-time picture of the external complexity, and to check if the system’s contingency plans for
dealing with it are adequate (Beer celebrated the ‘self-aborting plan’, which liquidates itself on
discovering that the external circumstances have changed).34 Second, technology allows for a
close and continuous observation of the system’s internal dynamics, and makes it easier to
repurpose its organizational structure as the external environment demands. Once external and
internal complexity have been studied and understood, it should be possible to find a ‘hack’ of
some kind. Beer once gave the example of a timetable and room assignment in a busy school: a
very complex problem of social coordination is solved with a simple two-dimensional chart.
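
[Editor’s illustration] Beer’s timetable example can be made literal: the ‘simple two-dimensional chart’ is just a lookup keyed by period and room. A toy sketch with invented entries:

timetable = {
    ("09:00", "Room A"): "Maths, Year 7",
    ("09:00", "Room B"): "History, Year 9",
    ("10:00", "Room A"): "Physics, Year 10",
}

def who_is_in(period: str, room: str) -> str:
    """Answer a coordination query with a single chart lookup."""
    return timetable.get((period, room), "free")

print(who_is_in("09:00", "Room A"))  # -> Maths, Year 7
print(who_is_in("10:00", "Room B"))  # -> free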

For Beer, the exact allocation between the two solutions—that is, whether to constrain the
behaviour of individual parts (citizens or customers, for example) or to amplify the regulative
capacity and the institutional and informational plasticity of the system, and of the systems that
contain it—was to be determined democratically. The second solution was generally preferable,
as it granted citizens more autonomy. Thus Beer advocated making planning, computing and
coordinating infrastructure free and available to all, so that individual institutions, tasked with
reducing complexity in their own contexts, could find their own optimal solutions. This did not
imply some neoliberal vision of the ‘Big Society’, where individuals are expected to take
problem-solving into their own hands, as fund-starved public alternatives collapse. Instead, the
ambition is for radical democracy to join forces with ‘radical bureaucracy’ in order to take
advantage of advanced infrastructures for planning, simulation and coordination. This
combination should, at a minimum, yield solutions as efficient as those of Hayek’s ‘spontaneous
order’, without, however, offloading all the adaptation costs onto citizens or erecting too many
barriers to the problem-solving capacities of local systems.

Remarkably, not all neoliberals disagree. One of the most striking developments in neoliberal
theory and practice of the last decade has been an explicit concession by some neo-Hayekians
that information technology could provide efficient methods of social coordination in
environments where price signals are missing.35 Here, as in the case of market design, the neo-
Hayekian embrace of non-price forms of social coordination is mainly driven by the political
exigencies of keeping neoliberalism afloat by attacking the rump administrative state. If taming
the Leviathan now means that neoliberals must preach the virtues of decentralized civil society,
the ‘social economy’, the Ostromian commons, or ‘polycentric orders’—still short of celebrating
autonomia operaia, but getting there!—it seems they will oblige.

This leads to some genuinely bizarre ideological repositioning. Some Hayek-inspired scholars
find it politically advantageous to concede that there are other forms of social coordination
besides the price system, as long as they can also argue that decentralized social groups—NGOs,
charities, churches—can leverage information technology to do a better job at coordinating
disaster relief than centralized government bureaucracies. However, once the neoliberals
concede this, they become exposed on other fronts: why shouldn’t decentralized government
bureaucracies, redesigned along the lines proposed by Beer and fully plugged into the
democratic ‘feedback infrastructure’, do at least as good a job as, say, churches, if not better?
Once social coordination has been liberated from the heavy ideological baggage of the price
system, there are no sound theoretical reasons to assume public institutions are always inferior
to private ones in managing complexity.

3. Decentralized planning

What role can ‘feedback infrastructure’ play in coordinating economic activity in general? For
some time now, left-leaning economists and activists have tried to reopen the Socialist
Calculation Debate, arguing that the latest advance in data-gathering and computation would
make the job of Lange’s Central Planning Board much easier.36 Followers of Hayek and Mises
have developed a standard response to such efforts, pointing out the efficiency losses involved
in switching from the price mechanism to, say, a system using labour values as the basis of
calculation. Neoliberals have it relatively easy in such debates, as the spectral presence of
centralized planning in the proposed alternative economic system allows them to invoke the
Hayekian knowledge problem. But is there a way to rethink the socialist position in a way that
would neither involve central planning, nor morph straight back into the price system?

Processes of consumption and production have changed a great deal since the interwar period,
and many of the initial assumptions of the Socialist Calculation Debate no longer apply—
including the presumed virtues of central planning. On the consumption side, the predictive
capacity of Big Data can anticipate our preferences better than we can; that Amazon got a
patent on ‘anticipatory shipping’—allowing it to ship products to us before we even know we
want them—suggests that the ‘feedback infrastructure’ can foresee and facilitate the
satisfaction of our needs in ways unimaginable to central planners. Such predictive capacity is a
function, not of the mysterious workings of the price system, but of the data held by platforms.
Likewise on the production side, 3D printers enable cheap and flexible manufacturing, without
the need for massive fixed-capital investment.

Some technologies do require vast capital outflows, Artificial Intelligence being a pertinent
example. But the current mode of funding AI development—a dozen giant firms in the US and
China wasting tens of billions of dollars on training their systems to develop identical capacities
to classify faces and sounds—is not necessarily the most efficient way of securing its
advancement. With a different funding model, one could democratize access to AI, while also
getting more value for each dollar invested. Free, universal access to both additive
manufacturing and artificial intelligence could facilitate the production of genuinely innovative
products on a relatively low budget.

Given this new context, it does not seem very productive for the left to keep advocating for the
use of more powerful computers to calculate input prices for the Central Planning Board—or to
retain a centralized bureaucracy, with all the political problems it entails. Why insist on central
planning, when a more decentralized, automated and apparatchik-free alternative might be
achievable by putting the digital feedback infrastructure to work? The most ambitious effort to
sketch what such an alternative might look like—think ‘guild socialism’ in the era of Big Data—
was undertaken by the American radical economist Daniel Saros, in his rigorous, lucid—and
unjustly neglected—Information Technology and Socialist Construction.37 Saros’s plan has some
gaps and omissions, and the level of technological power available in 2019 is much greater than
it was even five years ago. Still, the book’s overall vision provides inspiration and
encouragement to those searching for alternative ways of coordinating economic activity on a
large scale. After an exhaustive summary of the positions taken in the Socialist Calculation
Debate, Saros contends that the socialist economists couldn’t envision a superior, more
decentralized form of planning simply because the technology at their disposal was inadequate.
The technology he has in mind, though, is not the kind used for solving equations or crunching
numbers for the Central Planning Board, but one that powers the sort of ‘feedback
infrastructure’ described earlier.

Saros’s elegant solution disaggregates the many uses of the price system for social coordination,
keeping some and replacing others with the ‘feedback infrastructure’ itself. At the centre of his
system stands a General Catalogue, something of a mix between Amazon and Google, where
producers, who are organized in guild-like ‘worker councils’— worker-run startups if you will—
list their products and services in a way that would be familiar to users of Apple’s App Store or
Google’s Play Store. Consumers, equipped with a unique digital ID card, turn to the catalogue to
register their needs during the so-called ‘needs registration period’ at the beginning of each
production cycle; they rank the products they want, specifying their quantities for the next
cycle. Consumers can still purchase products they didn’t request after the need-registration
period ends, but they receive higher bonuses if their purchases do not deviate from their initial
predictions. To encourage consumers to order no more than they need, bonuses are given for
buying fewer items than the average consumer. Bonuses, which are awarded for other things,
too—e.g. for staying in the same job for a long time—are added to the universal basic income
that all citizens receive.

At the end of the need-registration stage, producers—whose products are ranked, Amazon-
style, in the General Catalogue, with ratings affecting worker bonuses—calculate expected
production figures and register their need for inputs in the Catalogue. Producers can fine-tune
their production numbers using the consumption patterns analysed by Big Data, as well as the
prior specifications of needs by consumers. This information also allows any shortages to be
socialized, since it is possible to calculate the share of the total remaining supply of the good
that a particular consumer is entitled to, in light of the needs expressed by all the other citizens.
Worker councils decide on the price to charge for each product, but since they are not profit-
seeking entities, their compensation is not tied to sales or profits, and so their main criterion in
setting the price is getting rid of all their inventory before the next production cycle begins.
Should demand for them be particularly low, certain products could be given away free.
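
[Editor’s illustration] The ‘socialized shortage’ calculation described above is essentially proportional rationing. A minimal sketch under that assumption, with invented names and figures:

def socialize_shortage(registered_needs: dict[str, int], supply: int) -> dict[str, float]:
    """Allocate scarce supply in proportion to each consumer's registered need."""
    total_need = sum(registered_needs.values())
    if total_need <= supply:               # no shortage: everyone is served in full
        return {c: float(n) for c, n in registered_needs.items()}
    ratio = supply / total_need
    return {c: n * ratio for c, n in registered_needs.items()}

needs = {"consumer_a": 6, "consumer_b": 3, "consumer_c": 1}   # 10 units requested
print(socialize_shortage(needs, supply=5))
# -> {'consumer_a': 3.0, 'consumer_b': 1.5, 'consumer_c': 0.5}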

These are just the basics of the sophisticated system outlined in Saros’s remarkable work. Some
of its features would certainly offend the ecosocialist creed: consumers are allowed to express
and fulfil all their desires, however excessive—though there are built-in incentives, like bonuses,
fostering restraint. Some critics, like Supiot, might also consider the system’s dependence on
feedback mechanisms and ratings a high price to pay, especially as it involves much-maligned
quantification. On the other hand, Saros’s system might help minimize the power that would
normally accrue to the technocratic class—though Saros concedes that system administrators
and scientists evaluating resource scarcity will have something of the classic role assigned to
bureaucrats.

How realistic is Saros’s system? An examination of how big technology firms organize their
platforms reveals that some aspects of it are already in operation. Amazon, for example,
rewards customers with lower prices for registering their expected future needs and
‘subscribing’ to periodic deliveries of regularly consumed products; it also carefully studies
product searches and the offerings of other suppliers in its own ‘general catalogue’ to locate
gaps in the market. Democratizing access to that information infrastructure, so that all
producers can build on these emerging product insights, would surely result in a system that is
far less centralized than today’s, where just one firm (Amazon) monopolizes all the planning
based on that data. One may quibble about the details of Saros’s system, but it’s indisputable
that this is not a model based on ‘central planning’ in any formal definition of the term. Yes,
there’s plenty of market design, as well as plenty of social coordination based on information,
not prices; but even neo-Hayekians, by now, have conceded that these are acceptable. Under
Saros’s system, the price mechanism retains some of its functions, but, wedded to a non-
capitalist ethos, it plays no role in setting the level of compensation.

Socialize the means of feedback production!

All three of these projects—‘solidarity as discovery’, ‘designing non-markets’ and ‘automated
planning’—hint at a world in which increased complexity is not accepted as an unalterable fact
and where competition is not the only way of dealing with it. Information technology, in turn,
would be seen as a means of discovering and acting upon the plasticity of social and economic
arrangements, undoing the bundles—like price, the various functions of which had previously
been lumped together— that have so far been taken for granted. Making progress on any one of
these fronts could constitute a major advance for the left. But no such progress will materialize
if the means for creating alternative modes of social coordination—the ‘feedback
infrastructure’—remain the exclusive property of tech giants.

If the Socialist Calculation Debate teaches us anything, it’s that the left should not waste time
debating the merits of the price mechanism in isolation from its embeddedness in the broader
system of capitalist competition, which generates non-price knowledge—reputation and so on—
and produces the general social norms and patterns of legibility which allow the price system to
do so much with so little. While it’s true that, evaluated on its own terms, the price system
appears a marvel of social coordination, it’s also true that, without capitalist markets, it doesn’t
exist. It thus makes sense to strive for a more comprehensive assessment, looking at how the
existence of capitalist competition—and of capitalism in general—affect social coordination tout
court. Social coordination can be mediated by a whole ecology of mechanisms, including law,
democratic deliberation, decentralized ‘radical bureaucracy’ and feedback control, as well as the
price system. Consider, for example, the non-price knowledge that circulates in capitalist
economies, which not only informs the price system but also shapes our assessment of the
urgency of threats, helping to inform our responses. The more accurate that information, the
more likely we are to ensure social coordination in solving tasks which—like climate change—
are crucial to the survival of the species.

Yet capitalist competition often ends up contaminating that knowledge, making an accurate
assessment of the situation nearly impossible. After the neoliberal turn, competition is
increasingly becoming a non-discovery procedure. Consider the energy companies or
pharmaceutical firms who deliberately manufacture ignorance by selectively funding academics
and think tanks. Or the media-military-industrial complex, shaping how the public thinks about
the latest war. Or the increasingly privatized education system, unable to ‘discover’ the sort of
knowledge that has no easily quantifiable impact. Or the credit rating agencies, whose business
models often obscure the real state of the firms they are supposed to be evaluating. An entire
academic industry—under the quirky name of ‘agnotology’—has sprung up to study the
production of such manufactured ignorance and its use by capitalist firms.38 The best possible
outcome of this research would be a recalibration of how we assess the comparative advantages
of various systems of social coordination—and a shift of focus, from measuring solely their
respective contributions to economic efficiency, to weighing up their ability to perceive
existential social problems, in all their complexity, and to propose possible solutions.

The ideological residue of the Cold War, with its binary choice between central planning and the
price system, has obscured the existence of this broader ecology of modes of social
coordination. The emancipatory promise of information technology is to rediscover and enrich
this repertoire, while revealing the high invisible costs of relying on the current dominant mode
of social coordination—capitalist competition. Given this possibility, the agenda of the
neoliberal establishment is clear. On the one hand, they will rally behind a slogan of ‘There Is No
Alternative (to Google)’, depicting any departure from the cartelized Silicon Valley model—or at
least, any moves that dare go beyond the consumerist utopia of a ‘New Deal on Data’—as yet
another step on the road to serfdom. On the other hand, they will continue filling in the empty
social and political spaces which previously had their own logics and ways of doing things, with
the ‘smart’ capitalist logic of digital platforms.

The left, then, should focus on preserving and expanding the ecology of different modes of
social coordination, while also documenting the heavy costs—including on discovery itself—of
discovering exclusively via competition. This mission, meanwhile, will be all but impossible
without regaining control over the ‘feedback infrastructure’. The contradiction between
collaborative forms of knowledge discovery and the private ownership of the means of digital
production is already becoming apparent in the processes of ‘peer production’—long celebrated
by liberal legal academics—used in the production of free software or services like Wikipedia.
Under the current Silicon Valley private-ownership model, the feedback infrastructure is unlikely
to be amenable to radical-democratic transformation.39 Freedom, as neoliberals have long
understood, must be planned; but so must their ‘spontaneous order’. In the absence of such
planning, spontaneity quickly morphs into adaptation to an external reality that is not to be
tinkered with. This may be an acceptable—even desirable—development for conservatives, but
it should be anathema to the left.
ai
turn – 1nc
1. The network of information capitalism exceeds national boundaries.
Sovereign AI is a myth used to empower the security state.
Kate CRAWFORD co-founder and former director of research at the AI Now Institute @ NYU, Prof
@ MIT Center for Civic Media, Senior fellow at the Information Law Institute at NYU ’21 Atlas of
AI: Power, Politics, and the Planetary Costs of Artificial Intelligence p. 208-209

The overlapping grids of planetary computation are complex, cross-breeding corporate and
state logics, exceeding traditional state border and governance limits, and they are far messier
than the idea of winner takes all might imply. As Benjamin Bratton argues, “The armature of
planetary-scale computation has a determining logic that is self-reinforcing if not self-fulfilling,
and which through the automation of its own infrastructural operations, exceeds any national
designs even if it is also used on their behalf.”91 The jingoistic idea of sovereign AI, securely
contained within national borders, is a myth. AI infrastructure is already a hybrid, and as Hu
argues, so is the labor force underpinning it, from factory laborers in China who make electronic
components to Russian programmers providing cloud labor to Moroccan freelancers who screen
content and label images.92

Taken together, the AI and algorithmic systems used by the state, from the military to the
municipal level, reveal a covert philosophy of en masse infrastructural command and control via
a combination of extractive data techniques, targeting logics, and surveillance. These goals have
been central to the intelligence agencies for decades, but now they have spread to many other
state functions, from local law enforcement to allocating benefits.93 This is just part of the deep
intermingling of state, municipal, and corporate logics through extractive planetary
computation. But it is an uncomfortable bargain: states are making deals with technology
companies they can’t control or even fully understand, and technology companies are taking on
state and extrastate functions that they are ill-suited to fulfill and for which, at some point in the
future, they might be held liable.

The Snowden archive shows how far these overlapping and contradictory logics of surveillance
extend. One document notes the symptoms of what an NSA employee described as an addiction
to the God’s-eye view that data seems to offer: “Mountaineers call this phenomenon ‘summit
fever’—when an ‘individual becomes so fixated on reaching the summit that all else fades from
consciousness.’ I believe that SIGINTers, like the world-class climbers, are not immune to summit
fever. It’s easy enough to lose sight of the bad weather and push on relentlessly, especially after
pouring lots of money, time, and resources into something.”94

All the money and resources spent on relentless surveillance is part of a fever dream of
centralized control that has come at the cost of other visions of social organization. The
Snowden disclosures were a watershed moment in revealing how far a culture of extraction can
go when the state and the commercial sector collaborate, but the network diagrams and
PowerPoint clip art can feel quaint compared to all that has happened since.95 The NSA’s
distinctive methods and tools have filtered down to classrooms, police stations, workplaces, and
unemployment offices. It is the result of enormous investments, of de facto forms of
privatization, and the securitization of risk and fear. The current deep entanglement of different
forms of power was the hope of the Third Offset. It has warped far beyond the objective of
strategic advantage in battlefield operations to encompass all those parts of everyday life that
can be tracked and scored, grounded in normative definitions of how good citizens should
communicate, behave, and spend. This shift brings with it a different vision of state sovereignty,
modulated by corporate algorithmic governance, and it furthers the profound imbalance of
power between agents of the state and the people they are meant to serve.

2. No AI competition.
Sherman ’19 [Justin; Cybersecurity Policy Fellow @ New America; “Stop calling artificial
intelligence research an ‘arms race’”;
https://www.washingtonpost.com/outlook/2019/03/06/stop-calling-artificial-intelligence-
research-an-arms-race/?utm_term=.a8fe09dcfad5]

We see the phrase everywhere — the United States and China are in an artificial intelligence
“arms race.” It manifests in op-eds, news articles and television segments. It’s in books, think tank pieces
and government documents. All this to capture the fear that another country might develop AI more
powerful than our own. But calling the AI competition an “arms race” is both wrong and
dangerous. It suggests AI development is winner-take-all, in that two isolated national AI sectors struggle for
total domination, leading to policies that cut off valuable interconnection. Simultaneously, it misrepresents AI
research more generally by implying that this varied field is a single technology, almost inevitably focusing too heavily on AI’s
military applications. The premise that AI research is a zero-sum endeavor is especially easy to
debunk. In reality, American firms invest billions of dollars in Chinese AI companies, and Chinese
firms have invested tens of billions of dollars in the other direction. American firms also depend
heavily on Chinese manufacturing, which will have an even greater impact on AI development,
as artificial intelligence is increasingly deployed in hardware such as that of drones and robots.
The interconnections between U.S. and China AI development are also knowledge-related: To
name just a few examples, China’s Tsinghua University opened in June an Institute for Artificial
Intelligence, where Google’s AI Chief, Jeff Dean, is an adviser; Baidu, the Chinese search company, belongs to the U.S.-
based Partnership for AI, which aims to develop best practices for AI technology; and China’s largest retailer has a
research partnership with Stanford University’s Artificial Intelligence Lab to fund research areas
such as computer vision, machine learning and forecasting. The open-source nature of some elements of AI
research further contributes to a near-constant flow of information across borders. To
speak of AI as an arms race is also to ignore the many areas of AI development, such as the
potential for improved public health outcomes, that may benefit both countries. Algorithms that better detect
cancer, for instance, could notably reduce costs of care and increase the accuracy of early-stage cancer prediction. This could benefit
the United States and China at once, not to mention other countries around the world. With a winner-takes-all “arms race” framing,
though, U.S. policymakers may enact policies that hurt American AI development and foreclose opportunities by cutting off vital
pipelines of funding, knowledge and other resources. Trump’s sweeping export controls on AI, for example, aim to limit the diffusion
of certain knowledge and resources around AI to China. In the process, they might cut off beneficial relationships and exchanges and
“substantially reduce” commercial opportunities for American companies. The “arms race” metaphor
is also
misguided because it incorrectly treats “artificial intelligence” as a single technology. From
recognizing a face to detecting skin cancer to assessing a convict’s likelihood of recidivism, different applications of AI have different
properties and different sets of training data. These technologies also develop at different speeds, as they may require different data
or computing power and may rely on different computer science techniques. Some (such as lethal autonomous weapons) may have
wide-ranging effects on state power, while others (such as sophisticated chess programs) may function more as corporate
showpieces. Equating these and other fields could easily lead us to prioritize the wrong things for the wrong reasons. But
with
this “arms race” framing, policymakers and commentators talk of China “beating” the United
States without understanding what “winning” means for either side. What happens if Chinese tech
giant Alibaba develops better facial recognition systems than Google? Or what if China’s military
drones autonomously fly faster than those developed by a San Francisco start-up? The end
result for these and other scenarios is unclear, which means policymakers may not adequately invest in areas of AI
development with the greatest strategic effect. Additionally, an “arms race” framing may very well lead policymakers to mishandle
the varied risks that some AI technologies present. The social, political, economic, legal and ethical challenges of a facial recognition
algorithm deployed by a city’s police department are quite different from those of a racially biased skin cancer predictor or a “black
box” missile-firing system. If
we’re going to manage those dangers, we need to think about them
carefully and discretely, which becomes more difficult when we’re just rushing to produce them
first. At a time when the United States should be setting strong democratic norms around the design and use of AI — in opposition
to the Chinese government’s digital authoritarianism — treating these technologies as if they were the same may yield disastrous
risk management. This doesn’t mean that the United States and
China aren’t competing over AI — or that the competition is irrelevant. On the contrary, artificial intelligence will bolster national
economies and enhance military capabilities, both of which are bound to have an effect on state power. As many countries around
the world decide on the role of AI in society, their choices will inevitably affect the world order — influencing whether AI is used to
bolster democratic or authoritarian forms of governance. That adds another worrisome complication to the “arms race” metaphor,
which suggests that the
United States and China are both coursing along the same track toward the
same finish line. This premise could make it harder for the United States to pursue research according to more democratic
norms, as it suggests that we’re just trying to snatch away whatever it is that China is grasping at before it can get to it. The United
States needs to design a cohesive national AI strategy — the recent executive order does not count, as it’s too vague and doesn’t
adequately discuss a long-term American vision for AI — that addresses the many technologies at hand. China, on the other hand,
does address AI’s many forms in its many documents that outline the government’s plans and ambitions for AI development in
numerous domains. It’s a demonstration of commitment to AI development “at the highest levels,” from education to industrial
transformation to driverless vehicles. An
American strategy that approaches AI development as one “arms
race” is going to fall short because it tells a story that is far too simple about technologies that
are getting more complex every day.
AT: taiwan – 1nc
3. AI not key – alliance commitments, arms sales, and Chinese doctrine
overwhelm.
4. No Taiwan war.
Thompson 5-11-20. former US Defence Department official responsible for managing bilateral
relations with China, Taiwan and Mongolia. He is currently a visiting senior research fellow at
the Lee Kuan Yew School of Public Policy, National University of Singapore. Drew, China Is Still
Wary of Invading Taiwan, Foreign Policy, https://foreignpolicy.com/2020/05/11/china-taiwan-
reunification-invasion-coronavirus-pandemic/

Yet despite the triumphal tone in public, China is far from ready to launch an invasion of Taiwan.
China’s leaders are far from confident in the Communist Party’s ability to remain in power, to
the point of paranoia, and continually emphasize the threats and risks that they face, both
internally and externally. China’s top think tank affiliated with the Ministry of State Security, the China Institutes of
Contemporary International Relations, reportedly advised party members in an internal report to prepare for armed conflict with the
United States, which is driving global anti-China sentiment in the aftermath of the COVID-19 pandemic to levels not seen since 1989.
Initiating a war over Taiwan in the face of both internal and external threats is the greatest
risk imaginable. Regardless of these risks, invading Taiwan would not be a cakewalk. Taiwan has been
upgrading and reforming its defense over the past decade, adopting an asymmetric strategy
designed to capitalize on its strengths to counter PLA power projection capabilities. U.S. President
Donald Trump’s unpredictability, and his administration’s steadfast support for Taiwan, makes it
impossible for Xi to believe China’s hawks who claim that the United States is unwilling to
brave the costs of coming to Taiwan’s defense. Japan’s steady turn away from China also raises
doubt about whether it would sit out a Taiwan contingency. An even bigger factor is the global
economic impact of the pandemic and whether or not economic decline in China is long- or
short-term and whether it causes persistently high unemployment, public dissatisfaction, and
domestic unrest, which will focus the immediate attention of senior leaders in Beijing to these
internal challenges. The uncertainty of the global economy, shifting trade and investment
trends, and high debt-to-GDP ratios also argue against Beijing starting a potentially costly war.
I do not believe that Beijing will start a war with Taiwan to distract from domestic problems
either, since it is unlikely that the Politburo will want to create major new risks on top of
growing existing ones. There is no historical precedent for China starting external wars to
distract from internal problems either, even though China has used force in the past against its neighbors India and
Vietnam. Those conflicts were initiated to signal the Soviet Union (and, to a lesser extent, the United States), not to distract from
internal challenges. China was enjoying the novelty of domestic stability with Deng Xiaoping’s rise to power and normalizing
relations with the United States after a decade of turmoil during the Cultural Revolution under Mao Zedong when it initiated its
invasion of Vietnam in 1979.
AT: china rise – 1nc
5. No China rise.
Merrick Carey 20 - a former senior Capitol Hill aide who is now CEO of the Lexington Institute,
a public policy think tank in Arlington, Virginia. (“Has China's Rise Peaked?”, Real Clear Defense,
https://www.realcleardefense.com/articles/2020/07/13/has_chinas_rise_peaked_115460.html)

Even though the Western mainstream view is that China is a military and economic dynamo that
is quickly leaving America behind, the world may be turning against the Middle Kingdom, and
Chinese leadership may be turning to a harsh brand of nationalism as a result. Its recent border
clash with India in the high Himalayas and crackdown on free Hong Kong are the most recent
manifestations of this. China's rise as a global economic power, and regional military power, is one of the fastest in history.
China has grown faster than America for four straight decades. It has built the industrial and technological
foundations for a rapid expansion of its military, to include world-class capabilities in space launch, and its own version of GPS.
China's mercantilist economics have taken over entire sectors of other economies, including those of the United States. All this
has occurred not just with U.S. acquiescence, but intentional facilitation. America first wanted
China as a counterweight to Soviet Russia, and then aggressively helped China to get rich, with
the expectation that self-government and democracy would follow in the Middle Kingdom. But the
times may be-a-changin’. China's population is aging at a rate and scale that is historically
unprecedented. That nation is expected to lose 400 million working age persons this century,
and the number is already falling. There is an 18 percent gender imbalance in the country's
population. China's birth rate never recovered from its multi-decade "one child" policy. India will likely pass China as the world's
most populous country this decade. China has just backed off publishing its economic growth goal for the first time in decades.
China is bullying South Korean and Japanese companies. Samsung and Sony are scaling back
their operations in the People's Republic. Apple is a laggard, but is planning to move 20 percent
of its Chinese supply-chain presence to India. India banned fifty-nine Chinese mobile apps after
the above-mentioned border clash. Huawei has found itself on the Trump administration's blacklist, and its loss of
Google on its smartphones has badly damaged sales outside China. The UK has announced plans for Huawei's complete removal
from British telecom infrastructure by 2023. China's export markets are flat on their backs, and furious at
China about the coronavirus. Europe is 16 percent of China's exports, and is an economic and financial wreck. America is
19 percent, the Trump attitude towards China is no secret, and those views are gaining steam with both parties on Capitol Hill.
Chinese exports to America are falling rapidly, down 17 percent from the summer 2018 to January 2020. This will be dramatically
accelerated by coronavirus. No one thought this economic decoupling could happen so fast. Chinese global direct foreign investment
is also down from $260 billion in 2017 to $125 billion in 2019. Why would China be easing up now on its Belt and Road Initiative?
On the military side, China's geographic circumstances can only be described as bottled up and
vulnerable. China is almost surrounded by countries that are unfriendly. It only has one ally,
North Korea, which is more of a client state. Japan, South Korea, Taiwan, Vietnam, Australia, and
India are all on the spectrum from cool to hostile towards China. Most are allies or friendly with
the United States. Many are armed with American high-tech weapons. To actually control the South
China Sea, China would need to control both Subic Bay and Cam Ranh Bay. China is zero percent of the way there. The
Philippines just reversed course on its threat to abrogate their Visiting Forces Agreement with
the United States. Every South China Sea nation's maritime territory is blatantly infringed upon by the PRC's "Nine Dash Line"
claims. This is not how to make friends. The America-India courtship is going well. China and Russia are temporary friends of
convenience. Russia is strapped for cash, and the quality and quantity of its defense exports are mostly sub-par. The Chinese defense
industry mainly builds Russian knock-offs. Russia is good at making fighters and air defense, but not much else. China's commercial
aircraft are immature, and are not worthy of being considered for aerial refueling platforms. There is not much north or west of
China, and it is not a good working alliance with Russia. Their power centers are a giant continent apart, and Russia's center of
gravity has always been in Europe. Russia has never been able to project power effectively in her far east. This is not a good long-
term symbiotic relationship. In
order to secure her trade, China must project power from the Yangtze
basin and Yellow Sea into the East China Sea. Waiting there are three potential adversaries with
sophisticated Western militaries: South Korea, Japan, and Taiwan. Four if you want to count the United
States. South Korea and Japan have great force generation land-based aircraft in Strike Eagles (F-15s) and Joint Strike Fighters (F-
35s). Taiwan is getting lethal F-16 Vipers with AESA radars. There are 28,000 American troops on South Korean soil. China
has
no nuclear-powered aircraft carriers, and its conventional carriers are clunkers. Chinese land-based
Flankers are much better than their carrier aircraft. The carrier-based J-15 Flanker, based on the Russian Su-33, is a bit of a joke.
They cannot take off with full ordnance and fuel. Chinese press openly criticizes the J-15. They are 70,000-plus pound fighters taking
off from a ski ramp with an obsolete carrier launch and retrieve system. The Chinese carriers are powered by old-fashioned oil-fired
boilers. Chinese bombers are based on old Soviet-designed Tu-16 Badgers. They are more accurately described as "medium"
bombers, as opposed to their "strategic" long-range American counterparts. Though recent models feature upgrades, these H-6
bombers are not ready to go toe-to-toe with Western air defenses or air superiority fighters. A
move to serious
American-type power projection would bankrupt a Chinese economy that is already in
trouble. The scary power projection incentive or scenario for China is just not there. The eastern side of the first
island chain is the wrong side for China. There China will find swarms of fourth and fifth
generation land and sea-based fighters. Taiwan has an impressive military buildup underway.
Taiwan has lots of airfields, rapid runway repair capabilities, and can maintain a high level of
tactical operations even after a bombardment. Even the west side of the first island chain is a nasty problem set for
China. Whether it is superior allied tactical aircraft or American attack submarines, it will have its hands full in a battle it might not
win. China needs markets in Europe, oil from the Persian Gulf, and resources in Africa.
All three of those are easily held
hostage by the U.S. Navy alone, not to mention the naval capabilities held by American friends
and allies along the Asian littoral. The Chinese economy would collapse within weeks without Persian Gulf oil.
AT: ai updown – 1nc
6. No AI “upsides/downsides” impact – card is not highlighted and doesn’t
explain how AI solves or why any of this escalates.
7. AI will never be smart enough to solve
Naudé 19 --- RWTH Aachen University, Aachen, Germany and IZA Institute of Labor Economics.
Wim, 5-28-2019, "Artificial intelligence: neither Utopian nor apocalyptic impacts soon," Taylor &
Francis, https://www.tandfonline.com/doi/full/10.1080/10438599.2020.1839173
A second point (which is related to the first) is that an AGI is remote, placing hopes and speculations about a super-intelligence and
Singularity in the realm of science fiction rather than of fact. The
core problem is that scientists cannot replicate
the human brain or human intelligence and consciousness because they do not fully
understand it (Meese 2018). Penrose (1989) has (controversially) argued that quantum physics may
be required to explain human consciousness. Koch (2012) provides a rigorous criticism from the
point of biology of those claiming the imminence of a singularity or super-intelligence, stating
that they do not appreciate the complexity of living systems. Dyson (2019) believes the future of computing
is analogue (the human nervous system operates in analogue) and not digital. Allen and Greaves (2011) describe a
‘complexity brake’ applying to the invention of a super-intelligence, which refers to the fact that
‘As we go deeper and deeper in our understanding of natural systems, we typically find that we
require more and more specialized knowledge […] although developments in AI might ultimately end up being the
route to the singularity, again the complexity brake slows our rate of progress, and pushes the singularity considerably into the
future’.

Even if the invention of a superintelligence may not be realized quite soon, or a Singularity be reached, an understanding of the
potential issues at stake, such as the quality of AI, its ethical basis and the dangers that any arms races for AI superiority may result
in, is necessary to inform current discussions around the regulation and governance of AI. Because of the Fallacy of the Giant
Cheesecake, pursuit of an AGI can lead to an AGI arms race. Certainly, if key players (governments and big corporations) believe that
through increased compute narrow AI will eventually lead to an AGI, then such arms races will spill over into the present arms races
in AI. These can lead to sub-optimal outcomes which, as Naudé and Dimitri (2020) shows, will need government regulation, taxation
and incentivizing to minimize.

5.3. More difficult than it seems

A third point for discussion is the extent to which narrow AI has been taken up and
implemented – and what it will take to realize the general-purpose technology potential of AI.
Section 3 illustrated that the requirements for businesses to utilize or implement AI are non-trivial. To be precise, current AI R&D
and business generation is difficult and expensive43 (Bergstein 2019; Farboodi et al. 2019). Most
AI research and
implementation is done by a minority of companies in a minority of countries: in fact, around 30
companies in three regions, North-America, the EU and China perform the vast bulk of research, patenting, as well as receives the
bulk (more than 90 percent) of venture capital funding for AI (WIPO 2019).

Smaller companies, which make up the vast majority of business firms, are at a disadvantage,
facing huge barriers to entry in the form of gaining access to and analyzing data (Farboodi et
al. 2019).44 There seems little prospect of excessive automation, which Acemoglu and Restrepo
(2018b, p. 3) speculate is a potential cause of the slowdown in productivity in the West .
While AI is a technique that is more difficult than it seems to develop and adopt, especially in small businesses, it is also not clear to
many businesses that AI can add value to their business. For example, during the 2020 COVID-19 pandemic, it was soon clear that
despite potential, AI yet had little to offer in practical terms to fight the pandemic, and moreover required large and relevant data-
sets that were not yet available (Naudé 2020).
Finally, developing AI models at scale will require solving a difficult problem, namely developing
‘green AI’. Machine Learning (ML) at the time of writing implied significant environmental cost.
Schwartz et al. (2019) determined for instance that ‘training a large, deep-learning model can
generate the same carbon footprint as the lifetime of five American cars, including gas’.
arbitration
AT: development – 1nc
1. No AI shift – development failing now is good for rich countries, which can
continue to exploit poor countries, so they have zero incentive to shift to
arbitration.
2. Development doesn’t solve – ODNI is an assessment of risks that doesn’t
say existential or connect development to anything in the first 30
paragraphs of the card.
3. No development instability – “causes terror and disease” is obviously
racialized.
Patrick ’11 (Stewart; James H. Binger senior fellow in global governance and director of the
International Institutions and Global Governance (IIGG) Program at the Council on Foreign
Relations (CFR), author of Weak Links: Fragile States, Global Threats, and International Security;
4/15/11; “Why failed states shouldn’t be our biggest national security fear”;
https://www.washingtonpost.com/opinions/why-failed-states-shouldnt-be-our-biggest-
national-security-fear/2011/04/11/AFqWmjkD_story.html; Washington Post; accessed 6/7/17)
In truth, while failed states may be worthy of America’s attention on humanitarian and
development grounds, most of them are irrelevant to U.S. national security. The risks they
pose are mainly to their own inhabitants. Sweeping claims to the contrary are not
only inaccurate but distracting and unhelpful, providing little guidance to policymakers seeking
to prioritize scarce attention and resources. In 2008, I collaborated with Brookings Institution
senior fellow Susan E. Rice, now President Obama’s permanent representative to the United
Nations, on an index of state weakness in developing countries. The study ranked all 141
developing nations on 20 indicators of state strength, such as the government’s ability to
provide basic services. More recently, I’ve examined whether these rankings reveal anything
about each nation’s role in major global threats: transnational terrorism, proliferation
of weapons of mass destruction, international crime and infectious disease. The findings are
startlingly clear. Only a handful of the world’s failed states pose security concerns to the United
States. Far greater dangers emerge from stronger developing countries that may suffer
from corruption and lack of government accountability but come nowhere near qualifying
as failed states. The link between failed states and transnational terrorism, for
instance, is tenuous. Al-Qaeda franchises are concentrated in South Asia, North Africa, the
Middle East and Southeast Asia but are markedly absent in most failed states, including in sub-
Saharan Africa. Why? From a terrorist’s perspective, the notion of finding haven in a failed state
is an oxymoron. Al-Qaeda discovered this in the 1990s when seeking a foothold in anarchic
Somalia. In intercepted cables, operatives bemoaned the insuperable difficulties of working
under chaos, given their need for security and for access to the global financial and
communications infrastructure. Al-Qaeda has generally found it easier to maneuver in corrupt
but functional states, such as Kenya, where sovereignty provides some protection from outside
interdiction. Pakistan and Yemen became sanctuaries for terrorism not only because they are
weak but because their governments lack the will to launch sustained counterterrorism
operations against militants whom they value for other purposes. Terrorists also need support
from local power brokers and populations. Along the Afghanistan-Pakistan border, al-Qaeda
finds succor in the Pashtun code of pashtunwali, which requires hospitality to strangers, and in
the severe brand of Sunni Islam practiced locally. Likewise in Yemen, al-Qaeda in the Arabian
Peninsula has found sympathetic tribal hosts who have long welcomed mujaheddin back from
jihadist struggles. Al-Qaeda has met less success in northern Africa’s Sahel region, where a
moderate, Sufi version of Islam dominates. But as the organization evolves from a centrally
directed network to a diffuse movement with autonomous cells in dozens of countries, it is as
likely to find haven in the banlieues of Paris or high-rises of Minneapolis as in remote Pakistani
valleys. What about failed states and weapons of mass destruction? Many U.S. analysts worry
that poorly governed countries will pursue nuclear, biological, chemical or radiological weapons;
be unable to control existing weapons; or decide to share WMD materials. These fears are
misplaced. With two notable exceptions — North Korea and Pakistan — the world’s weakest
states pose minimal proliferation risks, since they have limited stocks of fissile
or other WMD material and are unlikely to pursue them. Far more threatening are capable
countries (say, Iran and Syria) intent on pursuing WMD, corrupt nations (such as Russia) that
possess loosely secured nuclear arsenals and poorly policed nations (try Georgia) through which
proliferators can smuggle illicit materials or weapons. When it comes to crime, the story is more
complex. Failed states do dominate production of some narcotics: Afghanistan cultivates the
lion’s share of global opium, and war-torn Colombia rules coca production. The tiny African
failed state of Guinea-Bissau has become a transshipment point for cocaine bound for Europe.
(At one point, the contraband transiting through the country each month was equal to the
nation’s gross domestic product.) And Somalia, of course, has seen an explosion of maritime
piracy. Yet failed states have little or no connection with other categories
of transnational crime, from human trafficking to money laundering, intellectual property theft,
cyber-crime or counterfeiting of manufactured goods. Criminal
networks typically prefer operating in functional countries that provide baseline political order
as well as opportunities to corrupt authorities. They also accept higher risks to
work in nations straddling major commercial routes. Thus narco-trafficking has exploded in
Mexico, which has far stronger institutions than many developing nations but borders the
United States. South Africa presents its own advantages. It is a country where “the first and the
developing worlds exist side by side,” author Misha Glenny writes. “The first world provides
good roads, 728 airports . . . the largest cargo port in Africa, and an efficient banking system. . . .
The developing world accounts for the low tax revenue, overstretched social services, high levels
of corruption throughout the administration, and 7,600 kilometers of land and sea borders that
have more holes than a second-hand dartboard.” Weak and failing African states, such as Niger,
simply cannot compete. Nor do failed states pose the greatest threats of pandemic disease.
Over the past decade, outbreaks of SARS, avian influenza and swine flu have raised the specter
that fast-moving pandemics could kill tens of millions worldwide. Failed states, in this regard,
might seem easy incubators of deadly viruses. In fact, recent fast-onset pandemics have
bypassed most failed states, which are relatively isolated from the global trade and
transportation links needed to spread disease rapidly. Certainly, the world’s weakest states —
particularly in sub-Saharan Africa — suffer disproportionately from disease, with infection rates
higher than in the rest of the world. But their principal health challenges are endemic diseases
with local effects, such as malaria, measles and tuberculosis. While U.S. national security
officials and Hollywood screenwriters obsess over the gruesome Ebola and Marburg
viruses, outbreaks of these hemorrhagic fevers are rare and self-contained. I do not counsel
complacency. The world’s richest nations have a moral obligation to bolster health systems in
Africa, as the Obama administration is doing through its Global Health Initiative. And they have a
duty to ameliorate the challenges posed by HIV/AIDS, which continues to ravage many of the
world’s weakest states. But poor performance by developing countries in preventing, detecting
and responding to infectious disease is often shaped less by budgetary and infrastructure
constraints than by conscious decisions by unaccountable or unresponsive regimes.
Such deliberate inaction has occurred not only in the world’s weakest states but also in stronger
developing countries, even in promising democracies. The list is long. It includes Nigeria’s
feckless response to a 2003-05 polio epidemic, China’s lack of candor about the 2003 SARS
outbreak, Indonesia’s obstructionist attitude to addressing bird flu in 2008 and South Africa’s
denial for many years about the causes of HIV/AIDS. Unfortunately, misperceptions about the
dangers of failed states have transformed budgets and bureaucracies. U.S. intelligence agencies
are mapping the world’s “ungoverned spaces.” The Pentagon has turned its regional Combatant
Commands into platforms to head off state failure and address its spillover effects. The new
Quadrennial Diplomacy and Development Review completed by the State Department and the
U.S. Agency for International Development depicts fragile and conflict-riddled states as
epicenters of terrorism, proliferation, crime and disease. Yet such preoccupations reflect more
hype than analysis. U.S. national security officials would be better served — and would serve all
of us better — if they turned their strategic lens toward stronger developing countries, from
which transnational threats are more likely to emanate.

4. No Harari governance impact – presumes global rule-setting, which is
obviously thumped by isolationism, populism, and fraying institutions.
AT: indo-pak war – 1nc
5. No nuke escalation---stability/instability paradox caps escalation.
David Brewster 19. PhD; National Security College at the Australian National University;
Distinguished Research Fellow with the Australia India Institute. “India-Pakistan: Shadow
Dancing in the Himalayas.” Lowy Institute. 2/27/2019.
https://www.realclearworld.com/articles/2019/02/27/india-
pakistan_shadow_dancing_in_the_himalayas_112976.html

Nuclear war theorists tell us that competing nuclear-armed states inhabit what is called a
“stability/instability paradox”. The fear of mutually assured destruction can create a form of
stability at a strategic level (as we saw during the Cold War). But nuclear weapons can
simultaneously create instability by making lower levels of violence relatively safe, because
escalation up the nuclear ladder is perceived as too dangerous. In other words, by creating a
nuclear ceiling that both sides do not wish to breach, there is also space for conflict underneath
that ceiling. How large that space is will depend on the players involved. The India-Pakistan
relationship is a great example of this. Pakistan has been a master in pursuing asymmetric
strategies against India underneath the nuclear ceiling. This has included adopting a first-use
doctrine and the deployment of tactical nuclear weapons in an effort to blur nuclear red lines
(creating space underneath the ceiling). It has supported major terrorist attacks inside India,
including the 2001 attacks on the Indian Parliament and the 2008 attacks in Mumbai. It has of
course long supported terrorist attacks in Kashmir. In past years, the nuclear threat from
Pakistan has prevented New Delhi from responding forcefully to these actions – India’s failure to
undertake a military response to the 2008 Mumbai attacks being one example. India is
essentially a status quo power, whose first objective is often just to maintain the status quo. But
as Pakistan is learning, the stability/instability paradox works in both directions. In 2016, after
Pakistan-supported terrorists attacked an Indian Army base at Uri, Modi ordered a raid by Indian
special forces against an insurgent’s camp in Pakistan occupied Kashmir. The so-called “surgical
strike” was heralded as a major victory against terrorism. But while whole books have even been
written about it – and even a movie – the details remain somewhat hazy. For its part, Islamabad
claimed that the so-called “surgical strikes” never happened, and later invited foreigners to tour
the area to “prove” that nothing happened. Each side, wanting to believe its own version, went
away with honour served. We are seeing a similar dance now. These latest strikes allow the
Modi government to trumpet a major victory against Pakistan, apparently “pre-empting” further
imminent attacks against India. This time Delhi turned up the heat a little, striking near Balakot
in (undisputed) Pakistan territory rather than in Pakistan occupied Kashmir. And, perhaps
incidentally, Balakot is only around 60km from the city of Abbottabad, Osama bin Laden’s old
hangout. For its part, Pakistan has again claimed that the strikes never happened and that the
Indian planes were in fact forced by the Pakistan Air Force to jettison their bombs in uninhabited
mountains and flee. Again, Pakistan has offered to show foreigners around a place somewhere
near Balakot to show that nothing happened there. Nevertheless, Pakistan Prime Minister Imran
Khan met with Pakistan’s National Security Committee (which controls Pakistan’s nuclear
weapons) and then announced that Pakistan would respond to the (non) attack “at the time and
place of its choosing”. Whether Delhi and/or Islamabad feel the need to take further public
action remains to be seen. But both will seek to manage events. The stability/instability
paradox tells us that there may be room to move underneath the nuclear ceiling – sometimes
considerable room – but also that the nuclear ceiling is still definitely there.
2NC
2NC – AT Degrowth Bad
The alternative is not degrowth but community technical innovation. Their tech
will be used asymmetrically to dominate “non-productive” populations, which
encloses collective capacity. Only care approaches can redress racial property
regimes.
Shiloh KRUPAR Provost's Distinguished Associate Professor, Field Chair of the Culture & Politics
Program @ Georgetown Edmund A. Walsh School of Foreign Service ’20 “Planetary
Improvement: Cleantech Entrepreneurship and the Contradictions of Green Capitalism” AAG
Review of Books 8:2 p. 100-102

Goldstein situates cleantech’s so-called new green spirit of capitalism within a genealogy of “planetary
improvement.” The
term names how cleantech and other green entrepreneurs apprehend and enclose the world
through (racialized) claims of planetary dominion over what needs to be fixed or saved.
Planetary improvement evidences modernizing imperatives to develop and profit from a productive
landscape, as well as aspects of frontier ideology and (settler) colonialism’s ontological treatment
of nature as waste requiring improvement. Planetary improvement promises technological
salvation through asymmetries of power, sympathy, and concern: There are those who
adjudicate and have the right to comment, troubleshoot, judge, technologically intervene, and feel
compassion, whereas others are obliged to submit, to be saved (Asad 2015). We might call this heroic
enterprise white man’s burden 2.0: The burden of cleaning up and converting the dirty, industrial world (made by old unenlightened
energy-hog capitalism 1.0) for
the sake of humanity needing to be saved. This “undifferentiated humanity” is
itself a colonial construct (Lowe 2015).
Goldstein describes how such an imperial vision conflates U.S. interests with those of the planet, and, in the case of cleantech,
frames the world’s problems in terms of the scarcity of energy, even as the material energetic footprint of imperial subjectivity
remains unacknowledged. The settler spectacle of planetary improvement broadly informs ascendant green markets and
governance that treat the planet as a knowable and controllable object or system. This colonial-racialized framing of the world stems
from and feeds back into the dominant regime of property (Bhandar 2018). Goldstein conveys how “improvement”
encloses creativity as a property relation through techno-fetishism and possessive neoliberal
entrepreneurship (which we should situate within the racial capitalism of the tech economy more generally). Goldstein
contends that entrepreneurialism means submission to the community of money. The process
entails an enclosure of collective creative capacity. Entrepreneurs are creative workers; investors want to
transform them into employees and separate them from their products. This involves the conversion of the inventor to
entrepreneur to employee, and a parallel process of innovations becoming owned assets and intellectual property. Inventors
become “freed” from the means of production and willing to subject themselves to waged labor under capital’s control.

So how do we decolonize improvement and free creativity from the straitjacket of market–property relations? Goldstein
concludes the book in a speculative mode that is creative and pragmatic. He explores a more
radical politics of technology and innovation, pluripotent intellect, amateurism, and redirecting
creative labors toward “care-full approaches.” I have been thinking with the book in another direction:
connecting the entrepreneurial savior complex with the global financial system and fiscal entrepreneurialism. Building from
Goldstein’s analysis, we can structurally see the lie of liquidity and reliance of capital on the
manna of state budgets—whether tax base, debt base, bailouts, or all three (p. 152). Moreover, when
considering “just transitions,” it is clear that the choice to live and thrive with less energetic and material
consumption and therefore with less formal economic activity is very different than degrowth
imposed under conditions of austerity, such as in Puerto Rico, where the governor described the post–Hurricane
Maria territory as a “blank slate” for energy conversions (i.e., privatization), or the rightsizing, downsizing, shrinking, and greening
impositions on historically black U.S. cities.
In these cases, the logic of debt asserts that people cut off
from capital need to develop their own capacities and reform their backwardness (Byrd et al.
2018). Goldstein’s excellent book can be put in conversation with emergent South-to-South
studies and scholarship on racial capitalism to denaturalize epistemologies and policies of
improvement across routinized Global North–South power structures: connecting the neoliberal criminalization of “making do”
and predatory fines pushed onto public services in Flint, Michigan, or Birmingham, Alabama, to the heroicized figure of the “slum
entrepreneur” in the developmentalist framework of Global South megacities. This conjunction with Goldstein’s work makes plain
that decolonizing improvement requires “public provisioning”—a process that would differentiate
use from property or enclosure to prioritize social uses over property’s function as an
instrument of financial investment, and that would undo or redress racial regimes of ownership,
debt, and deservedness.
2NC – AT No Transition/Growth Good
Neoliberalism is hegemonic because it captures the common sense of what is
possible. Creating new sociotechnical blueprints of reclaiming capitalist
technology for social value gets buy-in.
Jimenez Gonzalez 21 – PhD in Sociology at University of Auckland (Aitor, The Silicon
Doctrine. Diss. ResearchSpace@ Auckland, 2021)
When they wrote those lines, they had in mind the Indignados and the Occupy movement. The authors were as captivated by the
energy of both phenomena as they were concerned by how ephemeral and inefficient it was to translate the energy of the masses
into any palpable political victory. In the authors’ opinion, the left needed to think bigger in a much more
ambitious way. The authors found the theoretical recipe to cure Folk-politics in Laclau’s notion
of left populism. Simplifying, Laclau’s left populism means disputing the hegemony of
neoliberalism over the definition of what is realistic, achievable common sense (Laclau, 2005). The
left, limited in its analysis and tactics, was unable to imagine alternatives to the neoliberal regime. Hence, the mission for
the left was to collectively recover the future, that is, the ability to imagine, plan and achieve
a postcapitalist society (Mason, 2016). For Srnicek and Williams the question is as clear as it is difficult. Capitalism has
organised our societies around work and exploitation; therefore, in order to build socialism, we shall be able to imagine a post-
scarcity society where most tasks are performed and automatised, and wealth is fairly distributed. To achieve that, the left
must be able to:

[D]evelop a sociotechnical hegemony: both in the sphere of ideas and ideology, and in the sphere of material
infrastructures. The objective of such a strategy, in a very broad sense, is to navigate the present
technical, economic, social, political and productive hegemony towards a new point of
equilibrium beyond the imposition of wage labour (Srnicek & Williams, 2015, p. 130).
But, as the authors pointed out, reaching this stage requires finding new points of leverage—that is, having the capacity to disrupt
the capitalist system, not only to defeat the monster but to establish a new and emancipatory socio-technical order. Despite Srnicek
and Williams’ techno-optimism, the final chapters of the book, the ones looking at solutions, are focused on the future of
emancipatory political organisation and not on the nuts and bolts of the hypothetical structure of the communism to come. While
not explicitly addressed, the book has all the ingredients of what I named as the new economic planning debate: How to build new
political organisations, the question of scale, the necessary discussion of the role of technology on emancipatory projects, and the
unbearable need to democratise global productive and logistical structures. The book successfully offered an answer for the big
political question: How do we build political organisations and a new common sense that allow us to build a socialist future?
However, it left unresolved an important part of the revolutionary equation: how can we build from scratch a global and counter-
hegemonic logistic apparatus?

Leigh Phillips and Michal Rozworski accepted the challenge and tried to give an answer to this question in a quite interesting way. In
their provocative People’s Republic of Walmart (2019), they reopened the question of socialist economic
planning not by hypothesising a new economic structure, but by analysing some of the biggest
capitalist behemoths of our days: Walmart and Amazon. For them, activists should consider the relevant
socialist calculation debates in the light of contemporary technical, logistical and even managerial developments. In their
opinion, corporations such as Walmart and Amazon are planning and allocating resources and
goods at a scale only dreamed by the most ardent advocates of the Soviet Union central
planning, without falling in the bureaucratisation and inefficiencies derived from the Gosplan
(Soviet State Planning Committee). In short, capitalism has demonstrated itself the advantages of planning
and cooperating. The problem is that big corporations are only applying economic planning
within their enormous economies. Hence, for them the question is not as much how to start
planning but, in fact, how to plan ‘The Good Anthropocene’. What do they mean by that? Phillips and
Rozworski’s conception of economic planning is heavily influenced by the Chilean socialist cybernetic experiment. Cybersyn (the
name of the project) was President Allende’s (1970/1973) attempt of building a cybernetic system for
a decentralised economic planning, linking factories, mines, bureaucrats and workers (Medina,
2011). That is, a sociotechnical infrastructure intended to regulate and adjust the needs, skills and
productive capacities of the country in order to satisfy the common good and not a bunch of
capitalist shareholders. Phillips and Rozworski consider that good planning is necessary not only to put
an end to the blatant structure of economic, gender and racial inequality but to survive as a
species. After all, the very idea of cybernetics is to study and build systems of control and regulation for entire ecosystems:

Counteracting climate change and planning the economy are projects of comparable ambition:
if we can manage the earth system, with all its variables and myriad processes, we can also
manage a global economy. Once the price signal is eliminated, we will have to consciously
perform the accounting that, under the market, is implicitly contained in prices. Planning will
have to account for the ecosystem services implicitly included in prices—as well as those that
the market ignores. Therefore, any democratic planning of the human economy is at the same
time a democratic planning of the earth system (Phillips & Rozworski, 2019, p. 241).

In a similar leftist cybernetic vein, we can find Evgeny Morozov’s ‘Digital Socialism’ (2019). While sharing much of the theoretical
grounds with Phillips and Rozworski, Morozov’s main concern has to do with finding alternatives—that is, socialist ways for
organising the allocation of goods, different to the price-based market or to the bureaucratised central planning. Acknowledging
Hayek, Morozov highlights the relevant role that data and information have in the way social coordination takes place, for instance,
in markets. But,
opposed to the Hayekian understanding of a cybernetic system governed by price
(the signal of millions of inputs and outputs), Morozov proposes a socialist feedback
infrastructure. That is, a sociotechnical system in which a decentralised network will be able to gather
and manage the information of a given system, matching capacities and needs:

One could imagine the use of digital feedback infrastructure to match ‘problem-finders’, who would
express their needs and problems, and react to those identified by others—either explicitly, by
voicing them or writing them up, or ‘automatically’, via machine learning—with ‘problem-
solvers’, equipped with cheap but powerful technologies and the skills to operate them. Once the two groups have
been ‘matched’ by the feedback infrastructure, the activity of the ‘problem-solvers’ can help to
render the implicit needs of ‘problem-finders’ tangible and explicit, adding to the pool of
solutions which can then be drawn upon by other ‘problem-finders’. Assuming this takes place outside the
commercial realm, there would be no barriers, such as patents, to impede the sharing of knowledge (Morozov, 2019, p. 56).

Morozov calls for the socialisation of the ‘means of feedback production’ (Morozov, 2019, p. 65), now in the hands of digital capitalism
corporations such as Amazon. The socialisation of the means of digital production will allow the masses
to establish new non-market solutions for allocating goods, non-competitive strategies to fuel
scientific progress, and automatised and decentralised ways of economic planning. Both, ‘Digital Socialism’
and People's Republic of Walmart, share the same interest in repurposing the current exploitative
sociotechnical apparatus for the socialist revolution, however, the authors of both works have
not said much about one relevant question: Is using the master’s tools a direct path to alienation? Brett Neilson has
delved inside this treacherous question naming this process as the ‘Reverse of Engineering’ (Neilson, 2020). Neilson flags how
problematic it could be to just uncritically take over capitalist technologies without acknowledging its exploitative nature.
For
him, ‘the reverse of engineering posits neither the liberation of labour in a planned economy
nor an ontological horizon of communization separated from challenges of organization. Rather,
it raises the challenge of warding off capital’s tendency to capture and incorporate its multiple
outsides’ (Neilson, 2020, p. 87).
1NR
1. Neither insurance nor veil-piercing solve – no accountability, retribution,
or appropriate legal frameworks for AI. Proven by the history of
corporate deregulation, which means this is a new link – the AFF presumes the liability
shield will “work itself out,” which is disproven by corporate extraction
and overconsumption.
Bryson ’17 [Joanna; September 2017; Professor of Ethics and Technology at the Hertie School
of Governance, degrees in psychology and artificial intelligence from the University of Edinburgh
(MSc and MPhil) and Massachusetts Institute of Technology (PhD); et al.; "Of, for, and by the
people: the legal lacuna of synthetic persons," Artificial Intelligence and the Law, Volume 25,
p.273-291]

The law has a way to address this kind of difficulty: It can look behind the artificial person and reach a
real one. Veil-piercing—i.e., going behind the legal form and helping or (more usually) sanctioning the real people behind the
form—is well-known in various legal systems (Huang 2012). A U.S.-Great Britain arbitral tribunal in the 1920s put the matter like this:

“When a situation legally so anomalous is presented, recourse must be had to generally recognized principles of justice and fair
dealing in order to determine the rights of the individual involved. The same considerations of equity that have repeatedly been
invoked by the courts where strict regard to the legal personality of a corporation would lead to inequitable results or to results
contrary to legal policy, may be invoked here. In such cases courts have not hesitated to look behind the legal person and consider
the human individuals who were the real beneficiaries.”15

The situation had been “anomalous” because the Cayuga tribe had legal personality as a corporate entity in New York State but not
under international law. That is, the law that the tribunal had power to apply did not recognize the tribe as an entity to which that
law could be applied. “[R]ecognized principles of justice and fair dealing” came to the rescue: The tribunal addressed the individuals
comprising the tribe to get around its inability to address the tribe.

Solutions like this are not available in every case. Lawmakers contemplating legal personhood
must consider the matter and provide for it. The arbitrators in the Cayuga case had an express invitation to apply
equitable principles, the jurisdictional instrument (a treaty) having stipulated equity to be part of the applicable law.16
Where equity or a similar principle is not part of the applicable law, a judge or arbitrator well
might not be able to “look behind the legal person .” In a situation like that, the “human individuals”
who were meant to answer for injury done remain out of the picture.
The Tin Council case provides an illustrative warning. The case involved the International Tin Council, a public international organization constituted by
a group of states (broadly an entity like the Bank for International Settlements). The states, using the Council, aimed to corner the world market for tin.
When the prospects for success looked solid, the Council contracted debts. But the price of tin collapsed, and the Council went insolvent. When the
creditors sought to sue and collect what they could on the debts, they found an empty shell and no procedural recourse. The Tin Council could not be
sued in English court, and it would have been useless to sue anyway. The Council’s creditors sought compensation from the member states, but this
was to no avail either: The creditors’ contractual relationship was with the Council, not with those who had called it into being. Apart from the
possibility of a diplomatic solution—i.e., the states agreeing ex gratia to replenish the Council or pay the creditors—the creditors had no
recourse.17

A difficulty in the Tin Council case was that the legal relations involved were novel, and so the court’s precedents offered no guide for effectuating the
creditor’s rights:

“None of the authorities cited by the appellants [the creditors] were of any assistance in construing the effect of the grant by Parliament of the legal
capacities of a body corporate to an international organization pursuant to a treaty obligation to confer legal personality on that
organization.”18

Nor did the creditors adduce “any alleged general principle” in the English law sources that would have allowed the court to pierce the veil and attach
liability to the states that had constituted the Council.19 As for international law, “[n]o plausible evidence was produced of the existence of
such a rule of international law” (i.e., a rule holding the constituents of the Council responsible for the Council’s debts).20 In short, unlike the
tribunal in the Cayuga claims, the House of Lords found no way to avert “inequitable results.” The unusual and novel character of the entity led the
court to a dead end.

Even when the law does explicitly provide for veil piercing, judges and arbitrators have tended
to apply it cautiously and as an exception. Easterbrook and Fischel (though defending the economic rationale for veil
piercing) memorably described veil piercing as happening “freakishly”; they likened it to “lightning... rare, severe, and unprincipled”
(Easterbrook and Fischel 1985).

The Tin Council case foreshadows the risk that electronic personality would shield some human
actors from accountability for violating the rights of other legal persons, particularly human or
corporate. Without some way around that shield, we would surely see robots designed to carry out
activities that carry high legal risk for human or corporate legal persons. Though this might benefit the
humans behind the robots, it would come at the expense of human legal interests more generally.

Robots as themselves unaccountable rights violators

Even if the legal system sensibly provided mechanisms for veil piercing in the case of robot legal
persons, that solution could only go so far. By design, collective legal persons like corporations and
international organisations have legal persons behind them, who might stand to answer for violations of the rights of
human legal persons. Advanced robots would not necessarily have further legal persons to instruct or control them. That is to say,
there may be no human actor directing the robot after inception. The principal-agent model
that veil piercing rests upon would then be hard to apply.

Autonomous or semi-autonomous robots interacting with humans will inevitably infringe the legal
rights of humans. Giving robots legal rights without counter-balancing legal obligations would
only make matters worse. In the conflict between robot and human legal rights, only the
latter would be answerable to the former; humans would have no legal recourse. This would not
necessarily be a problem, if

1. the other problems of legal personality—like standing and availability of dispute settlement procedures—were solved; and

2. the electronic legal person were solvent or otherwise answerable for rights violations.

But it is unclear how to operationalize either of these two steps.

In the case of corporate legal persons, humans composing the corporation can manage dispute settlement on behalf of the
corporation in which they have an interest. But what we are imagining here is a robot legal person, untethered from an interested
human principal. Who will represent the robot in the dispute? With the right AI, the robot might be able to represent itself. But we
may encounter this problem well before AI capable of effective court advocacy is developed. Conceivably, the robot could hire its
own legal counsel, but this brings us to the second step: robot solvency.

It is unclear what it would mean for a robot to hold assets, or how it would acquire them. It is
possible that the law could contemplate mechanisms for robots to own property or hold accounts, as it does for
corporate legal people. The law could also require the creators of robots to place initial funds in these accounts. But money
can flow out of accounts just as easily as it can flow in; once the account is depleted, the robot
would effectively be unanswerable for violating human legal rights. When insolvent human legal
persons violate others’ legal rights, other tools are available to hold them to account—anything from apology to
jail time. In the case of robots, these options are unavailable, unsatisfying, and/or ineffective.
1. It’s offense. Promoting “democratic tech” to “beat the evil guys” relies
on racist nationalism and precludes diagnosis of global capitalism, which
is the root cause of their escalation impacts!
Liu 20 – historian of modern China (Anthony, "We Need to Think About Xinjiang in
Internationalist Terms," Nation, https://www.thenation.com/article/world/xinjiang-uigher-
camps/ 10-28-2020)// gcd
I suspect many foreign observers are uncertain how to translate these headlines into political terms. On the one hand, the details of
the camps are horrifying. By now, what is occurring seems undeniable, with the basic details largely corroborated by the Chinese
state itself.
On the other, these facts often get slotted into a narrative that pits a freedom-
defending United States against a nefarious Chinese state, a story that plays into the hands of
odious right-wing US politicians and militaristic China hawks. Missouri Senator Josh Hawley, for instance,
capitalized on the Mulan debacle by declaring that Disney’s “decision to put profit over principle, to not just ignore the CCP’s
genocide and other atrocities but to aid and abet them, [was] an affront to American values.”

Debates over Xinjiang will only intensify, and I believe internationalist thinkers need to offer an
alternative to reductive, pro-US stances such as Hawley’s. The current dynamic is fostering
extreme nationalist responses: either an anti-China fearmongering that is at best self-serving for
politicians and at worst a pretext for violent confrontation, or a pro-China denialism of the
Xinjiang camps, which has seduced some leftists on nominally anti-imperialist grounds. On October 10, for example, the
socialist magazine Monthly Review republished an egregious revisionist defense of China’s policies in the region.

Thus far, most discussions surrounding the Xinjiang camps have defaulted to one of two
explanations: Either they are the result of a timeless ethnic conflict between Han and non-Han
Chinese people, glossed by conservative pundits as “Han supremacy,” or they are attributed to
the features of an Asian and communist despotism that is juxtaposed against a free and
capitalist Western world.

Though plausible at a glance, such civilizational explanations are too static and lack historical analysis. Darren Byler, an
anthropologist researching northwest China, has written that an unnuanced charge of “ethnic genocide simply allows [one] to argue
in a culturalist mode that one or more groups of people are bad or evil and dominating another group. It does not allow [one] to
explain why.”

The “why” has much to do with political-economic developments set into motion during the
1990s, when the Chinese government encouraged domestic companies to develop infrastructure in Xinjiang
and tap the region’s oil and natural gas fields in order to supply energy to cities along the coast. During that time, millions
of ethnic Chinese people moved to Xinjiang, soaking up the region’s economic gains and
sparking anti-colonial protests by Uighur locals. Though there had been prior tensions between
the Han and Uighur groups, the development projects raised them to new levels.

The government’s response to dissent has been to try to assimilate Uighur and other minority groups into a kind of “mainstream”
ethnic Chinese society, prioritizing language, religious, and cultural education. Immediately after the 9/11 attacks, the Chinese state
explicitly repurposed the US’s own War on Terror rhetoric to demonize Islamic religious practices, as documented by University of
Sydney historian David Brophy. The turning point was an explosion in an Urumqi train station in May 2014, after which Chinese
Communist Party officials declared a “People’s War on Terror.”

For Byler, the re-education camps are inseparable from a corporate and government-led drive to
capitalize on Xinjiang’s resources and people. The region supplies about 20 percent of the nation’s oil and gas and
about 20 percent of the world’s tomatoes and cotton. State projects have included experiments in policing
and cybersecurity technology, which Chinese companies are already exporting abroad; securing
the region for Belt and Road Initiative transportation infrastructure projects into
Central Asia; and, it was revealed
recently, coercively moving Uighur workers to factories in both Xinjiang and in large eastern cities such as
Hefei, Zhengzhou, and Qingdao, manufacturing for brands such as Nike, Apple, Gap, and Samsung.

It is therefore crucial to recognize that the camps are not the inevitable result of deep-seated ethnic
conflict or Asian autocracy but are linked to changes in Chinese and global capitalism. They
were made possible by processes dating back to the 1980s, when the Chinese government pivoted to
market-driven growth and advertised its natural and human resources to foreign investors at
cheap rates.
Export-led industrialization in China has meant profits for foreign companies, savings for foreign consumers, and cheaper credit for
foreign borrowers. It has also meant periodic revelations over terrible labor conditions in China, such as the 1990s campaigns
against clothing sweatshops, the 2010s uproar over worker suicides at Foxconn’s Shenzhen factory, and now reports about Uighur
labor. Such scandals never seem to get resolved, only quietly displaced when the next one arises.

The ultimate agency, of course, lies with Chinese companies and institutions. But it is also impossible to understand why these
problems are so endemic without looking at global economic dynamics.

For all the talk by US politicians of promoting human rights and decoupling from China, they know that US companies profit from this race-to-the-bottom globalization and that separating the US economy from China’s will not happen
anytime soon. From this perspective, the re-education camps implicate more than just the Chinese state.
They have grown out of global capital and commodity flows and the attendant institutions designed to
protect them.

This is why I am concerned that the most readily available framework for discussing the Xinjiang camps is
a nationalistic one that pits US and Chinese values against each other. Its cheerleaders are conservative
US politicians, such as Hawley, Ted Cruz, and Marco Rubio, eager to stoke nativist sentiments when expedient but unwilling
to seriously look at the forces undergirding China’s policies. Last year, President Donald Trump ignored calls to
sanction China over the mistreatment of Uighur people because they would interfere with a
trade deal with Beijing. And though his administration has begun to denounce abuses in Xinjiang, this seems mostly
a negotiating tactic to get concessions from China and an election strategy to deflect blame from
mishandling the Covid-19 pandemic.

The most likely outcome of a nationalist rivalry between the US and Chinese governments is not a
principled commitment by the United States to improving the lives of those in Asia. It is a tit-for-
tat competition to punish innocent people as a form of political leverage, evidenced by the Trump
administration’s visa policies targeting Chinese students and workers or the Chinese government’s passage this June of a National
Security Law for Hong Kong and its expulsion and detention of foreign journalists.

So then how do we proceed? There are signs that international pressure pushed the Chinese state to at least declare the closure of
some camps (though the reality is unclear). Boycotting products linked to Uighur detention, whether Mulan or Apple accessories
or H&M jeans, may send a message in the short term.

But in the long term, we need to learn to talk about the re-education camps and labor
conditions in China in a more expansive way. This means moving beyond nationalist and
humanitarian explanations compatible with Cold War–style political theater. Instead, analysts should foreground the
historical and global economic forces that helped give rise to the camps.
American observers should resist framing the camps as something alien to US society, a move that only legitimizes the United States
as a global policeman. Instead, we need to connect the dots. We
should oppose the Chinese state’s
Islamophobia just as we opposed the bloody and oppressive policies of George W. Bush’s War on Terror that
inspired it. We should condemn Uighur forced labor in our global supply chains, just as we condemn the
exploitation of the guest and prison labor in the same webs of cheap commodity production. And we ought to
denounce the indiscriminate use of state power in China’s peripheries, because we denounce
the violence of the Border Patrol and police forces at the edges of our own society.

8. AI will never be smart enough to solve
Naudé 19 --- RWTH Aachen University, Aachen, Germany and IZA Institute of Labor Economics.
Wim, 5-28-2019, "Artificial intelligence: neither Utopian nor apocalyptic impacts soon," Taylor &
Francis, https://www.tandfonline.com/doi/full/10.1080/10438599.2020.1839173
A second point (which is related to the first) is that an AGI is remote, placing hopes and speculations about a super-intelligence and
Singularity in the realm of science fiction rather than of fact. The
core problem is that scientists cannot replicate
the human brain or human intelligence and consciousness because they do not fully
understand it (Meese 2018). Penrose (1989) has (controversially) argued that quantum physics may
be required to explain human consciousness. Koch (2012) provides a rigorous criticism from the
point of biology of those claiming the imminence of a singularity or super-intelligence, stating
that they do not appreciate the complexity of living systems. Dyson (2019) believes the future of computing
is analogue (the human nervous system operates in analogue) and not digital. Allen and Greaves (2011) describe a
‘complexity brake’ applying to the invention of a super-intelligence, which refers to the fact that
‘As we go deeper and deeper in our understanding of natural systems, we typically find that we
require more and more specialized knowledge […] although developments in AI might ultimately end up being the
route to the singularity, again the complexity brake slows our rate of progress, and pushes the singularity considerably into the
future’.

Even if the invention of a superintelligence may not be realized quite soon, or a Singularity be reached, an understanding of the
potential issues at stake, such as the quality of AI, its ethical basis and the dangers that any arms races for AI superiority may result
in, is necessary to inform current discussions around the regulation and governance of AI. Because of the Fallacy of the Giant
Cheesecake, pursuit of an AGI can lead to an AGI arms race. Certainly, if key players (governments and big corporations) believe that
through increased compute narrow AI will eventually lead to an AGI, then such arms races will spill over into the present arms races
in AI. These can lead to sub-optimal outcomes which, as Naudé and Dimitri (2020) show, will need government regulation, taxation
and incentivizing to minimize.

5.3. More difficult than it seems

A third point for discussion is the extent to which narrow AI has been taken up and
implemented – and what it will take to realize the general-purpose technology potential of AI.
Section 3 illustrated that the requirements for businesses to utilize or implement AI are non-trivial. To be precise, current AI R&D
and business generation is difficult and expensive43 (Bergstein 2019; Farboodi et al. 2019). Most
AI research and
implementation is done by a minority of companies in a minority of countries: in fact, around 30
companies in three regions, North America, the EU and China, perform the vast bulk of research and patenting, and receive the
bulk (more than 90 percent) of venture capital funding for AI (WIPO 2019).

Smaller companies, which make up the vast majority of business firms, are at a disadvantage,
facing huge barriers to entry in the form of gaining access to and analyzing data (Farboodi et
al. 2019).44 There seems little prospect of excessive automation, which Acemoglu and Restrepo
(2018b, p. 3) speculate is a potential cause of the slowdown in productivity in the West.
While AI is a technique that is more difficult than it seems to develop and adopt, especially in small businesses, it is also not clear to
many businesses that AI can add value to their business. For example, during the 2020 COVID-19 pandemic, it was soon clear that
despite potential, AI as yet had little to offer in practical terms to fight the pandemic, and moreover required large and relevant data-
sets that were not yet available (Naudé 2020).
Finally, developing AI models at scale will require solving a difficult problem, namely developing
‘green AI’. Machine Learning (ML) at the time of writing implied significant environmental cost.
Schwartz et al. (2019) determined for instance that ‘training a large, deep-learning model can
generate the same carbon footprint as the lifetime of five American cars, including gas’.

1. Their terrorism impact is literally a justification for colorline. It uses


“anarchy” to naturalize perpetual ongoing fear of the other.
Jackson 15 – Richard, Director of the National Centre for Peace and Conflict Studies, the
University of Otago; Former Professor of International Politics at Aberystwyth University, 2015
(“The Epistemological Crisis of Counterterrorism,” Critical Studies on Terrorism (Vol. 8, No. 1,
33–54) Available to Subscribing Institutions via Tandfonline)

The evolution of an epistemological crisis

How did this epistemological crisis – the condition where lack of knowledge is the main thing
we know about the terrorist threat – arise? By what steps did we reach the present juncture in
which the known and unknown mutually reinforce each other and constitute a crisis of
counterterrorism knowledge? Lisa Stampnitzky (2013) argues that it is inherent to the
counterterrorism field as a result of the particular way the field developed after separating from
counterinsurgency studies in the 1970s. She argues that a “politics of anti-knowledge” (2013,
20) developed which rejected any rational explanation for terrorism.

This was the result of a decision to characterise terrorism largely in moral terms, in contrast to
other morally neutral, rational forms of political violence, and as a way of drawing a boundary
between terrorism and insurgency. That is, “if terrorists are evil and irrational, then one cannot
– and, indeed should not – know them” (Stampnitzky 2013, 189; emphasis added). As a
consequence, the counterterrorism field has thus been precluded from knowing who the
terrorists are and why they might attack and can only focus on imagined future scenarios when
they most certainly will attack. Zulaika and Douglass (1996) take a broader but related view,
arguing that the crisis of knowledge in counterterrorism reflects a wider cultural imaginary or
“mythography of terrorism”, which has grown up over a long period. In this cultural imaginary,
terrorism represents a modern taboo requiring constant ritualistic moral condemnation, even
from scholars seeking to understand it, and in which there is a prohibition on contact with or
intimate knowledge of the terrorist. Later, Zulaika specifies the process more clearly, arguing
that the crisis in counterterrorism knowledge is the result of “a faulty epistemology” caused by
“the placement of the entire phenomenon in a context of taboo and the willful ignorance of the
political subjectivities of the terrorists” (2012, 52). In any event, as a consequence of the
operation of the taboo, terrorism has long been an arena in which the boundary between the
factual and non-factual is blurred – “where fact and fiction were sometimes difficult (if not
impossible) to distinguish” (Frank 2014, 150).

Frank goes on to suggest that the crisis of knowledge is due to the future-oriented, “what if?”
nature of the act of terrorism itself. As he describes it:

For what characterizes terrorism is less the single act of violence than it is the fact that
this act is perceived to be the beginning, or part, of a (potential) series, and that further
acts are expected to occur. To achieve its defining effect – collective fear of more
violence to come – terrorism relies on the belief that the next attack is impending, and
that it could happen anywhere, anytime. In this sense, the terror caused by terrorism is
a halfway house between the real (actual attacks and their tangible aftermath) and the
imaginary (possible future assaults). This gives terror a fantastical dimension, a fact
reinforced by the perception of the perpetrators as being both invisible and in our very
midst, omnipresent in public discourse but still elusive in person. (Frank 2014, 9)

In other words, anticipatory fear is inherent to the act of terrorism which terrifies through its
unknown future threat. For the counterterrorist, therefore, “the anticipatory nature of terror
plays a significant role in the terrorist imaginary… Terrorism incites possibilistic or worst-case
thinking… in political and scientific discourse” (Frank 2014, 49). Moreover, this kind of
clandestine, anticipatory violence is inherently mimetic; it becomes mirrored in the
counterterrorist obsession with secrecy and counter-imagination, which is a central
characteristic of the epistemological crisis (Zulaika 2012).

In addition to these three factors – the nature of terrorism itself, the way the terrorism field
developed and the broader culture or mythography of terrorism based on the operation of
taboo – I want to suggest that there are a number of more specific steps or epistemic
developments which have also contributed to the current manifestation of the epistemological
crisis of counterterrorism. The first is the effect of the 2001 terrorist attacks, which in many
ways, amplified the existing tendencies and characteristics within the counterterrorism field,
leading to a permanent rupture in the remaining epistemological certainties. In particular, the
hyper-real spectacle of 9/11 tore down any remaining epistemic barriers to the seemingly
infinite imagination of terror, thereby appearing to render previous knowledge obsolete and the
future governed solely by uncertainty. In other words, as Hellmich and Behnke (2012, 2–3) note,
the 9/11 terrorist attacks by al Qaeda were constituted as an “event” in the Derridean sense
that exceeded existing cognitive and discursive frameworks, thereby creating a shocking and
immediate “void of meaning” (see Campbell 2002; Der Derian 2002). In particular, the difficulty
of interpreting the attacks through traditional Western concepts of politics and political action
enveloped the perpetrators – “terrorists” – in a web of uncertainty. Consequently, once the
terrorist event itself was understood and accepted as existing beyond meaning, then the
“terrorist” perpetrators similarly became wrapped in the same web of unknowing in terms of
their motives, targets and the potential future threat they posed. In effect, constructing 9/11 in
this way functioned epistemologically to sever the connections to previous understandings of
terrorists and terrorism. As a number of prominent terrorism experts argued, the 9/11 attacks
wiped the slate clean and rendered previous terrorism knowledge obsolete (see Hoffman 2006).

In a second important development, which was directly related to the epistemologically rupturing effects of 9/11, terrorism as a form of political violence had already been subject to a
steady discursive reconstruction by terrorism scholars and experts since the early 1990s. The
so-called “new terrorism” thesis argued that contemporary terrorism could not be understood
through the prism of previous research and epistemic frameworks, because unlike the “old
terrorism” the “new terrorism” was motivated by religious extremism rather than politics, was
unconstrained in its targeting of civilians and thus willing to employ weapons of mass
destruction, and was organised in non-hierarchical, fluid networks (Duyvesteyn and Malkki
2012; Crenshaw 2009).

This meant that we could no longer be sure that terrorists would behave according to previously
predictable patterns, or follow previously identified pathways. It meant previous data and
analysis relating to terrorism was largely obsolete and we could no longer be certain where,
when, why or how terrorists might strike, or what kind of threat they really posed (except that it
was potentially unlimited). This was a nascent epistemological crisis insomuch as it only applied
to the “new” type of groups; the “old” terrorist groups, such as the IRA, ETA and the like, were
considered to be epistemologically predictable and knowable – at least in retrospect.3 At the
same time, popular cultural production of depictions of “new terrorists” attacking with nuclear
bombs, chemical weapons and biological agents in an effort to cause maximum civilian
casualties proliferated across the media. Thus, severed from previous forms of knowledge and
evidence and fed by fantastical media images, the “new terrorists” became a new nightmare
spectre haunting the Western imagination (in many ways, replacing the spectre of communist-
inspired nuclear war).

A third step or development relates to broader developments in society. As the sociologist Ulrich Beck (1992, 1996) and others have demonstrated, over the past few decades, the social
paradigm of traditional risk analysis has been replaced by a virulent form of precautionary
thinking, and public officials have come to be preoccupied with the possible over the probable
(Daase and Kessler 2007; Aradau and Van Munster 2007; Ewald 2002). That is, they have come
to prioritise and be concerned about the potential terrible consequences of imagined risks,
rather than the very low statistical probability of those potential risks actually materialising. For
public officials, what could happen in future acts of terrorism now assumes greater significance
than what has happened over the past centuries of terrorist violence, or what might actually
happen in a calculable, probabilistic sense.

At the same time, officials have come to believe that society expects them to adopt a zero-risk,
hyper-cautious approach to public safety: no level of risk, even a 1% risk, can now be tolerated
(Furedi 2002; Zulaika 2012; Ewald 2002). This precautionary dogmatism is important because it
places a burden of action on the security official: if a scenario can be imagined, even if it is only a
1% risk, then a moral imperative exists to try and prevent it. Tony Blair, the former UK prime
minister, expressed it this way: Sit in my seat. Here is the intelligence. Here is the advice. Do you
ignore it? […] On each occasion, the most careful judgment has to be made taking into account
everything we know and advice available. But in making that judgment, would you prefer us to
act, even if it turns out to be wrong? Or not to act and hope it’s ok? And suppose we don’t act
and the intelligence turns out to be right, how forgiving will people be? (Cited in Aradau and Van
Munster 2007, 105)

At one level, Blair is suggesting that public sentiment is now also caught up in the
epistemological crisis. Officials are expected to act against future terrorist threats, even in the
face of real uncertainty or “if it turns out to be wrong”.

A fourth development relates specifically to the counterterrorist field and the role played by key
figures in the Bush administration after 9/11. As Donald Rumsfeld’s famous quotation aptly
illustrates, senior security officials acting as “security entrepreneurs” have come to focus almost
exclusively on the “unknown” element of terrorism, particularly the “unknown unknowns” –
those things we do not even know we do not know about the terrorism threat. As well as a
logical step along the pathway forged by the previous “new terrorism” narrative, the rupturing
event of 9/11 and the developing risk society, it nonetheless represents a key moment of
decision by the counterterrorist industry.

The epistemological consequence of actively and deliberately embracing ontological uncertainty as a fundamental condition of terrorism knowledge was the immediate severing of
links to previous empirical evidence, analytical frameworks and knowledge and the remaking of
terrorism as an unlimited, infinite risk. In practical terms, it meant that the unknown, and the
unperceived, became potential indicators of imminent terrorist violence; or, as Donald Rumsfeld
put it: “Absence of evidence is not evidence of absence” (cited in Daase and Kessler 2007, 428).
In other words, if terrorism is now defined primarily by what is unknown, then there is no
reliable empirical evidence or data from the past which can help us to “know” terrorism in the
present. Among other things, this meant that there was no basis or imperative for officials to
conduct empirical evaluation or cost-benefit analysis of current counterterrorism measures (see
Mueller and Stewart 2011, 2012). After all, what could such an investigation tell us about an
unknown and ultimately unpredictable danger? More importantly, if terrorism is conceptualised
in terms of a lack of knowledge, then its threat becomes limitless because there are no
epistemological limits to what is unknown (see Daase and Kessler 2007, 419).

Fifth, as I have argued elsewhere (Jackson 2012b), a process of “knowledge subjugation” in regards to terrorism has also taken place over the past decade. This is the condition of what in
Donald Rumsfeld’s formulation might be called “unknown knowns”, or those things we “know”
but which we do not wish to “know”. As Daase and Kessler (2007, 412) imagine another verse of
Donald Rumsfeld’s poem:

Finally, there are unknown knowns

The knowns

We don’t want to know.

This has been described as “the politics of anti-knowledge”, or the “active refusal of explanation
itself” (Stampnitzky 2013; citing Ferguson 1994). Zulaika and Douglass (1996) argue that the
moral status of terrorists as evil and irrational proscribes intimate knowledge of them, making
the search for real knowledge about terrorists taboo. Certainly, as a great many analysts have
noted, with only a handful of notable exceptions, little effort has been made by terrorism
experts and officials to try and understand terrorist motivations by listening to their own words
and messages, and seriously engaging with their subjectivity.4 Zulaika describes this as the
counterterrorist’s “passion for ignorance”, particularly in relation to “basic knowledge regarding
the languages or cultures of the peoples they are engaged with”, much less “their political goals
or motivation” (2012, 54).

This is especially noticeable in the case of al Qaeda, where the voice of Osama bin Laden,
despite a vast corpus of open letters, interviews, propaganda videos and statements, has
remained largely unheard among Western audiences (Hellmich and Behnke 2012, 3).5 However,
there are many other examples of knowledge subjugation in counterterrorism. For example, as
I have noted elsewhere (Jackson 2012b), it was “known” within the United States’ military and
political intellectual establishment that foreign military intervention was directly linked to anti-
American terrorism (Eland 1998; see also Du Bois and Buts 2014). After 9/11, it was well
“known” by many actors, including the military and intelligence establishments as well as
academics, that the invasion of Iraq would provoke more anti-Western terrorism. It has long
been similarly “known” that the torture and abuse of terrorist suspects, especially at
Guantanamo Bay, would, far from helping to prevent more terrorism, provoke further attacks by
terrorists. The beheadings of Western journalists dressed in Guantanamo Bay orange jumpsuits
by ISIS in September 2014 would appear to confirm this. More recently, reports and testimony
on the drone killing programme in Afghanistan and Pakistan clearly demonstrates that
counterterrorism officials “know” that drone killings enrages local populations, amplifies anti-
Americanism and provokes attacks on Coalition forces (Zulaika 2012, 55). More broadly, it is
“known”, at least in academic circles accessible by counterterrorist officials, that the violent
suppression of terrorism is less effective than conciliation-oriented approaches such as direct
dialogue (English 2009; Araj 2008).

This condition of willful ignorance is achieved by forgetting, suppressing or repressing evidence, knowledge and perspectives which challenge accepted ideas (or in this case,
accepted lack of knowledge or anti-knowledge). Consequently, counterterrorist officials, the
media and many terrorism experts frequently assert that we simply do not know why terrorists
attack or what causes their actions: their motives are inexplicable and unknown to us, despite
the existence of terrorists’ own explanations for their actions (see Bin Laden 2002; Lawrence
2005). Moreover, because they represent a dangerous taboo, they cannot and should not be
known: for example, “the Western audience has largely been shielded from the voice of bin
Laden, almost as if hearing him unedited posed a threat to the national wellbeing” (Hellmich
and Behnke 2012, 3). For counterterrorists and terrorism experts, this has meant that “Rather
than rely upon the creation of knowledge about terrorism, the dominant approach has rejected
the very possibility of knowing terrorists” (Stampnitzky 2013).

Finally, at the same time as terrorism has been constructed as unknowable and unpredictable,
and officials have become preoccupied with the possible over the probable, they have also
embraced the impossibility of ever completely securing society against terrorism. This is evident
in the Prepare strand of the United Kingdom’s CONTEST counterterrorism strategy, for example.
The Home Office (UK Home Office n.d.) states: “Prepare is the workstream of the
counterterrorism strategy that aims to mitigate the impact of a terrorist incident where it
cannot be stopped.” In other words, official UK counterterrorism policy – and that of most other
Western states – is based on the assumption that terrorist attacks will inevitably occur
regardless of any of the measures currently undertaken to prevent such an outcome: they
“cannot be stopped”. This assumption – that no matter what they do to deter or protect, or
what they ultimately know about terrorism, they will never be able to prevent at least some
future terrorist attacks – has the paradoxical effect of rendering any (little) knowledge we hold
about terrorism impotent. In the end, the only thing we know for sure is that more terrorist
attacks are inevitable, and no amount of knowledge or action can prevent them all from
occurring.
It is easy to see how in combination, and on the foundation of broader processes related to the
nature of terrorism and the evolution of the terrorism studies field and its cultural context of
possibility, these steps or developments have led to a profound crisis of knowledge about
terrorism, which in turn results in an endless but fruitless search for practical, actionable
knowledge about terrorism. That is, faced with a profound lack of knowledge and a seemingly
intractable condition of uncertainty, the counterterrorist is forced to employ imagination as the
primary tool to detect, prevent and deter terrorist attacks before they occur; and employing
imagination rather than empirical evidence, data and scholarly analysis, inevitably ends in
fantasy thinking and self-defeating policies. In other words, bizarre, wasteful,
counterproductive and ever-expanding and intrusive counterterrorism practices are not
exceptions; they are the inevitable fruits of the self-imposed epistemological crisis. As Zulaika
(2012, 58) puts it:

Once the situation is defined as one of inevitable terrorism and endless waiting, what
could happen weighs as much as what is actually the case; once a threat, whose
intention or possibility is unknown to us, is taken seriously, its reality requires that we
must act on it.

1. You’ll be shocked to learn that arbitration is abused by the West as a way to gut developing state tax bases.
Martin Hearson & Todd N. Tucker 21, Hearson is Research Fellow at the Institute of
Development Studies, and a Research Director of the International Centre for Tax and
Development; Tucker is a political scientist and Director of Industrial Policy and Trade at the
Roosevelt Institute, “‘An Unacceptable Surrender of Fiscal Sovereignty’: The Neoliberal Turn to
International Tax Arbitration,” Perspectives on Politics, Cambridge University Press, 05/20/2021,
pp. 1–16

In Brussels, the European Commission had identified the double taxation problem in its “Action
Programme for Taxation,” which sought to promote “tax conditions which would enable the highest
possible degree of liberalization in the movement of persons, goods, services and capital and of interpenetration of economies”
(European Commission 1975, 2). In 1976 it unsuccessfully proposed a Directive obliging states to submit outstanding cases of double
taxation to be resolved through binding arbitration. The Arbitration Directive did not gain support from member states, because
enforcement would have ultimately been placed under the jurisdiction of the European Court of Justice (ECJ). A tax lawyer who had
followed the EU debates at the time explained the difficulty thus:

The problem with arbitration is you give jurisdiction to decide the tax base away. Some states
are extremely squeamish about this. In its heart, in its DNA, you do give authority to decide the
assessment away. Some jurisdictions were dead against giving authority to the ECJ. (Interview 9)

It would be well over a decade before the Directive eventually reemerged as a Convention, which states were willing to accept
because it fell outside the purview of the ECJ. According to a European Commission official, “the legal form was a political decision
made by Member States, which has been based on the collective hesitation to surrender a significant part of their fiscal sovereignty
[to the ECJ]” (Schelpe 1995).

The Convention left intact so much sovereign discretion that it was not a success. The lack of procedural obligations created
opportunities for obstruction, for example by imposing vexatious penalties on companies that could be waived in return for not
triggering arbitral proceedings. “Actually, it wasn’t working at all, because the countries wouldn’t let you go
in,” according to a tax practitioner (Interview 9). As a result, arbitrated disputes within the EU were small in
number: tacit knowledge among interviewees and in the industry press suggests as few as four disputes in total by the mid-2010s
(Sharon 2012). The initial experiences with arbitrations were slow, messy, and expensive (European Commission
2003; Moses 2010). By the end of 2013, 432 active MAP cases in the EU had passed the two-year time limit at
which arbitration proceedings should have been triggered, but only one was in arbitration (European Commission 2014).

Across the Atlantic, U.S. Treasury officials also paid lip service to arbitration while retaining full
autonomy. In 1990 they committed to “limited and controlled” arbitration of disputes with Germany, an entirely voluntary
approach that “in no way impinges upon the sovereignty of either contracting state” (U.S. Congress 1990, 20). Once triggered, all
parties would have to comply with the decision, but both states had to agree to initiate the arbitration in advance, as did the
taxpayer. Furthermore, the underlying tax policy or domestic tax law couldn’t be at stake. They viewed it as an “experiment … If it
works well in Germany, it may be worth considering in some other treaties” (ibid. 39). The U.S. Chamber of Commerce and National
Foreign Trade Council (NFTC) endorsed the model, but the latter warned that “we trust that both countries will routinely give their
consent to arbitration” (ibid. 51). This proved to be the proposal’s Achilles heel, and no U.S.-Germany disputes ever occurred. In
1992, Robin Beran, Corporate Tax Director of Caterpillar Inc., wrote an open letter to the International Tax Counsel of the IRS stating
that “multinational corporations who have contended with uncertainty … need an arbitration requirement as a safeguard to double
taxation” (Beran 1992, emphasis added).
The Neoliberal Project Bears Fruit

By 1995, most OECD members had taken only tentative steps towards binding tax arbitration, even as they had embraced the principle at the WTO, NAFTA, and in a growing number of investment treaties. The
OECD proposed to “analyse again and in more detail” the question of tax arbitration, but it moved at a glacial pace (OECD 1995, 62). A decade later, in 2004, it could only promise more study (OECD 2004). By
2007, a proposal for binding arbitration finally emerged, included in the 2008 revision of the OECD model tax convention. This form of arbitration provided for the “mandatory resolution of unresolved MAP
issues,” and stipulated that the outcome would be “binding on both Contracting States” (OECD 2007, 5).

The main reason for the change of OECD line was the thawing of the U.S. position. In 2006, the Bush administration had unveiled a proposal for mandatory binding arbitration, initially in treaties with Germany and
Belgium. In contrast to the 1990 U.S.–Germany agreement, the proposed rules would prevent either state from vetoing the formation of a board once MAP talks dragged past two years. Instead, the formation of a
panel could only be blocked if both states agreed, thereby erasing unilateral foot dragging as a viable tactic. Gone was the technical commentary that the underlying tax policy or domestic tax law couldn’t be at
issue. Investors were more powerful still: unlike the states, they could block the formation of a board beforehand, and reject its decision once rendered.

Treasury officials predicted that the new provisions could lead to companies “making more complaints of double taxation because they would now know the dispute would get resolved” (U.S. Congress 2007, 10).
Tax staff for Congress celebrated the provisions as a means of disciplining government, “intended to induce the competent authorities to moderate their positions, including before arbitration proceedings would
commence, thus increasing the possibility of a negotiated settlement” (Ibid, 22). The NFTC backed up the assessment: “We think having the arbitration process, if you will, hanging over their heads will lead to
better-and more efficient-competent-authority work, which is a fine outcome. And, failing that, the arbitration process is also a fine outcome” (Ibid, 40).

Our interviews and commentary in the industry press highlight two elements that contributed to this change. First, a “subversive”—a tax lawyer from private practice with a history of lobbying on behalf of
multinational firms—took on the role of “Director, International” at the IRS in 2000. Interviewees and contemporaneous industry press considered Carol Dunahoo as instrumental in driving the change of position
(Interviews 2, 4, 7; Bell 2007; Turner 2005). Prior to the IRS, Dunahoo worked at PricewaterhouseCoopers, from where she acted on behalf of various U.S. business lobbying groups,
including the International Tax Policy Forum—on whose behalf she testified before Congress—and the Electronic Commerce Tax Study Group. Almost as soon as she left the IRS in 2004, she authored a report for
the NFTC that advocated tax arbitration among a range of business-friendly tax reforms (National Foreign Trade Council 2005). Her co-author, Mary Bennett, would go on to lead the OECD’s adoption of arbitration
in its model treaty, as head of its secretariat’s Tax Treaties and Transfer Pricing division. As Dunahoo explained:

The problem is that the competent authority process is presently the only means of ensuring that countries honor their treaty obligations. And it does not require the competent authorities to reach agreement
within a reasonable timeframe on a reasonable basis, or even to agree at all. It provides no recourse, even where one of the countries simply declines to enter into discussions on a case or an issue. There is no
mechanism to ensure that the competent authority process—and, therefore, the treaty—operate as intended.

(cited in Bell 2004)

A second factor in the emergence of the Bush proposal was the sovereignty-preserving design championed by Dunahoo and colleagues, which allowed it to be seen as an incremental development in comparison
to the non-binding arbitration provisions. The new treaties’ accompanying guidance stipulated that arbitrators would not produce written awards or provide legal reasoning. Their role was reduced to picking
which state’s tax law interpretation was correct. This system, called “last best offer” or “baseball” arbitration (based on the procedure used to resolve pay disputes in major league baseball), was designed to get
states to put their most considered offers on the table, and to minimize the creation of a body of case law (Park 2001). According to one U.S. tax treaty negotiator, it was after examining the
baseball-style approach, with its sovereignty-preserving elements, that the United States became more open to arbitration (Henry Louie, cited in Parillo and Gupta 2015). “It would
have been harder to get Congress to approve” a “reasoned opinion” approach to arbitration, stated one former Treasury official (Interview 7). Another former official added that “baseball” arbitration helped
because “a whole phase of turning the battleship around was getting Congress willing to ratify” (Interview 2).

It is worth noting here that, while this change took place under a Republican administration, it had bipartisan support. Many of the key players in the move towards arbitration—including the leadership of the
NFTC—are Democratic donors.Footnote3 Moreover, congressional Democrats were pushing Bush’s Treasury Department even further down the path to judicialization. In the official Senate report, the Democratic
majority on the Senate Finance Committee urged the administration to consider that “in the context of a treaty relationship that is more contentious, providing the arbitration board’s decisions with precedential
value might be desirable in order to avoid arbitrating the same dispute repeatedly” (U.S. Congress 2007, 7). In the hearings, Sen. Robert Menendez (D-NJ) called for “the possibility of taxpayer participation in the
arbitration proceedings” (ibid., 33, 47). The NFTC agreed, praising the suggestion as a way that companies “might have some influence with the tax authorities” (ibid., 47). The bipartisan element is illustrated by
the Obama administration’s subsequent efforts to promote arbitration as an international norm at the OECD.

As with the earlier non-mandatory arbitration clauses, the innovation of “baseball” arbitration seemed like a small enough step to be palatable to government and Congress, but it left the door open to subsequent
layering that would chip away at sovereignty-preserving elements of its design. The layering was mutually reinforcing between Washington, DC, Brussels, and Paris. The U.S. change of policy allowed the OECD to
endorse mandatory binding arbitration, but the process it proposed was more judicialized by default, based on the “reasoned opinion” approach. By 2009, the United States had already imported a judicializing
innovation from the OECD into its own arbitration proceedings: treaties signed with France and Switzerland that year gave taxpayers limited standing, inviting them to submit their own position papers to
arbitration panels, which was not possible under the earlier U.S. provisions.

From 2013 to 2015, the Obama administration supported an agreement on arbitration as part of the G20/OECD Base Erosion and Profit Shifting Initiative (BEPS) project on corporate tax avoidance, which would
eventually be implemented by twenty-seven countries in 2017 as part of the BEPS Multilateral Instrument. In 2016, the U.S. Model Convention was updated to include mandatory binding arbitration as the default
in U.S. negotiations with all countries.

The incremental progress towards more binding arbitration was further underlined in 2017, when
the European Council agreed to replace its Convention with a Directive that has a wider scope
and is enforceable in the European Court of Justice, precisely the sacrifice of sovereignty that member states had rejected
forty years before. It is the first tax arbitration agreement to require that, at a minimum, decision abstracts be published. The
European Commission Communication that paved the way for this change noted ruefully that early design decisions prevented more
radical reform: elimination of the transfer pricing system altogether “would eliminate the risk of double taxation in the EU,” but
because of lock-in of such fundamental design decisions, strengthening the arbitration convention was a necessary second-best
solution (European Commission 2015, 11).
This head of steam behind tax arbitration is difficult for alternative explanations to account for, as it coincides with a broader context of economic
nationalism that has united left- and right-wing governments behind policies that retain national sovereignty and are often
detrimental to capital as a result. The U.S. and UK administrations have aggressively pursued withdrawal from sovereignty-pooling
institutions against the preferences of capital, yet have been strong advocates of more tax arbitration. Judicial sovereignty was an
explicit theme of the Brexit campaign, while opposition to the TTIP was frequently framed around its investor-state dispute
settlement provisions. The Trump administration has specifically targeted the WTO’s dispute settlement procedures by refusing to
approve new judges.

The direction of travel for tax arbitration is not only an exception in the arbitration sphere, but
also for taxation, where it is a lone capital-friendly reform since the financial crisis, among
ambitious efforts by the OECD and EU to raise more tax revenue by addressing tax evasion and
avoidance by cross-border capital. On the tax evasion side, financial secrecy has been radically curbed through the OECD’s
Global Forum on Transparency and Exchange of Information, which is backed by the threat of G20 and EU sanctions. It monitors tax
havens for their compliance with standards, notably a new system through which bulk data on taxpayers’ foreign income is
exchanged between jurisdictions. This anti-evasion capability appears to have emboldened governments to increase taxes on capital
income (Hakelberg and Rixen forthcoming).

As for tax avoidance, the 2017 arbitration agreement was wrapped up in the OECD’s BEPS initiative, triggered primarily by the
growing “perception that the domestic and international rules on the taxation of cross-border profits are now broken and that taxes
are only paid by the naïve” (OECD 2013, 13). The Obama administration, committed to closing off opportunities for tax avoidance in
the U.S. tax code, was initially supportive, but soon found itself fighting a rearguard action against measures targeted primarily at
increasing U.S. multinationals’ tax payments to other governments (Hakelberg 2020). Business was highly
dissatisfied with the initiative, apart from the inclusion of arbitration: the NFTC testified to Congress that it was “politically driven
and we believe, appeared to be aimed more at raising revenue from U.S.-based multinational corporations (MNEs) rather than other
global companies” (U.S. Congress 2015, 56).

By 2019, the global promulgation of tax arbitration provisions beyond the OECD became a
priority for U.S. businesses, even as unfinished business from the BEPS project snowballed into a protracted negotiation
over new rules to tax the digital economy. A statement from the U.S. Council for International Business set this out:

The OECD should require any country that wishes to be part of the new consensus to adopt mandatory binding arbitration, with
peer review, as a minimum standard to resolve any disputes arising as a result of the new rules.

(Sample 2020)

They had fully converted the U.S. government, and negotiators let it be known that this position was also a red line
for the Trump administration (Bulusu and Ali 2019). Arbitration is now a major fault line in
global negotiations, with emerging markets and developing countries continuing to cite
sovereignty as their main objection (Johnston 2019; Lewis 2016). Proposals
currently under discussion to satisfy U.S. demands would expand arbitration far beyond its
tentative steps through baseball arbitration. They would create a new multilateral dispute settlement
approach in which affected countries would be bound by the decision of panels whose formation
they could not block, and on which not all affected countries would be able to appoint
members. Thus, the direction of travel for the United States, EU, and OECD is clear. In 2019, a group of tax
practitioners met in London to celebrate the diffusion of tax arbitration and promote the idea of a permanent tax arbitration
tribunal. One experienced arbitrator stated boldly that the proposal would offer more of “what we
already see developing: an international common law of tax.”Footnote4

Conclusion

It took four decades since the first concerted efforts by pro-judicialization subversives, but by 2017 mandatory binding tax arbitration had become the norm for international tax cooperation, at least within the
OECD. It represents a transfer of fundamental attributes of sovereignty from domestic revenue authorities and courts to private adjudicators. In many of the new clauses, taxpayers have more rights of veto over
the process than do states. Indeed, the latest generation of treaties also has a much greater role for non-state actors—with taxpayers allowed in some cases to trigger the proceedings. Looked at from a more
practical perspective, the result is a net transfer of revenue from governments (in double taxation cases where they cannot agree among themselves) to multinational taxpayers, who can now force a resolution on
them.

It is not only the change in position, but also the timing of the change that we have sought to explain. The period from the late 1970s through the 1990s is usually regarded as the high watermark of neoliberal
globalization, in which the Reagan and Thatcher governments recast the political consensus for a generation. Yet even then, a strategy of incremental change was necessary. States were only willing to concede
arbitration on very sovereignty-preserving terms, rendering it largely ineffective. In contrast, the period since the financial crisis of 2008 has been characterized in the West by a popular backlash against neoliberal
globalization and the failure of capital-friendly judicialization projects such as the Doha Round and the Transatlantic Trade and Investment Partnership, not to mention Brexit. There has also been a public clamor
for stronger taxation of the wealthy and multinational companies, which formed the backdrop to major international progress on exchange of tax information, as well as to the BEPS project. Yet it is precisely in
this era that tax arbitration, a pro-capital reform, has finally come of age.

As we have argued, the timing difference came about because efforts to promote arbitration took decades to bear fruit. This was successful layering: the new regime is the old regime, with an arbitration scheme
on top. “Subversives” encouraged states to adopt their preferred outcome by convincing officials that doing so would make the regime more effective at alleviating double taxation, and that arbitration was a
necessary cession of the judicial slice of sovereignty in order to hold on to other dimensions. When sovereignty-preserving arbitration designs failed to resolve the problem, more sovereignty-constraining versions
were now just a small step away.

By combining insights from the business power, historical institutionalist, and judicialization literatures, we illuminated some of the concrete, often unobserved, power dynamics that explain unlikely or
unexpected shifts in observable policies. If even the long-resistant United States can cede sovereignty over the most sensitive policy area, then we would expect to see ongoing sovereignty cession in other policy
areas that are characterized by strong instrumental business power.

Finally, we offer a warning to policymakers—particularly those from developing countries with less
capacity to shape the double taxation regime in its new era. As recent literature shows,
inequality today leads to cognitive and actual capture of the state and policy discourse today
and tomorrow (Boushey 2019; Hertel-Fernandez 2019; Saez and Zucman 2019). While we hope that the OECD’s BEPS process will help reduce tax avoidance, it
is likely that in doing so it will perpetuate a system that continues to be exploited by MNEs,
reducing pressure for a more radical overhaul. As the OECD stated at the project’s outset, “what is at
stake is the integrity of the corporate income tax” (OECD 2013, 10). Arbitration, too, is a means of
shoring up that system.

2. This is not a product of ‘bias’ but a structural issue with the arbitration
regime, which is suspicious of public intervention due to its neoliberal
foundation.
Amr A. Shalakany 2k, Lecturer, Cairo University Faculty of Law; S.J.D. Candidate, Harvard Law
School, “Arbitration and the Third World: A Plea for Reassessing Bias under the Specter of
Neoliberalism Symposium: International Law and the Developing World: A Millennial Analysis,”
Harvard International Law Journal, vol. 41, no. 2, 2000, pp. 419–468

Revolution ought to be "spooky."


This Article investigates disciplinary bias in international commercial arbitration. 2 More specifically, it is an attempt to readdress
what are generally dismissed today as outmoded Third World concerns that arbitration has tended
to resolve international trade and investment disputes in favor of the economic interests of the
North. The question of bias in North-South arbitration and the skewed distributive consequences of awards were
heated topics of debate throughout the 1970s and early 1980s, and accordingly had featured
prominently among Third World demands for a New International Economic Order (NIEO).3 Such
is the case no more. On the political level, the NIEO proved to be a dramatic failure with little redistribution of wealth from
North to South.4 On the intellectual level, Third World critiques of arbitration became one of the
numerous discursive casualties sustained with the demise of the NIEO. Since then, a "paradigm
shift" has taken place in the terms of Third World oppositional engagements with the mainstream in international law, and
the NIEO agenda for redistribution has since been replaced by alternative Third World oppositional
claims for cultural recognition. 5 Third World scholarship on arbitration is representative of this
shift.6 Today, cries of foul play over arbitration are neither as vociferous nor as troubling as they
were up to the end of the last decade. When they occur now, as they occasionally do in the Arab Middle East, the
oppositional claims are articulated increasingly in terms of a demand for incorporating the Islamic legal tradition in the international
practice of arbitration; the claims stand in stark contrast to the earlier critiques of arbitration itself as a distributively biased
mechanism of dispute settlement. 7 Finally, on the institutional level of international development policies, the current triumph
of neoliberalism has accompanied a rising acceptance of arbitration by Third World governments
as part-and-parcel of the legal package for economic development.8 [FOOTNOTE 8 BEGINS] 8.
The term "neoliberalism" has been taken much in vain, and a state of terminological disarray rules the exact contents of the
neoliberal agenda (also known as the "Washington consensus"). For a highly sophisticated yet accessible study of economic
neoliberalism and its legal structures, see Kerry Rittich, Recharacterizing Restructuring: Gender and Distribution in the Legal
Structure of Market Reform (1998) (unpublished S.J.D. dissertation, Harvard Law School) (on file with the Harvard University Library).
Rittich argues that the central element of neoliberal development theory is that there is an identifiable set of "best practices,"
consisting of strategies, laws, institutions and policies, which constitute the optimal route to economic development and prosperity
and which are generally applicable to all economies.... "Best practices" and "good laws" are those which provide an enabling
environment for private entrepreneurial activity and investment, on the theory that private activity is the engine of economic
growth and that state intervention and interference in this process must be sharply contained if growth is not to be impeded. Id. at
16. For purposes of this Article, I shall be using the term "neoliberalism" in the sense identified above by Rittich. It should be noted,
however, that neoliberal policies bear on both developing countries in their processes of structural adjustment, as well as on
developed countries where left and right politicians are consistently forging an expanding middle ground consensus over questions
of economic policy. For a critical examination of recent neoliberal applications, see Peter Gowan, Neo-liberal Theory and Practice
for Eastern Europe, NEW LEFT REV., Sept.-Oct. 1995, at 3. [FOOTNOTE 8 ENDS] A "healthy
legal environment" is today
high on the World Bank's check-list, and its overall rule-of-law requirement seems to entail a
generally more favorable attitude toward arbitration.9 In 1966 the World Bank established its International
Center for the Settlement of Investment Disputes (ICSID) as, among other things, a confidence-building measure to address Third
World concerns over bias in arbitration proceedings.10 During the 1990s, with the goal of assuaging similar concerns in mind,
regional arbitration centers were established in such major Third World capitals as Cairo and Kuala Lumpur.11 Even those Third World
countries most ardently suspicious of arbitration, from Iran to China, have gradually reformed their legal systems and adopted
different versions of the mainstream canon, the UNCITRAL Model Law of Arbitration. 12 In short, as more Third World governments
embrace a neoliberal agenda for development, the conventional wisdom seems to be that foreign direct investment, necessary for
kick-starting a country's economy, will not migrate to the South13 unless the luggage includes an arbitration clause. The recent East
Asian economic crisis only served to augment this favorable stance toward arbitration, as the crisis was partly blamed on the
institutional weakness of the rule of law, and arbitration thus increased in importance as a neutral and reliable alternative to the
state court system. 14

Accordingly, on the political, intellectual, and institutional levels, Third World critiques of
distributive bias in arbitration seem to have all but disappeared; indeed, the very term "Third World"
appears more as a rhetorical choice in terminology than as a substantive reference to an identifiable set of oppositional readings in
international legal scholarship. 15 How should one understand this recent shift in attitude toward
arbitration, and, more importantly, how does this shift reflect the merit of those very serious
charges leveled only a decade ago against this international mechanism of dispute
settlement? Were Third World perceptions of bias simply the misguided product of a much-too-politicized
debate, the exaggerated expression of ideological schisms that doomed the NIEO project from its inception and that appear to
have been diffused with the contemporary "end of history"?16 Should we assume that, because Third World
countries are resorting increasingly to such North-South conciliatory institutions as ICSID, Third
World problems with arbitration were merely the function of a lack of expertise in the professional
cadres of the newly independent countries?17 To restate the question vulgarly: As more Third World
lawyers receive their LL.M. degrees from American law schools and proceed to get training at
large New York law firms, as more Third World lawyers "learn the ropes," will all be well?18
This Article will neither chronicle nor account for the shift that has taken place in Third World critiques of arbitration. It assumes,
instead, that such critiques have indeed diminished in number over the past few years, and that a general attempt to grapple with
this discursive phenomenon would include, intuitively, a variety of factors such as the Third World debt crisis of the 1980s, the
debilitating effect of the Reagan-Thatcher years on the practical potentials of traditional modes of leftist opposition, and the
hegemonic rise of neoliberalism as the institutionally sanctioned development theory of the 1990s. This Article will address the
above questions concerning bias in North-South arbitrations first by reanalyzing the terms of the debate as articulated during the
debate's most tense phases in the early 1980s, and then by picking up the analysis of this now "outmoded" subject. The approach is
structured around a distinction between the institution and the law.19

Historically, Third World critiques of arbitration have centered on two distinct, yet complexly interrelated sources of bias in
arbitration. On one hand, some Third World scholars argue that bias results
from the adjudicative capacities of
arbitration, which provide a set of institutional arrangements for the resolution of North-South
investment disputes. The claim is that the practice of international commercial arbitration is
configured in such a way as to consistently favor the economic interests of the North. This concept
is referred to in this Article as "institutional bias," and is analyzed in Part II. On the other hand, some
Third World scholars
account for bias by examining the applicable law under which the North-South investment
disputes were settled. Under this view of bias, the doctrinal configuration of international law is
responsible for the skewed distributional consequences of arbitration awards. This concept of
"doctrinal bias" is analyzed in Part III
As this Article will show, neither doctrinal bias nor institutional bias, in and of themselves, can account for the full spectrum of
skewed decisions. From a Post-Realist perspective, Third World jurists have overestimated the effect of legal necessity as the
instigator of bias in arbitration.20 Although arbitration is a politicized mechanism of dispute resolution, neither its institutional
configuration nor the international law doctrines it applies are per se biased or predisposed to favoring the economic interests of the
North. Both the institution and the doctrines are indeterminate enough to assist either party interchangeably.

This conclusion, however, does not mean that Third World concerns over bias in arbitration were completely misguided. Bias did
indeed exist, and will continue to appear whenever Third World countries adopt an alternative
development policy authorizing active state intervention in the private sphere. Arbitration's
bias against public regulatory initiatives is best explained as a disciplinary sensibility that acts in
an intermediary capacity, skewing the institutional application of international law doctrines
toward a conservative set of ideological preferences founded on a deep and enduringly
intuitive loyalty to a public/private distinction. This "disciplinary bias," discussed in Part IV, involves the
creation of veritable subjectivities in arbitration, exerts a controlling effect on the imagination
of its practitioners, and delimits their normative visions on what is legally permissible. More
specifically, it is invested in an apolitical representation of the private sphere, conceived by liberal
political philosophy as a space where people coordinate their economic interests away from the
coercive powers of the state. Such a formulation necessarily implies the following perspective:
Arbitration is about the coming together of equals to resolve contract law questions arising from
disputes over property rights. Ideologically suppressed in this picture is the ubiquitous "politics of
private law," a substantial issue raised some eighty years ago by the American Legal Realists. 21

Accordingly, this Article is more an urgent plea for reassessment than anything else. Many
of the problems that gave
rise to Third World anxieties a decade ago continue to inform the practice of international
arbitration today. More importantly, these problems of yesterday play into today's development
policies. The fundamentally apolitical stance of neoliberalism finds its legal manifestation in
disciplinary bias: the resolutely private law sensibility of contemporary arbitration practices. It
should come as no surprise that the recent demise of "alternative" development theories22 in
favor of the "Washington consensus" was coupled with an increased acceptance of arbitration
as the sanctioned mechanism for resolving international investment disputes. Both theory and practice are premised
on a similar set of constitutive notions about the relationship between law and development: in
particular, their mutual investment in a public/private distinction that generally holds state
regulatory interventions in the market as both normatively undesirable (under neoliberal theories of
development) and legally unacceptable (as evidenced by international arbitration awards).

The Libyan oil arbitrations serve as an excellent demonstration of the existence and impacts of disciplinary bias. Third World scholars
traditionally have advanced these arbitrations as the most flagrant proof of bias in international commercial arbitration, 23 and
although they were settled almost twenty years ago, the Libyan cases continue to occupy an almost mythological position in the
contemporary study of arbitration practices. In a recent and highly acclaimed sociological study of the arbitration community, the
authors reflect that

on the basis of interviews with many of the lawyers and arbitrators involved in the historic oil arbitrations... the
arbitrations should be considered "founding acts" for the international arbitration community, serving to build
reputations and beliefs in the law that went well beyond the importance of those arbitrations in the particular disputes
that generated them. 24

Thus, many of the practices that sparked Third World claims of bias in the Libyan arbitrations two decades ago continue to inform
the current practice of arbitration, and make a fresh investigation of the Libyan awards all the more consequential. Moreover, the
arbitrations took place in the aftermath of the Libyan government's nationalization of Western oil concessions; looking back today at
the practical consequences of the NIEO, the nationalizations appear as perhaps the only concrete attempt by a Third World
government to put into action the NIEO demands for redistribution of wealth from the First to the Third World. In the context of the
Organization of Petroleum Exporting Countries' (OPEC) mid-1970s drive to alter the terms of international oil trade in favor of Third World
producing countries and the ensuing energy crisis, the Libyan arbitrations eventually became the juristic venue in which the
OPEC/NIEO redistributive agenda was delegitimized under the aegis of international law. Toward this end, the arbitrators in the
Libyan awards had to articulate concrete legal opinions as to the legitimacy of such NIEO proposals as the doctrine of permanent
sovereignty over natural resources. These arbitrations marked one of the rare times, to date, that three decisions addressing this
legal issue became part of the public domain. 25

This Article will first examine how bias has been debated as an institutional problem, and then explore how bias has been
alternatively discussed as a doctrinal problem. Both parts begin by mapping the discourse on the subject; they then demonstrate
that bias cannot be fully accounted for in institutional or doctrinal terms. An
interpretation of bias as symptomatic
of a disciplinary sensibility, which incorporates an apolitical perception of the public/private
distinction, will demonstrate why arbitration becomes hostile to public interventions in market
relations. The American Realists' critique of the public/private distinction bears on the problem of bias by challenging
arbitration's bias against alternative Third World development strategies which depart from the neoliberal canon.
