
1NC --- Kritik

Black ethical life is characterized by an aporia of impossible hospitality, the contradictory expectation of both absolute vulnerability and generosity. Debating the politics of Black death leaves the political ontology of antiblackness “unscathed if not emboldened.”
Mubirumusoke, 22—Intercollegiate Department of Africana Studies, Claremont McKenna College
(Mukasa, “Black Ethical Life,” Black Hospitality, Introduction, 1-31, SpringerLink, dml)

These dire conditions are often thought of and brought to us through the lens of the political. The news reports black
death through a constant stream of devastating images, sound bites, empty analysis, and overfull tears. Pleas are
made to our politicians to arrest, punish, or just acknowledge crimes against black people as crimes against humanity, and while black
people react in various ways to the always insufficient responses and demands for accountability within the familiar terms and
policies of our everyday political life, for example, ‘arrest him’, ‘fire her’, ‘reprimand them’, ‘vote blue’, ‘vote green’, ‘don’t bother
voting’, the political ontology of white supremacy remains unscathed if not emboldened. Black hospitality asks,
how might we imagine these same situations not exclusively from the political but from the ethical, specifically using
the rubric of an impossible hospitality.

The structure of Jacques Derrida’s impossible hospitality is appealing because it can account for two familiar qualities of
blackness: vulnerability and generosity. Following from a modern antiblack political ontology, black people
remain unimaginably vulnerable to the whims of others, this is a direct effect of their objectification. Their
open vulnerability is a manifestation of their subordination to the order of whiteness; it is that which
fortifies the (white) human materially, but primarily libidinally. The openness of blackness can also
characterize a generosity; that is, black people are expected to be open to their subjugation but also
generally open or porous in their constitution, to be generous with their welcoming of others—and they are—meanwhile
any action or inaction can be read as violation of the way things are supposed to be, the way black
people are supposed to act. In other words, blacks are expected and constituted as hospitable (vulnerable and
generous), and yet in practice they are always already presumed to be hostile.

Derrida’s account of hospitality finds these contradictory conditions to be essential to its structure. It pivots around
an essential aporia whereby any proper ethics is not decided by autonomous individuals and their ability to rationally derive maxims,
calculate happiness, or prudently apply virtues, but instead is an impossible response, a responsibility to the other through an unmediated
welcoming. Derrida’s hospitality is aporetic in that its
possibility involves a structural impossibility, as opposed to
irreconcilable antagonism; it positions the host, that is, the ethical actor, to be completely open to the other, to the
point of being completely vulnerable and generous, without hesitation or reserve; however, the practical
application of such generosity from this position of vulnerability is actually impossible, since any gesture of welcoming
necessarily implies a mediation of that generosity and thus is not completely open. Even a ‘hello’, ‘come in’, or ‘good
evening officer’ violates the standards of complete openness, since they imply a condition on that welcoming—for example, a condition of
acknowledgment by the other, the guest—and thus said hospitality is also a hostility. This ethics, therefore, is im-possible.

Black Hospitality argues that this structure of ‘im-possibility’ characterizes black ethical life specifically. The problem of an im-
possible black ethics is the response to the question of the possibility of an ethics in white supremacy. This is a departure from other readings of
Derrida, including some of his own, which suggest that marginalized people are not the ‘hosts’ of ethical experience in modernity, but they are
the guest, that is, the other. For Derrida it is often the immigrant other who is attempting to gain access to the host’s abode, such as the United
States or Europe. Black hospitality, however, argues that the
paraontological sociality of blackness actually situates black
people in the position of the vulnerable host, whereby their ethical demands are simultaneously made
legible and impossible by their political circumscription. Nevertheless, blackness perseveres, escapes its
circumstances and circumscriptions, through an ethical responsibility and impossible welcoming of the totalizing other of
antiblackness from the black social other within, that is, the excesses of blackness that escape identity. This black sociality of ‘the other
within’ opens a crack in the world that black people are metaphysically beholden to and paraontologically
may escape from, if only for a moment.

The expansion of legal personhood psychically coheres whiteness as a shield against the symbolic threat of Black sociality.
Han, 15—Associate Professor of Criminology, Law and Society, the School of Law, and African
American Studies at UC Irvine (Sora, “Racial Profiling,” Letters of the Law: Race and the Fantasy of
Colorblindness in American Law, Chapter 3, 87-90, dml)

What is this lie, exactly, at the heart of the legal notion of whiteness as property? It
is true that whiteness is a social
construction protected by a law that treats it as a kind of property that can be possessed. Yet we can
also read in Harris’s theory a tacit acknowledgement of whiteness as a psychical formation organized by an
unconscious relationship to blackness. Although the dominant tendency is to read Harris’s piece as an analysis of how the legal
ideology of property legitimizes the social privileges and powers that accrue to white identity, it is also perhaps an immanent theoretical
engagement with race in the dreamwork of the law.

Let us look at this embedded theory more closely. Harris arrives at whiteness’s resemblance to property through an exposition of the legal
position of the slave as both thing and person, and how the law deals with the slave’s resistance to this paradox. For purposes of political
representation, the slave is both citizen and property; for purposes of commodity production, the slave is both thing and human; and for
purposes of exchange, the slave is both money and person. Against the threatening capacity of humanity’s “market-alienability,”61 and against
the universalizable “threat of commodification,”62 Harris observes, somewhat in passing, “whiteness became a shield.”63

We are dealing here with immaterial things like benefits, values, capacities, and expectations accrued to the social identity of whiteness.64 Few
have appreciated Harris’s focus on how modern property law administers classical forms of property, including intangible things.65 It is this
focus that expands the notion of whiteness
as property to include not only identity as a source of property interests (for
example, that
which is legally protected by a cause of action against defamation), but also as the very capacity and
expectation to have a past, a community, and a future that the idea of ownership implies. But no one has
seized upon an even more crucial insight in Harris’s theory: under her legal metaphorization of whiteness as property is yet another literary
move—not by metaphor but by a striking declarative. In writing about how property law enforces white identity and power, Harris’s assertion
that “whiteness became a shield” grounds the conceptual metaphor of “whiteness as property.”

Thus we can extend Harris’s formulation “whiteness as property” to “whiteness as property is shield.” The resemblance between the syntagm
“whiteness as property is shield” and Frantz Fanon’s figuration of “white masks” is a suggestive overlap that emphasizes the
fantasmatic structure of whiteness. For it was one of Fanon’s tasks in Black Skin, White Masks to meditate on white
culture’s symbolic reliance on phobic fantasies of blackness.66 On this extended formulation, when Harris finds
that the “absence [of whiteness] meant being the object of property,”67 her analysis of formally recognized whiteness
is also an implicit analysis of the fantasmatic sense of threat, anxiety, and defensiveness crucial to white
cultural life. As a shield protecting one from being made into an object, so the legal fantasy goes,
whiteness as property provides the assurance of personhood itself. This shield of whiteness is
inalienable, personal; it can be used and enjoyed; it is a marker of standing; and it is a mode of self-
possession.68 Most interestingly, this white shield is also a symbolic representation in law of the legal fantasy
of personhood.

So when Harris writes further, “Owning white identity as property affirmed the self-identity and liberty of whites and, conversely,
denied the self-identity and liberty of Blacks,”69 we should pause at this precise problem: a racial position from which claims to
various forms of value can be made, and from which representational acts can signify personhood—
both for the world and for the self. Whiteness, as an object of law, is private property; and whiteness, as a
representational act in law, “self-identity,” is the cultural capacity to imagine oneself a subject of the law.
Harris’s analysis of the
relationship between race and law exceeds issues of unequal access to the privileges of private property;
it encompasses the larger legal problem of blackness as a form of life lived in the unassimilable depths
of the law’s language and its interpretive world of meaning. It points us to the possibility of
understanding (or at least how the law understands) blackness as the experience of the law’s rendering of social
reality into words. Lacan calls this a symbolic death, where “the symbol first manifests itself as the killing
of the thing,” and thus, any mobilization of symbolic power always bears the trace of a murderous intimacy
with “the thing,” or a form of social existence irreducible to the symbolic life of the law.70
This brings us to the central legal case for Harris’s text, Plessy v. Ferguson (1896), and an arresting recollection embedded in her discussion of it.
In Chapter 1, I expanded on Plessy’s ruling that formal racial segregation is constitutional and its implications for thinking about a multiracial
citizenship structured by the fantasy of colorblindness. It is worth revisiting Plessy through Harris’s analysis because it, like Lawrence and Prigg,
presents the racial profile through apostrophization. I am specifically interested in Harris’s vital focus beyond the ruling on how Homer Plessy’s
lawyer, Albion Tourgée, rhetorically challenged Jim Crow. Harris quotes from Tourgée’s filings, “Probably most white persons if given a choice,
would prefer death to life in the United States as colored persons. Under these conditions, is it possible to conclude that the reputation of being
white is not property? Indeed, is it not the most valuable sort of property, being the master-key that unlocks the golden door of
opportunity?”71

Tourgée’s larger argument here was about how the imprecision of formal racial categories applied to social practices results in arbitrary
deprivations of property interests in racial identity. As Harris observes, the Court evaded this argument by simply asserting that Plessy’s racial
classification was clear. On my reading of this evasion, the problem with it is not the refusal to acknowledge the mishaps of due process in the
application of formal racial categories. It is that the evasion admits
a more disturbing truth about the function of formal
racial categories. Toward this end, the Court pointed out that if Plessy was in fact a white man, according to Louisiana’s rules of racial
categorization, he could claim money damages for defamation.72 And in this dismissive gesture, we witness the affirmation not
only of whiteness as property, but also, the fantasy of a world where it is better to be dead than black.
“Under these conditions, is it possible to conclude that the reputation of being white is not property?”73 By answering in the negative (that it is
not possible to conclude that whiteness is not property), the Court by implication also affirmed the truth of “these conditions”:
that being black is worse than biological death. The black claim invokes the terror of being cast under
the sign of blackness as “the thing” that the law protects itself and its subjects against. While the Court
acknowledges the possibility of injury to Plessy’s whiteness, it sees the injury of how this property interest materializes over
and against the worse-than-death condition of being black, the realness of the fantasy of black life in
symbolic death,74 and of a black sociality beyond the boundary of any kind or form of representation.75
Contained in Plessy, as well as in Harris’s focus on this fragment of the case, is the truth of the relation between law and projection, between
rule and fantasy: whiteness as property is a shield against symbolic death.
In Lawrence, overturning Lawrence’s and Garner’s convictions depends on the representability of same-sex intimacy as either consensual or
deviant behavior. In Prigg, overturning Prigg’s conviction pivots on the representability of Morgan as either person or property. In Plessy,
upholding the railroad’s segregation policies relies on the representability of Plessy’s racial categorization as white or black. And yet, in each of
these cases, because interpretation is limited by the literary imagination of the racial profile, only injury to
whiteness can be given symbolic representation in the word of law. Lawrence apostrophizes an
unquestioned black vulnerability to police surveillance and intrusion through its recognition of injury as state encroachments on
sexual privacy. Prigg apostrophizes black persons as owned human beings through its recognition of injury as state
obstruction of owners’ rights to use and enjoy their property. And Plessy apostrophizes blackness as a symbolic death
through its recognition of injury as the contamination of self-identity.

The metaleptic relationship between blackness and injury here is staged through the law’s inevitable
encounter with claims against and issued from an inhabited symbolic death. Against, but mobilized by, these
claims, racial injury in the fantasy of colorblindness is always appropriated by the grievance of a (white) legal
subject, and is always an exploitation of the spectacle of blackness haunting those most fundamental
and most celebrated constitutional values that provide an endless source of legal issues for judicial
interpretation. Given this, the symbolic foreclosure of black racial injury has less to do with the practical
limits of legal interpretation, and more to do with the unstated fact that legal interpretation cannot
address, let alone redress, the thingly form of the black claim. This is where acquiescing to the political
sensibility of not arguing that which the law declares should not be argued (recall Kennedy’s curious observation
about the absence of a Fourth Amendment challenge in Lawrence) appears as a fundamental failure.

There are calculable claims for legal relief. And then there are things that cannot be argued because
legal interpretation as a symbolic structure owes its reproductive capacity to the foreclosure of those
things. Indeed, Plessy did not ask for personal compensation. His was not only a claim for legal relief. He argued the perversity of black
exclusion and asked for an injunction on Jim Crow law instead. The Court’s interpretive response to Plessy’s plea was an admonishment: if
Plessy was not Plessy, he could have asked the Court to compensate him for his property interests in whiteness. It was a mediated avowal of
the fact that Plessy could not ask for anything from his position, let alone ask the Court to compensate him for his transgression from one
biologically given racial caste into another. The Court’s response to Plessy noted what could have been asked for, but in an impossible
hypothetical situation. Still, in the Court’s reproach that Plessy should not ask what cannot be asked for, it also admitted
the reality of
blackness as the thingly life of symbolic death. A devastating recognition if ever there was one.

The plan uses AI as a surrogate for human labor, which “carries forward and
reuniversalizes” antiblack hierarchies by establishing communist AI as “something to
be achieved.”
Atanasoski and Vora, 19—Professor of Feminist Studies and Critical Race and Ethnic Studies at the
University of California, Santa Cruz; Associate Professor of Gender, Sexuality and Women's Studies at UC
Davis (Neda and Kalindi, “The Surrogate Human Effects of Technoliberalism,” Surrogate Humanity: Race,
Robots, and the Politics of Technological Futures, Introduction, 8-16, dml)

Like the “others” of the (white) liberal subject analyzed by decolonial and postcolonial scholarship, the
surrogate human effect of
technology functions first to consolidate something as “the human,” and second to colonize “the human”
by advancing the post-Enlightenment liberal subject of modernity as universal.8 The concept of the surrogate brings together
technoliberal claims that technological objects and platforms are increasingly standing in for what the human does, thus rendering the human
obsolete, while also foregrounding the history of racial unfreedom that is overwritten by claims of a postrace and postgender future generated
by that obsolescence. In our usage, the longer history of the
surrogate human effect in post-Enlightenment modernity stretches
from the disappearance of native bodies necessary for the production of the fully human, through the
production of the fungibility of the slave’s body as standing in for the master, and therefore also into the
structures of racial oppression that continue into the post-slavery and post-Jim Crow periods, and into the disavowal
of gendered and racialized labor supporting outsourcing, crowdsourcing, and sharing economy platforms.
Framing technologies through the lens of the surrogate effect brings a feminist and critical race perspective to bear on notions of technological
development, especially in the design and imagination of techno-objects and platforms that claim a stand-in role for undesirable human tasks.

As part of the surrogate effect, the surrogate is a racialized and gendered form defining the limits of human
consciousness and autonomy. Saidiya Hartman conceptualizes the surrogate by citing Toni Morrison’s formulation of slaves as
“surrogate selves for the meditation on the problems of human freedom.”9 Hartman proposes that “the value of blackness resided
in its metaphorical aptitude, whether literally understood as the fungibility of the commodity or understood as
the imaginative surface upon which the master and the nation came to understand themselves.”10 The
slave, the racialized fungible body, also acts as a “surrogate for the master’s body since it guarantees his
disembodied universality and acts as the sign of his power and domination.”11 As Hartman elaborates, these
racialized structures of the surrogate did not simply disappear after emancipation. Rather, “the absolute
dominion of the master, predicated on the annexation of the captive body, yielded to an economy of bodies, yoked
and harnessed, through the exercise of autonomy, self-interest, and consent. . . . Although no longer the extension
and instrument of the master’s absolute right or dominion, the laboring black body remained a medium of others’
power and representation.”12
While Hartman is referencing the rise of new modes of bonded labor following emancipation that were encapsulated by the liberal formalities
of contract, consent, and rights, her theorization of surrogacy as a racialized and gendered arrangement producing autonomy and universality
of and for the master is useful for thinking about the contemporary desire for technology to perform the surrogate human effect. The
racialized and gendered scaffolding of the surrogate effect continues to assert a “disembodied
universality” that actually offers the position of “human” to limited human actors, thereby guaranteeing
power and domination through defining the limits of work, violence, use, and even who or what can be visible
labor and laboring subjects.
Tracking the endurance of the racial form of slavery as the (not so) repressed or spectral frame for the imaginary of what surrogate
technologies do, or who or what they are meant to replace, we insist throughout this book that human emancipation (from work, violence, and
oppressive social relations) is a racialized aspiration for proper humanity in the post-Enlightenment era. In the US context, reading technologies
as they reflect the dominant imagination of what it means to be a human thus means that they are situated in social relations of race, gender,
and sexuality, as these derive from embodied histories of labor, Atlantic chattel slavery, settler colonialism, and European and US imperialism,
to name the most dominant. The preeminent questions of the politics of the subject, and the derivative politics of difference that consume
critical theory—questions that are about political consciousness, autonomy with its attendant concepts of freedom and unfreedom, and the
problem of recognition—also drive the preeminent questions we must ask of technologies that perform the surrogate human effect.

The surrogate effect of technological objects inherits the simultaneously seeming irrelevance yet all-encompassing centrality of race and
histories of enslavement and indenture against which the liberal subject is defined. As Lisa Lowe writes:

During the seventeenth to nineteenth centuries, liberal colonial discourses improvised racial terms for the non-European peoples whom
settlers, traders, and colonial personnel encountered. We
can link the emergence of liberties defined in the abstract
terms of citizenship, rights, wage labor, free trade, and sovereignty with the attribution of racial difference to those
subjects, regions, and populations that liberal doctrine describes as unfit for liberty or incapable of civilization,
placed at the margins of liberal humanity.13

Lowe explains that while it is tempting to read the history of emancipation from slave labor as a progress narrative of
liberal development toward individual rights and universal citizenship, in fact, “to the contrary, this linear conception of
historical progress—in which the slavery of the past would be overcome and replaced by modern freedom—concealed the
persistence of enslavement and dispossession for the enslaved and indentured” and racialized populations necessary to the
new British-led imperial forms of trade and governance “expanding across Asia, Africa, and the Americas under the liberal rubric of free
trade.”14 Moreover, according to Lowe, “the liberal experiment that began with abolition and emancipation continued with the development
of free wage labor as a utilitarian discipline for freed slaves and contract laborers in the colonies, as well as the English workforce at home, and
then the expanded British Empire through opening free trade and the development of liberal government.”15 While the history of capitalism
tends to be written as the overcoming of serf, slave, and indentured labor through free contract and wage labor, that is, as freedom
overcoming unfreedom, as Lowe demonstrates, it is actually the racialized coupling of freedom and unfreedom that undergird and justify
capitalist and imperial expansionism.

Rather than freedom being on the side of modernity, which overcomes the unfreedom that is the condition of premodernity, in fact the states
of both freedom and unfreedom are part of the violent processes of extraction and expropriation
marking progress toward universality. Undergirding Euro-American coloniality, political liberalism maintains the racial temporality
of post-Enlightenment modernity that depends on innovating both bodies and resources (and how each will be
deployed). David Theo Goldberg argues that liberalism is the “defining doctrine of self and society for modernity,” through which articulations
of historical progress, universality, and freedom are articulated.16 Because liberalism’s developmental account of Euro-American moral
progress has historically been premised on the transcending of racial difference, as Goldberg puts it, under the tenets of liberalism, “race is
irrelevant, but all is race.”17

To articulate freedom and abstract universal equality as the twin pillars of liberal modes of governance,
racial identity categories and how they are utilized for economic development under racial capitalism are
continually disavowed even as they are innovated. In her writing about how such innovations played out in the post–World
War II context, the historical period in which we locate our study, Jodi Melamed has argued that US advancement toward equality,
as evidenced by liberal antiracism such as civil rights law and the professional accomplishments of black and other minority citizens,
was meant to establish the moral authority of US democracy as superior to socialist and communist nations.18 Highlighting
antiracism as the central tenet of US democracy, the US thus morally underwrote its imperial projects as a struggle
for achieving states of freedom abroad over illiberal states of unfreedom, racializing illiberal systems of belief as a
supplement to the racialization of bodies under Western European imperialism.19 The assertion that the US is a space of racial
freedom, of course, covered over ongoing material inequalities based on race at home. As part of the
articulation of US empire as an exceptional empire whose violence is justified because it spreads
freedom, the history of slavery is always acknowledged, but only insofar as it can be rendered irrelevant
to the present day—that is, the history of slavery is framed as a story of US national overcoming of a past aberrant from the ideals of US
democracy, and as a story of redemption and progress toward an inclusion as rights-bearing subjects of an ever-proliferating list of others
(women, black people, gay people, disabled people, etc.).

Technoliberalism and Racial Engineering of a “Post”-Racial World

“Will robots need rights?” This dilemma was included in Time magazine’s September 21, 2015, issue as one of the most important
questions facing US society in the present day. In his written response, Ray Kurzweil, an inventor and computer scientist, wrote that “If an
AI
can convince us that it is at human levels in its responses, and if we are convinced that it is experiencing the subjective states
that it claims, then we will accept that it is capable of experiencing suffering and joy,” and we will be compelled to grant it rights
when it demands rights of us.20 In other words, if a robot can prove that it can feel human (feel pain, happiness, fear, etc.), its
human status can be recognized through the granting of rights. Philosophical and cultural meditations upon
questions of artificial personhood, machinic consciousness, and robot autonomy such as that in Time magazine announce the
advent of what we term in this book technoliberalism by asserting that in the current moment, the category of
humanity can be even further expanded to potentially include artificial persons. According to Hartman, under
liberalism, the “metamorphosis of ‘chattel into man’” occurs through the production of the liberal individual
as a rights-bearing subject.21 However, as Hartman elaborates, “the nascent individualism of the freed
designates a precarious autonomy since exploitation, domination, and subjection inhabit the
vehicle of rights.”22
Autonomy and consciousness, even when projected onto techno-objects that populate accounts of capitalist
futurity, continue to depend on a racial relational structure of object and subject. We describe this symbolic
ordering of the racial grammar of the liberal subject as the “surrogate human effect.” As technology displaces the human chattel-
turned-man with manmade objects that hold the potential to become conscious (and therefore autonomous,
rights-bearing liberal subjects freed from their exploitative conditions), the racial and gendered form of the human
as an unstable category is further obscured. Technoliberalism’s version of universal humanity heralds a
postrace and postgender world enabled by technology, even as that technology holds the place of a
racial order of things in which humanity can be affirmed only through degraded categories created for
use, exploitation, dispossession, and capitalist accumulation. As Lisa Lowe articulates, “racial capitalism suggests
that capitalism expands not through rendering all labor, resources, and markets across the world identical, but by
precisely seizing upon colonial divisions, identifying particular regions for production and others for neglect, certain populations for
exploitation, and others for disposal.”23 As we show throughout the chapters of this book—which range in scope from examining how
technological progress is deployed as a critique of white supremacy since the advent of Trumpism, effectively masking how the fourth industrial
revolution and the second machine age have accelerated racialized and gendered differentiation, to how the language of the sharing economy
has appropriated socialist conceptions of collaboration and sharing to further the development of capitalist exploitation—within
present-day fantasies of techno-futurity there is a reification of imperial and racial divisions within capitalism. This
is the case even though such divisions are claimed to be overcome through technology.

Surrogate Humanity contends that the engineering imaginaries of our technological future rehearse (even as they
refigure) liberalism’s production of the fully human at the racial interstices of states of freedom and unfreedom. We use the
term technoliberalism to encompass the techniques through which liberal modernity’s simultaneous and contradictory obsession with
race and its irrelevance has once again been innovated at the start of the twenty-first century, with its promises of a more just future enabled
by technology that will ostensibly result in a postrace, postlabor world. This is also a
world in which warfare and social
relations are performed by machines that can take on humanity’s burdens. Technological objects that are
shorthand for what the future should look like inherit liberalism’s version of an aspirational humanity such that technology now mediates the
freedom–unfreedom dynamic that has structured liberal futurity since the post-Enlightenment era. Put otherwise, technoliberalism proposes
that we are entering a completely new phase of human emancipation (in which the human is freed from the embodied constraints of race,
gender, and even labor) enabled through technological development. However, as we insist, the
racial and imperial governing
logics of liberalism continue to be at the core of technoliberal modes of figuring human freedom. As Ruha
Benjamin puts it, “technology . . . is . . . a metaphor for innovating inequity.”24 To make this argument, she builds on David
Theo Goldberg’s assessment of postraciality in the present, which exists “today alongside the conventionally or historically racial. . . . In this, it is
one with contemporary political economy’s utterly avaricious and limitless appetites for the new.”25 Yet amid assertions of technological
newness, as Benjamin demonstrates, white supremacy is the default setting.

Technoliberalism embraces the “post”-racial logic of racial liberalism and its conception of historical, economic, and social newness, limiting the
engineering, cultural, and political imaginaries of what a more just and equal future looks like within technological modernity. As we propose,
race and its disciplining and governing logics are engineered into the form and function of the technological objects that occupy the political,
cultural, and social armature of technoliberalism. Rather
than questioning the epistemological and ontological
underpinnings of the human, fantasies about what media outlets commonly refer to as the revolutionary nature of
technological developments carry forward and reuniversalize the historical specificity of the category
human whose bounds they claim to surpass.
Our book addresses not just how technologies produce racialized populations demarcated for certain kinds of work, but also how race produces
technology in the sense that it is built into the imaginaries of innovation in engineering practice.26 To do so we build on and expand on the
work of scholars like Wendy Chun and Beth Coleman, who have proposed thinking about race as technology. Chun demonstrates that
conceptualizing race as a technology (not as an essence, but as a function) lets us see how “nature” and “culture” are bound together for
purposes of differentiating both living beings and things, and for differentiating subjects from objects.27 This formulation allows us to trace the
conceptual origins of race as a political category rooted in slavery and colonialism that has enduring legacies (both in terms of classifying people
and in terms of inequities). Similarly, Beth Coleman argues that conceptualizing race as a technology highlights the productive
work that race does (as a tool, race can in some contexts even be seen to work in ways that are separable from bodies).28 While such
reconceptualizations of race as a category are valuable, they do
not fully account for race as the condition of possibility
for the emergence of technology as an epistemological, political, and economic category within Euro-
American modernity. As such, technology undergirds the production of the human as separate from the machine, tool, or object.
Technology is a racial category in that it reiterates use, value, and productivity as mechanisms of
hierarchical differentiation and exploitation within racial capitalism.
Our focus on race and gender, and freedom and unfreedom, within the technoliberal logics that configure the aspirational temporality of
feeling human in the twenty-first century brings a critical race and ethnic studies perspective to the imaginary of historical progress that pins
hopes for achieving universal human freedom on technological development. Decolonial thought, critical race studies, and feminist science
studies, each of which has differently engaged post- and antihumanism to extend an analysis of the vitality and agency of objects and matter to
problematize the centrality of modern man in the field of the political, can thus productively be put into dialogue as a starting point for
theorizing technology beginning with difference. According to Alexander Weheliye, “the greatest contribution to critical thinking of black
studies—and critical ethnic studies more generally . . . [is] the transformation of the human into a heuristic model and not an ontological fait
accompli.”29 Weheliye argues that, given developments in biotechnology and informational media, it is crucial to bring this critical thought to
bear upon contemporary reflections on the human.30 As is well known, eighteenth- and nineteenth-century European colonialism, a structure
that instituted a global sliding scale of humanity through scientific notions about racial differences and hierarchies,
undergirded systematic enslavement and subjugation of nonwhite peoples to advance European capitalism and
the industrial revolution. Developed alongside and through the demands of colonialism, this scale designated a distinction
among human beings, not just between humans and animals, such that humanity was something to be achieved.31
Decolonization, Frantz Fanon wrote, is in this respect “quite simply the replacing of a certain ‘species’ of men by another ‘species’ of men.”32
At stake in the Fanonian concept of decolonial revolution is the reimagining of the human–thing relation as a precondition for freedom. This is
precisely the relation that the techno-revolutionary imaginary scaffolding technoliberalism fails to reenvision. This failure is due in part to the
fact that, at the same time that colonialism was without a doubt a project of dehumanization, as scholars like David Scott and Samera Esmeir
show, European colonialism through its discourses of technological innovation, progress, and civilization also aimed to “humanize” racialized
others.33
Engineering imaginaries about technological newness that propose to reimagine human form and function
through technological surrogates taking on dull, dirty, repetitive, and reproductive work associated with racialized, gendered,
enslaved, indentured, and colonized labor populations thus inherit the tension between humanization and
dehumanization at the heart of Western European and US imperial projects. On the one hand, there is a fear
that as technologies become more proximate to humans, inserting themselves into spheres of human activity, the
essence of humanity is lost. On the other hand, the fantasy is that as machines take on the sort of work that
degrades humans, humans can be freer than ever to pursue their maximum potential. As we postulate, this tension
arises because even as technoliberalism claims to surpass human raced and gendered differentiation, the
figuration of “humanity” following the post- of postracial and postgender brings forward a historically universalizing
category that writes over an ongoing differential achievement of the status of “the human.”

Impossible hospitality is life in hell. Social death is a constant everywhere war on Blackness enabled by the ethical positioning of Black life as vulnerable.
Castro, 21—assistant professor of political science at the University of Massachusetts Boston (Andrés
Fabián Henao, “Ontological Captivity: Toward a Black Radical Deconstruction of Being,” differences
(2021) 32 (3): 85–113, dml)

From Du Bois’s body “turned asunder” to Frantz Fanon’s body “spread-eagled, disjointed, redone, and draped in mourning,” we face what
Maldonado-Torres has called “the coloniality of being,” Warren has referred to as “onticide,” and Jackson conceptualizes as
“ontologized plasticity.”11 “Any ontology,” Fanon argued, “is made impossible in a colonized and acculturated society”
(Black 89). One’s socially marked body does not articulate a differential way of experiencing the world; rather, the color line that fixes
the body in the condition of being the problem turns it into the first cage, one by which the world is kept
away, apart from the self. Hannah Arendt analyzed such dehumanizing worldlessness through the concepts of uprootedness and
superfluousness, but she misplaced these conditions of colonial history in the totalitarian power of Nazi Germany (475). The uprootedness, that
is, not having a place in the world, and superfluousness, that is, not belonging to the world, by which Arendt distinguished totalitarianism
from tyranny can already be distinguished in the capacity of settler colonial capitalist logics of elimination
and exclusion to dehumanize natives and aliens. The territorial alienation to which native African
populations were subjected, when they were literally uprooted from their communities of origin and put first into dungeons and then
into slave ships, prepared the ground for their future superfluousness. Enslaved, Black people were subjected
to social death, which Patterson defines as “the permanent, violent domination of natally alienated and generally dishonored persons”
(13). Socially dead, Black people are lost to extrajudicial killings by the police and subjected to accelerated
rates of slow death via mass incarceration, among other forms of new Jim Crow segregation, that do not
register as losses for the Symbolic order of the settler colony. Death drops below the threshold of the
human and comes to define which losses are recognizably human, and thus collectively grievable, and
which can continue to be violated even postmortem.
Although he defined it as the problem of the twentieth century, Du Bois did not present the problem of the color line as an ontological
question, unlike, for instance, Nahum Chandler in X—The Problem of the Negro as a Problem for Thought (2014). In The Souls of Black Folk, the
color line remains a problem of recognition, more cogito than Dasein, more double consciousness than nonbeing. But Du Bois’s emphasis on
the problem gives the key to the existential turn in Black studies (see Gordon, Existence). From Fanon’s zone of nonbeing to Lewis Gordon’s
anti-Black violence, understanding the problem as the ability of the body to ask questions—the prayer with which Fanon ends Black Skin,
White Masks (1952)—situates colonial violence at the level of ontology (Gordon, Bad Faith). Gordon thus recognizes his debt to Du Bois when
he claims that Du Bois first understood the color line as a way to distinguish groups of people who “are studied as problems instead of as
people with problems” (“Problematic” 124). Problematic people do not pursue, unfold, or project through the active thinking or becoming that
an engagement with problems affords them. Rather, by being made into problems, they are pursued, fixed, and held captive. As Achille
Mbembe summarizes it, Black people are “trapped in a lesser form of being” (17).

From an Ethical to a Political Critique of Being


No one has given such analytic depth to the understanding of the coloniality of being as Maldonado-Torres. Undoubtedly influenced by Gordon,
given their similar investments in phenomenology, Heidegger, and Fanon, Maldonado-Torres focuses on the Latin American tradition of
decolonial theory. Turning especially to the work of Sylvia Wynter to rethink anti-Black violence in the context of the coloniality of power
(Wynter), Maldonado-Torres distinguishes coloniality, a term first introduced by Aníbal Quijano, from colonialism in that coloniality “refers to
long-standing patterns of power that emerged as a result of colonialism, but that define culture, labor, intersubjective relations, and knowledge
production well beyond the strict limits of colonial administrations” (243). Among those patterns of power, the naturalization of slavery holds a
unique status, as slavery comes to naturalize, that is to say, render stable and longstanding, an otherwise
undeclared war. As Maldonado-Torres puts it, in a definition that I would argue translates Patterson’s concept of social death into the
philosophical vocabulary of ontology: “Damnation, life in hell, refers here to modern forms of colonialism which constitute a
reality characterized by the naturalization of war by means of the naturalization of slavery, now justified
in relation to the very physical and ontological constitution of people—by virtue of ‘race’—and not to their faith or belief” (247). The
damné (an allusion to Fanon’s Les damnés de la terre, [the wretched of the earth]) are no ordinary Daseins. The ability of slavery to
naturalize a form of death otherwise only experienced in war, to guarantee the continuation of war by
other means (to play on Michel Foucault’s own play on Carl von Clausewitz), dispossesses the damné of an ontogenetic
death. When war is all there is, death, as Maldonado-Torres argues, “is not so much an individualizing factor as a constitutive feature of
reality”; death can be said to arrive “always too late, as it were, since death is already beside [the damné]” (251). Death, in other words, is what
colonialism so radically modifies when it turns ordinary the otherwise “extraordinary event of confronting mortality” (255). When
the
extraordinary death of war comes to be lived as the overwhelming reality of ordinary existence under
chattel slavery and slavery’s aftermath, death changes so radically that it becomes paradoxically “social.”

But is the damné, as Maldonado-Torres concludes when contrasting this figure with the European Dasein, simply “the being who is ‘not there’ ”
(253)? Would it not be more adequate to say that the damné is the being who is held captive “there”? How to confront the fact that the da, the
“there” that grants Being an ontogenetic space for difference to become otherwise, is the same da that holds the becoming of the damné in
perpetual check? There is no “there,” after all, that has not already been enclosed by racial capitalism. We are all, in other words, affected by
this history, even if we are all affected differently: not all of us are equally constructed as damné.

Fanon would have put it differently. In my view, he would have claimed not that the damné is “the being who is ‘not there,’ ” but that it is the
being who is there in the form of nonbeing. The settler house-habitat-polis is materially built to house the human who can dwell by holding
captive the Indigenous native and the Black alien whose “dark hands” must labor for the captor, for the captor’s enlarged Being, for the
captor’s ability to ask questions from the comfortable remove of their protective privative fences. The coloniality of being, in my view, needs a
more radical confrontation with the settler colonial dispossession of da that spatializes the temporality of Being’s unfolding. It is not that the
damné is not there, but that the whole of “there” has been enclosed, fenced, not only expropriated and reappropriated but subjected to a
private proprietorial relationship that slowly but steadily extinguishes any possibility for a commons. If Being can only take place within already
confined spaces, we are not all subjected to the same forms of confinement, nor are all spaces confined in the same way—including the body as
perhaps the space par excellence of Being’s motion and the first racial capitalist prison cell of ontological difference.

Warren’s notion of onticide registers the nonbeing that colonialism forces on Black people when the
settler colonial logic of exclusion reduces them to a fungible commodity. Enclosed in the commodity form, that is to
say, radically transformed into an object, the Negro is not the “being who is ‘not there,’ ” but “the quintessential tool Dasein uses” to
experience and establish “the facticity of its thrownness in the world” (Warren, Ontological 8). In the tool-like character of Warren’s
description, one can hear the echo of Spillers’s “being for the captor,” the resonance of Césaire’s equation of colonialism with “thingification”
(Discourse 42), and Fanon’s account of his experience of a “suffocating reification” when he is turned into “an object among objects” (Black 89).
Warren conceptualizes the fungibility of the object, recast in ontological terms, as “availableness,” that is to
say, as “a mode of existence dominated by internecine use and function” (Ontological 45). Jackson, by contrast, prefers
the vocabulary of plasticity, as ontological captivity does not merely confine what does not preexist capture but comes to plasticize the flesh
that it holds captive to pluralize its uses. Plasticity, Jackson argues, “is a mode of transmogrification whereby the fleshy being of blackness
is experimented with as if it were infinitely malleable lexical and biological matter, such that blackness is produced as sub/super/human at
once, a form where form shall not hold: potentially ‘everything and nothing’ at the register of ontology” (3).12 Racial
capitalism not
only forces Black people to be available for other people’s use but plasticizes their flesh to pluralize
those uses, uses for which they must also remain available. While Jackson and Warren are not saying the same thing, their
ways of understanding what it means to be for the captor complement each other.

In confronting such “availableness” and the “plasticization” that it presupposes, Warren finds Levinas’s ethical framework insufficient
(Ontological 197n29). Warren thus parts ways with Maldonado-Torres insofar as ethics presupposes an insufficiently interrogated political
relation. Plasticity and availableness both describe an instrumental relation that makes it impossible for Black people to be for the other, as
they are not for the “other,” but for the captor. Captivity, to put it differently, makes alterity impossible, as the difference
that racial capitalism forces on the captive body (what Fanon refers to as the racial schema that comes to overdetermine the
body schema) transforms any relation between a self and another into an instrumental relation of
domination. The captive is not just another version of the “other,” as there is no expectation of a response nor any face
to engage; there is only availability and plasticity to endlessly instrumentalize.

Vote negative to celebrate Black social life.


Mubirumusoke, 22—Intercollegiate Department of Africana Studies, Claremont McKenna College
(Mukasa, “The Black Home as Black Social Space/Time,” Black Hospitality, Chapter 3, 75-142,
SpringerLink, dml)
Moten’s illumination of the ‘irreducibility’ of black sociality plays a crucial role for the conception of the black home. With traditional accounts
of the liberal autonomous agent, the constitutive role of space, sociality, and even history, for the most part, go unaccounted for or
underappreciated. However, the concept of home already evokes a sense of sociality, materiality, spatiality and temporality. In the home,
one’s actions and decisions take place within an excessive economy of desires and inspiration that
precede and exceed any one place or time even though they evolve and revolve around a conception of location and
temporality. It is in this respect that I want to begin to suggest that the black home helps conceptualize the nature of blackness
and black selfhood against the transcendental autonomous subject that dominates modern conceptions
of agency, even though in the end these characteristics of home will be seen in a much different light.

In the realm of the political—that is, civil society, institutions of the state, and so on—the grammar of black
suffering is decidedly set upon exclusions and oppressions and, therefore, there is no true or tangible
sense of the black self when one aims to conceptualize it within the terms or the desires to participate in this realm.
However, at the level of the social, there is a sense of black life and a black self, even in these shadows, precisely
because black sociality is beyond individuation. The two are mutually constitutive: this self is exposed by a social communion,
which, of course, implies otherness. Consequently, through the black home, the paraontological relationship of the black self and black social
life to blackness come together, even though this home is never settled because it is black, and blackness is always unsettling. In
social life,
black people’s affirmation is undeniable, even amidst hardship and the terror of civil society. We might
even contend that black social life is something to be celebrated or, better, the very meaning of
celebration. In “Blackness and Nothingness” Moten provocatively introduces the idea of celebration to his conception of black thought. He
writes:

Our aim, even in the face of the brutally imposed difficulty of black life, is cause for celebration … [T]he
cause for celebration turns out to be the condition of possibility of black thought, which animates the
black operations that will produce the absolute overturning, the absolute turning of this
motherfucker out. Celebration is the essence of black thought, the animation of black operations,
which are in the first instance, our undercommon, underground, submarine sociality. (Moten 2018, 197; my
emphasis)

Now, Moten does not give any extensive articulation of what he may mean by celebration, or even who he may include using the possessive
pronoun ‘our’. However, the term may be a great way to start thinking of the self beyond individuation or individualism that we have in
mind.16 The possibility of black thought, of black sociality and selfhood, despite the circumstances that
civil society has placed on black people, is almost impossible to conceive and therefore its existence is
undoubtedly a means for celebration. And yet, Moten appears to contend that celebration is not simply
the reaction to black social life but in reality the essence of its possibility, that blackness as the condition
of possibility and black social life as its manifestation can only exist as celebration. In other words, possibility
as blackness, the “turning of this motherfucker out,” is not a reason for celebration but always is
celebration. Therefore, the existence or actions of the black self, as manifest in the fugitive movement of
black social life and the spacing/temporalizing of the black home, are not a means for celebration like a traditional
political subject may celebrate themselves after the purchasing of a home or the birth of a child. The black self through
black social life is celebration as the exuberant condition of possibility of always and necessarily being
be-side oneself together; it is a celebration so common, so mundane, there is not even the possibility of black celebrities.

Celebration, which is the ‘animation of black operations’ and black optimism as black thought, should not be
seen as an opposition to afropessimism, but may be helpfully distinguished from a concept of political hope,
afropessimism’s true antagonist. Modern political ontology may be conceptualized as a politics of hope,
a politics whereby the principles of modernity, which include the democratic nation state and the capitalist political
economy, have settled on hope as the coercive force that motivates political subjectivity. Hope for citizenship
and for economic success reifies a subject formation and a capacity that correlates with the despair of afropessimism as its explicit consensus
fluctuates in terms of a politics of politeness. Returning to his reflections on Arendt and her politics of consensus universalis Moten writes, “[If]
hope cannot be kept alive, this need not lead to despair since what is beyond hope, in terrible enjoyment, is an absolute sufficiency, an
irreducible optimism, given in more in less, in everything in nothing, as scheme and variation, critically anticipating, speculatively
accompanying, on the edge of arrival, never to return” (Moten 2018, 74). Black optimism, as irreducible as black sociality, lays
beyond hope, not in the sense as a further extension along the same trajectory, but in an expanse beyond trajectory,
beyond telos, an exonomy without return, a celebration that is so much more than hope. Hope cannot be
extracted from the nihilistic, solipsistic, individualistic milieu of the political ontology that overdetermines so much of
the experience of being human, of human beings, today. Celebration, as optimism and sociality, is not really a
transcendence of the temporal logic of hope, nor, as I understand it, is it an analogy for an emotional articulation of optimism;
that is, it is ‘not black joy’, at least in the colloquial sense of an existential reprieve from the toils and alienation
from society. The idea of black celebration may invoke a feeling, but it is not an affect, positive or negative. It is not an
escape from a day of toiling physical or affective labor, it is an escape into the night, it’s both big and small, life
before, after, and in-between death, it’s, in effect, a broadening of the impossible giving attention to those
other truths; it is an inauguration, an inaugural midnight dance that never ends and only absconds from
basement to basement, which in the end would be the most celebratory understanding of celebration,
since whether you’re happy, sad, or mad, it is always more, it’s always past midnight.
1NC --- Case
1. Paul Dunbar DA:
We wear the mask that grins and lies,
It hides our cheeks and shades our eyes,—
This debt we pay to human guile;
With torn and bleeding hearts we smile,
And mouth with myriad subtleties.

Why should the world be over-wise,
In counting all our tears and sighs?
Nay, let them only see us, while
We wear the mask.

We smile, but, O great Christ, our cries
To thee from tortured souls arise.
We sing, but oh the clay is vile
Beneath our feet, and long the mile;
But let the world dream otherwise,
We wear the mask!
Okello, et al, 20—Assistant Professor of Educational Leadership at the University of North Carolina
Wilmington (Wilson Kwamogi, with Stephen John Quaye, Associate Professor of Educational Studies at
Ohio State University. Courtney Allen, doctoral candidate of Educational Leadership at Miami University,
Kiaya Demere Carter, Diversity and Inclusion Specialist at DHL Express, and Shamika N. Karikari, doctoral
candidate of Educational Leadership at Miami University, ““We Wear the Mask”: Self-Definition as an
Approach to Healing From Racial Battle Fatigue,” Journal of College Student Development, Volume 61,
Number 4, Jul-Aug 2020, pp. 422-438, dml)

We frame the body after Hill (2017), as a mental, emotional, spiritual, physiological, and spatial construct,
always mediated through history. “The mask” becomes the various representations Black people wear to
be legible—that is, palatable—in the presence of whiteness (Frankenberg, 1997). Herein, we understand whiteness* as

a location of structural advantage, of race privilege. Second, it is a “standpoint,” a place from which white people look at [themselves], at
others, and at society. Third, “whiteness” refers to a set of cultural practices that are usually unmarked. (Frankenberg, 1997, p. 1)

Educational contexts, specifically higher education, are manifestations of whiteness, from their racist origins to their guiding principles (Wilder, 2013). In this way,
Black student affairs educators are intimately familiar with applying the mask as a shapeshifting maneuver (Cox, 2015) to negotiate higher education contexts.

Dunbar’s thesis is resolute and emblematic of an early expression of one of the tenets of critical race theory: racial realism. In 1992, Bell, coining the term racial realism, made a sweeping statement about the preeminent disease confronting Black people in a US context: “I
would urge that we begin . . . with a statement that many will wish to deny, but none can refute. It is this:
Black people will never gain full equality in this country” (p. 373). Bell offered a sobering analysis on the sustained condition
of Black existence across the diaspora that dates back to the very idea that some lives were incapable of
becoming anything other than property (Kendi, 2016; Sharpe, 2016).

Erasing the possibilities of humanity has had the trickling effect of shaping the degree to which Black
life-worlds understand wellness in a Western context. More specifically, the unceremonious, yet deeply familiar,
contestation that Black people bump up against—when they audaciously think themselves allowed to
move, think, feel, and live in a way that represents the citizenship they were awarded—is usually met with
some sense of disapproval by white people (Feagin, 2010; Kendi, 2016). In higher education settings, Black educators have persisted in spite of this disapproval, skillfully donning the mask to confront dehumanizing acts and behaviors. For this article, we asked: At what cost?

The surveillance of Black people’s space, time, energy, and movement can lead to persistent states of anxiety, stress, and racial battle fatigue.

Racial battle fatigue represents the visceral, psychological, and emotional effects that bear on the
material lives of Black people (Smith, 2004). Although researchers have chronicled the effects of racial battle fatigue (Smith, 2004, 2008a, 2008b; Smith, Yosso, & Solórzano, 2007) and have discussed strategies people may employ to resolve feelings of racial battle fatigue, most notably ideas for self-care (McGee & Stovall, 2015), we sought to address the conundrum of healing from racial battle fatigue.

Pointedly, if Bell’s (1992) thesis, in concert with the work of Hartman (1997) and Sharpe (2016), positions the afterlife of slavery as an
irreconciled event that is ongoing and permanent, what possibilities are there for Black people to heal (i.e., a holistic treatment of self,
return to wholeness) in and against institutions of higher education? If the assumption is that racism exists, and subsequently, racial battle fatigue is and will be an enduring sickness from it, what does healing look like for Black student affairs educators?
We trouble the notion of self-care, highlighting white racial frames (Feagin, 2010) that inform how Black student affairs educators survive racial battle fatigue.
White racial frames are combined racial stereotypes, metaphors and interpretive concepts, images, emotions and inclinations (Feagin, 2010) birthed out of the
materiality of racial oppression. This dominant framing has functioned to preserve whiteness and benefit white people.

Importantly, this examination is not an attempt to fault or place blame on Black educators. On the contrary, their actions and activities reflect the genius of Black
survivability. While that discussion is beyond the scope of this article, it is important both to notice and to affirm the historical, made present, ingenuity of Black
people. That said, in order to look more closely at the terrains of healing, we propose that movement toward such a place requires an alternative theoretical ground
for thinking about healing. That place is possible in and through what Okello (2018) termed self-definition.

LITERATURE REVIEW

Racial Battle Fatigue and Gendered Racism

Coates (2015) highlighted that racism is a visceral reaction felt on the body. Given the permanence of racism discussed above (Bell, 1992),
Black people navigate racism regularly. This continued exposure to racism over time leads to racial battle
fatigue, which has emotional/behavioral, psychological, and physiological stress responses (Smith, Hung, &
Franklin, 2011). The emotional or behavioral response describes how People of Color behave or react in the midst of racial battle fatigue. Coping strategies, such as overeating or loss of appetite, smoking, alcohol consumption, isolating from others,
and poor performance academically or at work are examples of emotional and behavioral responses to racial battle fatigue.
Psychological responses include anger, worry, disappointment, or disbelief, and the physiological stress
response describes how racial battle fatigue feels in one’s body, including headaches, chest pain,
shortness of breath, and inability to sleep.

Because the health and well-being of Black people are significantly depleted as they navigate racism, the
combination of these three responses makes it particularly difficult for them to accomplish the necessary tasks
needed during the day, such as concentrating on their work. Instead, Black people are spending their time and energy
working through racism, which leaves them with little else to give to more creative, life-affirming
activities (Smith, 2008a, 2008b).
2. “Revolutionary mathematics” is a sham.
Gardiner, 20—Professor of Sociology at the University of Western Ontario (Michael, “Automatic for
the People? Cybernetics and Left‐Accelerationism,” Constellations: An International Journal of Critical
and Democratic Theory, August 6, 2020, dml)

The notoriety of Project Cybersyn in Left‐accelerationist circles and beyond is perhaps not entirely surprising insofar as it is the best‐known
example of consciously deploying cybernetic principles for what were felt to be emancipatory ends, rather than the augmentation of state or
corporate power.3 Due to circumstances very much beyond its control, the system was never brought fully online, and of course we will never
know the directions in which it might have developed. However, thanks to Medina's detailed and exacting research, we have been made
aware of the sometimes yawning gap between Beer's vision and how it aligned politically with the undeniably
admirable goals of Chilean socialism, and the actual nature of Cybersyn's attempted implementation. For instance,
worker participation was token at best, and not an integral part of system design; engineers and factory
managers didn't really overcome their professional and class bias; and gender inequities with regard to design
and organizational management were barely acknowledged, never mind meaningfully addressed (see also Espejo,
2009, p. 79). However, we are less concerned here with the historical realities of Cybersyn or the specific features of Chilean socialism
than more general cybernetic principles and how they might lend support to any viable postcapitalist
transition. Put differently, to indulge in a spot of “immanent critique,” do the claims of Left‐accelerationist cybernetics
regarding enhanced possibilities for human freedom, solidarity, and autonomous self‐actualization
match the reality (or potential reality)? What is crucial vis‐à‐vis any such discussion is the (often implicit) suggestion, outlined in the
previous section, as to the qualitative differences between first‐ and second‐order cybernetics, together with the idea that Left criticisms
typically, and illegitimately, conflate the two. Rather than the use of negative feedback oriented to the maintenance of order by inhibiting
counteraction, so the
argument goes, second‐order cybernetics is concerned with positive feedback, working through
amplification and enhancement of the original signal, whereby the presence of complexity and chaotic
states demonstrates the non‐linearity of systems and their capacity for unpredictable change in the pursuit of open‐ended (but
self‐correcting) goal attainment. And yet, a careful examination of writings by the likes of Tiqqun or Châtelet demonstrates that they were
generally aware of different currents in cybernetic thinking, but nevertheless argue that, whatever
its ostensible methods and
goals, second‐order cybernetics promulgates a new regime of power and control that dovetails in many respects
with the requirements of today's supercharged technocapitalism. Going further, they intimate that even some
version of “cybernetic socialism,” with presumably novel human‐machine assemblages, might not necessarily
escape this morass.
Arguments concerning this shift to a new regime of power often make reference to one of Deleuze's late essays, or at least show its influence:
the brief but tantalizing “Postscript on Societies of Control.” In nuce, Deleuze's position is that the type of “disciplinary” society theorized by
Foucault, marked by various enclosures (schools, factories, military barracks, bureaucracies) wherein social behaviors were scrutinized and
minutely organized in space‐time so as to enhance their productive efficacy during an era of industrial capitalism, has been superseded by a
quite different system of ruling more relevant to the present situation of powerful global corporations and the centrality of the “knowledge
economy.” That is, whereas disciplinary societies concern themselves with a process of homogenizing subjectification largely through
panoptical means, by which compliant individuals are integrated seamlessly into the mass, control societies are post‐panoptical, and
rely instead on “ultrarapid forms of free‐floating control” (Deleuze, 1992, p. 4). Crucial with regard to the latter is the
continuous accumulation of statistical information via the elicitation of communicative exchange across
the entire social field. The focus ceases to be the atomized individual, but rather a numerically‐based assessment of the “dividual,” by
which Deleuze means a generically average subject made comprehensible through opinion surveys, sampling techniques, and market research.
Control is now exercised, not through hierarchical, top‐down management, much less by fostering techniques of hermeneutical
self‐examination, but the pattern analysis of myriad electronic traces and the subtle shaping (or “nudging”) of
micro‐behaviors via what Deleuze calls “universal modulation.” The key is that these environments are not
segmented and closed, but fluid and open, and that social actors participate in and maintain the system
dynamically through their own seemingly voluntaristic choices and actions, à la Lefebvre's “splendid impression of
spontaneity and harmony.”4
The relevance of Deleuze's “Postscript” to our concerns should be fairly obvious. First‐order cybernetics is in lockstep with the nature and
demands of what we might call late‐disciplinary societies. Second‐order cybernetics, by contrast, appears
more compatible with
progressive, even liberatory aims. An indication of this latter orientation is that many of the key figures in British
cybernetics situated themselves on the Left of the political spectrum, and cultivated non‐conformist and often explicitly
anti‐authoritarian interests, even if Beer himself was something of a “champagne socialist.” Yet, in embracing complexity,
contingency, and openness, second‐order cybernetics is not wholly immune to the mentality of control and
governance. Indeed, the types of non‐linear self‐organization as discussed by Deleuze are necessarily premised on
disequilibrium and chaos: the multiplication of horizontal, autonomously‐structured communicative
networks is the new mode of control, not any sort of emancipation from prevailing
systems of power. Control societies depend precisely on the constant elicitation of affects and
desires, as opposed to their repression or curtailment, provided they can be channeled into forms of communicative
action subject to ongoing surveillance and statistical quantification. In second‐order cybernetics, as Maroš Krivý (2018, p. 18) usefully puts it,
“power relations reproduce through proliferating indeterminacy, nonlinearity and complexity, rather
than by curbing these into determinate, linear and unidirectional forms.”
Writing from the perspective of the French context of the 1990s, but hardly irrelevant to our own era of “nudge theory,” smart cities, and the
like, Châtelet (2014, p. 23) suggests that the mania for incorporating
concepts of “chaos” and “self‐organization” into
what he regards as pseudo‐liberationist thinking was part and parcel of the intellectuals’ post‐1968 capitulation to
“market democracy.” The latter is foursquare in favor of the “right to difference,” calling for an end to
heavy‐handed state interference and concomitantly eulogizing social mobility and permanent “nomadism.” But that's
only because the neoliberal market itself loves fluidity, movement, and constant acceleration , seeking to
capture the “creative power of chaos” through a “cyberpolitics” that generates order out of the
disorder of self‐regulation. Authoritarianism of the obvious variety is replaced by the covert injunction to produce and consume
information, to subscribe enthusiastically to a universal “will to communicate.” Yet the encouragement to speak in the context of today's
“social (or “global”) factory,” to cooperate, to express one's “authentic” thoughts and feelings, is ultimately a coerced and deadening gesture.
For Châtelet, the “chaos of opinions and microdecisions” relies on a rhetoric of freedom via auto‐
emergence, but there is always an apparatus of control working discreetly behind the scenes, and hence a
crucial distinction to be made between powerful designers and operators and those being operated on. Since the conventional state
apparatus is now too slow and clumsy to respond effectively to the demands of the new fluid social ontology, scientific
management of political sovereignty is rendered much more palatable when presented in the guise of
refined “pressures exerted by an anonymous and nonlocalized entity” (p. 33). This constitutes a “ventriloquism” of
power‐effects operating through such ubiquities as globalized market forces, intermeshed communicative networks, and the relentless
organization of “public opinion.” Any particular social atom, the locus classicus of disciplinary societies, is irrelevant here; echoing
Deleuze, for Châtelet what's important is the modulation of network fluidity via “hydro‐cybernetics,” and the
effectuation of valuational equivalences across numerous domains through a universal system of inputs and outputs. Whereas the Young Turks
of the new cybernetic order (the children of Lefebvre's cybernanthropoi?) conflate horizontality with enhanced democracy, Châtelet is adamant
that the former does not in any way necessarily vouchsafe the latter. Indeed, horizontal
formations concentrate power in
vital nodal points, and are more effective for being anonymous and unseen, everywhere and nowhere
at once, in contrast with “overly visible verticalities” that might precipitate resentment and opposition.
The result is the “well‐mannered anarchism” of the market, which, unlike the “romantic” anarchism of old,
threatens no societal upheavals ‐ first, because geared towards optimal management of a coolly technocratic nature, but also
insofar as there is no worker “downtime” in an age of 24/7 networked production/consumption, and
hence little opportunity to foment dreams of revolt.
From the vantage‐point of the early 2000s, in The Cybernetic Hypothesis Tiqqun takes some of these arguments further. Although Cybersyn
isn't referenced directly here, they hone in on the technophile Left's contemporaneous fascination with cybernetic possibilities, anticipating
later positions advanced by the Left‐accelerationists and “fully automated luxury communists.” According to Tiqqun, the period of upheaval
around 1968 could be interpreted as the last reverberation of a cycle of struggles that dominated Western societies over the two previous
decades. Facing the manifold shocks of rising worker militancy, the energy crisis, and precipitously‐declining rates of profit, global
capitalism required full‐scale reconstruction, and, as discussed above, cybernetics fit the bill very well. However,
the logic of cybernetics appealed to certain technologically‐oriented critics of capitalism as well, such as those
advocating an “ecosocialism” premised on equilibrium and a steady‐state economy through decentralization and differentiation, especially in
light of the Club of Rome's famous 1972 document “Limits to Growth.” For Tiqqun (2020b, p. 98), however, this
represents a kind of
“social capitalism” seeking change through the democratization or socialization of the “decisions of
production,” as if a full‐blown post‐Fordist society could emerge spontaneously from a dispersed,
popular “collective intelligence.” As an example, a “new social contract” like universal basic income adopts the logic of the current
system's emphasis on “human capital” and the metaphysics of production. It is not incompatible with money, commodity exchange, or markets,
and would only free up more disposable income so as to accelerate the circulation of goods and information at the behest of processes of
value‐capture (see also Beech, 2019, p. 93). Ultimately, for Tiqqun this
would make the labor force itself more, rather than
less pliable. If the “new spirit of capitalism” is cybernetic to the core, so are “Left” solutions to the
present crisis that rely extensively on repurposing existing infrastructures, neoliberal subjective dispositions, and
logistics, so as to end up with a “communism of capital.” Or, to put it differently, any approach advocating the
“framing of the world in terms of problems” is not a genuine communist project, but in reality another
path to capitalism (Tiqqun, 2020b, p. 109; also Culp, 2018, p. 167). In this way, cybernetic capitalism has absorbed its
ostensible opponents into an overarching paradigm of social regulation governed by a managerial
reason, disposed to what The Invisible Committee (2015, p. 124) terms the “cult of the engineer,” that can serve the
political objectives of “Left” just as well as “Right.” Even Pickering (2010, p. 273) admits that Cybersyn could have been re‐
engineered by technicians and state functionaries of the Pinochet regime, and deployed to more nefarious ends than Beer would probably have
imagined, which is likely not the kind of “repurposing” Left‐accelerationists have in mind.

It is noteworthy that Alex Williams has written independently about the relationship between Deleuze's theory of control societies and
cybernetics, and it is therefore important to consider his arguments here. Rather than contrast the US and UK developments, and primarily
associate “first‐order” cybernetics with the former and “second‐order” with the latter (a convention we have followed here), Williams advances
a different set of distinctions. That is, he reserves the term first‐order for 20th‐century cybernetics in general, whatever the differences
between, say, Weiner or Beer (odd in light of his admiration for Cybersyn, which gets only passing mention here), and suggests second‐order is
a phrase better‐suited to the networked “platform” systems of the 21st‐century, such as Airbnb, Facebook, or Uber. First‐order cybernetics, by
Williams’ reckoning, follows the domineering control logic as characterized above: it aims to modulate action via recourse to homeostatic
equilibrium so as to realize pre‐set goals. In contrast, “platforms” are design architectures that work primarily not through constraint, but by
enabling actions through positive feedback circuits that cannot be prefigured in advance. Platforms, writes
Williams (2015, p. 223), are “materialised transcendentals – they act as conditions of possibility for other processes and entities to exist.” As
“entrenched” infrastructures they do restrain in certain ways (for example, Microsoft owns the vast majority of home computer operating
systems and forces users to conform to its licensing arrangements and surreptitious forms of data collection), but they also provide
the
ground for unpredictably contingent or “generative” outcomes, and hence contain hitherto‐untapped
potentialities for autonomous self‐organization outside the aegis of state and capital. Yet, Williams is
notably vague on what forms of such self‐organization might be possible here, or what exactly is being
“enhanced” through the utilization of such platforms in ways that might be considered “emancipatory,”
assuming this doesn't bolster the hegemonic power and virtual ubiquity of existing platforms . As argued
earlier, control systems work precisely through such “enhancements,” via the solicitation, reinforcement,
and augmentation of myriad desires and affects, so long as they can be successfully captured and “put to use.” “Platform
capitalism” emerged after the 2008 crisis, argues Sebastian Olma (2016, p. 171), because of capital's need to both create and exploit a situation
of permanent entrepreneurialism and precariousness in an era of falling profits, disinvestment, and declines in manufacturing productivity. In
other words, the harnessing of auto‐exploitation is integral to these systems’ very design, whereby “platform proletariats” are pauperized both
materially (participants in the “gig economy,” once time, expenses, and insurance are factored in, earn much less than even the minimum
wage) and in terms of a relentless degradation of skill and knowledge. As such, it's
difficult to see the liberatory potential
here, insofar as such platforms are essentially about extending market logic into any and all domains of
human life. In this context, Beer's algedonic meters, however crude or well‐intentioned, seem to anticipate today's omnipresent data
capture and the vast amounts of unpaid digital labor it exploits (Amazon user reviews, Facebook “likes,” etcetera), which are all forms of “soft”
coercion encouraging the formation of certain subjective dispositions in line with the demands of hyper‐productivity and acquisitive
consumption. Towards the end of the article, Williams belatedly suggests
that alternative platforms could be constructed
in the service of non‐capitalistic ends. Yet, it's far from clear how these “socialized” systems could ever
be designed and implemented, never mind constitute any sort of threat to the monopolistic, privately‐
owned platforms dominating Western societies today, and even if they were, such a scheme remains
vulnerable to the objections of Tiqqun et al. as to the foibles of “social capitalism.”

3. Governance turn. Revolutionary mathematics as a duty for AI is premised on and replicates exploitation of the global south.
Adams 21 [Rachel, Human Sciences Research Council, South Africa; Information Law and Policy
Centre, Institute of Advanced Legal Studies, University of London. “Can artificial intelligence be
decolonized?”. Interdisciplinary Science Reviews. Published: March 7, 2021. Available @
https://doi.org/10.1080/03080188.2020.1840225. Accessed: 8/30/2022//!PI!]
Ethics and the rationality of Empire
Computer scientist Timnit Gebru has stated that ethics is the ‘language du jour’ in AI discourse (2020; see also Ulnicane et al., this issue). Discursively, it is posited that through advancing ideas such as ‘AI for Good,’ ‘Fair and Responsible AI’ and ‘AI for Humanity’ (in France and Canada), particularly by incorporating these
aspirations and values within normative frameworks for ethical AI, discontent within the field can be
addressed and resolved. This has met with some criticism. Pratyusha Kalluri, for example, has pointed out that ‘‘fair’ and ‘good’ are
infinitely spacious words that any AI system can be squeezed into,’ and aptly stresses that the question AI ethics should be examining is how
power works through such systems and to what effect (2020; see also Crawford et al. 2019). In
addition, Greene, Hoffmann, and
Stark (2019) have emphasized how AI ethics assumes a universality of concerns which can be objectively
measured and addressed, summarizing the assumptions upon which the discourse is based as follows:

(a) the positive and negative impacts of AI are a matter of universal concern, (b) there is a shared
language of ethical concern across the species, and (c) those concerns can be addressed by
objectively measuring those impacts (2126). ‘This,’ they write with irony, ‘is a universalist project that brooks little
relativist interpretation’ (2019, 2126).

This is where the language of decoloniality has proffered some new thinking. In their piece on ‘Decolonial AI’,
Mohamed, Png, and Isaac (2020) have, amongst other propositions, advocated for dialogue between the AI metropoles and peripheries as a means of developing ‘intercultural ethics’ (17). Specifically, they write that dialogue can facilitate ‘reverse pedagogies’ wherein the metropoles can learn from the peripheries, and that ‘intercultural ethics
emphasizes the limitations and coloniality of universal ethics – dominant rather than inclusive ethical
frameworks – and finds an alternative in pluralism, pluriversal ethics and local designs’ (2020, 17). Sabelo
Mhlambi has taken this a step further by developing a framework for AI ethics based on the Nguni philosophy of Ubuntuism (2020).
Critiquing Western reason of rationality in shaping the philosophical terms on which AI and AI ethics is
dominantly conceived, Mhlambi details how the Sub-Sahara African notion of Ubuntu, which centres on
the relationality of personhood, can undergird a framework for addressing the two major challenges in
AI: surveillance capitalism and data colonialism (2020).
While this is important, the emerging discourse around ethics and decolonizing AI has yet to develop critical thought around the idea of ethics itself. As above, Mohamed, Png, and Isaac briefly note ‘the limitations and coloniality of universal ethics’ (2020, 17), but it is critical to understand precisely why the dominance of this particular version of
ethics – vested as it is in the history of Eurocentric thought around morality, legality/governance and
personhood – is so problematic, and what the effects might be of uncritically drawing decoloniality into
this discourse. Put differently, should decoloniality be subsumed as a new tool for AI ethics, without critique of the way in which the idea
of ethics has been historically put to work in rationalizing colonial practices (see Mbembe 2017, 12; Spivak 1988, 9), it runs the risk of not only
appropriating decoloniality as an abstract metaphor, as Tuck and Yang (2012) warned against, but also of reproducing the very logics of race
that colonialism instituted. Let us now more closely examine this problematic formulation, ‘AI ethics.’

In 2019, a study was published in Nature identifying over 84 ethical standards for the use and
development of AI developed globally in the last five years (Jobin, Ienca, and Vayena 2019; Ulnicane et
al., this issue). Despite being titled ‘The Global Landscape of AI Ethics Guidelines,’ amongst these 84 AI
ethics standards, none listed are from the African continent or even the Global South. Most were
developed in the United States, UK or by international institutions. Mohamed, Png, and Isaac (2020)
similarly note how national AI policies or strategies are almost exclusively found in the Global North, and
where efforts to develop a national policy around AI are arising in countries within the Global South,
this is being driven by supra-state bodies such as the World Economic Forum. As ethical benchmarks,
these standards are paternalistically positioned as universal: applicable for all, everywhere. In addition,
the scientific practice of promoting ethical AI through strengthening or testing the ‘fairness’ of AI
systems (the extent to which they exhibit social biases, in particular) performs a similar conceit in
presuming the scene of the Global South – or more specifically in this case, the African region – to be a
place where ‘ethics,’ as such, is yet to be fully established. Now a well-documented case (Ballim and Breckenridge 2018;
Arun 2020; Arun 2020), in 2018, when the issue of racial bias and the non-recognition of Black faces by AI-driven facial recognition technologies
was peaking following the work of Buolamwini and Gebru (2018),13 a Chinese facial recognition company signed a deal with the Zimbabwean
government for access to the records of the national population registry, which contained facial imagery of millions of Zimbabweans, to train the
company’s algorithmic technologies to better recognize Black faces. By reducing the potential for bias, the system would ultimately be more
ethical. While Ballim and Breckenridge (2018) condemn this incident for exploiting the inadequate data protection provisions in Zimbabwean
law, it is not all that different from the practice of beta-testing newly developed AI systems in African countries (Mohamed, Png, and Isaac
Calling it ‘ethics-dumping,’ Mohamed, Png, and Isaac point to the notorious company Cambridge Analytica as exemplar, in that it developed algorithmic systems for use in the US and UK by beta-testing them in Nigeria and Kenya (2020, 11). This follows the now centuries-old colonial conceit of what Jan Smuts euphemistically called the ‘laboratory of Africa’ (1930),
where the collateral damage of scientific advancement could be safely externalized to places and
people considered expendable (see Bonneuil 2000; Tilley 2011; Taylor 2019). Moreover, the
epistemological foundations of AI cannot be extricated from Francis Galton’s work in the development
of statistics – particularly on inference, regression, correlation, and the normal distribution curve –
which arose out of his explorations in Southern Africa, where he applied his statistical science to native
populations in order to measure human differences and intelligence (Breckenridge 2014, Chapter 1).

In these instances, the idea of ‘ethics’ is situated as the supreme value of the Occident, to be
proselytized on the Africa region, which is, in turn, and in relation to the ‘ethical West,’ positioned as
‘pre-ethical’ (Mbembe 2017, 49) – as a world apart. Indeed, that Europe believed itself to be ‘helping’
and ‘protecting’ its African colonies constituted the central creed of the civilizing mission of colonialism
(Césaire 2001); as Spivak reminds us, ethics ‘served and serves as [its] energetic and successful defense’
(1988, 5). Yet as ethics was put to work to justify both the civilizing mission of colonialism and the
utilization of Africa as a laboratory for Western scientific progress, it enacted another conceit of the
colonial order of things: that Western reason is neutral, universal, and objective; that it could be
dislocated from the context in which it arose and applied elsewhere. Positioned as a ‘point zero’ (Santiago Castro-
Gomez unpublished work, cited in Grosfoguel 2011, 6) from which to survey the world, Western knowledge and rationality claimed ascendency
as the only real way of knowing and understanding the world. This is a critical problematic within decolonial thought (Grosfoguel 2007; Ndlovu-
Gatsheni 2013), and a central assumption within AI: that intelligence and the production of knowledge can be outsourced to a machine
presupposes such knowledge to be both separable from the context in which it was produced and applicable to other contexts and realities.

Dividing practices
The production of ‘the world of apartness’ (Madlingozi 2018) takes place through what I am calling ‘dividing practices.’ In this
section I explore briefly the provenance of systems of enumeration, quantification, and classification within colonialism, and the ways in which
AI reproduces the divisive logics of race, before turning to critique the notion of intelligence in particular. I take the term from both Michel
Foucault who, writing on the production of the objectivization of the subject, speaks of ‘dividing practices’ which divide the subject from others
and within itself (1982, 777–778), as well as Edward Said’s critique, set out in Orientalism, of the dividing line – discursively formed – between
the Occidental and Oriental worlds, which the former ‘paradoxically presupposes and depends on’ (1978, 336). For both, power resides with
those who can make the catechistic decision to divide.

It is well-noted how AI systems sort personal data according to socially ascribed normative markers. At times, these markers are directly
racialized or gendered (Keyes 2018; see also Keyes, Hitzig, and Blell, this issue), such as a system that only allows women access to a female
changing room (Ni Loideain and Adams 2019). Other times, these markers may be implicitly biased, such as systems for targeting advertising
and policing based on postal codes (Benjamin 2019). These systems classify, sort, and rank personal data through processes of data collection,
curation, and annotation, using advanced statistical methods for modelling distribution and measuring correlation (as first developed by Galton)
in order to calculate risk, predict behaviour, and optimize the systems’ own functions. In these contexts, data assemblages constitute a
representation of the individual that are taken (by commercial and state power) as a sign of the real (Baudrillard 1994). Moreover, as Birhane
(2019) has pointed out, these systems of abstract representation work to further marginalize those who do not fit the ‘data-type’. Indeed,
Quijano (2017) has spoken of modern systems that function through identifying and classifying individuals as fundamentally ‘de-equalizing’,
presumably as the application of these practices to human subjects supposes a fixed, a priori and quantifiable difference, the social construction of which is forgotten.

As noted above, much has been published about how these systems reproduce social biases (see also Holzmeyer, this issue) with many
accounts noting the racial logic and imperial power at work (Buolamwini and Gebru 2018; Keyes 2018; Noble 2018; Benjamin 2019). However,
rather less examined in relation to practices of AI today, is the way in which these statistical systems were developed and appropriated within
former colonies to control and divide colonial subjects.14 Indeed, Said wrote that, ‘rhetorically speaking, Orientalism is absolutely anatomical
and enumerative: to use its vocabulary is to engage in the particularizing and dividing of things Oriental into manageable parts’ (1978, 72).
Similarly, in narrating the enumerative practices of colonialism in India – which he critiques as having both a disciplinary and pedagogical effect,
in delimiting colonial subjectivity and in training colonial administrators respectively – Appadurai writes: The link between colonialism and
orientalism […] is most strongly reinforced […] at the loci of enumeration, where bodies are counted, homogenized, and bounded by their extent. Thus the unruly body of the colonial subject (fasting, feasting, hookswinging, abluting, burning,
and bleeding) is recuperated through the language of numbers that allows these very bodies to be
brought back, now counted and accounted, for the humdrum projects of taxation, sanitation, education,
warfare, and loyalty. (1993, 334)

Enumeration and the production of statistical knowledge in the colonies performed a number of
functions, including entrenching and policing colonialist binaries of colonizer/colonized and their
derivatives, but also in enforcing divisions between colonial populations,15 and as a form of remote colonial rule.
On both a structural and individual level these colonial archives functioned as a kind of palimpsestic16 form of abstract representation that
were taken as a token of ‘radical realism’ (Said 1978, 72): a fixing of the ontology of the colonies and its people by Western knowledges, just as
the data assemblages of today work to fix individuals by taking their data as a sign of the real. Writing of forms of representation
at work within systems of racism, Mbembe speaks of a ‘will to representation [which] is at bottom a
will to destruction aiming to turn something violently into nothing’ (2019, 139). In this way, to
constitute something in the form of something else – something more manageable and more malleable
to forms of racializing power – consists of an essential and violent erasure of the original. Imperial
knowledge practices based on abstract and racialized representations constituted not only a way of
dividing the self from others and from itself, but worked to erase those who fell on the wrong side of the
dividing line through substituting them with their representation.
Comparably, Simon Gikandi chronicles the slave masters’ fastidious record-keeping of the actions of their slaves, such that this archive constituted the evidence of the latter’s objectification: ‘as chattel, as property, and indeed as the symbol of the barbarism that enabled white civilization and its modernist cravings’ (2015, 92). That Simone Browne now writes of data-driven surveillance
systems being put to work to surveil and bind Black lives in particular, as exacting the self – its body
and behaviour – to testify as evidence against itself, holds then, a critical provenance within the history
of the colonial management of blackness. The effect of these systems, such as AI-enabled biometric
technologies in public spaces, which Browne describes as reifying structures for racial difference, is to
produce an ‘ontological insecurity’ – an alienation within, or a dividing practice of, the racialized self
(2015, 109).
4. The aff would never get off the ground.
Moreno-Casas, et al, 22—Department of Applied Economics I, History and Economic Institutions
and Moral Philosophy, Social and Legal Sciences Faculty, Rey Juan Carlos University (Vicente, with Victor
Espinosa and William Wang, “The political economy of complexity: the case of Cyber-Communism,”
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4012265, dml)
Cyber-communism is based on computational complexity while advocating the control of the economy. Cockshott and Cottrell are explicit about
the goal of controlling the economy throughout all their works. That is to say, the dominant theme in cyber-communism political economy is
control, attributing a strong role to the government (table 2). This may challenge the complexity political economy we previously presented,
which energetically rejects control and supports cultivation. However, as we showed in the previous sub-section, cyber-communism
faces fundamental problems in abstract terms even from a computational complexity view, e.g., self-reference,
rendering central planning unfeasible. From this automatically follows that control is likewise impossible, which
disarticulates the core of the cyber-communist political economy. Nevertheless, there is still a risk that, from any
complexity grounded perspective similar to cybercommunism, one can think that the economy can dispense with elementary institutions such
as money or private property rights. Due to this risk, we will elaborate here the fundamental problem of cyber-communism in light of a
complexity political economy.

As can already be inferred, the main issue with cyber-communism is its disregard for institutions. It outlines a
socialist system in which private ownership of the means of production and money are abolished, and the state becomes the owner of the
means of production while labor certificates perform the role of money for consumer goods. The same happens to the figure of the
entrepreneur, which Cottrell and Cockshott want to replace by a combination of expert opinion and democratic methods. They
propose
this alternative social organization while barely analyzing the relevance of these institutions and the
possible consequences of substituting them. They assume that their designed institutions will work even
more efficiently than current market institutions. The reason for this disregard for institutions is their skepticism about the
evolutionary view of the economy. Cottrell and Cockshott (1997a) criticize Hayek for making superficial analogies and metaphors in economics
from biology. They assert that, while there can be some parallelisms, evolution in biology and evolution in economics differ because the
economy acts as a single processor, while this is not the case in biology due to the variety of species. Along the same lines, the authors then
conclude that one cannot affirm that the capitalist system results from evolution. They make clear that evolution is not the same as history, and
that capitalism is a historical result, not an evolutionary outcome. This is because an authentic evolutionary process, they contend, will require
a considerable number of simultaneous economic systems to compete, and, in history, we only had two systems that competed for a short
period of time, which is not a statistically valid sample.

Contrary to Cockshott and Cottrell, there is a vast literature on the economy as an evolutionary process, which precisely forms complexity
economics. Many authors have shown the advantages of taking metaphors from biological process rather than mechanical process to address
economic issues (Hodgson, 1995). One can apply the biological concept of diversity of species as diversity of products in economics to explain,
for instance, the cause of wealth (Koppl et al., 2015). Moreover, this evolutionary perspective has usually gone hand in hand with
institutionalism. Both combined allow to understand how change takes place in the economy through the evolution of its institutions,
conceived as transpersonal coordination mechanisms. These are two perspectives integrating complexity economics and have helped to
overcome the restrictive and unrealistic assumptions of neoclassical economics and traditional political economy.

Property rights, money, and entrepreneurship are institutions emerged through a long evolutionary
process. As such, they all embody a great amount of factual and tacit knowledge, which means that these institutions have not
been consciously created, but spontaneously emerged from the interaction of millions of individuals
(Hayek, 1973). They allow transpersonal coordination in notably populated complex systems, in the same way that language does (Horwitz,
1996). Consciously removing them or other relevant institutions from the economy, as cybercommunism
aims to do, can create harmful effects on coordination among agents, which can impair the emergent
process and algorithmic working of the economy.
From this discussion, it has to be clear that any political economy claiming to be based on complexity theory, as cyber-communism does, should
take institutions as central, and then cultivation as the dominant theme. It is contradictory to represent a complexity approach and not
account for institutions in the field of political economy.

5. Conclusion
The introduction of complexity theory into economics can result in paradigmatic shifts. This article has dealt with the main implications of
complexity theory for political economy from the two most widespread definitions of complexity in economics: dynamic and computational. In
this way, we have elaborated a complexity political economy focused on cultivating the economic system rather than controlling it.

Complexity theory shows that central planning of the economy is impossible, due to the nonequilibrium,
nonlinear, or even chaotic dynamics present in the economy. Ultimately, global controlling finds that: (1) optimal parameters cannot
be computable due to the problem of self-reference, and (2) the emergent processes feeding economic dynamics
are, by definition, not planned or controlled, but spontaneous or self-organized. These findings clash with
neoclassical political economy, which believes in effective control of the economy due to its equilibrium and mechanistic approach to economic
science. Consequently, complexity political economy moves away from mainstream political economy, from its focus on the control of
variables, and puts the spotlight on the cultivation of an environment, of institutions and transpersonal mechanisms, which allow the
algorithmic operation of the economy and the emergence of new processes. Cultivation, as the central concept for a complexity political
economy, warns us that the
economy is not a perfect mechanism that can be effectively manipulated without
causing harmful consequences, drawing our attention to take care of economic institutions such as
private property rights or money.
From this complexity political economy, we have analyzed the political economy of cybercommunism. As shown, despite relying on
technological advances in computation and simulation and even sharing a computational complexity view of the economy, cyber-communism
advocates 23 the control of the economy. In believing in global optimizations and computation, cybercommunist theory does not
realize the noncomputability of optimal outcomes, the problem of self-reference parallel to Gödel’s incompleteness theorems, and the
emergent, self-organized processes of the economy. At the same time, it does
not recognize the importance of institutions
such as private property rights or money and tries to manipulate them without accounting for the dire
consequences their control may have for the operation of the economic system. Thus, cybercommunism appears closer to the mainstream, traditional political economy of control than to a complexity political economy of cultivation. Ultimately, complexity theory and complexity political economy show that any implementation of the cyber-communist
ideal is doomed to failure.
This work also responds to a more general question: can advances in computer technology ease central planning? As shown, central
planning or global optimizations of the economy are not possible in light of complexity theory, not due to a
technological or practical issue, but due to ontological and epistemological reasons related to the
nature of complex systems and the cognitive limitations of the human mind . In this sense, this article may prompt
those who seek to control or model the economy through technology and computation from alternative perspectives to cyber-communism to
consider the noncomputability of optimal parameters and the emergent dynamics of the economy, which cannot be fully anticipated.
Additionally, the complexity political economy outlined here, emphasizing the notion of cultivation, can be used in future works on political
economy aiming to consistently follow the principles of complexity economics. It can also support a great deal of research on the role of
institutions as transpersonal mechanisms in the economy, such as agent-based models.
Block
Kritik
2. Racial capitalism: 1AC Verdegem proves they maintain the status quo capitalist economy. Turns case.
Pieter Verdegem 22, Senior Lecturer in Media Theory in the Westminster School of Media and
Communication and a member of the Communication and Media Research Institute (CAMRI), University
of Westminster, UK, 4-9-2022, "Dismantling AI capitalism: the commons as an alternative to the power
concentration of Big Tech", SpringerLink, https://link.springer.com/article/10.1007/s00146-022-01437-
8, //yeed

Imagining alternatives

Introducing the commons

In most simple terms, the commons are the natural and cultural resources that are accessible to all members of
society. What is typical about them is that they are held in common, instead of being owned privately
(Bollier 2014). Public debate about the commons has become more mainstream due to environmental
degradation and has been popularised—amongst others—by the first female winner (2009) of the Nobel Memorial Prize
in Economics, Elinor Ostrom. Her work includes Governing the Commons (Ostrom 1990), in which she refutes the Tragedy of
the Commons thesis (Hardin 1968). She has inspired thinking about the design and organisation of
cooperative alternatives beyond markets and states .
Ostrom, together with her colleague Hess, has also worked on extending the debate about commons to knowledge. Hess and Ostrom (2007)
approached knowledge as a complex ecosystem that operates as a common, similar to what Benkler (2006) theorised as commons-based peer
production. In a similar vein, others have been working on the
concept of digital commons, which refers to the
communal ownership and distribution of informational resources and technology (Birkinbine 2018). Taking the
ideas of knowledge and digital commons together opens up opportunities to inquire about alternative structures for
AI ownership and governance.
We are confronted with intense competition and concentration in AI capitalism, a situation similar to what has been labelled the enclosure of
the commons. According to Bollier (2014), the latter refers to a situation in which corporate interests appropriate our shared wealth and turn it
into expensive private commodities. This is happening also in the digital sphere, whereby platforms
control access to data and
increasingly enclose the digital world within their private sphere . Resisting this—by pushing for alternatives—can
be done by stressing the importance of data and AI as public goods, produced by society and its
members (Taylor 2016; Viljoen 2021). The important task then is to explore how the commons can be reclaimed .

While thinking about the commons has its roots in radical political economy, there is a disagreement
about what the end goal of its project should be. Some position the commons as an emergent value
system that has the potential to transform or even replace capitalism (Broumas 2017), while others
perceive the value of the commons in how it can respond to the excesses and exploitative tendencies of
capitalism (De Angelis 2017). As such, the commons are not per se a replacement of capitalism but
rather something that can co-exist and couple with capital circuits through the commodity firm.
Data commons

How can we think about the commons in the context of AI capitalism? First of all, we
need to conceptualise the data
commons. Bria (2018) defines data commons as a shared resource that enables citizens to contribute, access and
use data as a common good, without or with limited intellectual property restrictions. Instead of
considering data as a commodity or capital (Sadowski 2019), it can be thought of as a collective resource
(Viljoen 2021). As such, it can empower citizens and help them solve shared—common—problems.

The bigger picture of negotiation and agreements around data commons is part of calls for a New Deal on
Data (Bria 2018). A report of the Decode project explains what such a deal on data could entail (Bass et al. 2018): First, there is a
need to push for more transparency, accountability and trust in data projects ; Second, individuals should
be given more control and people should be empowered to decide how their data is collected and used ;
and, Last, it should be an important ambition to unlock more value of data as a common good while
protecting people’s privacy and encouraging fair terms of use .

Of course, there are questions about how to practically organise this. A lot of inspiring work on the data commons
proposes solutions in terms of data infrastructure and data trusts (Coyle 2020). A new data infrastructure
should help dealing with institutional and regulatory aspects of how data can be shared, what standards
and policies should be set up and which organisations and communities should be involved in
contributing to and maintaining this data infrastructure . One approach for an innovative data
infrastructure has been developed and trialled in several countries: data trusts. Data trusts can exist in
many forms and models but the general principle is that they sit between an individual generating data
and a company or institution wanting to use that data (Delacroix and Lawrence 2019). In this system, control over
data is transferred to a third party, which can use the data for pre-defined purposes . Data trusts can use
data from different sources and allow to steward data use for all. Important in its governance is data
solidarity, meaning that corporate and public data shareholders share the benefits and risks of data access and
production (Bunz and Vrikki 2022). Coming up with a system for sharing and giving access to data does not only benefit
society; it is also necessary for AI innovation (Hall and Pesenti 2017).
Compute capacity for the commons

Compute capacity is the second element of a commons approach, as an alternative to the power
concentration of AI capitalism. Some even position computing infrastructure as part of the data
commons itself (Grossman et al. 2016). I discussed already how crucial computing power is for the development of AI. Only Big Tech
(and some elite universities) have the resources to upgrade their infrastructure—contributing to an AI
compute divide (Ahmed and Wahed 2020)—while leading AI companies collect rent from and keep control over
what is happening on their compute infrastructure (Srnicek 2019). As an alternative, investments in common/public
compute capacity could help society becoming less dependent on the private infrastructure of Big Tech.

While the corporate sector often claims that public investment stifles innovation, Mazzucato (2013) debunks this myth and actually argues that
the radical technologies behind, for example, the iPhone (e.g., GPS, touch screen display and Siri) were
all backed by government funding. Another example is Google’s search algorithm, which was publicly
funded through the National Science Foundation (NSF).
The first supercomputers were used by universities (in the US and the UK) and governments should consider pooling (more) resources to invest
in (national or international) compute capacity that will drive the future of AI. Common
investment in AI compute capacity
will also help to democratise AI (Riedl 2020), meaning that more people and organisations can be involved in
developing AI systems. This is particularly relevant for quantum computing, which is considered crucial for
revolutionary breakthroughs in the future of AI—the so-called quantum AI (Taylor 2020). Public/common
investment in computing infrastructure could also mean a de-commodification of compute capacity
and create a new public service that can be made available to society, accessible to different
organisations, companies and interest groups.
A commons approach to AI human capital

While not often considered as part of the data commons, an argument can be made about common
investment in AI human capital too. Having an upgraded computer infrastructure is one thing, AI human capital—the AI talent and
human resources that are necessary to develop AI innovations—is as important.

Given the high level of specialisation, success in research on machine/deep learning is dependent on
people who have accumulated large expertise through formal training (e.g., PhD) or years of applied work
(Ahmed and Wahed 2020). As a result, there is a growing gap between the increasing demand for AI expertise and
the limited supply, resulting in a talent scarcity (Metz 2017).

A commons approach to AI human capital would, for example, include providing more funding for
public IT services and universities allowing them, respectively, to reduce outsourcing and facilitate more
research labs to keep their faculty members instead of being recruited by larger, corporate,
organisations with deep pockets.

Towards an alternative political economy of AI

Investment in public infrastructure and resources can support commons-based economies and models
of organisation which allow to depart from an incentive structure focused on value creation rather than
value extraction (Kostakis and Bauwens 2014). However, this depends on new regimes in terms of ownership, control and governance.

First, a central aspect of envisioning an alternative political economy of AI is rethinking ownership.


Regulation is often proposed as a strategy to limit the market/monopoly power of Big Tech (Posner and Weyl 2018). Competition and
antitrust law, for example, could be used to break up the AI/tech giants. However, such a strategy might be counter-productive, as the power of, for example, social media platforms is that they connect everyone in society.
Common ownership might be an alternative approach that could be more productive (Kostakis and Bauwens
2014). There is a solid case for placing the technologies producing AI in public and collective ownership . It
would mean that communities have more control over how AI is produced and how the public can benefit
from its services. The end goal is to have a digital infrastructure that is available to and provides
advantages for a broad range of stakeholders in society, not just the AI behemoths.

Second, related to ownership is the aspect of promoting common governance. The goal here is the democratisation of
AI, and this requires the decentralisation of power back into the hands of the public (Posner and Weyl 2018; Riedl
2020). If we consider AI as a GPT, which will alter the structures of society, we need to make sure there is democratic
oversight and control. After all, we have installed regulators that have the power to protect the interests
of citizens in other sectors, such as postal services, electricity, broadcasting and telecommunication . The
services provided by AI are so crucial to everyday life that it is necessary for society to have a greater say over them.

Inspiration
for alternative structures in terms of ownership, control and governance can be found in the
platform cooperativism model (Scholz 2017), which allows involvement from multiple stakeholders in the
ownership, development and management of platforms .

Finally, we
need to come up with a new vocabulary when thinking about AI systems and how they deliver
benefits to society. Instead of corporate discourses portraying AI as Tech for Good, boosting innovation
and entrepreneurship, it makes sense to perceive AI infrastructures as a computational utility, subject
to democratic control (Mosco 2017). Dyer-Witheford and colleagues (2019) elaborate on this and push for considering AI as a
communal utility. This means that communities and workers should be involved in determining what sort of
work should or should not be automated, and thus call for a genuine determination by the general
intellect in the design of AI. In this general intellect, collective cooperation and knowledge become a source of
value (Terranova 2000). The proposed principles of common ownership and governance should be central in developing AI as a communal
utility.

1. Impossible hospitality outweighs. You should flip traditional procedures of impact calculus and compare impacts from the perspective of the damné, the wretched of the earth.
Colebrook, 21—Edwin Erle Sparks Professor of English at Pennsylvania State University (Claire, “Can
Theory End the World?,” symploke, Volume 29, Numbers 1-2, 2021, pp. 521-534, dml)

Finally, playing the game of theory sustains the world. How to end the world, and open another game, and not
do so in the grand style? It amounts to this : I live and am constituted through this world of theory and yet
know it is neither just nor capable of generating justice from its own resources. Too many chances have been
given, and still the barbarism. Decades of theory and still , here we are in an age of accelerated mass extinctions
and exacerbated micro and macro aggressions. It is all too easy for me, from within the privileged space of theory, to
say it’s not worth saving; but it is perhaps a worse violence to pretend that this world must
be saved. Given that being who one is requires holding on to one’s world, it would be best for theory to accept that its
world is ending, and that it cannot and should not be saved. It can no longer be a question of saving the
world for theory, or saving theory for the sake of the world. What is left is something like a minimal theory: other than the project of
saving the world what remains is the decency of ending the world of theory well. Do “we” hang on to the
world we have, keep going as long as we can, and eke out some end days? I think there are some ways in which
theory has the resources for the end of the world, but only if it recognizes how much of it is bankrupt
and complicit—how much it has been saving itself and its world—and how much other worlds offer.
Conclusion

The truth of the relative. Rather than think of exiting theory to find THE truth of some other world, it is possible to draw from theory to think the truth of worlds. This would not be the relativism of truth but the truth of the relative. What might it be like to look at “the world” from a point of view in which it has no value? Such a project would be counter-apocalyptic. Rather than pre-emptively mourning the world we have now, such that the very possibility of its non-being elicits a
desire to save the world at all costs, one might imagine looking at “the world” from the point of view of
those for whom it has no value. This is not as metaphysically audacious as it sounds; it happens all the time.
There is certainly a world in which theory does not matter, in which the type of thinking and questioning one finds in theory does not matter.
This end of the world is theory somehow rendering itself parochial, and perhaps approaching modes of theory in which what “we” do as theory
seems oddly mythic, which of course it is. I think the path towards this counter-theory or para-theory or hyper-theory is multiple: by
thinking
of those for whom this world does not matter—the wretched of the earth—by thinking of the capacity
within this world to imagine another “we” or another “us,” and then perhaps also imagining that this world
that has saved itself at all costs in order to become the world takes up a minor role in the worlds of the
cosmos.
(4) Repetition Compulsion – their model of debate indoctrinates niggas into the
interpassivity of hallucinatory whiteness that anxiously blackmails us to “act” with full
knowledge that nothing changes – refuse to participate in their psycho-activist
blackmail
Sexton & Barber 17 (Jared Sexton is an Associate Professor of African American Studies and Film and Media Studies at the
University of California, Daniel Colucciello Barber is an Assistant Professor of Philosophy and Religious Studies at Pace University, PhD from
Duke University, “ON BLACK NEGATIVITY, OR THE AFFIRMATION OF NOTHING,” Society and Space, [AB])

So, Fanon moves initially from this deceptively recognizable psycho-political activist guideline, where the unreason of
alienated compliance gives way to the reason of disalienated resistance, to a parenthetical clinical modulation, where he no longer seeks to enable action per se, and action in a particular direction at that, but rather decision; decision in the proper sense, rather
than the forced choice , the vel , of hallucinatory whiteness : “turn white or disappear.” No decision can
be made within the terms of a forced choice , Fanon reveals, only a decision about the terms of its
imposition . (Aside: the Philcox translation has it as: “whiten or perish.” I like the Markmann phrasing better here because it stays with the dynamics of hyper/in/visibility that Fanon
is exploring, the peculiar problem of overdetermination from without, which is to say of anti-black racialization, of victimized appearance, but also of a certain ethics or aesthetics of
disappearance that we can glean from a reading of Fanon. Kara Keeling (2007) and Huey Copeland (2013) and Simone Browne (2015) have elaborated on this nexus generatively in their respective work.) Wilderson’s question was to the effect of: What would a properly decided, freely chosen,
passivity toward the social structure look like? Is there such a thing—ethically, politically—as radical
passivity ? (I ended my first book with a slightly modified reference from Thomas Carl Wall’s (1999) text bearing that very title. I wonder about this genuinely still and tend to think,
yes, there is such a thing.) Žižek, to take another well-known example, has played on the pop psychological notion of “passive aggressive behavior” in his withering critique of so much leftist activism today. In The Parallax View, he writes: perhaps, one should assert this attitude of passive aggressivity as a proper
radical political gesture, in contrast to aggressive passivity , the standard ‘interpassive’ mode of our
participation in socio-ideological life in which we are active all the time in order to make it sure that
nothing will happen , that nothing will really change . In such a constellation, the first truly critical
(‘aggressive’, violent) step is to withdraw into passivity , to refuse to participate —Bartleby’s ‘I would
prefer not to’ is the necessary first step which as it were clears the ground for a true activity , for an act
that will effectively change the coordinates of the constellation (Žižek, 2009: 342). Now, Zizek’s “Bartleby politics”
are obviously not quietist, insofar as they are meant to prepare the way for a true political act . (Frédéric Neyrat
[2014] has a related conception: “Rather than its heart, passivity should be the skin of politics. Without passivity, without a ‘negative capability,’ to refer to Keats’s notion, there isn’t any creative imagination, this chaotic imagination that generates the promises of new worlds.” And, not for nothing, Hortense Spillers (2003) makes another,
earlier argument for “ negative capability ” in a pair of essays first published in the 1990s, “The Crisis of the Negro Intellectual: A Post-Date” and “All The
Things You Could Be By Now If Sigmund Freud’s Wife Was Your Mother: Psychoanalysis and Race.” But the interregnum that opens up between the
frenetic , aggressively passive “activism” of the current socio-ideological constellation— in which “the
anxious expectation that nothing will happen ” competes with “the desperate demand to do
something ”—and that new constellation brought into being by the introduction of some fundamental
indeterminancy —a negativity that is, as you rightly note above, strictly unfathomable —that
interregnum would seem to require the cultivation of an oxymoronic passive activity . Does it make sense to speak of
a need for “passivism” (not to be confused with the homophonic term “pacifism”)? Think of the performative contradiction of trying to relax; the harder you try to attain it, the more it evades you. As every athlete worth their salt knows, your best
performance requires your least effort. The more you relax, the more intensely you can exert yourself . In
this scenario, you do more the less you try.
Case
Only an alternative thought of speaking from the “South” and centering Afrocentric IR
allows a counter “productive” reading of history that allows for life-affirming thought
that refuses the vectors of colonization and Western Rationality. The process of
making link arguments is the first step of decolonial thought because it problematizes
AI and its underpinnings rooted in Western Reason.
Adams 21 [Rachel, Human Sciences Research Council, South Africa; Information Law and Policy Centre, Institute of Advanced Legal Studies, University of London. “Can artificial intelligence be
decolonized?”. Interdisciplinary Science Reviews. Published: March 7, 2021. Available @
https://doi.org/10.1080/03080188.2020.1840225. Accessed: 8/30/2022//!PI!]

Decolonial thought is far more than a tool to problematize AI. It is an invocation to make intelligible, to
critique, and to seek to undo the logics and politics of race and coloniality that continue to operate in
technologies and imaginaries associated with AI in ways that exclude, delimit, and degrade other ways
of knowing, living, and being that do not align with the hegemony of Western reason. It is located and specific.
It is about the production of race and divided worlds; it is about power and the precise effects of power on being in the world today; it is about
knowledge and how knowledge is ascribed legitimacy and value; and it is about a politics of resistance that enters and undoes the object of its
critique. This includes, as I have outlined above in relation to ethics in particular, the discourses that rationalize and obscure the history and
effects of AI. In addition to those explored here, there are many other ways in which decoloniality must be brought to bear on the field and
practice of AI,18 such as the invisibilizing labour that sustains the industry, its utilization of traditional gender binaries within systems and
products (Adams 2019), the biopolitical intersection of gender and race in the anthropocentric production of machines, and the links between
AI and contemporary modalities of capitalist modernity. Indeed, much more work is needed to fully understand the entanglement of AI with
coloniality and the pathologies of race.

However, in drawing on the discourse of decoloniality, critics of AI must resist the sublimation of
decoloniality as another rationality that justifies and legitimates AI. To do so re-performs the very
abstractions and disembodiment of thought that decoloniality seeks to resist. To be clear, if the decree
to decolonize AI is not addressing race and historically embedded forms of Occidental power over, nor
seeking to rupture the epistemological and teleological assumptions of the discipline and related fields
from within a historical reading of their formation and appropriation within colonial regimes, then
decolonization is being misappropriated as a metaphor, and its usage in the discourse has become a
part of that to which decoloniality proper must address in its critique . This becomes ever more critical where
surveillant AI technologies are being used to thwart decolonial resistance to racism and neo-imperial power (Ndlovu-Gatsheni 2020).

Further, it does not go far enough to restate that AI is having a racializing effect, or that its ubiquitous power throughout the world is hegemonic and neo-imperialistic. If colonial modes of power over and dividing practices of racism are being re-instituted through AI behind the veil of technocracy, what is the precise form of this re-institution of
race and colonialism? How can AI be located within the longue durée of colonialism and race? Through examinations of the critical
histories of the assumptions upon which the field is based and the knowledge practices in which it engages, a new understanding of its effects
of, on, and through power can emerge which can create the space for other localized and culturally diverse ways of understanding and doing AI,
such as those being explored by Alan Blackwell (this issue). However, these critical histories, as briefly set out above, also point to another
consternation: Rather than reproducing the logics of race and reaffirming the legacies of colonialism, AI depends upon them. Recall Said’s
prescription that the dividing line between the Occidental and Oriental worlds was ‘paradoxically presuppose[d] and depend[ed] on’ by the
West (1978, 336). As such, the task of decolonizing AI requires, as I have argued here, a critique of the ways in which AI is made possible by, and depends upon, colonial forms of power and the dividing practices
of racialization. But Said intimates another site of meaning by suggesting that the presupposition and
dependence of Western power on racialization and colonialism is paradoxical, thus leading to an
impasse to which Western reason has no answer. Within AI, these aporias can be glimpsed in industry’s
dogmatic pursuit of simulated intelligence, which latently affirms that intelligence is not hereditary but
environmental; and in the industry’s manipulation of behaviour, attention, and thought, indicating the
breakdown of the autonomous Cartesian self which, paradoxically, ideas about intelligence and
automation in AI are modelled on. For postcolonial thinkers, it is precisely in these impassive moments
that the Western equation of thought becomes unbalanced, that a politics of speaking from the South
can regenerate as productive and life-affirming (Spivak 1988; Mbembe 2017), thus articulating the
otherwises and elsewheres of ‘a humanity made to the measure of the world’ (Césaire 2001, 73).
Indeed, recognizing that the undoing of coloniality was always a possibility of the original event of conquest – that it always could have been
otherwise – simultaneously reinforces the possibility of different futures to come. With the global acceleration of AI and its supporting
discourses, it is becoming increasingly difficult to imagine a future in which it is not dominant. Within these imagined future scenarios, AI constitutes another step in the evolution of humanity’s triumph over the world. Contained within this imaginary are neo-Darwinian notions that those left behind by the technological revolution are not worthy of the new
world. If it risks leaving so many behind, can AI, as currently imagined, ever be ‘good,’ ‘benevolent,’ or ‘decolonial’? To speak of decolonizing AI not only then contains the imperative to collectively reimagine a multifarious world space and ask whether AI can be ascribed a role within, and conducive to, this new
imagining – but also to be imaginative enough to conceive of a future without AI.
