Philosophy & Technology

https://doi.org/10.1007/s13347-019-00363-w
RESEARCH ARTICLE

Dissecting the Algorithmic Leviathan: On the Socio-Political Anatomy of Algorithmic Governance

Pascal D. König¹

Received: 7 June 2018 / Accepted: 1 July 2019

© Springer Nature B.V. 2019

Abstract
A growing literature is taking an institutionalist and governance perspective on how
algorithms shape society based on unprecedented capacities for managing social
complexity. Algorithmic governance altogether emerges as a novel and distinctive kind
of societal steering. It appears to transcend established categories and modes of
governance—and thus seems to call for new ways of thinking about how social
relations can be regulated and ordered. However, as this paper argues, despite its novel
way of realizing outcomes of collective steering and coordination, it can nevertheless
be grasped with an old and fundamental figure in political philosophy: that of Thomas
Hobbes’ Leviathan. Comparing algorithmic governance with this figure serves to
highlight their similarities as socio-political arrangements, and specifically to clarify
how algorithmic governance parallels the apolitical traits of the Leviathan—it eliminates
the political as it requires compliance and forgoing contestation to best fulfill its
role and to produce satisfying outcomes.

Keywords Governance · Algorithms · Regulation · Coordination · Collective action · Complexity · Thomas Hobbes

1 Introduction

With the proliferation of highly adaptive algorithmic decision-making systems, there
has been a surging interest in the social and political role of such applications. There are
two ways in which a political dimension of algorithmic decision-making systems
directly comes into view. First, they are taken up by political actors as tools in their
governing activities (e.g., Margetts and Dunleavy 2013; Williamson 2014; Dunleavy

* Pascal D. König
pascal.koenig@sowi.uni-kl.de

1 Department of Social Sciences, University of Kaiserslautern, Erwin-Schrödinger-Straße, Building 57, PO-Box 3049, 67653 Kaiserslautern, Germany

2016; Veale et al. 2018) and for furthering party-political goals (Hersh 2015; Bimber
2014). Second, algorithmic systems are discussed as the object of political action, with
a rapidly increasing number of studies addressing the question of how they can and should
be regulated, given the potentially far-reaching societal impact of some applications
(e.g., Pentland 2013; Newell and Marabelli 2015; Danaher 2016; Mittelstadt et al.
2016; Wachter et al. 2017; de Laat 2017; Pagallo 2017).
However, algorithms are also said to have a political dimension in a more general
sense in light of their use in automated and highly adaptive decision-making systems
that enable novel forms of coordination . Contributions largely from the fields of media
and legal studies have described them as institutions that structure behaviors and
intervene into social order, and they have thus conferred algorithms a political status
(Beer 2009; Bucher 2012; van Dijck 2013; Gillespie 2014; Napoli 2014). In a similar
vein, some scholars have referred to algorithm-based coordination as a form of
governance that intervenes into society and culture through shaping constructions of
social reality (Just and Latzer 2017; Kitchin 2014b; Danaher 2016; Hofmann et al.
2017; Leszczynski 2016; Yeung 2017b).
Existing contributions underline the special character of algorithmic governance and
its unprecedented capacities for coping with complex coordination tasks. As a form of
social steering, algorithmic governance seems to stand out among other established
forms of governance. It would thus seem that existing categories and concepts are
inadequate for getting a proper grasp of algorithmic governance and of the kind of
social ordering it entails. However, as will be argued below, while algorithmic gover-
nance indeed does amount to a novel mode of achieving social coordination outcomes,
it can nevertheless be aligned with a template and figure from political philosophy that
is quite old.
Drawing on the writings by Hobbes, this paper shows how major features of
algorithmic governance as a way of shaping social order and facilitating social coor-
dination parallel those of the Leviathan. The value of probing this analogy lies in
getting a sharper picture of what kind of social order algorithmic governance materi-
alizes. While a governance perspective directs our view to the potentially complex
mechanisms with which social steering can take place, the comparison with Hobbes
foregrounds how algorithmic governance—being more than a mere instrument—
inherently establishes certain socio-political relations and roles. Specifically, it serves
to sharpen our view of how algorithmic governance remains apolitical at its core in the
same sense as Hobbes’ sovereign does.
Despite the formidable degree of responsivity that algorithmic governance can
achieve as it answers to the individual inputs that it receives, this does not equal the
kind of responsiveness and accountability of democratic rule. Rather, like Hobbes’
Leviathan, it promises to produce certain outcomes that are not attainable otherwise in
exchange for compliance. It eliminates what is essential for the political because
contestation and challenges to the way in which outcomes are brought about have no
place in that process. Recognizing this apolitical heart of algorithmic governance and
its mode of ordering society is important because it nonetheless plays a political role to
the degree that it exerts public power, and because this kind of power is never neutral as
it necessarily embodies certain values and objectives.
Before turning to the comparison between algorithmic governance and Hobbes’
Leviathan, the following, second section will first characterize algorithmic governance
as a way of ordering society by combining insights from existing contributions.
Section 3 seeks to further distinguish algorithmic coordination by showing in what
sense it appears as a novel and distinctive form of governance. The fourth section then
goes on to show how algorithmic governance can be made sense of based on Hobbes’
notion of the Leviathan, before closing with a conclusion in Section 5.

2 Characteristics of Algorithmic Governance

The emergence of algorithmic governance is tied to advances in digital technologies,
particularly since the 1990s, which have massively enhanced the capacities for
information-based steering of activities and for dealing with complex coordination
problems. The explosion in the abilities to generate, transfer, store, and process information
has made possible forms of networked interaction characterized by a co-presence of
manifold entities which can dynamically adjust their behaviors. Moreover, the massive
amounts of fine-grained information about distributed entities allow for finding
patterns in their behaviors and interactions and for producing insights that can be used
for the purpose of better understanding and ultimately coordinating these behaviors.
Forms of steering that make use of these technological capacities are discussed
under the label of algorithmic governance or regulation. This kind of governance is
generally seen as a special mode of social steering which promises a tremendous
increase in the ability to manage social complexity. A concrete vision of how this kind
of governance can be implemented in order to make use of this ability is that of so-
called smart cities. In the context of smart cities, algorithmic decision-making systems
are supposed to foster an intelligent management of city resources and processes, e.g.,
in the areas of energy, traffic, schooling, or crime, but also to better include as well as to
serve citizens and their demands (Kitchin 2014b; Brauneis and Goodman 2017; Meijer
and Bolívar 2016). Taking the example of traffic management, algorithmic governance
promises to realize an optimized city transportation system via the steering of car
traffic, e.g., by setting traffic lights and guiding drivers, and through dynamically
adapting public transport in line with actual demand (Dunleavy 2016; Coletta and
Kitchin 2017). This kind of intelligent traffic management could even include individ-
ually and dynamically directing drivers to parking spots in a fashion that optimizes the
overall costs of time and energy.
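The coordination task in this parking example can be made concrete with a toy sketch. All names and travel times below are invented for illustration and are not drawn from the cited studies; the point is only that the system optimizes the collective outcome rather than each driver's nearest option:

```python
from itertools import permutations

# Hypothetical travel times (minutes) from each driver to each free spot.
costs = {
    "driver_a": {"spot_1": 2, "spot_2": 9, "spot_3": 4},
    "driver_b": {"spot_1": 3, "spot_2": 1, "spot_3": 8},
    "driver_c": {"spot_1": 5, "spot_2": 7, "spot_3": 2},
}

def optimal_assignment(costs):
    """Exhaustively search all driver-to-spot assignments and return the one
    minimizing the overall travel time (feasible only for small instances)."""
    drivers = list(costs)
    spots = list(next(iter(costs.values())))
    best_total, best_assignment = None, None
    for perm in permutations(spots):
        total = sum(costs[d][s] for d, s in zip(drivers, perm))
        if best_total is None or total < best_total:
            best_total, best_assignment = total, dict(zip(drivers, perm))
    return best_total, best_assignment

total, assignment = optimal_assignment(costs)
# total is the minimal collective travel time; assignment names each driver's spot.
```

A real traffic-management system would of course solve much larger instances with efficient assignment algorithms and live data; the brute-force search here only makes the optimization target visible.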
Dealing with the concrete complex coordination task involved in this example is
made possible through general features of algorithmic governance that allow it to
perform similar operations in other fields. Based on the existing literature
(Williamson 2014; Just and Latzer 2017; Yeung 2017b), fully-fledged algorithmic
governance, i.e., in its most potent form and as a means for regulating behaviors, can
be characterized by a combination of three features.1

1 Algorithmic governance can take various forms. Yeung (2017b) has provided a taxonomy of designs that vary along the three dimensions of standard-setting, monitoring, and sanctioning. The most potent designs combine a flexible, dynamic standard-setting with pre-emptive monitoring and operating.

First, algorithmic governance is marked by processes of decentralized coordination
of distributed entities. Information about the behaviors and the changing status of some
entities feeds back into the algorithmic decision-making system so that the behaviors of
the other entities can be attuned to these information updates. This is, however, neither
entirely spontaneous nor chaotic because, second, there is also an element of
centralization. The various inputs meet in the algorithmic system, which intervenes in
the mutual adjustment and coordination of behaviors and which furthermore produces
knowledge from the inputs that forms the basis of that coordination activity.
Third, algorithmic governance performs its steering function through a specific form
of regulation, namely regulation by design (Yeung 2017b: 5). This notion is already
present in Lessig’s (2002: 10) claim that “code is law,” meaning that code has an
institutional character as it effectively structures behaviors through enabling some and
constraining other decisions. Similarly, algorithmic steering does not have to make
binding decisions to influence and steer behaviors—although it can of course also
involve decisions that have a mandatory character. Rather, it can achieve this by
designing actors’ information environments. Yeung (2017a: 120) thus uses the concept
of choice architectures to describe how forms of algorithmic governance can structure
actors’ decision situations through providing certain information, options, and sugges-
tions, thereby making some choices more and others less likely.
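A minimal sketch can make this idea of a choice architecture concrete. The options and scores below are invented for illustration and not drawn from Yeung or any cited system; the point is only that ranking and filtering what a user sees already makes some choices more likely and others less so:

```python
# Hypothetical travel options, each with an assumed score reflecting how
# strongly the system wants to promote it (weights invented for illustration).
options = [
    {"mode": "car",  "score": 0.4},
    {"mode": "bus",  "score": 0.7},
    {"mode": "bike", "score": 0.9},
    {"mode": "walk", "score": 0.2},
]

def build_choice_architecture(options, top_n=2):
    """Present only the highest-scoring options, most favored first: the
    visible choices become more likely, the hidden ones less likely."""
    ranked = sorted(options, key=lambda o: o["score"], reverse=True)
    return [o["mode"] for o in ranked[:top_n]]

shown = build_choice_architecture(options)  # ['bike', 'bus']
```

No option is forbidden here, which is precisely the point of regulation by design: the structuring of the decision situation, not a binding rule, does the steering.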
Associated with this regulation by design are two traits of algorithmic governance
with which it stands out most clearly from other known forms of governance. Algo-
rithmic governance (1) involves a learning and adaptation based on processing infor-
mation about behaviors in order to anticipate future behaviors of distributed entities and
thus to optimally coordinate them based on these predictions (Hildebrandt 2008;
Williamson 2014; Dunleavy 2016; Hildebrandt 2016). Optimizing outputs this way
can occur based on a combination of registered information: about individual behaviors,
about changes in the larger environment, and about aggregate changes in the entirety of those
entities (Yeung 2017a: 122). This way, the adaptive algorithmic decision-making
system is capable of detecting patterns in the behaviors that it can then use for its
steering effort. It dynamically adapts itself to changing circumstances and inputs to
optimize the generated outputs.
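A minimal sketch of such a feedback loop may help. The moving-average update rule and all numbers here are illustrative assumptions, not taken from the cited works; they only show how an output can track changing inputs rather than follow a fixed rule:

```python
def make_adaptive_controller(alpha=0.3, capacity=80.0):
    """Return an update function that re-estimates demand from each new
    observation and derives an output (how much extra service to schedule)."""
    estimate = 50.0  # initial demand estimate (illustrative)

    def update(observed_demand):
        nonlocal estimate
        # Exponential moving average: recent behavior shifts the estimate.
        estimate = (1 - alpha) * estimate + alpha * observed_demand
        # The output adapts to the prediction rather than to a fixed rule.
        extra_service = max(0.0, estimate - capacity)
        return estimate, extra_service

    return update

update = make_adaptive_controller()
for demand in [60, 120, 140]:
    estimate, extra = update(demand)
# After three observations the estimate has drifted toward the recent,
# higher demand, and the scheduled extra service has grown accordingly.
```

Real systems learn far richer behavioral patterns than a single moving average, but the structure is the same: registered behavior feeds back into the model, and the model's prediction shapes the next intervention.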
The outputs that the algorithmic system produces, e.g., information, recommenda-
tions, or decisions, are furthermore (2) personalized to the extent that they are adjusted
to individuals and their characteristics (Just and Latzer 2017: 247–248).2 This way,
algorithmic governance aims towards a mass-customization of outputs that are at the
same time optimally coordinated. Specifically, it is the intelligent pre-emption of
behaviors based on registered patterns over many entities that helps to make sure that
the coordination is resolved effectively and efficiently. This also means that the steering
and ordering of social relations in question does not result from merely providing a
single framework or architecture of rules that collectively embed and structure individ-
uals’ behaviors. Rather, algorithmic governance is capable of individually and adap-
tively embedding and guiding behaviors—thus amounting to a sort of micro-
embedding (see also Rahwan 2017), a provision of mass-personalized outputs.
2 It should be noted that this does not mean that all outputs—e.g., information, suggestions, decisions—are adapted to the particularities of every individual, but rather to features or group traits that individuals share with others.

At this point, it is instructive to turn Lessig’s dictum cited above—that code is law—
around. Against the backdrop of algorithmic governance, law, on the face of it, could be
seen as a rather crude form of code. While code and algorithms can have an effect
comparable to that of law in regulating behavior without being authoritative—as
acknowledged in Lessig’s (2002) argument—code in the form of a highly adaptive
algorithmic system can cope with greater complexity. Its decision rules can be subject to rapid change as
they dynamically adapt to ongoing inputs; and the micro-embedding of individuals
involves selection and decision rules that are potentially very fine-grained. Thus, the
algorithmic decision-making system can interact with individuals based on learned
rules for differential treatment that accommodates their dispositions and behavioral
patterns—like a blanket that takes on the form of an uneven surface on which it is
placed. The more this blanket solidifies, to continue with this metaphor, the more its
texture of micro-rules turns into a sort of casting mold that may guide or constrict
individuals’ behaviors.
Based on these capacities, algorithmic governance can perform a social steering that
is very effective and that works even without an element of compulsion. It can be
realized based on the highly adaptive, coordinated, and targeted intervention in infor-
mation environments and decision situations of a multitude of individuals. Even if it
does not produce authoritative decisions, it parallels political decisions in the traditional
sense of collectively binding decisions to the extent that it materializes outcomes of
collective coordinated action.

3 New Possibilities for Managing Societal Complexity

Having traced the contours of algorithmic governance, it thus far emerges as a distinctive
and highly complex mode of ordering social relations and producing collective coordi-
nation outcomes on a potentially grand scale. These kinds of capacities are not merely a
theoretical possibility. There is a proliferation of applications aiming to exploit the
potential of algorithmic governance for managing complex social coordination problems.
The state is increasingly availing itself of algorithmic decision-making systems to
upgrade its steering capacities in various areas. Generally, algorithmic decision-making
systems enhance information or nodality as one of the government’s key policy
resources. This no longer primarily concerns nodality in its role as a detector, which
earlier contributions on information technologies as a tool of government still empha-
sized (Hood and Margetts 2007). Rather, advances in information and communication
technologies increasingly turn nodality into an effector (Dee 2013; Margetts and
Dunleavy 2013; Williamson 2014; Dunleavy 2016).
Based on insights from behavioral economics, the state may use nudging and other
ways of influencing behavior that draw on information power in order to structure
decision situations and make targeted interventions, including in combination with the
use of algorithmic decision-making systems (Oliver 2015; John 2016; Dunleavy 2016).
Moreover, in the areas of security and law and order, comprehensive surveillance
systems that draw on the capacities of algorithmic governance can be used to monitor,
police, and identify security risks, to make predictions and intervene in a selective,
targeted fashion (Lyon 2003; Leese 2014; Zweig et al. 2018). This way, the state is
capable of minimizing risks without having to hierarchically and collectively constrain
behaviors through rule-setting—and thus allowing a higher level of social complexity.
On the level of public services, algorithmic decision-making systems can be seen as
a core part of a “digital era governance” (Margetts and Dunleavy 2013; Clarke and
Margetts 2014). A digitally enhanced administration makes use of massive amounts of
distributed data which are processed to anticipate demand for services and to automate
their provision (Mackenzie 2013; Williamson 2014; Chen and Hsieh 2014). Such smart
services enable and, in turn, require citizens to act as co-producers of much more
personalized public services (Williamson 2014: 295–300). Algorithmic governance
used in this context can also make informed predictions about citizen behavior, use
them to alter expectations and to induce incentives in order to optimize coordination
efforts and attain desirable goals (Dunleavy 2016; Williamson 2014). In sum,
“[g]overning in this scenario means taught algorithms acting upon the actions of
citizens” (Williamson 2014: 308).
Algorithmic decision-making is taken up by the state to order social relations in
various areas, such as criminal justice, education, traffic, health care, and social
benefits—thus employing this kind of governance for an ostensibly political function.
However, similar forms of steering and coordination can also be found beyond the ambit
of the state. Private actors too may employ algorithmic governance in ways that
intervene into and shape social order on a large scale (e.g., Danaher 2016; Just and
Latzer 2017; Napoli 2014). For example, algorithmic decision-making systems are
employed in marketing practices using data about a multitude of individuals to categorize
them, anticipate their preferences, and adaptively and selectively present them with
information in the form of online advertisements or recommendations (e.g., Curry
2016; Lambin 2014). More generally, filter and decision mechanisms of internet
platforms direct information streams and shape perceptions of millions of users. There-
fore, following Just and Latzer (2017: 245–246), they affect culture, knowledge, and
constructions of social reality. With regard to search engines, for instance, Napoli (2014:
238) states that “the algorithms that are at the core of search engines are functioning in a
political capacity similar to established media institutions.” While there are still only a few
actors, such as the big internet companies and financial organizations, which can
establish algorithmic systems that reach large collectives, they nonetheless establish
an information infrastructure with a far-reaching societal impact.
The fact that a social ordering performed by algorithmic decision systems is not
restricted to the state or political actors in a narrow sense is very much in line with the
concept of governance—and the notion that governing does not have to be done by
governments. Such algorithm-based coordination also generally seems to fit the com-
mon understanding of governance as a steering of societal relations through coordina-
tion processes between interdependent actors and on the basis of institutionalized rules
(Treib et al. 2007). Yet even though the governance concept is notoriously broad, the
kind of complex coordination performed by algorithmic decision-making systems goes
beyond existing forms of governance and seems to transcend conventional categories
and forms of governance. There are strong reasons to consider algorithmic governance
a distinctive mode of steering and managing social complexity which calls for a new
way of thinking about how society can be ordered.
First, in view of algorithmic governance dynamically achieving decentralized coor-
dination, it resembles a market-based coordination of interactions and comes close to
one possible extreme form of governance, that of societal self-organization (Treib et al.
2007: 5). At the same time, algorithmic governance exhibits traits closer to the opposite
extreme due to its element of technocratic control. It does not rely merely on the
“intelligence” of a decentralized market-based coordination but acts upon a special
knowledge which it uses as a basis for its selections and decisions in order to achieve
optimal coordination outcomes—a kind of knowledge that the algorithmic system
produces and to which it has more or less exclusive access. Hence, like a technocratic
government, the information system is supposed to have a superior knowledge that
guides the rule-setting and prescription of behaviors.
Second, algorithmic governance is ambivalent with regard to its status as an
institution. On the one hand, it conforms to the common notion of an institution as a
set of rules that structures expectations and that shapes and embeds behaviors. On the
other hand, its rules for targeted interventions are selective and dynamically adapted,
which lends algorithmic governance features of an autonomous actor (Helbing 2015;
Just and Latzer 2017: 247–248; Ziewitz 2016: 5). The algorithmic decision-making
system mediates and proactively intervenes into the perceptions and behaviors of many
individuals; and it is self-regulating, learning over time and thereby adapting how it
performs these interventions. Moreover, the algorithmic decision-making system as a
centralized instance interacts with a multitude of individual entities. Yet it does not do
so in a uniform but in a differentiated fashion—through its capacity to personalize
outputs—as if it were many different actors at once.
Overall, with these abilities, algorithmic steering exhibits an extraordinary capacity
to regulate social relations under conditions of increased complexity. It thus has the
potential to change the way in which societies are organized. This is not only true for
applications like the ones mentioned further above, in which the state employs algo-
rithmic decision systems to solve coordination problems in certain areas. The capacities
of algorithmic governance also form the basis of broader visions of society in which
social relations are increasingly collaborative and shaped by the intelligent steering of a
comprehensive, algorithmically enhanced nervous system (Helbing 2015; Arthur 2011;
O’Reilly 2011)—a vision that also informs the creation of smart cities and that
resonates in certain conceptions of e-government (Linders 2012; Wohlers and Bernier
2016; Nam 2012).
More than simply promising effective and efficient handling of social coordination
tasks, algorithmic governance also manages complexity in a highly responsive and
decentralized way. It incorporates and accommodates the inputs of the individuals
subjected to its coordination activity. Specifically, it integrates diverse inputs, converting
complexity into collective coordination and steering outcomes without, however, erad-
icating this complexity. In this regard, it even seems to mirror the democratic promise of
creating unity out of diversity without sacrificing the latter, while also surpassing any
known form of governing in terms of the capacity to manage social complexity and to
achieve responsiveness. This may further contribute to the notion that algorithmic
governance as such can sustain a new way of ordering society. However, as will be
argued in the following section, this notion is ultimately misleading.

4 Leveraging Hobbes to Understand Algorithmic Governance

4.1 The Algorithmic Leviathan as a Well-Functioning Giant Machine

Algorithmic governance altogether emerges as a very potent mode of shaping social
order. It performs a political function to the extent that it generates collective
coordination outcomes which can be equivalent to collectively binding decisions as a
mode of regulating behavior. This function is furthermore realized with an unprecedented
capacity for managing social complexity. However, its novel character and
special operating mode notwithstanding, to understand what kind of social ordering
or political vision algorithmic governance realizes, one can draw on a template that has
become foundational for modern political thought—Thomas Hobbes’ Leviathan.3
Already with its operating mode of harnessing the combined strength of all the
individuals that form part of its coordination, algorithmic governance
is reminiscent of the figure of Hobbes’ Leviathan (Hobbes 1909)—which
Hobbes, strikingly, even saw as a giant machine. Algorithmic decision-making systems
form a sort of algorithmic Leviathan in the sense that they tap a potential which lies in
possible coordination outcomes involving many individuals. They do so by realizing
complex coordination tasks and solving coordination challenges that the individuals
involved did not necessarily even know existed. This is because instead of the
individuals interacting with each other to produce solutions of collective action, they
all act through the algorithmic system as the mediator that coordinates their various
actions and produces collective outcomes. Hence, through their interaction with the
mediating instance, they can all act as if it were their collective goal to realize some
outcome, but they do not have to be aware of the complex means-end relationships
inherent to this coordination effort. This is the job of the algorithmic system.
In sum, individuals do not have to trust each other; they primarily have to expect that
the mediation through the algorithmic coordination will produce satisfactory outcomes
by working not only behind their back but also behind that of the others. This kind of
coordination resembles the idea of an invisible hand ordering social relations—even
more so than in the context of market-based coordination, because in algorithmic
governance, there is an actual instance that performs the proactive steering. While this
kind of steering seems far removed from the sort of rule envisioned by Hobbes, there is
still a striking commonality. The effectiveness is rooted in a single algorithmic system
which draws its power from a multitude of individuals. Similarly, Hobbes (1909: 9) has
expressly understood the Leviathan as an “artificiall man” [sic], as if it were one actor
but made up of many individuals.
Like Hobbes’ Leviathan, the algorithmic system of sorting, selecting, and
decision-making recedes in the background but is nevertheless there and operates
in manifold social relations. Algorithmic governance thus also takes on an institu-
tional character in the sense that it becomes naturalized and takes on a life of its
own. Like other institutions, it can become embedded in social practices and
mediate social interactions by providing information and incentives and by shaping
expectations. As Schroeder and Ling (2014: 795) state on a more general note, the
proliferation of information and communication technologies leads to social rela-
tions and individual experiences being increasingly technologically mediated. Al-
gorithmic governance is a particularly striking case in that regard. It is mediating in
an emphatic sense—proactively intervening into social order—and it is pervasive
while operating in the background.

3 Rahwan (2017) has touched upon this figure in thinking about what a social contract about algorithmic systems could look like. In the following, a different perspective is chosen, one that looks at how algorithmic governance itself amounts to a sort of social contract.

For Hobbes, the ruling sovereign and the arrangement that realizes a certain social
order are established through individuals mutually agreeing to relinquish their individ-
ual freedom to do anything (Hobbes 1909: 133–134). This occurs through a social
contract between these individuals which is, however, not so much some real original
pact but rather a hypothetical construct: Individuals surrender their natural freedoms
and concede power over them to the ruler who is tasked to create and protect social
order and safeguard the same degree of individual freedom for each of its members.
This is paralleled by algorithmic coordination being based on voluntarily surrendering
some individual autonomy. Specifically, individuals cede power and responsibility of
decision-making to the algorithmic system so that the system can fulfill its coordination
role, based in part on anticipating individuals’ desires and preferences and intervening
accordingly.
Individuals bound up with the system of algorithmic governance will want to put
their trust in the system to the extent that they expect algorithmic coordination will
satisfy their needs and wants. As in Hobbes’ vision of political order, individuals
perform a sort of authorization through self-binding, through ceding freedom out of
their own desire to have a part in the outputs of the algorithmic system. Again, this does
however not occur through some actual social pact but rather through many—
deliberate or unwitting—individual decisions to subject oneself to algorithmic
coordination.
A major difference is that the authority of the Leviathan is essentially based on the
fear of premature and violent death, and the promise of safety (Hobbes 1909: 101). In
contrast, algorithmic coordination derives its authority not from guaranteeing peace but
instead from the promise of assisting in the pursuit of happiness and fulfilling individ-
ual preferences.4 Individuals do not have to follow the guidance of algorithmic systems
and may decide not to comply. In that case, however, the optimal coordination outcome
can be undermined—as would Hobbes’ commonwealth in case of renouncing obedi-
ence to the sovereign (Hobbes 1909: 261). To put their trust in the algorithmic
coordination effort, individuals must be able to expect that it is effective as well as
unbiased and fair. Hence, its outputs create the acceptance and legitimacy of algorith-
mic governance. This mirrors Hobbes’ (1909: 112) account, in which the legitimacy of
the political rule is ultimately founded in its effectiveness.
In case of such perceived effectiveness, the mediating system of algorithmic coor-
dination provides information, advice, or decisions such that those addressed by them
are inclined to comply out of their own self-interest. Indeed, as some authors have noted,
individuals show a widespread readiness to subject themselves to, and even become dependent on,
forms of algorithmic surveillance and steering if this provides them with palpable
benefits and amenities (Yeung 2017b: 131; Brandimarte and Acquisti 2012; Zuboff
2019). As long as individuals have no reason to question its effectiveness and fairness
and indulge in the benefits resulting from algorithmic coordination, it does not have to
be authoritative and binding in order to have an equivalent effect.
4 The Chinese example of the Social Credit System, however, also shows that this can be turned around into effecting behavior and decisions through the fear of losing social status and access to various public or commercial offers and services.

Conversely, it is important that algorithmic governance can generate the acquiescence
of those affected by its steering and therefore remains undisputed. This marks a
further parallel between algorithmic governance and Hobbes’ idea of the sovereign.
Hobbes (1909: 142–150) pleaded for monarchy as the form of political rule because, he argued, only where rule is vested in a single person will contradictions in the laws be avoided. He regarded democratic politics, in contrast, as an inadequate form of rule, since democratic conflict and contestation would destroy effective decision-making—he even called contestation diabolical, as it was likely to introduce confusion and chaos (Kratochwil 2013: 287).
Algorithmic coordination, in order to realize its effectiveness and efficiency in
achieving coordination, similarly requires that individuals rely on its superior distrib-
uted awareness and its “intelligence” in realizing individually satisfying outcomes,
which are also part of a larger coordination effort. Opening this effort to individuals who challenge and contest the process, relying on their own judgement, would easily risk thwarting the algorithmic system’s performance. The parking guidance system alluded to further above can serve to illustrate how contestation of algorithmic decisions can destroy the coordination effort. The more individuals ignore algorithmic recommendations and decisions that are supposed to yield an optimal collective outcome, or contest the criteria behind those
decisions, the more difficult attaining this outcome becomes. Moreover, the
contestability of algorithmic decisions during the coordination process could invite
individuals to exploit this option to improve their own personal outcomes—thus foiling
the coordination effort.
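The fragility of such a coordination effort can be sketched with a toy model (a hypothetical illustration, not an implementation from the paper): drivers who defect from a parking assignment all converge on the most attractive lot, and the collective outcome degrades even though total capacity would suffice for everyone.

```python
# Toy sketch (hypothetical): n drivers, k lots whose total capacity exactly
# matches demand. Compliant drivers follow a coordinator that fills remaining
# capacity; defectors ignore it and all head for the "best" lot (lot 0).
# We count drivers who end up without a space.

def overflow(n_drivers, capacities, defection_rate):
    defectors = round(n_drivers * defection_rate)
    compliant = n_drivers - defectors
    lots = list(capacities)

    # Defectors converge on lot 0; demand beyond its capacity is stranded.
    parked = min(defectors, lots[0])
    lots[0] -= parked
    stranded = defectors - parked

    # The coordinator places compliant drivers in whatever capacity remains.
    for i, free in enumerate(lots):
        take = min(compliant, free)
        lots[i] -= take
        compliant -= take

    return stranded + compliant  # drivers left without a space

caps = [40, 30, 30]  # 100 spaces for 100 drivers: a perfect fit is possible
for rate in (0.0, 0.3, 0.6, 0.9):
    print(rate, overflow(100, caps, rate))
# Under full compliance everyone parks; once defectors exceed lot 0's
# capacity, spaces stay empty elsewhere while drivers go without one.
```

The point is not the specific numbers but the mechanism described above: individual defection from recommendations that are optimal only in aggregate makes the collective optimum unattainable.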
This is not to say that algorithmic governance necessarily forms an absolute authority
that cannot be subjected to the control of those affected by its decisions. As is
acknowledged further below, control over algorithmic governance may well be achieved
via adequate procedures. However, with regard to its very process of optimizing and coordinating, it needs to operate as an unquestioned authority if it is to realize its capacity to achieve the best coordination outcomes. The value of
the algorithmic Leviathan thus seems to lie in its character as a “well-functioning, big
machine,” to use the words with which Carl Schmitt (1996: 42) refers to the value of the state as posited in some strands of political thought. Schmitt, however, adds to this
remark that the state only seemingly forms a mere technical instrument and neutral
arrangement. In a similar vein, the preceding considerations imply that algorithmic
governance, much more than just being a technological device, puts into practice a
specific vision of social order—that of an apolitical management of societal affairs.

4.2 Highly Responsive by Design, Apolitical at Its Heart

Similar to the status and power of the Leviathan, the power of algorithmic coordination
goes hand in hand with the absence of contestation. Therein lies a fundamental trait of
algorithmic governance that is diametrically opposed to the political as an ongoing
process in which different perspectives can compete and challenges to the status quo
continuously arise (Rancière 1999: 26–27). Where there is no longer an open process in which there can be struggle over collective decisions and over the values, rules, and institutions that govern a people, the political ends. Its opposite is what Rancière
(1999: 27–31) calls the police, which is understood as the mere administration and
ordering of social life, without this administration being contested and called into
question. The Leviathan is apolitical in that latter sense. It exhausts its political
character in the act of instituting the sovereign and in the collective binding decisions
taken by this sovereign to govern social order. There is no space designated for political argument and contestation; the sovereign is essentially a mighty administrator, and a
major task involved in this governing of society is to secure a space in which private,
economic activities can take place and unfold (Hobbes 1909: 164, 175–177)—not to
establish an arena for political struggles.
Algorithmic coordination, too, takes on this apolitical character. On the one hand, it
is a potentially powerful mode of coordination that can shape social relations and
operate with an effectiveness as if it were making collective and authoritative decisions.
On the other hand, its coordination process follows an administrative, technocratic
mode of problem-solving and responding to inputs, e.g., in the forms of preferences or
demands (Hildebrandt 2016; Morozov 2014; Kitchin 2014b). It is thus fundamentally
different from the political as described above and from the—at least to some degree—inevitably hermeneutic quest of creating and struggling over meaning in the mode of language which the political involves (Arendt 1998; Barber 2003). This process cannot follow a predefined direction, nor does it ever rest on solid ground, because it is not about optimizing certain goals but rather about figuring out what the guiding goals
and values should be. According to pragmatist philosophy and its view on society,
language use always involves an ineradicable element of uncertainty because language
games are never perfectly reproduced and stable, but they always entail modifications,
mutations, and openings (Tully 1999: 164). Algorithmic governance cannot relieve individuals of navigating these language games, of coping with uncertainty and differing views, and of deciding when to contest extant rules and decisions.
There is thus an important distinction at stake, one emphasized by Hildebrandt (2016) in her examination of information processing in legal practice as opposed to computational operations. She characterizes law as an argumentative practice rather than a mere processing of information. Consequently, legal judgment is not simply about
performance in terms of accuracy, for instance, but “judgment itself is predicated on the
contestability of any specific interpretation of legal certainty” (Hildebrandt 2016: 8,
emphasis in original). Unlike the processing of signs in computation, legal practice is
based on argument and takes place in the mode of human language (Hildebrandt 2016:
9). It entails an intersubjective and hermeneutic dimension of struggling over meaning
as well as the content and the foundation of societal rules and values. This kind of
practice allows for undergoing a process of learning through reevaluating, revising, and
updating the rules and values that govern a society. In contrast, while algorithmic governance, through responding to changing inputs, also adapts itself, this adaptation does not happen at the level of dialogue and reasoning, nor does it concern the goals and parameters that guide its process.
The high degree of responsiveness in algorithmic coordination, including its provision of personalized outputs as a reaction to individual inputs, is therefore not to be confused
with the collective influence or autonomy of those subjected to this steering. It is
fundamentally different from the kind of responsiveness realized by a liberal-
democratic system, which involves an ongoing process of setting, contesting, and
possibly revising goals and decisions based on the inputs of those governed (Urbinati
2014). The adaptivity of algorithmic governance, in contrast, is oriented towards best
fulfilling certain substantial and procedural goals which are themselves, however, not
the subject of its optimization and coordination process.
In sum, the learning of the algorithmic system and algorithmic coordination as a way
of managing social relations cannot replace the kind of learning involved in politics;
and algorithmic governance realizes responsiveness only in the sense that it answers to individual inputs, amounting to a consumption-like realization of preferences.

4.3 A Challenge to the Political

All in all, achieving social coordination and managing societal complexity through
algorithmic governance is fundamentally different from dealing with social complexity
in and through politics. Yet the two are functionally related in several respects. Although algorithmic governance entails an apolitical mode of steering, precisely this character has important ramifications for the political. First, attempts to establish
algorithmic governance in certain areas work towards shaping social relations in those
areas according to its operating mode. Such attempts may well be based on the premise
or the claim that properly designed algorithmic decision-making achieves superior and
objective solutions to complex problems (Morozov 2014; Kitchin 2014a). This is itself
a political move because not only does algorithmic governance amount to exerting
public power, but this power is also never neutral. As has repeatedly been pointed out,
algorithmic decision-making systems necessarily embody certain values, goals, and procedural parameters that inform their operations and that are never neutral or objective even though they may become normalized and taken for granted (Baruh and Popescu
2017; Meijer and Bolívar 2016; Yeung 2017b; de Laat 2017). Such goals and parameters form an institutional core of algorithmic governance, one easily masked by its constant fluidity, its adaptiveness, and its capacity to interact with distributed entities in a differentiated fashion.
Second, the question of the goals and values behind algorithmic governance can be
raised explicitly and there can be struggle over the core principles of its architecture and
the objectives it incorporates. But this political process would itself not follow the
managerial and technocratic mode of problem-solving and could not be part of the
algorithmic coordination as such. The political, then, starts where there is a debate
about what the algorithmic systems should accomplish, what values they should
embody, what conceptions of fairness, etc., they should realize. This is precisely the idea behind attempts to give algorithmic systems an explicit and socially agreed-upon basis
and to “create channels between human values and governance algorithms” (Rahwan
2017: 5).
It is furthermore conceivable that algorithmic decision-making systems are them-
selves used to integrate and aggregate inputs in order to facilitate political interaction,
opinion formation, and will formation. Corresponding systems or platforms could be
engineered to sustain a process of ongoing contestation and deliberation (van den
Hoven 2005; Dahlberg 2007). In doing so, however, they would precisely not perform the kind of algorithmic coordination described above, because the mechanism of coordination, learning, and judgment would then reside in the participating individuals and concern the goals they collectively set, via accepted procedures, in order to govern their relations.
In any case, political debate about the appropriate design and uses of algorithmic
governance invites difficult questions involving complex technical issues. Following
Hildebrandt (2016: 8), a major source of ambiguity arises because different algorithmic
systems designed for the same purpose lead to different outcomes.5 Also, there are
various, partly contradictory ways in which performance objectives, such as quality and
fairness, can be evaluated and realized, which opens up a substantial space for debate
about the best solution (Berk et al. 2018).
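How such performance objectives can pull apart is easy to show with a small numerical illustration (hypothetical data and function names; the two fairness notions, demographic parity and equal false-positive rates, are standard ones rather than definitions taken from Berk et al.): a predictor can equalize the share of positive predictions across two groups while still producing unequal error rates.

```python
# Hypothetical illustration: two standard fairness notions computed for a
# binary risk score on toy data for groups A and B with different base rates.

def metrics(y_true, y_pred):
    ppr = sum(y_pred) / len(y_pred)                  # positive-prediction rate
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    negatives = y_true.count(0)
    return ppr, fp / negatives                       # (parity metric, FPR)

# Group A: half are truly high-risk; the score happens to be perfect on A.
yA, pA = [1, 1, 0, 0], [1, 1, 0, 0]
# Group B: one in four is truly high-risk; the score still flags half.
yB, pB = [1, 0, 0, 0], [1, 1, 0, 0]

ppr_A, fpr_A = metrics(yA, pA)
ppr_B, fpr_B = metrics(yB, pB)

print(ppr_A == ppr_B)   # demographic parity holds: both groups flagged at 0.5
print(fpr_A == fpr_B)   # error-rate fairness fails: FPRs are 0.0 vs. 1/3
```

With unequal base rates, equalizing one criterion generally forces inequality in another, which is why the choice among criteria opens genuine space for debate rather than being a purely technical optimization.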
Third, algorithmic governance can work towards removing the occasion for the
kind of hermeneutic activity involved in politics. With its micro-adaptive process, it solves coordination problems through separation and sorting rather than through integration (as, e.g., via deliberation or authoritative decisions by a legitimate institution). Indeed, algorithmic sorting promises to resolve
the seemingly inevitable tension between individuality and belonging/community
(Bauman 2017) or, put differently, between a mechanic and an organic solidarity
in social integration (Schroeder and Ling 2014: 797). Individuals are enabled to
choose their social environment and their communities and to maintain their
boundaries through algorithmic filtering (Bennett and Iyengar 2008; Dylko et al.
2012). The strengthening and maintenance of communities thus no longer neces-
sarily conflicts with individualization and complex functional differentiation be-
cause algorithmic coordination can help to accommodate and integrate the diver-
sity of many different more or less closed-off social spheres. This way, it mitigates
one of the core challenges of governing under conditions of high social complex-
ity: to create harmony out of diversity.
Algorithmic governance does so—if it is functioning well—through adaptively
reacting to inputs and generating outputs that are experienced as satisfactory. It is
then working towards a state in which there is no reason for its guiding goals and
parameters to even come into view. That this orientation is built into its design has
an important implication. It means that regardless of whether its substantial and
procedural goals have been set by those affected or not, it is geared towards
responsiveness and generating acceptance through outputs that are perceived as
satisfactory. Under these conditions, the positive personal experience of affected individuals and the appeal of the algorithmic system’s effectiveness and efficiency alone can motivate trust in it.
These considerations concern the question of accountability regarding algorithmic
steering. Problems of accountability arise not only because, as various authors have
noted, its complex process lacks transparency and remains unintelligible to the vast
majority of individuals (Ananny and Crawford 2016; de Laat 2017; Mittelstadt and
Floridi 2016; Lepri et al. 2018), but also because algorithmic governance by design
works towards removing the occasion for questions of accountability and about the
goals and values which it embodies. In that sense, it appears as highly potent in
producing satisfied individuals, but ones that are governed by rules and objectives that
are not of their own making and that they have not collectively authorized. Whether this
will be the case for future applications of algorithmic governance remains to be seen. At
least in current practice, where algorithmic governance is applied, its results, effective-
ness, and efficiency discernibly count as the principal evaluative standards.

5 Moreover, algorithmic systems process inputs in the form of data that cannot speak for itself—it must be processed based on selection rules and decisions about what counts as relevant; and the specific ways of selecting, categorizing, and making distinctions based on available information always impose a certain way of seeing (Ananny and Crawford 2016; Mittelstadt and Floridi 2016; Floridi 2012).
This is the case with commercial applications such as platforms where individual
consumers and users are primarily interested in receiving certain services and benefits.
The guiding criteria, i.e., the question of how such services are generated, are negligible for users as long as they are content—even if the algorithmic decision-making system
performs a collective steering and coordination function. A similar primacy of the output dimension is, however, also observable for applications established by the state.
Even conceptualizations of smart cities that stress a procedural dimension predomi-
nantly refer to a process in which citizens are involved and become engaged but which
is not regarded as political (Meijer and Bolívar 2016: 402). Citizens are supposed to feed their inputs into the algorithmic system so that their distributed knowledge and collective intelligence can be harnessed for better service provision. They are not envisioned as enacting some form of collective autonomy.
Moreover, as Brauneis and Goodman (2017) have shown for various applications of
algorithmic governance in the US regulating areas such as crime, health, and education,
these are primarily justified with and measured by their effectiveness. At the same time,
they are often deficient with regard to transparency and accountability, criteria that are
given subordinate importance. Thus, there is a discernible tendency in real-world
applications to judge algorithmic governance by its effectiveness.
Altogether, the more algorithmic governance is effective in satisfying individual
inputs in terms of demands and preferences, like a “giant machine” that mediates social
relations but recedes into the background, the more it removes the occasion for politics
even though it realizes a political function. What is strengthened, to borrow the famous
dictum by Engels (Marx and Engels 1962: 241), is the administration of things that
takes the place of genuine political action and that lets the state wither away. Ironically, it is the hyper-efficient, decentralized, and partly market-like coordination of algorithmic governance that promises to realize this vision; and one that is not necessarily controlled and owned by those who are subjected to it.

5 Conclusion

Forms of algorithmic governance as they have been described above can be a potent form of regulating behavior and shaping social relations. There are strong
reasons for thinking of it as a truly novel and distinctive form of governance
which, based on the capacities of a mediating algorithmic decision-making sys-
tem, can cope with complex social coordination tasks. On the one hand, it
involves a strongly decentralized, market-like form of coordination that resembles
the idea of societal self-organization. On the other hand, it also exhibits an
element of technocratic steering to the extent that it purposefully intervenes into
social relations based on some superior knowledge or intelligence necessary for
optimally coordinating behaviors. Moreover, algorithmic governance shapes social
relations akin to other institutions through setting rules and shaping expectations.
Yet it does not do so uniformly for collectives of individuals, but through
selectively intervening into individuals’ perceptions, decisions, and behaviors.
Thus, algorithmic governance involves creating collective coordination outcomes
based on a micro-embedding, an embedding of individuals and their behaviors that
adaptively structures their decision situations.
With this operating mode, algorithmic governance marks a qualitative change in the
capacity to manage social complexity. In its most potent form, algorithmic governance
exerts public power as it realizes outcomes that can be equivalent to collective binding
decisions—i.e., to political decisions in a narrow sense. At the same time, algorithmic
steering appears as a new way of ordering social relations that does not easily fit even
the broad concept of governance. It would thus seem that algorithmic governance calls for updating existing ways of thinking about how social order can be achieved even under
conditions of high societal complexity.
However, as has been argued above, despite its novel character, algorithmic gover-
nance can be grasped with Thomas Hobbes’ vision of the sovereign as a fundamental
figure and template in political thought. Algorithmic governance amounts to a sort of
algorithmic Leviathan, a “giant machine” that operates in the background, that brings
together and harnesses the combined power of a multitude of individuals, and that
makes possible coordination outcomes which the individuals themselves could not
attain without it. Like Hobbes’ Leviathan, algorithmic governance draws its acceptance
from its effectiveness. It entails individuals giving up a part of their autonomy—that of
intervening into the very coordination process—so that algorithmic governance can
produce outcomes from which these individuals benefit and that would otherwise not
be possible. Algorithmic governance can comprise a governing of social relations that
is engineered to be benign to those affected, geared towards producing responsive and
satisfactory outputs—as Hobbes’ Leviathan ideally would. Because of this and because
algorithmic governance is oriented towards producing satisfactory results for acquies-
cent individuals, its socio-political anatomy strongly resembles that of Hobbes’
sovereign—even though the tremendous differences in terms of socio-technological
conditions may mask this resemblance.
Like that of the Leviathan, the operating mode of algorithmic governance is ultimately apolitical in the sense that it does not foresee a questioning or contesting of the goals
and criteria that guide it. With regard to the very process of realizing its steering
function and achieving complex coordination outcomes, the algorithmic decision-
making system has to count as an undisputed authority. Hence, on the one hand,
algorithmic governance can be highly responsive and adaptive to individual inputs.
On the other hand, it is highly adaptive merely in its effort to realize the substantial and
procedural goals that guide its coordination effort, but not in terms of modifying these
guiding criteria themselves. Moreover, algorithmic governance aims to react and
answer to individual inputs and produce satisfactory outcomes regardless of whether
those affected have set or are even aware of the goals that guide this process. Yet this
ordering of social relations is never neutral as it follows certain goals and parameters
instead of possible alternative ones and therefore favors some way of exerting public
power over others.
Procedures that allow for setting and revising these goals need to be external to
algorithmic governance. If those subjected to algorithmic governance are to be able to
see its collective coordination outcomes as being produced in their name, this social
steering would have to be collectively authorized. Only a political process can provide
this kind of mandate to realize certain substantial and procedural goals which inform
outcomes of collective coordination. A political process, however, involves a sort of
learning and adaptation that is fundamentally different from the adaptive process of
algorithmic governance and its instrumental mode of optimization. It is based on an
ongoing struggle over ideas, values, and the goals which order social life, a struggle
that involves the renegotiation of meanings and interpretations in the mode of language.
In sum, the analogy with Hobbes’ Leviathan serves to clarify why the learning of the algorithmic system cannot substitute for the kind of learning involved in
politics. Certainly, algorithmic governance is highly capable of managing social com-
plexity and dealing with the diversity of the entities among which it aims to achieve
coordination. But the political task of creating unity out of diversity is a different
exercise altogether. Yet, although algorithmic coordination cannot replace the political,
the more it works effectively and efficiently to satisfy individuals’ expectations and
preferences, the more it may be able to remove the occasion for politics: by accommodating the heterogeneous needs and wants of individuals, it eradicates frictions that would otherwise call for the messy process of bridging and integrating different views and
demands. It is perhaps in this sense that it presents the biggest challenge to the political.

Acknowledgments I would like to thank the reviewers for their valuable comments and suggestions.
Thanks also go to Joschka Frech for assisting with the preparation of an earlier version of the manuscript.

References

Ananny, M., & Crawford, K. (2016). Seeing without knowing: limitations of the transparency ideal and its
application to algorithmic accountability. New Media & Society, online first.
Arendt, H. (1998). The human condition (2nd ed.). Chicago: University of Chicago Press.
Arthur, B. W. (2011). The second economy. McKinsey Quarterly, 2011, 3, 1–3, 9.
Barber, B. (2003). Strong democracy: participatory politics for a new age. Berkeley: University of California
Press.
Baruh, L., & Popescu, M. (2017). Big data analytics and the limits of privacy self-management. New Media &
Society, 19(4), 579–596.
Bauman, Z. (2017). Retrotopia. Cambridge: Polity.
Beer, D. (2009). Power through the algorithm? Participatory web cultures and the technological unconscious.
New Media & Society, 11(6), 985–1002.
Bennett, W. L., & Iyengar, S. (2008). A new era of minimal effects? The changing foundations of political
communication. Journal of Communication, 58(4), 707–731.
Berk, R., Heidari, H., Jabbari, S., Kearns, M., & Roth, A. (2018). Fairness in criminal justice risk assessments: the state of the art. Sociological Methods & Research, online first.
Bimber, B. (2014). Digital Media in the Obama Campaigns of 2008 and 2012: adaptation to the personalized
political communication environment. Journal of Information Technology & Politics, 11(2), 130–150.
Brandimarte, L., & Acquisti, A. (2012). The economics of privacy. In M. Peitz & J. Waldfogel (Eds.), The Oxford handbook of the digital economy (pp. 547–571). New York: Oxford University Press.
Brauneis, R., & Goodman, E. P. (2017). Algorithmic transparency for the smart city. SSRN Electronic Journal,
https://www.ssrn.com/abstract=3012499 (Accessed May 16, 2018).
Bucher, T. (2012). Want to be on top? Algorithmic power and the threat of invisibility on Facebook. Culture
Machine, 13, 1–13.
Chen, Y.-C., & Hsieh, T.-C. (2014). Big data for digital government: opportunities, challenges, and strategies.
International Journal of Public Administration in the Digital Age, 1(1), 1–14.
Clarke, A., & Margetts, H. (2014). Governments and citizens getting to know each other? Open, closed, and
big data in public management reform. Policy & Internet, 6(4), 393–417.
Coletta, C., & Kitchin, R. (2017). Algorhythmic governance: Regulating the “heartbeat” of a city using the
Internet of things. Big Data & Society, 4(2), 205395171774241.
Curry, E. (2016). The big data value chain: definitions, concepts, and theoretical approaches. In J. Cavanillas, E. Curry, & W. Wahlster (Eds.), New horizons for a data-driven economy (pp. 29–37). Cham: Springer International Publishing. http://link.springer.com/10.1007/978-3-319-21569-3_3 (Accessed January 31, 2017).
Dahlberg, L. (2007). Rethinking the fragmentation of the cyberpublic: from consensus to contestation. New
Media & Society, 9(5), 827–847.
Danaher, J. (2016). The threat of algocracy: reality, resistance and accommodation. Philosophy & Technology,
29(3), 245–268.
Dee, M. (2013). Welfare surveillance, income management and new paternalism in Australia. Surveillance &
Society, 11(3), 272–286.
van Dijck, J. (2013). Facebook and the engineering of connectivity: a multi-layered approach to social media
platforms. Convergence: The International Journal of Research into New Media Technologies, 19(2),
141–155.
Dunleavy, P. (2016). “Big data” and policy learning. In G. Stoker & M. Evans (Eds.), Evidence-based policy making in the social sciences: methods that matter. Bristol: Policy Press.
Dylko, I. B., Beam, M. A., Landreville, K. D., & Geidner, N. (2012). Filtering 2008 US presidential election news on YouTube by elites and nonelites: an examination of the democratizing potential of the internet. New Media & Society, 14(5), 832–849.
Floridi, L. (2012). Big data and their epistemological challenge. Philosophy & Technology, 25(4), 435–437.
Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.),
Media technologies: essays on communication, materiality, and society (pp. 167–194). Cambridge,
Massachusetts: The MIT Press.
Helbing, D. (2015). Thinking ahead: essays on big data, digital revolution, and participatory market society. Cham: Springer International Publishing. http://link.springer.com/10.1007/978-3-319-15078-9 (Accessed July 27, 2015).
Hersh, E. (2015). Hacking the electorate: how campaigns perceive voters. New York, NY: Cambridge
University Press.
Hildebrandt, M. (2008). Defining profiling: a new type of knowledge? In M. Hildebrandt & S. Gutwirth (Eds.), Profiling the European citizen (pp. 17–45). Dordrecht: Springer Netherlands. http://link.springer.com/10.1007/978-1-4020-6914-7_2 (Accessed January 31, 2017).
Hildebrandt, M. (2016). Law as information in the era of data-driven agency: law as information. The Modern
Law Review, 79(1), 1–30.
Hobbes, T. (1909). Hobbes’s Leviathan: reprinted from the edition of 1651. Oxford: Clarendon Press. https://archive.org/details/hobbessleviathan00hobbuoft.
Hofmann, J., Katzenbach, C., & Gollatz, K. (2017). Between coordination and regulation: finding the
governance in Internet governance. New Media & Society, 19(9), 1406–1423.
Hood, C., & Margetts, H. (2007). The tools of government in the digital age. Basingstoke: Palgrave
Macmillan.
van den Hoven, J. (2005). E-democracy, E-contestation and the monitorial citizen*. Ethics and Information
Technology, 7(2), 51–59.
John, P. (2016). Behavioral approaches: how nudges lead to more intelligent policy design. In B. G. Peters & P. Zittoun (Eds.), Contemporary approaches to public policy: theories, controversies and perspectives (International series on public policy, pp. 113–131). London: Palgrave Macmillan.
Just, N., & Latzer, M. (2017). Governance by algorithms: reality construction by algorithmic selection on the
Internet. Media, Culture & Society, 39(2), 238–258.
Kitchin, R. (2014a). Big data, new epistemologies and paradigm shifts. Big Data & Society, 1(1). http://bds.sagepub.com/lookup/doi/10.1177/2053951714528481 (Accessed May 25, 2016).
Kitchin, R. (2014b). The real-time city? Big data and smart urbanism. GeoJournal, 79(1), 1–14.
Kratochwil, F. (2013). Communication, Niklas Luhmann, and the fragmentation debate in international law. In R. J. Beck (Ed.), Law and disciplinarity: thinking beyond borders (International law, crime and politics, pp. 257–288). New York, NY: Palgrave Macmillan.
de Laat, P. B. (2017). Algorithmic decision-making based on machine learning from big data: can transparency restore accountability? Philosophy & Technology. http://link.springer.com/10.1007/s13347-017-0293-z (Accessed June 1, 2018).
Lambin, J.-J. (2014). A digital and networking economy. In Rethinking the market economy (pp. 147–163). London: Palgrave Macmillan UK. http://link.springer.com/10.1057/9781137392916_8 (Accessed October 7, 2016).
Leese, M. (2014). The new profiling: Algorithms, black boxes, and the failure of anti-discriminatory
safeguards in the European Union. Security Dialogue, 45(5), 494–511.
Lepri, B., Oliver, N., Letouzé, E., Pentland, A., & Vinck, P. (2018). Fair, transparent, and accountable
algorithmic decision-making processes: the premise, the proposed solutions, and the open challenges.
Philosophy & Technology, 31(4), 611–627.
Lessig, L. (2002). Code: and other laws of cyberspace (reprint ed.). New York: The Perseus Books Group.
Leszczynski, A. (2016). Speculative futures: cities, data, and governance beyond smart urbanism.
Environment and Planning A: Economy and Space, 48(9), 1691–1708.
Linders, D. (2012). From e-government to we-government: defining a typology for citizen coproduction in the
age of social media. Government Information Quarterly, 29(4), 446–454.
Lyon, D. (2003). Surveillance as social sorting: computer codes and mobile bodies. In D. Lyon (Ed.), Surveillance as social sorting: privacy, risk, and digital discrimination (pp. 13–30). London; New York: Routledge.
Mackenzie, A. (2013). Programming subjects in the regime of anticipation: Software studies and subjectivity.
Subjectivity, 6(4), 391–405.
Margetts, H., & Dunleavy, P. (2013). The second wave of digital-era governance: a quasi-paradigm for government on the Web. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 371(1987), 20120382.
Marx, K., & Engels, F. (1962). Marx / Engels: Werke: Band 20: Anti-Dühring - Dialektik der Natur. Berlin:
Dietz.
Meijer, A., & Bolívar, M. P. R. (2016). Governing the smart city: a review of the literature on smart urban
governance. International Review of Administrative Sciences, 82(2), 392–408.
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: mapping
the debate. Big Data & Society, 3(2), 205395171667967.
Mittelstadt, B. D., & Floridi, L. (2016). The ethics of big data: current and foreseeable issues in biomedical
contexts. Science and Engineering Ethics, 22(2), 303–341.
Morozov, E. (2014). To save everything, click here: technology, solutionism and the urge to fix problems that
don’t exist. London: Penguin Books.
Nam, T. (2012). Suggesting frameworks of citizen-sourcing via Government 2.0. Government Information
Quarterly, 29(1), 12–20.
Napoli, P. M. (2014). Automated media: an institutional theory perspective on algorithmic media production and consumption. Communication Theory, 24(3), 340–360.
Newell, S., & Marabelli, M. (2015). Strategic opportunities (and challenges) of algorithmic decision-making: a
call for action on the long-term societal effects of “datification”. The Journal of Strategic Information
Systems, 24(1), 3–14.
Oliver, A. (2015). Nudging, shoving, and budging: behavioral economic-informed policy. Public
Administration, 93(3), 700–714.
O’Reilly, T. (2011). Government as a platform. Innovations: Technology, Governance, Globalization, 6(1),
13–40.
Pagallo, U. (2017). Algo-rhythms and the beat of the legal drum. Philosophy & Technology. http://link.springer.com/10.1007/s13347-017-0277-z (Accessed June 2, 2018).
Pentland, A. (2013). The data-driven society. Scientific American, 309(4), 78–83.
Rahwan, I. (2017). Society-in-the-loop: programming the algorithmic social contract. Ethics and Information
Technology, (online first), 1–10.
Rancière, J. (1999). Disagreement: politics and philosophy. Minneapolis: Univ. of Minnesota Press.
Schmitt, C. (1996). The leviathan in the state theory of Thomas Hobbes: meaning and failure of a political
symbol. Westport, Conn: Greenwood Press.
Schroeder, R., & Ling, R. (2014). Durkheim and Weber on the social implications of new information and
communication technologies. New Media & Society, 16(5), 789–805.
Treib, O., Bähr, H., & Falkner, G. (2007). Modes of governance: towards a conceptual clarification. Journal of
European Public Policy, 14(1), 1–20.
Tully, J. (1999). The agonic freedom of citizens. Economy and Society, 28(2), 161–182.
Urbinati, N. (2014). Democracy disfigured: opinion, truth, and the people. Cambridge, Massachusetts:
Harvard University Press.
Veale, M., Van Kleek, M., & Binns, R. (2018). Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18) (pp. 1–14). Montreal, QC, Canada: ACM Press. http://dl.acm.org/citation.cfm?doid=3173574.3174014 (Accessed May 16, 2019).
Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making
does not exist in the general data protection regulation. International Data Privacy Law, 7(2), 76–99.
Williamson, B. (2014). Knowing public services: cross-sector intermediaries and algorithmic governance in
public sector reform. Public Policy and Administration, 29(4), 292–312.
Wohlers, T. E., & Bernier, L. L. (2016). Transformation of local government in the digital age. In Setting sail into the age of digital local government (pp. 29–36). Boston, MA: Springer US. http://link.springer.com/10.1007/978-1-4899-7665-9_3 (Accessed November 7, 2016).
Yeung, K. (2017a). “Hypernudge”: big data as a mode of regulation by design. Information, Communication
& Society, 20(1), 118–136.
Yeung, K. (2017b). Algorithmic regulation: a critical interrogation. Regulation & Governance, (online first), 1–19.
Ziewitz, M. (2016). Governing algorithms: myth, mess, and methods. Science, Technology, & Human Values,
41(1), 3–16.
Zuboff, S. (2019). The age of surveillance capitalism: the fight for the future at the new frontier of power.
London: Profile Books.
Zweig, K. A., Wenzelburger, G., & Krafft, T. D. (2018). On chances and risks of security related algorithmic
decision making systems. European Journal for Security Research, 3(2), 181–203.

Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps
and institutional affiliations.
