https://doi.org/10.1007/s13347-019-00363-w
RESEARCH ARTICLE
Pascal D. König 1
Abstract
A growing literature is taking an institutionalist and governance perspective on how algorithms shape society based on unprecedented capacities for managing social complexity. Algorithmic governance altogether emerges as a novel and distinctive kind of societal steering. It appears to transcend established categories and modes of governance—and thus seems to call for new ways of thinking about how social relations can be regulated and ordered. However, as this paper argues, despite its novel way of realizing outcomes of collective steering and coordination, it can nevertheless be grasped with an old and fundamental figure in political philosophy: that of Thomas Hobbes’ Leviathan. Comparing algorithmic governance with this figure serves to highlight their similarities as socio-political arrangements, and specifically to clarify how algorithmic governance parallels the apolitical traits of the Leviathan—it eliminates the political as it requires compliance and forgoing contestation to best fulfill its role and to produce satisfying outcomes.
* Pascal D. König
pascal.koenig@sowi.uni-kl.de
1 Department of Social Sciences, University of Kaiserslautern, Erwin-Schrödinger-Straße, Building 57, PO-Box 3049, 67653 Kaiserslautern, Germany

1 Introduction
2016; Veale et al. 2018) and for furthering party-political goals (Hersh 2015; Bimber
2014). Second, algorithmic systems are discussed as the object of political action, with
a rapidly increasing number of studies addressing the question of how they can and should be regulated, given the potentially far-reaching societal impact of some applications
(e.g., Pentland 2013; Newell and Marabelli 2015; Danaher 2016; Mittelstadt et al.
2016; Wachter et al. 2017; de Laat 2017; Pagallo 2017).
However, algorithms are also said to have a political dimension in a more general sense in light of their use in automated and highly adaptive decision-making systems that enable novel forms of coordination. Contributions largely from the fields of media and legal studies have described them as institutions that structure behaviors and intervene into social order, and have thus conferred on algorithms a political status (Beer 2009; Bucher 2012; van Dijck 2013; Gillespie 2014; Napoli 2014). In a similar
vein, some scholars have referred to algorithm-based coordination as a form of
governance that intervenes into society and culture through shaping constructions of
social reality (Just and Latzer 2017; Kitchin 2014b; Danaher 2016; Hofmann et al.
2017; Leszczynski 2016; Yeung 2017b).
Existing contributions underline the special character of algorithmic governance and its unprecedented capacities for coping with complex coordination tasks. As a form of social steering, algorithmic governance seems to stand out among other established forms of governance. It would thus seem that existing categories and concepts are inadequate for getting a proper grasp of algorithmic governance and of the kind of social ordering it entails. However, as will be argued below, while algorithmic governance does indeed amount to a novel mode of achieving social coordination outcomes, it can nevertheless be aligned with a template and figure from political philosophy that is quite old.
Drawing on the writings by Hobbes, this paper shows how major features of
algorithmic governance as a way of shaping social order and facilitating social coor-
dination parallel those of the Leviathan. The value of probing this analogy lies in
getting a sharper picture of what kind of social order algorithmic governance materi-
alizes. While a governance perspective directs our view to the potentially complex
mechanisms with which social steering can take place, the comparison with Hobbes
foregrounds how algorithmic governance—being more than a mere instrument—
inherently establishes certain socio-political relations and roles. Specifically, it serves
to sharpen our view of how algorithmic governance remains apolitical at its core in the
same sense as Hobbes’ sovereign does.
Despite the formidable degree of responsivity that algorithmic governance can
achieve as it answers to the individual inputs that it receives, this does not equal the
kind of responsiveness and accountability of democratic rule. Rather, like Hobbes’
Leviathan, it promises to produce certain outcomes that are not attainable otherwise in
exchange for compliance. It eliminates what is essential for the political because
contestation and challenges to the way in which outcomes are brought about have no
place in that process. Recognizing this apolitical heart of algorithmic governance and
its mode of ordering society is important because it nonetheless plays a political role to
the degree that it exerts public power, and because this kind of power is never neutral as
it necessarily embodies certain values and objectives.
Before turning to the comparison between algorithmic governance and Hobbes’
Leviathan, the following, second section will first characterize algorithmic governance
Dissecting the Algorithmic Leviathan: On the Socio-Political...
1 Algorithmic governance can take various forms. Yeung (2017b) has provided a taxonomy of designs that vary along the three dimensions of standard-setting, monitoring, and sanctioning. The most potent designs combine a flexible, dynamic standard-setting with pre-emptive monitoring and operating.
the other entities can be attuned to these information updates. This is, however, not
entirely spontaneous nor is it chaotic because, second, there is also an element of
centralization. The various inputs meet in the algorithmic system, which intervenes in
the mutual adjustment and coordination of behaviors and which furthermore produces
knowledge from the inputs that forms the basis of that coordination activity.
Third, algorithmic governance performs its steering function through a specific form
of regulation, namely regulation by design (Yeung 2017b: 5). This notion is already
present in Lessig’s (2002: 10) claim that “code is law,” meaning that code has an
institutional character as it effectively structures behaviors through enabling some and
constraining other decisions. Similarly, algorithmic steering does not have to make
binding decisions to influence and steer behaviors—although it can of course also
involve decisions that have a mandatory character. Rather, it can achieve this by
designing actors’ information environments. Yeung (2017a: 120) thus uses the concept
of choice architectures to describe how forms of algorithmic governance can structure
actors’ decision situations through providing certain information, options, and sugges-
tions, thereby making some choices more and others less likely.
Associated with this regulation by design are two traits of algorithmic governance through which it stands out most clearly from other known forms of governance. Algorithmic governance (1) involves learning and adaptation based on processing information about behaviors in order to anticipate future behaviors of distributed entities and thus to optimally coordinate them based on these predictions (Hildebrandt 2008;
Williamson 2014; Dunleavy 2016; Hildebrandt 2016). Optimizing outputs this way can occur based on a combination of registered information: about individual behaviors, about changes in the larger environment, and about aggregate changes in the entirety of those entities (Yeung 2017a: 122). This way, the adaptive algorithmic decision-making
system is capable of detecting patterns in the behaviors that it can then use for its
steering effort. It dynamically adapts itself to changing circumstances and inputs to
optimize the generated outputs.
The outputs that the algorithmic system produces, e.g., information, recommenda-
tions, or decisions, are furthermore (2) personalized to the extent that they are adjusted
to individuals and their characteristics (Just and Latzer 2017: 247–248).2 This way,
algorithmic governance aims towards a mass-customization of outputs that are at the
same time optimally coordinated. Specifically, it is the intelligent pre-emption of
behaviors based on registered patterns over many entities that helps to make sure that
the coordination is resolved effectively and efficiently. This also means that the steering
and ordering of social relations in question does not result from merely providing a
single framework or architecture of rules that collectively embed and structure individ-
uals’ behaviors. Rather, algorithmic governance is capable of individually and adap-
tively embedding and guiding behaviors—thus amounting to a sort of micro-
embedding (see also Rahwan 2017), a provision of mass-personalized outputs.
At this point, it is instructive to turn Lessig’s dictum cited above—that code is law—
around. Against the backdrop of algorithmic governance, law, on the face of it, could be
seen as a rather crude form of code. While code and algorithms can have a comparable
2 It should be noted that this does not mean that all outputs—e.g., information, suggestions, decisions—are adapted to the particularities of every individual, but rather to features or group traits that individuals share with others.
Having traced the contours of algorithmic governance, it thus far emerges as a distinctive
and highly complex mode of ordering social relations and producing collective coordi-
nation outcomes on a potentially grand scale. These kinds of capacities are not merely a
theoretical possibility. There is a proliferation of applications aiming to exploit the
potential of algorithmic governance for managing complex social coordination problems.
The state is increasingly availing itself of algorithmic decision-making systems to
upgrade its steering capacities in various areas. Generally, algorithmic decision-making
systems enhance information or nodality as one of the government’s key policy
resources. This no longer primarily concerns nodality in its role as a detector, which
earlier contributions on information technologies as a tool of government still empha-
sized (Hood and Margetts 2007). Rather, advances in information and communication
technologies increasingly turn nodality into an effector (Dee 2013; Margetts and
Dunleavy 2013; Williamson 2014; Dunleavy 2016).
Based on insights from behavioral economics, the state may use nudging and other
ways of influencing behavior that draw on information power in order to structure
decision situations and make targeted interventions, including in combination with the
use of algorithmic decision-making systems (Oliver 2015; John 2016; Dunleavy 2016).
Moreover, in the areas of security and law and order, comprehensive surveillance
systems that draw on the capacities of algorithmic governance can be used to monitor,
police, and identify security risks, to make predictions and intervene in a selective,
targeted fashion (Lyon 2003; Leese 2014; Zweig et al. 2018). This way, the state is capable of minimizing risks without having to hierarchically and collectively constrain behaviors through rule-setting—thus allowing for a higher level of social complexity.
On the level of public services, algorithmic decision-making systems can be seen as
a core part of a “digital era governance” (Margetts and Dunleavy 2013; Clarke and
knowledge which it uses as a basis for its selections and decisions in order to achieve
optimal coordination outcomes—a kind of knowledge that the algorithmic system
produces and to which it has more or less exclusive access. Hence, like a technocratic
government, the information system is supposed to have a superior knowledge that
guides the rule-setting and prescription of behaviors.
Second, algorithmic governance is ambivalent with regard to its status as an institution. On the one hand, it conforms to the common notion of an institution as a set of rules that structures expectations and that shapes and embeds behaviors. On the
other hand, its rules for targeted interventions are selective and dynamically adapted,
which lends algorithmic governance features of an autonomous actor (Helbing 2015;
Just and Latzer 2017: 247–248; Ziewitz 2016: 5). The algorithmic decision-making
system mediates and proactively intervenes into the perceptions and behaviors of many
individuals; and it is self-regulating, learning over time and thereby adapting how it
performs these interventions. Moreover, the algorithmic decision-making system as a
centralized instance interacts with a multitude of individual entities. Yet it does not do
so in a uniform but in a differentiated fashion—through its capacity to personalize
outputs—as if it were many different actors at once.
Overall, with these abilities, algorithmic steering exhibits an extraordinary capacity
to regulate social relations under conditions of increased complexity. It thus has the
potential to change the way in which societies are organized. This is not only true for
applications like the ones mentioned further above, in which the state employs algo-
rithmic decision systems to solve coordination problems in certain areas. The capacities
of algorithmic governance also form the basis of broader visions of society in which
social relations are increasingly collaborative and shaped by the intelligent steering of a
comprehensive, algorithmically enhanced nervous system (Helbing 2015; Arthur 2011;
O’Reilly 2011)—a vision that also informs the creation of smart cities and that
resonates in certain conceptions of e-government (Linders 2012; Wohlers and Bernier
2016; Nam 2012).
More than simply promising effective and efficient handling of social coordination
tasks, algorithmic governance also manages complexity in a highly responsive and
decentralized way. It incorporates and accommodates the inputs of the individuals
subjected to its coordination activity. Specifically, it integrates diverse inputs, converting
complexity into collective coordination and steering outcomes without, however, erad-
icating this complexity. In this regard, it even seems to mirror the democratic promise of
creating unity out of diversity without sacrificing the latter, while also surpassing any
known form of governing in terms of the capacity to manage social complexity and to
achieve responsiveness. This may further contribute to the notion that algorithmic
governance as such can sustain a new way of ordering society. However, as will be
argued in the following section, this notion is ultimately misleading.
3 Rahwan (2017) has touched upon this figure in thinking about what a social contract about algorithmic systems could look like. In the following, a different perspective is chosen, one that looks at how algorithmic governance itself amounts to a sort of social contract.
For Hobbes, the ruling sovereign and the arrangement that realizes a certain social order are established through individuals mutually agreeing to relinquish their individual freedom to do anything (Hobbes 1909: 133–134). This occurs through a social contract between these individuals which is, however, not so much some real original pact but rather a hypothetical construct: Individuals surrender their natural freedoms and concede power over them to the ruler, who is tasked to create and protect social order and to safeguard the same degree of individual freedom for each of its members.
This is paralleled by algorithmic coordination being based on voluntarily surrendering
some individual autonomy. Specifically, individuals cede decision-making power and responsibility to the algorithmic system so that the system can fulfill its coordination
role, based in part on anticipating individuals’ desires and preferences and intervening
accordingly.
Individuals bound up with the system of algorithmic governance will want to put
their trust in the system to the extent that they expect algorithmic coordination will
satisfy their needs and wants. As in Hobbes’ vision of political order, individuals
perform a sort of authorization through self-binding, through ceding freedom out of
their own desire to have a part in the outputs of the algorithmic system. Again, however, this does not occur through some actual social pact but rather through many—
deliberate or unwitting—individual decisions to subject oneself to algorithmic
coordination.
A major difference is that the authority of the Leviathan is essentially based on the
fear of premature and violent death, and the promise of safety (Hobbes 1909: 101). In
contrast, algorithmic coordination derives its authority not from guaranteeing peace but
instead from the promise of assisting in the pursuit of happiness and fulfilling individual preferences.4 Individuals do not have to follow the guidance of algorithmic systems and may decide not to comply. In that case, however, the optimal coordination outcome can be undermined—as Hobbes’ commonwealth would be if obedience to the sovereign were renounced (Hobbes 1909: 261). To put their trust in the algorithmic
coordination effort, individuals must be able to expect that it is effective as well as
unbiased and fair. Hence, its outputs create the acceptance and legitimacy of algorith-
mic governance. This mirrors Hobbes’ (1909: 112) account, in which the legitimacy of
the political rule is ultimately founded in its effectiveness.
In case of such perceived effectiveness, the mediating system of algorithmic coordination provides information, advice, or decisions such that those addressed by them are inclined to comply out of their own self-interest. Indeed, as some authors have noted, there is a widespread readiness to subject oneself to, and even become dependent on, forms of algorithmic surveillance and steering if doing so provides palpable benefits and amenities (Yeung 2017b: 131; Brandimarte and Acquisti 2012; Zuboff 2019). As long as individuals have no reason to question its effectiveness and fairness and can indulge in the benefits resulting from algorithmic coordination, it does not have to be authoritative and binding in order to have an equivalent effect.
Conversely, it is important that algorithmic governance can generate the acquiescence of those affected by its steering and thus remain undisputed. This marks a
4 The Chinese example of the Social Credit System, however, also shows that this can be turned around into effecting behavior and decisions through the fear of losing social status and access to various public or commercial offers and services.
further parallel between algorithmic governance and Hobbes’ idea of the sovereign.
Hobbes (1909: 142–150) pleaded for a monarchy as the form of political rule because,
he argued, only where this rule is located in a single person will contradictions in the
laws be avoided. In contrast, he saw democratic politics as an inadequate form of rule, as democratic conflict and contestation would destroy effective decision-making—and he even called contestation diabolical, as it was likely to introduce confusion and chaos (Kratochwil 2013: 287).
Algorithmic coordination, in order to realize its effectiveness and efficiency in
achieving coordination, similarly requires that individuals rely on its superior distrib-
uted awareness and its “intelligence” in realizing individually satisfying outcomes,
which are also part of a larger coordination effort. Opening this coordination effort to
individuals challenging and contesting this process, that is, to their making use of their own judgement, would easily risk thwarting the algorithmic system’s performance.
Taking the example of a parking guidance system as alluded to further above may serve
to illustrate how contestation of algorithmic decisions can destroy the coordination
effort. The more individuals shirk algorithmic recommendations and decisions that are
supposed to yield an optimal collective outcome or contest the criteria behind those
decisions, the more difficult attaining this outcome becomes. Moreover, the
contestability of algorithmic decisions during the coordination process could invite
individuals to exploit this option to improve their own personal outcomes—thus foiling
the coordination effort.
This is not to say that algorithmic governance necessarily forms an absolute authority
that cannot be subjected to the control of those affected by its decisions. As is
acknowledged further below, control over algorithmic governance may well be achieved
via adequate procedures. However, with regard to its very process of optimizing and
coordinating, it needs to operate as an unquestioned authority, with the capacity to
achieve the best coordination outcomes, if it is to also realize this capacity. The value of
the algorithmic Leviathan thus seems to lie in its character as a “well-functioning, big
machine,” to use the words of Carl Schmitt (1996: 42) where he refers to the value of the
state as posited in some strands of political thought. Schmitt, however, adds to this
remark that the state only seemingly forms a mere technical instrument and neutral
arrangement. In a similar vein, the preceding considerations imply that algorithmic
governance, much more than just being a technological device, puts into practice a
specific vision of social order—that of an apolitical management of societal affairs.
Similar to the status and power of the Leviathan, the power of algorithmic coordination
goes hand in hand with the absence of contestation. Therein lies a fundamental trait of
algorithmic governance that is diametrically opposed to the political as an ongoing
process in which different perspectives can compete and challenges to the status quo
continuously arise (Rancière 1999: 26–27). Where there is no longer an open process in which there can be struggle over collective decisions and over the values, rules, and institutions that govern the people, the political ends. Its opposite is what Rancière
(1999: 27–31) calls the police, which is understood as the mere administration and
ordering of social life, without this administration being contested and called into
question. The Leviathan is apolitical in that latter sense. It exhausts its political
character in the act of instituting the sovereign and in the collective binding decisions
taken by this sovereign to govern social order. There is no space designated for political
argument and contestation; the sovereign is essentially a mighty administrator, and a
major task involved in this governing of society is to secure a space in which private,
economic activities can take place and unfold (Hobbes 1909: 164, 175–177)—not to
establish an arena for political struggles.
Algorithmic coordination, too, takes on this apolitical character. On the one hand, it
is a potentially powerful mode of coordination that can shape social relations and
operate with an effectiveness as if it were making collective and authoritative decisions.
On the other hand, its coordination process follows an administrative, technocratic
mode of problem-solving and responding to inputs, e.g., in the forms of preferences or
demands (Hildebrandt 2016; Morozov 2014; Kitchin 2014b). It is thus fundamentally
different from the political as described above and from the—at least to some degree—inevitably hermeneutic quest of creating and struggling over meaning in the mode of language which the political involves (Arendt 1998; Barber 2003). This process cannot take place with a predefined direction, nor does it ever rest on solid ground, because it is
not about optimizing certain goals but rather about figuring out what the guiding goals
and values should be. According to pragmatist philosophy and its view on society,
language use always involves an ineradicable element of uncertainty because language
games are never perfectly reproduced and stable, but they always entail modifications,
mutations, and openings (Tully 1999: 164). Algorithmic governance cannot relieve individuals of navigating these language games, of coping with uncertainty and differing views, and of deciding when to contest extant rules and decisions.
There is thus an important distinction at stake that has been emphasized by Hildebrandt (2016) in looking at information processing in legal practice as opposed to computational operations. She points to law as an argumentative practice rather than merely a processing of information. Consequently, legal judgment is not simply about
performance in terms of accuracy, for instance, but “judgment itself is predicated on the
contestability of any specific interpretation of legal certainty” (Hildebrandt 2016: 8,
emphasis in original). Unlike the processing of signs in computation, legal practice is
based on argument and takes place in the mode of human language (Hildebrandt 2016:
9). It entails an intersubjective and hermeneutic dimension of struggling over meaning
as well as the content and the foundation of societal rules and values. This kind of
practice allows for undergoing a process of learning through reevaluating, revising, and
updating the rules and values that govern a society. In contrast, while algorithmic governance, through responding to changing inputs, also adapts itself, this does not happen on the level of dialogue and reasoning, nor does it concern the goals and parameters that guide its process.
The high degree of responsivity in algorithmic coordination, including its provision
of personalized outputs as a reaction to individual inputs, is therefore not to be confused
with the collective influence or autonomy of those subjected to this steering. It is
fundamentally different from the kind of responsiveness realized by a liberal-
democratic system, which involves an ongoing process of setting, contesting, and
possibly revising goals and decisions based on the inputs of those governed (Urbinati
2014). The adaptivity of algorithmic governance, in contrast, is oriented towards best
fulfilling certain substantial and procedural goals which are themselves, however, not
the subject of its optimization and coordination process.
In sum, the learning of the algorithmic system and algorithmic coordination as a way
of managing social relations cannot replace the kind of learning involved in politics;
and algorithmic governance realizes responsiveness only in the sense that it answers to
the individual inputs and corresponds to a consumption-like preference realization.
All in all, achieving social coordination and managing societal complexity through
algorithmic governance is fundamentally different from dealing with social complexity
in and through politics. Yet the two are functionally related in several respects. While
algorithmic governance entails an apolitical mode of steering, it is precisely this
character that has important ramifications for the political. First, attempts to establish
algorithmic governance in certain areas work towards shaping social relations in those
areas according to its operating mode. Such attempts may well be based on the premise
or the claim that properly designed algorithmic decision-making achieves superior and
objective solutions to complex problems (Morozov 2014; Kitchin 2014a). This is itself
a political move because not only does algorithmic governance amount to exerting
public power, but this power is also never neutral. As has repeatedly been pointed out,
algorithmic decision-making systems necessarily embody certain values, goals, and procedural parameters that inform their operations and that are never neutral or objective
even though they may become normalized and taken-for-granted (Baruh and Popescu
2017; Meijer and Bolívar 2016; Yeung 2017b; de Laat 2017). Such goals and param-
eters that guide its activities form an institutional core of algorithmic governance, easily
masked by its constant fluidity, its adaptiveness, and capacity to interact with distrib-
uted entities in a differentiated fashion.
Second, the question of the goals and values behind algorithmic governance can be
raised explicitly and there can be struggle over the core principles of its architecture and
the objectives it incorporates. But this political process would itself not follow the
managerial and technocratic mode of problem-solving and could not be part of the
algorithmic coordination as such. The political, then, starts where there is a debate
about what the algorithmic systems should accomplish, what values they should
embody, what conceptions of fairness, etc., they should realize. This is precisely the
idea of attempts to give algorithmic systems an explicit and socially agreed-upon basis
and to “create channels between human values and governance algorithms” (Rahwan
2017: 5).
It is furthermore conceivable that algorithmic decision-making systems are them-
selves used to integrate and aggregate inputs in order to facilitate political interaction,
opinion formation, and will formation. Corresponding systems or platforms could be
engineered to sustain a process of ongoing contestation and deliberation (van den
Hoven 2005; Dahlberg 2007). In doing so, however, they would precisely not perform the kind of algorithmic coordination described above, because the mechanism of coordination, learning, and judgment resides in the participating individuals and concerns the goals they collectively set via accepted procedures in order to govern their relations.
In any case, political debate about the appropriate design and uses of algorithmic
governance invites difficult questions involving complex technical issues. Following
Hildebrandt (2016: 8), a major source of ambiguity arises because different algorithmic
systems designed for the same purpose lead to different outcomes.5 Also, there are
various, partly contradictory ways in which performance objectives, such as quality and
fairness, can be evaluated and realized, which opens up a substantial space for debate
about the best solution (Berk et al. 2018).
Third, algorithmic governance can work towards removing the occasion for the
kind of hermeneutic activity involved in politics. With its micro-adaptive process
and its capacity for sorting, it solves coordination problems through separation
and sorting instead of integration (as, e.g., in deliberation or authoritative deci-
sions by a legitimate institution). Indeed, algorithmic sorting promises to resolve
the seemingly inevitable tension between individuality and belonging/community
(Bauman 2017) or, put differently, between a mechanic and an organic solidarity
in social integration (Schroeder and Ling 2014: 797). Individuals are enabled to
choose their social environment and their communities and to maintain their
boundaries through algorithmic filtering (Bennett and Iyengar 2008; Dylko et al.
2012). The strengthening and maintenance of communities thus no longer neces-
sarily conflicts with individualization and complex functional differentiation be-
cause algorithmic coordination can help to accommodate and integrate the diver-
sity of many different more or less closed-off social spheres. This way, it mitigates
one of the core challenges of governing under conditions of high social complex-
ity: to create harmony out of diversity.
Algorithmic governance does so—if it is functioning well—through adaptively
reacting to inputs and generating outputs that are experienced as satisfactory. It is
then working towards a state in which there is no reason for its guiding goals and
parameters to even come into view. That this orientation is built into its design has
an important implication. It means that regardless of whether its substantial and
procedural goals have been set by those affected or not, it is geared towards
responsiveness and generating acceptance through outputs that are perceived as
satisfactory. Under these conditions, the positive personal experience of affected individuals and the appeal of the algorithmic system’s effectiveness and efficiency alone can motivate trust in it.
These considerations concern the question of accountability regarding algorithmic
steering. Problems of accountability arise not only because, as various authors have
noted, its complex process lacks transparency and remains unintelligible to the vast
majority of individuals (Ananny and Crawford 2016; de Laat 2017; Mittelstadt and
Floridi 2016; Lepri et al. 2018), but also because algorithmic governance by design
works towards removing the occasion for questions of accountability and about the
goals and values which it embodies. In that sense, it appears as highly potent in
producing satisfied individuals, but ones that are governed by rules and objectives that
are not of their own making and that they have not collectively authorized. Whether this
will be the case for future applications of algorithmic governance remains to be seen. At
least in current practice, where algorithmic governance is applied, its results, effectiveness, and efficiency discernibly count as the principal evaluative standards.
5 Moreover, algorithmic systems process inputs in the form of data that cannot speak for itself—it must be processed based on selection rules and decisions about what counts as relevant; and the specific ways of selecting, categorizing, and making distinctions based on available information always impose a certain way of seeing (Ananny and Crawford 2016; Mittelstadt and Floridi 2016; Floridi 2012).
This is the case with commercial applications such as platforms where individual
consumers and users are primarily interested in receiving certain services and benefits.
The guiding criteria, i.e., the question of how such services are generated, are of little concern to users as long as they are content—even if the algorithmic decision-making system
performs a collective steering and coordination function. A similar primacy of the
output-dimension is however also observable for applications established by the state.
Even conceptualizations of smart cities that stress a procedural dimension predominantly refer to a process in which citizens are involved and become engaged but which
is not regarded as political (Meijer and Bolívar 2016: 402). Citizens are supposed to
provide their inputs into the algorithmic system in order to harness their distributed
knowledge and collective intelligence, which can then be used for better service
provision. They are not envisaged to enact some form of collective autonomy.
Moreover, as Brauneis and Goodman (2017) have shown for various applications of
algorithmic governance in the US regulating areas such as crime, health, and education,
these are primarily justified with and measured by their effectiveness. At the same time,
they are often deficient with regard to transparency and accountability, criteria that are
given subordinate importance. Thus, there is a discernible tendency in real-world
applications to judge algorithmic governance by its effectiveness.
Altogether, the more effectively algorithmic governance satisfies individual demands and preferences, like a “giant machine” that mediates social relations but recedes into the background, the more it removes the occasion for politics
even though it realizes a political function. What is strengthened, to borrow the famous
dictum by Engels (Marx and Engels 1962: 241), is the administration of things that
takes the place of genuine political action and that lets the state wither away. Ironically,
it is a hyper-efficient, decentralized, and partly market-like coordination of algorithmic governance that promises to realize this vision; and one that is not necessarily controlled and owned by those who are subjected to it.
5 Conclusion
With this operating mode, algorithmic governance marks a qualitative change in the
capacity to manage social complexity. In its most potent form, algorithmic governance
exerts public power as it realizes outcomes that can be equivalent to collective binding
decisions—i.e., to political decisions in a narrow sense. At the same time, algorithmic
steering appears as a new way of ordering social relations that does not easily fit even
the broad concept of governance. It would thus seem that algorithmic governance calls
for updating existing ways of thinking about how social order can be achieved even under conditions of high societal complexity.
However, as has been argued above, despite its novel character, algorithmic governance can be grasped with Thomas Hobbes’ vision of the sovereign as a fundamental
figure and template in political thought. Algorithmic governance amounts to a sort of
algorithmic Leviathan, a “giant machine” that operates in the background, that brings
together and harnesses the combined power of a multitude of individuals, and that
makes possible coordination outcomes which the individuals themselves could not
attain without it. Like Hobbes’ Leviathan, algorithmic governance draws its acceptance
from its effectiveness. It entails individuals giving up a part of their autonomy—that of
intervening into the very coordination process—so that algorithmic governance can
produce outcomes from which these individuals benefit and that would otherwise not
be possible. Algorithmic governance can comprise a governing of social relations that
is engineered to be benign to those affected, geared towards producing responsive and
satisfactory outputs—as Hobbes’ Leviathan ideally would. Because of this and because
algorithmic governance is oriented towards producing satisfactory results for acquiescent individuals, its socio-political anatomy strongly resembles that of Hobbes’
sovereign—even though the tremendous differences in terms of socio-technological
conditions may mask this resemblance.
Like that of the Leviathan, the operating mode of algorithmic governance is ultimately apolitical in the sense that it does not foresee a questioning or contesting of the goals
and criteria that guide it. With regard to the very process of realizing its steering
function and achieving complex coordination outcomes, the algorithmic decision-
making system has to count as an undisputed authority. Hence, on the one hand,
algorithmic governance can be highly responsive and adaptive to individual inputs.
On the other hand, it is highly adaptive merely in its effort to realize the substantial and
procedural goals that guide its coordination effort, but not in terms of modifying these
guiding criteria themselves. Moreover, algorithmic governance aims to respond to individual inputs and to produce satisfactory outcomes regardless of whether those affected have set or are even aware of the goals that guide this process. Yet this
ordering of social relations is never neutral as it follows certain goals and parameters
instead of possible alternative ones and therefore favors some way of exerting public
power over others.
Procedures that allow for setting and revising these goals need to be external to
algorithmic governance. If those subjected to algorithmic governance are to be able to
see its collective coordination outcomes as being produced in their name, this social
steering would have to be collectively authorized. Only a political process can provide
this kind of mandate to realize certain substantial and procedural goals which inform
outcomes of collective coordination. A political process, however, involves a sort of
learning and adaptation that is fundamentally different from the adaptive process of
algorithmic governance and its instrumental mode of optimization. It is based on an
ongoing struggle over ideas, values, and the goals which order social life, a struggle
that involves the renegotiation of meanings and interpretations in the mode of language.
In sum, the analogy with Hobbes’ Leviathan serves to clarify why the learning of the algorithmic system cannot substitute for the kind of learning involved in
politics. Certainly, algorithmic governance is highly capable of managing social com-
plexity and dealing with the diversity of the entities among which it aims to achieve
coordination. But the political task of creating unity out of diversity is a different
exercise altogether. Yet, although algorithmic coordination cannot replace the political,
the more it works effectively and efficiently to satisfy individuals’ expectations and
preferences, the more it may be able to remove the occasion for politics: by accommodating the heterogeneous needs and wants of individuals and eradicating frictions that would otherwise call for the messy process of bridging and integrating different views and demands. It is perhaps in this sense that it presents the biggest challenge to the political.
Acknowledgments I would like to thank the reviewers for their valuable comments and suggestions.
Thanks also go to Joschka Frech for assisting with the preparation of an earlier version of the manuscript.
References
Ananny, M., & Crawford, K. (2016). Seeing without knowing: limitations of the transparency ideal and its
application to algorithmic accountability. New Media & Society, online first.
Arendt, H. (1998). The human condition (2nd ed.). Chicago: University of Chicago Press.
Arthur, B. W. (2011). The second economy. McKinsey Quarterly, 2011, 3, 1–3, 9.
Barber, B. (2003). Strong democracy: participatory politics for a new age. Berkeley: University of California
Press.
Baruh, L., & Popescu, M. (2017). Big data analytics and the limits of privacy self-management. New Media &
Society, 19(4), 579–596.
Bauman, Z. (2017). Retrotopia. Cambridge: Polity.
Beer, D. (2009). Power through the algorithm? Participatory web cultures and the technological unconscious.
New Media & Society, 11(6), 985–1002.
Bennett, W. L., & Iyengar, S. (2008). A new era of minimal effects? The changing foundations of political
communication. Journal of Communication, 58(4), 707–731.
Berk, R., Heidari, H., Jabbari, S., Kearns, M., & Roth, A. (2018). Fairness in criminal justice risk assessments:
the state of the art. Sociological Methods & Research, 004912411878253.
Bimber, B. (2014). Digital Media in the Obama Campaigns of 2008 and 2012: adaptation to the personalized
political communication environment. Journal of Information Technology & Politics, 11(2), 130–150.
Brandimarte, L., & Acquisti, A. (2012). The economics of privacy. In M. Peitz & J. Waldfogel (Eds.), The Oxford handbook of the digital economy (pp. 547–571). New York: Oxford University Press.
Brauneis, R., & Goodman, E. P. (2017). Algorithmic transparency for the smart city. SSRN Electronic Journal,
https://www.ssrn.com/abstract=3012499 (Accessed May 16, 2018).
Bucher, T. (2012). Want to be on top? Algorithmic power and the threat of invisibility on Facebook. Culture
Machine, 13, 1–13.
Chen, Y.-C., & Hsieh, T.-C. (2014). Big data for digital government: opportunities, challenges, and strategies.
International Journal of Public Administration in the Digital Age, 1(1), 1–14.
Clarke, A., & Margetts, H. (2014). Governments and citizens getting to know each other? Open, closed, and
big data in public management reform. Policy & Internet, 6(4), 393–417.
Coletta, C., & Kitchin, R. (2017). Algorhythmic governance: Regulating the “heartbeat” of a city using the
Internet of things. Big Data & Society, 4(2), 205395171774241.
Curry, E. (2016). The big data value chain: definitions, concepts, and theoretical approaches. In J. Cavanillas, E. Curry, & W. Wahlster (Eds.), New horizons for a data-driven economy,
Dissecting the Algorithmic Leviathan: On the Socio-Political...
Lepri, B., Oliver, N., Letouzé, E., Pentland, A., & Vinck, P. (2018). Fair, transparent, and accountable
algorithmic decision-making processes: the premise, the proposed solutions, and the open challenges.
Philosophy & Technology, 31(4), 611–627.
Lessig, L. (2002). Code: and other laws of cyberspace (reprint). New York: The Perseus Books Group.
Leszczynski, A. (2016). Speculative futures: cities, data, and governance beyond smart urbanism.
Environment and Planning A: Economy and Space, 48(9), 1691–1708.
Linders, D. (2012). From e-government to we-government: defining a typology for citizen coproduction in the
age of social media. Government Information Quarterly, 29(4), 446–454.
Lyon, D. (2003). Surveillance as social sorting: computer codes and mobile bodies. In D. Lyon (Ed.), Surveillance as social sorting: privacy, risk, and digital discrimination (pp. 13–30). London: Routledge.
Mackenzie, A. (2013). Programming subjects in the regime of anticipation: Software studies and subjectivity.
Subjectivity, 6(4), 391–405.
Margetts, H., & Dunleavy, P. (2013). The second wave of digital-era governance: a quasi-paradigm for
government on the Web. Philosophical Transactions of the Royal Society A: Mathematical, Physical and
Engineering Sciences, 371(1987), 20120382–20120382.
Marx, K., & Engels, F. (1962). Marx / Engels: Werke: Band 20: Anti-Dühring - Dialektik der Natur. Berlin:
Dietz.
Meijer, A., & Bolívar, M. P. R. (2016). Governing the smart city: a review of the literature on smart urban
governance. International Review of Administrative Sciences, 82(2), 392–408.
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: mapping
the debate. Big Data & Society, 3(2), 205395171667967.
Mittelstadt, B. D., & Floridi, L. (2016). The ethics of big data: current and foreseeable issues in biomedical
contexts. Science and Engineering Ethics, 22(2), 303–341.
Morozov, E. (2014). To save everything, click here: technology, solutionism and the urge to fix problems that
don’t exist. London: Penguin Books.
Nam, T. (2012). Suggesting frameworks of citizen-sourcing via Government 2.0. Government Information
Quarterly, 29(1), 12–20.
Napoli, P. M. (2014). Automated media: an institutional theory perspective on algorithmic media production and consumption. Communication Theory, 24(3), 340–360.
Newell, S., & Marabelli, M. (2015). Strategic opportunities (and challenges) of algorithmic decision-making: a
call for action on the long-term societal effects of “datification”. The Journal of Strategic Information
Systems, 24(1), 3–14.
Oliver, A. (2015). Nudging, shoving, and budging: behavioral economic-informed policy. Public
Administration, 93(3), 700–714.
O’Reilly, T. (2011). Government as a platform. Innovations: Technology, Governance, Globalization, 6(1),
13–40.
Pagallo, U. (2017). Algo-rhythms and the beat of the legal drum. Philosophy & Technology. http://link.springer.com/10.1007/s13347-017-0277-z (Accessed June 2, 2018).
Pentland, A. (2013). The data-driven society. Scientific American, 309(4), 78–83.
Rahwan, I. (2017). Society-in-the-loop: programming the algorithmic social contract. Ethics and Information
Technology, (online first), 1–10.
Rancière, J. (1999). Disagreement: politics and philosophy. Minneapolis: Univ. of Minnesota Press.
Schmitt, C. (1996). The leviathan in the state theory of Thomas Hobbes: meaning and failure of a political
symbol. Westport, Conn: Greenwood Press.
Schroeder, R., & Ling, R. (2014). Durkheim and Weber on the social implications of new information and
communication technologies. New Media & Society, 16(5), 789–805.
Treib, O., Bähr, H., & Falkner, G. (2007). Modes of governance: towards a conceptual clarification. Journal of
European Public Policy, 14(1), 1–20.
Tully, J. (1999). The agonic freedom of citizens. Economy and Society, 28(2), 161–182.
Urbinati, N. (2014). Democracy disfigured: opinion, truth, and the people. Cambridge, Massachusetts:
Harvard University Press.
Veale, M., Van Kleek, M., & Binns, R. (2018). Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI ’18 (pp. 1–14). Montreal, QC: ACM Press. http://dl.acm.org/citation.cfm?doid=3173574.3174014 (Accessed May 16, 2019).
Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making
does not exist in the general data protection regulation. International Data Privacy Law, 7(2), 76–99.
Williamson, B. (2014). Knowing public services: cross-sector intermediaries and algorithmic governance in
public sector reform. Public Policy and Administration, 29(4), 292–312.
Wohlers, T. E., & Bernier, L. L. (2016). Transformation of local government in the digital age. In Setting sail into the age of digital local government (pp. 29–36). Boston, MA: Springer US. http://link.springer.com/10.1007/978-1-4899-7665-9_3 (Accessed November 7, 2016).
Yeung, K. (2017a). “Hypernudge”: big data as a mode of regulation by design. Information, Communication
& Society, 20(1), 118–136.
Yeung, K. (2017b). Algorithmic regulation: a critical interrogation. Regulation & Governance, (online first), 1–19.
Ziewitz, M. (2016). Governing algorithms: myth, mess, and methods. Science, Technology, & Human Values,
41(1), 3–16.
Zuboff, S. (2019). The age of surveillance capitalism: the fight for the future at the new frontier of power.
London: Profile Books.
Zweig, K. A., Wenzelburger, G., & Krafft, T. D. (2018). On chances and risks of security related algorithmic
decision making systems. European Journal for Security Research, 3(2), 181–203.