I. INTRODUCTION
Offering greater efficiency, reduced costs, and new insights into current and
predicted behaviour or trends,1 algorithms are increasingly used to make or
support decisions across many areas of public and private life.2
* Professor of International Human Rights Law, Essex Law School, Director, Human Rights Centre,
University of Essex and Principal Investigator and Director, ESRC Human Rights, Big Data and
Technology Project, lmcgreg@essex.ac.uk; Senior Lecturer, School of Law & Human Rights Centre,
University of Essex, Deputy Work Stream Lead, ESRC Human Rights, Big Data & Technology Project,
d.murray@essex.ac.uk; Senior Research Officer, ESRC Human Rights, Big Data & Technology
Project, Human Rights Centre, University of Essex, vivian.ng@essex.ac.uk. This work was
supported by the Economic and Social Research Council [grant number ES/M010236/1].
1
L Rainie and J Anderson, ‘Code-Dependent: Pros and Cons of the Algorithm Age’ (Pew
Research Center, February 2017) 30–1 <http://www.pewinternet.org/2017/02/08/code-dependent-
pros-and-cons-of-the-algorithm-age>; R Kitchin, ‘Thinking Critically About and Researching
Algorithms’ (2017) 20(1) Information, Communication & Society 14, 18–19.
2
See eg HJ Wilson, A Alter and P Shukla, ‘Companies Are Reimagining Business Process with
Algorithms’ (Harvard Business Review, 8 February 2016) <https://hbr.org/2016/02/companies-are-
reimagining-business-processes-with-algorithms>.
3
Oxford English Dictionary, ‘Definition of algorithm’ <https://en.oxforddictionaries.com/
definition/algorithm>.
4
For instance, from metadata, smart technology and the Internet of Things.
5
JM Balkin, ‘2016 Sidley Austin Distinguished Lecture on Big Data Law and Policy: The
Three Laws of Robotics in the Age of Big Data’ (2017) 78(5) OhioStLJ 1217, 1219.
6
Select Committee on Artificial Intelligence, Corrected Oral Evidence: Artificial Intelligence,
Evidence Session No. 1 (HL 2017–2019), 10 October 2017 Evidence Session <http://data.
parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/artificial-intelligence-
committee/artificial-intelligence/oral/71355.pdf> 2, 9.
7
See discussion Part IIA.
8
See discussion Parts IIIA, IVA and IVB.
9
See discussion Part IIB; JA Kroll et al., ‘Accountable Algorithms’ (2017) 165(3) UPaLRev
633; S Barocas, S Hood and M Ziewitz, ‘Governing Algorithms: A Provocation Piece’ (Governing
Algorithms Conference, New York University, 29 March 2013); M Ananny and K Crawford,
‘Seeing Without Knowing: Limitations of The Transparency Ideal and Its Application to
Algorithmic Accountability’ (2018) 20(3) New Media & Society 973; DK Citron and F Pasquale,
‘The Scored Society: Due Process for Automated Predictions’ (2014) 89(1) WashLRev 1; T Zarsky,
‘The Trouble with Algorithmic Decisions: An Analytic Road Map to Examine Efficiency and
Fairness in Automated and Opaque Decision Making’ (2016) 41(1) Science, Technology &
Human Values 118; N Diakopoulos, ‘Algorithmic Accountability: Journalistic Investigation of
Computational Power Structures’ (2015) 3(3) Digital Journalism 398; S Wachter, B Mittelstadt
and C Russell, ‘Counterfactual Explanations Without Opening the Black Box: Automated
Decisions and the GDPR’ (2018) 31(2) Harvard Journal of Law & Technology 841.
10
Discussed further in Part IIB.
11
These approaches tend to focus on computational methods to achieve some form of statistical
parity, which is a narrow view of giving effect to the principle of equality. See discussion Part IIB;
Kitchin (n 1) 16.
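To make the narrowness of such measures concrete, the following is a minimal Python sketch of one common computational method, a statistical (demographic) parity check; the data, groups and decisions are invented solely for illustration and are not drawn from any system or study cited here.

```python
# Minimal sketch of statistical (demographic) parity, one computational
# method of the kind referred to above. All data are invented for
# illustration only.

def positive_rate(decisions):
    """Share of favourable (True) decisions within a group."""
    return sum(decisions) / len(decisions)

# Hypothetical favourable/unfavourable outcomes for two demographic groups.
group_a = [True, True, False, True]    # 75% favourable
group_b = [True, False, False, False]  # 25% favourable

parity_gap = positive_rate(group_a) - positive_rate(group_b)
print(f"Statistical parity gap: {parity_gap:.2f}")

# A gap near zero satisfies this metric, yet it says nothing about whether
# individual decisions were accurate or justified, which is why such checks
# give only narrow effect to the principle of equality.
```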
12
As shorthand in this article we use the abbreviation ‘IHRL’ to refer to international human
rights law and broader norms.
13
UN Human Rights Committee, ‘General Comment No. 31 The Nature of the Legal Obligation
Imposed on States Parties to the Covenant’ (26 May 2004) UN Doc CCPR/C/21/Rev.1/Add. 13,
paras 3–8; UN Committee on Economic Social and Cultural Rights, ‘General Comment No. 3
The Nature of States Parties’ Obligations (Art. 2, Para. 1, of the Covenant)’ (14 December 1990) UN
Doc E/1991/23, paras 2–8.
14
UN Human Rights Council, ‘Report of The Special Representative of The Secretary-General
on The Issue of Human Rights and Transnational Corporations and Other Business Enterprises, John
Ruggie, on Guiding Principles on Business and Human Rights: Implementing the United Nations
‘Protect, Respect and Remedy’ Framework’ (21 March 2011) UN Doc A/HRC/17/31, Principles
1–10 [hereinafter Ruggie Principles].
15
ibid, Principle 15.
16
Council of Europe Committee of Experts on Internet Intermediaries (MSI-NET), ‘Algorithms
and Human Rights: Study on the Human Rights Dimensions of Automated Data Processing
Techniques and Possible Regulatory Implications’ (March 2018) Study DGI(2017)12; UN
Human Rights Council, ‘Report of the Office of the UN High Commissioner for Human Rights
on The Right to Privacy in the Digital Age’ (3 August 2018) UN Doc A/HRC/39/29, paras 1, 15;
F Raso et al., ‘Artificial Intelligence & Human Rights: Opportunities & Risks’ (Berkman Klein
Center for Internet & Society at Harvard University, 25 September 2018); M Latonero,
‘Governing Artificial Intelligence: Upholding Human Rights & Dignity’ (Data & Society, 10
October 2018); Access Now, ‘Human Rights in the Age of Artificial Intelligence’ (8 November
2018); P Molnar and L Gill, ‘Bots at the Gate: A Human Rights Analysis of Automated
Decision-Making in Canada’s Immigration and Refugee System’ (University of Toronto
International Human Rights Program and The Citizen Lab, September 2018); UN Human Rights
Council, ‘Report of the Special Rapporteur on the Promotion and Protection of the Right to
Freedom of Opinion and Expression on A Human Rights Approach to Platform Content
Regulation’ (6 April 2018) UN Doc A/HRC/38/35; UN Human Rights Council, ‘Report of the
Independent Expert on the Enjoyment of All Human Rights by Older Persons on Robots and
Rights: The Impact of Automation on the Human Rights of Older Persons’ (21 July 2017) UN
Doc A/HRC/36/48; UN Special Rapporteur on Extreme Poverty and Human Rights Philip
Alston, ‘Statement on Visit to the United Kingdom’ (London, 16 November 2018) <https://www.
ohchr.org/EN/NewsEvents/Pages/DisplayNews.aspx?NewsID=23881&LangID=E>; Global
Future Council on Human Rights 2016–2018, ‘White Paper: How to Prevent Discriminatory
Outcomes in Machine Learning’ (World Economic Forum, March 2018); D Allison-Hope,
‘Artificial Intelligence: A Rights-Based Blueprint for Business, Paper 2: Beyond the Technology
Industry’ (Business for Social Responsibility, August 2018).
17
See eg C van Veen and C Cath, ‘Artificial Intelligence: What’s Human Rights Got to Do with
It?’ (Data & Society, 14 May 2018) <https://points.datasociety.net/artificial-intelligence-whats-
human-rights-got-to-do-with-it-4622ec1566d5>.
18
UN Human Rights Council, ‘Report of the Special Representative of the Secretary-General
on the issue of human rights and transnational corporations and other business enterprises, John
Ruggie on Protect, Respect and Remedy: A Framework for Business and Human Rights’
(7 April 2008) UN Doc A/HRC/8/5, para 3.
19
Ruggie Principles (n 14) Principle 11 and accompanying commentary. At the time of writing,
the open-ended intergovernmental working group on transnational corporations and other business
enterprises with respect to human rights has produced a zero draft of ‘a legally binding instrument to
regulate, in international human rights law, the activities of transnational corporations and other
business enterprises’, as mandated by UN Human Rights Council Resolution 26/9. See UN
Human Rights Council, ‘Legally Binding Instrument to Regulate, In International Human Rights
Law, The Activities of Transnational Corporations and Other Business Enterprises’ (Zero Draft
16.7.2018).
20
UN General Assembly, ‘Report of the Working Group on the Issue of Human Rights and
Transnational Corporations and Other Business Enterprises on Access to Effective Remedies
Under the Guiding Principles on Business and Human Rights: Implementing the United Nations
Protect, Respect and Remedy Framework’ (18 July 2017) A/72/162, para 5; UN Human Rights
Council, ‘Report of the UN High Commissioner for Human Rights on Improving Accountability
and Access to Remedy for Victims of Business-Related Human Rights Abuse’ (10 May 2016) A/
HRC/32/19, paras 2, 4–6.
21
See Rainie and Anderson (n 1) 83.
22
ibid, 83.
23
The nature of algorithms is presented simplistically here, to encapsulate their essential
elements relevant to the present discussion. There are multiple ways of understanding what an
algorithm is, its functions, and how it executes those functions. See TH Cormen et al.,
Introduction to Algorithms (3rd edn, MIT Press 2009) 5–10; DE Knuth, The Art of Computer
Programming, vol 1 (3rd edn, Addison Wesley Longman 1997) 1–9.
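By way of illustration of the textbook sense of the term used in these sources, the following Python sketch implements insertion sort, the introductory example in Cormen et al.: a finite, well-defined procedure that transforms an input into an output.

```python
# Illustrative example of an algorithm in the textbook sense: insertion sort,
# the introductory example in Cormen et al., Introduction to Algorithms.

def insertion_sort(values):
    """Return a new list with the numbers sorted in ascending order."""
    result = list(values)
    for i in range(1, len(result)):
        current = result[i]
        j = i - 1
        # Shift larger elements one position to the right.
        while j >= 0 and result[j] > current:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = current
    return result

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```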
24
T Gillespie, ‘The Relevance of Algorithms’ in T Gillespie, PJ Boczkowski and KA Foot (eds),
Media Technologies: Essays on Communication, Materiality, and Society (MIT Press 2014) 167,
192.
25
The case of Wisconsin v Eric L. Loomis, which deals with precisely this issue, is discussed in
greater detail in Part IVB.
26
UN Special Rapporteur on Extreme Poverty and Human Rights Philip Alston, ‘Statement on
Visit to the United Kingdom' (n 16).
27
ibid.
28
London Councils, ‘Keeping Children Safer by Using Predictive Analytics in Social Care
Management’ <https://www.londoncouncils.gov.uk/our-key-themes/our-projects/london-ventures/
current-projects/childrens-safeguarding>.
29
N McIntyre and D Pegg, ‘Councils Use 377,000 People’s Data in Efforts to Predict Child
Abuse’ (The Guardian, 16 September 2018) <https://www.theguardian.com/society/2018/sep/16/
councils-use-377000-peoples-data-in-efforts-to-predict-child-abuse>.
30
E Benvenisti, ‘Upholding Democracy Amid the Challenges of New Technology: What Role
for the Law of Global Governance?’ (2018) 29(1) EJIL 9, 60.
31
Northpointe, ‘Practitioner’s Guide to COMPAS Core’ (Northpointe, 19 March 2015) Section
4.2.2 Criminal Associates/Peers and Section 4.2.8 Family Criminality <http://www.northpointeinc.
com/downloads/compas/Practitioners-Guide-COMPAS-Core-_031915.pdf>; AM Barry-Jester, B
Casselman and D Goldstein, ‘The New Science of Sentencing’ (The Marshall Project, 4 August
2015) <https://www.themarshallproject.org/2015/08/04/the-new-science-of-sentencing>.
32
See C O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and
Threatens Democracy (Penguin 2016) Introduction.
33
US Executive Office of the President, Big Data: A Report on Algorithmic Systems,
Opportunity, and Civil Rights (May 2016) 11.
34
M Hurley and J Adebayo, ‘Credit Scoring in the Era of Big Data’ (2016) 18(1) Yale Journal of
Law & Technology 148, 151–2, 163, 166, 174–5.
35
K Waddell, ‘How Algorithms Can Bring Down Minorities’ Credit Scores’ (The Atlantic, 2
December 2016) <https://www.theatlantic.com/technology/archive/2016/12/how-algorithms-can-
bring-down-minorities-credit-scores/509333/>.
36
For instance, an application may be sold to a third party who uses its own in-house data, or
who purchases data sets from data brokers.
37
See Kitchin (n 1) 21.
38
F Pasquale, The Black Box Society: The Secret Algorithms That Control Money and
Information (Harvard University Press 2015) 3–14.
39
Y LeCun et al., ‘Learning Algorithms for Classification: A Comparison on Handwritten Digit
Recognition’ in J-H Oh, C Kwon and S Cho (eds), Neural Networks: The Statistical Mechanics
Perspective (World Scientific 1995) 261.
40
N Bostrom and E Yudkowsky, ‘The Ethics of Artificial Intelligence’ in K Frankish and W
Ramsey (eds), Cambridge Handbook of Artificial Intelligence (Cambridge University Press
2014) 316, 316–17.
41
As distinct from learning on the basis of training data, and then being deployed to a real-world
context.
42
State of Wisconsin v Eric L. Loomis 2016 WI 68, 881 N.W.2d 749.
43
Although they were not held to be decisive in respect to the matter at hand.
44
State of Wisconsin v Eric L. Loomis (n 42) para 66.
45
See eg Lenddo, ‘Credit Scoring Solution’, <https://www.lenddo.com/pdfs/
Lenddo_FS_CreditScoring_201705.pdf>, which includes social network data in credit scores.
46
J Angwin et al., ‘Minority Neighborhoods Pay Higher Car Insurance Premiums Than White
Areas with the Same Risk’ (ProPublica, 5 April 2017) <https://www.propublica.org/article/
minority-neighborhoods-higher-car-insurance-premiums-white-areas-same-risk>; M Kamp, B
Körffer and M Meints, ‘Profiling of Customers and Consumers – Customer Loyalty Programmes
and Scoring Practices’ in M Hildebrandt and S Gutwirth (eds), Profiling the European Citizen:
Cross-Disciplinary Perspectives (Springer 2008) 201, 207.
47
D Boyd, K Levy and A Marwick, ‘The Networked Nature of Algorithmic Discrimination’
(Open Technology Institute, October 2014).
48
Conference of Federal Savings & Loan Associations v Stein, 604 F.2d 1256 (9th Cir. 1979),
aff’d mem., 445 U.S. 921 (1980) 1258.
49
N Diakopoulos and M Koliska, ‘Algorithmic Transparency in the News Media’ (2017) 5(7)
Digital Journalism 809, 811.
50
BD Mittelstadt et al., ‘The Ethics of Algorithms: Mapping the Debate’ (2016) 3(2) Big Data &
Society 6.
51
See Diakopoulos and Koliska (n 49) 816.
52
See Ananny and Crawford (n 9) 977.
53
D Kehl, P Guo and S Kessler, 'Algorithms in the Criminal Justice System: Assessing
the Use of Risk Assessments in Sentencing’ (July 2017) Responsive Communities 32–3.
54
See Diakopoulos and Koliska (n 49) 816.
2. Transparency challenges
Notwithstanding the importance of transparency as a normative objective, some
commentators have noted that it may be difficult to achieve in practice,63
highlighting that transparency, of itself, may not be meaningful.64 For
example, certain algorithms can 'learn' and modify their operation during
deployment.
55
See Kehl, Guo and Kessler (n 53) 28.
56
N Diakopoulos, ‘Accountability in Algorithmic Decision Making’ (2016) 59(2)
Communications of the ACM 56, 60.
57
See Diakopoulos and Koliska (n 49) 822.
58
See Diakopoulos (n 56) 58–9, 61; Diakopoulos and Koliska (n 49) 810–12; L Edwards and M
Veale, ‘Slave to the Algorithm: Why A ‘Right to An Explanation’ Is Probably Not the Remedy You
Are Looking For’ (2017) 16(1) Duke Law & Technology Review 18, 39; A Tutt, ‘An FDA for
Algorithms’ (2017) 69(1) Administrative Law Review 83, 110–11.
59
The Royal Society, ‘Machine Learning: The Power and Promise of Computers That Learn by
Example’ (2017) 93–4; See Tutt (n 58) 101–4.
60
See Ananny and Crawford (n 9) 975–7; E Ramirez, Chairwoman, Federal Trade Commission,
‘Privacy Challenges in the Era of Big Data: A View from the Lifeguard’s Chair’ (Keynote Address
at the Technology Policy Institute Aspen Forum, Aspen, Colorado, 19 August 2013) 8; Kehl, Guo
and Kessler (n 53) 32–3.
61
See The Royal Society (n 59) 93.
62
AR Lange, ‘Digital Decisions: Policy Tools in Automated Decision-Making’ (Center for
Democracy & Technology, 2016) 11.
63
See Kroll et al. (n 9) 639.
64
See Kroll et al. (n 9) 638, 657–60; BW Goodman, ‘A Step Towards Accountable
Algorithms?: Algorithmic Discrimination and the European Union General Data Protection Regulation'
(29th Conference on Neural Information Processing Systems, Barcelona, Spain, 2016) 3–4.
65
At a simpler level, algorithms themselves may be modified due to a normal update/
development system.
66
See Kroll et al. (n 9) 647–52.
67
MIT Technology Review Editors, ‘Explainer: What is a Blockchain?’ (MIT Technology
Review, 23 April 2018) <https://www.technologyreview.com/s/610833/explainer-what-is-a-
blockchain/>.
68
M Burgess, ‘Holding AI to Account: Will Algorithms Ever Be Free from Bias if They’re
Created by Humans?’ (The Wired, 11 January 2016) <https://www.wired.co.uk/article/creating-
transparent-ai-algorithms-machine-learning>.
69
R Neisse, G Steri and I Nai-Fovino, ‘A Blockchain-based Approach for Data Accountability
and Provenance Tracking’ (12th International Conference on Availability, Reliability and Security,
Reggio Calabria, Italy, August 2017) 1.
70
J Burrell, ‘How the Machine ‘Thinks’: Understanding Opacity in Machine Learning
Algorithms' (2016) 3(1) Big Data & Society 3–4.
71
See Tutt (n 58) 117–18.
72
Some authors argue that transparency is not simply full informational disclosure, and that
the framing of transparency against secrecy is a false dichotomy. See Ananny and Crawford (n 9)
979; Diakopoulos (n 56) 58–9.
73
Science & Technology Committee, Oral Evidence: Algorithms in Decision-Making (HC
2017–2019, 351), 12 December 2017 Evidence Session, Q112-113 <http://data.parliament.uk/
writtenevidence/committeeevidence.svc/evidencedocument/science-and-technology-committee/
algorithms-in-decisionmaking/oral/75798.pdf>.
74
See Kroll et al. (n 9) 639, 658.
75
ibid, 639.
76
See Diakopoulos (n 56) 57, 60; M Wilson, ‘Algorithms (and the) Everyday’ (2017) 20(1)
Information, Communication & Society 137, 141, 143–44, 147.
77
See Burrell (n 70) 4.
78
Science & Technology Committee (n 73) Q207.
79
See Institute of Electrical and Electronics Engineers Global Initiative on Ethics of
Autonomous and Intelligent Systems, ‘Ethically Aligned Design: A Vision for Prioritizing
Human Well-being with Autonomous and Intelligent Systems' (Version 2, 2017); M Ananny,
‘Towards an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness’ (2016)
41(1) Science, Technology & Human Values 93, 94–6; L Jaume-Palasí and M Spielkamp,
‘Ethics and Algorithmic Processes for Decision Making and Decision Support’
(AlgorithmWatch, Working Paper No. 2, 2017) 9–13; Mittelstadt et al. (n 50) 10–12.
80
See discussion Part IIB.
81
C Cath et al., ‘Artificial Intelligence and the ‘Good Society’: The US, EU, and UK Approach’
(2018) 24(2) Science & Engineering Ethics 505, 508.
82
See Kroll et al. (n 9) 637–8, 662–72, 678–9.
83
The utility of this approach has recently been noted by others. See Amnesty International &
Access Now, ‘The Toronto Declaration: Protecting the right to equality and non-discrimination in
machine learning systems’ (16 May 2018) <https://www.accessnow.org/cms/assets/uploads/2018/
05/Toronto-Declaration-D0V2.pdf>. The Toronto Declaration applies the human rights framework
with a focus on the right to equality and non-discrimination. This article proposes a human rights-
based framework based on the full range of substantive and procedural rights. See discussion Part
IIIA.
84
See eg Facebook, 'Community Standards' <https://www.facebook.com/
communitystandards/>.
85
In her oral evidence to the House of Commons Science and Technology Committee inquiry on
algorithms in decision-making, Sandra Wachter suggested that a more refined harm taxonomy is
required to respond to the ethical and real-world problems that may be difficult to predict at the
outset and ‘new harms and new kinds of discrimination’ arising from inferential analytics. See
Science & Technology Committee, Oral Evidence: Algorithms in Decision-Making (HC 351,
2017–2019) 14 November 2017 Evidence Session, Q55 <http://data.parliament.uk/writtenevidence/
committeeevidence.svc/evidencedocument/science-and-technology-committee/algorithms-in-
decisionmaking/oral/73859.pdf>. This paper agrees that a robust understanding of harm is
necessary, but argues that it is provided by existing definitions in IHRL.
86
C Dwork et al., ‘Fairness through Awareness’ (3rd Innovations in Theoretical Computer
Science Conference, Cambridge, MA, January 2012) 215.
87
R Binns, ‘Fairness in Machine Learning: Lessons from Political Philosophy’ (Conference on
Fairness, Accountability and Transparency, New York, 2018) 3–5.
88
See eg UN Human Rights Committee, 'General Comment No. 18 (Non-discrimination)'
(10 November 1989) para 6.
89
J Angwin et al. (n 46); F Raso et al. (n 16) 21–4; R Caplan et al., ‘Algorithmic Accountability:
A Primer, Tech Algorithm Briefing: How Algorithms Perpetuate Racial Bias and Inequality’ (Data
& Society, 18 April 2018) <https://datasociety.net/wp-content/uploads/2018/04/
Data_Society_Algorithmic_Accountability_Primer_FINAL-4.pdf>.
90
See further Part IVA.
91
It is noted that, even where international law does not impose direct obligations on businesses,
States’ obligations to protect against human rights violations require that they take measures at the
domestic level to ensure that individuals’ human rights are not violated by businesses. States may be
held accountable for failure to take appropriate measures in this regard. See Ruggie Principles (n 14)
Principle 1.
92
See UN Human Rights Committee, General Comment No. 31 (n 13) paras 3–8; UN
Committee on Economic, Social and Cultural Rights, General Comment No. 3 (n 13) paras 2–8.
93
See eg Canadian Institute for Advanced Research, ‘CIFAR Pan-Canadian Artificial
Intelligence Strategy’ <https://www.cifar.ca/ai/pan-canadian-artificial-intelligence-strategy>; E
Macron, ‘Artificial Intelligence: “Making France a Leader”’ (AI for Humanity Conference,
Collège de France, 30 March 2018) <https://www.gouvernement.fr/en/artificial-intelligence-
making-france-a-leader>; NITI Aayog (National Institution for Transforming India), 'National
Strategy for Artificial Intelligence #AIFORALL' (June 2018) <http://niti.gov.in/writereaddata/
files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf>; Japan Strategic
Council for AI Technology, ‘Artificial Intelligence Technology Strategy’ (New Energy and
Industrial Technology Development Organization, 31 March 2017) <http://www.nedo.go.jp/
content/100865202.pdf>; AI Singapore, <https://www.aisingapore.org/>; UK Department for
Digital, Culture, Media & Sport, ‘Policy Paper: AI Sector Deal’ (26 April 2018) <https://www.
gov.uk/government/publications/artificial-intelligence-sector-deal/ai-sector-deal>; US White
House, ‘Artificial Intelligence for the American People’ (10 May 2018) <https://www.
whitehouse.gov/briefings-statements/artificial-intelligence-american-people/>.
94
See Ruggie Principles (n 14) Principle 1.
95
UN Human Rights Council, ‘Report of the Office of the UN High Commissioner for Human
Rights on the Role of Prevention in the Promotion and Protection of Human Rights' (16 July 2015)
UN Doc A/HRC/30/20, para 9.
96
ibid, para 7.
97
See eg UN Committee on Economic, Social and Cultural Rights, ‘General Comment 14 The
right to the highest attainable standard of health (article 12 of the International Covenant on
Economic, Social and Cultural Rights)’ (11 August 2000) UN Doc E/C.12/2000/4, para 33.
98
Ruggie Principles (n 14) Principle 12.
99
See eg K Crawford and J Schultz, ‘Big Data and Due Process: Toward a Framework to
Redress Predictive Privacy Harms’ (2014) 55(1) Boston College Law Review 93, 95; Kroll et al.
(n 9) 678.
100
For instance, indirect discrimination may only become visible during the deployment phase.
101
See OHCHR (n 95) para 31.
102
See eg Microsoft, ‘Satya Nadella Email to Employees: Embracing Our Future: Intelligent
Cloud and Intelligent Edge’ (Microsoft, 29 March 2018) <https://news.microsoft.com/2018/03/
29/satya-nadella-email-to-employees-embracing-our-future-intelligent-cloud-and-intelligent-edge/
>; S Pichai, ‘AI at Google: Our Principles’ (Google, 7 June 2018) <https://www.blog.google/
technology/ai/ai-principles/>; DeepMind, ‘DeepMind Ethics & Society’ <https://deepmind.com/
applied/deepmind-ethics-society/>.
103
A Hern, ‘DeepMind Announces Ethics Group to Focus on Problems of AI’ (The Guardian, 4
October 2017) <https://www.theguardian.com/technology/2017/oct/04/google-deepmind-ai-
artificial-intelligence-ethics-group-problems>; J Temperton, ‘DeepMind’s New AI Ethics Unit Is
The Company’s Next Big Move’ (The Wired, 4 October 2017) <https://www.wired.co.uk/article/
deepmind-ethics-and-society-artificial-intelligence>; T Simonite, ‘Tech Firms Move To Put
Ethical Guard Rails Around AI’ (The Wired, 16 May 2018) <https://www.wired.com/story/tech-
firms-move-to-put-ethical-guard-rails-around-ai/>.
104
See Zakharov v Russia, App No 47143/06 (ECtHR, 4 December 2015) para 233.
105
See Parts IVA and IVB.
106
See, by way of analogy to security oversight, Council of Europe Commissioner for Human
Rights, ‘Issue Paper: Democratic and Effective Oversight of National Security Services’ (Council of
Europe, 2015) 47.
107
UK Department for Digital, Culture, Media & Sport, ‘Centre for Data Ethics and Innovation:
Consultation’ (June 2018) <https://assets.publishing.service.gov.uk/government/uploads/system/
uploads/attachment_data/file/715760/CDEI_consultation__1_.pdf> 10.
108
UK Department for Digital, Culture, Media & Sport, ‘Centre for Data Ethics and Innovation:
Government Response to Consultation’ (November 2018) <https://assets.publishing.service.gov.
uk/government/uploads/system/uploads/attachment_data/file/757509/
Centre_for_Data_Ethics_and_Innovation_-_Government_Response_to_Consultation.pdf> 5.
109
ibid, 12.
110
Government of Canada, Treasury Board of Canada Secretariat, ‘Responsible Artificial
Intelligence in the Government of Canada: Digital Disruption White Paper Series’ (Version 2.0, 10
April 2018) <https://docs.google.com/document/d/1Sn-qBZUXEUG4dVk909eSg5qvfbpNlRhzIef
WPtBwbxY/edit> 32–3.
111
See, for instance, the Investigatory Powers Commissioner’s Office established to oversee the
UK Investigatory Powers Act 2016.
112
See C Miller, J Ohrvik-Scott and R Coldicutt, ‘Regulating for Responsible Technology:
Capacity, Evidence and Redress’ (Doteveryone, October 2018).
113
See Kitchin (n 1) 17; M Hardt, E Price and N Srebro, ‘Equality of Opportunity in Supervised
Learning' (30th Conference on Neural Information Processing Systems, Barcelona, Spain, 2016) 2.
114
L McGregor, ‘Activating the Third Pillar of the UNGPs on the Right to an Effective Remedy’
(EJIL: Talk!, 23 November 2018) <https://www.ejiltalk.org/activating-the-third-pillar-of-the-
ungps-on-access-to-an-effective-remedy/>.
115
M Oswald et al., ‘Algorithmic Risk Assessment Policing Models: Lessons From the Durham
HART Model and ‘Experimental’ Proportionality’ (2018) 27(2) Information & Communications
Technology Law 223, 225.
116
See, for example, London Ventures <https://www.londoncouncils.gov.uk/our-key-themes/
london-ventures>; Data Justice Lab, ‘Digital Technologies and the Welfare System, Written
Submission to the UN Special Rapporteur on Extreme Poverty and Human rights Consultation
on the UK’ (14 September 2018) <https://www.ohchr.org/Documents/Issues/Epoverty/
UnitedKingdom/2018/Academics/DataJusticeLabCardiffUniversity.pdf> 2–3.
117
This example is provided for illustrative purposes. It is not intended to be an absolute guide to
responsibility: this is something that must be evaluated on a case-by-case basis.
IV. THE EFFECT OF APPLYING THE IHRL FRAMEWORK TO THE USE OF ALGORITHMS IN
DECISION-MAKING
This part analyses how the application of the IHRL framework may affect
decisions regarding the development and deployment of algorithms. To
reiterate, the application of the IHRL framework is intended to ensure that
the potential inherent in technology can be realized, while at the same time
ensuring that technological developments serve society. As such, the
increasing centrality of algorithms in public and private life should be
underpinned by a framework that attends to human rights. This is not
intended to be anti-technology or anti-innovation; rather, it is directed at
human-rights-compliant, socially beneficial innovation.
A. Are There Red Lines That Prohibit the Use of Algorithms in Certain
Instances?
Most of the debates on algorithmic accountability proceed from the assumption
that the use of algorithms in decision-making is permissible. However, as noted
at the outset of this article, a number of situations arise wherein the use of
algorithms in decision-making may be prohibited. The IHRL framework
assists in determining what those situations might be, and whether a
prohibition on the use of algorithms in a particular decision-making context
is absolute, or temporary, ie until certain deficiencies are remedied. This
question should first be addressed during the conception phase, before actual
design and development is undertaken, but should also be revisited as the
algorithm develops through the design and testing phase and into
deployment. In this regard, we envisage a number of scenarios in which the
use of algorithms in decision-making would be contrary to IHRL.
118
Ruggie Principles (n 14) Principles 11, 12.
119
Y Wang and M Kosinski, 'Deep Neural Networks Are More Accurate Than Humans at
Detecting Sexual Orientation from Facial Images’ (2018) 114(2) Journal of Personality & Social
Psychology 246, 254.
120
H Murphy, 'Why Stanford Researchers Tried to Create a "Gaydar" Machine' (New York
Times, 9 October 2017) <https://www.nytimes.com/2017/10/09/science/stanford-sexual-
orientation-study.html?_r=0>.
121
S Levin, ‘LGBT Groups Denounce ‘Dangerous’ AI that Uses Your Face to Guess Sexuality’
(The Guardian, 9 September 2017) <https://www.theguardian.com/world/2017/sep/08/ai-gay-
gaydar-algorithm-facial-recognition-criticism-stanford>; D Anderson, ‘GLAAD and HRC Call
on Stanford University & Responsible Media to Debunk Dangerous & Flawed Report Claiming
to Identify LGBTQ People through Facial Recognition Technology’ (GLAAD Blog, 8
September 2017) <https://www.glaad.org/blog/glaad-and-hrc-call-stanford-university-responsible-
media-debunk-dangerous-flawed-report>.
122
See Anderson (n 121).
123
The Yogyakarta Principles: Principles on the Application of International Human Rights Law
in Relation to Sexual Orientation and Gender Identity (March 2007) Principle 3.
124
See eg UN Human Rights Committee, ‘General Comment No. 34, Article 19: Freedoms of
Opinion and Expression’ (12 September 2011) UN Doc CCPR/C/GC/34, paras 21–22, 24–30,
33–35; UN Human Rights Council, ‘Report of the UN High Commissioner for Human Rights on
The Right to Privacy in the Digital Age’ (n 16) para 10; Zakharov v Russia (n 104) para 230;
Khan v The United Kingdom App No 35394/97 (ECtHR, 12 May 2000) para 26; Kroon and
Others v The Netherlands App No 18535/91 (ECtHR, 27 October 1994) para 31.
125
The role of human agency is also an important consideration, noting that individuals may
change their behaviour in unexpected—and unpredictable—ways.
126
See discussion Part IIA.
127
UN Human Rights Committee, ‘General Comment No. 35, Article 9: Liberty and Security of
Person’ (16 December 2014) UN Doc CCPR/C/GC/35, para 10.
128
ibid, paras 12, 15, 18, 22, 25, 36.
129
As discussed in Part IIA.
130
This appears to be the conclusion reached by the Wisconsin Supreme Court in Wisconsin v
Eric L. Loomis, discussed further in Part IVB.
131
UN Human Rights Council, ‘Report of the Special Rapporteur on the Promotion and
Protection of the Right to Freedom of Opinion and Expression on Freedom of Expression, States
and the Private Sector in the Digital Age’ (11 May 2016) UN Doc A/HRC/32/38, paras 35–37.
132
S Cope, JC York and J Gillula, ‘Industry Efforts to Censor Pro-Terrorism Online Content Pose
Risks to Free Speech’ (Electronic Frontier Foundation, 12 July 2017) <https://www.eff.org/
deeplinks/2017/07/industry-efforts-censor-pro-terrorism-online-content-pose-risks-free-speech>.
133
See Part IIA.
134
See Kroll et al. (n 9) 680 fn 136; DK Citron, ‘Technological Due Process’ (2008) 85(6)
WashLRev 1249, 1283–4.
135
State of Wisconsin v Eric L. Loomis (n 42) para 7.
136
ibid, para 15.
137
ibid, para 6.
138
ibid, para 51.
139
ibid, para 34.
140
ibid, para 88.
141
See Science & Technology Committee (n 73) Q132 (Martin Wattenberg, Google, evidence
arguing for a ‘very high level of scrutiny’ to the use of algorithms in the criminal justice system).
142
State of Wisconsin v Eric L. Loomis (n 42) para 3.
143
ibid, para 40.
144
See eg O'Neil (n 32) (critiquing the assumption that big data algorithms are objective and fair).
145
See State v Loomis (n 42) para 54.
146
ibid, paras 53, 55–6.
147
ibid, para 66.
148
ibid.
149
See Finogenov & Others v Russia App Nos 18299/03 and 27311/03 (ECtHR, 4 June 2012)
para 270, where the Court stated that 'the materials and conclusions of the investigation should be
sufficiently accessible’.
150
This is contrary to the minimum standards of thoroughness and effectiveness for
investigations, which demand adequate and rigorous analysis of all relevant elements by competent
relevant professionals. See ibid, para 271; UN Human Rights Committee, General Comment No. 31
(n 13) para 15.
151
See Finogenov & Others v Russia (n 149) para 271 and Husayn (Abu Zubaydah) v Poland App
No 7511/13 (ECtHR, 24 July 2014) para 480; in both cases the Court stated that 'a requirement of a
"thorough investigation" means that the authorities must always make a serious attempt to find out
what happened and should not rely on hasty or ill-founded conclusions to close their investigation or
as the basis of their decisions'; Paul & Audrey Edwards v UK App No 46477/99 (ECtHR, 14 March
2002) para 71; Mapiripán Massacre v Colombia, Judgment, Inter-American Court of Human Rights
Series C No 134 (15 September 2005) para 224.
152
See Rainie and Anderson (n 1) 55.
153
See Mittelstadt et al. (n 50) 11–12.
V. CONCLUSION
154
See L Floridi and JW Sanders, ‘On the Morality of Artificial Agents’ (2004) 14(3) Minds &
Machines 349, 351; GD Crnkovic and B Çürüklü, ‘Robots: Ethical by Design’ (2012) 14(1) Ethics
& Information Technology 61, 62–3; A Matthias, ‘The Responsibility Gap: Ascribing
Responsibility for the Actions of Learning Automata’ (2004) 6(3) Ethics & Information
Technology 175, 177.
155
For example, in some industries where the use of automated technology is more developed,
such as the use of autopilot programmes in aviation, errors in operation can still give rise to the
responsibility of the pilot and/or the liability of the company.
156
See, for instance, a joint statement issued by the UN Human Rights Committee and the UN
Committee on Economic, Social and Cultural Rights, addressing 50 years of the Covenants. Joint
Statement by the UN Human Rights Committee and the UN Committee on Economic, Social and
Cultural Rights, ‘The International Covenants on Human Rights: 50 Years On’ (17 November 2016)
UN Doc CCPR/C/2016/1-E/C.12/2016/3.