
Artificial Intelligence and the Law in Canada

Florian Martin-Bariteau & Teresa Scassa (LexisNexis, 2021)

Chapter 10
AI and Technology-Facilitated Violence and Abuse
Jane Bailey, Jacquelyn Burkell, Suzie Dunn, Chandell Gosse and Valerie Steeves

Overview
Artificial intelligence (AI) is being used—and is in some cases specifically designed—to cause harms against
members of equality-seeking communities. These harms, which we term “equality harms,” have individual and
collective effects, and emanate from both “direct” and “structural” violence. Discussions about the role of AI
in technology-facilitated violence and abuse (TFVA) sometimes do not include equality harms specifically.
When they do, they frequently focus on individual equality harms caused by “direct” violence (e.g. the use of
deepfakes to create non-consensual pornography to harass or degrade individual women). Often little attention
is paid to the collective equality harms that flow from structural violence, including those that arise from
corporate actions motivated by the drive to profit from data flows (e.g. algorithmic profiling). Addressing
TFVA in a comprehensive way means considering equality harms arising from both individual and corporate
behaviours. This will require going beyond criminal law reforms to punish “bad” individual actors, since
responses focused on individual wrongdoers fail to address the social impact of the structural violence that
flows from some commercial uses of AI. Although, in many cases, the harms occasioned by these (ab)uses of AI are the very sorts of harms that law is, or has been, used to address, existing Canadian law is not currently well placed to meaningfully address equality harms.

Key Challenges and Issues


The key challenges and issues in this area are as follows:

• Technology, such as AI, facilitates violence, re-entrenching social inequalities that undermine the
rights of members of equality-seeking communities to self-determination, self-representation and
dignity.
• TFVA is most commonly associated with direct violence and individual bad actors, obfuscating the
equality harms presented by structural forms of TFVA. Meaningful responses to TFVA require
reconceptualizing it to include both direct and structural violence, encompassing individual and
corporate (ab)uses of AI.
• Existing laws do not adequately capture non-consensual deepfakes, nor are they framed to address the
tightly interwoven equality and privacy harms that are a central feature of discriminatory algorithmic
profiling by corporations.

RECOMMENDED CITATION
Jane Bailey, Jacquelyn Burkell, Suzie Dunn, Chandell Gosse & Valerie Steeves, “AI and Technology-
Facilitated Violence and Abuse” in Florian Martin-Bariteau & Teresa Scassa, eds., Artificial Intelligence and
the Law in Canada (Toronto: LexisNexis Canada, 2021), ch. 10.

Discover the full collection at http://aisociety.ca/ailawbook


Synopsis
Introduction
1. AI in Action
1.1. Non-Consensual Sexual Deepfakes
1.1.1. What Are Deepfakes?
1.1.2. The Harm of Non-Consensual Sexual Deepfakes
1.2. Algorithmic Profiling
1.2.1. What is Algorithmic Profiling?
1.2.2. The Harms of Algorithmic Profiling
2. Risks and Opportunities
3. Key Gaps in the Law
3.1. Non-consensual Sexual Deepfakes and the Law
3.1.1. Criminal Law Responses
3.1.2. Tort Law Responses
3.2. Algorithmic Discrimination and the Law
3.2.1. Human Rights Legislation
3.2.2. Privacy Legislation


Introduction
Many forms of artificial intelligence (AI) leverage data about individuals, with potentially damaging or discriminatory effects. We characterize these effects as “equality harms,” and note that these harms have both individual and collective impact, and that the violence underlying them takes both “direct” and “structural” forms.1 The former is the more familiar form of violence in which an individual’s actions result in harm to another person;2 in the latter, by contrast, there is no individual who acts, but the violence is “built into the structure”3 and manifests as an uneven distribution of power, resources, and opportunity. Common definitions of violence focus on direct and often physical effects, ignoring the “full violent potential of structures, artifacts, institutions, and cultures, and ideologies,”4 and this same bias is reproduced in many discussions of technology-facilitated violence and abuse (TFVA). Thus, technology-facilitated direct violence is widely recognized,5 but recognition of technology-facilitated structural violence is relatively nascent.6

Canadian law is presently not well placed to deal with the range of harms caused by direct and
structural TFVA for at least three reasons. First, its focus on individual wrongdoers and individual victims
shields corporations from accountability for the structural violence that arises from some uses of AI. Second,
this focus ignores the collective harms that arise from both direct and structural violence, including the
reinforcement of systemic inequalities caused by individual abuse.7 Third, in the case of algorithmic profiling,
the implicated issues of privacy and equality, though both protected under human rights instruments, are treated separately. This siloed approach has led to privacy legislation that focuses on data protection, designed to legitimate the corporate collection and use of information that drives AI, often at the cost of more comprehensive protections that could address the social consequences of the technology.

1. For other discussions of technology-mediated structural violence, see Niall Winters, Rebecca Enyon, Anne Geniets, James Robson & Ken Kahn, “Can We Avoid Digital Structural Violence in Future Learning Systems” (2020) 45:1 Learning, Media and Technology 17; Mimi Onuoha, “Notes on Algorithmic Violence” (22 February 2018), online: GitHub https://github.com/MimiOnuoha/On-Algorithmic-Violence; Sara Safransky, “Geographies of Algorithmic Violence: Redlining the Smart City” (2019) 44:2 Int. J. Urban Reg. Res. 200.
2. Suzie Dunn, “Is it Actually Violence? Framing Technology-Facilitated Abuse as Violence” in Jane Bailey, Asher Flynn & Nicola Henry, eds., Emerald International Handbook on Technology-facilitated Violence and Abuse (London, U.K.: Emerald Publishing, 2021) [forthcoming].
3. Johan Galtung, “Violence, Peace, and Peace Research” (1969) 6:3 J. Peace Res. 170 at 171. (The authors acknowledge the value of Galtung’s scholarship on structural violence, but also note that he has been criticized for making anti-Semitic remarks.)
4. Lorenzo Magnani & Sun Yat-sen, “Structural and Technology-Mediated Violence: Profiling and the Urgent Need of New Tutelary Technoknowledge” (2011) 2:4 Int. J. Technoethics 1 at para. 5.
5. Nicola Henry & Anastasia Powell, “Beyond the ‘Sext’: Technology-Facilitated Sexual Violence and Harassment against Adult Women” (2015) 48:1 Austl. & N.Z. J. Crim. 104; Heather Douglas, Bridget A. Harris & Molly Dragiewicz, “Technology-Facilitated Domestic and Family Violence: Women’s Experiences” (2019) 59:3 Brit. J. Crim. 551.
6. Niall Winters, Rebecca Enyon, Anne Geniets, James Robson & Ken Kahn, “Can We Avoid Digital Structural Violence in Future Learning Systems” (2020) 45:1 Learn. Media Technol. 17; Mimi Onuoha, “Notes on Algorithmic Violence” (22 February 2018), online: GitHub https://github.com/MimiOnuoha/On-Algorithmic-Violence; Sara Safransky, “Geographies of Algorithmic Violence: Redlining the Smart City” (24 November 2019) 44:2 Int. J. Urban Reg. Res. 200. For an analysis of systemic algorithmic discrimination framed outside of the concept of “violence,” see Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York: N.Y.U. Press, 2018).
7. Further, as Yasmin Jiwani has noted, notwithstanding the fact that direct and structural violence are linked, the former is often focused on while the latter is “erased, trivialized, or contained within categories that evacuate the violation of [this form of] violence”: Yasmin Jiwani, Discourses of Denial: Mediations of Race, Gender and Violence (Vancouver: UBC Press, 2006) at xi-xii.


This chapter explores the factors at play in the legal response to TFVA through two specific examples:
deepfakes as an example of direct TFVA, and discriminatory algorithmic profiling as an example of
technology-facilitated corporate structural violence. Defining the role of AI in each example and examining
legal remedies currently in place, the chapter sheds light on the systemic ways that AI can lead to TFVA
through direct and structural violence, and suggests new measures that could begin to fill gaps in the law.

1. AI in Action

1.1. Non-Consensual Sexual Deepfakes

1.1.1. What Are Deepfakes?

Employing biometric data collected from images of a person’s face, deepfake technology uses AI to swap one
person’s face in a video with another’s.8 The result is a realistic but fake video where someone appears to be
featured in a video they were never actually filmed in.9 Deepfakes have been used for a range of purposes,10
the most common of which is to create non-consensual sexual videos of women.11 Although the practice of
substituting women’s faces into pornographic images is not new,12 open source deepfake technology gives individuals with some programming skills and access to a collection of the target’s images the ability to create realistic sexual deepfake videos on their personal computers.13 This technology entered the mainstream after
the release of fake pornographic videos featuring female celebrities on Reddit.14 It was well known that early
versions of these sexual deepfakes were not real videos and the women had not agreed to be featured in them.
Since then, deepfakes have been used as a form of image abuse against a much wider population, sometimes
with the intention to mask the inauthenticity of the video.15
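
For readers who want a concrete sense of the technique, the sketch below (in Python, using the PyTorch library) illustrates the shared-encoder, dual-decoder autoencoder design commonly described in accounts of open source deepfake tools. It is a minimal, illustrative sketch only: the names are our own, it is not the code of any particular application, and a working system would add face detection and alignment, larger networks, adversarial or perceptual losses, and blending of the generated face back into each video frame.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses an aligned 64x64 face crop into a low-dimensional code
    capturing pose and expression."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(), # 16x16 -> 8x8
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face image from the shared latent code."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),  # 8x8 -> 16x16
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16x16 -> 32x32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32x32 -> 64x64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 256, 8, 8))

# One shared encoder, one decoder per identity: decoder_a learns to reconstruct
# person A's face, decoder_b learns person B's, both from the same latent space.
encoder, decoder_a, decoder_b = Encoder(), Decoder(), Decoder()
loss_fn = nn.L1Loss()

def reconstruction_loss(faces_a, faces_b):
    """faces_a / faces_b: batches of aligned 64x64 crops of persons A and B.
    In a real training loop, an optimizer step on this loss would follow."""
    recon_a = decoder_a(encoder(faces_a))
    recon_b = decoder_b(encoder(faces_b))
    return loss_fn(recon_a, faces_a) + loss_fn(recon_b, faces_b)

def swap_face(frame_crop_of_b):
    """The 'swap': encode a frame of person B, decode with A's decoder, so the
    output shows a face resembling person A with B's pose and expression."""
    with torch.no_grad():
        return decoder_a(encoder(frame_crop_of_b))
```

The point relevant to this chapter is how little is required: once a collection of images of the target has been gathered and a model trained on them, producing the “swap” for each video frame is a single forward pass through that model.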

8. Elizabeth Caldera, “‘Reject the Evidence of Your Eyes and Ears’: Deepfakes and the Law of Virtual Replicants” (2019) 50:1 Seton Hall L. Rev. 177 at 179.
9. Britt Paris & Joan Donovan, Deepfakes and Cheap Fakes: The Manipulation of Audio Visual Evidence (2019) at 35, online: Data & Society https://datasociety.net/wp-content/uploads/2019/09/DS_Deepfakes_Cheap_FakesFinal-1-1.pdf.
10. Danielle Keats Citron & Robert Chesney, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security” (2019) 107 Cal. L. Rev. 1753.
11. Henry Ajder, Giorgio Patrini, Francesco Cavalli & Laurence Cullen, “The State of Deepfakes: Landscape, Threats, and Impact” (September 2019) at 7, online: Tracer Newsletter https://regmedia.co.uk/2019/10/08/deepfake_report.pdf.
12. Jacquelyn Burkell & Chandell Gosse, “Nothing New Here: Emphasizing the Social and Cultural Context of Deepfakes” (2019) 24:12 First Monday at para. 2.
13. Marie-Helen Maras & Alex Alexandrou, “Determining Authenticity of Video Evidence in the Age of Artificial Intelligence and in the Wake of Deepfake Videos” (2018) 23:3 Intl. J. Evidence & Proof 255.
14. Samantha Cole, “AI-Assisted Fake Porn is Here and We’re all Fucked” (11 December 2017), online: Vice Magazine https://www.vice.com/en_us/article/gydydm/gal-gadot-fake-ai-porn.
15. Nicola Henry, Asher Flynn & Anastasia Powell, “Image-Based Sexual Abuse” in Walter S. DeKeseredy & Molly Dragiewicz, eds., Routledge Handbook of Critical Criminology, 2nd ed. (New York: Routledge, 2018) 305 at 305.


1.1.2. The Harm of Non-Consensual Sexual Deepfakes

Non-consensual sexual deepfakes almost exclusively target women, with a high percentage featuring South
Korean women.16 As the main targets of these videos, women are disproportionately subject to the associated
harms. These vivid and realistic videos undermine the sexual autonomy of the women featured: they lose control over visual expressions of their sexuality, and representations of their bodies become puppets for the creator’s sexual, malicious, or other purposes.17 Although the naked body in a deepfake does not belong to the woman whose face is depicted, the illusion has very real effects on her. When a woman’s
sexual images are manipulated or decontextualized without her consent, she can suffer fear, anxiety,
depression, humiliation, lack of agency, and a profound loss of privacy. Whether the images are authentic or
created by AI, her sexual representations are being used for purposes beyond her control and without her
consent.18

Targets of deepfakes risk reputational harm and additional harassment by others. The videos are
designed to be convincing and have been used in harassment campaigns against women advocating for equality
rights.19 These images are used to perpetuate gendered and racialized stereotypes about women, reinforce
men’s sexual entitlement to women’s bodies, and shame and degrade women for being featured in sexually
explicit content.

1.2. Algorithmic Profiling

1.2.1. What is Algorithmic Profiling?

Algorithmic profiling is the practice of assembling detailed user profiles based on the collection and integration
of data about individuals, then applying analytic techniques including machine learning to identify patterns in
order to assign individuals to specific groups.20 Assembled data is often integrated from multiple sources and
can include: information volunteered by the individual (e.g., when registering for websites), information about
the individual posted by others (e.g., photographs on social media sites), and information covertly or
incidentally collected (e.g., browsing history).21 Corporations, governments, and other entities use algorithmic

16. Henry Ajder, Giorgio Patrini, Francesco Cavalli & Laurence Cullen, The State of Deepfakes: Landscape, Threats, and Impact (September 2019) at 8, online: Tracer Newsletter https://regmedia.co.uk/2019/10/08/deepfake_report.pdf.
17. Danielle Keats Citron, “Sexual Privacy” (2019) 128:7 Yale L.J. 1870.
18. Suzie Dunn, “Identity Manipulation: Responding to Advances in Artificial Intelligence and Robotics” (Paper delivered at the We Robot, 2020), online: We Robot 2020 https://techlaw.uottawa.ca/werobot/papers; Kristen Thomasen & Suzie Dunn, “Reasonable Expectations of Privacy in an Era of Drones and Deepfakes: Expanding the Supreme Court of Canada’s Decision in R v Jarvis” in Jane Bailey, Asher Flynn & Nicola Henry, eds., Emerald International Handbook on Technology-facilitated Violence and Abuse (London, U.K.: Emerald Publishing, 2021) [forthcoming]; Samantha Bates, “Revenge Porn and Mental Health: A Qualitative Analysis of the Mental Health Effects of Revenge Porn on Female Survivors” (2016) 12:1 Feminist Criminology 22; Alexa Dodge, “Digitizing Rape Culture: Online Sexual Violence and the Power of the Digital Photograph” (2015) 12:1 Crime Media Culture 65; Clare McGlynn & Erika Rackley, “Image-Based Sexual Abuse” (2017) 37:3 Oxford J. Leg. Stud. 534.
19. See e.g. Danielle Keats Citron, “Sexual Privacy” (2019) 128:7 Yale L.J. 1870 at 1922-1924 (harassment of Rana Ayyub).
20. Monique Mann & Tobias Matzner, “Challenging Algorithmic Profiling: The Limits of Data Protection and Anti-Discrimination in Responding to Emergent Discrimination” (2019) 6:2 Big Data & Society 1.
21. Jacquelyn Burkell, “Remembering Me: Big Data, Individual Identity, and the Psychological Necessity of Forgetting” (2016) 18:1 Ethics Inf. Technol. 17.


profiling for targeted advertising, personalized search results, mortgage default predictions,22 and tax fraud
detection.23 In many cases, algorithms assist with or have taken over “sorting” practices historically left to
human judgment, such as judicial bail decisions in some states in the United States (US).24

As Oscar Gandy notes, profiling manipulates multiple pieces of data about individuals in order to sort
them into categories that act to include or exclude them for specific purposes (e.g. for targeted
advertisements).25 This practice existed before AI, but has escalated with the increasing availability of collected
data and sophistication of machine-learning technology to exploit this data. Moreover, AI-based classification
and sorting can operate over data collected in real time, enabling new practices such as the delivery of
advertisements based on current online activities.

Using algorithms to make “sorting” decisions has significant advantages; it systematizes the process
and can take much more information into account when making these decisions. At the same time, it is not
without problems, primary among which is the issue of bias. Bias has always existed in these types of
decisions—the difference is that algorithms, by virtue of their ubiquity, have the potential to “bake in” and
extend that bias in non-transparent ways.
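
As a stylized illustration of this “sorting,” the following Python sketch (using the scikit-learn library, with invented feature names and data) clusters individuals into behavioural segments on the basis of assembled profile data and then selects one segment for a targeted offer. It is a toy example under our own assumptions, not a depiction of any actual commercial system, but it shows how inclusion and exclusion can follow purely from statistical similarity.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical profile features assembled from multiple sources:
# [age, weekly_hours_online, luxury_purchases, payday_loan_searches]
# Rows are individuals; all values are invented for illustration.
profiles = np.array([
    [23, 40, 0, 5],
    [35, 10, 4, 0],
    [29, 35, 0, 7],
    [52, 8, 6, 0],
    [41, 12, 5, 1],
    [19, 45, 0, 9],
])

# "Sort" individuals into segments based purely on statistical similarity.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
segments = kmeans.fit_predict(profiles)

# A marketer might then include or exclude whole segments for an offer,
# e.g. targeting high-interest credit ads at the segment whose members
# search most often for payday loans (column index 3).
target_segment = int(np.argmax(
    [profiles[segments == s, 3].mean() for s in range(2)]
))
targeted_individuals = np.where(segments == target_segment)[0]
print("Individuals targeted for the offer:", targeted_individuals)
```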

1.2.2. The Harms of Algorithmic Profiling

Algorithmic bias results in both representational and allocative harms.26 Representational harms can both
create and re-entrench existing generalizations about groups of people based on socially constructed categories,
and “diminish the dignity of, and marginalize, individuals who are understood to occupy the categories.” 27
Examples include the infamous classification of Black faces as “gorillas” by Google’s image classification
system28 and the under-representation of women in Google searches for CEOs.29 Allocative harms result in
differential allocation of outcomes to specific groups. The COMPAS algorithm, for example, under some conditions assigns higher recidivism risk ratings to Black defendants than to white defendants;30 screening tools

22. Jeff Levin, “Three Ways AI Will Impact the Lending Industry” (30 October 2019), online: Forbes https://www.forbes.com/sites/forbesrealestatecouncil/2019/10/30/three-ways-ai-will-impact-the-lending-industry/.
23. Richard Rubin, “AI Comes to the Tax Code” (26 February 2020), online: Wall Street Journal https://www.wsj.com/articles/ai-comes-to-the-tax-code-11582713000.
24. Peter Suciu, “AI in the Courts: The Jury is Out” (20 February 2020), online: Tech News World https://www.technewsworld.com/story/86521.html.
25. Oscar H. Gandy Jr., The Panoptic Sort: A Political Economy of Personal Information (Colorado: Westview Press, 1993) at 1-2.
26. Neural Information Processing Systems, “Keynote: Kate Crawford, The Trouble with Bias” (12 December 2017), online: Facebook https://www.facebook.com/watch/live/?v=1553500344741199. See also Nancy Fraser, “Rethinking Recognition” (2000) 3:3 New Left Rev. 107; Rebecca Cook & Simone Cusack, Gender Stereotyping: Transnational Legal Perspectives (Philadelphia: University of Pennsylvania Press, 2010); Jacquelyn Burkell & Jane Bailey, “Unlawful Distinctions? Canadian Human Rights Law and Algorithmic Bias” (2018) 2 Can. Y.B. Human Rights 217 at 219.
27. Jacquelyn Burkell & Jane Bailey, “Unlawful Distinctions? Canadian Human Rights Law and Algorithmic Bias” (2018) 2 Can. Y.B. Human Rights 217 at 220.
28. James Vincent, “Google ‘Fixed’ its Racist Algorithm by Removing Gorillas from its Image-Labeling Tech” (12 January 2018), online: The Verge https://www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai.
29. Andrew Keshner, “Working Women are Underrepresented in the C-Suite—and in Google Images” (18 December 2018), online: MarketWatch https://www.marketwatch.com/story/working-women-are-underrepresented-in-the-c-suite-and-in-google-images-2018-12-18.
30. Julia Angwin, Jeff Larson, Surya Mattu & Lauren Kirchner, “Machine Bias” (23 May 2016), online: ProPublica https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.


to identify children at risk for maltreatment can magnify existing biases against racialized parents or those
living in poverty.31

Allocative harms, which are comparatively easy to characterize and address, result in reduced
access to opportunities or benefits for specific groups (e.g. job application screening that systematically makes
it harder for women or racialized minorities to be offered positions). Representational harms, by contrast, affect
how members of specific groups see themselves and are seen by others. These harms are realized through the
“entrenchment […] of negative stereotypes”32 and can easily lead to allocative harms that should be addressed
directly. At the same time, the more subtle harms of representational biases, which can influence self-
perception and exacerbate pre-existing stereotypes, should also be addressed.

2. Risks and Opportunities


Technology policy debates in Canada tend to assume that new technologies, including AI, will lead to
beneficial commercial and other opportunities. Consequences such as privacy and equality harms are
positioned as “risks” to be managed. This approach privileges technical development over a more thoughtful
examination of how AI reshapes society and the human experience.

Prioritizing economic opportunities over equality issues is not a new phenomenon. In the mid-1990s,
new information technologies were identified as a panacea for job creation and economic opportunity. It was
presumed that social benefits would follow so long as the government could work with the private sector to
create universal access. Privacy protection was positioned as a precondition to economic growth that would
create consumer trust in the technology. Legislators focused on data protection rules because it was believed
that these would legitimize the collection and use of information by corporations and governments alike.33
Issues like hate speech and children’s exposure to pornography were identified but set aside for further study:
the focus was to create the conditions that would allow the private sector to take advantage of the opportunities
to innovate.34

Even as privacy, autonomy, and equality issues crystallized, innovation discourse tended to override
calls for legislative responses that would address harms suffered by marginalized communities.35 Although the
Office of the Privacy Commissioner of Canada issued reports requiring corporations to adjust their practices
(often by providing more information about how they use customers’ information), corporations have largely
been left to their own devices. For example, as concerns about identity-based harassment online grew, social

31. Beryl Lipton, “Faced with Spikes in Child Abuse Reports, One Pennsylvania County Turns to Algorithms for Triaging Safety” (10 July 2019), online: Muckrock https://www.muckrock.com/news/archives/2019/jul/10/algorithms-family-screening-Pennsylvania/.
32. Jacquelyn Burkell & Jane Bailey, “Unlawful Distinctions? Canadian Human Rights Law and Algorithmic Bias” (2018) 2 Can. Y.B. Human Rights 217 at 220.
33. Valerie Steeves, “Now You See Me: Privacy, Technology and Autonomy in the Digital Age” in Gordon DiGiacomo, ed., Human Rights: Current Issues and Controversies (Toronto: University of Toronto Press, 2016) 461 at 468.
34. See e.g. Information Highway Advisory Council, Connection, Community, Content: The Challenge of the Information Highway (Ottawa: Industry Canada, 1995); Information Highway Advisory Council, Building the Information Society: Moving Canada into the 21st Century (Ottawa: Industry Canada, 1996).
35. See e.g. House of Commons, Standing Committee on Human Rights and the Status of Persons with Disabilities, Privacy: Where Do We Draw the Line? Report of the Standing Committee on Human Rights and the Status of Persons with Disabilities (April 1997) (Chair: Sheila Finestone), online: Office of the Privacy Commissioner of Canada https://www.priv.gc.ca/media/1957/02_06_03d_e.pdf; Bill S-21, An Act to guarantee the human right to privacy, 1st Sess., 37th Parl., 2000 (first reading 13 March 2001).


media companies actively lobbied against regulation and instead introduced new features as a solution. These features have included algorithms that collect additional information in order to protect users, making equality remedies contingent upon a further loss of privacy. When these features fail to protect people,
platforms blame problems on the algorithm.36 From this perspective, both allocative and representational harms
exist outside the platform, not because of it, and effective remedies remain elusive.

As discussed in the next section, due to the focus on individual bad actors in (ab)uses of AI, remedies
are naturally channelled into criminal law that is primarily aimed at deterrence rather than addressing systemic
problems. From this perspective, the risk of harm is borne by an individual victim because of the actions of an
individual transgressor, obfuscating the collective dimensions of the problem. Although these criminal law
remedies are an important part of the puzzle, they fail to address the systemic problems that occur when
algorithms use personal data in ways that reproduce inequality and discrimination.

Jurisdictions with stronger commitments to comprehensive human rights legislation, like the European
Union (EU), have had better success at designing privacy and equality remedies37 because the language of
human rights breaks out of simplistic binaries between risks and opportunities. By adopting the language of human rights, legislators would be able to move beyond individualistic responses and conduct a more comprehensive analysis of the harms that occur when respect for privacy and equality is not built into technical
infrastructures. It would also enable legislators to move beyond simplistic technical solutions that put privacy
and equality at odds, and take into account the rich interaction between the two.

3. Key Gaps in the Law


Deepfakes and algorithmic profiling illustrate that, given the collective nature of equality harms, AI-enabled TFVA cannot simply be understood as individual bad behaviour. Such an understanding obfuscates
insidious corporate AI practices and limits our ability to effectively respond to the wide spectrum of behaviours
and practices that TFVA comprises. Meaningfully addressing TFVA requires reconceptualizing these harms
to include both individual and corporate (ab)uses of AI, as well as developing legal responses nuanced enough
to engage with its complexity. As initial steps in this direction, criminal and civil laws that prohibit the non-
consensual distribution of intimate images could be expanded or introduced to include digitally altered or
created images. In addition, human rights legislation could be amended to bar the use of the defence of
statistical correlation in cases that involve algorithmic discrimination, and we could move toward development
of agencies structured to treat privacy and equality as human rights.

However, to more fully address the relationship between AI and TFVA, legislators and courts should
move away from simplistic solutions that focus merely on individual “bad apples” and devise remedies that
fully protect both privacy and equality as human rights.

36. See e.g. Oscar Schwartz, “In 2016, Microsoft’s Racist Chatbot Revealed the Dangers of Online Conversation” (25 November 2019), online: IEEE Spectrum https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/in-2016-microsofts-racist-chatbot-revealed-the-dangers-of-online-conversation.
37. See e.g. E.U., Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), [2016] O.J., L. 119/1.


3.1. Non-consensual Sexual Deepfakes and the Law


Non-consensual sexual deepfakes do not fit comfortably in most areas of Canadian law, and targeted
individuals currently have to rely on a smattering of criminal and civil remedies to redress the harms they
experience.38 The following section points out gaps in the law and recommends options for filling those gaps.
These suggestions build on existing laws that address other forms of prohibited image creation and sharing,
but it should be noted that, because of their individualistic focus, neither criminal nor civil law solutions would adequately address the systemic issues that arise from the equality harms associated with these types of deepfakes, even if these gaps were filled.

3.1.1. Criminal Law Responses

While several Criminal Code39 provisions could apply to sexual deepfakes, their application is at best
uncertain. For example, section 403 prohibits personation, which includes pretending to be a person or using
the person’s identity information to gain an advantage or cause a disadvantage to the personated person. Even
if a court concluded that making a video of a person’s face constitutes use of their identity information, a
perpetrator would only be convicted if the Crown could prove fraudulent intent to gain an advantage or to cause disadvantage, which may prove difficult in many cases where the creators of deepfakes claim that their intention was simply sexual entertainment. Similarly, criminal harassment under section 264 would only
apply if a court held that deepfakes constitute “threatening conduct directed at the [target]” and cause the target
“reasonably, in all the circumstances, to fear for their safety.”40 Accordingly, the application of these provisions
may be limited by definitional uncertainty and the high evidentiary burden of proving intent and the reasonable
apprehension of risk to safety.

Deepfakes could also potentially be captured under existing child pornography and obscenity
provisions. For example, section 163.1 makes it an offence to make, distribute, or access representations of
children engaged in sex acts or showing their sexual organs for a sexual purpose, which can include images
that are created through artistic or technological means. The provision could capture non-consensual sexual
deepfakes if they appeared to represent children under the age of 18, although it is unclear how this provision
would be interpreted if the body in the image was clearly that of an adult. Similarly, section 163 criminalizes
anyone who “makes, prints, publishes, distributes, circulates or has in their possession for the purpose of
publication, distribution or circulation any obscene written matter, picture, model, phonograph record or any
other obscene thing.”41 A court could find that the content of a deepfake was obscene, but the threshold for
this has historically been set very high, and obscenity laws have rarely been applied in Canada.

The Criminal Code provision that most closely fits with non-consensual sexual deepfakes is section
162.1, which prohibits the non-consensual distribution of intimate images.42 Both the non-consensual
distribution of intimate images and deepfakes are forms of TFVA that damage the target’s sexual autonomy,
dignity, and privacy interests.43 Both also involve heavily gendered harms—with women being the primary

38. Suzie Dunn & Alessia Petricone-Westwood, “More Than ‘Revenge Porn’: Civil Remedies for the Non-Consensual Distribution of Intimate Images” (delivered at the 38th Annual Civil Litigation Conference, Montebello, Q.C., 2018), online: CanLII http://www.canlii.org/t/sqtc.
39. R.S.C. 1985, c. C-46.
40. Criminal Code, R.S.C. 1985, c. C-46, s. 264.
41. Criminal Code, R.S.C. 1985, c. C-46, s. 163(1).
42. Suzie Dunn, “Identity Manipulation: Responding to Advances in Artificial Intelligence and Robotics” (paper delivered at the We Robot, 2020), online: We Robot 2020 https://techlaw.uottawa.ca/werobot/papers.
43. Danielle Keats Citron, “Sexual Privacy” (2019) 128:7 Yale L.J. 1870.


targets—and thus raise important questions of how these behaviours detract from women’s and girls’
equality.44 However, section 162.1 is limited to the non-consensual publication and distribution of images
where the person in the image is “nude, is exposing his or her genital organs or anal region or her breasts or is
engaged in explicit sexual activity.”45 As such, it is likely that only the person whose body is featured could
be protected by this section. The person whose face has been superimposed onto the body in the video would
likely not be protected, as their actual genitals or sexual activities are not featured. Additionally, as many
deepfakes are created using publicly available pornography videos as the base video,46 it is unlikely that the
Crown could prove, as required under the section, that the person whose body is represented in the deepfake
had a reasonable expectation of privacy when the images were recorded or distributed.47

This gap could be repaired by amending section 162.1 to include images that have been falsely created
or altered, as has been done in Virginia48 and in some states in Australia.49 This reform would be consistent
with the child pornography provisions in the Canadian Criminal Code which have been applied to prohibit
sexualized images of children that are not images of real children, such as drawings, digitally-altered images,
and manipulated videos.50 Further, the courts would need to adopt a contextual and equality-focused approach to interpreting when a person has a reasonable expectation of privacy in sexual images of them.51

3.1.2. Tort Law Responses

Targets of non-consensual sexual deepfakes could pursue a wide variety of civil remedies,52 but the harms of
deepfakes fit most comfortably in the realm of privacy. Several provinces have legislation that allows for a
civil action to be brought if a person’s intimate images have been shared without consent.53 Like the criminal
provision mentioned above, these could be amended to include altered or digitally created images. In provinces
where there is no specific legislation, like Ontario, this type of legislation could be introduced.

44. Henry Ajder, Giorgio Patrini, Francesco Cavalli & Laurence Cullen, The State of Deepfakes: Landscape, Threats, and Impact (September 2019) at 8, online: Tracer Newsletter https://regmedia.co.uk/2019/10/08/deepfake_report.pdf; Clare McGlynn, Erika Rackley & Ruth Houghton, “Beyond ‘Revenge Porn’: The Continuum of Image-Based Sexual Abuse” (2017) 25:1 Fem. Leg. Stud. 25.
45. Criminal Code, R.S.C. 1985, c. C-46, s. 162.1(2)(a).
46. Douglas Harris, “Deepfakes: False Pornography is Here and the Law Cannot Protect You” (2019) 17:1 Duke L. & Tech. Rev. 99.
47. Kristen Thomasen & Suzie Dunn, “Reasonable Expectations of Privacy in an Era of Drones and Deepfakes: Expanding the Supreme Court of Canada’s Decision in R v Jarvis” in Jane Bailey, Asher Flynn & Nicola Henry, eds., Emerald International Handbook on Technology-facilitated Violence and Abuse (London, U.K.: Emerald Publishing, 2021) [forthcoming].
48. U.S., Code of Virginia, § 18.2-386.2 (2019).
49. See e.g. Austl., Crimes Amendment (Intimate Images) Act 2017 (NSW), 2017/29.
50. See e.g. R. v. Rhode, 2019 SKCA 17.
51. Kristen Thomasen & Suzie Dunn, “Reasonable Expectations of Privacy in an Era of Drones and Deepfakes: Expanding the Supreme Court of Canada’s Decision in R v Jarvis” in Jane Bailey, Asher Flynn & Nicola Henry, eds., Emerald International Handbook on Technology-facilitated Violence and Abuse (London, U.K.: Emerald Publishing, 2021) [forthcoming].
52. Other civil remedies that could apply include defamation, breach of confidence, appropriation of personality, breach of fiduciary duty, extortion or intimidation, harassment, intentional infliction of mental suffering, intrusion upon seclusion, and copyright.
53. Protecting Victims of Non-Consensual Distribution of Intimate Images Act, S.A. 2017, c. P-26.9; Intimate Image Protection Act, C.C.S.M. 2015, c. 187, s. 11; Intimate Images and Cyber-Protection Act, S.N.S. 2017, c. 7; Intimate Images Protection Act, R.S.N.L. 2018, c. I-22; The Privacy Amendment Act, S.S. 2018, c. 28.


Targets of deepfakes in Ontario may find a remedy under the new tort of false light introduced in
Yenovkian v. Gulian.54 In that case, the court awarded $100,000 in damages for a combination of public
disclosure of embarrassing private facts55 and publicity in false light when a man published content online that
misrepresented his children and ex-wife. The tort of false light protects people when content that would be
highly offensive to a reasonable person is published about them and that content places them in a false light.
As Justice Kristjanson stated, unlike defamation, “[t]he wrong is in publicly representing someone, not as
worse than they are, but as other than they are. The value at stake is respect for a person’s privacy right to
control the way they present themselves to the world.”56

3.2. Algorithmic Discrimination and the Law


Similar to non-consensual sexual deepfakes, algorithmic discrimination implicates both equality and privacy.
Although the equality harms of algorithmic discrimination fall at least equally if not more appropriately within
the human rights framework in Canada, policy discussions about regulation of AI and algorithmic sorting have
focused more on Canada’s privacy framework.57 While privacy is an internationally recognized human right,
intimately connected to individual dignity and autonomy,58 Canadian human rights legislation does not
explicitly address it and Canadian privacy legislation tends to treat privacy as a marketplace matter of data
protection. This siloed approach to equality and human rights on the one hand and privacy on the other undermines
the capacity of Canadian law to address algorithmic discrimination.59 This section illustrates this point by
focusing on federal statutes, although many concerns extend to parallel provincial and territorial legislation.

3.2.1. Human Rights Legislation

Algorithmic discrimination can bring about allocative and representational harms.60 While human rights
legislation in Canada addresses both types of harm, it focuses primarily on allocative harms. Further, its coverage of allocative harms is limited to certain areas and constrained by a challenging burden of proof, compounded by the possible defence of statistical correlation.

The Canadian Human Rights Act61 (CHRA) proscribes discrimination on prohibited grounds such as
race, age, and sex in three main areas: employment, accommodation, and the public provision of goods and
services.62 Claimants need not prove there was an intention to discriminate against them on a prohibited

54. 2019 ONSC 7279.
55. The tort of the public disclosure of private facts (Jane Doe 464533 v. N.D., 2017 ONSC 127; Jane Doe 72511 v. Morgan, 2018 ONBSC 6697) only applies to the publication of true and private information. Deepfakes technically evade this stipulation by blending true and false content, and so it is unlikely a target could receive compensation under that tort.
56. Yenovkian v. Gulian, 2019 ONSC 7279 at para. 171.
57. See e.g. Office of the Privacy Commissioner of Canada, Consultation on the OPC’s Proposals for ensuring appropriate regulation of artificial intelligence (28 January 2020), online: Office of the Privacy Commissioner of Canada https://www.priv.gc.ca/en/about-the-opc/what-we-do/consultations/consultation-ai/pos_ai_202001/.
58. Valerie Steeves, “Foreword to EDPL, ‘The Future of Privacy’” (2017) 3:4 European Data Protection L. Rev. 438.
59. By systematically interfering with some people’s ability to fulfill basic human needs on the basis of race, gender, or other similar grounds, algorithmic discrimination could also be understood as a structural violation of human rights: see Kathleen Ho, “Structural Violence as a Human Rights Violation” (2007) 4:2 Essex Human Rights Rev. 4.
60. See discussions in Section 1.2.2 (“The Harms of Algorithmic Profiling”). See also Jacquelyn Burkell & Jane Bailey, “Unlawful Distinctions? Canadian Human Rights Law and Algorithmic Bias” (2018) 2 Can. Y.B. Human Rights 217 at 219.
61. Canadian Human Rights Act, R.S.C. 1985, c. H-6.
62. Canadian Human Rights Act, R.S.C. 1985, c. H-6, ss. 5-10, 12.


ground. It is enough to show that they have the protected characteristic, that they were adversely affected in
relation to one of the three main areas, and that the protected characteristic was a factor in the adverse impact.
In the context of algorithmic profiling, however, the latter can be difficult to establish because so many factors
are taken into account in the algorithms and “seemingly innocuous factors can become proxies for prohibited
grounds.”63
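
To make the proxy problem concrete, the sketch below (Python, using scikit-learn, with entirely synthetic data and invented variable names) trains a simple approval model that is never given the protected characteristic, yet still produces divergent outcomes between groups because an apparently innocuous feature, here a neighbourhood indicator, is statistically correlated with that characteristic. It illustrates the mechanism only and does not model any real system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic illustration: "group" is a protected characteristic that is NOT
# given to the model, but it correlates with neighbourhood (a proxy feature).
group = rng.integers(0, 2, n)                                      # 0 / 1
neighbourhood = (rng.random(n) < 0.2 + 0.6 * group).astype(int)    # correlated proxy
income = rng.normal(60 - 10 * group, 15, n)                        # historical inequality

# Historical outcomes (e.g. past loan approvals) already reflect bias.
approved = ((income + rng.normal(0, 10, n) - 15 * neighbourhood) > 40).astype(int)

# The model sees only "innocuous" features: income and neighbourhood.
X = np.column_stack([income, neighbourhood])
model = LogisticRegression(max_iter=1000).fit(X, approved)
predicted = model.predict(X)

# Even though "group" was never an input, approval rates diverge by group,
# because neighbourhood acts as a proxy for it.
for g in (0, 1):
    rate = predicted[group == g].mean()
    print(f"Predicted approval rate, group {g}: {rate:.2f}")
```

Nothing in the model’s inputs names the protected group, which is precisely why a claimant may struggle to show that a protected characteristic was a factor in the adverse impact.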

Even if a claimant satisfies this initial burden, respondents still have the opportunity to prove that their
behaviour was justified. In the employment context, this would require proof that they followed a rule made
in good faith that is rationally connected to the job’s requirements and there was no practical or reasonable
alternative to avoid negatively affecting the individual.64 While the burden of proof for such justification is
high, statistical correlation has in the past been accepted as a justification, with implications for algorithmic
profiling. In 1992, the Supreme Court of Canada in Zurich Insurance Co. v. Ontario (Human Rights
Commission)65 accepted statistical correlation as a bona fide justification for discriminating on the basis of age,
marital status, and gender by charging higher premiums to single men under age 25.66 Acceptance of this form
of justification could deter human rights claims relating to algorithmic discrimination since such profiling is
based on statistical correlations.

However, the Supreme Court of Canada’s more recent decision in Ewert v. Canada67 (Ewert) could
provide beneficial interpretive guidance on this issue. The Court found that the Correctional Service of Canada
had breached its statutory mandate to take all reasonable steps to ensure information it used about an offender
was accurate, up to date, and as complete as possible when it relied on actuarial tools to, among other things, predict the recidivism risk of Mr. Ewert, a Métis man, even though it was aware of the possibility that such tools exhibited cultural bias.68 Because the Court placed the onus on the Correctional Service of Canada, as the party seeking to justify reliance on the tools, to research whether those tools were biased in relation to Indigenous offenders,69 its reasoning signals that courts and tribunals must be alert to the biases baked into algorithms rather than blithely treating them as mathematical certainties. Ewert was not brought under human
rights legislation and did not focus specifically on algorithmic decision-making. However, its focus on the
unreasonableness and unfairness of decision-making based on predictive tools suffering from discriminatory
biases may be useful for limiting statistical correlation defences to human rights claims about algorithmic
discrimination.70

The CHRA’s capacity to address algorithmic discrimination is even more limited with respect to
representational harms. The CHRA specifically addresses representational harms relating to employment ads

63. Jacquelyn Burkell & Jane Bailey, “Unlawful Distinctions? Canadian Human Rights Law and Algorithmic Bias” (2018) 2 Can. Y.B. Human Rights 217 at 224.
64. Jacquelyn Burkell & Jane Bailey, “Unlawful Distinctions? Canadian Human Rights Law and Algorithmic Bias” (2018) 2 Can. Y.B. Human Rights 217 at 225.
65. [1992] 2 S.C.R. 321.
66. Zurich Insurance Co. v. Ontario (Human Rights Commission), [1992] 2 S.C.R. 321 at para. 24.
67. 2018 SCC 30.
68. Ewert v. Canada, 2018 SCC 30 at para. 49. See also Teresa Scassa, “Supreme Court of Canada Decision has Relevance for Addressing Bias in Algorithmic Decision-Making” (15 June 2018), online: Teresa Scassa http://teresascassa.ca/index.php?option=com_k2&view=item&id=278:supreme-court-of-canada-decision-has-relevance-for-addressing-bias-in-algorithmic-decision-making&Itemid=80.
69. Ewert v. Canada, 2018 SCC 30 at para. 67.
70. Teresa Scassa, “Supreme Court of Canada Decision has Relevance for Addressing Bias in Algorithmic Decision-Making” (15 June 2018), online: Teresa Scassa http://teresascassa.ca/index.php?option=com_k2&view=item&id=278:supreme-court-of-canada-decision-has-relevance-for-addressing-bias-in-algorithmic-decision-making&Itemid=80.


and to notices to the public. Section 8 prohibits use of employment application forms or ads expressing or
implying preferences based on prohibited grounds. Section 12 prohibits public display of any notice, sign, or
other representation expressing or implying an intention to discriminate or that incites or is likely to incite
discrimination of the sort prohibited by the CHRA. In both cases discriminatory representations can ground a
claim, even without any evidence that a job was actually allocated on the basis of a prohibited ground.
However, at least two factors could complicate advancing a successful claim relating to the kind of
stereotypical representations resulting from algorithmic bias, notwithstanding their capacity to produce
harmful effects on dignity and self-worth that go to the heart of human rights legislation.71

First, proving a connection between a stereotypical representation generated by algorithmic processes and intention or incitement to discriminate would be challenging. Absent a situation where there is direct
evidence that, for example, Google search results suggesting that Black women’s hair renders them unsuitable
for professional work72 actually led a decision maker not to hire or interview a Black woman or that Google
intended that effect, finding evidence of such a connection would be costly and difficult. This is so because
any single image is only one of many factors in any individual decision, and getting solid social science
evidence to prove the connection statistically would require multiple studies involving large numbers of
participants.73

Second, imposing restrictions on the publication of stereotypes butts up against the constitutional
protection of free expression.74 In both Canada (Human Rights Commission) v. Taylor75 and Saskatchewan (Human Rights Commission) v. Whatcott,76 the Supreme Court of Canada made it clear that legislative limits on
explicitly hateful expression can only be justified where those expressions are so extreme as to expose
members of targeted groups to intense vilification or revulsion. This line between protected and unprotected
expression may need to be redrawn. In an era where so much of our social, professional, and personal lives are
intertwined with digital communications technologies that constantly bombard us with algorithmically
generated representations, discriminatory stereotypes have an unprecedented capacity to alter our
understandings of self and other with profound implications for equality.

3.2.2. Privacy Legislation

At the federal level in Canada, the Personal Information Protection and Electronic Documents Act77 (PIPEDA)
is the privacy statute that governs the collection and use of data by private organizations. It has been criticized
for treating privacy more as an issue of marketplace regulation and data protection than as a human right,78 a

71. Jacquelyn Burkell & Jane Bailey, “Unlawful Distinctions? Canadian Human Rights Law and Algorithmic Bias” (2018) 2 Can. Y.B. Human Rights 217 at 223.
72. For further discussion of this discriminatory outcome, see Leigh Alexander, “Do Google’s ‘Unprofessional Hair’ Results Show it is Racist?” (8 April 2016), online: The Guardian https://www.theguardian.com/technology/2016/apr/08/does-google-unprofessional-hair-results-prove-algorithms-racist-. Search results originally Tweeted by Bonnie Kamona, @BonKamona (5 April 2016 at 2:04pm), online: Twitter https://twitter.com/BonKamona/status/717457819864272896.
73. Jacquelyn Burkell & Jane Bailey, “Unlawful Distinctions? Canadian Human Rights Law and Algorithmic Bias” (2018) 2 Can. Y.B. Human Rights 217 at 223.
74. Jacquelyn Burkell & Jane Bailey, “Unlawful Distinctions? Canadian Human Rights Law and Algorithmic Bias” (2018) 2 Can. Y.B. Human Rights 217 at 223.
75. [1990] 3 S.C.R. 892 at 908.
76. 2013 SCC 11 at para. 13.
77. S.C. 2000, c. 5.
78. Valerie Steeves, “Data Protection Versus Privacy: Lessons from Facebook’s Beacon” in David Matheson, ed., The Contours of Privacy (Cambridge: Cambridge Scholars Publishing, 2009) 183.


concern that is particularly relevant in relation to algorithmic discrimination. This subsection focuses on two of PIPEDA’s key principles: notice and consent.79

Notice and consent are two of the chief characteristics of PIPEDA’s data protection approach, which
aims to limit private sector collection, use, or disclosure of personal information to reasonable and appropriate
purposes. PIPEDA also requires that the individual be notified about the purpose for which the information
will be used and that they give consent. The person consenting must understand the purpose and consequences
related to the collection, use, or disclosure of their personal information.80

This sort of notice-and-consent approach has been criticized by scholars from around the world.
Criticisms fall within three general categories: (i) the notice and consent approach “fails to offer real choices”;
(ii) individuals are ill-equipped to make the choices it offers; and (iii) “it asks us to make choices that shouldn’t
be ours to make.”81 In the context of algorithmic discrimination, these criticisms become more compelling.
Even if individuals are nominally given the choice not to agree to terms of service that allow their data to be used to create profiles that distinguish them from others, the social, commercial, and financial costs of not accepting standard terms leave them with no real choice but to agree.82 Further, and in
any event, individuals certainly cannot consent to their data being used to discriminate against them (or others)
on prohibited grounds because human rights cannot be contracted out of or waived.83

Additionally, even if individuals can legally consent to the use of their data for algorithmic profiling,
they are quite ill-equipped to do so because algorithmic processes that are the product of machine learning are
not clearly understood by platform providers, let alone individual users.84 Finally, if we understand privacy as
having a collective, social value85 in that profiles based on an individual’s data can ultimately be used to
discriminate not only against that person, but others as well, it follows that decisions about the use of personal
information for algorithmic profiling should not be left to one person to make.

Grounded as it is in an individualistic notice-and-consent model, Canadian federal privacy legislation
is completely ill-equipped to address the equality harms of algorithmic profiling. Numerous other
shortcomings of the current framework also contribute to this inadequacy,86 but we believe the notice-and-
consent model to be the most foundational.

79. For a general discussion of privacy-related issues, see Ch. 5, AI and Data Protection Law.
80. Defined as information about an identifiable individual: see “Principle 3–Consent” in Personal Information Protection and Electronic Documents Act, S.C. 2000, c. 5, Sched. 1, s. 4.3.
81. Daniel Susser, “Notice After Notice-and-Consent: Why Privacy Disclosures are Valuable Even if Consent Frameworks Aren’t” (2019) 9 J. Inf. Policy 37 at 46.
82. Helen Nissenbaum, “A Contextual Approach to Privacy Online” (2011) 140:4 Daedalus 32 at 35.
83. Dickason v. University of Alberta, [1992] 2 S.C.R. 1103.
84. Jane Bailey, “Democratic Rights in a Technocratic Age: When Constitutions (in Law) Are Not Enough” in The Law Society of Upper Canada, Special Lectures 2017: Canada at 150: The Charter and the Constitution (Toronto: Irwin Law, 2018) 487 at 494.
85. See generally Priscilla M. Regan, Legislating Privacy: Technology, Social Values and Public Policy (Chapel Hill, N.C.: University of North Carolina Press, 2009).
86. For further discussion of PIPEDA’s shortcomings for purposes of protecting privacy, see Florian Martin-Bariteau, “Submission to the House of Commons’ Standing Committee on Access to Information, Privacy and Ethics, Review of the Personal Information Protection and Electronic Documents Act, S.C. 2000, c. 5” (March 2017), online: House of Commons https://www.ourcommons.ca/Content/Committee/421/ETHI/Brief/BR8852738/br-external/Martin-BariteauFlorian-e.pdf.


Further Reading
Bailey, Jane, Asher Flynn & Nicola Henry, eds., Emerald International Handbook on Technology-facilitated
Violence and Abuse (London, U.K.: Emerald Publishing, 2021) [forthcoming].

Benjamin, Ruha, Race After Technology: Abolitionist Tools for the New Jim Code (Cambridge, Mass.: Polity Press, 2019).

Levey, Tania G., Sexual Harassment Online: Shaming and Silencing Women in the Digital Age (Boulder,
Colo.: Lynne Rienner Publishers, 2018).

Noble, Safiya Umoja, Algorithms of Oppression: How Search Engines Reinforce Racism (New York: N.Y.U.
Press, 2018).

Powell, Anastasia & Nicola Henry, Sexual Violence in a Digital Age (London, U.K.: Palgrave Macmillan,
2017).

Acknowledgements
The authors thank the Social Sciences and Humanities Research Council of Canada for funding The eQuality
Project, a 7-year partnership initiative focused on young people’s experiences with privacy and equality in
networked environments, of which this research forms a part. The authors also thank Jasmine Dong for her
research assistance.
