
Vancouver Affirms.

Contention One – Dehumanization

Biometrics trap Black and brown bodies in a cycle of incarceration and commodification that culminates in the body becoming a source of living labour.

Dongus 19
Beyond simply identifying criminals, biometrics are a lucrative feature within the Prison Industrial Complex. Incarcerated people have often been forced to submit biometric data at times when such practices would not have been accepted outside of prison. The prison has thus been a key laboratory for experimentation.29 This situation has escalated drastically over the last decades, as Jackie Wang describes in her book Carceral Capitalism. Wang analyses how mass incarceration is monetised in the US and how a neoliberal, profit-oriented logic drives the management of the prison system, creating a predominantly Black and Hispanic surplus population. Within prisons, these subjects “generate value or are folded into the economy as debtors”.30 This highlights how certain groups that are rendered as surplus populations become the centre of forms of financial extraction by biometric states and global capital.

Racist regimes are exacerbated by the logic of machines in surveillance capitalism.

Ibrahim 21
[In] the postcolonial ontology, new screen mirrors reiterate the ‘White mirror’ in which his face is rendered through pre-made profiles of White faciality and mistaken identities. For Fanon, the Black man is alienated because he is thrown into a world not of his own making; he is also construed as one incapable of making his own meaning (Oliver 2004: 24), and in the postdigital world his fragmented identity is one which is also preconceived and administrated before he has arrived. If the virtualisation of life and the movement of capital in lifeworlds produced a sense that space and race would disappear to appropriate new forms of agency, these dreams of emancipation from the shackling of identity and subjectivity to modes of coloniality in the offline world dissipate. With hyperdigital world private–public partnerships, collusion with immigration, border control and police will re-birth race with renewed brutality and colonial logic in which race will be intimately profiled and imposed through ‘racist’ machines taught to extend and expand the desires of power and its intimate tryst with capital. Race and its abstraction through machines will enforce racist regimes with automaticity such that its integration into the social will re-populate racialised violence, intensifying inequalities and social disparities through the infrastructure of machines and ambient intelligence. These machines will not only be taught to retain White supremacy, power ideologies to re-racialise populations but in the quantum era of computing exceed the human in its ability to reassemble racism in its modes of operation.

Historically, the US has used war as an excuse to forcefully extract and monetize
biometric information, consequently creating one of the most comprehensive
databases ever
Dongus 19
As Wang notes, the production of a surplus population for extraction is a carceral logic reminiscent of “biometric capitalism”. It produces risky subjects by accumulating data from people who are immobilised as a result of poverty or war. When the allied forces left Iraq in 2011, they left a country in turmoil. Between 2003 and 2011, war and occupation were used as a carceral state of exception to collect fingerprints, iris scans, and DNA from suspected insurgents and civilians. As a result, the Pentagon amassed one of the world’s most comprehensive databases of biometric information ever collected during wartime.

The impact is dehumanization.

Data extraction is the FIRST STEP in dehumanization.

Dongus 19
In Iraq and Afghanistan, forensic technologies were to a large extent implemented alongside biometrics. In 2012, the Defense Forensics and Biometrics Agency (DFBA) was established, expanding the territory of the battlefield. Biometric control still applies the idea
of network-centric warfare, a post-Cold War strategy developed in the 1990s. This method uses control and command information technology to achieve battlefield dominance in real-time. Biometric systems centred around the control of movement in and beyond
the battlefield employ this same idea. That is to say, these systems work to achieve identity dominance through networked information technology in real-time.
The DFBA website boasts a “Hit of the Month”, featuring individuals that have been captured and added to a “Biometric Enabled Watchlist”. The faces are arranged in a tableau similar to Francis Galton’s anthropological experiments. They are schematised, arranged, tabulated, and indexed; everything in sight (and out of sight) is recorded, just as Edward Said suggested. This is the contemporary form of a screen-mediated projection. It creates a “convicted” enemy; a criminal who deviates from the norm. Furthermore, it dehumanises the human face, enabling a person to be treated like an object in a forensic court of material things. The imagined Other must therefore remain an objectified threat – a terrorist – who holds a systemic function. Without an outside enemy, the ideology of liberal authoritarianism that enables mass surveillance and incarceration could not be sustained. Data extraction is a prerequisite; the old (colonial) strategy is used to devalue and dehumanise subjects in order to turn them into a quantity that can be exploited.

Dehumanization outweighs all calculable impacts: if we dehumanize someone, every atrocity is justified.

Berube 97
David M. (a former USC faculty member and currently a professor of communication at North Carolina State University in Raleigh, North Carolina. His doctorate is from New York University, and he has studied and taught communication and cognitive psychology and created the term SEIN (Social and Ethical Implications of Nanotechnology) in his book NanoHype. “NANOTECHNOLOGICAL PROLONGEVITY: The Down Side,” http://www.cas.sc.edu/engl/faculty/berube/prolong.htm)//conway

This means-ends dispute is at the core of Montagu and Matson's treatise on the dehumanization of humanity. They warn: "its destructive toll is already greater than that of any war, plague, famine, or natural calamity on record -- and its potential danger to the quality of life and the fabric of civilized society is beyond calculation. For that reason this sickness of the soul might well be called the Fifth Horseman of the Apocalypse.... Behind the genocide of the holocaust lay a dehumanized thought; beneath the menticide of deviants and dissidents ... in the cuckoo's nest of America, lies a dehumanized image of man..." (Montagu & Matson, 1983, p. xi-xii). While it may never be possible to quantify the impact dehumanizing ethics may have had on humanity, it is safe to conclude the foundations of humanness offer great opportunities which would be foregone. When we calculate the actual losses and the virtual benefits, we approach a nearly inestimable value greater than any tools which we can currently use to measure it. Dehumanization is nuclear war, environmental apocalypse, and international genocide. When people become things, they become dispensable. When people are dispensable, any and every atrocity can be justified. Once justified, they seem to be inevitable, for every epoch has evil and dehumanization is evil's most powerful weapon.
Contention Two – Hunter Robots
CRS in 22 defines Lethal autonomous weapon systems (LAWS) [as] a special class of
weapon systems that use sensor suites to independently identify a target and employ an
onboard weapon system to engage and destroy the target without manual human control
of the system.
Lethal autonomous weapon systems (LAWS) are a special class of weapon systems that use sensor suites and computer algorithms to independently identify a target and employ an onboard weapon system to
engage and destroy the target without manual human control of the system.

Facial recognition technology is critical for lethal autonomous weapons.

Kelsey Piper, senior writer at Vox, 6-21-19, “Death by algorithm: the age of killer robots is closer than you think,” Vox //OC
In the past few years, the state of AI has grown by leaps and bounds. Facial
recognition has gotten vastly more accurate, as has object recognition, two skills that would
likely be essential for lethal autonomous weapons.
New techniques have enabled AI systems to do things that would have been impossible just a few years ago, from writing stories to creating fake faces to, most relevantly to LAWS, making instantaneous tactical decisions in online war games. That means that lethal autonomous weapons have rapidly gone from impossible to straightforward — and they’ve gotten there before we’ve developed any sort of consensus on whether they are acceptable to develop or use.

Unfortunately, THERE IS RACIAL BIAS IN BIOMETRIC TECHNOLOGY ALGORITHMS.

Houwing in 2022
Before I get into this point I want to make a remark: biases are very problematic, but too often they are presented as the main problem with invasive surveillance technology, like facial recognition. I want to emphasize that in my perspective, these technologies would also be problematic if they did not contain biases. With that out of the way, what are we talking about when we talk about bias in technology? One of the problems with these systems is that they are built with data. Data that is subjective and established in colonial and patriarchal structures.26 While to many people automated decision making has a hint of objectivity compared with human-made decisions, it is the other way around. The bias is in the data that is used to train the algorithms and in the decisions made by humans in the design of the algorithms. The bias might be hidden by the promise of neutrality that goes out from the technology, but in fact, it has the power to unjustly discriminate at a much larger scale than biased individuals. 27 This means that the categories, profiles, algorithms etc. all entail, reproduce and exacerbate this bias. When we use these tools to surveil people, which effectively means to sort people, the systemic inequality is reinforced, to say the least. We could also say it in simpler words: Biometric surveillance technologies like facial
recognition are racist and sexist. Some examples: Since corona made us all work and study from home, it hurts me to see the messages of frustrated black students and students of color who are having a hard time dealing with proctoring software failing to deliver a fair and valuable contribution to a functioning education system in this new situation in several ways. One of the most painful ones is that it shuts them out from entering their online exams because the facial recognition gives an error of “poor lighting”.28 One student, who is also a software researcher, looked into Proctorio, one of the most used proctoring software packages. His research has shown that the software uses a facial detection model that is failing to detect black faces more than 50% of the time. 29 This racism also plays a part in algorithms used to prioritise images and decide who gets a digital stage and who doesn’t, based on the characteristics in the picture. People with Twitter might be familiar with this experiment on the cropping algorithm of the social media platform.30 The main goal of the algorithm is to make people click on links and let them stay on the platform. Therefore it prioritises the parts of content that it expects you to prefer, with the result that it gives the stage to the people already on stage. Then there is great research from Joy Buolamwini called Gender Shades31, where she shows how much better European and American facial recognition systems are at recognising white males compared to the recognition of women of color, and thus how the chances of being falsely flagged are unequally spread over society, along the lines of privilege.
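The disparity the card describes, a detector that misses Black faces more than half the time while rarely missing white ones, is typically demonstrated through a per-group error-rate audit of the kind Gender Shades popularised. Below is a minimal, hypothetical sketch in Python of how such a failure-to-detect rate is computed per demographic group; the data, group labels, and numbers are invented for illustration, and this is not Buolamwini's or Proctorio's actual methodology or code.

# Illustrative sketch only: per-group error rates for a face detector,
# in the spirit of the audits described above. All data is synthetic.
from collections import defaultdict

# Each record: (demographic group, face actually present, detector found a face)
synthetic_results = [
    ("darker-skinned women", True, False),
    ("darker-skinned women", True, True),
    ("darker-skinned women", True, False),
    ("lighter-skinned men", True, True),
    ("lighter-skinned men", True, True),
    ("lighter-skinned men", True, False),
]

def failure_to_detect_rate(results):
    """Share of images with a face present that the detector missed, per group."""
    missed, total = defaultdict(int), defaultdict(int)
    for group, face_present, detected in results:
        if face_present:
            total[group] += 1
            if not detected:
                missed[group] += 1
    return {group: missed[group] / total[group] for group in total}

for group, rate in failure_to_detect_rate(synthetic_results).items():
    print(f"{group}: {rate:.0%} of faces missed")

The point of such an audit is not the absolute numbers but the gap between groups: an overall accuracy figure can look acceptable while the error burden falls almost entirely on one population.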

Lethal autonomous weapons can be used for genocide and ethnic cleansing.
Piper in 2019
For one thing, if LAWS development continues, eventually the weapons might be extremely inexpensive. Already today, drones can be purchased or
built by hobbyists fairly cheaply, and prices are likely to keep falling as the technology improves. And if the US used drones on the battlefield, many of
them would no doubt be captured or scavenged. “If you create a cheap, easily proliferated weapon of mass destruction, it will be used against Western
countries,” Russell told me.
Lethal autonomous weapons also seem like they’d be disproportionately useful for
ethnic cleansing and genocide; “drones that can be programmed to target a
certain kind of person,” Ariel Conn, communications director at the Future of Life Institute, told me, are one of the most straightforward
applications of the technology.

We must ban facial recognition data collection, a racist profiling technology of mass surveillance.
Amnesty International 21
Amnesty International today launches a global campaign to ban the use of facial
recognition systems, a form of mass surveillance that amplifies racist policing and
threatens the right to protest.
The Ban the Scan campaign kicks off with New York City and will then expand to focus on the use of facial recognition in other parts of the world in
2021. Facial recognition systems are a form of mass surveillance that violate the right to privacy and threaten the rights to freedom of
peaceful assembly and expression.

The technology exacerbates systemic racism as it could disproportionately impact people of colour, who are already subject to discrimination and violations of their human rights by law enforcement officials. Black people are also most at risk of being misidentified by facial recognition systems.
“Facial recognition risks being weaponized by law enforcement against marginalized communities around the world. From New Delhi to New
York, this invasive technology turns our identities against us and undermines human rights,” said Matt Mahmoudi, AI and Human Rights
Researcher at Amnesty International.
“New Yorkers should be able to go about their daily lives without being tracked by facial recognition. Other major cities across the US have already
banned facial recognition, and New York must do the same.”
In New York, Amnesty has joined forces with AI for the People, the Surveillance Technologies Oversight Project, the Immigrant Defence Project, the
New York Civil Liberties Union, the New York City Public Advocate’s office, The Privacy NY Coalition, State Senator Brad Hoylman and Rada Studios
to campaign for legislation to ban the use of facial recognition technology for mass surveillance by law enforcement in the city.
“Police use of facial recognition technology places innocent New Yorkers on a perpetual line up and violates our privacy rights. Facial recognition is
ubiquitous, unregulated and should be banned,” said Mutale Nkonde, Founder and CEO of AI For the People.
Albert Fox Cahn, Surveillance Technology Oversight Project Executive Director at the Urban Justice Centre, said: “Facial recognition is biased, broken,
and antithetical to democracy. For years, the NYPD has used facial recognition to track tens of thousands of New Yorkers, putting New Yorkers of
colour at risk of false arrest and police violence. Banning facial recognition won’t just protect civil rights: it’s a matter of life and death.”

US DATA COMPANIES ARE CRITICAL TO PROVIDING GLOBAL BIOMETRIC DATA FOR FUTURE WEAPONS

Hare 22
Who owns your face? You might think that you do, but consider that Clearview AI, an American company that sells facial recognition technology, has amassed a database of ten billion images since 2020. By the end of the year, it plans to have scraped 100 billion facial images from the internet. It is difficult to assess the company’s claims, but if we take Clearview AI at face value, it has enough data to identify almost everyone on earth and end privacy and anonymity everywhere. As you read these words, your face is making money for people whom you’ve never met and who never sought your consent when they took your faceprint from your social media profiles and online photo albums. Today, Clearview AI’s technology is used by over 3,100 U.S. law enforcement agencies, as well as the U.S. Postal Service. In Ukraine, it is being used as a weapon of war. The company has offered its tools free of charge to the Ukrainian government, which is using them to identify dead and living Russian soldiers and then contact their mothers.
It would be easy to shrug this off. After all, we voluntarily surrendered our privacy the moment we began sharing photos online, and millions of us continue to use websites and apps that fail to protect our data, despite warnings from privacy campaigners and
Western security services. As so many of us sympathize with Ukraine and are appalled by Russia’s brutality, it is tempting to overlook the fact that Ukraine is not using Clearview AI to identify dead Ukrainians, which suggests that we are witnessing the
use of facial recognition technology for psychological warfare, not identification. Some people will be fine with the implications of this: if Russian mothers have to receive disturbing photos of their dead sons, so be it.
To understand why we might want to rethink the use of facial recognition technology in conflict, consider the following thought experiments. First, imagine that it was Russia that had scraped Ukrainian biometric data from the internet to build a facial recognition
technology tool which it was using to identify dead Ukrainians and contact their mothers. Liberal democracies would likely condemn these actions and add them to its growing list of Russia’s barbaric actions. Second, imagine a conflict in which the United States was
fighting against an opponent who had taken American faceprints to train its facial recognition technology and was using it to identify dead American soldiers and contact their mothers. This would almost certainly cause howls of protest across the United States.
Technology executives would be vilified in the press and hauled before Congress, where lawmakers might finally pass a law to protect Americans’ biometric data.

We can stop companies like Clearview.

Colaner 20
Surveillance capitalism is presented as inevitable to discourage individuals from
daring to push back. It is an especially easy illusion to pull off as COVID-19 continues to spread around the globe. People are reaching for
immediate solutions, even if that means acquiescing to a new and possibly longer-lasting problem in the future.
When it comes to biometric data collection and surveillance, there’s often a lack of clarity around what is ethical, safe, and legal — and what laws and regulations are still
needed. The AI Now report methodically lays out all of those challenges, explains why they’re important, and advocates for solutions. It then provides shape and substance
through eight case studies that examine biometric surveillance in schools, police use of facial recognition technologies in the U.S. and U.K., national efforts to centralize
biometric information in Australia and India, and more.
All citizens — not just politicians, entrepreneurs, and technologists — need to acquire a working understanding of the issues around biometrics, AI technologies,
and surveillance. Amid a rapidly changing landscape, the report could serve as a reference for understanding the novel questions that continue to arise. It would be an injustice
to summarize the whole 111-page document in a few hundred words, but it touches on several broad themes.
Laws and regulations pertaining to data, rights, and surveillance are lagging behind the development and implementation of various AI technologies that monetize biometrics or adapt them for government tracking. This is why companies like Clearview AI are thriving — what they do is offensive to many and may be unethical, but it is — with some exceptions — still legal.
The very definition of biometric data remains unsettled, and some experts want to pause the implementation of these systems while we
create new laws and reform or update others. Others seek to ban the systems entirely on the grounds that some things are perpetually
dangerous, even with guardrails.
To effectively regulate the technology, average citizens, private companies, and governments need to fully understand data-powered
systems that involve biometrics and their inherent tradeoffs. The report suggests that “any infringement of privacy or data-protection rights be
necessary and strike the appropriate balance between the means used and the intended objective.” Such proportionality also means ensuring a “right
to privacy is balanced against a competing right or public interest.”
