Vda Pro Case
Dongus 19
Beyond simply identifying criminals, biometrics are a lucrative feature within the
Prison Industrial Complex. Incarcerated people have often been forced to submit
biometric data at times when such practices would not have been accepted outside of prison. The prison
has thus been a key laboratory for experimentation.29 This situation has escalated drastically over the last decades, as Jackie Wang describes in her book Carceral Capitalism. Wang analyses how mass incarceration is monetised in the US and how a neoliberal, profit-oriented logic drives the management of the prison system, creating a predominantly Black and Hispanic surplus population. Within prisons, these subjects “generate value or are folded into the economy as debtors”.30 This highlights how certain groups that are rendered as surplus populations become the centre of forms of financial extraction by biometric states and global capital.
…racist regimes with automaticity such that its integration into the social will re-populate racialised violence, intensifying inequalities and social disparities through the infrastructure of machines and ambient intelligence. These machines will not only be taught to retain White supremacy, power ideologies to re-racialise populations but in the quantum era of computing exceed the human in its ability…
Historically, the US has used war as an excuse to forcefully extract and monetize biometric information, consequently creating one of the most comprehensive databases ever.
Dongus 19
As Wang notes, the production of a surplus population for extraction is a carceral logic reminiscent of “biometric capitalism”. It produces risky subjects by accumulating data from people who are immobilised as a result of poverty or war. When the allied forces left Iraq in 2011, they left a country in turmoil. Between 2003 and 2011, … arranged, tabulated, and indexed; everything in sight (and out of sight) is recorded, just as Edward Said suggested. This is the contemporary form of a screen-mediated projection. It creates a “convicted” enemy; a criminal who deviates from the norm. Furthermore, it dehumanises the human face, enabling a person to be treated like an object in a forensic court of material things. The imagined Other must therefore remain an objectified threat – a
terrorist – who holds a systemic function. Without an outside enemy, the ideology of
liberal authoritarianism that enables mass surveillance and incarceration could not be
sustained. Data extraction is a prerequisite; the old (colonial) strategy is used to
devalue and dehumanise subjects in order to turn them into a quantity that can be
exploited.
Berube 97
David M. (former USC faculty member and currently a professor of communication at North Carolina State University in Raleigh, North Carolina; doctorate from New York University; has studied and taught communication and cognitive psychology and coined the term SEIN (Social and Ethical Implications of Nanotechnology) in his book NanoHype. “NANOTECHNOLOGICAL PROLONGEVITY: The Down Side,” http://www.cas.sc.edu/engl/faculty/berube/prolong.htm)//conway
This means-ends dispute is at the core of Montagu and Matson's treatise on the dehumanization of humanity. They warn: "its destructive toll is already greater than that of any war, plague, famine, or natural calamity on record -- and its potential danger to the quality of life and the fabric of civilized society is beyond calculation. For that reason this sickness of the soul might well be called the Fifth Horseman of the Apocalypse.... Behind the genocide of the holocaust lay a dehumanized thought; beneath the menticide of deviants and dissidents ... in the cuckoo's nest of America, lies a dehumanized image of man... (Montagu & Matson, 1983, p. xi-xii). While
it may never be possible to quantify the impact dehumanizing
ethics may have had on humanity, it is safe to conclude the foundations of
humanness offer great opportunities which would be foregone. When we calculate
the actual losses and the virtual benefits, we approach a nearly inestimable
value greater than any tools which we can currently use to measure it.
Dehumanization is nuclear war, environmental apocalypse, and international genocide. When
people become things, they become dispensable. When people are dispensable, any and every
atrocity can be justified. Once justified, they seem to be inevitable, for every epoch has evil and
dehumanization is evil's most powerful weapon.
Contention Two - Hunter Robots
CRS in 22 defines Lethal autonomous weapon systems (LAWS) [as] a special class of
weapon systems that use sensor suites to independently identify a target and employ an
onboard weapon system to engage and destroy the target without manual human control
of the system.
Lethal autonomous weapon systems (LAWS) are a special class of weapon systems that use sensor suites and computer algorithms to independently identify a target and employ an onboard weapon system to
engage and destroy the target without manual human control of the system.
Kelsey Piper, senior writer at Vox, 6-21-19, “Death by algorithm: the age of killer robots is closer than you think,” Vox //OC
In the past few years, the state of AI has grown by leaps and bounds. Facial
recognition has gotten vastly more accurate, as has object recognition, two skills that would
likely be essential for lethal autonomous weapons.
New techniques have enabled AI systems to do things that would have been impossible just a few years ago, from writing stories to creating fake faces to, most relevantly to
LAWS, making instantaneous tactical decisions in online war games. That means that lethal autonomous weapons have rapidly gone from
impossible to straightforward — and they’ve gotten there before we’ve developed
any sort of consensus on whether they are acceptable to develop or use.
Houwing in 2022
Before I get into this point I want to make a remark: Biases are very problematic, but too often they are presented as the main
problem with invasive surveillance technology, like facial recognition. I want to emphasize that in my perspective, these technologies
will also be problematic even if they do not contain biases. With that out of the way, what are we talking about when we talk about
bias in technology? One of the problems with these systems is that they are built with data. Data that is subjective and
established in colonial and patriarchal structures.26 While to many people automated decision making has a hint of objectivity compared with human-made decisions, it is the other way around. The bias is in the data that is used to train the algorithms and in the decisions made by humans in the design of the algorithms. The bias might be hidden by the promise of neutrality that emanates from the technology, but in fact, it has the power to unjustly discriminate at a much larger scale than biased individuals. 27 This
means that the categories, profiles, algorithms, etc. all entail, reproduce and exacerbate this bias. When we use these tools to surveil people, which effectively means to sort people, the systemic inequality is reinforced, to say the least. We could also say it in
more simple words: Biometric surveillance technologies like facial
recognition are racist and sexist. Some examples: Since corona made us all work and study from home, it hurts me to see the
messages of frustrated black students and students of color who are having a hard time dealing with proctoring software failing to deliver a fair and valuable
contribution to a functioning education system in this new situation in several ways. One of the most painful is that it shuts them out from entering their online exams because the facial recognition gives an error of “poor lighting”.28 One student, who is also a software researcher, looked into Proctorio, one of the most used
proctoring softwares. His research has shown that the software uses a facial detection model
that is failing to detect black faces more than 50% of the time. 29 This racism also plays a part in
algorithms used to prioritise images and decide who gets a digital stage and who doesn’t, based on the characteristics in the picture. People with Twitter might be familiar with
this experiment on the cropping algorithm of the social media platform.30 The main goal of the algorithm is to make people click on links and let them stay on the platform.
Therefore it prioritises the parts of content that it expects you to prefer. As a result, it gives the stage to the people on stage. Then there is great research by Joy
Buolamwini called Gender Shades31, where she shows how much better European and American facial recognition systems are in recognising white males compared to the
recognition of women of color, and thus how the chances of being false flagged are unequally spread over society, along the lines of privilege.
Lethal autonomous weapons can be used for genocide and ethnic cleansing
Piper in 2019
For one thing, if LAWS development continues, eventually the weapons might be extremely inexpensive. Already today, drones can be purchased or
built by hobbyists fairly cheaply, and prices are likely to keep falling as the technology improves. And if the US used drones on the battlefield, many of
them would no doubt be captured or scavenged. “If you create a cheap, easily proliferated weapon of mass destruction, it will be used against Western
countries,” Russell told me.
Lethal autonomous weapons also seem like they’d be disproportionately useful for
ethnic cleansing and genocide; “drones that can be programmed to target a
certain kind of person,” Ariel Conn, communications director at the Future of Life Institute, told me, are one of the most straightforward
applications of the technology.
Must ban the racist profiling technology of mass-surveillance facial recognition data collection
Amnesty International 21
Amnesty International today launches a global campaign to ban the use of facial
recognition systems, a form of mass surveillance that amplifies racist policing and
threatens the right to protest.
The Ban the Scan campaign kicks off with New York City and will then expand to focus on the use of facial recognition in other parts of the world in
2021. Facial recognition systems are a form of mass surveillance that violate the right to privacy and threaten the rights to freedom of
peaceful assembly and expression.
Clearview AI’s facial recognition technology has amassed a database of ten billion images since 2020. By the end of the year, it plans to have scraped 100 billion facial images from the internet. It is difficult to assess the company’s claims, but if we take Clearview AI at face value, it has enough data to identify almost everyone on earth and end privacy and anonymity everywhere.
As you read these words, your face is making money for people whom you’ve never met and who never sought your consent when they took your faceprint from your social media profiles and online photo albums. Today, Clearview AI’s technology is used by over 3,100 U.S. law enforcement agencies, as well as the U.S. Postal Service. In Ukraine, it is being used as a weapon of war. The company has offered its tools free of charge to the Ukrainian government, which is using them to identify dead and living Russian soldiers and then contact their mothers.
It would be easy to shrug this off. After all, we voluntarily surrendered our privacy the moment we began sharing photos online, and millions of us continue to use websites and apps that fail to protect our data, despite warnings from privacy campaigners and
Western security services. As so many of us sympathize with Ukraine and are appalled by Russia’s brutality, it is tempting to overlook the fact that Ukraine is not using Clearview AI to identify dead Ukrainians, which suggests that we are witnessing the
use of facial recognition technology for psychological warfare, not identification. Some people will be fine with the implications of this: if Russian mothers have to receive disturbing photos of their dead sons, so be it.
To understand why we might want to rethink the use of facial recognition technology in conflict, consider the following thought experiments. First, imagine that it was Russia that had scraped Ukrainian biometric data from the internet to build a facial recognition
technology tool which it was using to identify dead Ukrainians and contact their mothers. Liberal democracies would likely condemn these actions and add them to the growing list of Russia’s barbaric actions. Second, imagine a conflict in which the United States was
fighting against an opponent who had taken American faceprints to train its facial recognition technology and was using it to identify dead American soldiers and contact their mothers. This would almost certainly cause howls of protest across the United States.
Technology executives would be vilified in the press and hauled before Congress, where lawmakers might finally pass a law to protect Americans’ biometric data.