Maria Imperatrice
CST 300
11 February 2024
Introduction/Background
Facial recognition technology has been utilized for a variety of functions, from
unlocking our smartphones to helping law enforcement fight crime. Advancements in this
technology have taken place over several decades, from its initial development involving
computers mapping markers on human faces to the use of artificial intelligence to identify
patterns in large amounts of data. With these continuing developments, an ethical issue has
arisen from the use of this technology. The same data being used to train artificial intelligence in
facial recognition has created ongoing concerns about the bias present in these data sets. While
the technology itself is not inherently biased, its development can be, and has been, embedded
with bias. A notable example of this can be seen when mostly white male faces are used to train
this technology, leading to possible misidentification of women and minorities by the resulting
systems.
Development of facial recognition technology began in the 1960s. The earliest pioneers
of this technology were Woody Bledsoe, Helen Chan Wolf and Charles Bisson. This
rudimentary technology used computers to recognize the human face through manually marked
landmarks, measuring the distances between them to determine identity (NEC New Zealand,
2020). Goldstein, Harmon, and Lesk built on Bledsoe's work in the 1970s, incorporating
additional subjective markers, including lip thickness and hair color, in an effort to automate
facial recognition. Further progress did not come until the 1980s and 1990s, when
mathematicians Sirovich and Kirby applied linear algebra to solve some of the issues facing
facial recognition software at the time. Finally, in the 2000s, the U.S. government began the
Facial Recognition Vendor Tests (FRVT), designed to give the government access to prototype
technology and evaluations of facial recognition software systems that were available for use
(NEC New Zealand, 2020). It was during this period that 9/11 created a larger demand for this
software, making its usage more widespread among federal, state, and local agencies.
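The landmark-and-distance matching described above can be sketched in a few lines. This is an illustrative reconstruction of the general idea only, not code from any historical system; the landmark names and coordinates are invented for the example:

```python
# Illustrative sketch of 1960s-style landmark matching: reduce each face to the
# distances between marked landmarks, then identify a probe face by finding the
# enrolled face with the most similar distance profile. All data here is made up.
from itertools import combinations
from math import dist

def feature_vector(landmarks):
    # Distance between every pair of landmarks (pair order is deterministic).
    return [dist(a, b) for a, b in combinations(landmarks, 2)]

def identify(probe_landmarks, gallery):
    # Pick the enrolled identity whose feature vector is closest to the probe's.
    probe_vec = feature_vector(probe_landmarks)
    return min(gallery, key=lambda name: dist(probe_vec, feature_vector(gallery[name])))

# Hypothetical (x, y) marks: left eye, right eye, nose tip, mouth center.
gallery = {
    "alice": [(30, 40), (70, 40), (50, 60), (50, 80)],
    "bob":   [(25, 45), (75, 45), (50, 65), (50, 90)],
}
probe = [(31, 40), (69, 41), (50, 61), (50, 79)]  # a noisy re-measurement
print(identify(probe, gallery))  # prints: alice
```

Modern systems replace the hand-marked landmarks with features learned from training data, which is exactly where the data-set bias at issue in this paper enters.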
Further advancements in this technology and its utilization by government agencies have
only expanded in recent times. In a 2021 Government Accountability Office report, 24 federal
agencies were surveyed and reported using this technology for digital access, cybersecurity,
criminal investigations, and physical security, including controlling access to buildings. The
agencies that reported its usage also noted that it aided their investigations: it was used to
compare images against mugshots and to identify victims of crimes through systems that
allowed comparison against publicly available images from social media. Nearly half of
these agencies went on to say that they plan to expand their use of facial recognition technology.
Stakeholder Analysis
Two opposing stakeholders each hold values, positions, and claims regarding the use of
facial recognition technology and software. Those adversely affected by the use of this
technology are against its continued use by government agencies, while the same government
agencies that utilize this technology are in favor of its use in the commission of their work.
Values. Those affected most by the inaccuracies present in this technology also have the
most to lose when it fails. Given the devastation it can cause those adversely affected, these
concerns must be addressed before the technology is deployed. Bias in the artificial
intelligence and machine learning software behind it creates an ethical dilemma that can lead to
discrimination and false identifications (Hassanin, 2023). Utilizing this technology without
addressing these biases is irresponsible and will create further issues as artificial intelligence
continues to advance.
Position. Facial recognition software should not be used, especially by government
agencies, until biases have been addressed and there is no longer a discrepancy in accuracy
across race and gender. Laws must be enacted to regulate this technology. Continuing its use is
not only irresponsible, but potentially damaging to those adversely affected by the bias existing
in the data sets being deployed to train this technology. According to a paper written by
researchers at MIT, error rates for gender classification were higher for females than males and
were also higher for subjects that were darker skinned than those who were lighter skinned. This
paper went on to note that the data set used for this system was more than 77 percent male and
more than 83 percent white (Hardesty, 2018). This highlights a fundamental flaw with this
technology as it is currently trained.
Claim. Until these biases are addressed, government use of this software should be
banned. This claim of policy mitigates the ethical issues surrounding the continued use of this
technology. Its discontinued use would stop the institutional racial and gender disparities that
it perpetuates.
Values. Facial recognition technology is a powerful tool that allows law
enforcement and government entities to not only prevent crime, but also catch criminals.
Allowing the continued use of this technology will allow law enforcement agencies to keep
communities safe and aid in solving crimes that may not have otherwise been solved without the
use of this technology. It is the government's responsibility to keep its citizens safe to the best of
its ability, by any means necessary. The importance of this goal can be seen in the various
government agencies created in its pursuit, including the NSA, the Department of Homeland
Security, the Federal Bureau of Investigation, and countless other state and local organizations.
Position. While there may be mistakes and bias in facial recognition software, the
benefits of using this software outweigh the risks, especially when it is used to prevent crimes
and keep society safer. Utilizing this technology has expedited processes that traditionally have
taken countless hours; for example, in the past when the police would attempt to identify persons
of interest, they would manually look through hundreds of mugshots or search through databases
with minimal descriptions. Facial recognition software automates this process, allowing agencies
to dedicate additional time and resources to other cases. While this software can be prone to
error, humans are as well: eyewitness misidentifications have been responsible for a large
number of wrongful convictions in the United States.
Claim. Despite the presence of bias, we must look at the greater good and the benefits to
society of utilizing facial recognition software and artificial intelligence. This aligns with a claim
of value: it is better to keep society safe and catch potential criminals than to discontinue the
software's use.
Argument Question
Should government agencies utilize facial recognition technology to help solve and
prevent crimes knowing the issues and bias that exist within the AI data sets used to train this
technology?
Stakeholder Argument
Government agencies should not use facial recognition technology given the bias
embedded in its software. Virtue ethics would dictate that there need to be limitations and
safeguards in place to ensure that these concerns are addressed and rectified before using this
technology. This is because morality is guided by virtuous characteristics; being just, brave,
generous, and wise all play a part in this framework. Allowing discrimination of any
kind is inherently not virtuous. This ethical framework is said to have originated with Aristotle,
who noted that a person is seen as virtuous if they have ideal characteristics. These
characteristics are applied in a moral situation and focus on experience, ability to reason, and
sensitivity rather than rules or principles (Athanassoulis, n.d.). In that sense, it is virtuous to
address and correct the bias in this technology before putting it to use.
The virtue ethics framework places an emphasis on virtue or morality as opposed to rules
or consequences. One must ask whether it is virtuous to allow for the continued usage of
technology that has shown large discrepancies in accuracy across race and gender. These
discrepancies can lead to misidentifications in best-case scenarios and serious long-term legal
consequences in the worst cases. Rather than focusing on the outcome of a given moral
dilemma, character and virtue should guide what the morally correct course of action is
(Athanassoulis, n.d.). In this case, moving towards a society free of bias and discrimination is
the virtuous choice.
This course of action is most correct because it emphasizes the role of virtue in this moral
dilemma. Discontinuing the use of this technology would effectively end yet another means of
institutional discrimination. In turn, further action to address the issues in the software
would create a product that is not only more reliable, but free of embedded biases.
Those adversely affected by the continued use of this technology bear its costs in
the ongoing discrimination resulting from the inaccuracies in this software. Overall, bias in
artificial intelligence data sets has led to a series of problems, including but not limited to false
identification, racial and gender discrimination, and profiling (Najibi, 2020). Conversely, what is
gained by banning this technology is the protection of those in our society who have continued to
be marginalized by it.
It is in the government’s best interest to use this facial recognition software, as doing so
would allow for these agencies to continue fighting crime, catching criminals, and keeping our
society safer. This aligns with the ethical framework of consequentialism, which is derived from
utilitarianism. The consequentialist framework has been around for a significant amount of
time: it has roots in the utilitarian framework developed by Jeremy Bentham, though the term
"consequentialism" itself was first coined by Elizabeth Anscombe in her essay "Modern Moral
Philosophy." According to this ethical framework, the consequences of an action
are what matter when making an ethical decision (Seven Pillars Institute, n.d.). In this situation,
the consequences of continued use of this technology can be seen in the reduction of crime, the
capture of criminals, and a safer society.
Although profiling and racial and gender discrimination are inherently bad, if utilizing
software in which these issues are present saves a life or protects citizens from harm, the end
justifies the means. This ties back to the idea that it is the outcome and consequence that matters
most when making an ethical decision. Despite the flaws and potential inaccuracies in facial
recognition software, is it not better to save lives or stop crime if given the means to do so?
Applying the tenets of consequentialism would indicate that this is the ethically correct choice.
Allowing for the continued use of this technology is correct due to the lives it has saved
and will continue to save. Ultimately, this technology has benefitted the government in a variety
of ways, permitting agencies to gain leads on investigations and catch criminals who may not
otherwise be captured. The consequences of its continued use outweigh any possible ethical
concerns.
Were this technology banned, government agencies would lose a key resource in the fight
against crime. The National
Institute of Standards and Technology found that forensic examiners work best when facial
recognition technology is used as a support tool, increasing the accuracy of their performance. It
has a variety of benefits for government agencies, including but not limited to identifying
victims, catching terrorists, solving sex crimes, fighting human trafficking, and solving violent
crimes (Parker, 2020). To discontinue its use would take away an effective means of fighting
crime.
Student Position
Facial recognition software should be used to a limited extent; however, it should not be
used by the government as a means of policing its people. We should continue to use this
software and advance it, but also ensure that biases in the system are adequately addressed and
resolved. Regulations and oversight are essential to its continued use by government agencies.
This position aligns most closely with those who believe the government should not be utilizing
this technology without first addressing the large inconsistencies and disparities across race
and gender. While the technology itself is not inherently biased, the artificial intelligence used in
this software is perpetuating racism and sexism. Despite these problems, there are means to
address these issues and they should be explored more aggressively. In no way should the use of
this technology be discontinued, nor should development be halted. Instead, there are ethical
ways to not only address these ongoing issues, but to advance technology in a way that is
beneficial to all.
While it’s easy to make judgments on whether or not this technology should be utilized,
concrete action needs to take place in order to address the ongoing underlying issues. There must
be continued research into artificial intelligence and the ways in which bias can be minimized in
the data sets used. Additionally, there need to be clear boundaries and limitations on the use of
this software, especially by a government agency. The lack of oversight and accountability must
be addressed through regulations and laws limiting its usage and mandating the disclosure of its
use in any system. Stopping the use of artificial intelligence in facial recognition software
altogether is not the answer. Setting these limitations and continuing research into bias would
not only mitigate the issues, but allow for
improvement of this technology as a whole by increasing its accuracy and decreasing its error
rate.
References
Athanassoulis, N. (n.d.). Virtue ethics. Internet Encyclopedia of Philosophy. Retrieved from
https://iep.utm.edu/virtue/
Hardesty, L. (2018, February 11). Study finds gender and skin-type bias in commercial
artificial-intelligence systems. MIT News. Retrieved from
https://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212
Hassanin, N. (2023, August 21). Law professor explores racial bias implications in facial
recognition technology. University of Calgary News. Retrieved from
https://ucalgary.ca/news/law-professor-explores-racial-bias-implications-facial-recognition-technology
Najibi, A. (2020, October 24). Racial discrimination in face recognition technology. Science in
the News, Harvard University. Retrieved from
https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/
NEC New Zealand. (2020, May 26). A brief history of facial recognition. NEC. Retrieved from
https://www.nec.co.nz/market-leadership/publications-media/a-brief-history-of-facial-recognition/
Parker, J. (2020, July 16). Facial recognition success stories showcase positive use cases of the
technology. Security Industry Association. Retrieved from
https://www.securityindustry.org/2020/07/16/facial-recognition-success-stories-showcase-positive-use-cases-of-the-technology/
Seven Pillars Institute. (n.d.). Applying utilitarianism: Are insider trading and the bailout of
GM ethical? Retrieved from
https://sevenpillarsinstitute.org/applying-utilitarianism-are-insider-trading-and-the-bailout-of-gm-ethical/
U.S. Government Accountability Office. (2021). Facial recognition technology: Current and
planned uses by federal agencies (GAO-21-526). Retrieved from
https://www.gao.gov/products/gao-21-526