
Defence Academy – College of Management and Technology
Cranfield University
Shrivenham
(Word Count: 2,080)

Investigate a contemporary issue of professional, ethical or social concern in relation to information technology and its future implications

Use of facial recognition technology in public spaces

S6600657


MSc Information Capability Management

14/December/2019

Facial recognition technology can no longer be considered new, having been employed since the early 2000s to achieve a range of functions (Davis West, 2017). The technology has generated much interest among the media and civil rights organisations, often due to a perceived threat to human rights and its potential to further the emergence of a surveillance society, with controversy over its routine use (Gayle, 2019; Liberty, 2019). It has been widely employed in sectors such as security and policing but is now increasingly used for more personal applications in everyday life, not only in the physical world of public spaces and the home but also in online interactions within social media and other applications. It is likely that both the move to 5G and the increased penetration of the internet of things will see its presence spread (PYMNTS, 2019).

This paper looks at some of the professional, ethical and social concerns around the technology through both the media's exploration of the issue and an overview of the academic debate on the subject. It starts by defining what is meant by facial recognition technology, in order to understand the bounds of the technologies under discussion, before considering how the technology is being employed in public spaces. It then looks in turn at areas where the technology generates issues that impact on professionals, including regulatory attitudes and ethical and societal issues (such as loss of privacy, function creep, and informatisation of the body), before attempting to draw conclusions on its use.

Before examining the questions around the use of facial recognition technology, a description of the technology will be useful. It relies on the computer analysis of biometric differences between human faces, as presented to the machine through either a photograph or live video. At its simplest level, every facial recognition system carries out the following processes in order to match two images. The system first captures an image. It then identifies the face within the image. Once this is achieved, the features of the face are extracted and compared to a library of other facial features. A comparison that finds the same combination of features allows the system to record a match.
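The matching stage described above can be sketched as a minimal, self-contained example. Real systems derive a numerical feature vector from the detected face using a trained model; here the vectors, names and threshold are illustrative assumptions only, with similarity measured as Euclidean distance between feature vectors.

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

# Toy "library" of enrolled faces: in a real system each vector would be
# produced by a feature-extraction model from a photograph or video frame.
LIBRARY = {
    "alice": (0.1, 0.9, 0.3),
    "bob": (0.8, 0.2, 0.5),
}

def match_face(features, library, threshold=0.25):
    """Return the closest enrolled identity, or None when no enrolled
    face lies within the (illustrative) matching threshold."""
    name, template = min(library.items(), key=lambda kv: dist(features, kv[1]))
    return name if dist(features, template) <= threshold else None

print(match_face((0.12, 0.88, 0.31), LIBRARY))  # close to alice's vector
print(match_face((0.5, 0.5, 0.5), LIBRARY))     # near no enrolled face
```

The threshold is the key design choice: it decides how different two captures of a face may be while still being recorded as a match.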

Depending on the nature of the system, it will then undertake one of three types of classification to achieve the system's goal:
• 'one to one': the system already holds an image of the individual using it, and their facial ID is verified against the stored image. This is most likely to be used where the technology is employed as a biometric tool in an access-control scenario.
• 'one to many': an image is compared to a library of faces, and a match is attempted against this library. This technique would be used in a law enforcement scenario where a suspect's face may be compared to a library of criminals' faces.
• 'many to many': the system tries to identify and allocate all the faces in an image. This is the goal of the mass-surveillance CCTV systems employed by the police in a crowd-control situation.
It is the employment of systems that can implement one or all of these methods in public spaces that is having such an impact on public discourse (Tistarelli, Li and Chellappa, 2009).
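The first two classification modes can be sketched as follows; the feature vectors, identities and threshold are invented for illustration, not taken from any real system. 'Many to many' then amounts to running the identification function over every face detected in a frame.

```python
from math import dist  # Euclidean distance (Python 3.8+)

THRESHOLD = 0.25  # illustrative matching threshold, not a real system's value

def verify(claimed_features, stored_features, threshold=THRESHOLD):
    """'One to one': check a presented face against the single stored
    template for the claimed identity (e.g. access control)."""
    return dist(claimed_features, stored_features) <= threshold

def identify(features, library, threshold=THRESHOLD):
    """'One to many': search a library of faces for the closest match
    (e.g. comparing a suspect against a watch list)."""
    score, name = min((dist(features, tmpl), n) for n, tmpl in library.items())
    return name if score <= threshold else None

library = {"suspect_17": (0.7, 0.1, 0.4), "suspect_42": (0.2, 0.6, 0.9)}
print(verify((0.71, 0.12, 0.39), library["suspect_17"]))  # True
print(identify((0.21, 0.58, 0.91), library))              # suspect_42
```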

Facial recognition technology has expanded beyond the security function in which it was initially concentrated to a much broader range of applications. However, the media's interest still appears to focus on its potential abuse as a security tool within public spaces. This often follows either academic research, such as Davies, Innes and Dawson (2018), or an organisation's own clumsy attempts to popularise the technology in the public's imagination, as was the case at the 2001 Super Bowl (Woodward, 2001). Use of the technology varies from culture to culture, with different societies' attitudes to privacy and law playing a considerable role in how quickly it has been adopted and accepted. At present China appears to be the culture that has seen both the widest use in policing and social control and the greatest diffusion of the technology away from its original uses (Zeng et al., 2019; Qiang, 2019). Zeng et al. (2019) show that in China the technology is increasingly used as a biometric means to demonstrate an entitlement to access apartment blocks, make financial payments and carry out other daily transactional functions. The UK itself has seen trials of the technology to assist the policing of large events (Fussey and Murray, 2019) and, while the European Union's GDPR has placed some constraint on its use, courts appear to be giving developers freedom to experiment (Rees, 2019). In the US, where there is a strong constitutional defence of personal liberty, acceptance of the technology has been less wholehearted, with many cities and states actively banning it from public spaces (Lynch et al., 2019; Feldstein, 2019).

Another issue around facial recognition's use, moving the technology beyond the domains of policing and security within public spaces, is the growing amount of micro-advertising. Here the technology is used to target individuals directly in public spaces such as petrol station forecourts and department stores (Page, 2019; Petrescu and Krishen, 2018). What may further create ethical challenges is the increasing effort being placed on developing facial recognition's ability to analyse people's moods from their expressions (Zeng et al., 2019). This increased knowledge of the observed individual's state of mind will further allow advertisers to tailor marketing to that person, or police to monitor faces in a crowd for perceived intent.

Both these broad areas of debate around facial recognition technology centre, in the public's imagination, on its real ability to threaten privacy. The US Center for Democracy and Technology suggested the loss of privacy caused by the technology could occur at one of three levels (Lorenzo Hall, 2012). The first and simplest level is 'individual counting': facial information is gathered on an aggregate basis but not used to target people in individually focused activity such as micro-marketing. The second level is 'individual targeting': facial data is used for individually targeted interaction, such as micro-advertising aimed at the target's areas of interest. The final level is 'individual identification', where an individual's facial data points are used to search for them across the web or through a physical location.

The follow-on concern from privacy is the question of the technology's accuracy. Since facial recognition was first deployed in public spaces there have been a number of well-documented cases of systems failing to correctly identify the individuals they record. Grother, Quinn and Ngan (2017) identified two classes of error. The first is the 'false negative', where the system is unable to match an individual's face to the image it holds of them in its library. In the 'false positive', the system incorrectly matches a person's face to a face in the database.
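The two error classes can be illustrated with a toy example showing how the matching threshold trades one off against the other: set it too strictly and genuine matches are missed (false negatives); too loosely and strangers are matched (false positives). All vectors and thresholds here are invented for illustration.

```python
from math import dist  # Euclidean distance (Python 3.8+)

# The genuine probe should match the enrolled face (a miss is a false
# negative); the impostor probe should not (a match is a false positive).
enrolled = (0.30, 0.70)
same_person_probe = (0.38, 0.66)   # genuine, slightly different capture
other_person_probe = (0.45, 0.60)  # impostor, coincidentally nearby

for threshold in (0.05, 0.10, 0.20):
    fn = dist(enrolled, same_person_probe) > threshold    # false negative?
    fp = dist(enrolled, other_person_probe) <= threshold  # false positive?
    print(f"threshold={threshold}: false_negative={fn}, false_positive={fp}")
```

With these made-up values, the strictest threshold produces a false negative, the loosest a false positive, and only the middle setting avoids both, which is the balance a real deployment must strike.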

These uses of facial recognition technologies can be described in terms of function creep and the informatisation of the body. The first area to be explored is 'function creep': the situation in which a technology's purpose evolves beyond the role originally identified for it, meaning that the data it handles is used for purposes neither specifically intended nor approved. There are three common causes of this movement: a policy vacuum, unsatisfied demand and what is called the 'slippery slope effect' (Tistarelli, Li and Chellappa, 2009).

Looking at these in turn, the most common reason the technology sees its uses spread from the initial intention is a policy vacuum in the direction the controlling authority or state has placed around the technology's uses and the considerations that should be made for its use. The danger is then that the demands of the most powerful and influential voices are met to the detriment of others. The second regular cause of function creep is unsatisfied demand: the opportunities the technology presents to address unmet requirements from areas not originally envisaged are taken without adequate consideration of the impact. The third component, the 'slippery slope effect', is a more imperceptible change of use, as the 'street' finds new and novel uses for the technology unimagined during the design, development and regulatory process (Tistarelli, Li and Chellappa, 2009).

After function creep, Tistarelli, Li and Chellappa (2009) identify the 'informatisation of the body' as an area where facial recognition technology will have increasingly profound professional, social and ethical implications. Facial recognition sits alongside several similar technologies that take physical elements of individuals' bodies and categorise and identify them by unique features; examples include iris scans, fingerprints and voice. What makes facial recognition so potent within this domain is that it is far easier to collect and analyse than most other attributes, as direct interaction with the individual whose data is sought is not required; all that is needed is an image of the individual, an item that in many cases can be trawled from social media or government and official sources. The rise of biometrics as a key identifier has been recognised as both a potential danger and a liberation (Green and Mordini, 2006; Tistarelli, Li and Chellappa, 2009). The danger arises from providing an authoritarian state with more tools to monitor and categorise its population. Countering this view is the suggestion that the technology offers the opportunity to remove from the state its monopoly on conferring legitimacy on individuals, currently expressed through a plethora of documentation, such as birth certificates, national insurance numbers and passports, that validates people's citizenship.

The issues identified point to an area that requires careful consideration of its impact, with proper deliberation given by developers to moral and ethical questions during a system's development and implementation. This requirement has been recognised from the start of the technology's life, and some thought has been put into developing intellectual tools to help. An example from the end of the last decade is Introna and Nissenbaum's (2009) suggestion of five considerations to which policy makers should have an answer:

• Ensuring the system does not disrupt the proper flow of information.
• Considering how biometric identifiers could be abused if not properly constrained.
• Ensuring the individual being interrogated by the facial recognition system does not carry all the risk.
• Conducting a proper assessment to ensure the benefits of the implementation fully outweigh any risks.
• Ensuring that the data collected and distributed by the system is subject to appropriately rigorous controls.

Attempts to develop ethical and moral structures to shape the technology's development are not the sole preserve of western societies. Contrary to the perception in much western media, Chinese academics are also alive to the issues, as can be seen in Zeng et al.'s (2019) exploration of privacy issues around the technology as it is implemented in China, which points out that legal action has successfully been taken to stop the use of facial expression monitoring in university lectures.

To summarise, the power facial recognition technology has both to assist societal security needs and to meet commercial demands for more accurately targeted marketing means it is likely to remain a feature of our public spaces as we progress further into the 21st century. While the media coverage it receives demonstrates that many in the public are very alive to its privacy issues, policy makers' response is less apparent. If the public are to trust the technology's increasing prevalence, its reliability needs to rise to a level proportionate to the uses to which it is put. In addition, those working with the technology need to be both cognisant and honest about its strengths and weaknesses and apply an ethical and regulatory framework, open to scrutiny, to the professional decisions they make.
Bibliography
Boccia, Y., Chong, J., Claydon, T. and Herteach, H. (n.d.) Facial Recognition. Available at:
https://www.doc.ic.ac.uk/~hh4017/Introduction (Accessed: 22 November 2019).
Davies, B., Innes, M. and Dawson, A. (2018) An Evaluation of South Wales Police's Use of. Available at: http://www.statewatch.org/news/2018/nov/uk-south-wales-police-facial-recognition-
cardiff-uni-eval-11-18.pdf (Accessed: 10 December 2019).
Davis West, J. (2017) History of Face Recognition & Facial Recognition Software, FaceFirst. Available at: https://www.facefirst.com/blog/brief-history-of-face-recognition-software/#
(Accessed: 12 December 2019).
Feldstein, S. (2019) ‘The Road to Digital Unfreedom: How Artificial Intelligence is Reshaping
Repression’, Journal of Democracy, 30(1), pp. 40–52. Available at: 10.1353/jod.2019.0003 (Accessed:
28 October 2019).
Fussey, P. and Murray, D. (2019) Independent Report on the London Metropolitan Police Service’s
Trial of Live Facial Recognition Technology. Available at:
http://repository.essex.ac.uk/24946/1/London-Met-Police-Trial-of-Facial-Recognition-Tech-Report-
2.pdf (Accessed: 21 November 2019).
Gayle, D. (2019) 'Privacy campaigners warn of UK facial recognition "epidemic"', The Guardian, 16 August. Available at:
https://www.theguardian.com/technology/2019/aug/16/privacy-campaigners-uk-facial-recognition-
epidemic (Accessed: 6 November 2019).
Green, M. and Mordini, E. (2006) Identity, Security and Democracy: The Wider Social and Ethical Implications of Automated Systems for Human Identification. Amsterdam: IOS.
Grother, P., Quinn, G. and Ngan, M. (2017) 'Face in Video Evaluation (FIVE): Face Recognition of Non-Cooperative Subjects', NIST Interagency Report 8173. Available at: 10.6028/NIST.IR.8173 (Accessed: 10 December 2019).
Introna, L.D. and Nissenbaum, H. (2009) 'Facial Recognition Technology: A Survey of Policy and Implementation Issues', Center for Catastrophe Preparedness and Response, New York University.
Liberty (2019) Neighbourhood Watched: How Policing Surveillance Technology Impacts Your Rights. Available at: https://www.libertyhumanrights.org.uk/sites/default/files/Explainers-Facial
Recognition_1202.pdf (Accessed: 11 December 2019).
Lorenzo Hall, D.J. (2012) Facial Recognition & Privacy: An EU-US Perspective, Center for Democracy & Technology (Accessed: 29 November 2019).
Lynch, J., D'Andrade, H., Glendon, S., Kelley, J., Lee, I., Maass, D., Shen, C. and Schoen, S. (2019) Face Off: Law Enforcement Use of Face Recognition Technology. Available at: https://www.eff.org/wp/face-off (Accessed: 29 November 2019).
Page, R. (2019) '10 examples of brands using emotion analytics to ramp up customer engagement', CMO. Available at:
https://www.cmo.com.au/article/print/662788/10-examples-brands-using-emotion-analytics-ramp-
up-customer-engagement/ (Accessed: 4 December 2019).
Petrescu, M. and Krishen, A.S. (2018) 'Novel retail technologies and marketing analytics'. Available at: 10.1057/s41270-018-0040-z (Accessed: 6 December 2019).
PYMNTS (2019) 'Facial Recognition Biometrics Finds Friend In 5G', PYMNTS. Available at: https://www.pymnts.com/innovation/2019/5g-biometrics-gobox-china/ (Accessed: 10 December 2019).
Qiang, X. (2019) President Xi’s surveillance state. Available at:
https://muse.jhu.edu/article/713722/pdf?casa_token=H61FM6XkXiIAAAAA:o-
eZ67dOTiXLqFYHnTVffOxHAkUAPSAPc1VX0nk2EzamSl0WLXskJjvIYsSBVlnEVbUiJ4o (Accessed: 28
October 2019).
Rees, J. (2019) 'South Wales Police use of facial recognition ruled lawful', BBC News. Available at: https://www.bbc.co.uk/news/uk-wales-49565287 (Accessed: 12 December 2019).
Tistarelli, M., Li, S.Z. and Chellappa, R. (2009) Handbook of Remote Biometrics for Surveillance and
Security. Available at: http://www.springer.com/series/4205 (Accessed: 10 December 2019).
Woodward, J.D. (2001) 'Super Bowl Surveillance'.
Zeng, Y., Lu, E., Sun, Y. and Tian, R. (2019) Responsible Facial Recognition and Beyond. Available at:
https://arxiv.org/pdf/1909.12935.pdf (Accessed: 20 November 2019).
