
Feedback from NASSCOM on the Responsible AI - #AI for All

Adopting the Framework: A Use Case Approach on Facial Recognition Technology


To
Dr Anna Roy
Senior Adviser
NITI Aayog
December 30th, 2022

INTRODUCTION

The National Association of Software & Service Companies (NASSCOM) welcomes this opportunity
to submit comments on the NITI Aayog’s Discussion Paper Responsible AI for All: Adopting the
Framework – A Use Case Approach on Facial Recognition Technology (Paper).i The prevalence and
growth of the facial recognition technology (FRT) market as a subset of the larger artificial
intelligence (AI) sector merits discussion of the measures that may be suitable to ensure the
responsible use of FRT and mitigate possible risks.

To this end, we commend the NITI Aayog for advancing the policy discussion and appreciate the
overall approach adopted in the Paper, particularly the focus on encouraging the responsible
development and deployment of FRT in India, as in the Digi Yatra project,ii in compliance with the NITI
Aayog's responsible AI (RAI) principles. To strengthen the document, we have offered principle-based
suggestions.

At a broader level, our recommendation is that any use of an FRT system needs to be evaluated on
the basis of its use case. This will help identify the nature of the risks and challenges involved and
develop the necessary precautionary measures. We have analysed the use of FRT systems in other
jurisdictions (Border control: Global Entry in the United States of America; Airports and security
clearance: Face Express in Japan; and Forensic identification: INPOL-Z in Germany) as listed
in Annex 2 of the Paper.iii The aim is to identify learnings from these use cases to inform the NITI
Aayog's thinking on the responsible use of FRT.iv

While we have identified the key learnings from the international use cases to frame our
recommendations, Annexure A of this document provides an overview of how different types of
FRT have been deployed in these three countries. For this, we have divided the study into four
themes: i) how FRT has been used; ii) precautions taken before adoption; iii) risks and challenges
post-implementation; and iv) steps taken to address them.

Summary of recommendations

1. Requirements like appropriateness of explainability, external review, and the constitution of an
ethics committee should be based on the nature of risk associated with the FRT (for example,
whether the use of FRT affects human rights).
2. FRT systems whose output can be used in a manner that affects human
rights/freedom/diversity/privacy should be subject to stricter requirements.
3. An adverse event reporting system may be established to collate adverse incidents, thereby
establishing a post-market monitoring mechanism.

4. Considering the complex nature of the technology being used, full explainability may not
always be feasible. In that event, alternatives such as human review, audits, and limiting use
cases could be utilised.
5. Procurement requirements and conditions should be guided by the level of risk or harm and the
use case of the FRT system in question.
6. A clear division of responsibilities and liabilities between developer and deployer should be
made (through a contractual framework), depending on the nature of the application of FRT.
7. List the different risk management functions to be considered by entities at the different steps of
the FRT supply chain to fix accountability.
8. The difference in responsibilities between developer and deployer must be considered while
framing the grievance redress mechanism; otherwise, this can create duplication and
inefficiency.
9. To avoid regulatory challenges that may arise due to a fragmented approach in the
implementation of FRTs across India, a coordination mechanism should be established for
uniformity and certainty when FRT is deployed by multiple state agencies for analogous
purposes (like surveillance and law enforcement).
10. Any future framework for FRT should be harmonised with the Digital Personal Data Protection
Bill, 2022 (DPDP Bill). To illustrate, while consent is an essential mechanism of a responsible and
ethical FRT system, excessive reliance on consent would not align with the DPDP Bill.
11. Rather than imposing a blanket need for consent at every stage, consent requirements should
be commensurate with the risks involved. Scope for inferred consent should be included in
defined circumstances based on the risk involved.
12. Focus should be on highlighting use cases that merit additional safeguards and/or data types
that require additional safeguards with examples to justify the same.

DETAILED RECOMMENDATIONS

Encourage a risk-based approach

The Paper broadly categorises FRT into two types: 1:1 systems for non-security purposes like
authentication and verification, and 1:n systems for security purposes, like image identification and
live monitoring for law enforcement and surveillance purposes.v While it notes the differences in the
risk potential of the two types of FRT – highlighting that 1:n FRT carries a greater potential of harm
for individuals – the Paper does not acknowledge that different FRT use cases require a
context-specific regulatory approach. For instance, the Paper provides common recommendations
for the developers and vendors of all FRT technologies, including requirements on explainability,
audits, and instituting internal ethics committees, among others.vi
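
To make this distinction concrete, the short sketch below (our illustration, not drawn from the Paper) contrasts the two operations over precomputed face embeddings; the embeddings, gallery, and similarity threshold are hypothetical assumptions. It also shows why 1:n carries greater risk: every additional gallery entry is another opportunity for a false match.

```python
# Illustrative sketch only: 1:1 verification vs 1:n identification.
# Embeddings, gallery, and threshold are hypothetical assumptions.
from typing import Optional

import numpy as np

THRESHOLD = 0.8  # assumed similarity cut-off; real systems tune this per use case


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def verify_1_to_1(probe: np.ndarray, enrolled: np.ndarray) -> bool:
    """1:1 authentication: is the probe the same person as ONE enrolled face?"""
    return cosine_similarity(probe, enrolled) >= THRESHOLD


def identify_1_to_n(probe: np.ndarray, gallery: dict) -> Optional[str]:
    """1:n identification: search the probe against an entire gallery, as in
    surveillance; the chance of a false match grows with the gallery size."""
    best_id, best_score = None, THRESHOLD
    for person_id, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id
```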

In practice, different operational and ethical questions can arise depending on the end use of a
system (for example, whether it is enterprise-facing or individual-facing), and hence systems should
be evaluated against different frameworks. While certain obligations may be useful in the context of
high-risk use cases – such as surveillance – asking developers of low-impact systems (like
unlocking a phone or comparing a person with a portrait) to comply with similar requirements
(like explainability) would be disproportionate and onerous, particularly for developers of new and
innovative FRT applications.

Page 2 of 14
To further illustrate, the Paper places explainability requirements on developers without
considering the risk or the potential technical challenges of the FRT system. The appropriateness of
explainability as a risk mitigation feature will also depend on the use case of the FRT. Further,
considering the complex nature of the technology being used, full explainability may not always be
feasible. In that event, alternatives such as human review, audits, and limiting use cases could be
utilised.vii Similarly, the Paper requires all FRT systems to be subject to external audits of the
underlying algorithm/training datasets or review by an internal ethics committee. Such requirements
should instead follow a graded approach depending on the risk level that an FRT system being
developed or deployed may pose.viii
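
As one concrete illustration of such an alternative, the sketch below fits a global surrogate model – one of the "supplementary tools" named in the Singapore framework cited in endnote vii – to a black-box matcher. The black-box scorer and its three input features are hypothetical stand-ins for a proprietary FRT component, not any real product's API.

```python
# Illustrative sketch only: approximating an opaque FRT scorer with a small,
# human-readable decision tree (a "global surrogate"). The black-box function
# below is a hypothetical stand-in, not a real FRT system.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)


def black_box_match_score(features: np.ndarray) -> np.ndarray:
    """Stand-in for an unexplainable matcher; the features (image quality,
    pose angle, embedding distance) and weights are invented for illustration."""
    quality, pose, distance = features.T
    return 1.0 / (1.0 + np.exp(-(2.0 * quality - 1.5 * pose - 3.0 * distance)))


# Probe the black box on representative inputs, then fit a shallow tree that
# approximates its behaviour in a form a reviewer or a court could inspect.
X = rng.uniform(0.0, 1.0, size=(5000, 3))
surrogate = DecisionTreeRegressor(max_depth=3).fit(X, black_box_match_score(X))
print(export_text(surrogate, feature_names=["quality", "pose", "distance"]))
```

A surrogate of this kind does not reveal the matcher's true internals; it only offers an inspectable approximation, which is why the other alternatives above (human review, audits, limited use) remain relevant.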

Therefore, we submit that a risk-based approach is important to ensure that regulatory
interventions are proportionate, striking a balance between leveraging the benefits of FRT and
limiting its risks. This is in line with trends in FRT and AI regulation worldwide. For instance, as
the Paper itself highlights, the European Union has proposed an AI Act which will establish a risk-
based compliance framework – under which FRT systems categorised as high risk will be subject to
the highest level of restrictions.ix

In our response to a previous consultation paper released by the NITI Aayog, we had submitted
that a suitable framework for understanding risk in the context of AI applications worth
considering is the framework developed by the German Data Ethics Commission.x Useful
inference can also be drawn from the European Commission's white paper titled Artificial Intelligence
- A European approach to excellence and trust. This white paper argues that both the sector
and the intended use need to be considered when evaluating risks, from the viewpoint of protecting
safety, consumer rights, and fundamental rights.xi

Recommendations

1. Requirements like appropriateness of explainability, external review, and the constitution of an
ethics committee should be based on the nature of risk associated with the FRT (for example,
whether the use of FRT affects human rights). Imposing blanket requirements not commensurate
with risks can stifle FRT innovation.
2. Similarly, procurement requirements and conditions should be guided by the level of risk or harm
and the use case of the FRT system in question.
3. FRT systems whose output can be used in a manner that affects human
rights/freedom/diversity/privacy should be subject to stricter requirements.
4. When explainability is a challenge, use alternatives such as human review of accuracy,
counterfactual explanations, or surrogate models that can enhance the interpretability of an
outcome that cannot be technically explained.
5. In the case of high-risk FRT (such as surveillance, forensic investigation, and law enforcement),
the following measures may be considered:
i) Periodic review of the technology (for example, refreshing algorithms) to reduce error
rates, and updating accuracy standards in line with internationally accepted standards.
ii) Establish uniform mechanisms to conduct cybersecurity and cyber resilience tests for
FRT systems periodically and in line with appropriate standards.

iii) To mitigate unauthorized access to data, internal frameworks can be established
outlining the degree of control over third-party service providers and the process for
approving access to data stored by their FRT systems.
iv) An adverse event reporting system may be established to collate adverse incidents,
establishing a post-market monitoring mechanism.xii

Apportion responsibilities between developers and deployers

The Paper provides that impacted consumers can hold both deployers and developers liable for
harms arising out of the use of FRT applications.xiii While we support the need to put in place
grievance redressal mechanisms for individuals, this should not come at the cost of misplacing
obligations on entities that have less visibility over the contextual application and corresponding
risks emerging from the deployment of an FRT system. Permitting parallel grievance redressal
mechanisms (for both developers and deployers) without a clear demarcation of their responsibilities
can lead to a multiplicity of proceedings and act as a hurdle to grievance redressal. The Paper should
instead account for the nature of the organizations involved in the development and deployment of an
FRT application. Each entity involved in the value chain of an AI application plays a unique role.

The Paper should ensure that appropriate regulatory requirements and corresponding liability fall
only on those entities that are best placed to assess the risks and harms emerging from an FRT
system. This approach is in line with the OECD's Recommendation of the Council on Artificial
Intelligence, which recognises that effective AI policies must account for "stakeholders according
to their role and the context" in which AI is being deployed.xiv

For instance, holding a developer liable for harms emerging from the inaccurate use of an FRT
system by law enforcement may be unfair – especially as the deployer would be expected to institute
appropriate safeguards before using such a system. At best, the developer can be held accountable
for the warranties made at the time of selling/providing the FRT system to the deployer. Typically,
deployers of AI applications are best placed to abate potential risks, such as by requiring a human-in-
the-loopxv for cases involving greater risk. Deployers are also responsible for providing the service
to the end user and are better informed of the context within which FRT is deployed and the risks
that could emerge from that application.
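
As a sketch of what such deployer-side mitigation could look like in practice, the snippet below routes FRT matches to a human reviewer whenever the system's confidence is ambiguous. The thresholds and routing labels are illustrative assumptions, not values prescribed by the Paper or by any deployment discussed here.

```python
# Illustrative sketch only: confidence-gated human-in-the-loop routing.
# Thresholds are assumed values; a real deployer would tune them to the
# risk level of the use case.
from dataclasses import dataclass


@dataclass
class MatchResult:
    candidate_id: str
    score: float  # similarity score in [0, 1] reported by the FRT system


AUTO_REJECT_BELOW = 0.60  # assumed: weak matches are discarded outright
AUTO_ACCEPT_ABOVE = 0.95  # assumed: only near-certain matches skip review


def route_match(result: MatchResult) -> str:
    """Route a match by confidence; anything ambiguous goes to a trained
    human reviewer instead of triggering automated action."""
    if result.score < AUTO_REJECT_BELOW:
        return "discard"
    if result.score >= AUTO_ACCEPT_ABOVE:
        return "accept_with_audit_log"  # still logged for later audit
    return "queue_for_human_review"


print(route_match(MatchResult("person-042", 0.81)))  # -> queue_for_human_review
```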

However, there is no one-size-fits-all solution to define the areas of responsibilities of different actors
in the FRT supply chain. The division of responsibilities between developer and deployer will vary
depending on the nature of the AI application. By providing guidance on the different risk
management functions to be considered by entities at the different steps of the FRT supply chain,
the NITI Aayog can meet its objective of enhancing accountability around the use of FRT.

Recommendations

1. No one-size-fits-all approach should be adopted. A clear division of responsibilities and
liabilities between developer and deployer should be made depending on the nature of the
application of FRT. For instance, the deployer of FRT for forensic investigation (such as a police
office) will have a different set of obligations from the developer of that FRT.

2. Deployers are typically best placed to adhere to transparency requirements depending on the
use case and the context of deploying FRT. Similarly, this difference must be considered while
framing the grievance redress mechanism; otherwise, it will create duplication and inefficiency.
3. Developers can be held accountable for the warranties made at the time of selling/providing
the FRT system to the deployer.
4. List the different risk management functions to be considered by entities at the different steps of
the FRT supply chain to fix accountability.

Ensure regulatory coordination and uniformity


We understand that the implementation of FRT systems would largely involve the use of AI and
machine learning. In the past, there have been multiple consultation processes and initiatives across
different Government bodies to examine AI (for instance, the recent consultation paper issued
by TRAI lists 14 different initiatives).xvi We suggest that an integrated and collaborative
consultation process be adopted, focusing on specific AI-based FRT systems, their
opportunities, and risks. Here, it is instructive to look at the use of FRT in China. In
2014, the Chinese Government adopted an FRT-enabled Social Credit System (SCS)xvii to identify and
track Chinese citizens' behaviour and rank them based on a 'social credit'.xviii The SCS was implemented
regionally across China, and its governance was tied to different instruments at the federal and
province/city level.xix

Further, some government agencies at the provincial level used the SCS to identify and track
individuals for reasons other than calculating their social credit.xx Due to the multiplicity of agencies
and instruments, citizens' ability to object to privacy violations was curtailed, and there was limited
scope to legally enforce security and privacy safeguards. This exacerbated the risks arising out of the
FRT system. The risk of fragmentation may also arise in India – where different law enforcement
agencies at the federal and state level deploy different FRT systems for surveillance.

Recommendations

1. A coordination mechanism should be established for uniformity and certainty when FRT is
deployed by multiple state agencies for analogous purposes (like surveillance and law
enforcement).
2. NITI Aayog has recommended that a body be established to publish standards on errors in
FRT. Further, standards may be published on construct validity, reliability, robustness,
resilience, and causality.xxi
3. The NITI Aayog may additionally recommend that these standards align with, or be based on,
globally accepted requirements. This can promote uniformity, reliability, and trust in FRT on a
larger scale and aid global companies in compliance.

Ensure harmonisation with the Digital Personal Data Protection Bill, 2022

Since the release of the Paper for discussion, the MeitY has released the Digital Personal Data
Protection Bill, 2022 (DPDP Bill) for consultation.xxii We recommend that the NITI Aayog update the
Paper to account for the changes set out in the DPDP Bill.

First, the FRT Paper has made several references to the processing of sensitive personal data,
including while enumerating the rights-based challenges to the use of FRT systems and, specifically,
anonymity as a facet of privacy.xxiii It notes that increasing applications of FRT systems further
incentivise the processing and computation of sensitive personal data, creating concerns
around the shrinking space for anonymity and contributing to a larger erosion of privacy. The FRT
Paper recommends that, under any new data law, sensitive personal data (including biometric data
such as facial images and scans) should be protected, and that rigorous standards for data processing,
as well as for the storage and retention of sensitive biometric data, should be put in place to
mitigate the privacy risks associated with FRT systems.xxiv However, the DPDP Bill has removed any
sub-categorisation (general, sensitive, critical) within personal data, and therefore no longer
mandates a higher threshold of care while processing sensitive or critical personal data.

Secondly, the Paper emphasises the use of consent in FRT systems,xxv including taking the user's
consent prior to processing personal information, and collecting the user's explicit consent if the
collected data is being used for a purpose different from that for which it was collected by the
organisation, including transfer to third parties for processing. Notably, according to the Paper, under
no circumstances can consent be inferred from the conduct of a data principal. This approach can be
inflexible and does not enable a context- and risk-based approach.

On the other hand, Clause 8 of the DPDP Bill covers such situations where the data principal is
deemed to have given consent to the processing of personal data, including scenarios where the
data principal voluntarily provides personal data to the data fiduciary, or for public interest or
reasonable purposes, among others. Another example is the Paper's position that consent
must be taken afresh for any transfer, licensing, or access to the data by external agencies. This
places excessive reliance on consent, as against the DPDP Bill, which permits data
fiduciaries to engage data processors under a valid contract if the purpose of processing
has been consented to.

Recommendations

1. The approach should be to create adequate safeguards for data principals in line with the DPDP
Bill, while also allowing flexibility for businesses.
2. The DPDP Bill has removed any sub-categorisation (general, sensitive, critical) within personal
data. Even under the earlier versions of the data protection Bills and the Information Technology
Act, the effect of classifying personal data as sensitive was mainly to require consent and
possible restrictions on cross-border transfer; there were no specific technical or other
measures to ensure additional protection of sensitive personal data. Therefore, the focus
should be on highlighting use cases that merit additional safeguards and/or data types that
require additional safeguards, with examples to justify the same.
3. Excessive reliance on consent does not align with the DPDP Bill. Further, imposing a blanket
need for consent at every stage may not be commensurate with the risks involved. Hence,
scope for inferred consent should be included in defined circumstances based on the risk
involved.
4. A participating entity can be allowed to retain data beyond the specified timeline only where
retention is backed by law and is necessary and proportionate to its objective. For this, a data
retention policy should be framed and disclosed for the data collected and stored by the FRT
system.xxvi

ANNEXURE A: USE-CASES OF FRT SYSTEMS IN OTHER JURISDICTIONS

In this part, we have looked at three jurisdictions from the list of international use cases of FRT
given in Annex 2 of the Paper. These are: i) Border control: Global Entry in the United States of
America; ii) Airports and security clearance: Face Express in Japan; and iii) Forensic identification:
INPOL-Z in Germany. For each jurisdiction, we have divided our study into four themes: i) how the
FRT has been used; ii) precautions taken before adoption; iii) risks and challenges post-implementation;
and iv) steps taken to address them.

Border control: Global Entry, United States of America

Objective: Global Entry is aimed at using FRT for the expedited clearance of pre-approved, low-risk
travelers entering the US. It is deployed by U.S. Customs and Border Protection (CBP).xxvii Global
Entry utilises the CBP's facial comparison technology, which has been deployed in other facial
recognition and matching programs of the CBP, like the Traveler Verification Service (TVS).xxviii

How FRT is used: The face image of members of the program is captured at a kiosk and matched
to the image they provide during their Global Entry application.xxix Earlier, members were required to
provide their fingerprints and passport – but with the use of FRT, the kiosk now only matches their
face image to retrieve their travel record and verify their identity.xxx

Precautions taken before adoption:


1. Secure storage of data: Images of members are stored in secure systems of the Department of
Homeland Security (DHS).xxxi
2. Data minimisation: CBP has limited the amount of personally identifiable information that is used
in the facial recognition process. It only collects data that is necessary to verify a passenger’s face
with their travel records – i.e., information collected through their Global Entry application and
face images captured at kiosks (which are deleted within specified timelines).xxxii
3. Data retention: Photos of U.S. citizens captured at a kiosk to verify their identities are discarded
within 12 hours of capture.xxxiii Images of non-US citizens captured at kiosks are retained for a
maximum of 14 days.xxxiv
4. Alternative in case of technical failures: In case the Global Entry kiosk fails to match a member
with their biometric face image, the system will allow them to proceed under the existing system
– submitting their passport, fingerprints and other information necessary to process their entry. xxxv
That is, members are not denied entry merely because the FRT fails to make a match.
5. Operational testing: To assess the accuracy and performance of its FRT capabilities, CBP
conducted operational testing and designed goals for improvement based on the results.xxxvi After
the testing phase, the DHS Science and Technology division also evaluated the test findings. xxxvii
6. Avoiding algorithmic bias: In the testing phase for its FRT, the CBP trained the system with the
biometrics of people from over 53 countries, aged between 18 and 85 years, to ensure that the
FRT system can accurately recognize diverse demographics of people.xxxviii

Risks and challenges post implementation:


1. Purpose limitation: The biometrics (including images) that CBP uses to verify individual identities
could potentially be used for purposes other than those for which they were collected.xxxix Since
Global Entry deploys the same FRT as other CBP projects, like TVS, there is a chance that data
collected through Global Entry may be used for other purposes / projects that do not employ the
same safeguards.xl
2. Accuracy of underlying data: CBP relies on the face image provided by the member in their Global
Entry application, along with recent photos of the traveler taken by the CBP.xli There is a risk
that past images may be too old, or not of adequate quality, to facilitate a match.xlii
3. Security resilience: Until recently, it was not possible to accurately measure the level of
cybersecurity and cyber resiliency for CBP’s systems. The CBP is now subject to the DHS’s
department-wide testing requirements.xliii
4. Accountability: There is a risk that the CBP does not retain full control of and responsibility for
securing data collected and stored through Global Entry, because it relies on third-party service
providers.xliv In 2019, a CBP sub-contractor suffered a cyber-attack, because of which travelers'
images (part of CBP's FRT pilot) were compromised.xlv The subcontractor had violated DHS
security and privacy protocols.xlvi CBP had failed to adequately retain control of and protect the
data accessed by third-party service providers.xlvii

Steps taken to address risks/challenges:


1. Providing greater privacy safeguards: Global Entry is voluntary and consent-based. Any agency /
third party that wants to use Global Entry members' information for other purposes must seek
the DHS's approval.xlviii No third parties outside Global Entry officials have access to Global
Entry kiosks or data, to prevent data use for purposes to which a consumer has not consented.xlix
CBP also provides members a privacy notice at the kiosk so that they are aware of what data is
collected, how it will be used, and its sharing and disposal.l CBP further mitigates privacy risks by
publishing regular Privacy Impact Assessments.li CBP does not use data for purposes other than
those specified in the privacy notice.lii
2. Limits on data retention: CBP does not retain facial images of US citizens captured at the kiosk for
verification. It retains non-US citizens' images for only 14 days.liii However, facial images are
retained by the DHS Automated Biometric Identification System (IDENT) for purposes like national
security, law enforcement, immigration and border management, intelligence, etc.liv
3. Accuracy of FRT system: CBP continually tests and evaluates its camera technology to improve
match confidence, including for matching older photos.lv The use of photos captured by the DHS
and stored on IDENT during any encounters with the traveler – aside from their member
application – increases the rate of accuracy.lvi
4. Greater security safeguards: The DHS will conduct follow-up cyber resiliency tests.lvii CBP protects
data throughlviii (a) storage in certified cloud environments, (b) compliance with the Federal
Information Security Modernisation Act, 2015, and (c) approved system security plans under the
National Institute of Standards and Technology's (NIST) certification and accreditation process.lix
In response to the 2019 breach (above), CBP also deployed enhanced technology to facilitate
audit tracking, logging, and enhanced encryption, and updated contractual, policy, and security
requirements.lx
5. Accountability: As of August 2022, CBP has taken measures to increase its control of and
responsibility for data collected through Global Entry, such as ensuring that (a) all personnel,
including those of contractors, are aware of privacy requirements, (b) only authorized
personnel, including those of contractors, have access to biometric data collected from travelers,
and (c) access to TVS data is controlled through a user access request and review process, among
others.lxi CBP also took steps to strengthen privacy protections in contract and acquisition
language to ensure vendor compliance with security requirements, among others.lxii

Airports and security clearance: Face Express, Japan

Objective: Face Express is an FRT-based boarding procedure launched at airports in Japan.lxiii It is
operated and deployed by the NEC Corporation. Passengers can carry out boarding procedures on
their own through self-serve kiosks.lxiv It was first rolled out in 2021 at the Narita airport, followed by
the Haneda airport.lxv

How FRT is used: A passenger must register their face image at a check-in kiosk. Their face image
is linked to their travel documents so that they will not have to produce those documents again.lxvi
As a result, they can pass through multiple touchpoints like security checkpoints and boarding gates
smoothly.lxvii

Precautions taken prior to adoption:


The NEC took several steps to institute precautions prior to rolling out Face Express. These are:

1. Privacy: NEC engaged in several discussions with the Japanese Personal Information Protection
Commission on developing appropriate privacy safeguards.lxviii It also ensured adherence to a
guidebook published by the Japan Civil Aviation Bureau on protecting personal information
under One ID – the underlying project promoting the use of biometric technology to streamline
boarding and immigration procedures.lxix These measures included obtaining meaningful
consent from passengers while improving speed and convenience by reducing the number of
screens shown to a passenger at each kiosk.lxx Using Face Express is also voluntary – passengers
who do not want their face to be photographed can use different counters.lxxi
2. Data retention: Other measures included taking passenger consent for temporary storage of their
information and requiring deletion of their information within 24 hours after their flight
departed.lxxii If passengers who do not want to use Face Express are photographed incidentally
(such as if they are standing around a passenger who uses Face Express), their images are
immediately deleted.lxxiii
3. Responsible AI practices: In 2019, NEC issued an AI and Human Rights Principles policy, along
with principles for safely deploying FRT-based boarding in airports, which apply to its range of FRT
products, including Face Express.lxxiv The key principles include fairness, privacy, and transparency,
among others.lxxv
4. Governance measures: NEC also established a Data Distribution Strategy Office and a Digital Trust
Business Strategy Division, aimed at implementing human rights by design and privacy in its
innovation.lxxvi
5. Standardised technology: Face Express utilises facial recognition software that was well-rated in
a benchmark test of FRT conducted by the National Institute of Standards and Technology
(NIST).lxxvii The kiosks facilitating Face Express were also designed to be compatible with the
International Air Transport Association’s One ID Standard.lxxviii
6. Accuracy of FRT: NEC took measures to ensure the accuracy of on-site images captured at Narita
airport to improve the overall accuracy of the FRT used in Face Express.lxxix This included
conducting tests with groups in a closed environment before large scale implementation at Narita
airport.lxxx

Risks and challenges post implementation and redressal:
There is no publicly available informationlxxxi on the risks and challenges encountered post-
implementation.

Forensic identification: INPOL-Z, Germany

Objective: INPOL-Z is a central police information system used by the Federal Criminal Police Office
of Germany (BKA).lxxxii Since 2009, the BKA has allowed state criminal investigation offices and the
federal police to use INPOL-Z's database to identify unknown persons for law enforcement purposes.lxxxiii
The BKA uses the facial recognition system developed by Cognitec.lxxxiv

How FRT is used: The INPOL-Z system matches the anatomical features of a person's face in video
surveillance footage and images against information stored in the database.lxxxv The BKA deploys
personnel who assess the system's outcomes and determine the probability of a person being
identified correctly.lxxxvi Their assessment can be used in a court of law.lxxxvii Authorities must create
entries on INPOL-Z based on a "prognosis decision", i.e. a decision documenting the reason a person's
facial image needs to be recorded in the database.lxxxviii

Precautions taken before adoption: There is limited information available about the precautions
taken before INPOL-Z was operationalized. Reports indicate that the types of data stored on INPOL-Z
were initially minimised – for example, it does not include other recorded facial images, such as
those on drivers' licenses or passports – since the database's primary purpose is police work.lxxxix

Risks and challenges post implementation:

1. Explainability: According to officials from the Ministry of the Interior, the BKA is unable to explain
the exact functioning of the FRT system, including how it arrives at a decision.xc The BKA notes
that neural networks and machine learning methods are used, but as of 2019, it had not assessed
the working of these systems in the context of police work.xci That is, the results of the algorithm
are not explainable, which means that it may be difficult to explain the decision-making process in
a court of law if necessary. Human assessments of the accuracy of outcomes are offered instead.xcii
2. Illegal data retention: Data is stored illegally on INPOL-Z because the government may not be
complying with federal data protection regulations.xciii
3. Basis for collecting data: Several entries on INPOL-Z are created without making a “prognosis
decision”.xciv
4. Challenges with deletion: Each state police authority that creates an entry retains ownership of
that data. This means that they are responsible for deletion. However, there were reports that
despite a state police authority deleting an entry, it could continue to be stored through another
authority participating in INPOL-Z.xcv
5. Administrative difficulties: There have been operational challenges for police authorities using
INPOL-Z due to a lack of training. For example, reports suggest that a state authority accidentally
deleted 40,000 INPOL-Z records while incorrectly using a new functionality.xcvi These challenges
can also create risks of officers using the system incorrectly, leading to errors.xcvii
6. Legal basis: Reports indicate that the number of instances where INPOL-Z's facial recognition
software was used in investigations increased from 1,673 in 2010 to over 27,000 in 2017.xcviii
Some critics point out that the BKA does not have a sufficient legal basis to keep expanding the
use of INPOL-Z for an increasing number of facial recognition investigations.xcix

Steps taken to address risks / challenges: Here is the publicly available information on steps
taken to address challenges with the project:

1. Data deletion: The BKA recently introduced safeguards to prevent the unjustified continued
storage of data. Now, if an authority wants to continue storing an entry made by another authority,
an independent and documented prognosis decision is necessary to justify continued storage.c To
operationalise this safeguard, the BKA introduced a new feature – deletion with transfer of
ownership.ci The state authority that initially made the entry can also object to continued storage
by specifying a reason for deletion.cii The BKA has also allowed co-ownership of data between
different authorities to coordinate the storage and deletion of data.ciii The BKA also introduced
data deletion deadlines in 2020.civ
2. Administrative challenges: The BKA plans to organize more meetings between federal and state
authorities to mitigate administrative challenges (such as incorrect usage of the system,
discussed above).cv

***

For any queries related to this submission, please contact:
Ashish Aggarwal (asaggarwal@nasscom.in), Varun Sen Bahl (varun@nasscom.in) or Sudipto
Banerjee (sudipto@nasscom.in).

About NASSCOM
The National Association of Software and Services Companies (NASSCOM) is the premier trade body
and chamber of commerce of the Tech industry in India and comprises over 3000 member
companies including both Indian and multinational organisations that have a presence in India.
Established in 1988, NASSCOM helps the technology products and services industry in India to be
trustworthy and innovative across the globe. Our membership spans the entire spectrum of
the industry from start-ups to multinationals and from products to services, Global Service Centres
to Engineering firms. Guided by India’s vision to become a leading digital economy globally,
NASSCOM focuses on accelerating the pace of transformation of the industry to emerge as the
preferred enablers for global digital transformation. For more details, kindly visit www.nasscom.in

ENDNOTES

i. See, NITI Aayog, FRT Discussion Paper.
ii. See, Ministry of Civil Aviation, Digi Yatra Reimagining Air Travel in India, August 2018.
iii. See, NITI Aayog, FRT Discussion Paper, pages 54-56.
iv. See, Ministry of Civil Aviation, Digi Yatra Reimagining Air Travel in India, August 2018.
v. See, NITI Aayog, FRT Discussion Paper, page 12.
vi. See, NITI Aayog, FRT Discussion Paper, pages 43-45.
vii. For instance, the Singapore government's Model AI Governance Framework recommends using "supplementary tools" like surrogate models, partial dependence plots, global variable importance/interaction, sensitivity analysis, counterfactual explanations, or self-explaining and attention-based systems. See, Singapore Model Artificial Intelligence Governance Framework, Second Edition, page 45.
viii. It is not possible for one ethics committee to fully understand and address these issues across a range of applications. Instead, there should be multiple review bodies that can bring in relevant expertise from internal/outside sources. These review bodies can have different processes for different FRT applications, to provide greater flexibility. They can also be supported by a central review team – one that aids, assists, and serves as institutional memory, acting as a reference for future decision-making and a store of precedents. However, this cannot be a one-size-fits-all design applicable to all FRT systems.
ix. See, European Parliament, Regulating facial recognition in the EU, September 2021, pages 25 & 28.
x. See, Opinion of the Data Ethics Commission, Germany, page 19.
xi. See, White Paper on Artificial Intelligence - A European approach to excellence and trust, European Commission, page 17.
xii. See, Comments of ForHumanity, In the Matter of Adopting the Framework: A Use Case Approach on Facial Recognition Technology, NITI Aayog Discussion Paper, 2022, page 9.
xiii. See, NITI Aayog, FRT Discussion Paper, page 47.
xiv. See, OECD Legal Instruments, Recommendations of the Council on AI.
xv. See, Stanford Human-Centered Artificial Intelligence, Humans in the Loop: The Design of Interactive AI Systems, 2019.
xvi. See, Telecom Regulatory Authority of India, Consultation Paper on Leveraging Artificial Intelligence and Big Data in Telecommunication Sector, page 160.
xvii. Video surveillance is operationalized through cameras installed by the government under the Skynet program, which is aimed at tracking criminal behaviour. The SCS tracks both companies and individuals. Citizens and companies are generally given points based on positive or negative actions. See, John Weaver, China: Everything Is Not Terminator: Is China's Social Credit System The Future?, McLane Middleton, September 2019.
xviii. See, John Weaver, China: Everything Is Not Terminator: Is China's Social Credit System The Future?, McLane Middleton, September 2019.
xix. See, Merics China Monitor, China's Social Credit System in 2021: From fragmentation towards integration, May 2022, page 1.
xx. See, Merics China Monitor, China's Social Credit System in 2021: From fragmentation towards integration, May 2022, page 8.
xxi. See, Comments of ForHumanity, In the Matter of Adopting the Framework: A Use Case Approach on Facial Recognition Technology, NITI Aayog Discussion Paper, 2022, page 12.
xxii. See, Digital Personal Data Protection Bill, 2022.
xxiii. See, NITI Aayog, FRT Discussion Paper, page 22.
xxiv. See, NITI Aayog, FRT Discussion Paper, page 41.
xxv. See, NITI Aayog, FRT Discussion Paper, page 45.
xxvi. See, Comments of ForHumanity, In the Matter of Adopting the Framework: A Use Case Approach on Facial Recognition Technology, NITI Aayog Discussion Paper, 2022, page 10.
xxvii. See, U.S. Government Accountability Office, Report to Congressional Requesters, Facial Recognition: CBP and TSA are Taking Steps to Implement Programs, but CBP Should Address Privacy and System Performance Issues, September 2020, footnote 51, page 26.

xxviii. See, U.S. Customs and Border Protection, Statement for the Record for the hearing on "Assessing CBP's Use of Facial Recognition Technology" before the House Committee on Homeland Security, Subcommittee on Border Security, Facilitation and Operations, July 2022.
xxix. See, U.S. Customs and Border Protection, ORD and MDW Encourages Travelers to Use Facial Recognition, 2021.
xxx. See, U.S. Department of Homeland Security, Privacy Impact Assessment Update for the Global Enrollment System (GES): Global Entry Facial Recognition, December 2019, page 3.
xxxi. See, U.S. Customs and Border Protection, ORD and MDW Encourages Travelers to Use Facial Recognition, 2021.
xxxii. See, U.S. Customs and Border Protection, ORD and MDW Encourages Travelers to Use Facial Recognition, 2021.
xxxiii. See, U.S. Customs and Border Protection, ORD and MDW Encourages Travelers to Use Facial Recognition, 2021.
xxxiv. See, U.S. Customs and Border Protection, ORD and MDW Encourages Travelers to Use Facial Recognition, 2021, https://www.cbp.gov/newsroom/local-media-release/ord-and-mdw-encourages-travelers-use-facial-recognition.
xxxv. See, U.S. Customs and Border Protection, ORD and MDW Encourages Travelers to Use Facial Recognition, 2021.
xxxvi. See, U.S. Government Accountability Office, Report to Congressional Requesters, Facial Recognition: CBP and TSA are Taking Steps to Implement Programs, but CBP Should Address Privacy and System Performance Issues, September 2020, page 1.
xxxvii. See, U.S. Customs and Border Protection, Biometric Breakthrough.
xxxviii. See, U.S. Customs and Border Protection, Biometric Breakthrough.
xxxix. See, U.S. Department of Homeland Security, Privacy Impact Assessment Update for the Global Enrollment System (GES): Global Entry Facial Recognition, December 2019, page 6.
xl. See, U.S. Department of Homeland Security, Privacy Impact Assessment Update for the Global Enrollment System (GES): Global Entry Facial Recognition, December 2019, page 8.
xli. See, U.S. Department of Homeland Security, Privacy Impact Assessment Update for the Global Enrollment System (GES): Global Entry Facial Recognition, December 2019, pages 5-6.
xlii. See, U.S. Department of Homeland Security, Privacy Impact Assessment Update for the Global Enrollment System (GES): Global Entry Facial Recognition, December 2019, page 5.
xliii. See, U.S. Government Accountability Office, Report to Congressional Requesters, Facial Recognition: CBP and TSA are Taking Steps to Implement Programs, but CBP Should Address Privacy and System Performance Issues, September 2020, page 38.
xliv. See, U.S. Government Accountability Office, Report to Congressional Requesters, Facial Recognition: CBP and TSA are Taking Steps to Implement Programs, but CBP Should Address Privacy and System Performance Issues, September 2020, page 38.
xlv. See, U.S. Office of the Inspector General, Review of CBP's Major Cybersecurity Incident during a 2019 Biometric Pilot, September 2020.
xlvi. See, U.S. Office of the Inspector General, Review of CBP's Major Cybersecurity Incident during a 2019 Biometric Pilot, September 2020.
xlvii. This is based on a review of the incident by the Office of the Inspector General. See, U.S. Office of the Inspector General, Review of CBP's Major Cybersecurity Incident during a 2019 Biometric Pilot, September 2020, page 14.
xlviii. See, U.S. Department of Homeland Security, Privacy Impact Assessment Update for the Global Enrollment System (GES): Global Entry Facial Recognition, December 2019, page 6.
xlix. See, U.S. Department of Homeland Security, Privacy Impact Assessment Update for the Global Enrollment System (GES): Global Entry Facial Recognition, December 2019, page 6.
l. See, U.S. Department of Homeland Security, Privacy Impact Assessment Update for the Global Enrollment System (GES): Global Entry Facial Recognition, December 2019, page 7.
li. See, U.S. Department of Homeland Security, DHS/CBP/PIA-002 Global Enrollment System.
lii. See, U.S. Department of Homeland Security, Privacy Impact Assessment Update for the Global Enrollment System (GES): Global Entry Facial Recognition, December 2019, page 7.
liii. See, U.S. Department of Homeland Security, Privacy Impact Assessment Update for the Global Enrollment System (GES): Global Entry Facial Recognition, December 2019, page 7.
liv. See, U.S. Department of Homeland Security, DHS/OBIM/PIA-001 Automated Biometric Identification System.
lv. See, U.S. Department of Homeland Security, Privacy Impact Assessment Update for the Global Enrollment System (GES): Global Entry Facial Recognition, December 2019, page 5.
lvi. See, U.S. Department of Homeland Security, Privacy Impact Assessment Update for the Global Enrollment System (GES): Global Entry Facial Recognition, December 2019, page 5.
lvii. See, U.S. Government Accountability Office, Report to Congressional Requesters, Facial Recognition: CBP and TSA are Taking Steps to Implement Programs, but CBP Should Address Privacy and System Performance Issues, September 2020, page 38.
lviii. As of August 2022, the CBP Privacy Office has reported that TVS images are adequately protected from inappropriate access and use. See, U.S. Customs and Border Protection, CBP Privacy Evaluation of the Traveler Verification Service in support of the CBP Biometric Entry-Exit Program, August 2022.
lix. See, NIST, Computer Security Resource Center.
lx. See, U.S. Customs and Border Protection, CBP Privacy Evaluation of the Traveler Verification Service in support of the CBP Biometric Entry-Exit Program, August 2022, page 16.
lxi. See, U.S. Customs and Border Protection, CBP Privacy Evaluation of the Traveler Verification Service in support of the CBP Biometric Entry-Exit Program, August 2022, page 15.
lxii. See, U.S. Customs and Border Protection, CBP Privacy Evaluation of the Traveler Verification Service in support of the CBP Biometric Entry-Exit Program, August 2022, page 16.
lxiii. See, NEC, Introducing Face Express, a New Boarding Procedure Using Face Recognition (One ID at Narita Airport), 2021.
lxiv. See, NEC, Introducing Face Express, a New Boarding Procedure Using Face Recognition (One ID at Narita Airport), 2021.
lxv. See, NEC, Introducing Face Express, a New Boarding Procedure Using Face Recognition (One ID at Narita Airport), 2021.
lxvi. See, NEC Kicks Off Face Express Trial at Narita Airport, Find Biometrics: Global Identity Management, 2021.
lxvii. See, NEC, Aiming to Create a New, Safe and Delightful Customer Experience Through Face Express, Which Uses Face Recognition.

lxviii. See, NEC, Introducing Face Express, a New Boarding Procedure Using Face Recognition (One ID at Narita Airport), 2021.
lxix. See, NEC, Introducing Face Express, a New Boarding Procedure Using Face Recognition (One ID at Narita Airport), 2021.
lxx. See, NEC, Introducing Face Express, a New Boarding Procedure Using Face Recognition (One ID at Narita Airport), 2021.
lxxi. See, Narita Airport: Face Express, FAQs, https://www.narita-airport.jp/en/faceexpress/.
lxxii. See, Newsfile, To Spur Recovery the Aviation Industry Eyes Touchless Tech - NEC Corporation, October 2021.
lxxiii. See, Narita Airport: Face Express, Customer Privacy.
lxxiv. See, NEC, NEC Unveils NEC Group AI and Human Rights Principles, April 2019.
lxxv. See, NEC, Group AI and Human Rights Principles, April 2019.
lxxvi. See, NEC, NEC establishes Digital Trust Business Strategy Division, October 2018.
lxxvii. See, NEC, Introducing Face Express, a New Boarding Procedure Using Face Recognition (One ID at Narita Airport), 2021.
lxxviii. See, NEC Kicks Off Face Express Trial at Narita Airport, Find Biometrics, April 2021.
lxxix. See, NEC, Introducing Face Express, a New Boarding Procedure Using Face Recognition (One ID at Narita Airport), 2021.
lxxx. See, NEC, Introducing Face Express, a New Boarding Procedure Using Face Recognition (One ID at Narita Airport), 2021.
lxxxi. Publicly available information includes independent studies or reports on Face Express available in English.
lxxxii. See, Matthias Monroy, Facial recognition at German police authorities increased by more than a third, January 2021.
lxxxiii. See, Matthias Monroy, Facial recognition at German police authorities increased by more than a third, January 2021.
lxxxiv. See, Matthias Monroy, DNA, facial images and fingerprints: German biometric police systems contain 10 million people, March 2022.
lxxxv. See, Report for the Greens/EFA in the European Parliament, Biometric and Behavioural Mass Surveillance in Europe, October 2021.
lxxxvi. See, Report for the Greens/EFA in the European Parliament, Biometric and Behavioural Mass Surveillance in Europe, October 2021.
lxxxvii. See, Report for the Greens/EFA in the European Parliament, Biometric and Behavioural Mass Surveillance in Europe, October 2021.
lxxxviii. See, Matthias Monroy, Data Protection Commissioner's audit: Germany's largest police database contains many illicit records, July 2022.
lxxxix. See, Report for the Greens/EFA in the European Parliament, Biometric and Behavioural Mass Surveillance in Europe, October 2021.
xc. See, Matthias Monroy, German authorities improve face recognition, January 2020. This source relies on an official document from the Ministry of the Interior, page 11.
xci. See, Matthias Monroy, German authorities improve face recognition, January 2020. This source relies on an official document from the Ministry of the Interior, page 11.
xcii. See, Report for the Greens/EFA in the European Parliament, Biometric and Behavioural Mass Surveillance in Europe, October 2021.
xciii. See, Activity report on data protection, BfDI, 2011-12 (available only in German), page 17. For interpretation, see, Matthias Monroy, Data Protection Commissioner's audit: Germany's largest police database contains many illicit records, July 2022. In 2011, the then Federal Commissioner for Data Protection and Freedom of Information (BfDI) made this finding and recommended that the government adopt safeguards against illegal data retention. In July 2022, the German Federal Data Protection Commissioner (GFDPC) reiterated that data is stored illegally and called for an audit to check if the 2011 recommendations have been implemented.
xciv. See, Matthias Monroy, Data Protection Commissioner's audit: Germany's largest police database contains many illicit records, July 2022.
xcv. See, Matthias Monroy, Data Protection Commissioner's audit: Germany's largest police database contains many illicit records, July 2022.
xcvi. See, Matthias Monroy, Data Protection Commissioner's audit: Germany's largest police database contains many illicit records, July 2022.
xcvii. See, Matthias Monroy, Data Protection Commissioner's audit: Germany's largest police database contains many illicit records, July 2022.
xcviii. See, Xinhua, German police expand use of facial recognition software: report, March 2018.
xcix. See, Xinhua, German police expand use of facial recognition software: report, March 2018.
c. See, Matthias Monroy, Data Protection Commissioner's audit: Germany's largest police database contains many illicit records, July 2022.
ci. See, Matthias Monroy, Data Protection Commissioner's audit: Germany's largest police database contains many illicit records, July 2022.
cii. See, Matthias Monroy, Data Protection Commissioner's audit: Germany's largest police database contains many illicit records, July 2022.
ciii. See, Matthias Monroy, Data Protection Commissioner's audit: Germany's largest police database contains many illicit records, July 2022.
civ. See, Activity Report of the BfDI (available in German only), page 91.
cv. See, Matthias Monroy, Data Protection Commissioner's audit: Germany's largest police database contains many illicit records, July 2022.
