
Recommendations on Responsible AI for All

Responsible AI for All- Discussion Paper

Comments
Introduction

The discussion paper titled ‘Responsible AI for All: Adopting the Framework – A use case approach on Facial Recognition Technology’ (“FRT”), released by NITI Aayog, applies a use case approach to test the Responsible AI Principles developed in 2021 on Facial Recognition Technology. In this paper, the Responsible AI Principles are used as a mechanism to arrive at better governance and regulatory frameworks that minimize the potential risks of FRT and maximize its benefits for people.

We at the Wadhwani Institute of Technology and Policy are working passionately to enhance the outcomes of existing and proposed policy measures by integrating them with relevant emerging technology applications in an ethical manner. We strongly believe that the government can effectively harness emerging technologies to have a dramatic impact in improving the delivery of public services, strengthening governance, and making better-informed decisions.

Based on the Responsible AI for All – Discussion Paper released in November 2022, we hereby submit the following comments for your kind consideration:

A. General
1. The paper outlines two types of FRT: 1:1 and 1:n. 1:1 FRT involves matching a person’s face against a corresponding stored record for the purpose of verification. 1:n FRT involves using facial recognition in general surveillance to search for specific persons.
2. We foresee that 1:n FRT would involve matching images of one person against a much larger dataset of images, and would carry the risk of capturing images of people who have not consented to such a system. In order to avoid ‘purpose creep’, we suggest that the technology instead be developed to pick out only suspicious activities/patterns, and not to monitor everyone in general.

3. Due to the intrinsically different nature of 1:1 and 1:n FRT, we recommend that separate SOPs and guidance be evolved for use cases utilizing each type of FRT.
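The distinction between the two types of FRT described above can be sketched in code. The sketch below is purely illustrative: the embedding vectors, the similarity measure, the threshold value, and the function names are all assumptions for exposition, not a description of any deployed FRT system. It shows why the 1:n case implicates consent more broadly: every person in the gallery is compared against the probe, whether or not they are the person being sought.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors (illustrative)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

THRESHOLD = 0.9  # hypothetical acceptance threshold, chosen only for illustration

def verify_1_to_1(probe, enrolled):
    """1:1 verification: the probe is compared against a single enrolled
    template belonging to the claimed identity. Only one person's data
    is touched, and that person has typically enrolled voluntarily."""
    return cosine_similarity(probe, enrolled) >= THRESHOLD

def identify_1_to_n(probe, gallery):
    """1:n identification: the probe is searched against an entire gallery.
    Every individual in the gallery is implicitly compared, including
    people who never consented to being searched, which is the source of
    the 'purpose creep' risk noted above."""
    matches = [(name, cosine_similarity(probe, emb)) for name, emb in gallery.items()]
    name, score = max(matches, key=lambda m: m[1])
    return name if score >= THRESHOLD else None
```

Under this sketch, a 1:1 system answers “is this the claimed person?”, while a 1:n system answers “who, among everyone enrolled, is this?”, which is why the paper recommends separate SOPs for each.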
B. Specific

Digi Yatra

01. Pg 32, Clause 1 (iii) on Data Privacy
Observation: This Clause highlights the need for opt-in consent as opposed to opt-out consent by stating that “consent is meaningfully provided and not bundled up by default.”
Suggestion: In light of the Digital Personal Data Protection Bill, 2022, the data collected through the Digi Yatra App is subject to the presumption of deemed consent under Section 8 (1) of the Bill. Hence the Principles of Responsible AI in FRT applications are not compatible with the proposed Digital Personal Data Protection Bill.
02. Pg 35, on the Safety and Reliability of the Digi Yatra FRT
Observation: Under the Responsible AI Principle on Safety and Reliability, the Digi Yatra AI system must “ensure reliability regarding their intended functions and must have built-in safeguards to ensure the safety of stakeholders”.
Suggestion: The Digi Yatra FRT must be explicitly accountable to its users:
01. For financial/business/other personal losses caused due to mistaken identity attributable to the results of the FRT application. In the event of such loss, legal liability and compensation towards the aggrieved individuals must be provided by the agency.
02. The application must also clearly mention who is liable for FRT inaccuracies: whether it would be the company developing the FRT solution or the agency implementing the FRT solution.
03. Standard Operating Procedures (SOPs) for human operators need to be included to ensure rational treatment of individuals identified as red flags, in order to facilitate methodical handling of inaccuracies resulting from FRT. The SOPs must also be updated based on learnings from time to time.

Other FRT used in India

03. Pg 54, Point 14 on Selfie-App-Based Face Recognition
Observation: The Pune Municipal Corporation used drones to monitor home-quarantined patients during COVID-19.
Suggestion: The FRT was initially designed for the purpose of monitoring home-quarantined patients, and people under home quarantine were mandated to download the Saiyam App. However, the Pune Police is now using this application for tracking criminals and missing persons. It is concerning that the App is being used beyond its original purpose, and it is unclear whether its users were given the option to remove their data from the app after the Pune Police began using it for tracking criminals. To avoid this, use of data for purposes other than those originally envisaged must be presumed to be absent, and consent must be explicitly sought before the collected data is applied to any purpose other than that originally envisaged.

05. Pg 51, Point 2
Observation: Face Matching Technology has been used by CBSE since 2020 for education-identity authentication and digital access to academic records.
Suggestion: The DigiLocker platform’s privacy policy (under which this FRT is hosted) does not mention opting in, opting out, or deletion of records. It is recommended that students be explicitly permitted to opt out, and that a simple way of opting out be provided.