
AI AND CHINA REGULATIONS

▪ The output of human civilization has so far been driven primarily by human intelligence; when machine intelligence is combined with human intelligence, the sky is the limit.
▪ The two biggest countries of Asia – China and India – boast a strong AI talent pool, along with enterprises and institutions that continue to strive for advanced research and innovation.
▪ Recognizing AI’s transformative potential, governments in both countries have stepped up and formulated policies to govern AI and to incentivize AI-based implementation for a competitive global advantage in crucial sectors.

China’s Background

▪ In 2015, the State Council released guidelines on the “Internet+” strategy, which involved integrating the Internet into all parts of the economy and society.
▪ In the same year, the 10-year plan “Made in China 2025” set forth the aim of transforming China into a dominant player in A.I. technology.
▪ In 2016, the 13th Five-Year Plan of the Chinese Communist Party included A.I. as one of the six critical areas of emerging industries for development.
▪ In July 2017, the State Council released the “New Generation Artificial Intelligence Development Plan”, which set out China’s A.I. policy objectives, marking “Year One of China’s A.I. Development Strategy”.
China’s Artificial Intelligence Development Plan, 2017

▪ In July 2017, the State Council of China released a comprehensive AI policy report, the ‘Artificial Intelligence Development Plan’, setting out a roadmap for AI leadership under an ethical and supportive regulatory system and open-source collaboration.
▪ The report sets out strategic goals in three steps, with the overarching aim that China become a world leader in defining ethical norms and standards for A.I.:
a) The first step is to bring China’s AI infrastructure to a globally advanced level by 2020, including the establishment of initial ethical norms, policies, and regulations for basic and key areas of A.I.
b) The second step, to be completed by 2025, is to reach a ‘world-leading level’ in A.I. breakthroughs and to establish AI as the primary driving force behind China’s industrial transformation, helped by a ‘breakthrough’ in basic A.I. theory, and to expand upon and codify the ethical standards for A.I. into law.
c) The third step is to achieve global leadership in AI impact and to become the ‘innovation centre of the world’ by 2030, with further “upgrades” to laws and standards to deal with newly emerging challenges and issues.
▪ To enhance China’s competitiveness, ‘open, stable and mature’ technology systems shall be developed in the form of algorithms, hardware, and data.
China: Implementation

▪ The Central Government selected “A.I. National Champions” – businesses endorsed to focus on developing specific sectors of A.I. – e.g.:
–Baidu is tasked with developing autonomous driving vehicles;
–Alibaba with the development of “smart cities”; and
–Tencent with computer vision for medical diagnoses.
▪ The A.I. National Champions agree to invest in government-directed A.I. goals in exchange for preferential contract bidding, easier access to finance from the state and state-owned banks, and even market-share protection.
▪ Nothing stops other companies from developing the same A.I. technology, but “national champion” status helps the chosen firms dominate their sectors.

▪ There are areas on the A.I. development “wish list” for which no “national champion” has been selected; for example, the A.I. Development Plan calls for smart courts, but no business has been designated for smart evidence collection, case analysis, legal document analysis, case management, etc.

China: Ethics and Regulation


▪ In 2019, China’s national expert committee on A.I. governance released eight principles for A.I. governance:
–above all else, A.I. must enhance the common wellbeing of humanity;
–respect for human rights;
–privacy;
–fairness;
–transparency;
–responsibility;
–collaboration; and
–agility to deal with new and emerging risks.
▪ The Standardisation Administration of the People’s Republic of China released a white paper on A.I. standards in late 2019 that discussed three key principles for A.I. standards:
–the principle of human interest, requiring that the ultimate goal of A.I. is to benefit human welfare;
–the principle of liability, establishing transparency and accountability as requirements for the development and deployment of A.I. systems and solutions; and
–the principle of consistency of rights and responsibilities, requiring proper data records and oversight while safeguarding intellectual property.

National Strategy for AI Discussion Paper, NITI Aayog, 2018

▪ India showcases a promising scenario, thanks to its strong talent pool, a notable list of world-class educational institutions, and companies that dominate the global IT landscape. However, India has not yet achieved global recognition, primarily due to the lack of top-notch AI research at a significant scale.
▪ The National Institution for Transforming India (NITI Aayog), a think tank established by the Indian Government aiming to “achieve sustainable development goals with cooperative federalism by fostering the involvement of state governments of India in the economic policymaking process” (whatever that means), proposed the creation of a funding plan for a cloud computing platform (called AIRAWAT) and an A.I. institutional framework.

▪ NITI Aayog has therefore taken up the formulation of a comprehensive AI strategy with a core focus on infrastructure development and holistic collaboration. It aims for an #AIforAll campaign in which AI is used for social inclusion, not just for defense, military, and advanced computing applications.
▪ Under an ‘AI+X mechanism’, where AI is an enabler of increased productivity and efficiency rather than a complete overhaul, the key focus areas for AI intervention are healthcare, agriculture, education, smart cities and infrastructure, and smart mobility and transportation.
▪ The key challenges identified in these sectors include, among others, a low intensity of AI research, insufficient talent to research and implement AI at scale, high resource costs, ambiguous privacy rules, and an intellectual property regime that does little to incentivize adoption.
▪ The report provides more than 30 policy recommendations to develop a two-tiered strategy aimed at improving the research ecosystem as well as developing skilling initiatives to feed that ecosystem. Initially, ‘Centres of Research Excellence’ (CORE) are to enhance quality research and publications focussing on AI. Investment, both domestic and foreign, is to be made to develop state-of-the-art infrastructure in line with the concept of an ‘AI garage’.
▪ Besides this, the development of a National AI Marketplace (NAIM) has been proposed, in three different modules, to minimize resource allocation for model development.

Legal and ethical issues posed by AI


Artificial intelligence (AI) increasingly permeates every aspect of our society, from the critical, like urban infrastructure, law enforcement, banking, healthcare and humanitarian aid, to the mundane, like dating. AI, including embodied AI in robotics and techniques like machine learning, can improve economic and social welfare and the exercise of human rights. Owing to the proliferation of AI in high-risk areas, the pressure is mounting to design and govern AI to be accountable, fair and transparent. How can this be achieved, and through which frameworks? Questions on the role of law, ethics and technology in governing AI systems are thus more relevant than ever before.
This raises new challenges, for example around liability regarding automated
vehicles, the limits of current legal frameworks in dealing with ‘big data's
disparate impact’ [2] or preventing algorithmic harms [3], social justice issues
related to automating law enforcement or social welfare [4], or online media
consumption [5]. Given AI's broad impact, these pressing questions can only be
successfully addressed from a multi-disciplinary perspective.

Artificial Intelligence and ethics


There is a continuous debate around ethical conflicts in AI. The foremost public concern about AI is its inscrutability and lack of transparency. The lack of transparency in AI is not just because it is a new technology but because of its complexity, which makes it nearly impossible for a layperson to understand AI’s learning process. We do not really know how an AI acts and arrives at a particular decision. AI operates behind a veil, in the hands of a few developers, into which no one else can see. This has made AI a mysterious concept, often described as a “black box”.

Take the example of Gmail spam. The AI algorithm used in Google’s Gmail identifies ‘spam’ and moves the mail to the spam folder. But few people know anything about the identification process that leads to a message being classified as spam.
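Google does not disclose how Gmail’s filter actually works, but a hedged, minimal sketch of a keyword-based (naive Bayes style) spam scorer can illustrate the kind of hidden decision logic involved; every word probability below is a made-up assumption, not anything Gmail uses.

```python
# Hypothetical sketch: a tiny keyword-based spam scorer, NOT Gmail's actual algorithm.
# It illustrates how a classifier's internal scoring stays invisible to the end user.
import math

# Assumed per-word spam/ham probabilities "learned" from past mail (made-up numbers).
spam_probs = {"winner": 0.90, "free": 0.80, "meeting": 0.05, "invoice": 0.20}
ham_probs  = {"winner": 0.10, "free": 0.20, "meeting": 0.95, "invoice": 0.80}

def spam_score(message: str, prior_spam: float = 0.5) -> float:
    """Return a probability-like spam score using naive Bayes over known words."""
    log_spam, log_ham = math.log(prior_spam), math.log(1 - prior_spam)
    for word in message.lower().split():
        if word in spam_probs:
            log_spam += math.log(spam_probs[word])
            log_ham += math.log(ham_probs[word])
    # Convert the log-odds back into a probability between 0 and 1.
    return 1 / (1 + math.exp(log_ham - log_spam))

print(spam_score("free winner"))       # high score -> routed to the spam folder
print(spam_score("meeting invoice"))   # low score  -> stays in the inbox
```

The point of the sketch is that the user only ever sees the final folder assignment; the learned probabilities and the scoring arithmetic remain out of view.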

Suppose a company says it follows all data protection regulations and provides adequate transparency, but it cannot explain the process behind its AI’s algorithmic decisions. In that case, it is not transparent enough and is not acting ethically, even if it is legally compliant. The lack of transparency about how the AI makes decisions contributes to distrust among the general public.

This leads to a second question – is AI neutral when making decisions? The answer is no. AI shares our workload by simplifying and speeding up decisions, but it is entirely possible for AI to make biased decisions. Since AI is man-made, the machine learning algorithms and programs written by humans can carry human biases into the machine.

Cognitive biases can have life-altering consequences. It is, however, unclear to what degree human biases can creep into artificial intelligence systems, but cognitive bias dwells in AI algorithms in various forms. An AI system learns and trains itself according to human instructions and human-supplied data, which can reflect biases of any form, such as historical or social inequality.
For example, Amazon stopped using its ‘hiring algorithm’ after discovering that the algorithm was biased against women. The company found that the algorithm favored applicants who used words like “executed” or “captured” (words most commonly found in men’s resumes).
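Amazon’s system itself was proprietary, so the toy model below is a purely hypothetical illustration of the mechanism: it is trained on an invented, historically skewed hiring dataset and duly learns to reward a word that merely correlates with gender. The data, feature and library choice are assumptions made only for illustration.

```python
# Hypothetical illustration (not Amazon's system): a model trained on historically
# skewed hiring decisions learns to reward a word that merely correlates with gender.
from sklearn.linear_model import LogisticRegression

# Feature: count of the word "executed" in a resume; label: 1 = hired in the past.
# The invented historical data over-represents hires among resumes using that word.
X = [[3], [2], [2], [1], [0], [0], [0], [1]]
y = [1,   1,   1,   1,   0,   0,   0,   0]

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[2]])[0][1])  # resume with the word: high predicted hire probability
print(model.predict_proba([[0]])[0][1])  # resume without it: low predicted hire probability
```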

In the context of AI and ethics, one cannot overlook the automated public sphere AI has created. Rather than reading newspapers, we now use Facebook, Twitter and Instagram to gather information, and these platforms are more efficient at distributing information, and in a personalized manner at that.

One main legal issue regarding AI is liability in the event of any failure of AI technology: who will be responsible if a failure occurs while using these systems? Most of the time, companies utilizing AI tend to evade responsibility. For instance, in Google’s “right to be forgotten” case, the company argued that it is not responsible for the results the search engine gives because it is the algorithm that produces them.

The “black box” theory


The term “black box” is used for artificial intelligence to signify the opacity it involves. In simple terms, a black box is a system that, like a veil, hides its operations and inputs from the user. The technologies and tools most commonly affected by the black-box phenomenon are those that use artificial intelligence and/or machine learning techniques.

Why black boxes exist in AI

Much of today’s AI is built with deep learning models, which typically operate as a black box. Artificial neural networks consist of hidden layers of nodes; each node processes its input and passes its output to the next layer of nodes. Deep learning refers to artificial neural networks with many such layers, which learn on their own from the patterns formed across the nodes.

The algorithm takes in millions of data points as input and connects specific data features to produce an output. Since this feature learning is self-directed, the results produced by the algorithm are difficult to interpret; even a data scientist may not be able to explain how the AI arrived at its result.
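As a minimal sketch (with arbitrary random weights rather than any trained production model), the forward pass of a tiny neural network shows why such systems are hard to inspect: the “reasoning” is nothing more than layers of learned numbers.

```python
# Minimal sketch of a neural-network forward pass with arbitrary weights.
# The decision emerges from matrix multiplications, not human-readable rules,
# which is why such models are described as "black boxes".
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), rng.normal(size=8)   # input -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), rng.normal(size=1)   # hidden -> output layer

def predict(x: np.ndarray) -> float:
    hidden = np.maximum(0, x @ W1 + b1)       # ReLU activation in the hidden layer
    logit = (hidden @ W2 + b2).item()         # single output node
    return 1 / (1 + np.exp(-logit))           # sigmoid turns it into a "probability"

print(predict(np.array([0.2, -1.0, 0.5, 3.0])))  # a number, with no explanation attached
```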
The business problem of AI inscrutability arises from this black box. When the software is used for critical operations, the employers and employees associated with those operations have no insight into the process within the organization. This lack of visibility can cause massive damage to the organization if an error occurs and goes unnoticed; sometimes such damage is expensive or even impossible to repair.

If such a situation arises from black-box AI, it may persist long enough for the company to suffer damage to its reputation and, potentially, face legal action.

Regulations on AI in India
There is currently no regulation or law in India that specifically governs artificial intelligence, machine learning, or big data. But the Government has felt the need to look at the development and implications of artificial intelligence and, for now, intends to amplify the application of artificial intelligence in the Indian environment.

Several ministries and government bodies have stepped forward to take initiatives towards AI regulation. These include the Ministry of Electronics and Information Technology (MeitY), the Ministry of Commerce and Industry, the Department of Telecommunications (under the Ministry of Communications), and NITI Aayog.

NITI Aayog plans to tackle the issues related to artificial intelligence with suggestions such as:

▪ Setting up an IP regime for AI innovations, along with a task force comprising the Ministry of Corporate Affairs and the Department of Industrial Policy and Promotion to examine and issue modifications to intellectual property law;
▪ Developing a data privacy legal framework to protect human rights and privacy; and
▪ Creating sectoral regulatory guidelines encompassing privacy, security, and ethics.
MeitY constituted four committees for the development of a regulatory framework for artificial intelligence:

(a) a committee on platforms and data for artificial intelligence;
(b) a committee on leveraging AI to identify national missions in critical sectors;
(c) a committee on mapping technology capabilities, key policy enablers required across sectors, skilling and reskilling; and
(d) a committee on cybersecurity, safety, and legal and ethical issues.

The four MeitY committees laid down recommendations centred on the following themes:

FAIRNESS, ACCOUNTABILITY AND TRANSPARENCY IN AI REGULATION IN INDIA


AI regulation is built around three elements, i.e. fairness, accountability, and transparency, also known as F-A-T. The FAT (fairness, accountability, transparency) or FATE (fairness, accountability, transparency, and ethics) approach to AI regulation ensures that AI-based solutions and applications embody these elements for the safe, responsible, ethical, and accountable deployment of AI tools.

Fairness

Fairness in AI means that AI should not be biased against any group or segment. As discussed above, AI relies on huge volumes of data, usually collected manually, and provides results based on that data. As observed in several cases, the data collected are rarely neutral.

For example, algorithms used by US courts predict the likelihood of a criminal reoffending. It was observed that these reoffending predictions were highly inaccurate and showed black defendants as more likely to reoffend. The predictions were presumably based on historical criminal statistics in the US, which represented black defendants as reoffenders. The results were seen as biased and lacking the fairness that is the essence of justice.

The regulatory system in India has stepped forward to look into this issue and ensure fairness. In this context, NITI Aayog has proposed AI data-training solutions to help guide the development of unbiased AI.
Accountability

Accountability in AI is the determination of how liability is attributed in the event of loss or harm caused by the use of AI solutions. AI solutions use big data and learning algorithms and produce results based on those data, so questions of accountability and responsibility arise most prominently for these self-learning AIs.

Proponents put forward the idea of distributed responsibility for AI solutions. Distributed responsibility seems a good conceptual solution because decisions based on AI are the result of interactions among several actors, such as developers, designers, users, software and hardware, and it is important to distribute responsibility across each role.

However, split responsibility does not solve the problem entirely, as in practice it is not possible to pinpoint the exact responsible actor, given the number of interactions and other challenges.

According to NITI Aayog, there should be an attainable and practical solution to this issue. For instance, a shift from ascertaining fixed liability towards objectively identifying the fault and preventing such faults in the future would prove more practical. The think tank also proposed several other solutions, some of which are:

▪ Introduction of safe-harbour measures protecting AI solutions where appropriate steps were taken to monitor, test and improve the AI product;
▪ Introduction of an ‘actual harm’ policy to prevent lawsuits for speculative damages;
▪ Creation of a framework to proportionally distribute any liability among stakeholders; and
▪ Moving from strict liability to a standard of ‘negligence’ for ascertaining liability.

Transparency

Transparency means that the stakeholders of an organization should be in a position to disclose how an AI produces an output from a given input. In short, the stakeholders should provide a window into how the AI solutions are working. However, as discussed earlier in this article, it is difficult to unpack the inner workings of these algorithms, and even developers may not be able to explain an AI’s results and how they were reached. This takes us back to the black-box phenomenon (discussed earlier), where the inputs and the functioning of the AI are hidden. NITI Aayog’s and MeitY’s positions on this issue are quite similar: enhance the explainability of the algorithm.

This push for ‘explainability in AI’ is now called ‘Explainable AI’ (‘XAI’). XAI is an emerging area in machine learning that addresses how the black-box decisions of AI systems are made and the steps involved in making such decisions. It aims to answer questions like: why did the AI come to a specific conclusion? Why did the AI system not reach some other solution? When do AI systems fail and when do they succeed? How can AI systems correct errors?
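XAI techniques vary widely; one simple post-hoc approach is permutation importance, which probes a trained black-box model from the outside by shuffling one input feature at a time and measuring how much accuracy drops. The sketch below is illustrative only, with synthetic data and an arbitrary model choice.

```python
# Minimal sketch of one post-hoc explainability technique: permutation importance.
# We probe a trained black-box model from the outside by destroying one feature's
# information at a time and measuring how much predictive accuracy degrades.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)   # feature 0 matters most, feature 2 not at all

model = RandomForestClassifier(random_state=0).fit(X, y)
baseline = model.score(X, y)

for feature in range(X.shape[1]):
    X_shuffled = X.copy()
    X_shuffled[:, feature] = rng.permutation(X_shuffled[:, feature])  # scramble this feature
    drop = baseline - model.score(X_shuffled, y)
    print(f"feature {feature}: accuracy drop {drop:.3f}")             # bigger drop = more important
```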

Research on governing AI focuses on three specific areas:

 (1) Ethical governance: focusing on the most pertinent ethical issues raised by
AI, covering issues such as fairness, transparency and privacy (and how to
respond when the use of AI can lead to large-scale discrimination), the allocation
of services and goods (the use of AI by industry, government and companies),
and economic displacement (the ethical response to the disappearance of jobs
due to AI-based automation).

 (2) Explainability and interpretability: these two concepts are seen as possible
mechanisms to increase algorithmic fairness, transparency and accountability.
For example, the idea of a ‘right to explanation’ of algorithmic decisions is
debated in Europe. This right would entitle individuals to obtain an explanation if
an algorithm decides about them (e.g. refusal of loan application). However, this
right is not yet guaranteed. Further, it remains open how we would construe the
‘ideal algorithmic explanation’ and how these explanations can be embedded in
AI systems.

 (3) Ethical auditing: for inscrutable and highly complex algorithmic systems,
accountability mechanisms cannot solely rely on interpretability. Auditing
mechanisms are proposed as possible solutions that examine the inputs and
outputs of algorithms for bias and harms, rather than unpacking how the system
functions.
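As a hedged, minimal sketch of the auditing idea in point (3), an auditor can treat the model as opaque and examine only its outputs, for example by comparing favourable-decision rates across demographic groups (a ‘disparate impact’ style check); the decisions, groups and 80% threshold below are illustrative assumptions.

```python
# Hypothetical sketch of an input/output audit: compare a model's approval rates
# across groups without ever looking inside the model.
from collections import defaultdict

# (group, model_decision) pairs observed by the auditor; 1 = favourable outcome.
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 0), ("B", 0), ("B", 1), ("B", 0)]

totals, favourable = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    favourable[group] += outcome

rates = {g: favourable[g] / totals[g] for g in totals}
print(rates)                                     # e.g. {'A': 0.75, 'B': 0.25}

# "Four-fifths rule" style check: flag if one group's rate is < 80% of another's.
ratio = min(rates.values()) / max(rates.values())
print("potential disparate impact" if ratio < 0.8 else "rates look comparable")
```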

Case study: healthcare robots

Artificial intelligence and robotics are rapidly moving into the field of healthcare and will increasingly play roles in diagnosis and clinical treatment. For example, currently or in the near future, robots will help in the diagnosis of patients, the performance of simple surgeries, and the monitoring of patients' health and mental wellness in short- and long-term care facilities. They may also provide basic physical interventions, work as companion carers, remind patients to take their medications, or help patients with their mobility. In some fundamental areas of medicine, such as medical image diagnostics, machine learning has been shown to match or even surpass our ability to detect illnesses.

Data protection

Personal medical data needed for healthcare algorithms may be at risk. For instance, there are worries that data gathered by fitness trackers might be sold to third parties, such as insurance companies, who could use those data to refuse healthcare coverage (National Public Radio, 2018). Hackers are another major concern, as providing adequate security for systems accessed by a range of medical personnel is problematic (Forbes, 2018). Pooling personal medical data is critical for machine learning algorithms to advance healthcare interventions, but gaps in information governance form a barrier against responsible and ethical data sharing. Clear frameworks for how healthcare staff and researchers use data, such as genomics, in a way that safeguards patient confidentiality are necessary to establish public trust and enable advances in healthcare algorithms (NHS Topol Review, 2019).

Legal responsibility

Although AI promises to reduce the number of medical mishaps, when issues occur, legal liability must be established. If equipment can be proven to be faulty, then the manufacturer is liable, but it is often tricky to establish what went wrong during a procedure and whether anyone, medical personnel or machine, is to blame. For instance, there have been lawsuits against the da Vinci surgical assistant (Mercury News, 2017), but the robot continues to be widely accepted (The Conversation, 2018). In the case of 'black box' algorithms, where it is impossible to ascertain how a conclusion is reached, it is tricky to establish negligence on the part of the algorithm's producer (Hart, 2018). For now, AI is used as an aide to expert decisions, and so experts remain the liable party in most cases. For instance, in the well-known pneumonia triage case, if the medical staff had relied solely on the AI and sent asthmatic pneumonia patients home without applying their specialist knowledge, that would have been a negligent act on their part (Pulmonology Advisor, 2017; International Journal of Law and Information Technology, 2019). Soon, the omission of AI could itself be considered negligence. For instance, in less developed countries with a shortage of medical professionals, withholding an AI that detects diabetic eye disease and so prevents blindness, because of a lack of ophthalmologists to sign off on a diagnosis, could be considered unethical.

The Prime Challenges

▪ The prospect of A.I. improving healthcare also poses a challenge to the fundamental basis of medicine – is medicine an evidence-based science, or the appraisal of the self-assessment of the person affected by illness, based on an understanding of their subjective wellbeing?
▪ A.I. cannot realise or appreciate human characteristics like empathy, compassion, determination, or resignation.
▪ Even if A.I. is limited to empirical analysis of medical imaging or pathology information, the standard of care the A.I. may be expected to meet may be the same as that of a human specialist – a standard that evolves with innovation and a better understanding of medical science.
▪ The A.I. may restrict choices based on risk calculations and on what it considers the best interests of the patient, thereby reducing patient autonomy.
▪ If A.I. performs a diagnosis or formulates a treatment plan but the human doctor cannot explain how the A.I. came to its decisions, it would compromise the patient’s ability to make a fully informed choice.

▪ The principle of beneficence requires that the patient’s wellbeing must come first, but this requires understanding of the patient’s subjective knowledge, life experience, evaluation of risk information, social and cultural context, emotional stability, etc.

▪ If there is a malfunction or defect in the A.I. or its algorithms, there could be serious implications. It is difficult to determine in minute detail each of the steps taken by an A.I. system to come to its conclusion and, as such, such malfunctions and defects would be hard to detect until too late.

Utilisation of A.I. can have a significant impact on the relationship between a doctor and
their patient. If the A.I. is used for the final assessment of findings and formulation of
treatment plans, what is the role of the doctor besides being the human vessel by which
the treatments are performed? If the A.I. is used to supplement and support medical
decisions but the final decision is the human doctor’s, to what extent is the doctor not
meeting their standard of care by relying on the A.I.?
▪ Damages under product liability law may be the answer to those questions as a matter of law, but it is no answer to the question of the doctor’s ethics.

Liability for Errors and Defects


▪ An attending human doctor is not liable merely because their proper and professional treatment of a patient fails – a failed treatment per se does not justify any liability. The law requires professional treatment as a service in accordance with professional standards and the extent of medical knowledge.
▪ Doctors have to evaluate information independently and cannot rely solely on the A.I. and abandon their professional judgment, especially if the doctor has reason to suspect that the A.I. may be unreliable or even defective.
▪ Neither doctors nor patients are, or ought to be, bound to use and/or rely on A.I. – both doctors and patients must be free to choose their means of diagnosis and therapy and are then bound by those choices.

A.I. Algorithms as Medical Devices


▪ One case of a malfunctioning or defective A.I. system is the risk of a false positive error, a concept familiar to us from COVID-19 testing. This has the potential to present a considerable challenge to the patient and the attending medical professional.
▪ In parallel, another case of malfunctioning or defective A.I. is the risk of a false negative error. What responsibility does a medical professional have to rely fully on A.I. results or to question them?
▪ On a holistic level, how are the accuracy and effectiveness of A.I. to be monitored and improved in the absence of accountability mechanisms for reporting such errors to the A.I. programmers? If the A.I. is self-learning through its own errors, such error rates may be concealed.

A.I. in Legal and Government Administration

Private Use of A.I. in Law


▪ Legal databases have seen continuing development in the use of A.I. to enable better research in both common law (e.g. Lexis & Westlaw) and civil law (e.g. Beck) jurisdictions.
▪ Smart forms and contracts have become commonplace.
▪ Technology is increasingly used in contract analysis, online dispute resolution, discovery and evidentiary processes, case management, etc.
▪ Robotic or A.I.-driven “lawyers” are increasingly used in dealing with high-volume, low-margin, low-variance cases, such as insurance claims and creditors’ claims in insolvency.
Public Use of A.I. in Law

▪ A.I. technology is increasingly used in law enforcement, where police may use “big data” to predict future crimes – so-called “predictive policing”.
▪ In Germany, “crime-forecasting systems” such as Precobs (Pre-Crime Observation System) and SKALA (System zur Kriminalitätsanalyse und Lageantizipation), which are programmed to study certain characteristics of burglaries and estimate the likelihood of reoffending, have been used.

▪ In the U.S., systems such as COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) have been implemented in several states to predict the likelihood of recidivism by offenders.
▪ COMPAS produces a score from 1 to 10 and assists courts in deciding whether an offender ought to be given probation or jailed, and for how long. This has been controversial, as software or a machine is being used to decide whether someone goes to prison or not.
▪ Such A.I. systems have the potential, at least theoretically, to be more accurate and less biased than humans, enabling better allocation of scarce policing resources to preemptive policing and crime prevention.

ISSUES
▪ Can A.I. be trusted to decide impartially without a human understanding of the processes, the “logic”, and the factors considered by COMPAS?
▪ How should courts deal with A.I.-generated evidence?
▪ To what extent should courts rely on A.I. for criminal sentencing, even though policy simulations show that COMPAS could reduce the jail population by 42 percent with no increase in crime rates, or reduce crime rates by 25 percent without changing the prison population?

▪ The notion that A.I. is unbiased and more accurate can be merely theoretical, for even though race and other identifying characteristics may not be put into the reference data used by the A.I., underlying social, economic, demographic, and other inequalities may continue to make A.I. just as biased as humans, if not more so.
▪ For instance, U.S. crime prediction systems have been shown to have a propensity to find black offenders more likely to reoffend than they actually are, and white offenders less likely to reoffend than they actually are.
▪ As long as social inequalities remain, A.I. not only has the potential to maintain existing biases but, worse, may even entrench them.

Use of A.I. in Government Administration


▪ Administrative law is primarily concerned with government decision-making based on measuring a set of facts or factors against prescribed criteria.
▪ The level of prescription of the criteria to be used and the factors to be considered will vary depending on whether it is a common law or civil law jurisdiction and on the statutory language in the regulation by which the decision-maker is empowered or delegated with the decision-making power.
▪ In that sense, A.I. can be used at different levels of administrative decision-making, from the investigation of facts, weighing of facts, counterfactual reasoning, and measuring proportionality to the actual decision-making and the enforcement of administrative decisions.

▪ The prevailing issues of data protection (“opening the door”) and the need for transparency and accountability in A.I. systems (“opening the black box”) are just as important in the use of A.I. in administrative decision-making as they are in medical decision-making.
▪ Perhaps data protection is a greater issue in the case of government administration because, unlike in medical settings where the patient is likely to have provided the necessary data voluntarily or at least consented to their collection, government departments and authorities have access to data concerning individuals that may not be available to the individual or of which the individual may well be unaware.

▪ The data collection by A.I. may, in and of itself, be an issue requiring data security and transparency concerns to be addressed. This is evident in the debate over the use of facial recognition algorithms by Facebook and by public authorities, especially in the recent examples of lockdown enforcement during the COVID-19 pandemic.
▪ In taxation, the data collected and used by the A.I., as well as the logic and factors deployed by the A.I., may need to be kept secret by the government authority so as to protect the integrity of the taxation system and prevent individuals “gaming” the system for tax avoidance.

Seminar 1 – Obstacles to A.I. Regulation
Lessons from Part III
▪ Given technological change and the societal changes associated with it, it is difficult to formulate appropriate regulation under significant uncertainty.
▪ There is a risk that legal measures can become ineffective or even have dysfunctional consequences.
▪ Regulation needs to avoid stifling innovation, but must not be so flexible as to be easily circumvented. At the same time, regulation needs to be amenable to change when unforeseen consequences occur.
▪ Today we are going to explore two fundamental obstacles to A.I. regulation:
–the impact of existing data protection law: “opening the door”;
–the need for transparency and accountability: “opening the black box”.

Opening the Door


▪ The problem described in A.I. regulation circles as “opening the door” is the challenge posed by existing data protection laws to the use of data by A.I. algorithms and processes.
▪ The most notable example of general data protection requirements is found in the European Union’s General Data Protection Regulation (G.D.P.R.), which applies to all businesses operating in the European Union. This is further augmented by Article 8 of the E.U. Charter of Fundamental Rights.
▪ The conventional interpretation, as explained below, is that the G.D.P.R. and Article 8 of the C.F.R. have the practical effect of prohibiting the use of A.I. – hence the need to “open the door”.
▪ The G.D.P.R. requires that:
–there be a legal basis or consent for each individual phase of data processing;
–the purpose of the data collection and processing be disclosed and limited;
–the principle of necessity be observed; and
–courses of action and decision-making processes concerning data be fully foreseen, planned, transparent, and steered by statutory means.
▪ As such, the fundamental conflict arises because:
–often even the programmers of the A.I. algorithms cannot say with any degree of certainty how the A.I. obtains its results; and
–the data analysis itself uncovers the correlations that form the purpose.
▪ In legal terms, the use of data by A.I. contravenes every data protection principle to varying degrees.
▪ In the proposed European A.I. regulation, a balance is sought by introducing the fundamental data privacy principles into A.I. regulation, sidestepping the issue of the conflict between the G.D.P.R. and the proposed A.I. regulation.
▪ The answer may lie in the further elaboration of Articles 42-43 of the G.D.P.R., which establish data protection certification mechanisms, seals, and marks that could be developed to facilitate A.I. data processing, including:
–the establishment of ethics boards or committees, both internal and external;
–certification of technical compliance;
–supervision of algorithms; and
–creation of review and liability safeguards.

Opening the Black Box


▪ The apparent opacity of A.I. algorithms and processes (the “black box”) is a major political, legal, and ethical issue in the developing regulation of A.I.
▪ Opening the black box is argued to be essential for:
–identifying encroachments on privacy;
–detecting biases;
–preventing potential harm to individuals; and
–safeguarding humanity.

▪ Transparency has always been a fundamental requirement of data protection: see, e.g., G.D.P.R. Article 5.
▪ In 2017, New York proposed to require every city agency ”that uses, for the purposes of targeting services to persons, imposing penalties upon persons or policing, an algorithm or any other method of automated processing system of data … to publish on such agency’s website the source code of such system” – which sounds remarkably like telling criminals how they will get caught.
▪ In 2018, the German Conference of Information Commissioners called for laws that would require public authorities and private actors using A.I. to publish details of the “logic” of the system, the classifiers and weights applied to the input data, and the expertise of the programmers and operators.

▪ The problems with being required to be transparent about the ”logic” or the source code of an A.I. are that:
–the A.I. may not follow the programmers’ “logic” as they intended; and
–most people cannot read computer code, and even those who can are unlikely to understand all the methods used in a complex A.I. system – such epistemic constraints mean that requiring A.I. designers to publish their source code is more symbolic than practical.
▪ The greater the volume, variety, and velocity of data processing, the harder it is to understand and predict the behaviour of an A.I. mechanism.

▪ Solutions to the need for transparency in A.I. systems must overcome false absolutes – not all A.I. systems are inscrutable black boxes, and not all forms of transparency assure accountability.
▪ It must be recognized that transparency regulation must be directed at generating knowledge and motivating individuals to contest A.I.-based decisions, so that transparency is deployed to combat widespread ignorance and the feeling of disenfranchisement that accompanies the use of A.I.
▪ At the same time, courts and government agencies must build up the necessary degree of expertise for the control of A.I.-based systems.

AI and Copyright

Under section 14 of the Copyright Act 1957, "copyright" is defined as the exclusive right of the owner to do or authorise the doing of certain acts (such as reproduction of the work, publication, adaptation, translation, etc.) in respect of a work. Further, section 17 of the Act states that the author of the work shall be the first owner of the copyright; however, if the work is created under a contract for consideration and upon the instruction of an employer, then the employer is the owner of the work.

Since the 1970s, computer-generated artworks have attracted a lot of attention. Most of these computer-generated artworks have relied heavily on the programmer, who provides the input for the creation of the work. However, with technological advancement, artificial intelligence has developed to the extent that it is capable of understanding and creating results/outputs without any interference by a human.

The major question raised in this regard concerns protection of works created by artificial intelligence. Under existing Indian IP law, especially copyright, the idea of extending copyright protection to artificial intelligence for the works it creates appears difficult. Works created by AI can be categorized as "works created by AI with human interference" and "works created by AI without any human interference". With that distinction in mind, let us try to answer the following questions:

Who is the author of such works?


Where a work is created by AI with human interference, there is a human input, and the creativity in the work can be derived from that input. In such cases, authorship can be attributed to the human.

Where a work is created by AI without any human interference, the law on authorship is not clear. In such situations, the following approach may be taken:

Where the work is generated by AI without any human input, the authorship may vest with the author of the AI, i.e. the person who developed the program creating the AI.

Where the work is created by the AI without human assistance, it can be assumed that the AI has been programmed in such a manner that it can create and identify equations to generate a result on its own; the creativity may therefore vest with the programmer who created the AI with sufficient programming.
Is ownership of the work disputed in the case of work created by AI?

The situation is the same as with authorship. Where a work is created by the AI with human interference, ownership of the work may be claimed by the human who provides creative inputs to the AI, whereas in the case of a work created by the AI without any human interference, ownership may be claimed by the copyright owner of the AI, i.e. whoever holds copyright over the AI software.

Is Indian Copyright Law equipped to handle work created by AI?


As discussed above, the existing Indian copyright law is not framed so as to give rights to AI for the creation of a work. India has time and again focused on the requirement of human involvement for copyright protection, and the prospect of opening the gates to accept AI as a separate entity still looks doubtful.

Possible issues that may arise if artificial intelligence is accepted as a separate entity and works created by it are protected

AI depends on its programming in order to generate a result. The AI may explore and analyse information that is already available, and a creation is therefore based on information that is either publicly available or the copyright of some other person. In essence, the AI is not creating wholly original content; the work created is an adaptation or modified version of existing information in the public domain. Therefore, recognition of AI as a separate entity, with separate protection for its works, may lead to copyright violations against other copyright holders.

Issue of Originality: When we talk about copyright under the Copyright Act 1957, we refer to section 13 of the Act, which defines the "works in which copyright subsists".

That provision makes clear that for a literary, dramatic, musical or artistic work to be eligible for protection, the work must be original. The term "original work" is not defined in the Act; however, while deciding originality, the courts usually check the following parameters:

1. Whether the idea and expression are intrinsically connected (Doctrine of Merger);
2. Whether the work was created with skill and labour by the author (Sweat of the Brow Doctrine);
3. Whether the work possesses a minimum degree of creativity (Modicum of Creativity Doctrine); and
4. Whether the work is created with mere skill and labour or whether it reflects skill and judgment (Skill and Judgment Test).

For an AI to claim ownership/authorship of a copyright, the work so created, if a literary, dramatic, musical or artistic work, must be original and must qualify under the above tests of originality. However, whether AI can create original work is still debatable. Under the Copyright Act 1957, literary work includes compilations, and since the AI depends on existing information and the scope of its programming, the work so created may qualify as a compilation and therefore be protectable by copyright. The alternative argument, however, is that the work so generated is a mere collection without any skill and judgment.

In Eastern Book Company & Ors v. D.B. Modak & Anr ((2008) 1 SCC 1), the Hon'ble Supreme Court of India observed that "To claim copyright in a compilation, the author must produce the material with exercise of his skill and judgment which may not be creativity in the sense that it is novel or non-obvious, but at the same time it is not a product of merely labour and capital. The derivative work produced by the author must have some distinguishable features and flavour." It is therefore a requirement for any compilation or derivative work to show skill and judgment.

Issue of Infringement: If AI is accepted as the author and owner of the work generated by it, an important question arises: who will be held liable for any infringement committed by such AI or its creation?

On an analysis of the infringement provisions, copyright in a work can only be infringed by a "person". Since AI is not yet classified as a legal entity, any infringement caused by AI becomes a serious issue, and it is much more difficult to place liability for such infringement. Because AI has no legal status of its own, the case for giving AI authorship rights remains weak unless a proper channel and chain can be established to create liability for the acts of AI.

Complications associated with Recognizing AI as Author of Copyright


In the case of AI, the transfer of ownership will be difficult to establish, as the AI cannot execute an assignment or authorize its creator or any other person to become the owner of the work.

Pursuant to section 57, the special rights of the author may also be disputed. The special rights of the author, known as moral rights, include the right to paternity (the right to be associated and recognized with the work) and the right to integrity (the right to restrain, or claim damages for, any act prejudicial to the author's honour or reputation). If an AI is recognized as the author of a work, these rights may become redundant, as the AI may not be able to ascertain whether any act has affected the honour or reputation attached to the original work.

Under the existing copyright laws of India, the author of a work has a right to claim royalties, which cannot be waived. Where the AI is the author of a work, questions arise as to who will determine the royalty, how the royalty will be disbursed to the AI and, if the AI is able to fix the amount of royalty, whether that amount must be assessed for reasonableness.

For any work created by AI, the accountability of the AI for its creation will be difficult to enforce.

“Blockchaining” Intellectual Property

While it is difficult to predict all potential IP-related applications of blockchain, I see three
specific fields of application pertinent to technology transfer and IP professionals.

1. Blockchain can help with IP rights management and technology transfer and
commercialization practices

Blockchain could be used by inventors looking to find potential investors while at the
same time safeguarding their inventions. A ledger might consist of a short description of
the character and goal of the invention, while those wishing to gain access to more
information on how the invention works would then have to accept the provisions of
a “smart contract”. Or blockchain could be utilized by patent holders wishing to find
potential licensees for related know-how and trade secrets in addition to the patented
invention.

Inventors might be interested in publishing their technological developments to preserve the novelty of the inventions and guarantee their freedom to operate. Thus, blockchain technology will be able to facilitate the management of IP rights. Publications in a blockchain environment might be used as evidence in IP-related legal proceedings.

2. Blockchain as an IP registry

Blockchain can also serve as a technology-based IP registry where IP owners can keep hashed digital certificates of their IP and use the platform to collect royalties, via smart contracts, from those who use their creations and inventions. Often, the approval wait times of patent agencies and other regulatory bodies are long. This delay can hamper the first-mover advantage in many industries where incumbents must act fast to protect their inventions and stay at the top of the game. By replacing centralized registration systems with decentralized ones, it will be easier to register new IP, update filings, and transfer ownership at any time. With blockchain, regulatory agencies will be able to achieve more with fewer resources.
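As a hedged, minimal sketch of the "hashed digital certificate" idea (not the implementation of any particular registry), an author can fingerprint a work with a cryptographic hash and record that fingerprint, together with a timestamp, as the entry that would be written to a ledger; the field names and sample content below are hypothetical.

```python
# Minimal sketch of a hash-based proof-of-existence record for a creative work.
# Only the fingerprint (hash) would be written to the ledger; the work itself stays private.
import hashlib
import json
import time

def make_ip_record(work_bytes: bytes, author: str) -> dict:
    """Build the entry a blockchain IP registry might store for a work."""
    return {
        "sha256": hashlib.sha256(work_bytes).hexdigest(),  # unique fingerprint of the work
        "author": author,                                  # claimed creator
        "timestamp": int(time.time()),                     # when the claim was registered
    }

work = b"Verse 1: ..."  # stand-in for the bytes of a song, image or manuscript
record = make_ip_record(work, author="Jane Doe")
print(json.dumps(record, indent=2))
```

Because the hash changes completely if even one byte of the work changes, a later holder of the original file can recompute it and match it against the ledger entry to demonstrate priority.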

3. Determining creatorship, proof of ownership, and origin of creative works

Blockchain can be used to catalog and store original works. Often, there are no
adequate means for authors to catalog their works and copyright ownership can be hard
to prove. It can also be difficult for authors to see who is using their work, and it is
equally difficult for third parties to know from whom to seek a license. Authors are often
unable to stop infringements or to monetize their works successfully. With blockchain,
copyrights need not be registered and can come into existence automatically upon the
creation of original qualifying work.

Another major problem with IP rights management is tracing a complete chain of ownership. It is often challenging to draw a line between getting inspired by another musician’s work and stealing it. Notorious copyright cases in music history show that determining creatorship and copyright ownership is often “mission impossible.”
Blockchain can solve these system imperfections. For example, blockchain-based
platforms such as Binded allow authors to record copyright ownership, which can then
be utilized to see where and how the work is being used on the internet and to seek
licenses from third parties. Binded blends integrations with the U.S. Copyright Office,
Instagram, and Twitter to monitor how copyrighted images are used. Registering a work
via Binded provides a digital certificate of authenticity. This registration can help third
parties identify the author of a work and help IP owners to tackle infringements.
Currently, IP owners have difficulties protecting the IP works online, i.e., once the IP
work is uploaded on the internet, it becomes difficult to maintain control of that work and
monitor who is using it for what purpose.

When the IP work is registered and verified using blockchain-based platforms, authors
can search across a whole host of different sources simultaneously to ascertain who is
using their work. This enables IP owners to identify and stop infringements and makes it
easier to license their IP works. In this sense, blockchain can serve as an enforcement
tool. With a blockchain-based registration system, verifying whether a new song is or
isn’t infringing upon the existing IP of a previously registered song will be much simpler.
This type of blockchain-based detection system can be applied to text, art, and music
with the help of artificial intelligence.
