The output of human civilization has so far been driven primarily by human intelligence,
and when machine intelligence combines with human intelligence, the sky is the limit.
The two biggest countries of Asia – China and India – boast strong AI talent pools and
enterprises and institutions that continue to strive for advanced research and innovation.
Recognizing AI’s transformative potential, governments in both countries have stepped up and
formulated policies to govern and incentivize AI adoption for a
competitive global advantage in crucial sectors.
China’s Background
In 2015, the State Council released guidelines on the “Internet+” strategy that involved
integrating the Internet into all parts of economy and society.
In the same year, the 10-year plan “Made in China 2025” set forth the aim of
transforming China into a dominant player in A.I. technology.
▪ In 2016, the 13th 5-year plan of the Chinese Communist Party included A.I. as one of the 6
critical areas of emerging industries for development.
▪ In July 2017, the State Council released the “New Generation Artificial Intelligence
Development Plan” that set out China’s A.I. policy objectives, setting it as “Year One of China’s
A.I. Development Strategy”.
China’s Artificial Intelligence Development Plan, 2017
In July 2017, the State Council of China released a comprehensive AI policy report, the
‘Artificial Intelligence Development Plan’, setting out the roadmap for AI leadership
under an ethical and supportive regulatory system and open-source collaboration.
The report sets out strategic goals divided into three steps, with the overarching aim
that China become a world leader in defining ethical norms and standards for A.I.:
a) The first step, to be achieved by 2020, is to bring China’s AI infrastructure to a
level that is advanced by world standards and to establish initial ethical norms,
policies, and regulations for basic and key areas of A.I.
b) The second step, to be completed by 2025, aims to bring China to a
‘world-leading level’ through breakthroughs in basic AI theory and to establish AI as the
primary driving force behind China’s industrial transformation, while expanding upon
and codifying the ethical standards for A.I. into law.
c) The third step is to achieve global leadership in AI impact and make China the
‘innovation center of the world’ by 2030, with further “upgrades” to laws and
standards to deal with newly emerging challenges and issues.
To enhance China’s competitiveness, ‘open, stable and mature’ technology systems shall
be developed in the form of algorithms, hardware, and data.
China: Implementation
The Central Government selected “A.I. National Champions” – businesses endorsed to focus on
developing specific sectors of A.I. – e.g.:
–Baidu is tasked with developing autonomous driving vehicles;
–Alibaba with the development of “smart cities”; and
–Tencent with computer vision for medical diagnoses.
▪ The A.I. National Champions agree to invest in government-directed A.I. goals in exchange for
preferential contract bidding, easier access to finance from the state and state-owned banks,
and even market share protection.
▪ Nothing stops other companies from developing the same A.I. technology, but “national
champion” status will help the selected firms dominate their sectors.
There are areas on the A.I. development “wish list” for which no “national champion” has
been selected; for example, the A.I. Development Plan calls for smart courts, but there is
no designated business for smart evidence collection, case analysis, legal document
analysis, case management, etc.
India’s Background
India showcases a promising scenario, thanks to its strong talent pool, a notable list of
world-class educational institutions, and companies that dominate the global IT
landscape. However, India has not yet achieved global recognition, primarily due to the
lack of top-notch AI research at a significant scale.
The National Institution for Transforming India (NITI Aayog), a think tank established by
the Indian Government, has stepped up to formulate a comprehensive AI strategy with a
core focus on infrastructure development and holistic collaboration. It aims for an
#AIforAll campaign where AI shall also be used for social inclusion and not just for
defense, military, and advanced computing applications.
In an ‘AI+X mechanism’, where AI is an enabler of increased productivity and
efficiency rather than a complete overhaul, the key focus areas for AI intervention shall
be healthcare, agriculture, education, smart cities and infrastructure, and smart mobility
and transportation.
The key challenges identified in these sectors include, among others, a low intensity of AI
research, insufficient talent to research and implement AI at scale, high resource costs,
unresolved privacy issues, and an intellectual property regime that does not incentivize
adoption.
The report provides more than 30 policy recommendations to develop a two-tiered
strategy aimed at improving the research ecosystem as well as developing
skilling initiatives to feed that ecosystem. Initially, the ‘Centres of Research Excellence’
(CORE) shall enhance quality research and publications focusing on AI. Investment,
both domestic and foreign, shall be made to develop state-of-the-art infrastructure in
line with the concept of an ‘AI garage’.
Besides this, the development of a National AI Marketplace (NAIM) has been proposed
in three different modules to minimize resource allocation for model development.
Let us take the example of Gmail spam. The AI algorithm used in Google’s
Gmail identifies ‘spam’ and moves the mail to the spam folder, yet few users
know anything about the identification process that leads to the classification
of spam.
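To make the idea concrete, here is a minimal sketch of how such a spam filter could work in principle: a naive Bayes classifier over word counts. The training emails and labels are invented for illustration, and the sketch does not reflect Gmail’s actual, far more complex system.

```python
# A toy spam classifier: a naive Bayes model over word counts.
# Purely illustrative; the training data below is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Invented training examples (1 = spam, 0 = not spam).
emails = [
    "win a free prize now",
    "limited offer, claim your reward",
    "meeting agenda for tomorrow",
    "project report attached for review",
]
labels = [1, 1, 0, 0]

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)   # bag-of-words counts

model = MultinomialNB()
model.fit(features, labels)

new_mail = ["claim your free reward now"]
prediction = model.predict(vectorizer.transform(new_mail))
print("spam" if prediction[0] == 1 else "not spam")
```

Even in this toy version, the model’s decision rests on learned word statistics rather than any rule a user could read off directly, which is precisely why the classification process remains opaque to most people.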
In the context of AI and ethics, one cannot overlook the automated public
sphere AI has created. Rather than reading newspapers, we now use Facebook,
Twitter, and Instagram to gather information, and these platforms are far more
efficient at distributing information, and in a personalized manner at that.
One main legal issue regarding AI is liability in the event of any failure of the
AI technology: who will be responsible if a failure occurs while using these
systems? Most of the time, companies utilizing AI tend to evade responsibility.
For instance, in Google’s “right to be forgotten” case, the company argued that
it is not responsible for the results the search engine returns because it is
the algorithm that produces them.
The algorithm takes millions of data points as input and connects specific data
features to produce an output. Since the learning process is largely self-directed,
the results produced by the algorithm are difficult to interpret; even a data
scientist may be unable to explain why the AI arrived at a particular result.
The issue of AI inscrutability arises from this black box. When such software is
used for critical operations, the employers and employees involved in those
operations have no clear view of the process inside the system. This opacity can
cause massive damage to the organization if an error occurs and goes unnoticed,
and such damage can be expensive or even impossible to repair.
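A minimal sketch of the opacity problem, using an off-the-shelf ensemble model on synthetic data: the system returns a prediction and a probability, but its “reasoning” is spread over thousands of learned decision rules. The data and model choice here are illustrative assumptions, not a description of any specific deployed system.

```python
# Illustration of opacity: the model returns an answer, but its internal
# decision process is spread across many learned parameters and is not
# directly readable by the people relying on the output.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

applicant = X[:1]                       # one new input
print(model.predict(applicant))         # a bare yes/no answer
print(model.predict_proba(applicant))   # a probability, still no reasons

# The "explanation" available by default is thousands of split rules:
n_rules = sum(est.tree_.node_count for est in model.estimators_)
print(f"{n_rules} internal decision nodes across {len(model.estimators_)} trees")
```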
Regulations on AI in India
There is no regulation or law in India that specifically regulates artificial
intelligence, machine learning, or big data. However, the Government has recognized
the need to examine the development and implications of artificial intelligence
and, as of now, intends to expand the application of artificial intelligence in
the Indian environment.
The Ministry of Electronics and Information Technology (MeitY) constituted four
committees to tackle the issues related to artificial intelligence:
(a) The first committee, for platforms and data on artificial intelligence;
(b) The second committee, for leveraging AI to identify national missions in
critical sectors;
(c) The third committee, for mapping technology capabilities, key policy enablers
required across sectors, and skilling and reskilling; and
(d) The fourth committee, for cybersecurity, safety, legal and ethical issues.
The four committees of MeitY, as mentioned above, lay down the following
recommendations:
Fairness
The regulatory system in India has stepped forward to look into this issue and
ensure fairness. In this context, NITI Aayog has proposed AI data-training
solutions that will help guide the development of unbiased AI.
Accountability
However, split responsibility does not solve the problem in its entirety, as in
practice it is not possible to allocate blame to the exactly responsible actor given
the number of interactions and other challenges.
Transparency
(1) Ethical governance: focusing on the most pertinent ethical issues raised by
AI, covering issues such as fairness, transparency and privacy (and how to
respond when the use of AI can lead to large-scale discrimination), the allocation
of services and goods (the use of AI by industry, government and companies),
and economic displacement (the ethical response to the disappearance of jobs
due to AI-based automation).
(2) Explainability and interpretability: these two concepts are seen as possible
mechanisms to increase algorithmic fairness, transparency and accountability.
For example, the idea of a ‘right to explanation’ of algorithmic decisions is
debated in Europe. This right would entitle individuals to obtain an explanation when
an algorithm makes a decision about them (e.g. the refusal of a loan application). However, this
right is not yet guaranteed. Further, it remains open how we would construe the
‘ideal algorithmic explanation’ and how these explanations can be embedded in
AI systems.
(3) Ethical auditing: for inscrutable and highly complex algorithmic systems,
accountability mechanisms cannot solely rely on interpretability. Auditing
mechanisms are proposed as possible solutions that examine the inputs and
outputs of algorithms for bias and harms, rather than unpacking how the system
functions.
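A minimal sketch of such an outcome-focused audit, assuming the auditor only has access to logged decisions with a group attribute: it compares favourable-decision rates across groups and applies a rule-of-thumb threshold. The records, group labels, and the 0.80 threshold are illustrative assumptions, not part of any specific auditing standard cited above.

```python
# Outcome audit sketch: compare the rate of favourable decisions across groups
# using only the system's logged inputs and outputs, without inspecting the
# model internals. All records below are invented for illustration.
from collections import defaultdict

decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

totals = defaultdict(int)
approved = defaultdict(int)
for record in decisions:
    totals[record["group"]] += 1
    approved[record["group"]] += record["approved"]

rates = {g: approved[g] / totals[g] for g in totals}
print("approval rates:", rates)

# "Disparate impact" style check: ratio of the lowest to the highest rate.
ratio = min(rates.values()) / max(rates.values())
print(f"selection-rate ratio: {ratio:.2f}  (a common rule of thumb flags < 0.80)")
```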
Case study: healthcare robots
Artificial Intelligence and robotics are rapidly moving into the field of healthcare and will
increasingly play roles in diagnosis and clinical treatment. For example, currently, or in the
near future, robots will help in the diagnosis of patients; the performance of simple surgeries;
and the monitoring of patients' health and mental wellness in short and long-term care facilities.
They may also provide basic physical interventions, work as companion carers, remind patients to
take their medications, or help patients with their mobility. In some fundamental areas of
medicine, such as medical image diagnostics, machine learning has been proven to match or even
surpass our ability to detect illnesses.
Data protection
Personal medical data needed for healthcare algorithms may be at risk. For instance, there are
worries that data gathered by fitness trackers might be sold to third parties, such as insurance
companies, who could use those data to refuse healthcare coverage (National Public Radio, 2018).
Hackers are another major concern, as providing adequate security for systems accessed by a range
of medical personnel is problematic (Forbes, 2018). Pooling personal medical data is critical for
machine learning algorithms to advance healthcare interventions, but gaps in information
governance form a barrier against responsible and ethical data sharing. Clear frameworks for how
healthcare staff and researchers use data, such as genomics, in a way that safeguards patient
confidentiality are necessary to establish public trust and enable advances in healthcare
algorithms (NHS' Topol Review, 2019).
Legal responsibility
Although AI promises to reduce the number of medical mishaps, when issues occur, legal liability
must be established. If equipment can be proven to be faulty then the manufacturer is liable, but
it is often tricky to establish what went wrong during a procedure and whether anyone, medical
personnel or machine, is to blame. For instance, there have been lawsuits against the da Vinci
surgical assistant (Mercury News, 2017), but the robot continues to be widely accepted (The
Conversation, 2018). In the case of 'black box' algorithms where it is impossible to ascertain how
a conclusion is reached, it is tricky to establish negligence on the part of the algorithm's
producer (Hart, 2018). For now, AI is used as an aide for expert decisions, and so experts remain
the liable party in most cases. For instance, in the widely discussed pneumonia triage case, if
the medical staff had relied solely on the AI and sent asthmatic pneumonia patients home without
applying their specialist knowledge, then that would be a negligent act on their part (Pulmonology
Advisor, 2017; International Journal of Law and Information Technology, 2019). Soon, the omission
of AI could be considered negligence. For instance, in less developed countries with a shortage of
medical professionals, withholding AI that detects diabetic eye disease and so prevents blindness,
because of a lack of ophthalmologists to sign off on a diagnosis, could be considered unethical.
The prospect of A.I. improving healthcare also poses a challenge to the fundamental
basis of medicine – is medicine an evidence-based science, or an appraisal of the
self-assessment of the person affected by illness, based on an understanding of their
subjective wellbeing?
▪ A.I. cannot realise or appreciate human characteristics like empathy, compassion,
determination, or resignation.
▪ Even if A.I. is limited to empirical analysis of medical imaging or pathology information,
the standard of care the A.I. may be expected to meet may be the same as that of a
human specialist, a standard that evolves with innovation and a better understanding of
medical science.
The A.I. may restrict choices based on risk calculations and on what it considers the best
interests of the patient, thereby reducing patient autonomy.
▪ If A.I. performs a diagnosis or formulates a treatment plan but the human doctor cannot
explain how the A.I. came to its decisions, it would compromise the patient’s ability to
make a fully informed choice.
The principle of beneficence requires that the patient’s wellbeing must come first, but
this requires understanding of the patient’s subjective knowledge, life experience,
evaluation of risk information, social and cultural context, emotional stability, etc.
If there is a malfunction or defect in the A.I. or its algorithms, there could be serious
implications. It is difficult to determine in minute detail each of the steps taken by an A.I.
system to come to its conclusion and, as such, such malfunctions and defects would be
hard to detect until it is too late.
Utilisation of A.I. can have a significant impact on the relationship between a doctor and
their patient. If the A.I. is used for the final assessment of findings and formulation of
treatment plans, what is the role of the doctor besides being the human vessel by which
the treatments are performed? If the A.I. is used to supplement and support medical
decisions but the final decision is the human doctor’s, to what extent is the doctor not
meeting their standard of care by relying on the A.I.?
▪ Damages under product liability law may be the answer to those questions as a matter
of law, but they are no answer to the question of the doctor’s ethics.
In the U.S., systems such as COMPAS (Correctional Offender Management Profiling for
Alternative Sanctions) have been implemented in several states to predict the likelihood
of recidivism by offenders.
▪ COMPAS produces a score from 1 to 10 and assists courts in deciding whether an offender
ought to be given probation or jailed, and for how long. This has been controversial, as
software or a machine is used to decide whether someone goes to prison or not.
▪ Such A.I. systems have the potential, at least theoretically, to be more accurate and less
biased than humans, enabling better allocation of scarce policing resources to preemptive
policing and crime prevention.
ISSUES
Can A.I. be trusted to decide impartially without a human understanding of the processes,
the “logic”, and the factors considered by COMPAS?
▪ How should courts deal with A.I.-generated evidence?
▪ To what extent should courts rely on A.I. for criminal sentencing, even though policy
simulations show that COMPAS could reduce the jail population by 42 percent with no
increase in crime rates, or reduce crime rates by 25 percent without changing the prison
population?
The notion that A.I. is unbiased and more accurate can be theoretical at times: even
though race and other identifying characteristics may not be included in the reference
data used by the A.I., the underlying social, economic, demographic, and other inequalities
may continue to make the A.I. just as biased, if not more so, than humans.
▪ For instance, U.S. crime prediction systems have been shown to have a propensity to rate
black offenders as more likely to reoffend than they actually are, and white offenders as
less likely to reoffend than they actually are.
▪ As long as social inequalities remain, A.I. not only has the potential to maintain existing
biases but, worse, may even entrench them.
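One way such disparities are typically measured is by comparing error rates across groups. The sketch below uses invented records rather than real COMPAS data and computes the false positive rate (people flagged high risk who did not reoffend) for two hypothetical groups.

```python
# A sketch of measuring bias from outcomes alone: compare false positive rates
# (flagged "high risk" but did not reoffend) between two groups.
# All numbers are invented for illustration and are not COMPAS data.
def false_positive_rate(records):
    """Share of people who did NOT reoffend but were flagged high risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["high_risk"]]
    return len(flagged) / len(non_reoffenders)

group_a = [
    {"high_risk": True,  "reoffended": False},
    {"high_risk": True,  "reoffended": True},
    {"high_risk": False, "reoffended": False},
    {"high_risk": True,  "reoffended": False},
]
group_b = [
    {"high_risk": False, "reoffended": False},
    {"high_risk": True,  "reoffended": True},
    {"high_risk": False, "reoffended": False},
    {"high_risk": False, "reoffended": False},
]

print("FPR group A:", false_positive_rate(group_a))  # 2/3 ≈ 0.67
print("FPR group B:", false_positive_rate(group_b))  # 0/3 = 0.0
```

The point of the sketch is that a system can look equally "accurate" overall while distributing its mistakes very unevenly across groups, which is the form the reported bias takes.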
The prevailing issues of data protection (“opening the door”) and the need for transparency
and accountability in A.I. systems (“opening the black box”) are just as important in the
use of A.I. in administrative decision-making as they are in medical decision-making.
▪ Perhaps data protection is a greater issue in the case of government administration
because, unlike in medical settings where the patient is likely to have provided the
necessary data voluntarily or at least consented to its collection, government departments
and authorities have access to data concerning individuals that may not be available to
the individual, or of which the individual may well be unaware.
▪ Data collection by A.I. may, in and of itself, be an issue requiring data security and
transparency concerns to be addressed. This is evident in the debate over the use of facial
recognition algorithms by Facebook and by public authorities, especially in the recent
examples of lockdown enforcement during the COVID-19 pandemic.
▪ In taxation, the data collected and used by the A.I., as well as the logic and factors
deployed by the A.I., may need to be kept secret by the government authority so as to
protect the integrity of the taxation system and prevent individuals from “gaming” the
system for tax avoidance.
Seminar 1 – Obstacles to A.I. Regulation
Lessons from Part III
▪ Given technological change and the societal changes associated with it, it is difficult to
formulate appropriate regulation under significant uncertainty.
▪ There is a risk that legal measures can become ineffective or even have dysfunctional
consequences.
▪ Regulation needs to avoid stifling innovation, yet not be so flexible as to be easily
circumvented. At the same time, regulation needs to be amenable to change when
unforeseen consequences emerge.
▪ Today we are going to explore two fundamental obstacles to A.I. regulation:
–impact of existing data protection law: “opening the door”;
–need for transparency and accountability: “opening the black box”.
Transparency has always been a fundamental requirement for data protection: see, e.g.,
G.D.P.R. Article 5.
▪ In 2017, New York proposed to require every city agency “that uses, for the purposes of
targeting services to persons, imposing penalties upon persons or policing, an algorithm or
any other method of automated processing system of data … to publish on such agency’s
website the source code of such system” – which sounds remarkably like telling criminals
how they will get caught.
▪ In 2018, the German Conference of Information Commissioners called for laws that would
require public authorities and private actors using A.I. to publish details on the “logic”
of the system, the classifiers and weights applied to the input data, and the expertise of
programmers and operators.
▪ The problems with being required to be transparent about the “logic” or the source code of
an A.I. are that:
–the A.I. may not follow the programmers’ “logic” as they intended; and
–most people cannot read computer code, and even those who can are unlikely to understand
all the methods used in a complex A.I. system – as such, these epistemic constraints mean
that requiring A.I. designers to publish source code is more symbolic than practical.
▪ The greater the volume, variety, and velocity of data processing, the harder it is to
understand and predict the behaviour of an A.I. mechanism.
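As an illustration of what “publishing the logic” could amount to in the simplest possible case, the sketch below trains a small logistic regression on synthetic data and exports its weights as JSON. The model, data, and disclosure format are assumptions made for illustration; real systems are far larger, which is part of the problem described above.

```python
# A minimal sketch of "publishing the logic" for a very simple model:
# exporting a logistic regression's weights as JSON.
import json
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = LogisticRegression().fit(X, y)

disclosure = {
    "model": "logistic_regression",
    "intercept": model.intercept_.tolist(),
    "weights": model.coef_.tolist(),   # one weight per input feature
}
print(json.dumps(disclosure, indent=2))
# Even this disclosure only tells a reader how features are weighted,
# not whether the training data or feature choices were appropriate.
```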
The solutions to the need for transparency in A.I. systems must overcome false absolutes –
not all A.I. systems are inscrutable black boxes, and not all forms of transparency assure
accountability.
▪ It must be recognized that transparency regulation must be directed at generating
knowledge and motivating individuals to contest A.I.-based decisions, so that transparency
is deployed to combat widespread ignorance and the feeling of disenfranchisement that
accompanies the use of A.I.
▪ At the same time, courts and government agencies must build up the necessary degree of
expertise for the control of A.I.-based systems.
AI and Copyright
Under section 14 of the Copyright Act 1957, "copyright" is defined as the exclusive
right of the owner to do or authorise the doing of certain acts (such as reproduction,
publication, adaptation, and translation) in respect of a work. Further, section 17 of
the Act states that the author of the work shall be the first owner of the copyright;
however, if the work is created under a contract for consideration and upon the
instruction of an employer, then the owner of the work is the employer.
Since the 1970s, computer-generated artworks have attracted a lot of attention. Most
of these artworks relied heavily on the programmer, who provided the input for the
creation of the work. However, with technological advancement, artificial intelligence
has developed to the extent that it is capable of understanding and creating results or
outputs without any human intervention.
The major question raised in this regard is with respect to protection of the works
created by Artificial Intelligence. Under the existing Indian IP legislation, especially
copyright law, the idea of extending copyright protection to artificial intelligence for
the works it creates appears difficult. Works created by AI can be categorized as "works
created by AI with human interference" and "works created by AI without any human
interference". With that in mind, let us try to answer the following questions:
Where a work is created by AI without any human interference, i.e. where there is no
human input, the law on authorship is unclear. In such situations, the following
approaches are possible:
If the work is generated by AI without any human input, authorship may vest with the
author of the AI, i.e. the developer of the program creating the AI.
Where the work is created by the AI without human assistance, it can be assumed that
the AI has been programmed in such a manner that it can create and identify equations
to generate a result on its own; therefore, the creativity may vest with the programmer
who created the AI with sufficient programming.
Is ownership of the work disputed in the case of a work created by AI?
The situation is the same as with authorship. Where a work is created by the AI with
human interference, ownership over the work may be claimed by the human who provides
creative inputs to the AI, whereas in the case of a work created by the AI without any
human interference, ownership may be claimed by the copyright owner of the AI, i.e.
whoever holds copyright over the AI software.
Issue of Originality: When we talk about copyright under the Copyright Act 1957, we
refer to section 13 of the Act, which defines "works in which copyright subsists".
That provision makes clear that for copyright to subsist in a literary, dramatic,
musical or artistic work, the said work must be an original work. The term "original
work" is not defined in the Act; however, while deciding originality, the Court usually
checks the following parameters:
Consider the judgment of the Hon'ble Supreme Court of India in Eastern Book
Company & Ors vs D.B. Modak & Anr. ((2008) 1 SCC 1), which observed that "To claim
copyright in a compilation, the author must produce the material with exercise of his
skill and judgment which may not be creativity in the sense that it is novel or non-
obvious, but at the same time it is not a product of merely labour and capital. The
derivative work produced by the author must have some distinguishable features and
flavour." It is therefore a requirement for any compilation or derivative work to
show skill and judgment.
Analysing the infringement provisions of the Act, it can easily be said that copyright
in a work can only be infringed by a "person". Since AI is not yet classified as a
legal entity, any infringement caused by AI becomes a serious issue, and it will be
much more difficult to place liability for such infringement. Since AI has no legal
status of its own, the case for giving AI authorship rights remains weak unless a
proper channel and chain can be established to create liability for the acts of AI.
Pursuant to Section 57, the special rights of the author may also be disputed. The
special rights of the author, known as moral rights, include the right to paternity
(the right to be associated and recognized with the work) and the right to integrity
(the right to restrain, or claim damages against, any act which may be prejudicial to
the author's honour or reputation). Therefore, if an AI is recognized as the author of
a work, these rights may become redundant, as the AI may not be able to ascertain
whether any act has affected the honour or reputation of the original work.
Pursuant to the existing copyright laws of India, the author of a work has a right to
claim royalties, which cannot be waived. Therefore, where the AI is the author of the
work, questions arise as to who will determine the royalty, how the royalty will be
disbursed to the AI, and, if the AI is able to fix the amount of royalty itself,
whether that amount must be assessed for reasonableness.
For any work created by AI, accountability for the creation will be difficult to
enforce.
Blockchain and IP
While it is difficult to predict all potential IP-related applications of blockchain, I see three
specific fields of application pertinent to technology transfer and IP professionals.
1. Blockchain can help with IP rights management and technology transfer and
commercialization practices
Blockchain could be used by inventors looking to find potential investors while at the
same time safeguarding their inventions. A ledger might consist of a short description of
the character and goal of the invention, while those wishing to gain access to more
information on how the invention works would then have to accept the provisions of
a “smart contract”. Or blockchain could be utilized by patent holders wishing to find
potential licensees for related know-how and trade secrets in addition to the patented
invention.
2. Blockchain as an IP registry
Blockchain can also serve as a technology-based IP registry where IP owners can keep
hashed digital certificates of their IP and use the platform to get royalties from those
who use their creations and inventions using smart contracts. Often, the approval wait
times of patent agencies and other regulatory bodies are long. This delay can hamper
the first-mover advantage in many industries where incumbents must act fast to protect
their inventions and stay at the top of the game. By replacing centralized registration
systems with decentralized ones, it will be easier to register new IP and update filings
and transfer ownership at any time. With blockchain, regulatory agencies will be able to
achieve more with fewer resources.
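To illustrate the registry idea, here is a simplified, in-memory sketch of registering a hashed digital certificate of a work and later verifying it. It is not a real blockchain (there is no distributed consensus or persistence), and the function names and record fields are assumptions made for illustration only.

```python
# A toy ledger illustrating the hashing-and-timestamping idea behind a
# blockchain IP registry. Only a fingerprint of the work is stored.
import hashlib
import json
import time

ledger = []  # stand-in for an append-only chain of records

def register_work(owner: str, work_bytes: bytes) -> dict:
    """Record a fingerprint of the work, not the work itself."""
    record = {
        "owner": owner,
        "sha256": hashlib.sha256(work_bytes).hexdigest(),
        "timestamp": time.time(),
        "prev_hash": ledger[-1]["record_hash"] if ledger else None,
    }
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(record)
    return record

def verify_work(work_bytes: bytes) -> list:
    """Later, anyone holding the work can check whether it was registered."""
    digest = hashlib.sha256(work_bytes).hexdigest()
    return [r for r in ledger if r["sha256"] == digest]

register_work("Author A", b"manuscript of an original song")
print(verify_work(b"manuscript of an original song"))  # match found
print(verify_work(b"a different work"))                # no match
```

Note that a plain hash only proves that an identical copy was registered at a given time; detecting near-copies or substantial similarity would require additional techniques beyond this sketch.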
Blockchain can be used to catalog and store original works. Often, there are no
adequate means for authors to catalog their works and copyright ownership can be hard
to prove. It can also be difficult for authors to see who is using their work, and it is
equally difficult for third parties to know from whom to seek a license. Authors are often
unable to stop infringements or to monetize their works successfully. With blockchain,
copyrights need not be registered and can come into existence automatically upon the
creation of original qualifying work.
When the IP work is registered and verified using blockchain-based platforms, authors
can search across a whole host of different sources simultaneously to ascertain who is
using their work. This enables IP owners to identify and stop infringements and makes it
easier to license their IP works. In this sense, blockchain can serve as an enforcement
tool. With a blockchain-based registration system, verifying whether a new song is or
isn’t infringing upon the existing IP of a previously registered song will be much simpler.
This type of blockchain-based detection system can be applied to text, art, and music
with the help of artificial intelligence.