
CS158-2 Activity #5

Instructions:

Answer the following questions in less than 100 words each.

Provide 10 legal issues stated in the article “Legal and human rights issues of AI: Gaps, challenges and
vulnerabilities” by Rowena Rodrigues. For each legal issue, answer the questions below.

1. Artificial Intelligence issue and description


2. Why is it considered an issue, and how does it impact AI?
3. How is it currently being addressed?
4. Is it likely to be resolved soon, or will it be a long-term problem?
5. In your own opinion, how can the issue be addressed?

 Lack of Algorithmic Transparency

1. Artificial Intelligence issue and description

The lack of algorithmic transparency has come to the forefront of discussions on the legal
parameters of Artificial Intelligence. The issue is strongly emphasized because today's AI
applications are steadily being integrated into almost everything, which poses high risks in
information-sensitive areas. The risk of what an AI does in a given context is greatly elevated by a
glaring lack of useful and accessible information for the public. The lack of access to and
transparency about the inner workings of a deployed AI system is a volatile subject, as
accountability and fairness are inevitably compromised.

2. Why is it considered an issue, and how does it impact AI?

The lack of algorithmic transparency is an issue because limited accessibility and transparency
of information, whether intentional or not, compromises what the public knows about an AI
algorithm's functionality, parameters, and operating procedure. Obscuring this necessary
information not only sows distrust among the general public but, more importantly, raises legal and
ethical concerns. If people do not know what went wrong with a transaction, loan application, or
flight booking beyond a notice that the decision was processed by software, doubt and distrust will
hamper the future of AI-related endeavors.

3. How is it currently being addressed?

The issue is currently being addressed, according to an EU Parliament study, through four
options. The first is raising awareness and knowledge about AI and assigning watchdogs and
whistleblowers to keep AI in check. The second is establishing accountability for the use of AI in
public-sector decision-making, which coincides with the third, overseeing the application of AI and
checking its legal parameters. The final option is a call for global-scale algorithmic governance.
Aside from these four is the implementation of algorithmic impact assessments and a transparency
standard.
4. Is it likely to be resolved soon, or will it be a long-term problem?

The issue of transparency is a subjective and delicate subject that will not be agreed upon
in unison. Although the aforementioned solutions are viable, they are all relatively new strategies
that remain subject to much improvement. Because these proposed solutions are new to everyone,
assessing and evaluating their efficacy in regulating algorithmic transparency will take time to
mature, be refined, and be implemented as standard practice. Resolving the issue of algorithmic
transparency will take time and a step-by-step mindset.

5. In your own opinion, how can the issue be addressed?

The first and foremost action to take in addressing transparency issues in AI is to disseminate
information on how AI works. Information is crucial in building a society that is aware of the
potential benefits and risks of AI. This would keep the public from developing distrust and doubt
toward AI, which would greatly affect how it is developed and integrated into society. Solutions
such as amending laws or adopting a standardized evaluation of AI and its algorithmic transparency
will only be attainable and viable if the general public knows what these laws and regulations stand
for.

 Cyber Security Vulnerabilities

1. Artificial Intelligence issue and description

Cyber security vulnerability is an issue that stems from how integrating AI into surveillance
or national security opens new methods of attack that can compromise safety and security from the
local level up to a global scale. AI- and network-based integration and intervention pose a looming
concern, as methods of containing potential cyber security problems only keep even pace with the
emergence of new methods of attack.

2. Why is it considered an issue, and how does it impact AI?

Attacks involving cyber security come in many forms, and often go unnoticed. This is the
primary reason cyber security vulnerability is a heavy issue. More often than not, problems related
to cyber security are intentionally kept hidden from the general public until after the issue is
resolved. This filtering of knowledge is compounded by the many ways cyber security impacts an
individual's rights – strategic targeting of political messages, predictive policing algorithms,
surveillance of civilians, and many others. This breaching of personal security greatly affects how
people see AI and cyber security.
3. How is it currently being addressed?

Some of the approaches to addressing cyber security vulnerability issues involve developing
recovery mechanisms for sensitive data and information, along with including human analysts in
critical decision-making. Risk management programs are also being used and developed to properly
evaluate the prospect of utilizing AI in cyber security. Software upgrades and updates have always
been the straightforward way of preventing cyber security networks from being compromised,
especially now that cyber warfare is as frequent and dangerous as conventional warfare.

4. Is it likely to be resolved soon, or will it be a long-term problem?

Resolving the issue of cyber security vulnerabilities involves both proactive and responsive
approaches to developing cyber security policies and regulations, along with developing designs and
implementation methods tailored to reduce the risk of compromise. Although the aforementioned
methods are already in place, reality differs from the ideal. Many things have to be taken into
consideration in finding a long-term solution to an ever-evolving and adapting problem. As such, it
will take time for the issue to be addressed in a manner that remains adaptable and viable for the
years to come.

5. In your own opinion, how can the issue be addressed?

Addressing cyber security vulnerability is a herculean task. Instead of trying to solve this
issue in a single take, we first need to manage the most obvious but overlooked problems. Solving
this issue needs to start at the local level, then slowly build up to the city, the country, and
eventually the world. Regulating how predictive algorithms work in social media is a viable first step
in addressing cyber security vulnerability. Through this, we can better understand the nature of
cyber-attacks, which can then be used in designing a cyber architecture that is safe for everyone.

 Unfairness, Bias, and Discrimination

1. Artificial Intelligence issue and description

The issue of unfairness, bias, and discrimination in Artificial Intelligence is directly caused by
the use of algorithms and automated decision-making systems (Hacker, 2018). Specifically, the use
of these systems and decision-making algorithms in processing and/or evaluating big data and
other sensitive information carries the possibility of discrimination and infringement of basic human
rights.

2. Why is it considered an issue, and how does it impact AI?

The tendency of AI to exhibit unfairness, bias, and discrimination is a major issue because it
not only infringes fundamental individual rights but also leads to differing treatment and
discrimination in fairness and equality of opportunity in access to a number of services (education,
employment, health, criminal justice, etc.) that are evaluated by automated systems. This pushes
AI development to minimize algorithmic discrimination and to provide transparent processing of
personal data within an acceptable ethical framework (European Parliament, 2017).

3. How is it currently being addressed?

The first proposal for addressing the matter is to conduct regular assessments that evaluate
the credibility of data and examine whether these data are affected by biased elements. To make
the algorithmic and technological advancement of AI less problematic, there is also a proposal to
include a human who can intervene and be part of the system (Berendt and Preibusch, 2017), along
with providing a transparent algorithm so that users know how the system operates and the
justification behind a given decision. IEEE P7003, an Algorithmic Bias Considerations standard, is
also in development to address the issue.

4. Is it likely to be resolved soon, or will it be a long-term problem?

While there are laws and amendments being developed and enforced today to minimize the
risk of discriminatory and biased behavior in AI decision-making systems, they still fall short,
particularly in cases where such anti-discrimination safeguards fail to extend to areas that are
expressly protected. Human inclusion in the system may also prove detrimental rather than helpful,
and a transparent algorithm does not equate to better public understanding either. The nature of
this issue calls for a holistic, interdisciplinary approach that is science-backed and ethical, which will
take time to spread and be executed properly.

5. In your own opinion, how can the issue be addressed?

Unfairness, bias, and discrimination in AI systems inevitably fall on the responsibility of the
developer. As such, there is an emphasized need for accountability among the developers of AI
systems and algorithms. A general agreement and consensus should be laid down so that all
developers understand the gravity and responsibility of their work. Development should be
regulated, but not hampered, in such a way that subsequent work remains within the bounds of
technical and ethical viability. Regulation of developers and their AI developments will minimize
discriminatory tendencies and will also shape how future endeavors take form.

 Lack of Contestability
1. Artificial Intelligence issue and description

Lack of contestability is an AI legal issue concerning the presently lacking means of
challenging AI-based systems and algorithms when these technologies produce unexpected, unfair,
or discriminatory results – results that infringe individual dignity and fundamental rights
(Bayamlıoğlu, 2018).

2. Why is it considered an issue, and how does it impact AI?

Lack of contestability is an issue because it prevents affected individuals from properly
challenging decisions and results generated by AI systems and algorithms. Legitimacy is hampered
when contestability cannot be properly exercised, which amounts to a failure to abide by the law. If
legitimate interests and rights are affected by new AI technologies that cannot be challenged to
provide the reason behind a particular decision, then this not only affects fundamental civil rights
but also challenges the efficacy of law and governance.

3. How is it currently being addressed?

According to Almada (2019), as cited in the article, the lack of contestability is being addressed
through “contestability by design”. It is a proposal aimed at better protection against decisions
made by automated processing, applied at each stage of an Artificial Intelligence system's lifecycle.

4. Is it likely to be resolved soon, or will it be a long-term problem?

This issue will take a long time to be resolved. One reason is the inefficacy of general
safeguards: an individual's right to express his or her point of view, to challenge a decision, and to
obtain an explanation of that decision is not particularly applicable to the automated processing of
data. Another reason is that an automated decision is hard to contest without a clear explanation of
why it reached that decision – which would require involving professionals capable of identifying
false positives and discriminatory outcomes, something that is neither cost-effective nor efficient.

5. In your own opinion, how can the issue be addressed?

This issue must be addressed at many different levels of governance. However, given that
laws covering these new technologies will take time to be properly designed, the temporary solution
is to provide updated and supplementary regulations that do not necessarily act as separate laws,
but rather augment existing ones to better adapt to incidents that involve newer technologies.
Doing so empowers individuals to a certain degree by giving them a legal means of challenging
automated decisions.

 Legal Personhood Issues

1. Artificial Intelligence issue and description


Legal personhood issues revolve around whether AI (systems and robots) should be treated
within existing legal categories or whether a new category, with its own features and parameters,
should be created. Since AI is a fairly new development in society, its existence challenges existing
legal categories and whether existing laws can accommodate it. AI developments not only challenge
legal classification but also divide people on whether traditional morality should likewise be applied
to Artificial Intelligence.

2. Why is it considered an issue, and how does it impact AI?

Establishing legal personhood for AI systems has been deemed by the EU High-Level Expert
Group on AI (AI HLEG) as something that fundamentally distorts the concept of human
accountability and responsibility, posing a significant moral hazard should the endeavor be pushed.
Those in favor, meanwhile, see legal personhood as a pragmatic solution under which an AI can be
held accountable, and as support for its moral rights. Treating non-biological intelligence as a new
legal personality impacts future discussions of AI, as relating AI systems to something akin to
human personhood will inevitably involve morality and ethics, which further complicates the matter.

3. How is it currently being addressed?

According to the paper, there have been no significant measures or breakthroughs in properly
addressing the legal personhood of AI at the international, EU, or national level. Although the issue
has been brought up for discussion and debate, an agreement (at the international or regional level)
has still not been reached on how to address this relatively new development, primarily because of
the sensitive and political nature of the issue.

4. Is it likely to be resolved soon, or will it be a long-term problem?

Discussions that involve societal morals and ethics will linger and persist as long as society
exists. The same goes, if not more so, for the issue of the legal personality of AI. Even discussions
about this new entity are divided, which reflects how every person has a different interpretation of
the matter at hand. Decisions regarding this issue will always involve a certain degree of
subjectivity, which inhibits proper deliberation on how to deal with it. This issue will be very difficult
to resolve and will require a lot of time.

5. In your own opinion, how can the issue be addressed?

Addressing the legal personhood of AI would inevitably rest at the national level. Nations will
have different methods of approaching this matter, and unless a national-level understanding and
resolution has been reached by virtually all nations, an international-level agreement will not come
to fruition. Emotional and economic appeals will play a big role in how different nations approach
the issue, which is both necessary and troublesome. However superficial and fictitious the issue
may appear at face value, nations need to come to terms with their own approach before addressing
AI personhood internationally.

 Intellectual Property Issues

1. Artificial Intelligence issue and description


Intellectual property issues revolve around the different aspects of the development and
operation of an AI system and who should be considered the lawful owner of each. This includes
ownership of the inventions and works generated by AI, of the datasets an AI uses to self-learn,
and, more importantly, who to hold accountable for any invention or work that may impinge upon
human rights and/or violate the law.

2. Why is it considered an issue, and how does it impact AI?

Intellectual property in the realm of Artificial Intelligence is a serious issue, especially in
today's world, which revolves around information and innovation. At the rate and scale of today's AI
developments, liability and recognition of ownership, or the absence of it, can start conflicts or end
them. The issue poses real risks to people and society, especially for controversial developments.
Documenting and identifying which entity is responsible for an AI development, the data it uses, and
the product or invention developed reinforces legality and credibility, which helps gain people's
confidence and ensures legal liability.

3. How is it currently being addressed?

Current approaches to this issue include the creation of laws that protect computer-generated
literary, dramatic, musical, or artistic works, such as those of the UK. The creator of an AI design
automatically holds accountability for and ownership of the AI, except when the work was
commissioned or created in the course of employment, in which case it is owned by the employer or
the commissioner of the work. Registered trade marks are also used to treat an AI system as
personal property, unless a personal property right is given to the AI system itself.

4. Is it likely to be resolved soon, or will it be a long-term problem?

Although there have been advances in addressing the intellectual property issues of AI, a
large number of these issues have yet to reach an adequate and conclusive agreement. This is
especially true today, as AI developments and their generated works, inventions, or breakthroughs
become more intuitive, challenging conventional classifications of intellectual property. Further
research into, and understanding of, the present state of AI-generated works is needed to achieve a
proper consensus on intellectual property issues, which will take time to mature, but not over such
a long time frame.

5. In your own opinion, how can the issue be addressed?

Issues of intellectual property rights in AI or computer-generated works rely greatly on
amending the corresponding laws so that they are tailored specifically to this relatively new issue.
Applying conventional methods of designating property rights can only be effective for so long, and
we will eventually need a proper, bespoke law catered to property issues arising from developments
in Artificial Intelligence. Furthermore, with a proper law addressing this issue, conflicts over rightful
ownership can be resolved more efficiently, and accountability is preserved and overseen.

 Adverse Effects on Workers

1. Artificial Intelligence issue and description

The issue was depicted in the IBA Global Employment Institute's 2017 report as a trend linking
AI development and its integration into production and work to the workplace situation of traditional
human workers. The issue delves into the economic and social consequences of the rapid, adverse
inclusion of AI in the workplace, raising moral and ethical dilemmas in addressing it.

2. Why is it considered an issue, and how does it impact AI?

The effects, or rather consequences, of integrating AI into production and work are a critical
issue because they directly affect human lives. Changes in the workplace are already taking place –
workers laid off in favor of more precise robots, lower demand for human workers, the integration of
untrained workers, and a generally new job structure. These effects impact not only the autonomy
of workers but also bring social consequences (unemployment, poverty, displacement, etc.). Rapid
integration of AI (systems and robotics) into the workplace without a proper transition will breed
distrust among the public.

3. How is it currently being addressed?

The most prominent approach to addressing the issue is retraining the existing workforce to
adapt to the new work environment. Aside from this, the educational system is also being eyed for
refocusing and modernization at all levels so that people will have the skills the new work landscape
requires. As AI continually transforms jobs, there are also movements toward supporting workers
whose jobs are about to change or be removed. Social security systems are also being updated to
better support workers.

4. Is it likely to be resolved soon, or will it be a long-term problem?

Although relatively new, the issue of AI affecting workers has been addressed rather
effectively in recent years. Resolution of this issue takes many forms but shares a common measure
of success – effectively helping the human workforce adjust to new innovations and developments.
While there is a general consensus that AI will still disrupt the work environment to some degree,
the risk is not enough to require critical changes in educational and economic policy measures.
Given the adaptive approach of governments, this AI issue will be resolved within a shorter
timeframe.
5. In your own opinion, how can the issue be addressed?

Existing approaches to the issue are already showing their effectiveness and adaptability. One
area with room for improvement is a softer transition toward AI-related workplace and production
changes. This is particularly true for less-developed countries, where workers are already at a
disadvantage. Even if fallback and support mechanisms are in place for workers, a transition that is
too rapid and sudden will still disrupt the work environment in these countries, which may result in
cascading consequences that prove to be more than the nation can handle.

 Privacy and Data Protection Issues

1. Artificial Intelligence issue and description

Privacy and data protection issues highlight possible risks of infringement of individuals' data
protection rights. This issue, as described by Wachter and Mittelstadt (2019), concerns algorithmic
accountability for the processing of sensitive data, underlining how individuals have little control
over the data being gathered and little insight into or knowledge of how these personal data are
being used – effectively invading personal privacy or damaging reputations.

2. Why is it considered an issue, and how does it impact AI?

Privacy and data protection is a critical and sensitive issue that risks violating individual
rights and privacy. The implications of the sheer technological capability of these new AI systems
for privacy and informed surveillance highlight the intrusive nature of systems involved in big data
processing and data handling. Potential damage to rights and reputation is the key factor behind
growing concern over AI systems, which translates into demands for transparency and
accountability from the AI systems and the entities behind them.

3. How is it currently being addressed?

The European Union has already put in place privacy and data protection law that provides
safeguards against privacy infringements, especially in the form of transparency and information
access for any concerned individual. Informed consent is also upheld through notices to users about
potential harms relating to the use of a particular AI system or service. There is also a suggestion to
use secure multi-party computation (MPC), which allows multiple parties to compute a function
together while keeping each party's input private (a small illustrative sketch is given below).
Anonymization, privacy notices and impact assessments, and privacy by design are further
approaches already applied to the issue.
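
For illustration only (this sketch is not from Rodrigues' article): a minimal Python example of
additive secret sharing, one simple building block behind MPC. The party names, input values, and
modulus below are arbitrary assumptions.

import random

Q = 2**61 - 1  # large prime modulus; all arithmetic is done mod Q (arbitrary choice)

def share(secret, n_parties):
    # Split a secret into n random shares that sum to the secret mod Q.
    shares = [random.randrange(Q) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    # Recombine shares to recover the hidden value.
    return sum(shares) % Q

# Hypothetical example: three parties jointly compute the sum of their private inputs.
inputs = {"party_a": 25, "party_b": 40, "party_c": 7}
all_shares = {name: share(value, 3) for name, value in inputs.items()}

# Party i only ever sees the i-th share of every input, never the raw values.
partial_sums = [sum(all_shares[name][i] for name in inputs) % Q for i in range(3)]

# Combining the partial sums reveals only the final result (72), not any individual input.
print(reconstruct(partial_sums))

In a real MPC protocol the shares would be exchanged over secure channels and richer functions
would be supported, but the principle that each party's raw input stays private is the same.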
4. Is it likely to be resolved soon, or will it be a long-term problem?

There is still a glaring lack of protection against inferences drawn from sensitive data and
against uninformed surveillance of individuals. Added to this, the context of AI and how it functions
is rapidly changing, which prevents proper data protection from actually being put in place and
enforced at an ideal level of efficacy. Among other concerns, already implemented measures might
fall short because of AI's tendency to conflict with societal values and human rights. Even with
existing protection measures, a proper resolution to this issue will take a long time.

5. In your own opinion, how can the issue be addressed?

Privacy and data protection issues should be addressed legally, through proper laws and their
subsequent enforcement. We have seen the implications of uninformed surveillance and data use
even in social media applications, but there is more to it. Oversight and regulation of potential
misuse of private data will only be effective if conventional laws are replaced by laws designed to
tackle this issue. Aside from this, information should be disseminated about an individual's
responsibilities when using particular automated systems, such as social media applications, so that
the public has some means of protecting itself.

 Liability for Damage

1. Artificial Intelligence issue and description

The issue of liability for damage in AI-related incidents revolves around damage done to a
person or property. Given that many parties are involved in the development of an AI system or
technology, properly establishing who should be held liable becomes more complex, if not
convoluted. Added to this are several other factors that need to be taken into consideration before
determining who is liable.

2. Why is it considered an issue, and how does it impact AI?

Liability for damage is an issue directly connected with property rights issues. Holding
someone accountable for an untoward incident that caused damage to a person or property requires
finding out who owns the system or technology, because a clear legal basis is essential to a proper
exercise of jurisdiction. Incidents that involve AI without any legally liable entity to be held
accountable create significant distrust and concern among people. This hampers the introduction of
related technologies in the future, as people will be wary and doubtful of such technology.

3. How is it currently being addressed?

The prominent approach to liability at present is to address such incidents under criminal or
civil liability, whichever is more applicable to a given incident, directed at the entity responsible for
the development. There are also movements toward augmenting these liability models with
supplementary rules that can help address the issue. These supplementary rules better complement
existing laws and also act as a basis for jurisdiction.

4. Is it likely to be resolved soon, or will it be a long-term problem?

Although there are efforts to uphold liability for damage in AI-related incidents and so ensure
the safety and protection of victims, the specific characteristics of these new technologies will
become ever more difficult for conventional laws and regulations to handle. Along with this, the
evolving nature of these technologies makes it more difficult to properly allocate liability to the
responsible parties. Designing regulations to better address this issue will take a long time for
research, design, and implementation, which is why the temporary answer of governments is
supplementary rules.

5. In your own opinion, how can the issue be addressed?

Liability for damage is a matter that will be resolved by the time a proper intellectual property
legal framework for Artificial Intelligence and its subsequent developments is made, because the
nature of such issues lies in the legal sphere of society. No matter how informed the public is about
these new technologies, once an incident happens and there is no law to apply, public awareness is
futile. Governments must recognize that apart from public information, proper laws and regulations
must be made to adapt to these new technologies and their potential risks.

 Lack of Accountability for Harms

1. Artificial Intelligence issue and description

The issue of the lack of accountability for harms revolves around calls for mechanisms to be
put in place that ensure responsibility, in a transparent manner, in the development and deployment
of AI-related technologies and systems. This is primarily caused by an unprecedented accountability
gap that affects causality, justice, and compensation (Bartlett, 2019).

2. Why is it considered an issue, and how does it impact AI?

The lack of accountability for harms is considered an issue because there is a need to be able
to legally and appropriately determine who is to be held responsible should violations or untoward
incidents happen. In the same way that liability for damage in AI developments is characterized,
this issue brings forth a call to develop guiding principles for action and a function of explanation in
AI developments.

3. How is it currently being addressed?

Legal accountability for harms that involve Artificial Intelligence is projected to take the form
of a “right to explanation”. Aside from this are the design of transparency safeguards, data
protection, and reporting obligations. The main thrust of how the issue is approached is an emphasis
on explanation, in the same way explanation is required for human-caused incidents and harm.

4. Is it likely to be resolved soon, or will it be a long-term problem?

As Bartlett (2019) stated, there is no perfect solution to Artificial Intelligence accountability.
Holding AI developers accountable could have adverse effects on AI development, since these
developers are, more often than not, pawns of higher-ups and companies; as a result, future AI
creators could become wary of what to create. Existing methods of addressing the issue, such as
the right to explanation, have proven impractical. That is why this issue will take time to be resolved
in a way that does not compromise AI development but still provides a satisfactory means of
objectively determining who is accountable.

5. In your own opinion, how can the issue be addressed?

The lack of accountability for harms should be addressed not by hastily taking uninformed
legal action and creating legal parameters that may do more harm than good. Such is the point
raised in the paper: if AI developers are held responsible and accountable without any legal basis,
AI development will suffer and the problem will not be solved at its root. Understanding and insight
into how AI development works must come first, before anything of substance is executed, to
ensure that those who should really be accountable are actually identified.
