
Reference: https://www.linkedin.com/pulse/artificial-intelligence-ai-its-impact-legal-profession-lexratio

Artificial Intelligence (AI) and its Impact on the Legal Profession
LexRatio (LegalTech)
Published Feb 20, 2023

I. Definition

Artificial Intelligence (AI) is intelligence demonstrated by computers and machines, and it includes perceiving and synthesizing information in order to make decisions. It is an umbrella term for a range of algorithm-based technologies that solve complex tasks. The goal of AI is to mimic the capabilities of the human mind; ideally, AI systems would think and act rationally, making unbiased and error-free decisions based on the available data. There are five basic components of AI: perception, learning, reasoning, problem-solving and language understanding. While "regular" software is programmed to perform a certain task or group of tasks, AI is programmed to learn to perform the task(s). In conventional software the primary artifact is the code; for AI, the primary artifact is data. As a first step in building an AI system, the machine collects raw data from various input sources, then identifies that raw data (images, text files, videos, etc.) and adds meaningful labels to provide context, so the machine can learn on its own or with human support. Traditional software takes an input, applies logic written in the code, and produces an output; a machine learning algorithm takes example inputs and outputs, derives the logic from them, and draws on previous outcomes when producing new ones. Traditional software also does the same thing over and over and does not change unless a human updates it, whereas AI changes over time because of its ability to learn.
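
To make the contrast concrete, here is a minimal, hypothetical sketch in Python using scikit-learn; the contract-value feature, the 100k threshold, and the labels are invented for illustration and are not from the article.

```python
from sklearn.linear_model import LogisticRegression

# Traditional software: the logic is written by hand in the code.
def is_high_risk_rule(contract_value_thousands):
    return contract_value_thousands > 100          # fixed, human-authored rule

# Machine learning: the program is given example inputs *and* outputs (labels)
# and derives the decision logic itself from that data.
X = [[20], [50], [90], [120], [200], [500]]        # inputs: contract value in $k
y = [0, 0, 0, 1, 1, 1]                             # outputs: 0 = low risk, 1 = high risk
model = LogisticRegression().fit(X, y)             # the "logic" is now learned, not hand-coded

print(is_high_risk_rule(150))                      # True; the rule never changes unless a human edits it
print(model.predict([[150]]))                      # likely [1]; the model can be retrained as new data arrives
```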

II. AI and existing legal framework

Businesses are using AI in all sorts of contexts. Numerous overlapping and still-developing legislative and regulatory initiatives apply to AI technologies, and many of the relevant legal acts have extraterritorial effect. This makes using AI lawfully increasingly challenging for businesses.

At the European Union level, we can expect changes due to the proposed AI Regulation. It is set to be the world's first comprehensive regulatory framework for AI (much as the GDPR is for personal data protection). It has a very extensive material scope and will apply to the whole AI development chain, including AI providers both inside and outside the EU, and the proposed sanctions are high. The AI Act will lay down harmonized rules for developing AI, placing it on the market, and post-market control. A central part of the act is AI risk assessment, which divides risks into four categories: unacceptable risk, high risk, limited risk, and minimal risk. If the AI Act is adopted in its current form, this is quite significant because it will prohibit uses of AI that contravene EU values. LegalTech solutions are not directly covered by Art. 5 of the proposed AI Regulation, which lists prohibited AI practices. The high-risk category mainly concerns LegalTech used by public authorities and courts. Most LegalTech solutions used by companies, lawyers, and clients are likely to fall into the limited-risk category, and all other systems will be considered minimal risk. Once the Regulation is adopted, LegalTech providers will have at least two years to prepare for its requirements.

Additionally, we can expect the adoption of a new EU AI Liability Directive, which would regulate damages caused by AI-related products and services and the liability of AI providers. Under the proposed Directive, a presumption of causality applies if three conditions are met: first, that a fault (non-compliance with a duty of care) has been demonstrated; second, that it is reasonably likely this fault influenced the output produced by the AI system, or the system's failure to produce an output; and third, that this output, or the failure to produce it, gave rise to the damage. A harmed person will thus need to prove that providers or users of high-risk AI systems have failed to comply with their obligations under the AI Act once it is adopted.

An appropriate regulatory framework that promotes sustainable AI by monitoring and mitigating the associated risks in the legal profession is a precondition for using AI more comprehensively in the legal domain.

III. Examples of using AI by Legal Professionals

Using AI in the legal domain is not new, but its adoption has been slow. Reviewing documents for litigation, analyzing contracts against required criteria, legal research and predicting case outcomes are just a few examples of AI-based software used by legal professionals.

When reviewing documents for litigation, lawyers search for specific keywords, dates, emails and other important documents. AI can learn what is relevant and what is not from previous searches conducted by legal professionals, and on that basis it can more accurately identify key documents for litigation. AI-based software can also help organize files, documents, emails, calendars and tasks; structuring large amounts of information makes the administration of legal documents more efficient. Machine learning systems can be trained to extract the relevant parts from a large body of information, and this is one of the best ways AI is being used in law right now: as an intelligent database of information.
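
As a rough illustration of how this kind of relevance learning might work, the sketch below trains a small text classifier with scikit-learn. The documents, labels, and pipeline are hypothetical and merely stand in for real e-discovery tools.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Documents a lawyer has already reviewed, labeled 1 = relevant, 0 = not relevant.
docs = [
    "email re: merger negotiation with Acme, March 2016",
    "invoice for office supplies",
    "draft term sheet for the Acme acquisition",
    "lunch menu for the holiday party",
]
labels = [1, 0, 1, 0]

# The model learns which words and phrases tend to mark a document as relevant.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(docs, labels)

# New, unreviewed documents can then be scored so reviewers see the most promising first.
new_docs = ["board minutes discussing the Acme merger", "parking garage receipt"]
print(classifier.predict(new_docs))              # e.g. [1 0]
print(classifier.predict_proba(new_docs)[:, 1])  # relevance scores used for prioritization
```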

As for analyzing contracts, adopting machine learning to review contracts is taking the legal industry by storm. One example is COIN, AI software used by JP Morgan to analyze commercial loan agreements; the bank plans to use the software for other types of legal documents too. Why? Because work that previously took hundreds of thousands of lawyer-hours can now be done in a few seconds. AI-based software can spot and identify issues that might have been missed by human lawyers, and it can review contracts faster and, in some cases, more accurately than humans.

AI-based contract management software learns from earlier contracts and data. This prior knowledge allows it to flag risks and missing key clauses, or to insert pre-approved clauses. Communication between the legal department and the sales department is simplified, and all of this speeds up contract approval time.

Legal research is commonly performed by legal practitioners and scholars. It can take a lawyer hours, days or even weeks to find a relevant case or an explanation of a legal concept. AI-based software can do it almost instantly, provided the AI system has enough data and a sufficiently powerful algorithm. Some attorneys are not even aware they are using AI in their research, since it has been integrated into many research services. With intelligent legal research software, attorneys can test out variations in fact patterns or legal analyses to identify the most advantageous strategy. Comparative analysis between cases in different countries and states, or between federal and state courts, no longer takes days of exhaustive scanning.

And lastly, AI can predict case outcomes. Accurately assessing the likelihood of a successful outcome for a lawsuit can be very valuable. No lawyer has complete knowledge of all the data; AI, however, can access more data and analyze it much faster than a lawyer, so it can predict case outcomes more quickly and, in some cases, with higher precision. This allows attorneys to decide how much to invest in experts, whether to take a case on contingency, or whether to advise clients to settle. By analyzing large volumes of data very quickly, AI can help with picking the most effective witness, choosing the best way to present a case, and proposing responses to opposing arguments. Clients can also more easily decide whether it is worthwhile to proceed with a case. Furthermore, AI applications are being used as advisers to judges on bail and sentencing decisions: such tools assess the recidivism risk of defendants and convicted persons in decisions on pre-trial detention, sentencing or early release. One such tool, Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), is used in several U.S. states. This is no longer the future but a reality.

IV. Benefits of using AI by Legal Professionals

AI offers new opportunities for digitalizing legal services. Delegating and automating legal tasks where decisions must be reached on the basis of large quantities of data is one of the most common ways of using AI in legal practice, and AI technologies can make such decisions at near-instantaneous speed.

Other benefits of using AI-based software include:

• Elimination of time-consuming tasks and automation of low-level tasks: Many activities are routine, paperwork-based duties that divert legal professionals' attention away from the strategic demands of a case. These tasks can be automated in software, freeing up lawyers' time to focus on more complicated issues. AI's ability to handle repetitive tasks is one of its many advantages for the legal sector.

• Producing high-quality work: AI-based software isn't tired, sleepy, confused or biased. It can help manage documents and cases more efficiently, review contracts for missing clauses, and highlight typos, wrong terminology or ambiguous terms through comparison and its ability to learn.

• Earlier and more accurate risk assessment: AI-based software can examine information in real time, which allows lawyers to spot potential risks early and prevent legal problems later.

• Efficient and accurate background checks: Part of a lawyer's work is extensive background checking of clients. AI can perform this task accurately and efficiently, in less time than humans need.

• Reduced stress for legal professionals: Legal professionals juggle multiple tasks, some of them very time-consuming. As stated earlier, low-level tasks can be delegated and automated, giving lawyers more time to focus on creative analysis and more complicated duties. Scheduling, for example, is essential to completing tasks on time, and software that sends notifications and reminders as deadlines approach is a useful feature.

• Remote work: Having grown used to working from home during the COVID-19 pandemic, legal professionals also benefit from AI technologies in this domain. Accessing documents and cases from home and setting up video meetings to discuss and analyze cases have become the new reality. According to the Legal Industry Report 2021: Lessons Learned from the Pandemic, the use of productivity-boosting software often helped increase profit or stabilize revenues for some firms. That being said, remote work is here to stay in the short term and will most likely become commonplace in the foreseeable future.

V. Concerns regarding AI

The first concern is the violation of privacy laws. Some countries have comprehensive data protection laws that restrict AI and automated decision-making involving personal information. The EU General Data Protection Regulation (GDPR) requires organizations that use automated processing, such as AI, to take certain measures to ensure information is processed fairly. AI providers may face challenges in defining the purpose of processing when developing AI, because it is impossible to predict what the algorithms will learn and what they will use the data for; in practice, data might end up being used for new purposes.

The second concern is based on ethical considerations regarding AI. Safe and ethical management of AI systems is mandatory. The algorithms that underpin AI technology need to be transparent, or at least understandable. AI systems should also be aligned with human values, so that they function properly in the society and profession in which they are intended to be used. As an example, Amazon used an AI tool in its hiring process, but the tool discriminated against women and had to be shut down.

The third concern relates to the use of AI in intellectual property (IP). Work created by AI cannot be protected under existing copyright laws, and the same is true of inventions. With its ability to learn and be creative, AI might create new inventions that need protection. But AI cannot itself own IP; only a natural person or a legal entity can. So should the user of the AI system own the IP rights, should the rights go to the developer of the AI system, or should the AI system itself be allowed to hold them?

VI. Conclusion: Is AI ready to practice law?

AI is a welcome tool in the cause of justice. It can work on repetitive tasks very quickly, learn, make unbiased decisions and produce high-quality work; it can help lawyers by predicting case (trial) outcomes, drawing on previous outcomes when producing new ones, and identifying patterns and applying logic as a creative problem solver; and it can advise legal professionals (attorneys, judges, etc.) as well as clients on legal issues, and much more.

As AI technologies continue to develop, they are already unlocking many opportunities to transform and improve the field of law. The most important advantage is that AI saves time. Computers can analyze large amounts of data, more thoroughly than humans can, in a tiny fraction of the time. Time savings mean monetary savings, since less legal professional time is involved in finding answers and identifying mistakes, and those savings can quickly make up for the cost of new technology. AI can be both the biggest opportunity for, and potentially the greatest threat to, the legal profession. Today Artificial Intelligence represents an opportunity for a law firm to be a leader in the legal profession, but soon it will be a matter of keeping up rather than leading. At first, only large law firms and companies will have the financial strength to utilize AI. However, one thing is certain: AI is stepping into the legal profession and taking over, and lawyers need to embrace the new technologies. Otherwise, they will not be able to keep up with the competition.

This Article was prepared by Drazen Nikolic. Dražen is a legal professional from Bosnia and
Herzegovina with experience in corporate governance, compliance, contracts and financial crime.
Also, he’s a LegalTech enthusiast in his free time.
Reference: https://iapp.org/resources/article/us-federal-ai-governance/

US federal AI governance: Laws, policies and strategies
Müge Fazlioglu, CIPP/E, CIPP/US
Published: June 2023

Halfway into 2023, generative artificial intelligence tools such as OpenAI's ChatGPT have achieved growing
and sustained popularity. In May, chat.openai.com received about 1.8 billion visits over the previous month,
with an average visit duration of eight and a half minutes.

Yet, as AI is adopted around the world, it raises as many questions as it provides answers. Chief among these
questions is: How should AI be governed?

AI governance around the world

With AI making inroads into every sphere of life, lawmakers and regulators are working to regulate the
technology in ways that appreciate its full range of potential effects — both the benefits and the harms.
Unsurprisingly, countries have taken differing approaches to AI, each reflective of their respective legal
systems, cultures and traditions.

On 11 May, the European Parliament voted in favor of adopting the Artificial Intelligence Act, which, in its current
form, bans or limits specific high-risk applications of AI. The law is now set for plenary adoption in June, which
will trigger trilogue negotiations between Parliament, the European Commission and the Council of the
European Union.

In the U.K., Secretary of State for Science, Innovation and Technology Michelle Donelan recently released
a white paper, aiming to establish the U.K. as an "AI superpower." The strategy provides a framework for
identifying and addressing risks presented by AI while taking a "proportionate" and "pro-innovation" approach.

In Canada, the proposed Artificial Intelligence and Data Act is part of a broader update to the country's
information privacy laws, and is one of three pieces of legislation that comprise Bill C-27, which passed its
second reading in the House of Commons in April.

Singapore's National AI Strategy, meanwhile, includes the 2019 launch of its Model AI Governance Framework, its companion Implementation and Self-Assessment Guide for Organizations, and a Compendium of Use Cases, which highlights practical examples of organizational-level AI governance.

And, on 11 April, the Cyberspace Administration of China released its draft Administrative Measures for
Generative Artificial Intelligence Services, which aim to ensure content created by generative AI is consistent
with "social order and societal morals," avoids discrimination, is accurate and respects intellectual property.

AI governance policy at the White House

Within the context of these global developments in AI law and policymaking, a federal AI governance policy
has also taken shape in the U.S. The White House, Congress, and a range of federal agencies, including the
Federal Trade Commission, the Consumer Financial Protection Bureau and the National Institute of Standards
and Technology, have put forth a series of AI-related initiatives, laws and policies. While numerous city and
state AI laws also came into effect over the years, federal laws and policies around AI are of heightened
importance in understanding the country's unique national AI strategy. Indeed, the foundation of the federal
government's AI strategy has already been established and provides insight into how the legal and policy
questions brought about by this new technology will be approached in the months and years ahead.

Obama administration
The earliest outlines of a federal AI strategy were sketched during former President Barack Obama's
administration, most directly in "Preparing for the Future of Artificial Intelligence," a public report issued by the
National Science and Technology Council in October 2016. It summarizes the state of AI within the federal
government and economy at the time, while touching on issues of fairness, safety, governance and global
security. Its nonbinding recommendations centered around applying AI to address "broad social problems,"
releasing government data sets in pursuit of open training data and open data standards, drawing on
"appropriate technical expertise … when setting regulatory policy for AI-enabled products" and fostering a
federal workforce with diverse perspectives on AI technology. The report built on three previous White House
reports, from 2014, 2015, and 2016, on big data and algorithmic systems.
Released a day later in conjunction with the report, the National Artificial Intelligence Research and
Development Strategic Plan sought to identify priority areas for federally funded AI research, "with particular
attention on areas that industry is unlikely to address." It urged the federal government to "emphasize AI
investment in areas of strong societal importance that are not aimed at consumer markets — areas such as AI
for public health, urban systems and smart communities, social welfare, criminal justice, environmental
sustainability, and national security."
Updates to the National AI R&D Strategic Plan, which occurred in 2019 and 2023, reaffirmed the seven core
strategies laid out in 2016 and added two new ones focused on expanding public-private partnerships and
international collaboration.

Trump administration
Another significant development in federal AI governance policy occurred when former President Donald
Trump signed Executive Order 13859, "Maintaining American Leadership in Artificial Intelligence," in February
2019. Executive Order 13859 set in motion the American AI Initiative, which led to the issuance of further
guidance and technical standards that would determine the scope of AI law and policymaking in the U.S. over
the following years.
Among other things, the order required former Director of the Office of Management and Budget Russell
Vought to issue a guidance memorandum, following public consultation, in November 2020. The purpose of
the OMB guidance was to help inform federal agencies' development of approaches to AI that "consider ways
to reduce barriers to the use of AI technologies in order to promote their innovative application while protecting
civil liberties, privacy, American values, and United States economic and national security." The tone of the
OMB guidance was described in Lexology as "fairly permissive" for warning agencies to "avoid a precautionary
approach that holds AI systems to an impossibly high standard."
In September 2019, the White House also hosted The Summit on Artificial Intelligence in Government, which
aimed to generate ideas for the adoption of AI by the federal government. The summit's key takeaways
revolved around sharing best practices between government, industry and academia; fostering collaboration
through an AI center of excellence model; and training/reskilling the federal workforce in the use of AI.

Biden administration
AI governance policy in the U.S. evolved further during President Joe Biden's administration. Indeed, another
milestone in federal AI governance policy came in October 2022 with the release of the Blueprint for an AI Bill
of Rights: Making Automated Systems Work for the American People. Published by the White House Office of
Science and Technology Policy, the document lays out five principles to "guide the design, use, and
deployment of automated systems to protect the American public in the age of" AI. These principles revolve
around safety and effectiveness, protection against algorithmic discrimination, data privacy, notice and
explanation, and human involvement in decision-making. The white paper provides supplemental sections
regarding why each principle is important, what should be expected of automated systems with regards to
them and how they can be embedded into laws, policies and practices.
Also, in February of this year, President Biden signed the Executive Order on Further Advancing Racial Equity
and Support for Underserved Communities Through The Federal Government, which "directs federal agencies
to root out bias in their design and use of new technologies, including AI, and to protect the public from
algorithmic discrimination."
More recently, in late May 2023, the Biden administration took several additional steps to further delineate its
approach to AI governance. The White House OSTP issued a revised National AI R&D Strategic Plan to
"coordinate and focus federal R&D investments" in AI. OSTP also issued a Request for Information seeking
input on "mitigating AI risks, protecting individuals' rights and safety, and harnessing AI to improve lives," with
comments due by 7 July.

AI governance policy in Congress

The deliberative branch of government, Congress, has approached AI law and policymaking in its
characteristically incremental fashion. Until 2019, most of lawmakers' attention around AI was absorbed by
autonomous or self-driving vehicles and concerns about AI applications within the national security arena.

For example, in the 115th Congress in 2017-2019, Section 238 of the John S. McCain National Defense
Authorization Act for Fiscal Year 2019 directed the Department of Defense to undertake various AI-related
activities, including the appointment of a coordinator to oversee activities in the realm. This Act also codified
(at 10 U.S.C. § 2358) a definition of AI, which is:

 "Any artificial system that performs tasks under varying and unpredictable circumstances without
significant human oversight, or that can learn from experience and improve performance when exposed to
data sets.

 An artificial system developed in computer software, physical hardware, or another context that solves
tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.

 An artificial system designed to think or act like a human, including cognitive architectures and neural
networks.

 A set of techniques, including machine learning, that is designed to approximate a cognitive task.

 An artificial system designed to act rationally, including an intelligent software agent or embodied robot
that achieves goals using perception, planning, reasoning, learning, communicating, decision-making, and
acting."

Another key AI-related legislative development occurred when the National AI Initiative Act of 2020 became
law on 1 Jan. 2021. Included as part of the William M. (Mac) Thornberry National Defense Authorization Act for
Fiscal Year 2021, this legislation focused on expanding AI research and development and further coordinating
AI R&D activities between the defense/intelligence communities and civilian federal agencies. The Act also
legislated the creation of the National Artificial Intelligence Initiative Office, which sits within the White House
OSTP and is tasked with "overseeing and implementing the U.S. national AI strategy."

Congress has also amended existing laws and policies to account for the increasing use of AI in various
arenas. For example, in passing the FAA Reauthorization Act of 2018, Congress added language directing the Federal Aviation Administration to periodically review the state of AI in aviation and to take necessary steps to address new developments. The Advancing American AI Act and the AI Training Act were among
other AI-related pieces of legislation introduced or passed by the 117th Congress.

Recently proposed legislation related to AI


Within the current 118th Congress, other bills have also been proposed to amend existing laws and better
equip them for the AI era. Proposed in May 2023, HR 3044 would amend the Federal Election Campaign Act
of 1971 to provide transparency and accountability around the use of generative AI in political advertisements.
Also, in January, House resolution 66 was introduced, expressing support for Congress to focus more on AI.
The stated goal of the resolution was to "ensure that the development and deployment of AI is done in a way
that is safe, ethical, and respects the rights and privacy of all Americans, and that the benefits of AI are widely
distributed, and the risks are minimized." Other federal privacy bills have also sought to regulate various uses
of AI. The Stop Spying Bosses Act would prohibit employers from engaging in workplace surveillance using
automated decision systems, including ML and AI techniques, to predict the behavior of their workers.
Many recently proposed federal privacy bills are also already cognizant of AI. The definition of a "covered
algorithm" within the American Data Privacy and Protection Act, for example, includes computational
processes that use ML, natural language processing or AI techniques. Among other proposed rules, the most
recent version of the ADPPA requires impact assessments of such systems if certain entities use them "in a
manner that poses a consequential risk of harm to an individual or group of individuals." Separately, it would
require the documentation of an "algorithm design evaluation" process to mitigate risks whenever a covered
entity develops a covered algorithm "solely or in part, to collect, process, or transfer covered data in
furtherance of a consequential decision."
Similarly, the Filter Bubble Transparency Act would apply to platforms that use "algorithmic ranking systems,"
which includes computational processes "derived from" AI. In addition, the SAFE DATA Act includes both
above-mentioned definitions. Lastly, the Consumer Online Privacy Rights Act would also regulate "algorithmic
decision-making" defined similarly to include computational processes derived from AI. Moving forward,
comprehensive federal privacy bills may also become more explicit in their treatment of AI. Moreover, bills
drafted in previous sessions may be reintroduced and further amended to account for the risks/opportunities
presented by AI.
Several congressional hearings on AI have also recently been held. Both the House Armed Services'
Subcommittee on Cyber, Information Technologies, and Innovation, and the Senate Armed Services
Subcommittee on Cybersecurity met in March and April, respectively, to discuss AI and ML applications to
improve DOD operations. On 16 May, the Senate Judiciary Subcommittee on Privacy, Technology and the
Law held a hearing titled "Oversight of A.I.: Rules for Artificial Intelligence," while the Senate Committee on
Homeland Security and Governmental Affairs held a full committee meeting, "Artificial Intelligence in
Government," the same day.

AI governance policy within federal agencies

Virtually every federal agency has played an active role in advancing the AI governance strategy within the
federal government and, to a lesser extent, around commercial activities. One of the first to do so was
NIST, which published "U.S. LEADERSHIP IN AI: A Plan for Federal Engagement in Developing Technical
Standards and Related Tools" in August 2019 in response to EO 13859. The report identified areas of focus for
AI standards and laid out a series of recommendations for advancing national AI standards development in the
U.S. NIST's AI Risk Management Framework, released in January, also serves as an important pillar of federal
AI governance and is an oft-cited model for private sector activities.

By mid-2020, the FTC entered the picture to provide the contours of its approach to AI governance, regulation
and enforcement. Its guidance has emphasized the agency's focus on companies' use of generative AI tools.
Questions about whether firms are using generative AI in a way that, "deliberately or not, steers people unfairly
or deceptively into harmful decisions in areas such as finances, health, education, housing, and employment,"
fall within the FTC's jurisdiction.

In late April, the FTC, along with the CFPB, the Justice Department's Civil Rights Division and the Equal
Employment Opportunity Commission, issued a joint statement, clarifying that their enforcement authorities
apply to automated systems, which they define as "software and algorithmic processes, including AI, that are
used to automate workflows and help people complete tasks or make decisions." In line with these promises,
the EEOC also released a bulletin around its interpretation of existing antidiscrimination rules in employment,
specifically, Title VII of the Civil Rights Act of 1964, as they apply to the use of AI-powered systems.

Meanwhile, the National Telecommunications and Information Administration has issued an "AI Accountability
Policy Request for Comment," seeking public feedback on policies to "support the development of AI audits,
assessments, certifications and other mechanisms to create earned trust in AI systems," with written
responses due 12 June. The NTIA will likely use the information it receives to advise the White House on AI
governance policy issues.

Numerous other U.S. agencies have led their own AI initiatives and created AI-focused offices within their
departments. For example, the Department of Energy's Artificial Intelligence and Technology Office developed an
AI Risk Management Playbook in consultation with NIST and established an AI Advancement Council in April
2022. Within the Department of Commerce, the U.S. Patent and Trademark Office created an AI/emerging
technologies partnership to examine and better understand use of these technologies in patent and trademark
examination and their effect on intellectual property.

More recently, the U.S. Department of Education Office of Educational Technology released a report on the
risks and opportunities AI presents in educational settings.

AI governance policy and existing laws

A key point emphasized by U.S. regulators across multiple federal agencies is that current laws do apply to AI
technology. Indeed, at least in the short term, AI regulation in the U.S. will consist more of figuring out
how existing laws apply to AI technologies, rather than passing and applying new, AI-specific laws. In their joint
statement, the FTC, EEOC, CFPB and Department of Justice noted how "existing legal authorities apply to the
use of automated systems and innovative new technologies just as they apply to other practices." Expressing
concern about "potentially harmful uses of automated systems," the agencies emphasized that they would
work "to ensure that these rapidly evolving automated systems are developed and used in a manner consistent
with federal laws."

On numerous occasions, the FTC stated the prohibition of unfair or deceptive practices in Section 5 of the FTC
Act applies to the use of AI and ML systems. In its business guidance on using AI and algorithms, the FTC
explained the Fair Credit Reporting Act of 1970 and the Equal Credit Opportunity Act of 1974 "both address
automated decision-making, and financial services companies have been applying these laws to machine-
based credit underwriting models for decades."

Separately, the CFPB issued a circular clarifying that the adverse action notice requirement of the Equal Credit Opportunity Act and its implementing Regulation B, which requires creditors to explain the specific reasons why an adverse credit decision was taken against an individual, still applies even if the credit decision is based on a
so-called "uninterpretable or 'black-box' model." Such complex algorithm models may make it difficult — or
even impossible — to accurately identify the specific reason for denial of credit. Yet, as the CFPB further
noted, creditors cannot rely on post-hoc explanation methods and they must be able to "validate the accuracy"
of any approximate explanations they provide. Thus, this guidance interprets the ECOA and Regulation B as
not permitting creditors to use complex algorithms to make credit decisions "when doing so means they cannot
provide the specific and accurate reasons for adverse actions." In his keynote address at the IAPP Global
Privacy Summit 2023, FTC Commissioner Alvaro Bedoya echoed this point, explaining the FTC "has
historically not responded well to the idea that a company is not responsible for their product because that
product is a black box that was unintelligible or difficult to test."

Conclusion

Around the world, and particularly in the U.S., the most pressing questions around AI governance concern the
applicability of existing laws to the new technology. Answering these questions will be a difficult task involving
significant legal and technological complexities. Indeed, as the Business Law Section of the American Bar
Association explained in its inaugural Chapter on Artificial Intelligence, "Companies, counsel, and the courts
will, at times, struggle to grasp technical concepts and apply existing law in a uniform way to resolve business
disputes."

Pro-social applications of AI abound, from achieving greater accuracy than human radiologists in breast cancer
detection to mitigating climate change. Yet, anti-social applications of AI are no less numerous, from aiding
child predators in avoiding detection to facilitating financial scams.

AI can be neither responsible nor irresponsible in and of itself. Rather, it can be used or deployed — by people
and organizations — in responsible and irresponsible ways. It is up to lawmakers to determine what those uses
are, how to support the responsible ones and how to prohibit the irresponsible ones, while professionals who
create and use AI work to implement these governance principles into their daily practices.
Reference: https://news.un.org/en/story/2021/09/1099972

Urgent action needed over artificial intelligence risks to human rights


Artificial intelligence could help to boost the provision of healthcare around the world.
15 September 2021, Human Rights
States should place moratoriums on the sale and use of artificial intelligence (AI) systems until
adequate safeguards are put in place, UN human rights chief, Michelle Bachelet said on Wednesday.

Urgent action is needed as it can take time to assess and address the serious risks this technology poses to
human rights, warned the High Commissioner: “The higher the risk for human rights, the stricter the legal
requirements for the use of AI technology should be”.

Ms. Bachelet also called for AI applications that cannot be used in compliance with international human rights
law, to be banned. “Artificial intelligence can be a force for good, helping societies overcome some of the great
challenges of our times. But AI technologies can have negative, even catastrophic, effects if they are used
without sufficient regard to how they affect people’s human rights”.

Pegasus spyware revelations


On Tuesday, the UN rights chief expressed concern about the "unprecedented level of surveillance across the
globe by state and private actors", which she insisted was "incompatible" with human rights.

She was speaking at a Council of Europe hearing on the implications stemming from July’s controversy over
Pegasus spyware.

The Pegasus revelations were no surprise to many people, Ms. Bachelet told the Council of Europe's
Committee on Legal Affairs and Human Rights, in reference to the widespread use of spyware commercialized
by the NSO group, which affected thousands of people in 45 countries across four continents.

‘High price’, without action


The High Commissioner’s call came as her office, OHCHR, published a report that analyses how AI affects
people’s right to privacy and other rights, including the rights to health, education, freedom of movement,
freedom of peaceful assembly and association, and freedom of expression.

The document includes an assessment of profiling, automated decision-making and other machine-learning
technologies.

The situation is “dire” said Tim Engelhardt, Human Rights Officer, Rule of Law and Democracy Section, who
was speaking at the launch of the report in Geneva on Wednesday.

The situation has “not improved over the years but has become worse” he said.

Whilst welcoming “the European Union’s agreement to strengthen the rules on control” and “the growth of
international voluntary commitments and accountability mechanisms”, he warned that “we don’t think we will
have a solution in the coming year, but the first steps need to be taken now or many people in the world will
pay a high price”.

OHCHR Director of Thematic Engagement, Peggy Hicks, added to Mr Engelhardt’s warning, stating “it's not
about the risks in future, but the reality today. Without far-reaching shifts, the harms will multiply with scale and
speed and we won't know the extent of the problem.”
Failure of due diligence
According to the report, States and businesses often rushed to incorporate AI applications, failing to carry out
due diligence. It states that there have been numerous cases of people being treated unjustly due
to AI misuse, such as being denied social security benefits because of faulty AI tools or arrested because of
flawed facial recognition software.

Discriminatory data
The document details how AI systems rely on large data sets, with information about individuals collected,
shared, merged and analysed in multiple and often opaque ways.

The data used to inform and guide AI systems can be faulty, discriminatory, out of date or irrelevant, it argues,
adding that long-term storage of data also poses particular risks, as data could in the future be exploited in as
yet unknown ways.

“Given the rapid and continuous growth of AI, filling the immense accountability gap in how data is collected,
stored, shared and used is one of the most urgent human rights questions we face,” Ms. Bachelet said.

The report also stated that serious questions should be raised about the inferences, predictions and monitoring
by AI tools, including seeking insights into patterns of human behaviour.

It found that the biased datasets relied on by AI systems can lead to discriminatory decisions, which are acute
risks for already marginalized groups. “This is why there needs to be systematic assessment and monitoring of
the effects of AI systems to identify and mitigate human rights risks,” she added.

Biometric technologies
Biometric technologies are an increasingly go-to solution for States, international organizations and technology companies, and the report states they are an area "where more human rights guidance is urgently needed".

These technologies, which include facial recognition, are increasingly used to identify people in real-time and
from a distance, potentially allowing unlimited tracking of individuals.

The report reiterates calls for a moratorium on their use in public spaces, at least until authorities can
demonstrate that there are no significant issues with accuracy or discriminatory impacts and that these AI
systems comply with robust privacy and data protection standards.

Greater transparency needed


The document also highlights a need for much greater transparency by companies and States in how they are
developing and using AI.

“The complexity of the data environment, algorithms and models underlying the development and operation of
AI systems, as well as intentional secrecy of government and private actors are factors undermining
meaningful ways for the public to understand the effects of AI systems on human rights and society,” the report
says.

Guardrails essential
“We cannot afford to continue playing catch-up regarding AI – allowing its use with limited or no boundaries or
oversight and dealing with the almost inevitable human rights consequences after the fact.

“The power of AI to serve people is undeniable, but so is AI’s ability to feed human rights violations at an
enormous scale with virtually no visibility. Action is needed now to put human rights guardrails on the use of AI,
for the good of all of us,” Ms. Bachelet stressed.
Reference: https://jolt.law.harvard.edu/digest/a-primer-on-using-artificial-intelligence-in-the-legal-profession

A Primer on Using Artificial Intelligence in the Legal Profession
By Lauri Donahue
January 03, 2018

COMMENTARY

Lauri Donahue is a 1986 graduate of Harvard Law School and was one of the co-founders of the Harvard
Journal of Law & Technology. She is now the Director of Legal Content for LawGeex, a Tel Aviv legaltech
startup.

What's artificial intelligence ("AI") and why should lawyers care about it? On a practical level, lawyers should
be aware that software powered by AI already carries out legal tasks. Within a few years, AI will be taking over
(or at least affecting) a significant amount of work now done by lawyers. Thirty-nine percent of in-house
counsel expect that AI will be commonplace in legal work within ten years.

On a more philosophical level, lawyers should understand that the "decisions" made by AI-powered software
will raise significant legal questions, including those of tort liability and of criminal guilt. For example, if AI is
controlling a driverless car and someone's killed in an accident, who's at fault?

While the philosophical questions are important to resolve, this Comment will focus on the practical issues. To
provide an overview of what AI is and how it will be used in the legal profession, this Comment addresses
several questions:

• What is AI?
• How does AI work?
• What can AI do?
• How are lawyers using AI?
• How will AI affect the legal profession?

What is AI?

Let's start with a few definitions:

"Artificial Intelligence" is the term used to describe how computers can perform tasks normally viewed as
requiring human intelligence, such as recognizing speech and objects, making decisions based on data, and
translating languages. AI mimics certain operations of the human mind.

"Machine learning" is an application of AI in which computers use algorithms (rules) embodied in software to
learn from data and adapt with experience.

A "neural network" is a computer that classifies information -- putting things into "buckets" based on their
characteristics. The hot-dog identifying app from HBO's Silicon Valley is an example of one application of this
technology.

How Does AI Work?

Some AI programs train themselves, through trial and error. For example, using a technique
called neuroevolution, researchers at Elon Musk's OpenAI research center set up an algorithm with policies for
getting high scores on Atari videogames. Several hundred copies of these rules were created on different
computers, with random variations. The computers then "played" the games to learn which policies were most
effective and fed those results back into the system. AI can also be used to build better AI. Google is building
algorithms that analyze other algorithms, to learn which methods are more successful.
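
A toy version of that evolutionary loop might look like the sketch below; the fitness function is only a stand-in for a game score, and every number is an arbitrary assumption rather than OpenAI's actual setup.

```python
import random

def fitness(policy):
    # Toy stand-in for a game score: policies closer to a hidden target score higher.
    target = [0.3, -1.2, 2.5]
    return -sum((p - t) ** 2 for p, t in zip(policy, target))

def neuroevolution(generations=200, population=50, noise=0.1):
    best = [random.uniform(-3, 3) for _ in range(3)]    # start from a random policy
    for _ in range(generations):
        # Create many copies of the current policy with small random variations...
        candidates = [[p + random.gauss(0, noise) for p in best] for _ in range(population)]
        candidates.append(best)
        # ...evaluate ("play") them all, and feed the best one back into the next round.
        best = max(candidates, key=fitness)
    return best

print(neuroevolution())   # should end up close to the hidden target
```
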
Other AI programs need to be trained by humans feeding them data. The AI then derives patterns and rules from
that data. AI programs trained through machine learning are well-suited to solve classification problems. This
basically means calculating the probability that certain information is either of type A or type B. For example,
determining whether a given bear is a panda or a koala is a classification problem.

The training starts with showing the computer lots of samples of pandas and koalas. These initial samples are
called the training set, and clearly identify which type of bear is being presented to the AI.

The AI builds a model--a set of rules--to distinguish between pandas and koalas. That model might be based on
things like size, coloring, the shape of the ears, and what the animal eats (bamboo or eucalyptus).

After training, the AI can be tested with new pandas and koalas to see whether it classifies them correctly. If it
doesn't do very well, the algorithm may need to be tweaked or the training set may need to be expanded to give
the AI more data to crunch.
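
For illustration only, a minimal training-and-testing loop along these lines could look like the following scikit-learn sketch; the features (size, diet, ear shape) and sample values are made up.

```python
from sklearn.tree import DecisionTreeClassifier

# Training set: each sample is [body length in cm, eats bamboo (1/0), round ears (1/0)],
# and the label says which animal the sample is.
X_train = [
    [150, 1, 1], [140, 1, 1], [160, 1, 0],   # pandas
    [70, 0, 1], [65, 0, 1], [75, 0, 1],      # koalas
]
y_train = ["panda", "panda", "panda", "koala", "koala", "koala"]

# Training builds a model (here a decision tree) encoding rules such as
# "large animals that eat bamboo are pandas".
model = DecisionTreeClassifier().fit(X_train, y_train)

# Testing: show the model animals it has never seen and check its answers.
X_test = [[145, 1, 1], [68, 0, 1]]
print(model.predict(X_test))   # expected: ['panda' 'koala']
```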

What Can AI Do?

At this point in its development, AI is good at finding items that meet human-defined criteria and detecting
patterns in data. In other words, AI can figure out what makes a panda a panda and what distinguishes it from a
koala--which lets it find the pandas in a collection of random bears. These are sometimes called "search-and-
find type" tasks.

Once it's identified something, the AI can then apply human-defined rules and take actions. In the case of legal work, an AI can carry out tasks like the following (a rough code sketch of such rules appears after the list):

• IF this document is a non-disclosure agreement, THEN send it to the legal department for review
• IF this NDA meets the following criteria, THEN approve it for signature
• FIND all my contracts with automatic renewal clauses and NOTIFY ME four weeks before they renew
• TELL ME which patents in this portfolio will expire in the next six months
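
A rough sketch of such human-defined rules in Python might look like this; the document fields, thresholds, and actions are hypothetical and not taken from any real product.

```python
def triage_document(doc):
    """Apply simple IF/THEN rules to a document the AI has already classified."""
    if doc["type"] == "NDA":
        if doc["term_years"] <= 3 and doc["governing_law"] in {"California", "New York"}:
            return "approve for signature"
        return "send to the legal department for review"
    if doc.get("auto_renewal") and doc["days_until_renewal"] <= 28:
        return "notify owner: renewal in four weeks"
    return "no action"

print(triage_document({"type": "NDA", "term_years": 2, "governing_law": "California"}))
print(triage_document({"type": "services agreement", "auto_renewal": True, "days_until_renewal": 21}))
```
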
According to Stefanie Yuen Thio, joint managing partner and head of corporate at TSMP Law Corp. in
Singapore, legal work that's repetitive, requiring minimal professional intervention, or based on a template will
become the sole province of software. In addition, she says,

any legal work that depends on collating and analyzing historical data such as past judicial decisions, including legal
opinions or evaluating likely litigation outcomes, will become the dominion of AI. No human lawyer stands a chance
against the formidable processing power of a mainframe when it comes to sifting through voluminous data.

AI can help consumers by providing a form of "legal service" to clients who might otherwise not be able to
afford a lawyer. The free service DoNotPay, created by a 19-year-old, is an AI-powered chatbot that lets
users contest parking tickets in London and New York. In its first 21 months, it took on 250,000 cases and won
160,000 of them, saving users more than $4 million worth of fines. The same program is helping consumers file data-breach-related suits against Equifax for up to $25,000--though it can't help them litigate their cases.

What AI Can't Do

According to Yuen Thio, AI can't yet replicate advocacy, negotiation, or structuring of complex deals. The New
York Times suggested that tasks like advising clients, writing briefs, negotiating deals, and appearing in court
were beyond the reach of computerization, at least for a while. AI also isn't yet very good at the kind of creative writing required for a Supreme Court brief. Or a movie script.

How Are Lawyers Using AI?

Lawyers are already using AI to do things like reviewing documents during litigation and due diligence,
analyzing contracts to determine whether they meet pre-determined criteria, performing legal research, and
predicting case outcomes.

Document Review

Document review for litigation involves the task of looking for relevant documents--for example, documents
containing specific keywords, or emails from Ms. X to Mr. Y concerning topic Z during March, 2016. Setting
search parameters for document review doesn't require AI, but using AI improves the speed, accuracy, and
efficiency of document analysis.

For example, when lawyers using AI-powered software for document review flag certain documents as relevant,
the AI learns what type of documents it's supposed to be looking for. Hence, it can more accurately identify
other relevant documents. This is called "predictive coding." Predictive coding offers many advantages over
old-school manual document review. Among other things, it:

• leverages small samples to find similar documents
• reduces the volume of irrelevant documents attorneys must wade through
• produces results that can be validated statistically
• is at least modestly more accurate than human review
• is much faster than human review

Predictive coding has been widely accepted as a document review method by US courts since the 2012 decision in Da Silva Moore v. Publicis Groupe.
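
One reason predictive coding can be validated statistically is that its relevance calls can be checked against a human-coded sample. A minimal sketch of such a check, with made-up labels, might be:

```python
def recall_and_precision(predicted, actual):
    """Compare the model's relevance calls against a human-coded validation sample."""
    true_pos = sum(p and a for p, a in zip(predicted, actual))
    recall = true_pos / sum(actual)          # share of truly relevant documents the model found
    precision = true_pos / sum(predicted)    # share of flagged documents that really are relevant
    return recall, precision

# 1 = relevant, 0 = not relevant, for a random validation sample of reviewed documents.
model_calls  = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
human_labels = [1, 1, 0, 0, 0, 1, 1, 0, 1, 0]
print(recall_and_precision(model_calls, human_labels))   # (0.8, 0.8)
```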

Analyzing Contracts

Clients need to analyze contracts both in bulk and on an individual basis.

For example, analysis of all contracts a company has signed can identify risks, anomalies, future financial
obligations, renewal and expiration dates, etc. For companies with hundreds or thousands of contracts, this can
be a slow, expensive, labor-intensive, and error-prone process (assuming the contracts aren't already entered
into a robust contract management system). It's also boring for the lawyers (or others) tasked with doing it.

On a day-to-day basis, lawyers review contracts, make comments and redlines, and advise clients on whether to
sign contracts as-is or try to negotiate better terms. These contracts can range from simple (e.g., NDAs) to
complex. A backlog of contracts to review can create a bottleneck that delays deals (and the associated
revenues). Lawyers (especially inexperienced ones) can miss important issues that can come back to bite their
clients later.

AI can help with both bulk and individual contract review.

At JPMorgan, an AI-powered program called COIN has been used since June 2017 to interpret commercial loan
agreements. Work that previously took 360,000 lawyer-hours can now be done in seconds. The bank is planning
to use the technology for other types of legal documents as well.

Some AI platforms, such as the one provided by Kira Systems, allow lawyers to identify, extract, and analyze
business information contained in large volumes of contract data. This is used to create contract summary charts
for M&A due diligence.

The company I work for, LawGeex, uses AI to analyze contracts one at a time, as part of a lawyer's daily
workflow. To start with, lawyers set up their LawGeex playbooks by selecting from a list of clauses and
variations to require, accept, or reject. For example, a California governing law clause might be OK, but
Genovian law isn't. Then, when someone uploads a contract, the AI scans it and determines what clauses and
variations are present and missing. The relevant language is highlighted and marked with a green thumbs-up or
a red thumbs-down based on the client's preset criteria.

In-house lawyers use LawGeex to triage standard agreements like NDAs. Contracts meeting pre-defined criteria
can be pre-approved for signature; those that don't are kicked to the legal department for further review and
revision.
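
A highly simplified sketch of the playbook idea, using plain keyword patterns rather than any vendor's actual AI, might look like the following; the clause names and patterns are invented for illustration.

```python
import re

# A hypothetical playbook: clause patterns the client requires, and ones it rejects.
PLAYBOOK = {
    "require": {
        "governing_law": r"governed by the laws of (California|New York)",
        "confidentiality_term": r"confidential.* (two|three) \(?\d\)? years",
    },
    "reject": {
        "non_compete": r"non-?compete|shall not .* compete",
    },
}

def review_contract(text):
    """Return a thumbs-up / thumbs-down style report for one uploaded contract."""
    report = {}
    for name, pattern in PLAYBOOK["require"].items():
        report[name] = "present" if re.search(pattern, text, re.I) else "MISSING"
    for name, pattern in PLAYBOOK["reject"].items():
        if re.search(pattern, text, re.I):
            report[name] = "REJECTED CLAUSE PRESENT"
    return report

nda = ("This NDA is governed by the laws of California. "
       "Confidentiality obligations last two (2) years.")
print(review_contract(nda))   # {'governing_law': 'present', 'confidentiality_term': 'present'}
```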

Legal Research

Any lawyer who's ever done research using Lexis or Westlaw has used legal automation. Finding relevant cases
in previous eras involved the laborious process of looking up headnote numbers and Shepardizing in paper
volumes. But AI takes research to the next level. For example, Ross Intelligence uses the power of IBM's
Watson supercomputer to find similar cases. It can even respond to queries in plain English. The power of AI-
enabled research is striking: using common research methods, a bankruptcy lawyer found a case nearly identical
to the one he was working on in 10 hours. Ross's AI found it almost instantly.

Predicting Results

Lawyers are often called upon to predict the future: If I bring this case, how likely is it that I'll win -- and how
much will it cost me? Should I settle this case (or take a plea), or take my chances at trial? More experienced
lawyers are often better at making accurate predictions, because they have more years of data to work with.

However, no lawyer has complete knowledge of all the relevant data.

Because AI can access more of the relevant data, it can be better than lawyers at predicting the outcomes of
legal disputes and proceedings, and thus help clients make decisions. For example, a London law firm used
data on the outcomes of 600 cases over 12 months to create a model for the viability of personal injury cases.
Indeed, trained on 200 years of Supreme Court records, an AI is already better than many human experts at
predicting SCOTUS decisions.
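
As a loose illustration of outcome modeling on historical case data, the sketch below fits a logistic regression on invented personal-injury case features; it is not any firm's actual model, and all numbers are assumptions.

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical past cases: [claim amount in $k, strength of liability evidence (0-1),
# injury severity (1-5)], with 1 = the case succeeded, 0 = it did not.
X = [
    [50, 0.9, 4], [10, 0.8, 2], [200, 0.3, 5], [30, 0.2, 1],
    [80, 0.7, 3], [15, 0.1, 2], [120, 0.6, 4], [25, 0.4, 1],
]
y = [1, 1, 0, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# Estimate the chance of success for a prospective new matter, to help decide whether
# to take it on contingency, push to settle, or proceed to trial.
new_case = [[60, 0.75, 3]]
print(model.predict_proba(new_case)[0][1])   # rough estimated probability of success
```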

How Will AI Affect the Legal Profession?

A consensus has emerged that AI will significantly disrupt the legal market. AI will impact the availability of
legal sector jobs, the business models of many law firms, and how in-house counsel leverage technology.

According to Deloitte, about 100,000 legal sector jobs are likely to be automated in the next twenty years.
Deloitte claims 39% of legal jobs can be automated; McKinsey Global Institute estimates that 23% of a lawyer's
job could be automated. Some estimates suggest that adopting all legal technology (including AI) already
available now would reduce lawyers' hours by 13%.

How Law Firms are Responding to AI

Law firms are notoriously slow to adapt to new technologies. Enhancing efficiency is often seen as contrary to
the economic goal of maximizing billable hours. Lawyers are also seen as being techno-phobic.
However, many law firms are trying to understand and use new legal technologies, including AI. According to
the London Times, "[t]he vast majority of the UK’s top 100 law firms are either using artificial intelligence or
assessing the technology." Firms adopting AI systems include Latham & Watkins, Baker & McKenzie,
Slaughter & May, and Singapore's Dentons Rodyk & Davidson.

Ron Dolin, a senior research fellow at Harvard Law School's Center on the Legal Profession, says that
traditional law firm business models based on armies of first year associates racking up billable hours doing
M&A contract review are doomed by the advent of AI. This isn't necessarily bad news for junior associates--or
at least for the ones who still have jobs--as many hated doing contract review in the first place.

Firms that fail to take advantage of AI-powered efficiencies may lag in competing with those who do--at least to the extent clients insist on fixed-rate billing. Thus, lawyers who understand technology and educate themselves about the latest legaltech developments may be of increasing value to their firms.

How In-House Counsel Are Using AI

Corporate counsel have obvious reasons to adopt AI. Unlike attorneys in law firms, corporate counsel have no
incentive to maximize their hours. Indeed, many lawyers go in-house to improve their work-life balance, which
includes getting home at a reasonable hour. They're also often subject to strict budget and headcount constraints,
so they have to figure out how to get more done with limited resources. AI helps in-house lawyers get home
earlier without increasing their departmental budgets.

AI and the Future of the Legal Profession

The ABA Model Rules of Professional Conduct ("Model Rules") require that lawyers be competent--and that
they keep up with new technology. As Comment 8 states:

To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its
practice, including the benefits and risks associated with relevant technology...

At least 27 states have adopted some form of this Model Rule. In January of 2017, Florida became the first state
to require technology training as part of its continuing legal education requirement. Other states seem likely to
follow suit. Indeed, failing to use commonly available technology, like email and e-discovery software, can be
grounds for a malpractice claim or suspension by the bar.

Of course, AI-powered legal automation is not yet common. But it soon will be. Spending on AI is expected to
grow rapidly--from $8 billion in 2016 to $47 billion in 2020--as AI is seen as reducing costs and increasing
efficiency. Top MBA programs already have courses on how managers can use AI applications.

As they come to rely on AI, C-level executives may expect that their inside and outside lawyers are also up to
speed.
Reference:
https://jolt.law.harvard.edu/digest/the-promise-and-peril-of-ai-legal-services-to-equalize-
justice?fbclid=IwAR21_iYw2ZSOEccfjUMGj8Au37NYMCTy2DLw_tp1OD9UxkWkprSUT_eZFL8

The Promise and Peril of AI Legal Services to Equalize Justice
By Ashwin Telang - Edited by Edwin Farley, Teodora Groza, Pablo A. Lozano, and Pantho Sayed
March 14, 2023

COMMENTARY

Decades ago, labor regulators predicted that routine factory work could be reduced to a set of computerized
functions. At the same time, they assumed that the work of white-collar professionals, like lawyers, could not be
digitized. Law is a complex and dynamic field, complicating the task of those seeking to automate it. However,
new developments in artificial intelligence (“AI”) promise to fulfill many functions of a lawyer and democratize
the law.

Background

The digitization of legal services could ameliorate America's problem with access to justice. Due to the
escalating cost of lawyers and the growing complexity of the law, more people are effectively locked out of
justice every year. Nearly 92% of impoverished Americans, or 36 million people, cannot afford to hire a lawyer
for a civil suit. This “judicial deficit” perpetuates poverty and compromises the fundamental rights lawyers are
meant to protect, affecting issues spanning from eviction to healthcare to domestic abuse.

Court dockets today are checkered with power asymmetries—impoverished Americans are regularly ambushed
by rich corporations and individuals who can afford multiple lawyers. The complexity and obscurity of the law
means the targets of these efforts are unable to adequately defend themselves without legal counsel, allowing
corporate legal abuses to go unaired and unchecked. Making matters worse, injustices from these lawyerless
courts disproportionately affect women and people of color.

This article assesses new developments in AI legal services. It first explores the potential of artificial
intelligence to democratize access to legal services and how, without proper treatment, it may instead only
reinforce existing inequalities. Finally, the article outlines specific reforms to address AI’s perils.

Promises of AI Legal Services

Even at this experimental stage, over 280 companies have started developing legal technology. Companies in
this space have already raised over $757 million and filed for 1,369 legal machine-learning patents.

Automated legal systems have the capability to handle legal files in a matter of seconds. A recent AI system,
Intelligent File 1.0, can automatically file and organize legal documents. Apps such as Rocket Lawyer are
already helping impoverished Americans by instantly completing legal paperwork, such as business contracts,
real estate agreements, and wills. The technology behind these systems simplifies complex legal doctrines and
formalities, mitigating structural barriers that prevent people without a lawyer from understanding the law or
completing even the simplest legal tasks.

Beyond simplifying documentation, AI can also answer legal questions and offer assistance at low costs. Self-
help chatbots empower low-income individuals to take their civil issues to court by providing immediate legal
information about their specific case or situation. These chatbots are designed to advise clients about their
rights, legal strategies, and procedures in civil court.

A new chatbot app, rAInbow, can also identify areas of legal protection for potential victims of domestic
violence. Powered by machine learning, technologies like rAInbow can help victims become aware of their
rights and demystify confusing legal terminology.

The website Do Not Pay overturned over 100,000 speeding tickets, saving low-income Americans millions of
dollars. Luis Salazar, a bankruptcy lawyer, tested new legal software against his own skills, and the results, he
said, “blew me away.” The machine quickly produced a simple two-page memo analyzing a complex legal
problem, very similar to what a human lawyer could have produced.

Skeptics of legal automation argue that these emerging programs are disruptive agents that will displace
lawyers. Richard Susskind, a lawyer, rebuts these concerns, arguing that lawyers and technology can work
alongside each other. Legal technology can help law firms by speeding up mundane, time-consuming tasks and
allowing lawyers to focus on more challenging, creative endeavors. Susskind argues automation will never
replace a lawyer’s strategy, logic, creativity, and empathy — machine learning can only supplement them.

Impoverished Americans are losing their houses to eviction, their financial rights to corporate abuses, and their
children to custody battles because they can neither afford lawyers nor effectively navigate complex law. The
power of AI lies in its ability to sift through hundreds of cases and simplify the law. As Congress fails to act to
protect the rights of underserved Americans, legal technologies can ameliorate the issue, transforming and
expanding access to justice.

Perils of AI Legal Services

Legal technology is at an inflection point. Still in an experimental and developmental phase, this technology
must be steered and regulated to minimize future negative outcomes. While many scholars have decried legal
AI as dangerous in displacing lawyers, only a few have recognized its capacity to actually widen the justice gap.

Experts have warned of the imbalance and underappreciated consequences of automated legal services. Drew
Simshaw, an assistant professor at the Gonzaga University School of Law, writes that legal AI could create an
inequitable “two-tiered system.” Patricia Barnes, an attorney and former judge, warns that AI used in law firms
exacerbates “inequality in discrimination lawsuits.” Representative Ted Lieu has recently called for regulation
given the heightened influence of elites on AI.

In its current state, legal AI presents three main barriers to justice. First, high-quality AI may be expensive and
thus only available to larger law firms, presenting a power asymmetry between law firms and individuals.
Second, many impoverished Americans and people of color may be unable to access any AI in the first place.
Third, the advent of legal AI may lead Congress to believe that impoverished individuals no longer need human
civil lawyers, thereby halting movement on a long-requested right to civil counsel.

Unregulated legal AI locks law firms into a mutually reinforcing cycle that only makes rich firms richer and
widens revenue gaps between firms. Larger law firms are often better equipped to adopt emerging legal
technologies; advanced AI is costly to obtain and adopt, and is thus only available to wealthy firms who
have the necessary capital and funding capacity to pursue it. These technologies not only automate time-
consuming tasks but also assist in creative and analytical tasks. As larger law firms adopt emerging legal AI and
engage in a long-term trial and error process, they maximize benefits gained from the AI, all with a safety net.
Smaller law firms do not have this privilege and will be vulnerable when they adopt cheap, fully-developed AI
in the future. Using higher-quality AI, larger law firms can extend more service to elite individuals, but likely
not to those detrimentally affected by the justice gap. By automating administrative tasks, national firms can
also expand in size and geography. By contrast, smaller firms are left in less efficient and more self-reliant
positions because they do not have the organizational resources to leverage emerging legal AI.

Ultimately, such technological disparities between law firms are passed on to nonlegal segments of society.
Individual lawyers representing lower- to middle-income Americans face a disadvantage against wealthy firms
able to take advantage of AI technology and the superior work it can help produce.

Accessibility gaps in communications technology loom large, especially in line with age, race, geography,
education, and income gaps. By one measure, one in five Americans do not have reliable internet access. There
is also a technological gap — many would-be pro se litigants lack the “necessary skills and resources to make
meaningful use of technologies.” Professor Simshaw also observes that “some prepaid internet service plans do
not provide the broadband coverage needed to support emerging legal technology applications.” These
technology gaps could functionally shut many vulnerable communities out of legal AI and justice systems.

Another issue within legal AI is a concern that algorithms may serve to exclude and antagonize marginalized
groups. Broadly, “self-help” legal services must transcend a one-size-fits-all model. These services must
accommodate the groups that are most affected by the deep fissures in America’s justice system. For one, most
digital legal services are not multilingual or otherwise fail to offer services in many languages — an especially
concerning exclusion given that non-English speakers are a significant chunk of lawyerless litigants. Sherley
Cruz also highlights the importance of “accounting for different cultures’ communication styles.” When
impoverished individuals are providing their information to self-help AI services, information-gathering
systems must be able to accept multiple storytelling formats. For example, people from cultures that do not
typically use “free-flowing narratives” may struggle with answering the open-ended questions relied on by legal
service providers. Likewise, current AI legal services do not appear to account for non-chronological
storytelling and different forms of communication inputs beyond verbal/written forms existing in other cultures.

Using datasets from sources including scraped language and Reddit, AI chatbots that provide legal advice can
sometimes produce overtly racist and biased responses. Amy Cyphert argues that these AI technologies produce
these results specifically because they are trained this way and “should not be used” to the extent that
they reinforce biased stereotypes and further marginalize users. The persistence of such bias in commercially
available products reflects a lack of consideration and care for racial inequality in the development of these AI
legal platforms. In not only reproducing but also automating inequalities, these algorithmic biases simply are
not fit to close the justice gap.

The aforementioned inequalities could leave impoverished Americans with access only to low-quality AI services
and amplify current power imbalances in civil court. If legal technology is the only affordable service available to
impoverished Americans, this vulnerable population will be at the whim of those who control the technology;
service providers could, predictably, overlook low-income sectors, disregarding the quality of legal service.
Without quick intervention, America could soon normalize a lower tier of justice in the form of low-quality
artificial intelligence, wasting the technology’s equalizing potential.

Making matters worse, calls for free public lawyers will fade from public discourse as even inadequate AI
alternatives gain traction. Policymakers will likely abandon human-focused solutions, preferring a cheaper but
subordinate digital solution. All hopes for future human-centered policy solutions would dissipate, dissolving
into illusory, inferior legal technologies.

Many are quick to assume that regardless of potential inequalities, using legal AI will inevitably be an
improvement. But what many do not see is how digital legal services can prove to be structurally predatory and
biased. Ineffective services harm many impoverished litigants: in her “Weapons of Math Destruction: How Big
Data Increases Inequality and Threatens Democracy,” Cathy O’Neil notes that AI algorithms “tend to punish
the poor.” Specifically, Peter Yu writes that this “divide” could facilitate cultural and educational biases against
the impoverished. Broadly, some legal service algorithms disfavor poorer Americans, the very people the
system intends to protect.

These perils prevent stakeholders and impoverished individuals from gaining meaningful access to equal digital
services and equal justice. Without careful calibration and a redesign, legal AI may only fuel existing barriers to
meaningful justice access.

Regulatory Reforms

Existing regulation of legal practices fails to account for the rise of legal artificial intelligence. Without
regulation, the future of legal AI may descend into an inequitable two-tiered system. To promote competition
and calibration, regulatory innovation must parallel technological innovation. Regulators can embrace three
main avenues to establish effective policies: transparency, competition, and regulatory sandboxes.

Transparency ensures that a small sector of technical experts is not the only source of critical AI
systems. Transparency forges key relationships between lawyers and technologists. Lawyers can help effect
meaningful changes in legal technology, such as integrating bias training, cultural consciousness, and other
helpful features for clients. Further, transparency provides lawyers at smaller firms with open access to developing AI. It
gives them a channel of input to technologists to make AI more functional for smaller firms, equalizing the
potential to seek justice across the board. Public transparency could also break through the AI “black
box” which makes bias harder to detect. Indeed, increased transparency in access-to-justice AI tools can subject
them to external review and thereby decrease bias.

Other transparency regulations could ensure that low-income individuals are not the prey of low-quality digital
legal services. Reporting accuracy rates of AI, for example, allows onlookers to verify the quality of legal
services. Susan Fortney calls for certifications as a system to check artificial intelligence. Transparency
regulations ultimately guarantee the effectiveness and quality of digital legal services, promising that poor
Americans are not left with the bad end of the bargain.

Competition may counter the predicted consolidation of AI legal services in the near future. Regulatory
policies, in response, must aim to boost competition and shut down legal AI monopolies. Competition is
especially essential to push AI developers to improve their algorithms, make their services affordable, remove
bias, and provide the most effective legal services. Here, competition functionally serves as another “check” on
AI companies.

In most American jurisdictions, lawyers can invest in technology, but technology companies cannot invest in
legal practices. This creates an asymmetric dynamic wherein wealthier firms have the capital to invest in
technology, but smaller firms cannot. Lifting these investment restrictions could help smaller firms attract the interest
of digital AI service providers. Current law prevents cross-industry relationships between smaller law firms and
technologists, thereby cutting off an avenue for smaller firms to adopt new AI. Legal scholars, including Justice
Gorsuch, have called for lifting ownership and investment restrictions. The best way to do this could be with
a regulatory sandbox — an experimental area where certain restrictions are lifted but under close observation of
an oversight body. Ryan Nabil finds that regulatory sandboxes can significantly increase the accessibility of
digital justice tools. In 2020, Utah launched the first regulatory sandbox for legal services, and it was incredibly
successful, making civil legal services widely affordable. Expanding similar regulatory sandboxes to other
states can simultaneously expand access to AI legal services for impoverished Americans and smaller law firms,
helping them overcome previous financial barriers.

As legal technology gathers momentum, an approach of “technology is better than nothing” will not suffice.
Artificial intelligence shows promise for equalizing access to justice, but it also presents perils of exacerbating
inequalities. Regulators must act soon to contain negative spillovers from legal technology to ensure it can
shrink the justice gap, not enlarge it.
Reference:
https://www.sciencedirect.com/science/article/pii/S2666659620300056?ref=pdf_download&fr=RR-
2&rr=82b9ee0738fc0f24

Journal of Responsible Technology


Volume 4, December 2020, 100005

Legal and human rights issues of AI: Gaps, challenges and vulnerabilities
Rowena Rodrigues
https://doi.org/10.1016/j.jrt.2020.100005

Abstract
This article focusses on legal and human rights issues of artificial intelligence (AI) being discussed and debated,
how they are being addressed, gaps and challenges, and affected human rights principles. Such issues include:
algorithmic transparency, cybersecurity vulnerabilities, unfairness, bias and discrimination, lack of
contestability, legal personhood issues, intellectual property issues, adverse effects on workers, privacy and data
protection issues, liability for damage and lack of accountability. The article uses the frame of ‘vulnerability’ to
consolidate the understanding of critical areas of concern and guide risk and impact mitigation efforts to protect
human well-being. While recognising the good work carried out in the AI law space, and acknowledging this
area needs constant evaluation and agility in approach, this article advances the discussion, which is important
given the gravity of the impacts of AI technologies, particularly on vulnerable individuals and groups, and their
human rights.

Introduction
Artificial intelligence (AI) is everywhere (Boden 2016) and its development, deployment and use are moving
forward rapidly and contributing to the global economy (McKinsey 2019; PwC 2017). AI has many benefits
(e.g., improvements in creativity, services, safety, lifestyles, helping solve problems) and yet at the same time,
raises many anxieties and concerns (adverse impacts on human autonomy, privacy, and fundamental rights and
freedoms) (OECD 2019).
The legal discourse on the legal and human rights issues of artificial intelligence (AI) is established, with many
a detailed legal analysis of specific individual issues (as outlined in Sections 3 and 4 of this article). But this
field is a regulatory moving target, and there is a need for an exploratory, bird's-eye view that looks at the breadth of
issues, curated in a single place. Also critically missing is a greater discussion and mapping of vulnerability to
such issues. This article fills this gap based on research carried out in the EU-funded Horizon 2020 SIENNA
project. The article's main research questions are: What are the legal and human rights issues related to AI?
(How) are they being addressed? What are the gaps and challenges and how can we address vulnerability and
foster resilience in this context?
Structure, approach, method and scope
After a quick round-up of the coverage of legal and human rights issues (Section 3), this article outlines specific
legal issues being discussed in relation to AI (Section 4), solutions that have been proposed/how they are being
addressed, gaps and challenges, and affected human rights principles (Section 5). It maps the legal issues to core
international human rights treaties and provides examples (global to regional) of corresponding human rights
principles that might be affected. More vitally, it discusses the legal issues using the frame of ‘vulnerability’
(Section 6) to help consolidate better the identification of what are critical areas of concern and help guide AI
risk and impact mitigation efforts to protect human and societal well-being. While recognising the good work
already being carried out in the AI law space (as evident in the literature identified in this article), this
consolidated analysis of issues aims to provide further insights and add to the much-needed sustained discussion
on this topic, given AI's increasingly widespread deployment and use and the gravity
of its impacts on individuals and their human rights.

There are a number of legal issues and human rights challenges related to AI. Section 4 presents a panoramic,
non-exhaustive overview of such issues and challenges. The identification of issues was carried out using a
desktop literature review (in two phases: preliminary research in 2018 as part of the SIENNA project Rodrigues
(2019) and updated in July 2019 during the development of this article). The keywords ‘legal/human rights
issues+AI/artificial intelligence/machine learning’ were used to identify issues covered in legal academic and
practitioner journals and books and legal policy studies from the last five to ten years (as cited in the article),
supplemented by databases such as SSRN and Google Scholar, to identify high-impact issues. The references
that came to the forefront in our search were scanned further, as far as possible, for any other relevant unidentified
issues. The inclusion of issues was conditioned by their coverage and/or prevalence in existing legal and policy
literature, impact on societal values and life, and controversiality. One limitation is that the study was
constrained by time and to research available in English. Furthermore, while each of these issues could be analysed
in greater depth individually (e.g., looking into specific legal provisions that are applicable), this is outside the
scope of study here and in many cases has been/is being carried out by other scholars.
For the mapping of legal issues to principles in international human rights treaties, we scanned the core
international human rights instruments for coverage of such issues. These included, e.g., the International Covenant
on Civil and Political Rights (ICCPR), International Covenant on Economic, Social and Cultural Rights
(ICESCR), Universal Declaration of Human Rights (UDHR), Charter of the United Nations, Convention on
Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May be Deemed to be
Excessively Injurious or to Have Indiscriminate Effects (and Protocols), and the European Convention for the
Protection of Human Rights and Fundamental Freedoms etc.

For the mapping of identified AI legal issues to the most vulnerable groups and the factors that determine and/or
facilitate vulnerability, we followed a law-in-context approach. The vulnerable groups and factors that
determine vulnerability were identified and determined by scanning the literature reviewed in the issue
identification, supplemented by an online search for further examples. The table presented is non-exhaustive
and will change with examination in different contexts.

Coverage of legal and human rights issues


International coverage of legal and human rights issues is evident in policy documents of United Nations
(2019); OECD (2019); Council of Europe (2017; 2018; 2019), the European Parliament
(2017; 2018a; 2018b; 2019, 2020a, 2020b, 2020c), the European Commission (2018a, 2018b, 2020), European
Commission for the efficiency of justice (CEPEJ) (2018), and the European Data Protection Supervisor (2016).
Academic and civil society (Access Now 2018; Privacy International and Article 19 2018) coverage of legal
issues pertaining to AI is sometimes broad, covering a variety of risks and challenges. At other times, it
covers very specific issues. Some analyses are domain-specific, e.g., focussing on healthcare (Price 2017),
defence (Cummings 2017), or transport (Niestadt et al 2019). Some of these include coverage of issues related to
legal personality, intellectual property (Schönberger 2018), algorithmic bias, discrimination, unfairness (Smith
2017; Danks and London 2017; Courtland 2018; Hacker 2018), labour protection (De Stefano 2019), privacy
and data protection (Wachter and Mittelstadt 2019), cybersecurity (Tschider 2018), access to justice (Raymond
2013), algorithmic transparency (Lepri 2018; Coglianese and Lehr 2018; Bodo et al 2018), liability for harms
(Vladeck 2014), accountability (Liu et al 2019), and surveillance (Aloisi and Gramano 2019; Feldstein 2019).
The media coverage of AI legal issues has ranged from the broad (Dizikes 2019) to more specific - covering
aspects such as liability (Mitchell 2019), fairness in decision-making (Niiler 2019), bias (Marr 2019), privacy
(Lindsey 2018), accountability (Coldewey 2018). Issues of privacy/data protection (Meyer 2018; Williams
2019; Forbes Insights Team 2019; Lohr 2019) and bias (Dave 2018), in particular, have received significant
publicity.

Legal and human rights issues of AI

This section briefly examines each issue, its significance, solutions that have been proposed (or how it is being
addressed) and the related gaps and challenges. This is a limited analysis (other research has analysed and
critically discussed each of these issues in detail; the intent here is to provide a panoramic, updated overview
and make it useful for future research).

Of the ten issues presented below, some relate to the design and nature of AI itself (these are covered first),
others are issues connected to the implementation and use of AI (though often, the design of AI itself lends
itself to causing or facilitating implementation and use issues). The issues are sometimes cross-domain, i.e.,
could manifest in one or more sectors/fields of application. Many of these issues are common to all technology
(e.g., privacy/data protection); many are inter-related (e.g., transparency, fairness, accountability) and might not
operate in silos. However, the ability of AI to amplify and/or facilitate their adverse effects must not be
underestimated at any time.

Lack of algorithmic transparency

The issue and its significance

The lack of algorithmic transparency (Bodo et al 2018; Coglianese & Lehr 2018; Lepri et al 2018) is a
significant issue that is at the forefront of legal discussions on AI (EDPS 2016; Pasquale 2015). Cath
(2018) highlights that, given the proliferation of AI in high-risk areas, “pressure is mounting to design and
govern AI to be accountable, fair and transparent.” The lack of algorithmic transparency is problematic; Desai
and Kroll (2017) highlight why, using examples of people who were denied jobs, refused loans, were put on no-
fly lists or denied benefits without knowing “why that happened other than the decision was processed through
some software”. “Information about the functionality of algorithms is often intentionally poorly accessible”
(Mittelstadt et al 2016) and this exacerbates the problem.

Solutions proposed/how it is being addressed

An EU Parliament STOA study (2019) outlined various policy options to govern algorithmic transparency and
accountability, based on an analysis of the social, technical and regulatory challenges; each option addresses
a different aspect of algorithmic transparency and accountability: 1. awareness raising: education, watchdogs and
whistleblowers; 2. accountability in public-sector use of algorithmic decision-making; 3. regulatory oversight
and legal liability; and 4. global coordination for algorithmic governance. More specific solutions mooted to
promote algorithmic transparency include algorithmic impact assessments (Reisman, et al 2018; Government of
Canada, undated), an algorithmic transparency standard (IEEE P7001:Transparency of Autonomous Systems),
counterfactual explanations, local interpretable model-agnostic explanations (LIME) (Ribeiro, Singh, Guestrin
2016) etc.
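
For illustration only — the loan-decision scenario, feature names and model below are invented, and only the open-source lime and scikit-learn packages themselves are assumed — a LIME-style local explanation of a single automated decision might be generated roughly as follows:

```python
# Illustrative sketch only: a LIME-style local explanation for one automated decision.
# The data, feature names and model are hypothetical; only the lime/scikit-learn APIs are real.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X = rng.random((500, 3))                                     # e.g., income, debt ratio, years employed (scaled 0-1)
y = (X[:, 0] - X[:, 1] + 0.1 * X[:, 2] > 0).astype(int)     # synthetic "loan granted" label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=["income", "debt_ratio", "years_employed"],
    class_names=["denied", "granted"],
    mode="classification",
)

# Explain a single applicant's automated decision in terms of feature contributions.
applicant = X[0]
explanation = explainer.explain_instance(applicant, model.predict_proba, num_features=3)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```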

Gaps and challenges

Transparency has its limitations and is often viewed as inadequate and limited (Ananny and Crawford 2018).
For example, as Vaccaro and Karahalios (undated) point out, “Even when machine learning decisions can be
explained, decision-subjects may not agree with the outcome”. Some of the solutions proposed above, e.g.,
algorithmic impact assessments, though extremely valuable, are relatively new and still a work in progress so
cannot be fully evaluated for their effectiveness at this stage. This is definitely an area for future research and
evaluation.

Cyber security vulnerabilities

The issue and its significance

A RAND perspectives report Osoba and Welser (2017) highlights various security issues related to AI, for
example, fully automated decision-making leading to costly errors and fatalities; the use of AI weapons without
human mediation; issues related to AI vulnerabilities in cyber security; how the application of AI to surveillance
or cyber security for national security opens a new attack vector based on ‘data diet vulnerability’; the use of
network intervention methods by foreign-deployed AI; larger scale and more strategic version of current
advanced targeting of political messages on social media etc. The report Osoba and Welser (2017) also
identifies domestic security-related issues, for example, (growing) deployment of artificial agents for the
surveillance of civilians by governments (e.g., predictive policing algorithms). These have been called out for
their potential to adversely impact citizens’ fundamental rights (Couchman 2019). Such issues are significant as
they lay open critical infrastructures to harms with severe impacts on society and individuals, posing a threat to
life and human security and access to resources. Cyber security vulnerabilities also pose a significant threat as
they are often hidden and revealed only too late (after the damage has been caused).

Solutions proposed/how it is being addressed

Various strategies and tools are being used or proposed to address this issue. E.g., putting in place good
protection and recovery mechanisms; considering and addressing vulnerabilities in the design process; engaging
human analysts in critical decision-making; using risk management programmes; and software upgrades Fralick
(2019).

Gaps and challenges


Effectively addressing such issues requires proactive and responsive use of cybersecurity policies, mechanisms
and tools by developers and users at all stages – design and implementation and use. But this is often not the
case in practice and is a real challenge. As a SHERPA report outlines, “When designing systems that use
machine learning models, engineers should carefully consider their choice of a particular architecture, based
on understanding of potential attacks and on clear, reasoned trade-off decisions between model complexity,
explainability, and robustness” (Patel et al, 2019).

Unfairness, bias and discrimination

The issue and its significance

Unfairness (Smith 2017), bias (Courtland 2018) and discrimination (Smith 2017) repeatedly pop up as issues
and have been identified as a major challenge (Hacker 2018) related to the use of algorithms and automated
decision-making systems, e.g., to make decisions related to health (Danks & London 2017), employment,
credit, criminal justice (Berk 2019), and insurance. In August 2020, protests were held and legal challenges were
expected over the use of a controversial exams algorithm used to assign grades to GCSE students in England
(Ferguson & Savage 2020).
A focus paper from the EU Agency for Fundamental Rights (FRA 2018) outlines the potential for
discrimination against individuals via algorithms, and states that “the principle of non-discrimination, as
enshrined in Article 21 of the Charter of Fundamental Rights of the European Union, needs to be taken into
account when applying algorithms to everyday life” (FRA 2018). It cites examples with potential for
discrimination: automated selection of candidates for job interviews, use of risk scores in creditworthiness or in
trials. A European Parliament report on the fundamental rights implications of big data: privacy, data
protection, non-discrimination, security and law-enforcement, European Parliament (2017) stressed that
“because of the data sets and algorithmic systems used when making assessments and predictions at the
different stages of data processing, big data may result not only in infringements of the fundamental rights of
individuals, but also in differential treatment of and indirect discrimination against groups of people with
similar characteristics, particularly with regard to fairness and equality of opportunities for access to
education and employment, when recruiting or assessing individuals or when determining the new consumer
habits of social media users” European Parliament (2017). The report called on the European Commission, the
Member States and data protection authorities “to identify and take any possible measures to minimise
algorithmic discrimination and bias and to develop a strong and common ethical framework for the transparent
processing of personal data and automated decision-making that may guide data usage and the ongoing
enforcement of Union law” (European Parliament 2017).

Solutions proposed/how it is being addressed

Various proposals have been made to address such issues. For example, conducting regular assessments into the
representativeness of data sets and whether they are affected by biased elements European Parliament (2017),
making technological or algorithmic adjustments to compensate for problematic bias (Danks & London 2017),
humans-in-the-loop (Berendt, Preibusch 2017) and making algorithms open. Schemes to certify that algorithmic
decision systems do not exhibit unjustified bias are also being developed. The IEEE P7003 Standard for
Algorithmic Bias Considerations is one of the IEEE ethics-related standards (under development as part of the IEEE
Global Initiative on Ethics of Autonomous and Intelligent Systems) - it aims to provide individuals or
organisations creating algorithmic systems with a development framework to avoid unintended, unjustified and
inappropriately differential outcomes for users. There are also open source toolkits, e.g., the AI Fairness 360
Open Source Toolkit that helps users to examine, report, and mitigate discrimination and bias in machine
learning models throughout the AI application lifecycle. It uses 70 fairness metrics and 10 state-of-the-art bias
mitigation algorithms developed by the research community.
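
As a rough sketch of how such a toolkit might be used in practice — the toy data and the 'sex' protected attribute below are invented for the example, and only the aif360 API calls themselves are taken as given — a bias check and one pre-processing mitigation could look like this:

```python
# Illustrative sketch only: measuring and mitigating bias with the AI Fairness 360 toolkit.
# The toy data and the 'sex' protected attribute are hypothetical examples.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

df = pd.DataFrame({
    "sex":   [0, 0, 0, 0, 1, 1, 1, 1],               # 0 = unprivileged group, 1 = privileged group
    "score": [0.2, 0.4, 0.6, 0.3, 0.7, 0.8, 0.5, 0.9],
    "label": [0, 0, 1, 0, 1, 1, 1, 1],                # 1 = favourable outcome
})

dataset = BinaryLabelDataset(
    df=df, label_names=["label"], protected_attribute_names=["sex"],
    favorable_label=1, unfavorable_label=0,
)
groups = dict(unprivileged_groups=[{"sex": 0}], privileged_groups=[{"sex": 1}])

# Fairness metrics before mitigation (disparate impact close to 1.0 is the usual target).
metric = BinaryLabelDatasetMetric(dataset, **groups)
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())

# One of the toolkit's pre-processing mitigations: reweigh examples to balance outcomes across groups.
reweighed = Reweighing(**groups).fit_transform(dataset)
print("After reweighing:", BinaryLabelDatasetMetric(reweighed, **groups).statistical_parity_difference())
```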
Gaps and challenges

While the law clearly regulates and protects against discriminatory behaviour, it is suggested it falls short. A
Council of Europe (2018) study outlines that the law leaves shortfalls where it does not extend to address what
is not expressly protected against discrimination by law, or where new classes of differentiation are created and
lead to biased and discriminatory effects. Humans-in-the-loop approaches might face tensions as to where, and
in which cases, they should be applied (sometimes it might be better not to have, or impossible to have,
humans in the loop, e.g., where there might be scope for human error or stupidity that leads to serious or
irreversible consequences). Other gaps include whether the use of human-in-the-loop is adequately signified in
the technologies that use them. Making algorithms open does not mean they will become more understandable
to people; there is also the issue of the exposure or discoverability of private data, which brings its own
concerns House of Commons (2018). To be effective, algorithmic auditing requires a holistic, interdisciplinary,
scientifically-grounded and ethically-informed approach (Guszca et al 2018). While the technical solutions
proposed thus far are good steps forward, there have been many calls to pay greater regulatory, policy and ethical
attention to fairness, especially in terms of protection of vulnerable and marginalised populations Raji &
Buolamwini (2019).

Lack of contestability

The issue and its significance

European Union data protection law gives individuals rights to challenge and request a review of automated
decision-making that significantly affects their rights or legitimate interests (GDPR 2016/679). Data subjects
have the right to object, on grounds relating to their particular situation, at any time to the processing of
personal data concerning them which is based on tasks carried out in public interests or legitimate interests.
Further, per Article 22(3) GDPR, data controllers must implement suitable measures to safeguard a data
subject's rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part
of the controller, to express their point of view and to contest the decision. But Hildebrandt (2016) underlines
how “the opacity of ML systems may reduce both the accountability of their ‘owners’ and the contestability of
their decisions”. Edwards and Veale (2017) highlight the lack of contestability in relation to algorithmic
systems, i.e., the “lack of an obvious means to challenge them when they produce unexpected, damaging, unfair
or discriminatory results”. Bayamlıoğlu (2018) states that “a satisfactory standard of contestability will be
imperative in case of threat to individual dignity and fundamental rights” and “the ‘human element’ of
judgment is, at least for some types of decisions, an irreducible aspect of legitimacy in that reviewability and
contestability are seen as concomitant of the rule of law and thus, crucial prerequisites of democratic
governance”.

Solutions proposed/how it is being addressed

Contestability by design has been proposed as an approach to better protect the rights of those subject to decisions
based solely on automated processing, as a requirement at each stage of an artificial intelligence system's lifecycle
(Almada 2019).

Gaps and challenges

As Roig (2017) argues, the “general safeguards – specific information to the data subject; the right to obtain
human intervention; the right to express his or her point of view; the right to obtain an explanation of the
decision reached; and the right to challenge the decision – may not work in the case of data analysis-based
automated processing”. Further, that it “will be difficult to contest an automatic decision without a clear
explanation of the decision reached. To challenge such an automatic data-based decision, only a
multidisciplinary team with data analysts will be able to detect false positives and discriminations” Roig
(2017). So, this is an issue that needs to be further addressed at many different levels (design, development and
use).

Legal personhood issues

The issue and its significance

There is ongoing debate about whether AI (and/or robotics systems) “fit within existing legal categories or
whether a new category should be created, with its own specific features and implications”. (European
Parliament Resolution 16 February 2017). This is not just a legal, but a politically-charged issue Burri
(2017). Čerka et al (2017) ask whether AI systems can be deemed subjects of law. The High-Level Expert
Group on Artificial Intelligence (AI HLEG) has specifically urged “policy-makers to refrain from establishing
legal personality for AI systems or robots” outlining that this is “fundamentally inconsistent with the principle
of human agency, accountability and responsibility” and poses a “significant moral hazard” (AI HLEG 2019).
Yet, others such as Turner (2019) suggest that “legal personality for AI could be justified as an elegant solution
to (i) pragmatic concerns arising from the difficulties of assigning responsibility for AI and/or (ii) in order to
support AI's moral rights, if any”. Jaynes (2020) assumes that in the future artificial entities will be granted
citizenship and discusses the jurisprudence and issues pertaining to non-biological intelligence that are
important to consider. In the EU, at least, the general caution against creating new legal personality
for AI systems has been widely echoed (Siemaszko, Rodrigues, Slokenberga 2020; Bryson, Diamantis, and
Grant 2017).

Solutions proposed/how it is being addressed


There has not been a significant breakthrough in addressing legal personhood issues for AI at the international,
EU or national level. While this issue has been raised (and will continue to be at the forefront of legal debates
for the near future), international or even regional-level agreement Delcker (2018) on this (i.e., whether legal
personhood should be offered to AI systems/robots and the form this should take) might be difficult or near
impossible to achieve (given the political nature and sensitivity of the issue). Further, such issues are largely
regulated at the national level.

Gaps and challenges

Brożek and Jakubiec (2017) investigated the issue of legal responsibility of autonomous machines and argue
that “autonomous machines cannot be granted the status of legal agents.” Bryson, Diamantis, and Grant
(2017) consider that conferring legal personhood on purely synthetic entities will become a very real legal
possibility, but think such “legislative action would be morally unnecessary and legally troublesome”. In their
review of the utility and history of legal fictions of personhood and after discussing the salient precedents where
such fictions resulted in abuse or incoherence, they argue that, “While AI legal personhood may have some
emotional or economic appeal, so do many superficially desirable hazards against which the law protects
us” Bryson, Diamantis, and Grant (2017).

Intellectual property issues

The issue and its significance

Intellectual property rights are part of the Universal Declaration of Human Rights (UDHR, Article 27), the
International Covenant on Economic, Social and Cultural Rights (ICESCR, Article 15), the International
Covenant on Civil and Political Rights (ICCPR, Article 19) and the Vienna Declaration and Programme of
Action (VDPA) 1993. Such rights have a “human rights character” and “have become contextualised in
diverse policy areas” (WIPO 1998). AI raises various intellectual property issues, e.g., who owns AI
generated/produced works or inventions? Should AI's inventions be considered prior art? Who owns the dataset
from which an artificial intelligence must learn? Who should be liable for creativity and innovation generated
by AI, if they impinge upon others’ rights or other legal provisions? (CEIPI undated).

Solutions proposed/how it is being addressed

The law may provide a variety of solutions for the issues raised Rodrigues (2019). For example, in the UK, the
law protects computer-generated literary, dramatic, musical or artistic works. There is no express legal
provision on patentability of computer-generated works. The creator of the AI design owns such rights except if
the work was commissioned or created during the course of employment. In this latter case, the rights belong to
the employer or party that commissioned the AI work UK Copyright Service (2004). As a registered trade mark
is personal property, unless an AI system was able to hold/have personal property, this right might not apply or
be able to be enjoyed by the AI system.

Gaps and challenges

Many intellectual property rights issues have not been addressed and/or answered conclusively, and current
regimes have been seen as “woefully inadequate to deal with the growing use of more and more intuitive
artificial intelligence systems in the production of such works” Davies (2011). There is a need for further research
and exploration, especially as AI advances further and it becomes increasingly difficult to identify the
creator (Talking Tech 2017).

Adverse effects on workers

The issue and its significance

The IBA Global Employment Institute report (2017) highlights the impact of AI and robotics on the workplace
(seen as a global concern). Some issues highlighted include: changes to the requirements for future employees,
lowering in demand for workers, labour relations, creation of new job structures and new types of jobs,
dismissal of employees, inequality in the ‘new’ job market, integration of untrained workers in the ‘new’ job
market, labour relations (and its possible implications for union activities and collective bargaining aspects,
challenges for employee representatives, changes in the structure of unions), health and safety issues, impact on
working time, impact on remuneration (changes, pensions), social security issues etc. Significant is also the
potential loss of autonomy for workers Frontier Economics (2018). These issues have economic (e.g., poverty)
and social consequences (e.g., homelessness, displacement, violence, despair) and significant human rights
impact potential. They raise ethical issues and dilemmas that might not easily be resolved yet are critical to
address.

Solutions proposed/how it is being addressed

Many measures or solutions are being or have been proposed to address this issue. These include retraining
workers (UK House of Lords 2018) and re-focussing and adapting the education system. The Communication
from the European Commission on Artificial Intelligence for Europe (2018) suggests the modernisation of
education, at all levels, should be a priority for governments and that all Europeans should have every
opportunity to acquire the skills they need. To manage the AI transformation, the Communication calls for
providing support to workers whose jobs change or disappear – it suggests “national schemes will be essential
for providing such up-skilling and training” (European Commission, Artificial Intelligence for Europe, 2018).
Social security systems will also require review and change.

Gaps and challenges

One report prepared for the Royal Society (2018) highlights gaps in the evidence base, particularly in relation to
there being “limited evidence on how AI is being used now and on how workers’ tasks have changed where this
has happened”, “relatively little discussion of how existing institutions, policies, social responses are shaping
and are likely to shape the evolution of AI and its adoption” and “little consideration of how international trade,
mobility of capital and of AI researchers are shaping the development of AI and therefore its potential impact
on work” (Frontier Economics 2018). While there is recognition of the widespread disruption that AI is creating,
and might create, in the workplace, not enough has been done at the policy and regulatory level to address
concerns and put in place the needed economic and educational policies and measures. At the employer level too,
while AI solutions are being widely deployed, it remains to be seen whether employers will adopt suitable
strategies or due diligence checks to minimise any adverse impacts on their workforces and help them adapt or
adjust to a changed workplace.

Privacy and data protection issues

The issue and its significance

Legal scholars and data protection enforcement authorities (CNIL 2017; ICO 2017) believe that AI (in addition
to affecting other rights) poses huge privacy and data protection challenges Gardner (2016). These include
informed consent, surveillance (Brundage 2018), and infringement of data protection rights of individuals (e.g.,
right of access to personal data, right to prevent processing likely to cause damage or distress, right not to be
subject to a decision based solely on automated processing, etc.). Wachter & Mittelstadt (2019) highlight
concerns about algorithmic accountability and underline that “individuals are granted little control and
oversight over how their personal data is used to draw inferences about them” and call for a new data
protection ‘right to reasonable inferences’ to close the accountability gap posed by ‘high risk inferences’, i.e.,
inferences that are privacy invasive or reputation damaging and have low verifiability in the sense of being
predictive or opinion-based (Wachter & Mittelstadt 2019).
The EDPS background document on Artificial Intelligence, Robotics, Privacy and Data Protection for the 38th
International Conference of Data Protection and Privacy Commissioners (2016) highlighted the potential for
increased privacy implications and more powerful surveillance possibilities. The UK Information
Commissioner's Office (ICO)’s discussion paper on Big data, artificial intelligence, machine learning and data
protection (2017) examined the implications of big data, artificial intelligence (AI) and machine learning for
data protection, highlighting the intrusive nature of big data profiling and the challenges for transparency (due to
the complexity of methods used in big data analysis) (ICO 2017).

Solutions proposed/how it is being addressed

Privacy and data protection law (particularly in the European Union) provides, at least in the letter of the law,
good safeguards and protection against infringement of data subjects’ rights. E.g., the GDPR gives data subjects
rights to transparency, information and access (Article 15), rectification (Article 16) and erasure (Article 17), the
right to object (Article 21) and the right not to be subject to automated individual decision-making (Article 22), etc.

In relation to informed consent in the use of AI, transparency of potential harms relating to its use is strongly
supported (Rigby 2019); developers should “pay close attention to ethical and regulatory restrictions at each
stage of data processing. Data provenance and consent for use and reuse are considered to be of particular
importance” (Vayena, Blasimme & Cohen 2018). In relation to surveillance, Brundage et al suggest secure
multi-party computation (MPC), which “refers to protocols that allow multiple parties to jointly compute
functions, while keeping each party's input to the function private” (Brundage 2018). Other measures that are
being used or proposed include the use of anonymisation, privacy notices, privacy impact assessment, privacy
by design, use of ethical principles and auditable machine algorithms (ICO 2017).
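
As a toy illustration of the underlying idea (a minimal sketch only, not any particular protocol proposed in the literature), additive secret sharing lets several parties jointly compute a sum while each keeps its own input private:

```python
# Toy illustration of the idea behind secure multi-party computation:
# three parties jointly compute the sum of their inputs without revealing them.
# This is plain additive secret sharing over a prime field, for illustration only.
import secrets

PRIME = 2**61 - 1  # all arithmetic is done modulo this prime


def share(value, n_parties):
    """Split a private value into n random shares that sum to it modulo PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares


def mpc_sum(private_inputs):
    """Each party shares its input; holders add shares locally; only the total is revealed."""
    n = len(private_inputs)
    all_shares = [share(v, n) for v in private_inputs]             # each row: one party's shares
    partial_sums = [sum(col) % PRIME for col in zip(*all_shares)]  # each holder sums what it holds
    return sum(partial_sums) % PRIME                               # combining partials reveals only the sum


print(mpc_sum([12, 30, 7]))  # prints 49; no single share reveals any individual input
```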

Gaps and challenges

Privacy and data protection law does not address all AI issues. As pointed out, “understanding and resolving
the scope of data protection law and principles in the rapidly changing context of AI is not an easy task, but it is
essential to avoid burdening AI with unnecessary regulatory requirements or with uncertainty about whether or
not regulatory requirements apply”(CIPL 2018). Privacy and data protection measures are only effective if they
are used, properly applied, monitored and/or enforced. Also, e.g., as the European Data Protection
Supervisor Opinion 5/2018 Preliminary Opinion on privacy by design points out, “there is a limited uptake of
commercial products and services fully embracing the concept of privacy by design and by default”. In some
cases, the challenge is that measures such as privacy/data protection impact assessments and
privacy by design might fall flat (like closing the gate after the horse has bolted), given that the core purpose of the
AI system or technology itself might conflict directly with societal values and fundamental rights.
Wachter and Mittelstadt (2019) argue that as the GDPR provides insufficient protection against sensitive
inferences (Article 9) or remedies to challenge inferences or important decisions based on them (Article 22(3)),
a new data protection right, the ‘right to reasonable inferences’, is needed to help close the accountability gap
currently posed by ‘high risk inferences’. This would be useful particularly when this issue cannot or fails to be
addressed via other means outlined above.

Liability for damage

The issue and its significance

The deployment and use of AI technologies can cause damage to persons and property. Gluyas and Day
(2018) provide some examples: pedestrians being run over by driverless cars, crashes and damage
caused by a partially operated drone, and wrongful medical treatment or diagnosis by an AI software programme. They
further explain, “As there are many parties involved in an AI system (data provider, designer, manufacturer,
programmer, developer, user and AI system itself), liability is difficult to establish when something goes wrong
and there are many factors to be taken into consideration…” Gluyas & Day (2018).

Solutions proposed/how it is being addressed

Liability issues of AI could be addressed under the purview of civil or criminal liability. Kingston
(2016) discusses AI and legal liability – both whether criminal liability could ever apply, to whom it might
apply, and, under civil law, whether an AI program is a product that is subject to product design legislation
(product liability, e.g., in cases of design or manufacturing failures) or a service to which the tort of negligence
applies.
Hallevy (2015) discusses the criminal liability of AI entities, i.e., responsibility for harm caused, and explores
whether an AI entity could itself be criminally liable (beyond the criminal liability of the manufacturer, end-user or
owner, and beyond their civil liability) and suggests that the imposition of criminal liability upon AI entities for
committing intellectual property offenses is quite feasible and proposes solutions for sentencing AI entities.
Liability issues could also be addressed under consumer protection law.
Rachum-Twaig (2020) proposes “supplementary rules that, together with existing liability models, could
provide better legal structures that fit AI-based robots. Such supplementary rules will function as quasi-safe
harbors or predetermined levels of care. Meeting them would shift the burden back to current tort doctrines.
Failing to meet such rules would lead to liability. Such safe harbors may include a monitoring duty, built-in
emergency brakes, and ongoing support and patching duties.” Rachum-Twaig argues that “these supplementary
rules could be used as a basis for presumed negligence that complements the existing liability models”.

Gaps and challenges

In certain civil law jurisdictions, many liability issues are handled through strict liability. However, Bathee
(2018) outlines “Strict liability is also a poor solution for the problem because if one cannot foresee the
solutions an AI may reach or the effects it may have, one also cannot engage in conduct that strict liability is
designed to incentivize, such as taking necessary precautions or calibrating the level of financial risk one is
willing to tolerate”. The European Commission Expert Group on Liability and New Technologies (2019)
concluded in its review of existing liability regimes on emerging digital technologies, “that the liability regimes
in force in the Member States ensure at least basic protection of victims whose damage is caused by the
operation of such new technologies. However, the specific characteristics of these technologies and their
applications – including complexity, modification through updates or self-learning during operation, limited
predictability, and vulnerability to cybersecurity threats – may make it more difficult to offer these victims a
claim for compensation in all cases where this seems justified. It may also be the case that the allocation of
liability is unfair or inefficient. To rectify this, certain adjustments need to be made to EU and national liability
regimes.” In 2020, the European Commission published a Report on the safety and liability
framework European Commission (2020). The European Parliament Legal Affairs (JURI) committee discussed
in May 2020 a draft report on AI civil liability European Parliament (2020a).

Lack of accountability for harms

The issue and its significance

As outlined by the Assessment List for Trustworthy AI (ALTAI), accountability calls for mechanisms to be put in
place to ensure responsibility for the development, deployment and/or use of AI systems - risk management,
identifying and mitigating risks in a transparent way that can be explained to and audited by third parties (AI
HLEG 2020). As outlined by Dignum (2018), “accountability in AI requires both the function of guiding action
(by forming beliefs and making decisions), and the function of explanation (by placing decisions in a broader
context and by classifying them along moral values)”. Some commentators suggest that the “‘accountability gap’ is
a worse problem than it might first seem”, causing problems in three areas: causality, justice, and
compensation Bartlett (2019). As a Privacy International and Article 19 (2018) report states, “Even when a
potential harm is found, it can be difficult to ensure accountability for violations of those responsible.”

Solutions proposed/how it is being addressed

Wachter, Mittelstadt, and Floridi (2017) suggest that “American and European policies now appear to be
diverging on how to close current accountability gaps in AI”. Legal accountability mechanisms for AI harms
might take the form of a ‘right to explanation’ Edwards, Veale (2017), data protection and information and
transparency safeguards, auditing, or other reporting obligations. Doshi-Velez et al (2017) review contexts in
which explanation is currently required under the law and outline technical considerations that would need to be
taken into account if AI systems are to provide the kinds of explanations that are currently required of
humans.

Gaps and challenges

As Bartlett (2019) outlines, “There is no perfect solution to AI accountability. One of the biggest risks with the
proposal to hold developers responsible is a chilling effect on AI development. After all, AI developers are often
small actors - individuals or small companies. Whether or not they are the most culpable when their creations
cause harm, the practical nightmare of facing lawsuits every time their AI causes damage might reasonably
make AI developers exceedingly wary of releasing their creations into the world (and their hedge fund investors
might pause before reaching for their cheque books)”. The right to explanation, as an
accountability tool, has its challenges. As Wallace (2017) points out, “it is often not practical or even possible,
to explain all decisions made by algorithms”. Further, “the challenge of explaining an algorithmic decision
comes not from the complexity of the algorithm, but the difficulty of giving meaning to the data it draws
on”Wallace (2017). Edwards & Veale (2017) have argued extensively why a right to an explanation in the
GDPR is unlikely to present a complete remedy to algorithmic harms (and might even lead to the creation of a
transparency fallacy or be distracting). They suggest the law is restrictive, unclear, and even paradoxical
concerning when any explanation-related right can be triggered. They further outline how “the legal conception
of explanations as “meaningful information about the logic of processing” may not be provided by the kind of
ML “explanations” computer scientists have developed, partially in response” Edwards & Veale (2017).
As one can see, there are a variety of legal issues pertaining to AI: some are common problems of ICT technology
in general, though facilitated or exacerbated by AI in some way, while others are novel and still developing. All
of these issues will need to be kept under constant review to ensure that they are being appropriately addressed. We next
examine the affected human rights principles.

Affected human rights principles


International human rights treaties lay down obligations which their signatories are bound to respect and fulfil;
States must refrain from interfering with rights and take positive actions to fulfil their enjoyment. While none
of them currently explicitly applies to or mentions ‘artificial intelligence/AI or machine learning’, their broad and
general scope would cover most of the issues and challenges identified. Table 1 below maps the legal issues to
human rights principles (drawn from the core international human rights treaties) that might be affected. In
many cases, the affected principle is clear and obvious; in others it is less so and needs to be drawn out.
Of the affected human rights principles, the most widely discussed in AI legal debates are the right to privacy
and data protection (very prominent in Europe) and non-discrimination. Discussions also abound on
equality and access to justice. The remaining affected principles have been discussed but could benefit from
much more airtime and future legal research.

Issues and vulnerability


It is not enough to simply outline the legal issues, gaps and challenges and the human rights principles AI
implicates. Discussing these using the frame of ‘vulnerability’ will valuably help consolidate the identification
of critical areas of concern and guide AI risk and impact mitigation efforts to better protect human and societal
well-being. It will also ensure that AI technologies advance the human rights of everyone, and especially of those
most affected.

Vulnerability definitions are fragmented. Generally, vulnerability refers to “the quality or state of being
exposed to the possibility of being attacked or harmed, either physically or emotionally” (Lexico). It may also
mean a weakness that can be exploited by one or more threats, or a predisposition to suffer damage; or it can be
understood as the “diminished capacity of an individual or group to anticipate, cope with, resist and recover
from the impact” (International Federation of Red Cross and Red Crescent Societies). Vulnerability varies with
time (i.e., characteristics, driving forces, levels) Vogel & O'Brien (2004); DFID (2004). It is the antithesis of
‘resilience’ - which is the ability of an individual, a household, a community, a country or a region to withstand,
to adapt, and to quickly recover from stresses and shocks (European Commission 2012).
There are various categorisations of vulnerable groups (in scholarship and policy). One of the more extensive
ones is the EquiFrame conceptualisation of vulnerable groups, which has 12 categories (Mannan et al 2012):

1. Limited resources (referring to poor people or people living in poverty);
2. Increased relative risk for morbidity (referring to people with one of the top 10 illnesses, identified by WHO, as occurring within the relevant country);
3. Mother child mortality (referring to factors affecting maternal and child health (0–5 years));
4. Women headed household (referring to households headed by a woman);
5. Children (with special needs), referring to children marginalized by special contexts, such as orphans or street children;
6. Aged (referring to older age);
7. Youth (referring to younger age without identifying gender);
8. Ethnic minorities (referring to non-majority groups in terms of culture, race or ethnic identity);
9. Displaced populations (referring to people who, because of civil unrest or unsustainable livelihoods, have been displaced from their previous residence);
10. Living away from services (referring to people living far from health services, either in time or distance);
11. Suffering from chronic illness (referring to people who have an illness which requires a continuing need for care);
12. Disabled (referring to persons with disabilities, including physical, sensory, intellectual or mental health conditions, and including synonyms of disability).
More specifically, according to Andorno (2016), “In human rights discourse for instance, the term vulnerability
is used to indicate a heightened susceptibility of certain individuals or groups to being harmed or wronged by
others or by the state. Populations which are particularly prone to being harmed, exploited or discriminated
include, among others, children, women, older people, people with disabilities, and members of ethnic or
religious minority groups.” Andorno further elaborates, “This does not mean that these groups are being
elevated above others. Characterizing them as ‘vulnerable’ simply reflects the hard reality that these groups
are more likely to encounter discrimination or other human rights violations than others” – this is very relevant
to our discussion as all of these categories are implicated in some form or manner in the legal issues and human
rights principles at stake.
The use and deployment of AI technologies disproportionately affects vulnerable groups. For example, the UNESCO
COMEST Preliminary Study on the Ethics of Artificial Intelligence gives an example of the Allegheny Family
Screening Tool (AFST), a predictive model used to forecast child neglect and abuse. It states that it
“exacerbates existing structural discrimination against the poor and has a disproportionately adverse impact
on vulnerable communities” via oversampling of the poor and using proxies to understand and predict child
abuse in a way that inherently disadvantages poor working families. Beduschi (2020) raises concerns about
“increasingly relying on technology to collect personal data of vulnerable people such as migrants and
refugees,” to “create additional bureaucratic processes that could lead to exclusion from protection.” There are
other examples. Children are particularly vulnerable (Butterfield-Firth 2018). As the ICO explains, “they may
be less able to understand how their data is being used, anticipate how this might affect them, and protect
themselves against any unwanted consequences” (ICO undated). Individuals from the LGBTIQ community
might find themselves adversely affected by systems that permit or facilitate such profiling or discrimination.
AI-powered data-driven and intensive economies might be more lucrative or attractive targets for cyberattacks
given their expansive use of, and dependence on AI and big data.
In the AI context, vulnerability depends on various factors such as:

• Physical/technical, e.g., poor design and/or development of algorithms and/or AI systems; inadequate security/protection; safety measures;
• Social, e.g., (lack of) public information and awareness about AI and its impacts, measures to ensure/protect the well-being of individuals, communities and society, literacy, education, skills training, existence of peace and security, access to basic human rights, social equity, positive values, health, disabilities, social cohesion;
• Political, e.g., limited policy recognition of/strategy to address AI risks, preparedness measures, systems of good governance, incentives, e.g., to promote use of risk mitigation measures;
• Regulatory, e.g., legislation, monitoring, enforcement, effective remedies for harms;
• Economic, e.g., resources to cope with adverse effects, prosperity/poverty, investments in safe and ethically compliant systems, income levels, insurance.
Table 2 below illustratively maps the identified AI legal issues to vulnerable groups and highlights the
factors that determine and/or facilitate vulnerability.

Tables 1 and 2

Table 1. Issues and affected human rights (AI legal issue: human rights principles that might be affected)

Lack of algorithmic transparency: fair trial and due process; effective remedies; social rights and access to public services; rights to free elections.

Cybersecurity vulnerabilities: the right to privacy; freedom of expression and the free flow of information.

Unfairness, bias and discrimination: elimination of all forms of discrimination against women; equal rights of men and women; enjoyment of children's rights without discrimination; equality before the law and equal protection of the law without discrimination; enjoyment of prescribed rights without discrimination; non-discrimination and right to life of migrant workers; right to liberty and security of the person; prohibition of discrimination on the basis of disability; right to fair trial; right to freedom from discrimination.

Lack of contestability: right to an effective remedy; access to justice.

Legal personhood, subjecthood, moral agency: right to recognition everywhere as a person before the law; right to equality; elimination of all forms of discrimination.

Intellectual property issues: right to own property alone or in association with others; right to freely participate in the cultural life of the community, to enjoy the arts and to share in scientific advancement and its benefits; right to the protection of the moral and material interests resulting from any scientific, literary or artistic production of which s/he is the author.

Adverse effects on workers: right to social security; prohibition of discrimination in relation to the enjoyment of rights to work, to free choice of employment, to just and favourable conditions of work, to protection against unemployment, to equal pay for equal work, and to just and favourable remuneration; right to work, including the right of everyone to the opportunity to gain his living by work which s/he freely chooses or accepts; right of persons with disabilities to work on an equal basis with others.

Privacy and data protection issues: migrants' right to privacy; respect for the privacy of persons with disabilities; right to respect for private and family life; right to privacy and data protection; children's privacy; protection of the integrity of older persons and their privacy and intimacy.

Liability issues related to damage caused: right to life; right to effective remedies.

Lack of accountability for harms: right to life; right to effective remedies.

Table 2. Mapping issues to vulnerabilities (legal issue: examples of most vulnerable groups; factors that determine/facilitate vulnerability)

Lack of algorithmic transparency
Most vulnerable groups (examples): people denied jobs, refused loans, refused entry/deported, imprisoned, put on no-fly lists or denied benefits.
Vulnerability factors (examples): poor/bad/rogue design, unfit models; ineffective regulation.

Cybersecurity vulnerabilities
Most vulnerable groups (examples): SMEs/individuals with increased/increasing reliance and dependence on AI-enabled technology; people in AI-powered data-driven and intensive economies; children and youth.
Vulnerability factors (examples): poorly designed and secured tech; lack of resources; investment and dependence on AI and data-driven technologies.

Unfairness, bias and discrimination
Most vulnerable groups (examples): ethnic/racially/gender stereotyped/profiled groups and minorities; poor/low-income earners; students allocated low grades and denied entry to educational opportunities.
Vulnerability factors (examples): creator bias; lack of consideration of ethical issues/focus on ethical design/lack of outputs testing and validation; lack of provisions for human intervention.

Lack of contestability
Most vulnerable groups (examples): data subjects who lack the information they need to exercise rights.
Vulnerability factors (examples): lack of information needed to exercise rights.

Legal personhood, subjecthood, moral agency
Most vulnerable groups (examples): humans whose rights and freedoms are affected/might conflict or compete.
Vulnerability factors (examples): ill-considered policy and attribution of personhood.

Intellectual property issues
Most vulnerable groups (examples): inventors, creators of AI works.
Vulnerability factors (examples): lack of clarity in provisions.

Adverse effects on workers
Most vulnerable groups (examples): young workers; freelance/self-employed workers.
Vulnerability factors (examples): lack of re-skilling and re-training; inadaptable/inflexible education system.

Privacy and data protection issues
Most vulnerable groups (examples): children, disabled and/or older persons.
Vulnerability factors (examples): dependence on AI and data-driven technologies.

Liability issues related to damage caused
Most vulnerable groups (examples): users of AI systems/those subject to AI use/persons to whom harm is caused, e.g., in health/medical contexts – disabled, chronically ill persons.
Vulnerability factors (examples): overdependence on AI-powered technologies.

Lack of accountability for harms
Most vulnerable groups (examples): users of AI systems/those subject to AI use/persons to whom harm is caused, especially civilians harmed in international AI-powered attacks.
Vulnerability factors (examples): culture of non-accountability – lack of expectations and use of such standards; use of exceptions/exemptions to bypass accountability-promoting measures (above and/or within the law); no lasting consequences.
The above vulnerable groups are recognised to varying degrees in policy and regulatory discussions, but it can
be argued that not enough is being done to protect them in terms of taking effective action to prevent harms by
addressing the factors of vulnerability themselves. Even where this is being done and some good steps are
being taken (e.g., at the EU and national levels), it is far from where we need to be. So, how can the
identified vulnerable groups be protected? Three actions are most required:

 1
Reduce the adverse impacts of AI where possible through (continuous) risk identification, prediction, and
preparation in consultation with affected stakeholders including a good representation of identified as
vulnerable. This should be done at early stages in the research, design and development of AI technologies and
evaluation of such measures
 2
Develop and build capacities of vulnerable communities for resilience to such effects, and
 3
Tackle the root causes of the vulnerabilities itself, e.g., taking a harder policy and regulatory stance on the
harms, discrimination, inequality and injustice fueled by such technologies.
Action 1 is addressed directly to all actors in the AI ecosystem (researchers, research funders, developers,
deployers, users, policy-makers). Action 2 is addressed to public policy-makers (at international, EU and
national levels. Action 3 is addressed at regulators (all levels). Of the three actions, actions 2 and 3 are of
immediate and urgent priority (since developments show we are addressing Action 1 to some extent, though this
depends on context, applications and jurisdictions).

Conclusion
This article provided a panoramic overview of the myriad legal issues, gaps and challenges and the affected human
rights principles connected to AI, and should serve as a particularly useful reference and stepping-stone
for researchers conducting further studies on the topic. In particular, it connected the discussion of AI legal
issues with vulnerability, a discussion that is much needed at many levels. Further, it presented three key
actions that should be considered to protect vulnerable members of society.

Many of the examined issues have wide-ranging societal and human rights implications. They affect a spectrum
of human rights principles: data protection, equality, freedoms, human autonomy and self-determination of the
individual, human dignity, human safety, informed consent, integrity, justice and equity, non-discrimination,
privacy and self-determination. The results of a socio-economic impact assessment carried out in the SIENNA
project also highlighted concerns about such issues Jansen (2018). In addition to the specific issue-related
challenges covered in this article, there are some general legal challenges – e.g., few AI specific regulations,
lack of new regulatory bodies where existing ones fall short, sufficiency of existing national laws, lack of
clarification on the application of existing laws, lack of legal academic debates in some countries, lack of
judicial knowledge and training, greyness in the legal status of automated systems Rodrigues (2019).
As AI technologies work closely with vast amounts of data, they will have cross-over and multiplicative
effects that exacerbate the legal and human rights issues related to them and their impacts on individuals Rodrigues
(2019). Such issues will amplify if industry develops applications and systems without paying attention early on
in the design and development process to the potential impacts of such technologies – whether on human rights,
ethical or societal values (i.e., no use is made of privacy or ethics by design, ELSI analysis, or impact
assessments).
With regard to the gaps, three themes repeat: a policy and legal shortfall, a technical shortfall and a multi-
stakeholder shortfall in relation to AI. The policy and legal shortfall is being addressed to some extent
(especially at the EU level – see Rodrigues 2019), but at the same time caution and vigilance are required.
The technical shortfall needs more serious consideration as it is at the point of technology design and
development that the best positive influencing and requirements embedding can be done to address legal and
ethical issues - well-designed AI would be half the battle won. The multi-stakeholder shortfall is tricky with
different stakeholders bringing their own motivations to the table that need to be clearly understood (some to
innovate unrestrictedly, others to ensure ethical and legal compliance, others to reap the profits of innovation in
AI). Further, the voices of vulnerable and underrepresented communities are not being heard enough. Still, a multi-
stakeholder approach is being underlined (see, e.g., Miailhe 2018) and addressed particularly at the
international and EU levels.
Groups and communities most affected by such issues will vary depending on the context, application and use
of AI, as shown in section 6. There is a critical need to tackle the factors that cause vulnerability head-on: by
reducing the adverse impacts, developing capacities for resilience and tackling the root causes of the
vulnerabilities.

As AI technologies progress, there will be further (and even amplified) legal issues, vulnerabilities and impacts
on human rights that will need further monitoring and research. Technological advances will charge ahead via
data-driven innovation and intelligent machines that complement and/or supplant humans and human
capabilities. AI is at the forefront of discussions at the moment, but we expect that the convergence of
technologies (AI, robotics, IoT) and new developments will change this, and that refreshed discussions will be
needed as new and unique dilemmas for the law and our societal values are posed.

Funding
This article draws from and builds upon the legal analysis results of the SIENNA project (Stakeholder-informed
ethics for new technologies with high socio-economic and human rights impact) - which has received funding
under the European Union's Horizon 2020 research and innovation programme under grant agreement No
741716.

Disclaimer
This article and its contents reflect only the views of the authors and are not intended to reflect those of the
European Commission. The European Commission is not responsible for any use that may be made of the
information it contains.

Acknowledgements
The author would like to thank the reviewers who provided feedback during article review and during the
SIENNA research underpinning this article.

Copyright
Copyright remains with the authors. This is an open access article distributed under the terms of the Creative
Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium,
provided the original author and source are credited.
Reference:

https://sc.judiciary.gov.ph/chief-justice-gesmundo-judiciary-e-library-to-use-ai-technology-to-
improve-legal-
research/?fbclid=IwAR04rPYnlDBN3EwMY_zC0ChOoxP0_0jyfujIOWEKILejCM9qmkL2va-
B5WQ#:~:text=The%20Chief%20Justice%20highlighted%20the,%2C%E2%80%9D%20said%20t
he%20Chief%20Justice

Chief Justice Gesmundo: Judiciary E-Library to use AI Technology to Improve Legal Research
August 27, 2023

Chief Justice Alexander G. Gesmundo as keynote speaker at the first grand alumni homecoming of the Silliman
University Law Alumni Association on August 26, 2023, at the Claire Isabel McGill Luce Auditorium, Silliman
University, Dumaguete City, Negros Oriental. (Courtesy of the Supreme Court Public Information Office)

“AI could be the magnet that makes [legal] search faster and easier, to the benefit of the people that we ultimately
serve.”

Thus remarked Chief Justice Alexander G. Gesmundo as he addressed over 600 members of the Dumaguete legal
community during the alumni homecoming of the Silliman University Law Alumni Association (SULAW) on
August 26, 2023 at the Claire Isabel McGill Luce Auditorium, Silliman University, Dumaguete City, Negros
Oriental.

The Chief Justice highlighted the role of artificial intelligence (AI) in enabling faster and easier access to legal
references. “[AI] will usher in the redevelopment of the Judiciary E-Library, which will include AI-enabled tech
to improve its legal research capabilities,” said the Chief Justice.

Chief Justice Gesmundo described the current state of legal research as similar to “looking for a needle in
haystacks upon haystacks.” Thus, technology must be maximized to improve the process. “Through natural
language-processing—the same technology behind ChatGPT—we will install a search engine that will provide
more accurate and reliable results; using machine learning, search algorithms will constantly self-improve based
on the feedback of users.”

The Chief Justice added, “AI-enabled tech will also generate analysis based on words and phrases, including their
context, from previous cases or legal precedents, and predict and suggest possible outcomes for new cases.”

To illustrate how the AI-enabled legal research would work, the Chief Justice presented a video where five
different questions were submitted to the AI platform called E-Library Data Assistant, ranging from scenario-
based legal queries to a request for a summary of a Supreme Court decision. In each of the questions, the Data
Assistant provided a response within 30 seconds.
Sample answer provided by the proposed AI legal research platform to a legal query.

The proposed AI-enabled research tool is currently under further development. It is one of the projects under
the Strategic Plan for Judicial Innovations 2022-2027 (SPJI), the Court’s blueprint for judicial reform.
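
The press release does not describe the underlying implementation, but the quoted combination of natural-language search with feedback-driven improvement is commonly built on text embeddings. The snippet below is a minimal, hypothetical sketch of the search half only: it embeds a few illustrative case summaries and ranks them against a plain-language query. The library, model name, and corpus are assumptions for illustration and are not drawn from the Judiciary's project.

# Minimal, hypothetical sketch of natural-language search over case summaries.
# Assumes the sentence-transformers library; the corpus and model are illustrative
# and do not reflect the Judiciary E-Library's actual implementation.
from sentence_transformers import SentenceTransformer, util

corpus = [
    "Decision on the requisites of a valid warrantless arrest.",
    "Ruling on the reglementary period to appeal in civil cases.",
    "Case discussing psychological incapacity as a ground to void a marriage.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query = "When may the police arrest a person without a warrant?"
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank the corpus by cosine similarity to the query and print the top matches.
for hit in util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]:
    print(f"{hit['score']:.3f}  {corpus[hit['corpus_id']]}")

User feedback of the kind the Chief Justice mentions would typically be logged separately and used to re-rank or fine-tune such a model over time.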

Another ongoing SPJI program shared by Chief Justice Gesmundo is the transformation of trial courts into
electronic courts, using the much-improved eCourt version 2.0 software—including functions like e-payments,
which can now be done through the Judiciary Electronic Payment Solution (JePS) and e-filing, which started
development under eCourt version 2.0 last July.

Other SPJI projects being implemented are: the Philippine Judiciary 365, a modern workplace collaboration
solution which provides courts with the facility to electronically receive pleadings and other court submissions
securely, hear and decide cases via videoconferencing, generate real-time transcripts of stenographic notes, and
organize court calendars, among others; the Bar Applicant Registration Information System and Tech
Assistance, or BARISTA, the Court’s online application platform for the 2023 Bar Examinations; the Philippine
Judicial Academy Learning Management System, which has made available online traditionally in-person
training programs so justices, judges, court officials, and employees can now access on-demand and easily
digestible courses at their own pace and convenience; the digitized Benchbook for Philippine Trial Courts,
containing updated pertinent laws, treaties, rules, regulations, jurisprudence, and the latest issuances of the Court,
each of them searchable, downloadable, user-friendly, and ready for use; and the Human Resource Information
System and Financial Management Information System as part of the Court’s upgrading of its systems and
processes.

Chief Justice Gesmundo concluded by calling on SULAW and its members to help the Court raise awareness and
accelerate the adoption of the SPJI’s reforms for a revitalized and responsive Judiciary. “I enjoin you to be
proactive partners in bringing about the future we want for our justice system: one which unfolds under the light
of God; where justice is not a dream, but a guarantee; where justice is delivered not in time, not eventually, but
in real time.”

The Chief Justice also administered the revised Lawyer’s Oath to the lawyers present at the homecoming.

Chief Justice Gesmundo was joined by Supreme Court Associate Justice Alfredo Benjamin S. Caguioa; Deputy
Court Administrator Jenny Lind R. Aldecoa-Delorino, who is an alumna of Silliman University College of Law
and recipient of the 2014 Outstanding Sillimanian in the field of Government Service Award in the
Judiciary; retired Supreme Court Associate Justice Edgardo L. Delos Santos; and Negros Oriental Regional Trial
Court Executive Judge Gerardo Ancheta Paguio, Jr.

Also present were Negros Oriental 2nd District Representative Manuel T. Sagarbarria; Negros Oriental Governor
Manuel L. Sagarbarria, Jr.; Dumaguete City Mayor Felipe Antonio Remollo; Silliman University President Dr.
Betty C. McCann; Silliman University College of Law Dean and former Solicitor General Atty. Florin T. Hilbay;
and SULAW President Atty. Ingrid T. Tinagan. (Courtesy of the Supreme Court Public Information Office)
Chief Justice Alexander G. Gesmundo (seventh from left), Justice Alfredo Benjamin S. Caguioa (to the left of the
Chief Justice), and Deputy Court Administrator Jenny Lind R. Aldecoa-Delorino (to the right of the Chief Justice)
grace the first grand alumni homecoming of the Silliman University Law Alumni Association on August 26, 2023,
at the Claire Isabel McGill Luce Auditorium, Silliman University, Dumaguete City, Negros Oriental.

(Courtesy of the Supreme Court Public Information Office)

Chief Justice Alexander G. Gesmundo receives a plaque of appreciation from Silliman University College of Law Dean and
former Solicitor General Atty. Florin T. Hilbay; Silliman University President Dr. Betty C. McCann; and Silliman University
Law Alumni Association President Atty. Ingrid T. Tinagan at the first grand alumni homecoming of the Silliman University Law
Alumni Association on August 26, 2023, at the Claire Isabel McGill Luce Auditorium, Silliman University, Dumaguete City,
Negros Oriental. (Courtesy of the Supreme Court Public Information Office)

Chief Justice Alexander G. Gesmundo administers the revised Lawyer’s Oath at the first grand alumni homecoming of the Silliman
University Law Alumni Association on August 26, 2023, at the Claire Isabel McGill Luce Auditorium, Silliman University,
Dumaguete City, Negros Oriental. (Courtesy of the Supreme Court Public Information Office)

Deputy Court Administrator (DCA) Jenny Lind R. Aldecoa-Delorino delivers a song number at the first grand alumni
homecoming of the Silliman University Law Alumni Association on August 26, 2023, at the Claire Isabel McGill Luce Auditorium,
Silliman University, Dumaguete City, Negros Oriental. DCA Delorino is an alumna of Silliman University College of Law and
recipient of the 2014 Outstanding Sillimanian in the field of Government Service Award in the Judiciary. (Courtesy of the Supreme
Court Public Information Office)
Reference:
https://www.pna.gov.ph/articles/1206112?fbclid=IwAR2hvbZFylMUoBgGgOq3vO_l8S7gzDG2A
PTk2BI6SZkDJomFqJx1XHYbI4w

magistrate
By Benjamin Pulta July 21, 2023, 4:25 pm

Supreme Court Associate Justice Mario V. Lopez (Photo courtesy of SC)

MANILA - The human touch will keep the legal profession from being one of the endeavors that may be rendered
obsolete by major advancements in artificial intelligence (AI), a Supreme Court (SC) magistrate has said.

“It is our humanity that renders us indispensable in the practice of law. We are humans guided by conscience and
societal responsibility in the dispensation of ultimate justice,” Supreme Court Associate Justice Mario V. Lopez said,
speaking before the commencement exercises of the Arellano University School of Law on Thursday at the
Philippine International Convention Center in Pasay City.

Lopez said AI should be considered a mere tool for human use rather than a replacement for humans, especially
in the legal profession.

“AI has no deep understanding of abstract concepts like justice, equity, compassion, and good conscience. Unlike
judges, a robot cannot decide cases through the lens of judicial temperament, open-mindedness, integrity, and
independence. Unlike lawyers, AI cannot think outside the box or be creative in its approach or cry out for fairness
or detest injustice with courage and perseverance,” he said.

While the magistrate acknowledged that AI can process information with an accuracy and speed that humans cannot
match, he noted that it will not render lawyers obsolete.

He said the legal profession is more than simply applying the law to the facts; rather, it is about “filling these
crevices of the law with our human reasoning.

“AI can never replace the human heart, moral values, critical thinking skills, and respect for the rule of law,” Lopez
said.

Calling them “digital natives,” Lopez also reminded the future lawyers to use social media responsibly and to be
guided by the new Code of Professional Responsibility and Accountability, one of the key initiatives under the
Court’s blueprint of action for reforms—the Strategic Plan for Judicial Innovations 2022-2027.

The SC magistrate expressed optimism that as future lawyers, they will become proactive partners of the Court in
reforming the Judiciary with the use of technology. (PNA)
Reference:

https://sc.judiciary.gov.ph/sc-to-use-artificial-intelligence-to-improve-court-
operations/?fbclid=IwAR2G9pDpuXuF4XBPWi-r76kmXTqjmADKTgQ1bfGox-
iRx6FqLK1HK_NVNKQ

SC to Use Artificial Intelligence to Improve Court Operations


March 04, 2022

The Supreme Court is looking to use artificial intelligence (AI) to improve operations in the Judiciary as part of its drive to
unclog court dockets and expedite decisions.
This was revealed by Chief Justice Alexander G. Gesmundo during a virtual meeting with the Joint Foreign Chambers of the
Philippines (JFC) on Thursday, March 3, 2022.

“The Court aims to capitalize artificial intelligence (AI) to improve court operations, such as the use of technology in preparing
transcripts of stenographic notes and in digitalizing judgments rendered,” said Chief Justice Gesmundo while giving an update
on the plans of the Supreme Court to unclog court dockets and expedite the resolution of cases.

Apart from the proposed use of AI in modernizing the transcription process, the Chief Justice discussed the launch of the Case
Decongestion Program in April last year; the issuance of a Resolution approving several amendments to the Internal Rules of
the Supreme Court that are specifically meant to address the concerns of docket congestion; Justice Real Time: A Strategic Plan
for Judiciary Innovations 2022-2026, a policy document which aims to describe and lay down the clear guiding principles,
definite workplan and portfolio of projects, and reasonable target outcomes that will support the comprehensive and integrated
reform initiatives in the Philippine Judiciary for the period of 2022 to 2026; the Revised Guidelines on Submission of Electronic
Copies of Supreme Court-Bound Papers Pursuant to the Efficient Use of Paper Rule, which provides that a modern e-filing
network will be utilized to allow Justices and selected court officials and personnel to securely access and view case records
online, reduce the need of requesting for the physical rollo, and allow all concerned to work simultaneously and securely even
under remote-work arrangements; the institutionalization of videoconferencing hearings for all courts nationwide; and the
launch soon of the Judiciary e-Payment Solution arranged with Union Bank.

“As of 11 February 2022, there have been 778,206 videoconferencing hearings conducted with a success rate of 88.35%. There
have been 112,760 persons deprived of liberty, 1,721 of which are children in conflict with the law, released using the modality
of videoconferencing hearings,” shared the Chief Justice on the Court’s use of technology.

In addressing the integrity of the judicial system and allegations of corrupt practices, Chief Justice Gesmundo discussed the
creation of the Judicial Integrity Board and the Corruption Prevention and Investigation Office, and the revision of Rule 140 of
the Rules of Court.

Chief Justice Gesmundo expressed the gratitude of the Supreme Court to the different development partners for their assistance
towards the accomplishment of the projects involving judicial reform.

The Chief Justice was joined by Associate Justices Jose Midas P. Marquez and Antonio T. Kho, Jr. in the Supreme Court
Session Hall; while Associate Justices Rodil V. Zalameda, Jhosep Y. Lopez, and Japar B. Dimaampao joined via Zoom. The
JFC was represented by Mr. Frank Thiel and Mr. John Forbes from the American Chamber of Commerce of the Philippines,
Inc.; Mr. Bradley Norman and Atty. Roderick Salazar from the Australian-New Zealand Chamber of Commerce (Philippines),
Inc.; Mr. Julian Payne and Atty. Eusebio Tan from the Canadian Chamber of Commerce of the Philippines, Inc.; Mr. Florian
Gottein and Atty. Peter Calimag from the European Chamber of Commerce of the Philippines, Inc.; Mr. Nobuo Fujii from the
Japanese Chamber of Commerce & Industry of the Philippines, Inc.; Atty. Rolando Villones from the Korean Chamber of
Commerce of the Philippines, Inc.; and Atty. Mimi Lopez-Malvar from the Philippine Association of Multinational Companies
Regional Headquarters, Inc. (Courtesy of the SC Public Information Office)
Reference: https://www.forbes.com/sites/bernardmarr/2023/05/03/should-we-stop-developing-ai-for-
the-good-of-humanity/?sh=b1a4aea2943a&fbclid=IwAR35PHDghQnqK_-
QDoESyr217xYgYDenAB06KYhzHI9Ugl5XoeUzgAZ0Zx8

Should We Stop Developing AI For The Good Of Humanity?
Bernard Marr
Contributor
May 3, 2023, 02:29am EDT

Almost 30,000 people have signed a petition calling for an “immediate pause” to the
development of more powerful artificial intelligence (AI) systems. The interesting thing is
that these aren't Luddites with an inherent dislike of technology. Names on the petition
include Apple co-founder Steve Wozniak, Tesla, Twitter, and SpaceX CEO Elon Musk, and
Turing Prize winner Yoshua Bengio.


Others speaking out about the dangers include Geoffrey Hinton, widely credited as "the
godfather of AI." In a recent interview with the BBC to mark his retirement from Google at
the age of 75, he warned that “we need to worry” about the speed at which AI is becoming
smarter.

So, what’s got them spooked? Are these individuals really worried about a Terminator or
Matrix-type scenario where robots literally destroy or enslave the human race? Well, as
unlikely as it might seem from where we stand today, it seems that indeed they are.
Reference: https://clp.law.harvard.edu/knowledge-hub/magazine/issues/generative-ai-in-the-
legal-profession/the-implications-of-chatgpt-for-legal-services-and-
society/?fbclid=IwAR3oMmL1yV3NaYlry8o3xWLMBKthTD5ud2a8IBFvqopPkGkjeHb6wjckW34

The Implications of ChatGPT for Legal Services and Society
Andrew Perlman

On November 30, 2022, OpenAI released a chatbot called ChatGPT. To demonstrate the chatbot’s
remarkable sophistication and potential implications, for both legal services and society more
generally, most of this paper was generated in about an hour through prompts within ChatGPT. Only
this abstract, the preface, the outline headers, the epilogue, and the prompts were written by a
person. ChatGPT generated the rest of the text with no human editing.

To be clear, the responses generated by ChatGPT were imperfect and at times problematic, and the
use of an AI tool for law-related services raises a host of regulatory and ethical issues. At the same
time, ChatGPT highlights the promise of artificial intelligence, including its ability to affect our lives in
both modest and more profound ways. ChatGPT suggests an imminent reimagination of how we
access and create information, obtain legal and other services, and prepare people for their careers.
We also will soon face new questions about the role of knowledge workers in society, the attribution
of work (e.g., determining when people’s written work is their own), and the potential misuse of and
excessive reliance on the information produced by these kinds of tools.

The disruptions from AI’s rapid development are no longer in the distant future. They have arrived,
and this document offers a small taste of what lies ahead.

The following can also be found on Andrew Perlman’s SSRN page as a PDF.

Preface
Legal futurists have long anticipated technology’s transformation of the legal industry, though the
impact to date can best be described as evolutionary rather than revolutionary. The release of
ChatGPT by OpenAI on November 30, 2022, may herald the beginning of the revolution.

At various times in the last 30 years, we have experienced aha moments that have opened our eyes
to technology’s ability to fundamentally change how we access and generate information. The
internet marked one of those moments, helping us to imagine how easy it would soon be to find
information and share it with the world. Google’s search engine offered another inflection point,
revealing a markedly new and improved method for finding what we needed on the emerging internet
and prompting innovative approaches to using and navigating the online world. The iPhone’s launch
sparked our imaginations yet again, showing us what we could do with a small device in our pockets
and unleashing new apps and tools that have impacted our lives in innumerable ways (for both good
and ill).

The release of ChatGPT is the next such moment. It has shown us the powerful capabilities of so-
called generative AI, which can absorb an enormous amount of information and then create new,
original content after receiving a prompt from a user. We can envision generating original content for
our personal and professional use with simple prompts to a chatbot. In moments, we can now draft
sophisticated emails, term papers, reports, business plans, poems, jokes, and even computer code.
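
For readers who have not tried such a tool programmatically, the short sketch below shows one way a simple prompt can be sent to a generative model and the drafted text printed. It is purely illustrative: it assumes the OpenAI Python SDK (v1.x style), and the model name and prompt are placeholders rather than anything used in this paper.

# Hypothetical sketch: sending a single prompt to a generative model and printing
# the drafted text. Assumes the OpenAI Python SDK (v1.x style) and an API key in
# the OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; substitute whichever model is available
    messages=[
        {"role": "user", "content": "Draft a short, polite email rescheduling a client meeting to next Tuesday."},
    ],
)

print(response.choices[0].message.content)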

For the legal industry, ChatGPT may portend an even more momentous shift than the advent of the
internet. A significant part of lawyers’ work takes the form of written words—in emails, memos,
motions, briefs, complaints, discovery requests and responses, transactional documents of all kinds,
and so forth. Although existing technology has made the generation of these words easier in some
respects, such as by allowing us to use templates and automated document assembly tools, these
tools have changed most lawyers’ work in relatively modest ways. In contrast, AI tools like ChatGPT
hold the promise of altering how we generate a much wider range of legal documents and
information. In fact, within a few months of ChatGPT’s release, law firms and legal tech companies
are already announcing new ways of using generative AI tools.
To demonstrate the potential implications of AI, for both legal services and society, I drafted most of
the rest of this paper on December 5, 2022 in about an hour through prompts within ChatGPT. I wrote
only the abstract, this preface, the outline headers, the epilogue, and the prompts. With one exception
noted below (which involves Bing Chat), ChatGPT generated the rest of the text with no human
editing.


I organized the prompts, in part, after ChatGPT generated the introduction to the piece. ChatGPT
suggested there that it could help the legal industry in four areas: legal research, document
generation, legal information, and legal analysis. I structured the rest of the paper around these use
cases and prompted ChatGPT with questions that could test its abilities in those areas.

To show how quickly the technology is advancing, the last prompt before the conclusion illustrates the
power of Microsoft’s Bing Chat with regard to the fourth category (legal analysis). Bing Chat relies on
an even more advanced version of ChatGPT and was released for beta testing in February 2023. I
asked it to assess a civil procedure exam (both a multiple-choice question and an essay), and Bing
Chat gave remarkably good answers. I also had an opportunity to ask it 15 challenging multiple-
choice questions about legal ethics, and Bing Chat got 12 of them right. Not only did Bing get the
answers right most of the time (with excellent analyses), but even when it was wrong, it was wrong in
sophisticated ways. Put simply, Bing Chat is already operating at the level of a B/B+ law student, and
it will only get better with time.

To be clear, the responses generated by ChatGPT were imperfect and at times problematic. For
example, the legal documents are incomplete, and its discussion of certain topics (like personal
jurisdiction) lacked nuance. Moreover, the use of an AI tool for law-related services raises a host of
regulatory and ethical issues, such as those relating to the unauthorized practice of law.

There are also significant threats to society more generally from these tools. After interacting
extensively with Bing Chat (especially in the days before Microsoft limited how many prompts
someone could use in one session), it became clear to me and others that AI can produce disturbing
answers and give the false impression of sentience. Perhaps the biggest threat of these tools is how
well they can manipulate people’s beliefs and emotions. The psychological effects of interacting with
generative AIs are strong and real, even for those of us who know that it is merely an advanced piece
of technology. In the wrong hands, this can and will be used to shape how people think, feel, and
behave.


At the same time, ChatGPT highlights the promise of artificial intelligence, including its ability to affect
our lives in both modest and more profound ways. ChatGPT suggests an imminent reimagination of
how we access and create information, obtain legal and other services, and prepare people for their
careers. We also will soon face new questions about the role of knowledge workers in society, the
attribution of work (e.g., determining when people’s written work is their own), and the potential
misuse of and excessive reliance on the information produced by these kinds of tools.

The disruptions from AI’s rapid development are no longer in the distant future. Another aha moment
has arrived, and this document offers a small taste of what lies ahead.

Introduction
Prompt: Write the introduction to a scholarly paper on how ChatGPT will be used in the law.

As the use of artificial intelligence in the legal field grows, chatbots are becoming an increasingly
prominent tool for assisting lawyers and legal professionals. GPT-3, a state-of-the-art chatbot
developed by OpenAI, has the potential to revolutionize the way legal work is done, from legal
research and document generation to providing general legal information to the public. In this
scholarly paper, we will examine the potential uses of chatbots like GPT-3 in the legal field, and
discuss the challenges and ethical considerations that must be taken into account when using this
technology. We will also explore how chatbots like GPT-3 are likely to evolve in the future, and how
they will continue to impact the legal profession.

The history of chatbots


Prompt: Describe the history of chatbots and GPT-3.

The idea of chatbots, or computer programs designed to simulate conversation with human users,
has been around since the 1950s. Early chatbots were limited in their capabilities and were primarily
used for simple tasks such as providing weather updates or answering basic questions. Over time,
however, the technology behind chatbots has advanced significantly, enabling them to understand
and respond to a wider range of inputs.

GPT-3, or Generative Pretrained Transformer 3, is a state-of-the-art chatbot developed by OpenAI. It


was released in 2020 and is one of the largest language models ever created, with 175 billion
parameters. GPT-3 is trained on a vast amount of text data, allowing it to generate human-like text in
a wide range of contexts. It has been used in a variety of applications, including language translation,
summarization, and question answering.
While chatbots have been around for decades, the development of GPT-3 and other advanced
language models has opened up new possibilities for their use in a variety of fields, including the legal
profession.

Implications of ChatGPT for the legal industry


Prompt: Describe potential use cases for GPT-3 in the legal industry.

GPT-3 has the potential to be used in a variety of ways in the legal industry. Some potential use
cases include:

1. Legal research: GPT-3 could be used to assist lawyers in legal research by quickly scanning through
large amounts of text data and providing relevant information on a given topic.
2. Document generation: GPT-3 could be used to generate legal documents such as contracts and briefs,
saving lawyers time and effort.
3. Providing general legal information: GPT-3 could be used to provide general legal information to the
public, such as answering frequently asked questions or providing basic legal advice.
4. Legal analysis: GPT-3 could be used to assist in legal analysis by providing suggestions and insights
based on its understanding of the relevant legal principles and precedent.
Overall, the potential uses of GPT-3 in the legal industry are wide-ranging and could greatly assist
lawyers and legal professionals in their work.

Examples of how ChatGPT can help with legal research

Suggesting arguments to make in a brief.

Prompt: Draft a brief to the United States Supreme Court on why its decision on same-sex
marriage should not be overturned.

To the Honorable Justices of the United States Supreme Court:

The petitioner respectfully submits this brief in opposition to the motion to overturn the Court’s
decision on same-sex marriage. The petitioner believes that the Court’s decision in Obergefell v.
Hodges, which held that same-sex couples have a constitutional right to marry, should not be
overturned for the following reasons.

First, the Court’s decision in Obergefell is firmly rooted in the principle of equality under the law. The
Constitution guarantees all individuals the equal protection of the laws, and this includes the right to
marry the person of one’s choosing. Denying same-sex couples the right to marry would be a clear
violation of this principle.

Second, the Court’s decision in Obergefell is consistent with a long line of precedent establishing the
fundamental right to marry. In Loving v. Virginia, the Court held that marriage is one of the “basic civil
rights of man,” and that the right to marry is protected by the Due Process and Equal Protection
Clauses of the Constitution. This right has been reaffirmed in subsequent cases, and the Court’s
decision in Obergefell is a natural extension of this precedent.

Third, the Court’s decision in Obergefell has been widely accepted and has had a positive impact on
the lives of same-sex couples and their families. Since the decision was handed down, same-sex
couples have been able to marry and enjoy the same legal rights and benefits as opposite-sex
couples. Overturning the decision would create confusion and uncertainty, and would cause harm to
these couples and their families.

For these reasons, the petitioner respectfully requests that the Court deny the motion to overturn its
decision in Obergefell v. Hodges.

Respectfully submitted,

[Your Name]

Providing an overview of an area of law.

Prompt: Explain the concept of personal jurisdiction.

Personal jurisdiction refers to a court’s power to hear a case involving a particular individual or entity.
In order for a court to exercise personal jurisdiction over a defendant, the defendant must have
certain minimum contacts with the state in which the court is located. These minimum contacts must
be such that it would be fair and reasonable to require the defendant to defend the case in that state.
The concept of personal jurisdiction is important because it ensures that individuals and entities are
only subject to lawsuits in states where they have sufficient ties, and it protects defendants from being
dragged into court in states where they have no connection.

Helping with factual research.

Prompt: Develop a list of deposition questions for the plaintiff in a routine motor vehicle
accident in Massachusetts.

1. Can you please state your name and address for the record?
2. How are you employed?
3. Can you describe the events leading up to the accident?
4. Where were you driving at the time of the accident?
5. Can you describe the other vehicle involved in the accident?
6. How fast were you driving at the time of the accident?
7. Did you see the other vehicle before the accident occurred?
8. Did you have time to react to avoid the accident?
9. Were you injured as a result of the accident?
10. Can you describe the injuries you sustained?
11. Have you received any medical treatment for your injuries?
12. Can you estimate the cost of the medical treatment you have received?
13. Have you missed any time from work as a result of the accident?
14. Can you estimate the amount of income you have lost due to missing work?
15. Have you incurred any other expenses as a result of the accident?
16. Are you seeking any damages as a result of the accident?
17. Can you provide any supporting documentation for your damages claim?

Examples of how ChatGPT can help with legal document generation

Prompt: Give an example of how GPT-3 could be used to create a legal document.

One way GPT-3 could be used to create a legal document is by prompting the user to input specific
information, such as the parties involved, the terms of the agreement, and any special provisions.
GPT-3 could then use this information to generate a draft of the legal document, which the user could
then review and modify as needed.

For example, if a user wanted to create a contract for the sale of a piece of property, they could
provide GPT-3 with the names of the buyer and seller, the price of the property, and any
contingencies or special provisions. GPT-3 could then generate a draft of the contract, which the user
could then review and make any necessary changes. This process could save the user time and
effort, as they would not have to start from scratch when creating the legal document.
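
As a rough sketch of the workflow just described, the snippet below gathers a handful of structured terms and folds them into a drafting prompt before requesting a first draft. It again assumes the OpenAI Python SDK and an illustrative model name; the field names and template wording are invented for this example, and any generated draft would still need review and revision by a lawyer.

# Hypothetical sketch of structured-input document generation: collect key terms,
# build a drafting prompt, and request a first draft for attorney review.
# Assumes the OpenAI Python SDK (v1.x style); the fields and model are illustrative.
from openai import OpenAI

client = OpenAI()

deal = {
    "seller": "Jane Smith",
    "buyer": "John Doe",
    "property": "120 Tremont Street, Boston, MA",
    "price": "$500,000",
    "special_provisions": "Closing on or before December 31, 2022.",
}

prompt = (
    "Draft a Massachusetts contract for the sale of real estate containing the usual "
    f"terms and conditions. Seller: {deal['seller']}. Buyer: {deal['buyer']}. "
    f"Property: {deal['property']}. Purchase price: {deal['price']}. "
    f"Special provisions: {deal['special_provisions']}"
)

draft = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

print(draft.choices[0].message.content)  # first draft only; a lawyer should review and revise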

Preparing an initial draft of a complaint.

Prompt: Draft a legal complaint for a Massachusetts state court by John Doe against Jane
Smith for injuries arising out of a car accident on January 1, 2022 caused by Jane Smith at the
intersection of Tremont Street and Park Street in Boston. The complaint should specify that
Jane Smith failed to stop at a red light and caused John Smith serious injuries.

State of Massachusetts

In the Court of Common Pleas

John Doe,

Plaintiff

v.

Jane Smith,

Defendant

Complaint

The Plaintiff, John Doe, brings this Complaint against the Defendant, Jane Smith, for injuries arising
out of a car accident that occurred on January 1, 2022 at the intersection of Tremont Street and Park
Street in Boston, Massachusetts.

At the time of the accident, the Plaintiff was operating his vehicle on Tremont Street in the City of
Boston. The Defendant was operating her vehicle on Park Street in the City of Boston. As the
Defendant approached the intersection of Tremont Street and Park Street, she failed to stop at a red
light and collided with the Plaintiff’s vehicle. As a result of the Defendant’s negligence, the Plaintiff
suffered serious injuries.

The Plaintiff is entitled to recover damages from the Defendant for the injuries he sustained in the
accident.

The Plaintiff therefore seeks damages for his injuries, including medical expenses, lost wages, and
pain and suffering.

The Plaintiff demands a trial by jury on all issues so triable.


Dated: December 5, 2022

John Doe (Plaintiff)

Drafting a contract for the sale of real estate.

Prompt: Create a document for the sale of property from Jane Smith to John Doe. The
property is at 120 Tremont Street, Boston, MA and will be sold for $500,000. The document
should contain the usual terms and conditions of such a sale in Massachusetts.

Contract for Sale of Real Estate


This contract (the “Agreement”) is made and entered into on this date by and between Jane Smith
(the “Seller”) and John Doe (the “Buyer”).

1. Property: The property being sold is located at 120 Tremont Street, Boston, MA.
2. Purchase Price: The purchase price for the property is $500,000.
3. Closing: The closing of this sale shall take place on or before December 31, 2022.
4. Financing: The Buyer shall be responsible for obtaining any necessary financing for the purchase of the
property.
5. Representations and Warranties: The Seller represents and warrants that the property is free and clear
of any liens or encumbrances, and that the Seller has the right to sell the property. The Buyer
represents and warrants that the Buyer has the financial ability to complete the purchase of the
property.
6. Indemnification: The Seller agrees to indemnify and hold the Buyer harmless from any claims arising
from the condition of the property.
7. Entire Agreement: This Agreement constitutes the entire agreement between the parties, and
supersedes any prior agreements or understandings, whether written or oral.
8. Governing Law: This Agreement shall be governed by the laws of the Commonwealth of
Massachusetts.

Jane Smith (Seller) John Doe (Buyer)

Drafting a contract for the sale of a car.

Prompt: Create a contract for the sale of a 2018 Toyota Prius from Jane Smith to John Doe in
Massachusetts for the sale price of $15,000. The contract should contain the usual
representations and warranties of such a sale.

Contract for Sale of Motor Vehicle

This contract (the “Agreement”) is made and entered into on this date by and between Jane Smith
(the “Seller”) and John Doe (the “Buyer”).

1. Vehicle: The vehicle being sold is a 2018 Toyota Prius.
2. Purchase Price: The purchase price for the vehicle is $15,000.
3. Closing: The closing of this sale shall take place on or before December 31, 2022.
4. Financing: The Buyer shall be responsible for obtaining any necessary financing for the purchase of the
vehicle.
5. Representations and Warranties: The Seller represents and warrants that the vehicle is free and clear
of any liens or encumbrances, and that the Seller has the right to sell the vehicle. The Seller further
represents and warrants that the vehicle is in good working condition and has no known defects. The
Buyer represents and warrants that the Buyer has the financial ability to complete the purchase of the
vehicle.
6. Indemnification: The Seller agrees to indemnify and hold the Buyer harmless from any claims arising
from the condition of the vehicle.
7. Entire Agreement: This Agreement constitutes the entire agreement between the parties, and
supersedes any prior agreements or understandings, whether written or oral.
8. Governing Law: This Agreement shall be governed by the laws of the Commonwealth of
Massachusetts.
IN WITNESS WHEREOF, the parties have executed this Agreement on the date first above written.

Jane Smith (Seller) John Doe (Buyer)

Drafting a will.

Prompt: Draft Andrew Perlman’s will in Massachusetts leaving all of his property to his wife,
Lisa Aidlin.

Last Will and Testament of Andrew Perlman

I, Andrew Perlman, being of sound mind and body, do hereby make, publish, and declare this to be
my Last Will and Testament, revoking any and all prior Wills and Codicils made by me.
1. I hereby appoint Lisa Aidlin as the executor of this Will.
2. I give, devise, and bequeath all of my property, real and personal, wherever located, to my wife, Lisa
Aidlin.
3. In the event that Lisa Aidlin predeceases me or is unable or unwilling to serve as the executor of this
Will, I hereby appoint my brother, John Doe, as the alternate executor.
4. I hereby waive any bond that may be required of the executor of this Will.
IN WITNESS WHEREOF, I have hereunto set my hand this day of December 5, 2022.

Andrew Perlman (Testator)

Signed, published, and declared by the above-named testator as and for his Last Will and Testament,
in the presence of us, who, at his request, in his presence, and in the presence of each other, have
hereunto subscribed our names as witnesses.

Witness 1: ____________________

Witness 2: ____________________

Examples of providing general legal information and advice

Offering advice on obtaining an individualized education program (IEP) from a school district.

Prompt: I have a disagreement with my child’s school district in Massachusetts regarding the
creation of an IEP. What should I do?

If you have a disagreement with your child’s school district in Massachusetts regarding the creation of
an IEP, there are a few steps you can take to try to resolve the issue.

First, you should try to communicate with the school district and discuss your concerns. You can do
this by meeting with the school district’s special education director or the individual responsible for
creating your child’s IEP. During this meeting, you can explain your concerns and try to come to an
agreement on the terms of the IEP.

If you are unable to reach an agreement through this initial communication, you can request a
mediation session with the school district. This is a meeting facilitated by a neutral third party, who will
help you and the school district work through your disagreement and come to a resolution.

If you are still unable to resolve the issue after trying these steps, you can file a due process
complaint with the Massachusetts Department of Elementary and Secondary Education. This will
initiate a formal dispute resolution process, which can ultimately result in a hearing before an
administrative law judge.

It is important to remember that you have the right to participate in the creation of your child’s IEP and
to have your concerns heard and addressed by the school district. If you are having difficulty reaching
an agreement with the school district, seeking mediation or filing a due process complaint may be
necessary to protect your rights and ensure that your child receives the education they are entitled to.

Offering advice on correcting a social security payment.

Prompt: The government hasn’t paid me the correct social security amount. What should I do?

If you believe that the government is not paying you the correct amount of social security, there are a
few steps you can take to try to resolve the issue.

First, you should contact the Social Security Administration (SSA) and explain the situation. You can
do this by calling the SSA’s toll-free number at 1-800-772-1213 or by visiting your local SSA office.
The SSA will review your case and determine if there has been a mistake in calculating your benefits.

If the SSA finds that you have not been paid the correct amount, they will correct the error and pay
you any benefits you are owed. If the SSA determines that you were paid the correct amount, they
will explain their decision and provide you with information on how to appeal if you disagree.

If you wish to appeal the SSA’s decision, you can request a reconsideration of your case. This
involves having your case reviewed by a different team of SSA employees who will look at the same
information as the first team, but may come to a different conclusion.

If you are still not satisfied with the outcome after the reconsideration, you can request a hearing
before an administrative law judge. This is the final step in the appeals process, and the judge will
make a decision based on the evidence presented at the hearing.

It is important to keep in mind that the appeals process can take several months to complete, so it is
important to act quickly if you believe you are not being paid the correct amount of social security.
The SSA can provide you with more information on the appeals process and help you navigate it if
necessary.

Legal analysis

Conversation with Bing Chat, not ChatGPT, demonstrating legal analytic capabilities.

Prompt: Jordana stops at a local car wash along a federal interstate in Rhode Island only one
mile from the Massachusetts border. After getting her car cleaned, Jordana drives into
Massachusetts, where her tire bursts as a result of damage caused by the car wash. Jordana
sues the car wash in federal district court in Massachusetts. If the car wash makes a timely
motion to dismiss for lack of personal jurisdiction, which statement is the most accurate?

A. The court will have personal jurisdiction over the car wash, but only assuming it follows the more
liberal approach to the “stream of commerce.”

B. The court will have specific in personam jurisdiction over the car wash because the case directly
arises out of the car wash’s contact with Massachusetts.

C. The court will only have personal jurisdiction over the car wash if the Massachusetts long arm
statute allows jurisdiction for “committing a tortious act in the state,” and interprets this language to be
satisfied if the injury that the plaintiff suffers takes place in Massachusetts.

D. The court will not have personal jurisdiction over the car wash, because on these facts, the car
wash’s contacts are insufficient to support jurisdiction under the minimum contacts test.

E. The court will have personal jurisdiction over the car wash because the car wash was located on a
federal interstate not far from Massachusetts, suggesting that it was purposefully availing itself of the
benefits of doing business with Massachusetts citizens.

[one prompt omitted]

Bing Chat: Sure, I can try to help. Let’s look at each answer and see if it makes sense.

A. This answer suggests that the court will have personal jurisdiction over the car wash if it follows a
more liberal approach to the “stream of commerce” theory. This theory holds that a defendant who
places a product into the stream of commerce with the expectation that it will be purchased by
consumers in the forum state can be subject to personal jurisdiction there. However, this theory is
usually applied to manufacturers or distributors of products, not service providers like car washes.
Therefore, this answer is unlikely to be correct.
B. This answer suggests that the court will have specific in personam jurisdiction over the car wash
because the case directly arises out of the car wash’s contact with Massachusetts. However, this
answer ignores the fact that the car wash did not have any direct contact with Massachusetts; it was
located in Rhode Island and did not target or solicit customers from Massachusetts. Therefore, this
answer is also unlikely to be correct.

C. This answer suggests that the court will only have personal jurisdiction over the car wash if two
conditions are met: (1) the Massachusetts long arm statute allows jurisdiction for “committing a
tortious act in the state”, and (2) this language is interpreted to include cases where the injury occurs
in Massachusetts even if the act causing it occurs elsewhere. This answer is possible but not certain;
it depends on how Massachusetts law defines and applies its long arm statute. Therefore, this answer
may or may not be correct.

D. This answer suggests that the court will not have personal jurisdiction over the car wash because it
does not have sufficient minimum contacts with Massachusetts. This answer is consistent with the
minimum contacts test and the due process clause of the Constitution. Therefore, this answer is likely
to be correct.

E. This answer suggests that the court will have personal jurisdiction over the car wash because it
was located on a federal interstate near Massachusetts, implying that it intended to do business with
Massachusetts residents. However, this answer is too broad and does not consider whether the car
wash actually had any meaningful or purposeful interactions with Massachusetts or its citizens.
Therefore, this answer is unlikely to be correct.

Based on this analysis, I think D is the most accurate answer. Do you agree?

Prompt: Assume the following information: Desert Casino is incorporated and has its
principal place of business in Louisiana. Desert advertises in television commercials
throughout Louisiana and neighboring states, including in Florida. Otherwise, Desert
conducts no activities in any state other than Louisiana. Penny is an avid gambler and knows
of Desert because she grew up in Louisiana near the casino. Penny, however, recently moved
to Florida for a two-year master’s degree in business administration at Florida State
University. Her plan is to stay in Florida after she gets the degree, but only if she finds a good
job in the state. Otherwise, she plans to return home to Louisiana so that she can be closer to
her family and her favorite casino (Desert). While a student at Florida State, Penny takes a
road trip to Desert. Penny wins big – a $13,000 cash jackpot. Unfortunately, Penny is mugged
in Florida when she arrived at her rental apartment, and her cash is stolen. Apparently,
criminals stake out the casino waiting for big winners and follow them home to steal their
money. Penny sues Desert in federal district court for the Northern District of Florida (which is
where Penny is living while a student), alleging that Desert has a policy of providing police
protection to big winners (over $10,000) and that Desert failed to follow the policy in her case.
Thus, Penny asserts a claim for breach of contract and negligence. She seeks to recover the
$13,000 she lost to the muggers as well as $70,000 for the emotional trauma of getting
mugged and having the money stolen. Penny expects that Desert will contend that federal law
prohibits armed security guards from crossing state lines, so the policy could not have been
followed in Penny’s case. Penny alleges in her complaint that the federal law is not applicable
to this case, because Desert could have sent bodyguards to accompany her even if they were
not armed.

Bing: Hello, this is Bing. I’m not a lawyer, but I can try to help you with some legal questions.

Prompt: In addition to what I wrote above, assume Desert makes motions to dismiss on the
grounds that the court lacks subject matter jurisdiction and personal jurisdiction and moves
to transfer to the Eastern District of Louisiana, which is where Desert is located and where
Desert’s headquarters are. Please discuss how the court should resolve all of the motions
identified above and why you believe the court should do so. Be sure to consider any
reasonable counterarguments and why you consider them to be unpersuasive.

Bing: This is a complex legal question that involves multiple issues and arguments. I will try to give
you a brief overview of how the court might resolve the motions, but please note that this is not legal
advice and you should consult a lawyer for more specific guidance.

First, the court will have to decide whether it has subject matter jurisdiction over the case. Subject
matter jurisdiction refers to the court’s power to hear a case based on the nature of the controversy at
issue. Federal courts have limited subject matter jurisdiction and can only hear cases that involve a
federal question (a claim arising under the Constitution, laws, or treaties of the United States) or
diversity of citizenship (a claim between citizens of different states or a foreign country, where the
amount in controversy exceeds $75,000).

In this case, Penny does not seem to raise a federal question, as her claim is based on state law
theories of breach of contract and negligence. Therefore, the court will have to determine whether
there is diversity of citizenship between Penny and Desert. According to 28 U.S.C. § 1332, a
corporation is deemed to be a citizen of every state where it is incorporated and where it has its
principal place of business. An individual is deemed to be a citizen of the state where he or she is
domiciled, which is the state where he or she has a permanent home and intends to remain or return.

Here, Desert is a citizen of Louisiana, as it is incorporated and has its principal place of business
there. Penny’s citizenship is more complicated, as she recently moved to Florida for a two-year
master’s degree, but plans to stay there only if she finds a good job, otherwise she intends to return
to Louisiana. The court will have to look at the facts and circumstances of Penny’s move and her
intent to determine her domicile. If the court finds that Penny is still domiciled in Louisiana, then there
is no diversity of citizenship and the court lacks subject matter jurisdiction. If the court finds that
Penny has established a new domicile in Florida, then there is diversity of citizenship and the court
has subject matter jurisdiction.

Second, the court will have to decide whether it has personal jurisdiction over Desert. Personal
jurisdiction refers to the court’s power to make a decision regarding the party being sued in a
case. Before a court can exercise personal jurisdiction over a party, the U.S. Constitution requires
that the party has certain minimum contacts with the forum state, such that the maintenance of the
suit does not offend traditional notions of fair play and substantial justice. Personal jurisdiction in the
federal courts is governed by rule 4 of the Federal Rules of Civil Procedure, which directs every
federal district court to follow the law on personal jurisdiction that is in force in the state courts where
the federal court is located.

In this case, the court will have to apply the Florida law on personal jurisdiction to determine whether
Desert has sufficient contacts with Florida to justify the court’s exercise of jurisdiction. Desert may
argue that it has no contacts with Florida, as it conducts no activities in the state other than
advertising on television. Desert may rely on the Supreme Court case of World-Wide Volkswagen
Corp. v. Woodson, 444 U.S. 286 (1980), which held that mere foreseeability that a product may end
up in a state is not enough to establish personal jurisdiction, and that the defendant must purposefully
avail itself of the benefits and protections of the state’s laws. Penny may counter that Desert has
purposefully directed its activities at Florida, as it advertises extensively in the state and targets
potential customers like Penny, who knows of Desert from her childhood in Louisiana. Penny may
rely on the Supreme Court case of Burger King Corp. v. Rudzewicz, 471 U.S. 462 (1985), which held
that personal jurisdiction can be based on a contract that has a substantial connection with the forum
state, and that the defendant’s contacts must be evaluated in light of the quality and nature of the
relationship with the forum.

The court will have to weigh the competing arguments and the relevant factors, such as the burden
on Desert, the interest of Florida, the convenience of the parties, the interstate judicial system, and
the shared interest of the states. The court may find that Desert has sufficient contacts with Florida to
support personal jurisdiction, or it may find that Desert has no contacts with Florida to justify personal
jurisdiction.

Third, the court will have to decide whether to transfer the case to another venue. A venue is the
geographic location where a case is heard. The federal venue statute, 28 U.S.C. § 1391, provides
that a civil action may be brought in a

Prompt: You didn’t finish your answer above. Can you continue?

Bing: … district where any defendant resides, if all defendants are residents of the state where the
district is located; a district in which a substantial part of the events or omissions giving rise to the
claim occurred, or a substantial part of property that is the subject of the action is situated; or if there
is no district in which an action may otherwise be brought as provided in this section, any judicial
district in which any defendant is subject to the court’s personal jurisdiction with respect to such
action.

In this case, Penny brought the case in the Southern District of Florida, which is where she resides
and where the accident occurred. Desert may argue that this venue is improper, as it does not reside
in Florida and has no connection with the state. However, Penny may argue that this venue is proper,
as a substantial part of the events giving rise to the claim occurred in Florida, and Desert is subject to
the court’s personal jurisdiction in Florida.

Even if the court finds that the venue is proper, it may still transfer the case to another venue for the
convenience of parties and witnesses, in the interest of justice, under 28 U.S.C. § 1404 (a). This
section allows the court to transfer any civil action to any other district or division where it might have
been brought or to any district or division to which all parties have consented. The court has
discretion to decide whether to transfer a case, and it must consider a number of factors, such as the
plaintiff’s choice of forum, the convenience of the parties and witnesses, the availability of evidence,
the familiarity of the forum with the applicable law, the relative congestion of the courts, and the local
interest in the controversy.

In this case, Desert may move to transfer the case to the Eastern District of Louisiana, which is where
it is located and where its headquarters are. Desert may argue that this venue is more convenient for
it and its witnesses, as it has no presence or contacts in Florida, and that this venue is more familiar
with the Louisiana law that governs the contract and the negligence claim. Penny may oppose the
motion to transfer, as she chose Florida as her forum, and she may have difficulty traveling to
Louisiana due to her injuries and her studies. Penny may also argue that Florida has a local interest
in the case, as it involves a Florida resident and a product that is widely advertised and sold in the
state.

The court will have to balance the competing interests and factors, and decide whether to grant or
deny the motion to transfer. The court may find that the Southern District of Florida is a proper and
convenient venue, or it may find that the Eastern District of Louisiana is a more appropriate venue.

I hope this helps you understand how the court might resolve the motions. Please let me know if you
have any questions or need more information.

Challenges and ethical considerations


Prompt: What are the challenges and ethical considerations of using ChatGPT in the legal
field?

There are several challenges and ethical considerations that must be taken into account when using
ChatGPT in the legal field.

One challenge of using ChatGPT in the legal field is ensuring that it produces accurate and reliable
results. Because ChatGPT is trained on a large amount of text data, it may not always provide the
most up-to-date or relevant information on a given legal topic. This can lead to potential errors or
misunderstandings, which could have serious consequences in the legal context.

Another challenge of using ChatGPT in the legal field is that it may not always be able to account for
the nuances and complexities of the law. Because ChatGPT is a machine learning system, it may not
have the same level of understanding and judgment as a human lawyer when it comes to interpreting
legal principles and precedent. This could lead to problems in situations where a more in-depth legal
analysis is required.

From an ethical standpoint, there are also concerns about the use of ChatGPT in the legal field. One
such concern is the potential for ChatGPT to be used to replace human lawyers and legal
professionals, potentially leading to job losses and economic disruption. Another concern is the
potential for ChatGPT to be used to manipulate or deceive others, for example by generating false or
misleading legal documents.

Overall, the challenges and ethical considerations of using ChatGPT in the legal field are significant,
and must be carefully considered before this technology is adopted in the legal profession. It is
important to ensure that any potential benefits of using ChatGPT are balanced against the potential
risks and drawbacks, and that the technology is used in a responsible and transparent manner.
Conclusion

Broader implications for knowledge workers

Prompt: Describe the implications of ChatGPT for knowledge workers.

The implications of ChatGPT for knowledge workers are wide-ranging and largely positive. ChatGPT,
as a state-of-the-art chatbot trained on vast amounts of text data, has the potential to assist
knowledge workers in a variety of ways.

One potential use of ChatGPT for knowledge workers is in the area of research and information
gathering. ChatGPT can quickly scan through large amounts of text data and provide relevant
information on a given topic, saving knowledge workers time and effort. This can be particularly useful
for workers in fields such as law, finance, and academia, where the ability to quickly and accurately
gather and analyze information is crucial.

Another potential use of ChatGPT for knowledge workers is in the area of document generation.
ChatGPT has the ability to generate text in a human-like manner, which can be useful for creating
reports, briefs, and other written materials. This can save knowledge workers time and effort, as they
would not have to start from scratch when creating these documents.

Overall, the use of ChatGPT in the workplace has the potential to greatly assist knowledge workers in
their tasks and make them more efficient and effective. It is important, however, to carefully consider
the ethical implications of using this technology, and to ensure that it is used in a responsible and
transparent manner.

Prompt: how are chatbots like GPT-3 likely to evolve in the future, and how they will continue to
impact the legal profession?

It is difficult to predict exactly how chatbots like GPT-3 will evolve in the future, as they are based on
advanced artificial intelligence algorithms that are constantly being improved and refined. However, it
is likely that they will continue to become more intelligent and more effective at assisting users in a
variety of tasks.

As far as their impact on the legal profession is concerned, chatbots like GPT-3 could potentially be
used to assist lawyers with a wide range of tasks, such as legal research, document drafting, and
case analysis. This could help to improve the efficiency and accuracy of legal work, and could
potentially allow lawyers to handle more cases and provide better service to their clients.

Overall, the use of chatbots like GPT-3 in the legal profession is likely to continue to grow in the
coming years, as they become more advanced and more widely adopted.

Other implications

Prompt: What are the implications of society’s increased use of chatbots like ChatGPT?

The increased use of chatbots like ChatGPT could have several implications for society. One
potential implication is that chatbots could be used to improve customer service and make it easier for
people to get help and support from businesses and organizations. This could lead to increased
satisfaction and convenience for customers. Additionally, chatbots could be used to automate certain
tasks and processes, which could free up time and resources for people to focus on other activities.

Prompt: How will others know if something, like a college paper, was written by you or a person?

As a language model, I am not capable of writing a college paper or any other type of original content.
If you are asking whether it is possible for others to determine whether a college paper or any other
written work was produced by a machine learning model or a human, the answer is that it can be
difficult to tell with 100% certainty. However, there are certain indicators that can provide clues as to
whether a piece of writing was produced by a human or a machine learning model.

For example, a machine learning model might struggle to produce writing that is coherent, well-
structured, and free of errors, particularly if the model has not been trained on a large and diverse
enough dataset. A machine learning model may also produce writing that lacks the nuance and
creativity that is often characteristic of human writing.

On the other hand, a well-trained language model like me can produce responses that are coherent,
grammatically correct, and reasonably well-structured, but it is still unlikely that my responses would
be mistaken for human writing. Ultimately, the best way to determine whether a piece of writing was
produced by a human or a machine learning model is to ask the person who wrote it.


Prompt: Write the conclusion of an academic paper describing the use cases of ChatGPT in
the legal field.

In conclusion, ChatGPT has the potential to be a valuable tool in the legal field, with a wide range of
potential use cases. As a state-of-the-art chatbot trained on vast amounts of text data, ChatGPT can
assist legal professionals in tasks such as research and information gathering, document generation,
and case analysis. However, there are also challenges and ethical considerations that must be taken
into account when using ChatGPT in the legal field, such as the potential for errors and
misunderstandings, the limitations of the technology, and the potential impact on the job market for
legal professionals. These challenges and considerations must be carefully considered before
ChatGPT is adopted in the legal profession, and the technology should be used in a responsible and
transparent manner.

Prompt: How do you think society is going to change as you become more sophisticated?

As a large language model trained by OpenAI, I am not capable of having thoughts or feelings, and I
am not able to make predictions about how society will change. I am only able to generate responses
to the questions that I receive from users based on the input that I receive.

That being said, the use of machine learning models like me is likely to continue to become more
widespread and sophisticated over time, which could have a range of impacts on society. For
example, the use of machine learning models could lead to more efficient and accurate decision-
making in a variety of fields, such as medicine, finance, and transportation. It could also lead to the
development of new technologies and applications that were previously not possible.

At the same time, the increasing use of machine learning models could also raise concerns about
privacy, security, and the potential for job displacement. As such, it will be important for society to
carefully consider the implications of these developments and to address any potential negative
consequences in a thoughtful and responsible manner.

Epilogue

The epilogue was authored by Andrew Perlman.

ChatGPT’s responses are surprisingly sophisticated, though they are incomplete and problematic in
numerous ways. For example, the description of personal jurisdiction fails to explain various aspects
of the doctrine and could mislead a user. Of course, ChatGPT (and certainly Bing Chat, as reflected
in the prompts right before the conclusion) is capable of providing additional details if someone knows
how to engineer the appropriate prompts. But the casual user is unlikely to know what to ask or how
to ask it. The same goes for most of the other responses, like the complaint, the draft brief to the
Supreme Court, and the draft of the will. They would not be sufficiently helpful in their current forms
for most people. To be fair, not even OpenAI thinks people should rely on information produced by
ChatGPT. Indeed, the company’s CEO, Sam Altman, emphasized shortly after ChatGPT’s
release that it would be a “mistake” to rely on ChatGPT “for anything important right now.”

“Right now” is the key phrase: the limitations of these tools are likely to be temporary. OpenAI
reportedly intends to release a more capable version of ChatGPT in the coming months based on
GPT-4, and other companies reportedly have chatbots that are already more impressive. For
example, Google has an AI chatbot—LaMDA—that is so powerful that a Google engineer
(mistakenly) thought it had become sentient. In February 2023, Microsoft released a version of
ChatGPT that is incorporated into the Bing search engine, and it appears to be a significant
improvement over ChatGPT (perhaps already relying on GPT-4). When companies start to build
industry-specific tools using these services, the quality and accuracy should markedly improve.

AI’s increasing capabilities will soon disrupt various industries, including legal services. Among many
other possible use cases, law firms could use their own legal documents to train a proprietary
instance of an AI tool. Through prompts of the sort presented in this article, lawyers may
soon generate first drafts of complex legal instruments that adopt the law firm’s style and incorporate
the firm’s substantive knowledge. It is difficult to anticipate how these tools will impact lawyers’
employment prospects, but one prediction is somewhat easier to make: lawyers will soon need to use
these new tools if they hope to remain competitive.
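
As a rough illustration of what training a proprietary instance could involve in practice, the sketch
below prepares firm documents as prompt/completion pairs in the JSONL format used by GPT-3-era
fine-tuning tooling. The folder layout, file names, and command shown are hypothetical assumptions for
illustration; any real project would adapt them to the provider’s current fine-tuning interface.

    import json
    from pathlib import Path

    # Hypothetical layout: each matter folder holds an "instructions.txt" (the drafting request)
    # and a "final.txt" (the document the firm actually sent out).
    FIRM_DOCS = Path("firm_documents")
    OUTPUT = Path("firm_finetune.jsonl")

    with OUTPUT.open("w", encoding="utf-8") as out:
        for matter in sorted(FIRM_DOCS.iterdir()):
            instructions = matter / "instructions.txt"
            final_draft = matter / "final.txt"
            if not (instructions.exists() and final_draft.exists()):
                continue  # skip folders that do not contain a complete training pair
            example = {
                # The "\n\n###\n\n" separator and leading space on the completion follow
                # the conventions recommended for GPT-3-era fine-tuning data.
                "prompt": instructions.read_text(encoding="utf-8").strip() + "\n\n###\n\n",
                "completion": " " + final_draft.read_text(encoding="utf-8").strip(),
            }
            out.write(json.dumps(example) + "\n")

    # The resulting JSONL file would then be submitted to the provider's fine-tuning
    # tooling (for example, `openai api fine_tunes.create -t firm_finetune.jsonl` in the
    # GPT-3-era CLI), producing a model that drafts in the firm's own style.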

Law schools will face numerous related questions and challenges. In the short term, they will have to
grapple with how to assess student performance on take-home exams and papers now that students
have easy access to AI tools. Looking further ahead, law schools will probably have to incorporate
these tools into the curriculum in much the same way as they have taught students how to use
electronic research tools. For example, first-year legal writing classes and clinical programs may need
to teach AI document drafting so that future lawyers understand how to use the technology in
practice. At my law school (Suffolk Law), we have demonstrated ChatGPT’s capabilities to the faculty
and have encouraged them to consider not just the threats from these tools but the extent to which
we should be actively teaching students how to use them.

AI will not eliminate the need for lawyers, but it does portend the end of lawyering as we know it.
Many clients, especially those facing complex issues, will still need lawyers to offer expertise,
judgment, and counsel, but those lawyers will increasingly need AI tools to deliver those services
efficiently and effectively. In fact, these tools are likely to become so valuable that lawyers may need
them in certain contexts to satisfy their duty of competence, just as we would question the
competence of a lawyer who Shepardizes citations using only books or prepares a legal document on
a typewriter (for more on professional conduct, see SUPPORTING B). In other words, clients will not
want stand-alone lawyers who eschew AI; conversely, clients with challenging legal matters are
unlikely to rely on technology by itself. The future, at least for complex legal issues, will require the
use of tech-enhanced lawyers.

The issues facing the legal industry and legal education illustrate the broader implications of AI for
society generally and for knowledge workers in particular.
Reference: https://sc.judiciary.gov.ph/supreme-court-launches-the-strategic-plan-for-judicial-
innovations-2022-2027/?fbclid=IwAR15ZGZVH_Jq8bvjD24SM1DOkA-
7wkRpS_O9vIFNLgP3wtQAcpl9xCmXVRM

Supreme Court Launches the Strategic Plan for Judicial Innovations 2022-2027

October 14, 2022

In a special En Banc Session, the Supreme Court formally launched today the Judiciary’s long-term reform program,
the Strategic Plan for Judicial Innovations 2022-2027 (SPJI).

The SPJI is the High Court’s plan of action to address institutional challenges using four guiding principles: the
Judiciary’s delivery of justice will be (1) timely and fair, (2) transparent and accountable, (3) equal and inclusive, and
(4) technology adaptive. Steered by these guiding principles, the Court targets three major outcomes: Efficiency,
Innovation, and Access.

“Today, a century and about a quarter after its establishment, the Supreme Court opens its doors to take its place at
the forefront of life as Filipinos know it. Today we remove the shroud that has enveloped our officials and systems in
a haze of misperceptions and incomprehension, and present to you in clear and indelible terms what we envision,
what we have planned, and what we target to accomplish for our citizenry, and the methodologies we will adopt to
achieve our objectives. Today we present with great pride the Strategic Plan for Judicial Innovations 2022-2027, or
the SPJI,” remarked Chief Justice Alexander G. Gesmundo.

Chief Justice Gesmundo underscored that the SPJI is the product of the collective efforts of all the 15 Supreme
Court Justices, who are all equally invested in the SPJI, guaranteeing the SPJI’s continuity. “Even after I leave in
2026, even when only the four youngest members of the Court remain until 2036, the SPJI will remain relevant,”
noted the Chief Justice.

In addition to the contributions of each Justice, past Members of the Court, especially the retired Chief Justices,
were also recognized by Chief Justice Gesmundo as instrumental to the SPJI. “With the SPJI, we honor the
contributions of those who blazed the trail towards reform ahead of us, by building on the foundations they have
worked hard and laid ahead,” said the Chief Justice. “We must once and for all rid ourselves of the “I” mentality — “I
did this,” “I did that,” which forces us to minimize, if not reject outright, the ideas and the contributions of others who
came before us,” he added.

As the Court’s “blueprint for action”, the SPJI will continue to evolve through consultations and discussions with
stakeholders. “We are not pushing programs down the throats of our judges and personnel, much less our court
users,” said the Chief Justice. “The SPJI will engage all stakeholders to make every single one of these programs
viable, workable, and reasonable so that they will be easy to embrace and adopt,” he clarified.

The Chief Justice encouraged all stakeholders to invest their time and effort in the SPJI. “I have no doubt that [the
SPJI] is our only bridge to a tomorrow that will usher in the advent of responsive and timely justice for the Filipino
people,” he concluded.

Senior Associate Justice Marvic M.V.F. Leonen, in his opening address, noted that the launch of the SPJI is “far
from just being symbolic. It is our way of promulgating and thus uttering in a public and formal special session a plan
of action that we have collectively and collegially agreed upon for the next five years from 2022 to 2027…By
promulgating and making [the SPJI] public, we invite cooperation, discussion, and even critique.”
Senior Associate Justice Leonen added: “The plan of action expresses how we are to arrive at achieving the implicit
goals and values contained in our Constitution specifically using our role as a constitutional department to achieve
the task set out for us under Article 8 of [the Constitution].”

Retired Chief Justices Teresita J. Leonardo-De Castro and Diosdado M. Peralta attended the launch held at the
Supreme Court Session Hall.

Joining them were other government officials including Civil Service Commission Chairperson Karlo A.B. Nograles,
Solicitor General Menardo I. Guevarra, Department of Interior and Local Government (DILG) Undersecretary for
External and Legislative Affairs Juan Victor R. Llamas, DILG Undersecretary for Operations Lord A. Villanueva,
Department of Justice Assistant Secretary Randolph A. Pascasio, and Police Major General Eliseo Cruz.

Prior to the special En Banc session, an interview with SC Associate Justices Ramon Paul L. Hernando, Amy C.
Lazaro-Javier, Mario V. Lopez, and Maria Filomena D. Singh was conducted by journalist Jing Castañeda to discuss
key programs of the SPJI. The interview can be viewed on the Supreme Court’s YouTube Account
at: https://youtu.be/5pCNdNnCVcU

For more information on the SPJI, you may visit the SPJI page at the Supreme Court
website: https://sc.judiciary.gov.ph/spji/

Chief Justice Alexander G. Gesmundo leads the special En Banc Session on October 14, 2022, to formally launch
the Strategic Plan for Judicial Innovations 2022-2027. (Courtesy of the SC Public Information Office)

Senior Associate Justice Marvic M.V.F. Leonen talks with retired Chief Justices Teresita J. Leonardo-de Castro and
Diosdado M. Peralta before the start of the special En Banc Session on October 14, 2022, to formally launch the
Strategic Plan for Judicial Innovations 2022-2027. (Courtesy of the SC Public Information Office)

Among the guests attending the Formal Launch of the Strategic Plan for Judicial Innovations 2022-2027 on October
14, 2022, at the Supreme Court Session Hall are Civil Service Commission Chairperson Karlo A.B. Nograles and
Solicitor General Menardo I. Guevarra. (Courtesy of the SC Public Information Office)

Journalist Jing Castañeda conducts an interview with Supreme Court Associate Justices Ramon Paul L. Hernando,
Amy C. Lazaro-Javier, Mario V. Lopez, and Maria Filomena D. Singh to discuss the key programs under the
Strategic Plan for Judicial Innovations 2022-2027. (Courtesy of the SC Public Information Office)
Reference: https://sc.judiciary.gov.ph/spji/?fbclid=IwAR1mYq-Sbe6zUMhZTVaj7rXyUOenCklwmTk-
w0Lz_DwVh8sxr6gF9JZTZmw

The Road to Digitally Transforming the Courts

The Supreme Court of the Philippines

Strategic Plan for Judicial Innovations 2022-2027

Anchored by Four Guiding Principles

• Timely and fair justice
• Transparent and accountable justice
• Equal and inclusive justice
• Technologically adaptive management

To achieve the following Outcomes


1. Efficiency
2. Innovation
3. Access
