
The ethics of artificial intelligence is the branch of the ethics of technology specific to artificially
intelligent systems.[1] It is sometimes divided into a concern with the moral behavior of humans as
they design, make, use and treat artificially intelligent systems, and a concern with the behavior of
machines, in machine ethics. It also includes the issue of a possible singularity due to
superintelligent AI.

Ethics fields' approaches


Robot ethics
Main article: Robot ethics
The term "robot ethics" (sometimes "roboethics") refers to the morality of how humans design,
construct, use and treat robots.[2] Robot ethics intersects with the ethics of AI. Robots are physical
machines, whereas AI can exist purely as software.[3] Not all robots function through AI systems and not
all AI systems are robots. Robot ethics considers how machines may be used to harm or benefit
humans, their impact on individual autonomy, and their effects on social justice.

Machine ethics
Main article: Machine ethics
Machine ethics (or machine morality) is the field of research concerned with designing Artificial
Moral Agents (AMAs), robots or artificially intelligent computers that behave morally or as though
moral.[4][5][6][7] To account for the nature of these agents, it has been suggested that certain
philosophical ideas be considered, such as the standard characterizations of agency, rational agency,
moral agency, and artificial agency, which are related to the concept of AMAs.[8]
Isaac Asimov considered the issue in the 1950s in his I, Robot. At the insistence of his editor John
W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems.
Much of his work was then spent testing the boundaries of his three laws to see where they would
break down, or where they would create paradoxical or unanticipated behavior. His work suggests
that no set of fixed laws can sufficiently anticipate all possible circumstances.[9] More recently,
academics and many governments have challenged the idea that AI can itself be held accountable.
[10] A panel convened by the United Kingdom in 2010 revised Asimov's laws to clarify that AI is
the responsibility either of its manufacturers, or of its owner/operator.[11]
In 2009, during an experiment at the Laboratory of Intelligent Systems at the École Polytechnique
Fédérale de Lausanne, Switzerland, robots that were programmed to cooperate with each other (in
searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each
other in an attempt to hoard the beneficial resource.[12]
Some experts and academics have questioned the use of robots for military combat, especially when
such robots are given some degree of autonomous functions.[13] The US Navy has funded a report
which indicates that as military robots become more complex, there should be greater attention to the
implications of their ability to make autonomous decisions.[14][15] The President of the
Association for the Advancement of Artificial Intelligence has commissioned a study to look at this
issue.[16] They point to programs like the Language Acquisition Device, which can emulate human
interaction.
Vernor Vinge has suggested that a moment may come when some computers are smarter than
humans. He calls this "the Singularity".[17] He suggests that it may be somewhat or possibly very
dangerous for humans.[18] This possibility is discussed by a philosophy called Singularitarianism. The
Machine Intelligence Research Institute has suggested a need to build "Friendly AI", meaning that
the advances which are already occurring with AI should also include an effort to make AI
intrinsically friendly and humane.[19]
There are discussions about creating tests to see if an AI is capable of making ethical decisions. Alan
Winfield concludes that the Turing test is flawed and the requirement for an AI to pass the test is too
low.[20] A proposed alternative, the Ethical Turing Test, would improve on the current test by having
multiple judges decide whether the AI's decision is ethical or unethical.[20]
In 2009, academics and technical experts attended a conference organized by the Association for
the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers,
and of the hypothetical possibility that they could become self-sufficient and able to make their own
decisions. They discussed the possibility and the extent to which computers and robots might be able
to acquire any level of autonomy, and to what degree they could use such abilities to pose a threat or
hazard. They noted that some machines have acquired
various forms of semi-autonomy, including being able to find power sources on their own and being
able to independently choose targets to attack with weapons. They also noted that some computer
viruses can evade elimination and have achieved "cockroach intelligence". They noted that self-
awareness as depicted in science fiction is probably unlikely, but that there were other potential
hazards and pitfalls.[17]
However, there is one technology in particular that could truly bring the possibility of robots with
moral competence to reality. In a paper on the acquisition of moral values by robots, Nayef Al-
Rodhan mentions the case of neuromorphic chips, which aim to process information similarly to
humans, nonlinearly and with millions of interconnected artificial neurons.[21] Robots embedded
with neuromorphic technology could learn and develop knowledge in a uniquely humanlike way.
Inevitably, this raises the question of the environment in which such robots would learn about the
world and whose morality they would inherit – or whether they would end up developing human
'weaknesses' as well: selfishness, a pro-survival attitude, hesitation, etc.
In Moral Machines: Teaching Robots Right from Wrong,[22] Wendell Wallach and Colin Allen
conclude that attempts to teach robots right from wrong will likely advance understanding of human
ethics by motivating humans to address gaps in modern normative theory and by providing a
platform for experimental investigation. As one example, this work has introduced normative ethicists
to the controversial issue of which specific learning algorithms to use in machines. Nick Bostrom and
Eliezer Yudkowsky have argued for decision trees (such as ID3) over neural networks and genetic
algorithms on the grounds that decision trees obey modern social norms of transparency and
predictability (e.g. stare decisis),[23] while Chris Santos-Lang argued in the opposite direction on
the grounds that the norms of any age must be allowed to change and that natural failure to fully
satisfy these particular norms has been essential in making humans less vulnerable to criminal
"hackers".[24]
According to a 2019 report from the Center for the Governance of AI at the University of Oxford,
82% of Americans believe that robots and AI should be carefully managed. Concerns cited ranged
from how AI is used in surveillance and in spreading fake content online (known as deep fakes
when they include doctored video images and audio generated with help from AI) to cyberattacks,
infringements on data privacy, hiring bias, autonomous vehicles, and drones that do not require a
human controller.[25]

Ethics principles of artificial intelligence


A review of 84 ethics guidelines for AI found 11 clusters of principles: transparency, justice and
fairness, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust,
sustainability, dignity, and solidarity.[26]
Luciano Floridi and Josh Cowls created an ethical framework for AI based on four principles of
bioethics (beneficence, non-maleficence, autonomy and justice) and an additional AI-enabling
principle: explicability.[27]
Transparency, accountability, and open source
Bill Hibbard argues that because AI will have such a profound effect on humanity, AI developers
are representatives of future humanity and thus have an ethical obligation to be transparent in their
efforts.[28] Ben Goertzel and David Hart created OpenCog as an open source framework for AI
development.[29] OpenAI is a non-profit AI research company created by Elon Musk, Sam Altman
and others to develop open-source AI beneficial to humanity.[30] There are numerous other open-
source AI developments.
Unfortunately, making code open source does not make it comprehensible, which by many
definitions means that the AI code is not transparent. The IEEE has a standardisation effort on AI
transparency that identifies multiple scales of transparency for different users.[31]
Further, there is concern that releasing the full capacity of contemporary AI to some organizations
may be a public bad, that is, do more damage than good. For example, Microsoft has expressed
concern about allowing universal access to its face recognition software, even for those who can
pay for it. Microsoft published a blog post on this topic, asking for government regulation to
help determine the right thing to do.[32]
Not only companies, but many other researchers and citizen advocates recommend government
regulation as a means of ensuring transparency, and through it, human accountability. This strategy
has proven controversial, as some worry that it will slow the rate of innovation. Others argue that
regulation leads to systemic stability more able to support innovation in the long term.[33] The
OECD, UN, EU, and many countries are presently working on strategies for regulating AI, and
finding appropriate legal frameworks.[34][35][36]
On June 26, 2019, the European Commission High-Level Expert Group on Artificial Intelligence
(AI HLEG) published its "Policy and investment recommendations for trustworthy Artificial
Intelligence".[37] This is the AI HLEG's second deliverable, after the April 2019 publication of the
"Ethics Guidelines for Trustworthy AI". The June AI HLEG recommendations cover four principal
subjects: humans and society at large, research and academia, the private sector, and the public
sector. The European Commission claims that "HLEG's recommendations reflect an appreciation of
both the opportunities for AI technologies to drive economic growth, prosperity and innovation, as
well as the potential risks involved" and states that the EU aims to lead on the framing of policies
governing AI internationally.[38] To prevent harm, in addition to regulation, organizations deploying
AI need to play a central role in creating and deploying trustworthy AI in line with these principles,
and to take accountability for mitigating the risks.[39]
Ethical challenges
Biases in AI systems
Main article: Algorithmic bias
Then-US Senator Kamala Harris speaking about racial bias in artificial intelligence in 2020
AI has become increasingly embedded in facial and voice recognition systems. Some of these systems
have real business applications and directly impact people. These systems are vulnerable to biases
and errors introduced by their human creators, and the data used to train them can itself be
biased.[40][41][42][43] For instance, facial recognition algorithms made by Microsoft, IBM
and Face++ all had biases when it came to detecting people's gender;[44] these AI systems were
able to detect the gender of white men more accurately than that of men with darker skin. Further, a 2020
study that reviewed voice recognition systems from Amazon, Apple, Google, IBM, and Microsoft found
that they had higher error rates when transcribing Black people's voices than white people's.[45]
Furthermore, Amazon terminated its use of AI for hiring and recruitment because the algorithm
favored male candidates over female ones. This was because Amazon's system was trained with
data collected over a 10-year period that came mostly from male candidates.[46]
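Disparities like those reported above are typically surfaced by disaggregating a system's error rate by demographic group rather than reporting a single aggregate figure. The sketch below uses entirely invented records to show the basic computation; the cited audits rely on large benchmark datasets rather than a handful of rows.

```python
# A minimal sketch with invented data: computing error rates per demographic
# group, the basic measurement behind the audits described above.
from collections import defaultdict

# (group, true_label, predicted_label) - hypothetical audit records
records = [
    ("group_a", "female", "female"),
    ("group_a", "male", "male"),
    ("group_a", "female", "female"),
    ("group_b", "female", "male"),   # misclassification
    ("group_b", "male", "male"),
    ("group_b", "female", "male"),   # misclassification
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, truth, prediction in records:
    totals[group] += 1
    errors[group] += (truth != prediction)

# Report the error rate separately for each group instead of one overall number.
for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%} ({errors[group]}/{totals[group]})")
```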
Bias can creep into algorithms in many ways. The most predominant view on how bias is
introduced into AI systems is that it is embedded within the historical data used to train the system.
For instance, Amazon's AI-powered recruitment tool was trained with its own recruitment data
accumulated over the years, during which time the candidates that successfully got the job were
mostly white males. Consequently, the algorithms learned the (biased) pattern from the historical
data and predicted that such candidates were most likely to succeed in getting the job. As a result,
the recruitment decisions made by the AI system were biased against female and minority
candidates. Friedman and Nissenbaum identify three
categories of bias in computer systems: preexisting bias, technical bias, and emergent bias.[47] In
natural language processing, problems can arise from the text corpus — the source material the
algorithm uses to learn about the relationships between different words.[48]
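The mechanism described above can be reproduced on synthetic data: when the training labels already encode a historical preference for one group, a model fitted to those labels carries the preference forward even for otherwise identical candidates. The sketch below uses made-up data and a plain logistic regression purely as an illustration; it does not describe Amazon's actual system.

```python
# A synthetic sketch of bias propagation: the historical labels favor group 0,
# so the fitted model scores an identical candidate from group 1 lower.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)        # 0 = historically favored group
skill = rng.normal(0, 1, n)          # equally distributed in both groups

# Historical hiring decisions reward skill but also penalize group 1.
hired = (skill - 1.0 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill, differing only in group membership:
# the group-1 candidate receives a markedly lower predicted hiring probability.
print(model.predict_proba([[0, 0.5], [1, 0.5]])[:, 1])
```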
Large companies such as IBM and Google have made efforts to research and address these biases.
[49][50][51] One solution for addressing bias is to create documentation for the data used to train
AI systems.[52][53] Process mining can be an important tool for organizations to achieve
compliance with proposed AI regulations by identifying errors, monitoring processes, identifying
potential root causes for improper execution, and other functions.[54]
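The documentation approach mentioned above amounts to recording, alongside the training data, how it was collected, which populations it covers, and what it should not be used for, so that downstream users can judge whether it fits their application. The field names in the sketch below are invented for illustration and do not follow any particular published template.

```python
# An illustrative sketch of dataset documentation: the field names are
# invented for this example and do not follow a specific published standard.
import json

datasheet = {
    "name": "customer_support_transcripts_v2",   # hypothetical dataset
    "collection_period": "2018-2022",
    "collection_method": "opt-in logging of chat sessions",
    "populations_covered": ["US English speakers"],
    "known_gaps": ["few non-native speakers", "text only, no audio"],
    "intended_uses": ["intent classification"],
    "uses_to_avoid": ["inferring demographic attributes"],
    "consent_and_licensing": "covered by terms of service, internal use only",
}

# Ship the documentation with the training data so that downstream users can
# assess whether the dataset is appropriate for their application.
print(json.dumps(datasheet, indent=2))
```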
The problem of bias in machine learning is likely to become more significant as the technology
spreads to critical areas like medicine and law, and as more people without a deep technical
understanding are tasked with deploying it. Some experts warn that algorithmic bias is already
pervasive in many industries and that almost no one is making an effort to identify or correct it.[55]
There are some open-source tools[56] developed by civil society organizations that seek to bring more
awareness to biased AI.
