
Artificial Intelligence and Ethics


Ethics and the dawn of decision-making machines
by JONATHAN SHAW Harvard Magazine JANUARY-FEBRUARY 2019 [EXCERPT]

Illustration by Taylor Callery

ON MARCH 18, 2018, at around 10 P.M., Elaine Herzberg was wheeling her bicycle
across a street in Tempe, Arizona, when she was struck and killed by a self-driving car.
Although there was a human operator behind the wheel, an autonomous system—
artificial intelligence—was in full control. This incident, like others involving
interactions between people and AI technologies, raises a host of ethical and proto-legal
questions. What moral obligations did the system’s programmers have to prevent their
creation from taking a human life? And who was responsible for Herzberg’s death? The
person in the driver’s seat? The company testing the car’s capabilities? The designers of
the AI system, or even the manufacturers of its onboard sensory equipment?

“Artificial intelligence” refers to systems that can be designed to take cues from their
environment and, based on those inputs, proceed to solve problems, assess risks, make
predictions, and take actions. In the era predating powerful computers and big data, such
systems were programmed by humans and followed rules of human invention, but
advances in technology have led to the development of new approaches. One of these is
machine learning, now the most active area of AI, in which statistical methods allow a
system to “learn” from data, and make decisions, without being explicitly programmed.
Such systems pair an algorithm, or series of steps for solving a problem, with a
knowledge base or stream—the information that the algorithm uses to construct a model
of the world.

Ethical concerns about these advances focus at one extreme on the use of AI in deadly
military drones, or on the risk that AI could take down global financial systems. Closer to
home, AI has spurred anxiety about unemployment, as autonomous systems threaten to
replace millions of truck drivers, and make Lyft and Uber obsolete. And beyond these
larger social and economic considerations, data scientists have real concerns about bias,
about ethical implementations of the technology, and about the nature of interactions
between AI systems and humans if these systems are to be deployed properly and fairly
in even the most mundane applications.

Consider a prosaic-seeming social change: machines are already being given the power to
make life-altering, everyday decisions about people. Artificial intelligence can aggregate
and assess vast quantities of data that are sometimes beyond human capacity to analyze
unaided, thereby enabling AI to make hiring recommendations, determine in seconds the
creditworthiness of loan applicants, and predict the chances that criminals will re-offend.

But such applications raise troubling ethical issues because AI systems can reinforce
what they have learned from real-world data, even amplifying familiar risks, such as
racial or gender bias. Systems can also make errors of judgment when confronted with
unfamiliar scenarios. And because many such systems are “black boxes,” the reasons for
their decisions are not easily accessed or understood by humans—and therefore difficult
to question, or probe.

Examples abound. In 2014, Amazon developed a recruiting tool for identifying software
engineers it might want to hire; the system swiftly began discriminating against women,
and the company abandoned it in 2017. In 2016, ProPublica analyzed a commercially
developed system that predicts the likelihood that criminals will re-offend, created to help
judges make better sentencing decisions, and found that it was biased against blacks.
During the past two years, self-driving cars that rely on rules and training data to operate
have caused fatal accidents when confronted with unfamiliar sensory feedback or inputs
their guidance systems couldn’t interpret. The fact that private commercial developers
generally refuse to make their code available for scrutiny, because the software is
considered proprietary intellectual property, is another form of non-transparency—legal,
rather than technical.

Meanwhile, nothing about advances in the technology, per se, will solve the underlying,
fundamental problem at the heart of AI, which is that even a thoughtfully designed
algorithm must make decisions based on inputs from a flawed, imperfect, unpredictable,
idiosyncratic real world.

Computer scientists have perceived sooner than others that engineering can’t always
address such problems post hoc, after a system has been designed. Despite notable
advances in areas such as data privacy (see “The Privacy Tools Project,” January-
February 2017), and clear understanding of the limits of algorithmic fairness, the
realization that ethical concerns must in many cases be considered before a system is
deployed has led to formal integration of an ethics curriculum—taught by philosophy
postdoctoral fellows and graduate students—into many computer-science classes at
Harvard. Far-reaching discussions about the social impact of AI on the world are taking
place among data scientists across the University, as well as in the Ethics and
Governance of AI Initiative launched by Harvard Law School’s Berkman Klein Center,
together with the MIT Media Lab. This intensifying focus on ethics originated with a
longtime member of the computer-science faculty…

Systemic Bias and Social Engineering


THE PROBLEM of fairness in autonomous systems featured prominently at the inaugural
Harvard Data Science Conference (HDSC) in October, where Colony professor of
computer science David Parkes outlined guiding principles for the study of data science
at Harvard: it should address ethical issues, including privacy (see “The
Watchers,” January-February 2017, page 56); it should not perpetuate existing biases;
and it should be transparent. But to create learning AI systems that embody these
principles can be hard. System complexity, when thousands or more variables are in play,
can make true understanding almost impossible, and biases in the datasets on which
learning systems rely can easily become reinforced.

There are lots of reasons why someone might want to “look under the hood” of an AI
system to figure out how it made a particular decision: to assess the cause of biased
output, to run safety checks before rollout in a hospital, or to determine accountability
after an accident involving a self-driving car.

Can you quickly navigate this simple decision tree? The inputs are: ICML (International Conference on Machine
Learning); 2017; Australia; kangaroo; and sunny. Assuming you have done it correctly, imagine trying to explain in
words how your decision to clap hands was reached. What if there were a million variables?
Courtesy of Finale Doshi-Velez

What might not be obvious is how difficult and complex such an inquiry can be.
Assistant professor of computer science Finale Doshi-Velez demonstrated by projecting
onscreen a relatively simple decision tree, four layers deep, that involved answering
questions based on five inputs (see a slightly more complex example, above). If executed
correctly, the final instruction was to raise your left hand. A few of the conference
attendees were able to follow along. Then she showed a much more complex decision
tree, perhaps 25 layers deep, with five new parameters determining the path down
through the tree to the correct answer—an easy task for a computer. But when she asked
if anyone in the audience could describe in words why they had reached the answer they
did, no one responded. Even when the correct path to a decision is highlighted, describing
the influence of complex interacting inputs on the outcome in layman’s terms is
extremely difficult. And that’s just for simple models such as decision trees, not modern
deep architectures with millions of parameters. Developing techniques to extract
explanations from arbitrary models—scalable systems with an arbitrary number of
variables, tasks, and outputs—is the subject of research in her lab.
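
To make the difficulty concrete, here is a minimal sketch, not Doshi-Velez's actual demo: the
feature names and the toy target rule are invented, and the point is only that even for a shallow
tree, the "explanation" of a single prediction is a chain of threshold tests rather than anything a
layperson would recognize as a reason.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy stand-in for the conference exercise; the five binary features and the
# target rule below are invented for illustration.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 5))
y = (X[:, 0] ^ X[:, 2]) & X[:, 4]                      # arbitrary "clap hands" rule
feature_names = ["conference_is_icml", "year_is_2017", "in_australia",
                 "saw_kangaroo", "weather_sunny"]

clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

# Walk the decision path for one input and print every test it passes through.
x = X[:1]
tree = clf.tree_
for node in clf.decision_path(x).indices:              # nodes visited, root to leaf
    if tree.children_left[node] == tree.children_right[node]:      # leaf node
        print("-> predict:", "clap" if clf.predict(x)[0] else "don't clap")
    else:
        f, t = tree.feature[node], tree.threshold[node]
        side = "<=" if x[0, f] <= t else ">"
        print(f"{feature_names[f]} = {x[0, f]}  ({side} {t:.1f})")
```

Even here, the printout is a list of comparisons, not a reason; with 25 layers, or with a deep
network's millions of parameters, there is no such list to print at all.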

Bias poses a different set of problems. Whenever there is a diverse population (differing
by ethnicity, religion, or race, for example), explained McKay professor of computer
science Cynthia Dwork during an HDSC talk about algorithmic fairness, an algorithm that
determines eligibility for, say, a loan, should treat each group the same way. But in
machine-learning systems, the algorithm itself (the step-by-step procedure for solving a
particular problem) constitutes only one part of the system. The other part is the data. In
an AI system that makes automated loan decisions, the algorithm component can be
unbiased and completely fair with respect to each group, and yet the overall result, after
the algorithm has learned from the data, may not be. “Algorithms don’t have access to
the ground truth” (computer lingo for veritas), Dwork explained. If there is bias in the
data used to make the decision, the decision itself may be biased.

There are ways to manage this problem. One is to select very carefully the applicant
attributes an algorithm is permitted to consider. (Zip codes, as well-known proxies for
race, are often eliminated.) But bias has a way of creeping back in through correlations
with other variables that the algorithm uses—such as surnames combined with
geographic census data.
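
One common check for this kind of leakage is to test how well the supposedly neutral features can
reconstruct the protected attribute that was removed. The sketch below uses invented feature names
and correlations; it illustrates the audit idea, not any lender's actual data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical applicant table: the zip-code column has been dropped, but two
# remaining features are (by construction) correlated with race.
rng = np.random.default_rng(1)
n = 5000
race = rng.integers(0, 2, n)                           # protected attribute
surname_score = race + rng.normal(0, 0.7, n)           # proxy via surname/census data
tract_income = 50 - 10 * race + rng.normal(0, 8, n)    # proxy via geography
credit_history = rng.normal(0, 1, n)                   # legitimate feature

X = np.column_stack([surname_score, tract_income, credit_history])
X_tr, X_te, r_tr, r_te = train_test_split(X, race, random_state=0)

# If a simple probe can recover race from the "neutral" features, deleting the
# sensitive column has not removed the bias pathway.
probe = LogisticRegression(max_iter=1000).fit(X_tr, r_tr)
auc = roc_auc_score(r_te, probe.predict_proba(X_te)[:, 1])
print(f"AUC for recovering race from remaining features: {auc:.2f}")
```

An AUC well above 0.5 means the protected attribute is still effectively present in the data, just
spread across proxies.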

Bias against groups can often be addressed through smart algorithm design, Dwork said,
but ensuring fairness to individuals is much harder because of a fundamental feature of
algorithmic decision making. Any such decision effectively draws a line—and as Dwork
pointed out, there will always be two individuals from different groups close to the line,
one on either side, who are very similar to each other in almost every way. And yet only
one will get a loan.

In some cases, correcting bias through system design may be an insufficient approach.
Consider a hiring system designed by McKay professor of computer science Yiling Chen
and graduate student Lily Hu ’15 to eliminate hiring bias against African Americans,
historically a disadvantaged group. As Hu puts it, “Algorithms, which are purely
optimization-driven tools, can inherit, internalize, reproduce, and exacerbate existing
inequalities. Say we have a labor-market disparity that persists without any sort of
machine-learning help, and then here comes machine learning, and it learns to re-inscribe
those inequalities.” Their solution, which uses tools from economics and sociology to
understand disparities in the labor market, pushes the thinking about algorithmic fairness
beyond computer science to an interdisciplinary, systems-wide view of the problem.

Chen works in social computing, an area of data science that emphasizes the effect of
human behavior on inputs to algorithms. Because humans are “self-interested,
independent, error-prone, and not predictable” enough to enable design of an algorithm
that would ensure fairness in every situation, she started thinking about how to take bias
out of the training data—the real-world information inputs that a hiring algorithm would
use.

She and Hu focused on the problem of implementing affirmative action in hiring. A
straightforward remedy to counteract the historical disadvantage faced by a minority
group would be, simply, to favor that group in employment decisions, all other things
being equal. (This might itself be deemed unfair to the majority group, but still be
considered acceptable until equity in hiring is attained.) But Chen and Hu then considered
the human element. Suppose many of the minority group’s members don’t go to college,
reasoning that “it’s expensive, and because of discrimination, even if I get a degree, the
chances of my getting a job are still low.” Employers, meanwhile, may believe that
“people from minority groups are less educated, and don’t perform well, because they
don’t try hard.” The point Chen and Hu make is that even though a minority-group
member’s decision not to attend college is rational, based on existing historical
unfairness, that decision reinforces employers’ preconceived ideas about the group as a
whole. This pattern of feedback effects is not just difficult to break—it is precisely the
sort of data pattern that an algorithm, looking at past successful hires and associating
them with college degrees, will reinforce.

The solution that Chen and Hu propose is not based on math alone: instead, it is social
engineering that uses an algorithm to change the ground truth. And it represents an
acknowledgment of just how difficult it can be to counteract bias in data. What the
researchers propose is the creation of a temporary labor market. Think of it, says Chen, as
an internship in which every job candidate must participate for two years before being
hired into the permanent workforce. Entry into the internship pool would be subject to a
simple “fairness constraint,” an algorithm that would require employers to choose interns
from minority and majority groups in representative numbers. Then, at the conclusion of
the internship, hiring from the pool of interns would be based solely on performance data,
without regard to group membership. Because the groups are equally talented at the
population level, explains Chen, the two groups eventually reach parity.
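
A toy simulation can show the mechanics of that argument. The sketch below is not Chen and Hu's
model; the group sizes, noise level, and 50 percent hiring cutoff are invented, and "employers pick
the strongest applicants within each group's quota" is a deliberate simplification.

```python
import numpy as np

rng = np.random.default_rng(7)

# Both groups draw talent from the same distribution, so any hiring gap must
# come from the selection process rather than from the applicants.
applicants = {"majority": 8000, "minority": 2000}
talent = {g: rng.normal(0, 1, n) for g, n in applicants.items()}

# Stage 1: the fairness constraint. Intern slots are filled in proportion to
# each group's share of the applicant pool.
total_slots = 1000
interns = {}
for g, n in applicants.items():
    slots = round(total_slots * n / sum(applicants.values()))
    interns[g] = np.sort(talent[g])[-slots:]           # strongest applicants per group

# Stage 2: permanent hiring looks only at internship performance (talent plus
# noise), with no reference to group membership.
performance = {g: t + rng.normal(0, 0.5, t.size) for g, t in interns.items()}
cutoff = np.quantile(np.concatenate(list(performance.values())), 0.5)
for g, p in performance.items():
    print(g, "hire rate from internship pool:", round(float((p >= cutoff).mean()), 2))
```

Because both groups enter the internship at the same selection rate and are then judged only on
performance, the hire rates out of the pool come out roughly equal, which is the parity Chen
describes.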

“What this paper’s trying to fight back against,” Hu explains, “is the perception—still
dominant within the machine-learning/AI community—that everything is fundamentally
an optimization problem, or a prediction problem, or a classification problem. And when
you do that—if you treat it in a standard machine-learning way—you will end up
reinforcing those inequalities.”

Hu was the teaching fellow for Barbara Grosz’s course in AI and ethics (co-taught with
philosophy fellow Jeffrey Behrends) last year. She says people need to understand that
the act of “building technologies, and the way that we implement them, are themselves
political actions. They don’t exist in a vacuum, as instrumental tools that sometimes are
good, sometimes are bad. I think that that’s a particularly naïve way of thinking of
technology.”

Whether the technology is meant to provide facial recognition to identify crime suspects
from video footage, or education tailored to different learning styles, or medical advice,
Hu stresses, “What we need to think about is how technologies embed particular values
and assumptions. Exposing that is a first step: realizing that it’s not the case that there are
some ethical questions, and some non-ethical questions, but really that, in everything we
design…there are always going to be normative questions at hand, every step of the
way.” Integrating that awareness into existing coursework is critical to ensuring that “the
world that we’re building, with ubiquitous technology, is a world that we want to live
in.”

Jonathan Shaw ’89 is managing editor of this magazine.


https://harvardmagazine.com/2019/01/artificial-intelligence-limitations

Is Ethical A.I. Even Possible?


[Video: What Does the Future Hold for A.I.? (1:41) Some of the top minds in tech and policy shared
their outlooks for artificial intelligence and its applications at The New York Times’s New Work
Summit/Leading in the Age of A.I. conference. Credit: Mike Cohen for The New York Times]

By Cade Metz New York Times March 1, 2019

HALF MOON BAY, Calif. — When a news article revealed that Clarifai was working with the
Pentagon and some employees questioned the ethics of building artificial intelligence that
analyzed video captured by drones, the company said the project would save the lives of civilians
and soldiers.

“Clarifai’s mission is to accelerate the progress of humanity with continually improving A.I.,”
read a blog post from Matt Zeiler, the company’s founder and chief executive, and a prominent
A.I. researcher. Later, in a news media interview, Mr. Zeiler announced a new management
position that would ensure all company projects were ethically sound.

As activists, researchers, and journalists voice concerns over the rise of artificial intelligence,
warning against biased, deceptive and malicious applications, the companies building this
technology are responding. From tech giants like Google and Microsoft to scrappy A.I. start-ups,
many are creating corporate principles meant to ensure their systems are designed and deployed
in an ethical way. Some set up ethics officers or review boards to oversee these principles.

But tensions continue to rise as some question whether these promises will ultimately be kept.
Companies can change course. Idealism can bow to financial pressure. Some activists — and
even some companies — are beginning to argue that the only way to ensure ethical practices is
through government regulation.

“We don’t want to see a commercial race to the bottom,” Brad Smith, Microsoft’s president and
chief legal officer, said at the New Work Summit in Half Moon Bay, Calif., hosted last week by
The New York Times. “Law is needed.”

The new ethics position at Clarifai never materialized. As this New York City start-up pushed
further into military applications and facial recognition services, some employees grew
increasingly concerned their work would end up feeding automated warfare or mass surveillance.
In late January, on a company message board, they posted an open letter asking Mr. Zeiler where
their work was headed.

Building ethical artificial intelligence is an enormously complex task. It gets even harder when
stakeholders realize that ethics are in the eye of the beholder. Credit John Hersey

A few days later, Mr. Zeiler held a companywide meeting, according to three people who spoke
on the condition that they not be identified for fear of retaliation. He explained that internal
ethics officers did not suit a small company like Clarifai. And he told the employees that Clarifai
technology would one day contribute to autonomous weapons.

Clarifai specializes in technology that instantly recognizes objects in photos and video.
Policymakers call this a “dual-use technology.” It has everyday commercial applications, like
identifying designer handbags on a retail website, as well as military applications, like
identifying targets for drones.

This and other rapidly advancing forms of artificial intelligence can
improve transportation, health care and scientific research. Or they can feed mass
surveillance, online phishing attacks and the spread of false news.

Discovering a Bias
As companies and governments deploy these A.I. technologies, researchers are also realizing that
some systems are woefully biased. Facial recognition services, for instance, can be
significantly less accurate when trying to identify women or someone with darker skin. Other
systems may include security holes unlike any seen in the past. Researchers have shown that
driverless cars can be fooled into seeing things that are not really there.

All this means that building ethical artificial intelligence is an enormously complex task. It gets
even harder when stakeholders realize that ethics are in the eye of the beholder.

As some Microsoft employees protest the company’s military contracts, Mr. Smith said that
American tech companies had long supported the military and that they must continue to do so.

“The U.S. military is charged with protecting the freedoms of this country,” he told the
conference. “We have to stand by the people who are risking their lives.”

Though some Clarifai employees draw an ethical line at autonomous weapons, others do not. Mr.
Zeiler argued that autonomous weapons will ultimately save lives because they would be more
accurate than weapons controlled by human operators. “A.I. is an essential tool in helping
weapons become more accurate, reducing collateral damage, minimizing civilian casualties and
friendly fire incidents,” he said in a statement.

Google worked on the same Pentagon project as Clarifai, and after a protest from company
employees, the tech giant ultimately ended its involvement. But like Clarifai, as many as 20 other
companies have worked on the project without bowing to ethical concerns.

After the controversy over its Pentagon work, Google laid down a set of “A.I. principles” meant
as a guide for future projects. But even with the corporate rules in place, some employees left the
company in protest. The new principles are open to interpretation. And they are overseen by
executives who must also protect the company’s financial interests.

“You functionally have situations where the foxes are guarding the henhouse,” said Liz
Fong-Jones, a former Google employee who left the company late last year.

In 2014, when Google acquired DeepMind, perhaps the world’s most important A.I. lab, the
company agreed to set up an external review board that would ensure the lab’s research would
not be used in military applications or otherwise unethical projects. But five years later, it is still
unclear whether this board even exists.

Google, Microsoft, Facebook and other companies have created organizations like the
Partnership on A.I. that aim to guide the practices of the entire industry. But these operations are
largely toothless.

Workers Speak Out


The most significant changes have been driven by employee protests, like the one at Google, and
pointed research from academics and other independent experts. After Amazon employees
protested the sale of facial recognition services to police departments and various academic
studies highlighted the bias that plagues these services, Amazon and Microsoft called for
government regulation in this area.

“People are recognizing there are issues, and they are recognizing they want to change them,”
said Meredith Whittaker, a Google employee and the co-founder of the AI Now Institute, a
research institute that examines the social implications of artificial intelligence. But she also told
the conference that this change was slow to happen, as the forces of capitalism continued to drive
these companies toward greater profits.

Employees at Clarifai worry that the same technological tools that drive facial recognition will
ultimately lead to autonomous weapons — and that flaws in these tools will open a Pandora’s
box of problems. “We in the industry know that technology can be compromised. Hackers hack.
Bias is unavoidable,” read the open letter to Mr. Zeiler.

Thousands of A.I. researchers from across the industry have signed a separate open letter saying
they will oppose autonomous weapons.

The Pentagon has said that artificial intelligence built by the likes of Google and Clarifai has not
been used for offensive purposes. And it is now building its own set of ethical principles,
realizing it needs the support of industry, which has snapped up most of the world’s top A.I.
researchers in recent years.

But many policy experts say they believe these principles are unlikely to hold any more
influence than those laid down at the big corporations, especially because the Pentagon is
motivated to keep pace with China, Russia and other international rivals as they develop similar
technology. For that reason, some are calling for international treaties that would bar the use of
autonomous weapons.

Is Regulation the Answer?


In their open letter, the Clarifai employees said they were unsure whether regulation was the
answer to the many ethical questions swirling around A.I. technology, arguing that the
immediate responsibility rested with the company itself.

“Regulation slows progress, and our species needs progress to survive the many threats that face
us today,” they wrote, addressing Mr. Zeiler and the rest of the company. “We need to be ethical
enough to be trusted to make this technology on our own, and we owe it to the public to define
our ethics clearly.”

But their letter did not have the desired effect. In the days after Mr. Zeiler explained that Clarifai
would most likely contribute to autonomous weapons, the employee who wrote the letter and
was originally tapped to serve as an ethics adviser, Liz O’Sullivan, left the company.

Researchers and activists like Ms. Whittaker see this as a moment when tech employees can use
their power to drive change. But they have also said this must happen outside tech companies as
well as within.

“We need regulation,” Ms. Whittaker said, before name-dropping Microsoft’s chief legal officer.
“Even Brad Smith says we need regulation.”

https://www.nytimes.com/2019/03/01/business/ethics-artificial-intelligence.html

Need for AI Ethicists Becomes Clearer as Companies Admit Tech’s Flaws
While the position is still rare and being defined, it’s likely to become much
more common.

Lines connected to thinkers, symbolizing the meaning of artificial intelligence. PHOTO: LIU
ZISHAN/SHUTTERSTOCK
By JOHN MURAWSKI THE WALL STREET JOURNAL MARCH 1, 2019

The call for artificial intelligence ethics specialists is growing louder as technology
leaders publicly acknowledge that their products may be flawed and harmful to
employment, privacy and human rights.

Software giants Microsoft Corp. and Salesforce.com Inc. have already hired ethicists to
vet data-sorting AI algorithms for racial bias, gender bias and other unintended
consequences that could result in a public relations fiasco or a legal headache. And a
number of universities are offering courses on AI ethics, not only to equip managers but
also to train a new cadre of aspiring AI ethicists.

While the position is still rare and being defined, it’s likely to become much more
common.

In January, consulting firm KPMG LLP listed five AI positions companies will need to
consider to be effective, including that of an AI ethicist. Brad Fisher, the author of the
KPMG blog post, said he identified the essential roles based on his consulting work for
corporate boards and hearing from directors who are reading news reports about AI
mishaps and are anxious to protect their companies from similar risk.

Companies will likely face increasing risk because of their AI systems. Microsoft and
Google parent Alphabet Inc. have started disclosing the risks inherent to AI in their
Securities and Exchange Commission filings, a move that some believe presages a standard corporate
practice.

“AI algorithms may be flawed,” Microsoft’s 2018 annual report states. “If we enable or
offer AI solutions that are controversial because of their impact on human rights, privacy,
employment, or other social issues, we may experience brand or reputational harm.”

Google had similar language in its most recent 10K, which was released in December.
“New products and services, including those that incorporate or utilize artificial
intelligence and machine learning, can raise new or exacerbate existing ethical,
technological, legal, and other challenges, which may negatively affect our brands and
demand for our products and services and adversely affect our revenues and operating
results.”

A Brookings Institution report in September advised businesses to hire AI ethicists and
create an AI review board, develop AI audit trails and implement AI training programs.
Tellingly, the Brookings report also advised organizations to create a remediation plan
for the eventuality that their AI technology does inflict social harm.

“As AI becomes more and more pervasive, the need to deal with these questions will
become more and more apparent,” said Mr. Fisher, a data and analytics expert at KPMG.
“Eventually there’s going to be more legislative and social pressure.”

Last year, for example, Amazon.com Inc. quashed an internal algorithm under
development for analyzing resumes from job applicants when it was discovered that the
program had developed a penchant for male job candidates and was downgrading
women, according to news reports.

Also last year, a study by the Massachusetts Institute of Technology and Stanford
University found that three commercially released facial-analysis programs were
successful in correctly identifying the gender of light-skinned men, but prone to
committing errors when guessing the gender of dark-skinned women.

A study commissioned last year by SAS Institute, Accenture Applied Intelligence and
Intel, and conducted by Forbes Insights, found that 74% of AI adoption leaders evaluate
their technology at least once a week for accuracy and reliability, and 43% of AI adoption
leaders said their organizations have procedures for overriding AI results that are
suspicious or questionable.

Microsoft is in the first wave of companies that have created a position to oversee the
ethics of AI, recently naming Mira Lane as director of ethics and society. Meanwhile, at
global consulting firm Accenture Ltd.’s Applied Intelligence platform, Accenture clients
are advised on AI ethics by Rumman Chowdhury, the consultancy’s global lead for
responsible AI.

San Francisco software company Salesforce in September named Kathy Baxter as its
architect of ethical AI practice. In a phone interview, Ms. Baxter said she spends her time
working with her company’s Einstein program that the sales team uses to develop leads
and make sales.

With degrees in applied psychology and engineering psychology, she divides her time
between working with developers to catch problems before they are encoded in Einstein
and advising Salesforce clients who use Einstein to make sure they are aware of how
their customer data can skew the AI algorithm.

When Einstein was being developed three years ago, according to Ms. Baxter, the AI
program identified the No. 1 predictor of a sales opportunity as a client having the name
John. But that was a reflection of the kinds of data the AI program was analyzing:
mainly, a preponderance of male customers named John.
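
A simple way to surface that kind of spurious signal is to fit an interpretable model and look at
which inputs carry the most weight. The sketch below uses invented features and data; the
"contact_named_john" column stands in for the pattern Ms. Baxter describes and has nothing to do
with Salesforce's actual system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical CRM-style features with historical wins skewed toward accounts
# whose contact happens to be named John.
rng = np.random.default_rng(3)
n = 4000
named_john = rng.integers(0, 2, n)
deal_size = rng.normal(0, 1, n)
prior_purchases = rng.integers(0, 5, n)
won = (0.8 * named_john + 0.3 * deal_size + 0.2 * prior_purchases
       + rng.normal(0, 1, n)) > 0.9

X = np.column_stack([named_john, deal_size, prior_purchases])
names = ["contact_named_john", "deal_size", "prior_purchases"]

model = LogisticRegression(max_iter=1000).fit(X, won)
for name, coef in sorted(zip(names, model.coef_[0]), key=lambda t: -abs(t[1])):
    print(f"{name:>20}: {coef:+.2f}")
# A heavy weight on a name feature is a red flag that the model has learned the
# makeup of the historical customer base, not anything about selling.
```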

Some of the most common categories by which customer data is ranked—age, race,
gender, religion, zip code—can lead to similar unintended consequences and errors of
judgment.

“Using AI can often highlight biases in your business processes that you might not have
even known existed,” she said.

Ms. Baxter predicted that AI ethicists, whether in title or in function, will become
increasingly important as AI becomes more widely adopted.

Ms. Baxter and other advocates for the AI ethicist position say such a role requires
someone who is technologically literate but essentially an outsider to computing,
preferably someone trained in the social sciences, anthropology, ethnography,
psychology and related humanistic disciplines.

Former IBM Watson general manager Manoj Saxena, who is executive chairman of AI
software developer CognitiveScale, is currently teaching an “Ethics in Artificial
Intelligence Design” course at the University of Texas, in Austin.

Mr. Saxena said he is preparing his students for an economy with human workers and
digital workers, both of whom have to be trained and supervised. He sees his teaching
role as not only preparing managers for an AI workforce but also training a new
generation of AI ethicists and AI auditors.

“I’m encouraging them to look at this as a career,” Mr. Saxena said. “A lot of brands are
going to be damaged by AI, and they will learn it the hard way.”
https://www.wsj.com/articles/need-for-ai-ethicists-becomes-clearer-as-companies-admit-techs-flaws-11551436200

Why Making AI Systems Fair Is So Complicated
It’s impossible to completely eliminate bias from AI, but experts say
businesses should commit to a definition of fairness and accept the trade-offs

An Anheuser-Busch InBev plant in Ukraine. The company uses an AI-driven compliance platform to detect
fraud in dozens of countries, deliberately ignoring employees’ behavior patterns to avoid flagging innocent
people. PHOTO: VINCENT MUNDY/BLOOMBERG NEWS
By John Murawski The Wall Street Journal April 23, 2019
The use of artificial intelligence is increasingly focusing attention on a slippery question:
What is fairness?

The AI industry for the past several years has been formulating and compiling definitions
of fairness that data scientists and businesses can apply to minimize bias in AI systems.
AI experts, however, stress that picking one definition over another will necessitate trade-
offs and compromises in accuracy and could clash with someone else’s notion of fairness.

Ultimately a definition of fairness determines which social groups should be protected
from bias, to what extent, and at whose expense, said Anand Rao, global artificial
intelligence lead at PricewaterhouseCoopers LLP.

“The reality of the situation is that invariably you’re going to be unfair to some group or
individual. You are redistributing harm,” said Mr. Rao, a data scientist who co-wrote a
paper published last week titled “What is fair when it comes to AI bias?”

One example is college admissions, where “having slightly different requirements for
students with disadvantaged backgrounds” is seen as necessary for achieving fairness, his
paper states, even though it means that wealthier students with qualifying scores may not
be admitted.

PricewaterhouseCoopers plans to offer clients a Responsible AI tool kit starting in June
that will include an antibias detection tool with 32 fairness settings—ways of weighing
information to achieve desired parity of outcomes for different social groups—for users
to choose from, Mr. Rao said.

The question of fairness has gained urgency because machines are increasingly making
life-changing decisions about who gets a loan, a mortgage, a job interview, or an
extended prison sentence. AI is also used for other decisions, such as who gets to see
which real-estate advertisements. In some cases AI algorithms, trained on historically
biased data, have shown a tendency to favor whites and males in identifying photographs,
ranking resumes and predicting recidivism.

When AI users select a definition of fairness that is important to them, the first step is to
decide which societal category—such as race, gender, religion, age, sexual orientation or
disability—should be protected, Mr. Rao said. In cases involving lending, housing and
hiring, those categories are identified by state and federal antidiscrimination law.

The PricewaterhouseCoopers AI tool can show analytic results for all 32 fairness settings,
allowing a company’s officials to make the final decision on what they believe comes
closest to meeting legal requirements and social expectations, Mr. Rao said. Human
involvement is essential to guarding against AI bias, he added.

Other software and services companies have already adopted definitions of fairness. An
antibias tool developed last year by Alphabet Inc.’s Google comes preloaded with five
fairness criteria that users can choose from.

In the first fairness measure, qualifications for loans are assessed strictly on objective
merits, even if that means no women qualify. Version two requires that the qualifications
be lowered for women to compensate for historical biases. The third version says loan
awards should proportionately match the applicant pool, so that if 30% of applicants are
women, then women should account for 30% of approved loans. Version four says the
same percentage of men and women who are likely to succeed at repaying loans should
get loans. Version five says that fairness is achieved when the “percentage of times [the
system is] wrong in the total of approvals and denials is the same for both genders.”
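
As a rough illustration of how some of these criteria differ in practice, the sketch below computes
per-group approval rates, true-positive rates, and error rates on invented loan data. The mapping
onto the What-If Tool's named settings is approximate, corresponding loosely to the third, fourth,
and fifth options above.

```python
import numpy as np

def fairness_report(y_true, y_pred, group):
    """Per-group approval rate, true-positive rate, and error rate for a
    binary decision (y_true = would repay, y_pred = approved)."""
    report = {}
    for g in np.unique(group):
        m = group == g
        report[g] = {
            "approval_rate": y_pred[m].mean(),                       # demographic parity
            "true_positive_rate": y_pred[m & (y_true == 1)].mean(),  # equal opportunity
            "error_rate": (y_pred[m] != y_true[m]).mean(),           # error-rate parity
        }
    return report

# Invented data: the model approves a smaller share of creditworthy women.
rng = np.random.default_rng(5)
group = rng.choice(["men", "women"], 2000)
y_true = rng.integers(0, 2, 2000)
miss_rate = np.where(group == "women", 0.3, 0.1)
y_pred = ((y_true == 1) & (rng.random(2000) > miss_rate)).astype(int)

for g, stats in fairness_report(y_true, y_pred, group).items():
    print(g, {k: round(float(v), 2) for k, v in stats.items()})
```

Equalizing any one of the three numbers across groups generally leaves the others unequal, which is
why picking a definition is a policy choice rather than a purely technical one.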

The five options were posted online by David Weinberger, writer-in-residence at
Google’s People + AI Research initiative, whose researchers and designers created the
What-If tool. The post says defining fairness is “one of the hardest, most complex, and
most utterly human, questions raised by artificial intelligence systems.”

Some businesses, such as Anheuser-Busch InBev NV, are applying definitions of fairness
tailored to their needs.

AB InBev has developed an AI-driven compliance platform called Brewright that uses
machine learning to run risk-scoring audits of vendors to detect fraud and corruption in
dozens of countries, said Matt Galvin, vice president of ethics and compliance. The
company’s concept of fairness is deliberately blind to the behavior patterns of AB InBev
and vendor employees so that the algorithm doesn’t inadvertently flag innocent people,
Mr. Galvin said.

The audits focus on the size, speed and geographical location of transactions with
vendors, he said, but they don’t look at the attendance, absenteeism or emails of
individual employees.

Individual information is excluded from the data in the initial audit because of concerns
about privacy and other ethical issues, Mr. Galvin said. If patterns are identified as
suspicious, a further investigation could eventually zero in on employee behavior, he
said.

AB InBev is reluctant to use personal information even if that makes it harder for its
systems to detect problems. In the company’s approach to fairness, a potential loss of
accuracy is an accepted trade-off.

“A lot of AI, for very good reasons, tends to be couched as recommendations,” Mr.
Galvin said.

AB InBev is still assessing Brewright’s effectiveness, but one benefit is that it has
reduced the costs of internal investigations and related audits. Before introducing the
platform, a review performed manually in three countries cost AB InBev $1.8 million.
The automated version of the review replicated across six countries cost $200,000.
https://www.wsj.com/articles/why-making-ai-systems-fair-is-so-complicated-11556011800

Ethical and Legal Concerns Give Rise to AI Antibias Tools
New fairness tools are designed to combat AI bias, but they may require some
planning and prep before deployment

Accenture’s Fairness Tool is designed to detect biases in AI models. PHOTO: PAU BARRENA/AGENCE
FRANCE-PRESSE/GETTY IMAGES
By John Murawski The Wall Street Journal April 18, 2019
Fighting biases in artificial intelligence could get easier for businesses with the recent
availability of automated fairness tools that can check for bias creep in AI models.

The What-If Tool by Alphabet Inc.’s Google, the Fairness Tool from Accenture PLC and
Watson OpenScale from International Business Machines Corp. are among the products
that have been rolled out in the past year. They can help detect biases that develop when
AI models, feeding on skewed data, start treating women and minorities differently in
lending, hiring and advertising decisions.

However, even though these products are sometimes referred to as plug-in tools,
deploying AI to police AI may take work to implement, said Brandon Purcell, principal
analyst at Forrester Research Inc.

Antibias technology often requires companies to adopt ethical guidelines and governance
principles, said Rumman Chowdhury, the global lead for responsible AI at management
consulting firm Accenture. And to do that, companies may have to hire ethicists, delegate
these tasks to their legal or compliance department, or bring in outside consultants.

“It ain’t flip a switch and you’re golden,” said Darin Stewart, a research vice president at
Gartner Inc. “And it’s ongoing. As the system encounters more and more data that it’s
learning from, it could go off the rails.”

Still, plug-in antibias tools will become a common feature of AI technology in the next
two years as AI adoption spreads and awareness about potential bias puts public pressure
on AI users to rein in their algorithms, predicts Mr. Stewart.

“Most of them can be plugged into the machine learning pipeline in an organization so
testing and validation become standard operating procedure,” Mr. Stewart said. “You
need continual monitoring and evaluation.”

Antibias tools can test AI fairness in a number of ways. They can take sample batches of
data from AI decisions and switch the race or gender of people in those data samples.
Bias would be evident if fooling the algorithm alters the outcomes of the AI decisions
when income and other factors remain the same, said Ruchir Puri, the chief scientist at
IBM Research.
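
A minimal version of that switch-and-compare test might look like the following sketch. The loan
data and the stand-in model are invented; nothing here reflects IBM's or any vendor's actual
tooling.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical loan applications; historical approvals are biased against
# women even at identical income and debt levels.
rng = np.random.default_rng(11)
n = 3000
df = pd.DataFrame({
    "gender": rng.integers(0, 2, n),          # 0 = male, 1 = female
    "income": rng.normal(60, 15, n),
    "debt_ratio": rng.random(n),
})
approved = (df.income / 100 - df.debt_ratio - 0.15 * df.gender
            + rng.normal(0, 0.1, n)) > 0.2

model = RandomForestClassifier(random_state=0).fit(df, approved)

# Flip only the protected attribute, hold everything else constant, and count
# how many decisions change.
flipped = df.copy()
flipped["gender"] = 1 - flipped["gender"]
changed = model.predict(df) != model.predict(flipped)
print(f"Decisions that change when gender is flipped: {changed.mean():.1%}")
```

A sizable share of changed decisions shows the model is leaning on the protected attribute itself;
proxies, which do not change when the column is flipped, require the kind of group-level pattern
analysis described next.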

Another way is for fairness tools to analyze the pattern of AI decisions to make sure
algorithms don’t favor one group over another, or don’t violate antidiscrimination law by
rejecting or penalizing women, blacks, Hispanics and others more than they do white
males. A biased AI algorithm is one that turns down loan or job applicants from a
certain ZIP Code because the data it trained on is tainted with historical discrimination.

In cases where fairness isn’t governed by antidiscrimination laws, executives and boards
will have to decide how their organization defines fairness. It could be defined as using
the same standard for everyone, no matter if the results favor some groups over others. Or
the definition could be set to achieve diversity goals, and that would require deciding
which groups should be included, Mr. Stewart said.

“The overall message for companies is not only a duty to eliminate bias,” Mr. Purcell
said. “They actually need to define what fairness means for their companies.”

Companies also will have to put in place remediation protocols to fix problems as they
come up. There are a number of ways of fixing bias. One is cleaning up the data that is
used to train AI models. This is done by hiding evidence or clues about gender or race so
as to blind the algorithm to those factors, Mr. Purcell said. Another is overriding the
model to achieve diversity goals.

“The point of this stuff is to automate decisions, but you don’t have to automate them
all,” said Mike Gualtieri, a vice president and principal analyst at Forrester.

What’s more, businesses using AI also will have to decide how to balance fairness with
accuracy, because AI models can become less accurate as they become more fair, Mr.
Stewart said. The reason is that making AI conform to a standard of fairness can require
removing data to eliminate bias, and that could lead to a loss of precision, Mr. Gualtieri
said. However, fairness may align with profit by expanding a business’s pool of available
employees and customers, Mr. Purcell said.
https://www.wsj.com/articles/ethical-and-legal-concerns-give-rise-to-ai-antibias-tools-11555579801

Seeking Ground Rules for A.I.


It’s not easy to encourage the ethical use of artificial intelligence. But here are 10
recommendations.

Cade Metz New York Times March 1, 2019

About seven years ago, three researchers at the University of Toronto built a system that could
analyze thousands of photos and teach itself to recognize everyday objects, like dogs, cars and
flowers.

The system was so effective that Google bought the tiny start-up these researchers were only just
getting off the ground. And soon, their system sparked a technological revolution. Suddenly,
machines could “see” in a way that was not possible in the past.

This made it easier for a smartphone app to search your personal photos and find the images you
were looking for. It accelerated the progress of driverless cars and other robotics. And it
improved the accuracy of facial recognition services, for social networks like Facebook and for
the country’s law enforcement agencies.

But soon, researchers noticed that these facial recognition services were less accurate when used
with women and people of color. Activists raised concerns over how companies were collecting
the huge amounts of data needed to train these kinds of systems. Others worried these systems
would eventually lead to mass surveillance or autonomous weapons.

How should we, as a society, deal with these issues? Many have asked the question. Not
everyone can agree on the answers. Google sees things differently from Microsoft. A few
thousand Google employees see things differently from Google. The Pentagon has its own
vantage point.

This week, at the New Work Summit, hosted by The New York Times, conference attendees
worked in groups to compile a list of recommendations for building and deploying ethical
artificial intelligence. The results are included here.

But even the existence of this list sparked controversy. Some attendees, who have spent years
studying these issues, questioned whether a group of randomly selected people were the best
choice for deciding the future of artificial intelligence.

One thing is for sure: The discussion will only continue in the months and years to come.

The Recommendations
Transparency: Companies should be transparent about the design, intention and use of their A.I.
technology.

Disclosure: Companies should clearly disclose to users what data is being collected and how it is
being used.

Privacy: Users should be able to easily opt out of data collection.

Diversity: A.I. technology should be developed by inherently diverse teams.

Bias: Companies should strive to avoid bias in A.I. by drawing on diverse data sets.

Trust: Organizations should have internal processes to self-regulate the misuse of A.I. Have a
chief ethics officer, ethics board, etc.

Accountability: There should be a common set of standards by which companies are held
accountable for the use and impact of their A.I. technology.

Collective governance: Companies should work together to self-regulate the industry.

Regulation: Companies should work with regulators to develop appropriate laws to govern the
use of A.I.

“Complementarity”: Treat A.I. as a tool for humans to use, not a replacement for human work.

The leaders of the groups: Frida Polli, a founder and chief executive, Pymetrics; Sara Menker,
founder and chief executive, Gro Intelligence; Serkan Piantino, founder and chief executive,
Spell; Paul Scharre, director, Technology and National Security Program, The Center for a New
American Security; Renata Quintini, partner, Lux Capital; Ken Goldberg, William S. Floyd Jr.
distinguished chair in engineering, University of California, Berkeley; Danika Laszuk, general
manager, Betaworks Camp; Elizabeth Joh, Martin Luther King Jr. Professor of Law, University
of California, Davis; Candice Morgan, head of inclusion and diversity, Pinterest

https://www.nytimes.com/2019/03/01/business/ethical-ai-recommendations.html

Everyone’s talking about ethics in AI. Here’s what they’re missing
The rush toward ethical AI is leaving many of us behind.
BY S. A. APPLIN FAST COMPANY TECH FORECAST JUNE 14, 2019

The systems we require for sustaining our lives increasingly rely upon algorithms to
function. Governance, energy grids, food distribution, supply chains, healthcare, fuel,
global banking, and much else are becoming increasingly automated in ways that
impact all of us. Yet, the people who are developing the automation, machine
learning, and the data collection and analysis that currently drive much of this
automation do not represent all of us, and are not considering all of our needs equally.
We are in deep.

Most of us do not have an equal voice or representation in this new world order.
Leading the way instead are scientists and engineers who don’t seem to understand
how to represent how we live as individuals or in groups—the main ways we live,
work, cooperate, and exist together—nor how to incorporate into their models our
ethnic, cultural, gender, age, geographic or economic diversity, either. The result is
that AI will benefit some of us far more than others, depending upon who we are, our
gender and ethnic identities, how much income or power we have, where we are in the
world, and what we want to do.

This isn’t new. The power structures that developed the world’s complex civic and
corporate systems were not initially concerned with diversity or equality, and as these
systems migrate to becoming automated, untangling and teasing out the meaning for
the rest of us becomes much more complicated. In the process, there is a risk that we
will become further dependent on systems that don’t represent us. Furthermore, there
is an increasing likelihood that we must forfeit our agency in order for these complex
automated systems to function. This could leave most of us serving the needs of these
algorithms, rather than the other way around.

The computer science and artificial intelligence communities are starting to awaken to
the profound ways that their algorithms will impact society, and are now attempting to
develop guidelines on ethics for our increasingly automated world. The EU has
developed principles for ethical AI, as has the IEEE, Google, Microsoft, and other
countries and corporations. Among the more recent and prominent efforts is a set
of principles crafted by the Organization for Economic Co-operation and
Development (OECD), an intergovernmental organization that represents 37 countries
for economic concerns and world trade.

In various ways, these standards are attempting to address the inequality that results
from AI and automated, data driven systems. As OECD Secretary-General Angel
Gurría put it in a recent speech announcing the guidelines, the anxieties around AI
place “the onus on governments to ensure that AI systems are designed in a way that
respects our values and laws, so people can trust that their safety and privacy will be
paramount. These Principles will be a global reference point for trustworthy AI so that
we can harness its opportunities in a way that delivers the best outcomes for all.”

However, not all ethics guidelines are developed equally—or ethically. Often, these
efforts fail to recognize the cultural and social differences that underlie our everyday
decision making, and make general assumptions about both what a “human” and
“ethical human behavior” is. This is insufficient. “Whose ethical behavior?” is the
question that must drive AI, and all the other technologies that impact our decision
making, guidelines, and policies.

Indeed, when the companies themselves are quietly funding the research on AI ethics,
this question becomes even more critical. An investigation this month
by Spotlight and the New Statesman found that large tech companies may be stacking
the ethics deck in their favor by funding research labs, reporting that “Google has
spent millions of pounds funding research at British universities” including support of
the Oxford Internet Institute (OII), where a number of professors are “prolific public
commentators on ethical AI and ADM.” Even as one of the professors, Luciano
Floridi, serves on U.K. and EU governance ethics boards, Google funds OII and
others to research outcomes from those groups. It is a normal practice for corporations
to fund research, and these sources of funding are expected to be disclosed, but the
reporters found that some of these funding sources were not always detailed in the
groups’ research publications.

While their funding suggests that Google and other large tech companies are
“offshoring” ethics to research groups, the companies seem to have struggled to
incorporate ethics—and a deep understanding of the human outcomes of their
technologies—into their development cycles at home. Two phenomena in particular
may be contributing to this problem. The first is that computer science and
engineering, in industry and in education, have developed their processes around the
concept of what is often referred to as “first principles,” building blocks that can be
accepted as true, basic, and foundational in classical Western philosophy. In
cultivating “first principles” around AI ethics, however, we end up with a fairly
limited version of the “human.” The resultant ethics, derived from these centuries-old
contexts in early Western human history, lack the diversity in education, culture,
ethnicity and gender found in today’s complex world.

Because we are different, AI algorithms and automated processes won’t work equally
effectively for us worldwide. Different regions have different cultural models of what
constitutes sociability and thus, ethics. For example, in the case of autonomous
vehicles, AI will need more than just pragmatic “first principle” knowledge of “how
to drive” based on understanding the other machines on the road and the local laws
(which will differ sometimes by municipality). They will also need to take into
account the social actions of driving, and the ethical choices that each driver makes on
a daily basis based on their cultural framework and sense of sociability.

Reducing the rules around automation to regulations or laws cannot account for
unexpected scenarios, or situations when things go wrong. In the era of autonomous
vehicles, the entire road space cannot be controlled, and the actions within it cannot be
fully predicted. Thus the decision-making capabilities of any type of algorithm would
need to contain the multitudes of who we collectively all are. In addition to
accounting for random animals and on-the-road debris, AI will need frameworks to
understand each person (bicyclist, pedestrian, scooter rider etc.), as well as our
cultural and ethical positions, in order to cultivate the judgment required to make
ethical decisions.

IMPROVING THE ENGINEERING APPROACH TO AI BY ADDING THE SOCIAL SCIENCES
A second crucial factor limiting the development of robust AI ethics comes from
computer scientist Melvin Conway, PhD. Conway’s Law states that “organizations
which design systems are constrained to produce designs which are copies of the
communication structures of these organizations.” That is, if a team developing a
particular AI system is made up of similar types of people who rely on similar first
principles, the resulting output is likely to reflect that.

Conway’s Law exists within educational institutions as well. In training technology
students on ethics, institutions are mostly taking a Silicon Valley approach to AI
ethics by employing a singular cultural frame that reinforces older, white, male,
Western perspectives deployed to influence younger, male minds. This approach to
ethics could be described as DIY, NIH, and NIMBY—as in “do it yourself, not
invented here, not in my backyard”—which pushes for teaching selected humanities,
ethics, and social sciences to engineers within their companies or institutions, rather
than sending them to learn outside their academic disciplines or workplaces.

All of this means that the “ethics” that are informing digital technology are essentially
biased, and that many of the proposals for ethics in AI—developed as they are by
existing computer scientists, engineers, politicians, and other powerful entities—are
flawed, and neglect much of the world’s many cultures and ways of being. For
instance, a search of the OECD AI ethics guidelines document reveals no mention of
the word “culture,” but many references to “human.” Therein lies one of the problems
with standards, and with the bias of the committees who are creating them: an
assumption of what being “human” means, and the assumption that the meaning is the
same for every human.

This is where anthropology, the study of human social systems, history, and relations,
can contribute most strongly to the conversation. Unfortunately, social science has
largely been dismissed by technologists as an afterthought—if it’s considered at all. In
my own research, which included a summer in the Silicon Valley AI lab of a
multinational technology company, I have found that rather than hiring those with the
knowledge of complex social systems, technologists are attempting to be “self-taught,”
by taking adjunct courses or reducing non-engineer hires to cognitive scientists who
specialize in individual brain function.

These hires are often exclusively male, and often do not represent the range of
ethnicities and backgrounds of the broader population, nor do they deal with how
humans live and work: in groups. When asked about ethics, the vice president of the
AI lab where I worked told me, “If we had any ethics, they would be my ethics.”

An additional approach from technology companies has been to hire designers to
“handle” the “complex messy human,” but designers are not trained to deeply
understand and address the complexity of human social systems. They may
appropriate social science methods without knowledge of, or how to apply, the
corresponding theories necessary to make sense of collected data. This can be
dangerous because it is incomplete and lacks context and cultural awareness.
Designers might be able to design more choices for agency, but without knowing what
they are truly doing with regard to sociability, culture, and diversity, their solutions
risk being biased as well.

This is why tech companies’ AI labs need social science and cross-cultural research: It
takes time and training to understand the social and cultural complexities that are
arising in tandem with the technological problems they seek to solve. Meanwhile,
expertise in one field and “some knowledge” about another is not enough for the
engineers, computer scientists, and designers creating these systems when the stakes
are so high for humanity.

Artificial intelligence must be developed with an understanding of who humans are
collectively and in groups (anthropology and sociology), as well as who we are
individually (psychology), and how our individual brains work (cognitive science), in
tandem with current thinking on global cultural ethics and corresponding
philosophies and laws. What it means to be human can vary depending upon not just
who we are and where we are, but also when we are, and how we see ourselves at any
given time. When crafting ethical guidelines for AI, we must consider “ethics” in all
forms, particularly accounting for the cultural constructs that differ between regions
and groups of people—as well as time and space.

That means that truly ethical AI systems will also need to adapt dynamically as
we change. Consider how the ethics of the treatment of women in certain geographical
regions have changed as cultural mores have changed. It was only within the last
century that women in the United States were given the right to vote, and more
recently still for women of certain ethnicities. It has taken roughly as long for women
and for people of other ethnicities to be accepted widely in the workplace—and many still are
not. As culture changes, ethics can change, and how we come to accept these changes,
and adjust our behaviors to embrace them over time, also matters. As we grow, so
must AI. Otherwise, any “ethical” AI will be rigid, inflexible, and unable to
adjust to the wide span of human behavior and culture that is our lived experience.

If we want ethics in AI, let’s start with this “first principle”: Humans are diverse and
complex and live within groups and cultures. The groups and cultures creating ethical
AI ought to reflect that.

S. A. Applin, PhD, is an anthropologist whose research explores the domains of human agency,
algorithms, AI, and automation in the context of social systems and sociability. You can find more
at @anthropunk and PoSR.org.

https://www.fastcompany.com/90356295/the-rush-toward-ethical-ai-is-leaving-many-of-us-behind


Is it moral to imbue machines with consciousness?
By Curt Hopkins

Artificial intelligence has become ubiquitous, even if we don’t often recognize it. It’s now a
standard part of a car’s GPS software, customer service tools, and the online music
recommendations we come across; it’s even in our toys.
The development and adoption of the technology have been so rapid that what we can expect from
AI—or how soon we’ll get there—no longer seems clear. And it’s forcing us to confront a
question that hasn’t dogged previous computer-research efforts, namely: Is it ethical to develop
AI past the point of consciousness?
The proponents of AI call out the ability of self-regulating, intelligent machines to preserve
human life by going where we cannot safely go, from inside nuclear reactors to mines to deep
space. The detractors, however, who include a number of high-profile and influential figures,
assert that, improperly managed, AI could have serious unintended consequences, including,
possibly, the end of the human race.
To begin untangling this moral skein, and possibly sketch a path toward policies that could
guide us, we talked to five experts—a scientist, a philosopher, an ethicist, an engineer, and
a humanist—about the implications of AI research and our obligations as human developers.

AN ORIGINAL FILM
Moral Code: The Ethics of AI
Watch the video: https://www.youtube.com/watch?v=GboOXAjGevA [8:03]

FROM CREATING TOOLS TO CREATING PARTNERS


KIRK BRESNIKER: Chief architect for Hewlett Packard Labs
I don't think we’re in imminent danger of stumbling into a consciousness that we create
ourselves. We're more likely to fool ourselves into imagining that we have by simulating
behavior we associate with consciousness.
Is it ethical to continue this pursuit? I think it is.
I think that we have a stewardship role to fill as we move from machines being mere tools that
we use to amplify our capabilities to partners that we can use to complement our abilities as
conscious beings.

An artificial consciousness, crafted for purpose, that can go to areas, regions, or
environments in which we as humans are limited because of our biological nature, is something
we should be actively pursuing.
Systems based on non-biological substrates could exist in environments incompatible with
organic life: the vacuum and radiation of space, the hot zone of a pandemic, or the darkness and
pressure of a mid-ocean trench. If they incorporate materials that do not consume energy or
resources to maintain state (that is, to retain their stored inputs), they could offer us intelligences with a
completely new relationship to time, carrying us through the void to distant stars.
At some point, biologically-evolved systems crossed a threshold, and perhaps it wasn't a bright
line. Perhaps it slowly increased, perhaps it had to do not only with the individual, but with the
communities, and with the ability for those communities to pass on information not only
biologically in DNA, but also as culture, as learned understanding, as something that can be
accelerated by tools.
Unfortunately, we don't have reliable narrators to tell us how that happened. The only existing
case that we know is ourselves. It's hard to think about that second case study of artificial
consciousness.

THESE AWFUL UNKNOWN QUANTITIES


IRINA RAICU: Internet Ethics program director, Markkula Center for Applied Ethics at Santa
Clara University
The term “artificial intelligence” may actually hurt this debate, because I don't think we’re really
talking about intelligence. Intelligence is a holistic, broad-view understanding, drawing on all
manner of considerations, from genetic pre-programming to education to culture to personal stories.
I don’t think what we’re talking about with AI, for at least the near future, really has anything to
do with intelligence, let alone consciousness.
A philosopher colleague presented a slide that explores what she described as "superior artificial
moral intelligence," adding, in parentheses, "E.g. more consistent, better at predicting
consequences and at optimizing competing goods."
I would argue with part of that position. I don’t believe, for example, that consistency is
inherently good. We have laws, but we also have discretion, which allows human decision
makers to take all kinds of factors into consideration. So would a more consistent AI, unable to
exercise discretion, really be superior in terms of morality?
Assuming we could create an AI so complex that it matched some consensus definition of
consciousness, then what kind of rights would such an AI have? Would AI entities be just like
humans in the way we treat them? If they had exactly the same consciousness that we do, then I'd
be inclined to say yes. But would they really be just like us in every way? Or would their
morality have a more utilitarian character?
The Victorians were consumed with a kind of systematized moral algebra, which did not work
out well for the people to whom it was applied. As Charles Dickens put it, in Hard Times,
“Supposing we were to reserve our arithmetic for material objects, and to govern these awful
unknown quantities [i.e., human beings] by other means!” The Victorian times are far behind us,
but humans are still wonderfully unknown quantities whom even complex AI cannot fully map.

HOW WOULD A MACHINE HAVE THE MECHANISM TO SUFFER?
MIRANDA MOWBRAY: Lecturer in computer science at the University of Bristol
What is consciousness? Is a mosquito conscious? A bee? A dog? A dog is certainly sentient, and
can communicate.
We already breed dogs, changing their genetic and temperamental characteristics quite
considerably. We use them as our servants.
I don't think that we have an ethical obligation not to breed dogs to be better human companions.
I think we do have an ethical obligation to treat dogs humanely.
The question, for me, is: Can they suffer pain? If you had a conscious being that couldn't suffer, I
don't think that we would have ethical obligations towards it. It’s very hard to work out how a
machine would have the mechanism to suffer. And therefore it's more like an insect, which
doesn't have a central nervous system; you can pull off its legs and it doesn't seem to react. It's
not a nice thing to pull legs off flies, but I don't think it's an ethical problem in the same way as
hurting a dog.
There are some people who would say that pretty much anything sentient is conscious. Some
people argue passionately, for example, that chimpanzees are conscious and therefore we ought
to give them some aspects of ethical rights. Some people even say that bacteria are conscious.
The borderline between what different people say is conscious and what people say is not
conscious is really not well defined. And if you define it in a certain way, you end up saying a
smart thermostat is proto-conscious. So it's a kind of argument by extremes.
There’s nothing unethical about developing AI with consciousness, however you might define
that. However, I think there are things you should worry about. In order to make most AI work,
you need enormous amounts of data. And this requires the collection of that data with all its
potential threats to privacy. A large proportion of the usable data that AI will run on will be in
the hands of companies rather than governments or individuals, and AI built to use this data is
likely to be controlled by the companies that have the data, rather than having more democratic
control.
These things are inherent in the designers, not the AI. But I think we should be careful about
them.

I REJECT THE PREMISE


MARK BISHOP: Professor of cognitive computing, Goldsmiths, University of London

I reject the possibility of machine consciousness. It seems to me that you might as well have a
conversation about tooth fairies. I don't see any machine, now or ever, having any phenomenal
consciousness at all.
The Artificial Consciousness Test for machines rests on the notion that a test is required,
because if we attributed consciousness to a computational entity, it would potentially put other
ethical responsibilities onto us as interactors with that entity. For example, if a robot were
conscious, would it be ethical to send it into a place that could potentially be harmful to the
robot?
But I don't see any reason for believing that a machine could be at all conscious.
I think the very act of building a computer from individual logic gates gives you a very deep
insight as to what's going on in computers. You don't imagine there's anything supernatural
happening. I think people forget that what goes on in computers is simply forced modal
changes that occur due to particular changes in voltages. The computer has no more free will
than a lever. If I press down on one side of a lever, the other side comes up, and the lever can’t
elect to do anything about that. That's just the way it is.

THE INTERNAL STATE


JULIE CARPENTER: Research fellow, Ethics + Emerging Sciences Group
We're using a model of something that's human-like and trying to apply it to a machine. But that
model is problematic because how we regard machines and interact with them can change from
culture to culture and by situation.
AI sometimes cues us—via product design, like a human-like shape, or its capabilities, such as
natural language recognition—that it is different from other tech or tools we use.
Incorporating those human-like or animal-like qualities into AI elicits reactions in us: we
tend to interact with it as if it were a living thing.
We often inaccurately respond to it as if it had an “internal state,” possessing elements that
indicate consciousness: intelligence, emotions, motivations, self-awareness.
I believe we're in a cultural transition period that will last for 20-50 years. In this period, we’re
dealing with what I call an “accommodation dilemma,” in which we’re learning to live with technologies
that are smart and interact with us—sometimes responding or acting in human-like or animal-like
ways—but are not really living entities. During this period, we’re going to have to rapidly pay
attention to our tendency to attribute lifelike qualities to AI and robots, and form
policies to deal with all of these various ethical issues.
In the end, I don’t believe there will be machine consciousness in the way it is often described, as
something akin to human-like consciousness. It will be human-influenced, because we have
imparted to our AI our ideas about what that intelligence should look like.

THE ETHICS OF AI: TOOL, PARTNER OR MASTER?


https://www.labs.hpe.com/next-next/ai-ethics?pp=false&jumpid=ba_tyn8bw4u7z_aid-510390001
