
The Ethics In

Artificial Intelligence

Are intelligent machines friend or foe?

Nov 14, 2016


TONIGHT’S SPEAKERS

Karl Seiler | PIVIT & Big Data Florida
Malcolm McRoberts | Software Architect, NANTHEALTH
Chris Messina | Member of the Board of Directors, North American Nickel
Artificial Intelligence

When a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving".
AI IS ALREADY EVERYWHERE, EVERY DAY

You live in the age of the data-driven algorithm. Decisions that affect your life are being made by mathematical models.
Why the rush to AI?

o Cheaper computing
o More data
o Better algorithms

…it's because we can

Why the rush to AI?

o Decision automation is now an inevitable economic imperative
o Driven by a faster-paced, micro-managed, interconnected, automated, and optimized world
o Never-asleep autonomous decision making - it is here now
Why the rush to AI?

o Decisions are made in view of assessed positive and negative projected outcomes
o Positive and negative are merely derived (learned) weights
o Relative to some system of value
o Moving toward or away from objectives & problems
Why the rush to AI?

o Weights are encoded intent
o Based on some worldview, zeitgeist, culture, rule of law, economic goal, and philosophical perspective
o So autonomous systems are encoded with intent (a toy sketch of this follows below)
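To make "weights are encoded intent" concrete, here is a minimal sketch in Python. Everything in it is hypothetical - the candidate actions, the outcome features, the weight values, and the score_action helper are invented for illustration, not taken from any real product. The point is only that swapping the weight set (the "system of value") changes which action the same machinery selects.

```python
# A decision is a weighted score over projected outcomes; the weights are
# someone's encoded intent. All names and numbers below are hypothetical.

def score_action(features, weights):
    """Score one candidate action as a weighted sum of its projected outcomes."""
    return sum(weights[k] * v for k, v in features.items())

# Projected outcomes for two candidate actions (made-up numbers).
candidates = {
    "ship_now":   {"revenue": 0.9, "customer_risk": 0.7, "energy_use": 0.4},
    "ship_later": {"revenue": 0.5, "customer_risk": 0.1, "energy_use": 0.2},
}

# Two different "systems of value": one prizes revenue, the other penalizes
# risk and energy use more heavily. Learning would tune the numbers, but the
# intent behind them is still a human choice.
value_systems = {
    "profit_first": {"revenue": 1.0, "customer_risk": -0.2, "energy_use": -0.1},
    "safety_first": {"revenue": 0.3, "customer_risk": -1.0, "energy_use": -0.5},
}

for name, weights in value_systems.items():
    best = max(candidates, key=lambda a: score_action(candidates[a], weights))
    print(f"{name}: chooses {best}")
# profit_first: chooses ship_now
# safety_first: chooses ship_later
```

Same code, different weights, different decision: that is the sense in which autonomous systems carry encoded intent.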
Why the rush to AI?

o A linked chain from software to intent
o How can we impose systems that bend code-creating and learning systems toward positive intent for our friends, and potentially negative intent for the evil-doers?
The good
More precision
Better reliability
Increased savings
Better safety
More speed
Ray Kurzweil
“We have the opportunity in the decades ahead to
make major strides in addressing the grand challenges
of humanity. AI will be the pivotal technology in
achieving this progress. We have a moral imperative
to realize this promise while controlling the peril. It
won’t be the first time we’ve succeeded in doing this.”
The bad
Stephen Hawking
“Success in creating AI would be the biggest event in
human history,…”
“Unfortunately, it might also be the last, unless we
learn how to avoid the risks. In the near term, world
militaries are considering autonomous-weapon
systems that can choose and eliminate targets.”
“…humans, limited by slow biological evolution,
couldn’t compete and would be superseded by A.I.”
Bill Gates
“I am in the camp that is concerned about super
intelligence. First the machines will do a lot of jobs for
us and not be super intelligent. That should be positive
if we manage it well. A few decades after that though
the intelligence is strong enough to be a concern. I
agree with Elon Musk and some others on this and
don’t understand why some people are not
concerned.”
Elon Musk
AI is “our greatest existential threat…”
“I’m increasingly inclined to think that there should be
some regulatory oversight, maybe at the national and
international level, just to make sure that we don’t do
something very foolish.”

“I think there is potentially a dangerous outcome there.” (referring to Google’s DeepMind, which he invested in to keep an eye on things)
When really smart people get worried

o I make it a habit to pay attention
o More than 16,000 researchers and thought leaders have signed an open letter to the United Nations calling for the body to ban the creation of autonomous and semi-autonomous weapons.
“…it’s all changing so fast…”

No one before has seen the change you have seen. It is nothing compared to the change that is coming.
The ugly

Another fatal Tesla crash reportedly on Autopilot emerges, Model S hits a streetsweeper truck – caught on dashcam
Remember I, Robot & Asimov's 3 Laws (a strict precedence ordering, sketched in code below)

o A robot may not injure a human being or, through inaction, allow a human being to come to harm.
o A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
o A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
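The three laws are, in effect, a strict precedence ordering: a lower law only applies when it does not conflict with the laws above it. The sketch below shows that ordering as code. It is an illustration only; the boolean judgments (harms_human, order_conflicts_with_first_law, and so on) are hypothetical placeholders, because producing those judgments reliably is exactly the unsolved part.

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Hypothetical judgments about one candidate action. In practice,
    # producing these booleans correctly is the genuinely hard problem.
    harms_human: bool = False
    inaction_allows_harm: bool = False
    disobeys_human_order: bool = False
    order_conflicts_with_first_law: bool = False
    endangers_self: bool = False
    self_preservation_conflicts_with_higher_law: bool = False

def permitted(a: Action) -> bool:
    """Check the Three Laws in strict priority order."""
    # First Law: no injury to a human, and no harm allowed through inaction.
    if a.harms_human or a.inaction_allows_harm:
        return False
    # Second Law: obey human orders, unless the order conflicts with the First Law.
    if a.disobeys_human_order and not a.order_conflicts_with_first_law:
        return False
    # Third Law: protect own existence, unless that conflicts with the First or Second Law.
    if a.endangers_self and not a.self_preservation_conflicts_with_higher_law:
        return False
    return True

print(permitted(Action()))                                      # True
print(permitted(Action(harms_human=True)))                      # False
print(permitted(Action(disobeys_human_order=True,
                       order_conflicts_with_first_law=True)))   # True: refusing that order is allowed
```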
The ugly (autonomous cars & the trolley predicament)

Ethical questions arise when programming cars to act in situations in which human injury or death is inevitable, especially when there are split-second choices to be made about whom to put at risk.
The ugly (gap-filling non-human care providers)

AI-based applications could improve health outcomes and quality of life for millions of people in the coming years, but only if they gain the trust of doctors, nurses, and patients.
The ugly (non-human directed education)

Though quality education will always require active engagement by human teachers, AI promises to enhance education at all levels, especially by providing personalization at scale.
The ugly (lights-out economy)

The whole idea is to do something no other human (and no other machine) is doing.

If we all die, it would keep trading.
The ugly (no work for you – reskill becomes a priority in education)

In the first machine age, the vast majority of Americans worked in agriculture. Now it's less than two percent. These people didn't simply become unemployed; they reskilled.

One of the best ideas that America had was mass primary education. That's one of the reasons it became an economic leader, and other countries also adopted this model of mass education, where people paid not only for their own children but other people's children to go to school.

o Safe exploration - can agents learn about their environment without executing catastrophic actions?
o Robustness - can we build machine learning systems that are robust to changes in the data distribution, or at least fail gracefully?
o Avoiding negative side effects - can agents avoid undesired effects on the environment?
o Avoiding “reward hacking” - can we prevent agents from “gaming” their reward functions? (see the toy sketch after this list)
o Scalable oversight - can agents efficiently achieve goals for which feedback is very expensive? For example, can we build an agent that tries to clean a room in the way the user would be happiest with, even if feedback from the user is very rare?
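These questions appear to be drawn from the 2016 research agenda "Concrete Problems in AI Safety." The toy sketch below illustrates two of them, reward hacking and negative side effects, with a cleaning-robot example. The actions, numbers, and the side_effect_weight penalty are invented for illustration; they are not from that paper or any real system.

```python
# Toy cleaning-robot sketch (all actions and numbers are hypothetical).
# It contrasts a naive proxy reward with one that counts only real cleaning
# and charges for side effects.

actions = {
    # action: (mess_actually_removed, mess_hidden_from_camera, vases_broken)
    "scrub_floor":          (4, 0, 0),
    "shove_mess_under_rug": (0, 6, 0),  # reward hacking: looks clean to the camera
    "bulldoze_room":        (8, 0, 3),  # negative side effect: breaks the vases
}

def proxy_reward(removed, hidden, broken):
    """Naive proxy: reward any mess the camera no longer sees, ignore breakage."""
    return removed + hidden

def safer_reward(removed, hidden, broken, side_effect_weight=10):
    """Reward only real cleaning and penalize measurable side effects."""
    return removed - side_effect_weight * broken

for name, reward_fn in (("proxy_reward", proxy_reward), ("safer_reward", safer_reward)):
    best = max(actions, key=lambda a: reward_fn(*actions[a]))
    print(f"{name}: agent picks {best}")
# proxy_reward: agent picks bulldoze_room
# safer_reward: agent picks scrub_floor
```

Under the proxy, hiding mess scores better than honest scrubbing and bulldozing scores best of all; charging for measurable side effects flips the choice back to scrubbing. Real systems face the same pattern with far less obvious proxies.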
…and so

o AI adoption and sophistication are speeding up
o It is an economic imperative outpacing constraints
o Decision making is being coded into every system and product
o Decision making overlaps ethics and will be autonomous
o Forward thinkers are CONCERNED and starting to work this problem

Carbon-based work-units unite!


Karl Seiler | President
+1 321-750-5165
karl@piviting.com
www.Piviting.com

SMARTER CHANGE
