Intelligence
● “Success in creating AI would be the biggest event in human history.” – Stephen Hawking
● However, recognize that superintelligent AI isn’t guaranteed to be beneficial
How can AI be Dangerous?
The Intentional and Unintentional Negative Consequences
Intentional:
● Autonomous weapons could easily cause mass casualties
● May lead to war and more casualties
● May be designed to be extremely difficult to simply “turn off”

Unintentional:
● Fail to fully align the AI’s goals with ours
● The AI may accomplish the task in a catastrophic way
What Future Do You Want?
● Should we develop lethal autonomous weapons?
● Should we have a jobless society where everyone enjoys a life of
leisure and machine-produced wealth?
● Will we control intelligent machines or will they control us?
● Will intelligent machines replace us, coexist with us, or merge with
us?