First Law:

Idea: A robot may not injure a human being or, through inaction, allow a human being to come to
harm.

Layman's Terms: A robot should not harm people, and it must also prevent people from getting hurt
if it can do so.

Second Law:

Idea: A robot must obey the orders given to it by human beings, except where such orders would
conflict with the First Law.

Layman's Terms: Robots should follow our instructions, as long as doing so wouldn't harm
humans.

Third Law:

Idea: A robot must protect its own existence as long as such protection does not conflict with the
First or Second Law.

Layman's Terms: A robot should keep itself intact and functioning, as long as doing so doesn't
mean breaking the first two laws. Its self-preservation must never endanger humans.

These laws are fictional guidelines created by science-fiction author Isaac Asimov to explore the
ethical questions and potential issues arising from interactions between humans and intelligent
machines. In reality, no such laws govern robots, but they have influenced discussions about the
ethical development and deployment of artificial intelligence and robotics.