
Aldo Fernando Vilardy Roa

Machine rights.

Imagine a future where your toaster anticipates what kind of toast you want. During the day, it scans
the Internet for new and exciting types of toast. Maybe it asks you about your day and wants to
chat about new knowledge in toast technology. At what point would it become a person? At what point would you ask yourself whether your toaster has feelings? If it did, would unplugging it be murder?
And would you still own it? Will we someday be forced to give our machines rights?

Artificial Intelligence is already all around us. It makes sure discount stores are stocked with enough supplies, it serves you up just the right Internet advertising, and you may have even read a news story written entirely by a machine. Right now, we look at chatbots like virtual assistants and laugh at their primitive simulated emotions, but it is likely that we will have to deal with beings that make it difficult to draw the line between real and simulated humanity. Are there any machines in existence that deserve rights? Most likely, not yet. But if they do come, we will not be prepared for them.
Much of the philosophy of rights is ill-equipped to deal with the case of Artificial Intelligence. Most claims to rights, whether for a human or an animal, are centered around the question of consciousness. Unfortunately,
nobody knows what consciousness is. Some think that it's immaterial, others say it's a state of
matter, like gas or liquid. Regardless of the precise definition, we have an intuitive knowledge of
consciousness because we experience it. We are conscious of ourselves and our environment, and
know what unconsciousness feels like. Some neuroscientists believe that any sufficiently advanced
system can generate consciousness. So, if your blender's hardware were powerful enough, it might become self-aware. If it did, would it deserve rights? Well, first we would need to define what "rights" would even mean for a being like that.

Consciousness entitles beings to have rights because it gives a being the ability to suffer. It means
the ability to not only feel pain, but to be aware of it. Robots don't suffer, and they probably won't unless we program them to. Without pain or pleasure, there is no preference, and rights are meaningless. Our human rights are deeply tied to our own programming: for example, we dislike pain because our brains evolved to keep us alive, to stop us from touching a hot fire or to make us run away from predators. So, we came up with rights that protect us from vulnerabilities that
cause us pain. Even more abstract rights like freedom are rooted in the way our brains are wired
to detect what is fair and unfair. Would a blender that is unable to move mind being locked in a cage? Would it mind being dismantled if it had no fear of death? Would it mind being insulted if it had no need for self-esteem? But what if we programmed the robot to feel pain and
emotions? To prefer justice over injustice, pleasure over pain and be aware of it? Would that make
them sufficiently human?

Many engineers believe that an explosion in technology will occur when Artificial Intelligences can learn and create their own Artificial Intelligences, even smarter than themselves. At this point,
the question of how our robots are programmed will be largely out of our control. What if an
Artificial Intelligence found it necessary to program the ability to feel pain, just as evolutionary
biology found it necessary in most living creatures? Would those robots deserve rights? But maybe we
should be less worried about the risk that super-intelligent robots pose to us, and more worried
about the danger we pose to them. Our whole human identity is based on the idea that humans are special and unique, entitled to dominate the natural world. Humans have a history of denying that
other beings are capable of suffering as they do. During the Scientific Revolution, René Descartes
argued animals were simple automata. As such, hurting a bear was about as morally repugnant as
punching a teddy bear. And many of the greatest crimes against humanity were justified by their
perpetrators on the grounds that the victims were more animal than civilized human. Even more
problematic is that we have an economic interest in denying robot rights. If we can coerce a sentient
AI, possibly through programmed torture, into doing as we please, the economic potential is
unlimited. We've done it before, after all.

Violence has been used to force our fellow humans into working. And we've never had trouble
coming up with ideological justifications. Slave owners argued that slavery benefited the slaves: it
put a roof over their heads and taught them Christianity. Men who were against women voting
argued that it was in women's own interest to leave the hard decisions to men. Farmers argue that
looking after animals and feeding them justifies their early death for our dietary preferences. If
robots become sentient, there will be no shortage of arguments for those who say that they
should remain without rights, especially from those who stand to profit from it.

Artificial Intelligence raises serious questions about philosophical boundaries. When we ask whether sentient robots are conscious or deserving of rights, we are forced to pose basic questions like: what makes us human? What makes us deserving of rights? Regardless of what we think, the question
might need to be resolved soon. What are we going to do if robots start demanding their own
rights? What can robots demanding rights teach us about ourselves?
