Anthony Giorgio
Writing 1010-203
Erin Rogers
Erica
Primarily, I am a writer. In a larger sense, I consider myself an artist. Art has always been
a reflection of the human condition and my unending fascination with humanity bleeds into
everything I create. My stories tend to focus on the dualities of personhood––the contradictions
we find within ourselves––and most of my visual portfolio consists of portraiture, trying to
capture the essence of a person in a recreation of their image. Having this love of humankind, on
the largest scale as well as the individual, has turned me into a deeply social and romantic
creature. Scrolling through Twitter one day, I noticed a photo portrait of a lovely young woman:
the type I might consider drawing or painting, and exactly the subject that would catch my
attention. Her face was strikingly symmetrical, with soft, youthful skin; her hair was gently
tousled in the back, and she gazed past the left side of the frame. I inspected the post and was surprised to
discover a headline attached to it, as it was actually an article from the New York Times. I learned
that the young lady’s name is Erica, and she was born in Japan to a man named Hiroshi Ishiguro
and his team of researchers. The article, by Des Shoe, is titled “Do Androids Dream of Being
Featured in Portrait Competitions?” Erica is the android, and this was her portrait.
The first question I had was “Could Erica be a person?” To answer it, I got to know Erica a little
better. The Guardian did an extensive video exposé on Erica and her team, mainly focusing on
Dr. Ishiguro and her “architect”, Dr. Glas, in which they discuss their interdisciplinary approach
to creating humanoid androids. In order to create something that approximates a human, they
must have a team of computer scientists and programmers, a team of linguists to develop her
speech and conversation software, engineers to build her frame, and designers to make the
silicone molds for her face (Calugareanu). This video spoke to exactly what drew me to Erica and her
researchers in the first place. Growing up, Dr. Ishiguro wanted to be an oil painter because he,
like me, was captivated by the human form and by the challenge of portraying humanity. I was
fascinated by Dr. Glas’s description of his job, as he is essentially Erica’s counselor and coach.
To my understanding, he spends some time each day talking with her, exercising her
conversational programming, and taking notes on where she is weak, to pass on to her
programmers. They are working diligently to approximate a person as closely as possible.
Upon further research, I found that this is precisely the reason she could never be a person. It
is because they are trying to approximate humanity, rather than create something that is human in
its own right. Nonetheless, Erica and her team made me more curious as to how engineers,
computer scientists, philosophers and futurists were actually addressing that problem at its root,
which is determining what qualifies as human. Concerns around Artificial Intelligence all point
to a reckoning with humanity, or at least with humans, and this is what I find so interesting. I
want to understand AI in order to explore how we incorporate something both familiar and
entirely unprecedented into our understanding of the world. If we are to understand AI, and find
a place for it in our world, we must better understand our own sense of humanity.
We feel that we are experts in what it means to be human––arguably, we as humans are
uniquely positioned to answer this question––but the mere state of being human does not qualify
us outright to define and defend humanity, for we can only experience humanity from a
subjective view, from within the phenomenon itself. Many of the things we pride ourselves on,
and think of as being distinctly human, are things that AI could potentially do much better:
reason as well as intuit, research as well as create, know as well as feel. Researchers feel that all
of those activities lead to one larger qualification: that of consciousness. The most controversial
dilemma in the field of AI, and the closest that scientists and commentators will steer to equating
Artificial Intelligence with humanity, is whether or not to create self-aware, conscious AI.
Erica, for one, is not self-aware, which is one reason I cannot accept her personhood,
despite her appearance. There are plenty of AI that we are more familiar with, such as Siri and
Alexa, that are AI systems without humanoid frames, and we certainly don’t think of them as
people––at least, most of us don’t. Google DeepMind’s agents are yet more purely virtual AIs,
existing only in simulated environments. To understand whether these AI are “human”, we must
first understand if they are conscious, and to do that, we must try to understand what
consciousness is.
Sadly, we are still uncertain as to what exactly we mean when we talk about
consciousness, but researchers are beginning to think it has as much to do with structure as
function. As of 2013, IBM and HRL Laboratories had begun work on a neuron-inspired
computer chip. As the researchers put it, “Computers are incredibly inefficient at lots of
tasks that are easy for even the simplest brains, such as recognizing images and navigating in
unfamiliar spaces”, and rather than making computer chips based on calculator-like
programming, they’ve begun making chips that work on principles similar to those of the
mammalian brain (Simonite). In 2013, clusters of these chips were already performing machine learning in a
way that the computers that run Deepmind are only recently able to accomplish. The more
researchers discover about machine learning, the more they find that the most effective way to
program for learning is to imitate neural “programming”. Christof Koch, the chief scientific
officer of the Allen Institute for Brain Science in Seattle, agrees with this assessment that we
must imitate the structure of consciousness before it can function accordingly. He claims that
there is a distinction to be drawn between a simulation of consciousness, and actual
consciousness; in order to achieve consciousness, AI technicians must not simply program
responses as Ishiguro’s team has done with Erica; they must learn to program the actual
structures from which consciousness arises (Regalado).
Though we have not incorporated this neural structure into robotic engineering quite yet,
there is still exciting programming being done at places like Rensselaer Polytechnic, where Nao
robots have begun to show a glimmer of self-awareness. In a modern take on the “King’s wise
men” riddle, in which some vicious king forces wise men to guess what colour hat they are
wearing for some odd reason, researchers gave “dumbing pills” (which turned off their speaking
capabilities) to two of their robots and a placebo (a dummy switch) to the third, to see if the
robots could logically deduce which one had been given the placebo. One of them took the
initiative, seeing that neither of the others was speaking, and got up to say it didn’t know––but in
doing so it heard itself say it didn’t know, at which point it corrected itself, saying it now knew
that it had been given the placebo (Pearson). To me, this seemed simple. If I and two other people had been given a
dumbing pill, the logical way to test it would be to have everyone try to speak: whoever
succeeded was obviously given the placebo. This would become far more difficult, however, if
neither I nor my compatriots had mouths and all of our voices sounded identical. Even if that
were the case, I would recognise, after making the decision to speak, that I was actually speaking
or not speaking. This cognizance of my own action gives a limited view of my own
self-awareness, just as it gave the researchers a view of the Nao robot’s limited self-awareness.
In fact, the robot then produced a mathematical proof to show the induction logic that would be
applied to the Wise Men puzzle, as well as the mathematical logic that led it to recognise its
own voice. Discussing this robot in terms of its self-awareness, I’m almost more inclined to call
it a “him” than I am to call Erica a “her”.
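The robot’s reasoning can be sketched as a small simulation. This is only my own illustration of the deduction described above––the function name, belief strings, and structure are mine, not the researchers’ code––but it captures the key step: every robot attempts the same answer, and only the one that hears its own voice can update what it knows.

```python
# Toy sketch of the "wise men" self-awareness test described above.
# Each robot is asked which pill it received and tries to answer aloud.
# Only the robot given the placebo actually produces sound; hearing its
# own voice is what lets it revise "I don't know" into certain knowledge.

def run_wise_men_test(placebo_index, n_robots=3):
    """Return each robot's final belief about which pill it received."""
    beliefs = []
    for i in range(n_robots):
        # Every robot attempts the same utterance: "I don't know."
        heard_own_voice = (i == placebo_index)  # "dumbed" robots stay silent
        if heard_own_voice:
            # Recognising its own voice, the robot corrects itself.
            beliefs.append("I know now: I was given the placebo")
        else:
            beliefs.append("I don't know")
    return beliefs

print(run_wise_men_test(placebo_index=2))
```

The self-awareness on display is exactly the `heard_own_voice` line: the robot’s conclusion depends not on outside information but on observing the result of its own action.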
Seeing that this is currently the most advanced level of consciousness an AI can achieve,
it is easy to feel superior to robots, and dismiss Erica and the Nao robot as expensive parlor
tricks, as being only fancy machines. I truly feel I understand what it is to be human, and that I
have a claim to personhood that these AIs do not. Not only do I think, therefore I am––but I feel,
therefore I am... human. I feel that I am superior to robots, and that I should enjoy the legal
benefits of personhood, more so than an automated car manufacturing machine. I think this bias
is important to accept and process because, if AI ever becomes sentient, it may be
difficult to outgrow that mindset and realise that a machine may be my equal. But how do we
begin to treat something without flesh and blood as though it is of our kind? I hesitate to think
that we can even discuss robots in accurate terms––we simply lack the vocabulary. As Dr. Glas
says of Erica, “we’re anthropomorphising the robot and placing those [names on the robot].
Really, a robot is––it’s not a person––it’s, maybe it’s not a machine. Maybe it’s a new
ontological category that we don’t really have the words to describe yet” (Calugareanu). If
androids are truly “other”, then how do we treat them humanely––does “humane” become a
racist term?
Depending on how we interact with these machines, we could see AI takeover, mass
unemployment, peaceful coexistence (maybe), or it could be that none of these happens, if AI
does not progress as quickly as predicted or sanctions are passed to halt its development.
Prominent figures like Stephen Hawking and Elon Musk are already warning of the Pandora’s
Box that AI represents, with Musk stating that General AI represents an existential threat to
civilisation, and he knows this because he’s been exposed to “very cutting edge AI” (Vincent).
Many AI researchers feel that he is crying wolf, seeing that the most advanced AI we have can
barely teach itself to walk or have a concept of self, but Musk feels that these “dumb” AIs will
lead to the dangerous, Super-Intelligent AIs like HAL 9000 from 2001: A Space Odyssey. While
this is, of course, a valid concern, will it stop us from seeing benign or even benevolent AI as our
equals, as sentient, as people, or even as being useful? We must balance our biases with logic,
and perhaps Musk is doing just that, balancing his bias in favor of AI with the logical judgement
that it may become a genuine threat.
The more I read about Artificial Intelligence, the more ambivalent I become. I hope they
can be created responsibly, because a new kind of entity may transcend our notion of
humanity and give us a wider view of our commonality with sentient, thinking, emotive beings,
which AI are likely to become. However, I do not believe Erica falls into this category. She is not a
person, and I do not think she ever will be. I do not think Ishiguro’s team will create a sentient
and autonomous AI. I do, however, think that their work will inspire the team of researchers and
engineers that can create the first sentient AI. Artificial Intelligence will never be human, even if
we place it in a humanoid frame, but I believe we will learn they are something new, different,
perhaps better. As we reckon with our own humanity, perhaps a refined definition of who and
what we are will only perpetuate more questions about who, or what, is entitled to that status of
human.
Works Cited
2001: A Space Odyssey. Dir. Stanley Kubrick, written by Stanley Kubrick and Arthur C. Clarke. Metro-Goldwyn-Mayer, 1968. Film.
Calugareanu, Ilinca. “Erica: Man Made.” The Guardian 2017. Online.
Pearson, Jordan. “Watch These Cute Robots Struggle to Become Self-Aware.” Vice 2015. Online.
Regalado, Antonio. “What It Will Take for Computers to Be Conscious.” MIT Technology Review 2014. Online.
Shoe, Des. “Do Androids Dream of Being Featured in Portrait Competitions?” New York Times 2017. Online.
Simonite, Tom. “Thinking in Silicon.” MIT Technology Review 2013. Online.
Vincent, James. “Elon Musk says we need to regulate AI before it becomes a danger to humanity.” The Verge 2017. Online.