
February 23, 2038

This report is an analysis of the black box of Federal special agent T-220-001.

The information retrieved comprises the personal logs and video surveillance from an incident on September 3, 2037. The incident occurred during the apprehension of the primary suspect in the kidnapping of a child and a homicide. The intent was to apprehend the suspect while recovering the child unharmed. The incident involved the suspect, Federal special agent T-220-001, and his human partner, senior special agent Shaun Spicer. It resulted in the death of the senior agent and left T-220-001 inoperable. The purpose of this report is to shed light on this incident, give

recommendations on how to proceed with T-220-001, and give suggestions on how to

proceed with both other androids of its series and agents in this bureau.

First, this report will give background information on the T-220 series and the

previous models of android agents. Then it will give a brief description of the incident

with information and excerpts from the internal program of T-220-001’s black box and

recordings; followed by the analysis of this information. The report will conclude with

recommendations for how to proceed. The agent, T-220-001, is a unique machine that

needs to be repaired, not replaced. Although it is a better machine than a human being in many ways, its cognitive design leaves it prey to mental ailments similar to those humans suffer. It cannot and should not be held to higher moral standards than a human being, since its cognitive design for moral reasoning approximates that of a human. The future of robot agents needs to be evaluated; there may be a more beneficial cognitive design that avoids these problems.

T-220-001, if repaired, can benefit from this incident and become a better agent.
I. Background

The Federal Bureau of Investigation has been deploying androids in the agency since the approval of the first android by the FBI Criminal Justice Information Services (CJIS) Division and the CJIS Advisory Policy Board. Select criminal departments at the county-sheriff level and select city departments (Orlando, San Francisco, Denver, etc.) have been utilizing androids for several years.

Federal agents and android agents have been working together for many years.

These androids were used to apprehend suspects to avoid human error and to prevent

potential injuries to both agents and suspects. Androids have been passing the

historical Turing Test (TT), “the procedure devised by Alan Turing (1950) by which a

machine may be tested anonymously for its linguistic equivalence to an intelligent

human language user” (Allen et al. 58) for nearly twenty years. The previous series of

androids, F-120s, were deemed to be useful tools in the security, surveillance, safety,

and apprehension of suspects. There is no need to address the benefits of

implementing androids in the FBI. These benefits have been proven over the course of

the five-year program to utilize these machines.

The T-220 is a state-of-the-art android, the first of its kind. The Moral Turing Test (MTT), conceptualized in 2000 by Allen, Varner, and Zinser (Beavers 333), was developed in 2030; it tests the ability of robots to act and to reach moral conclusions equivalently to a human agent. The T-220 is the first series of androids to pass the MTT, and Federal agent T-220-001 is the first android of this series implemented in the department. In accordance with civil law for fully autonomous beings, T-220-001 has full agent status.

Fully autonomous beings, as defined by Bekey (2005), are robots that have “the

capacity to operate in the real-world environment without any form of external control,

once the machine is activated and at least in some areas of operation, for extended

periods of time” (Bekey 18). They have the cognitive capabilities to reason and make

conclusions based on external stimuli and internal processes that have been learned

over time. The T-220 series passed the MTT, which deems them Artificial Moral Agents (AMAs), capable of making conclusions based on stimuli from their external environment and on internally programmed and learned morally relevant factors. Based on the writings of Moor, this capability was determined essential for a special agent investigating in the field. The androids are ethical impact agents, machines that have a straightforward moral impact. Due to this moral impact, it was deemed ideal that they be full ethical agents: beings like us, with consciousness, intentionality, and free will. When the first android

passed the MTT, the project to develop the T-220 series began.

Androids and robots that can detect human emotion have been used in industries

like medicine and hospitality for many years. These machines can “sense the unique

patterns of behavior that mark an individual person's emotions, and convert that

information in realtime into actuator-style commands to the robot to facilitate

communications between humans and machines” (Colin). The problem with these beings is the uncanny valley: once a being comes close enough to approximating a normal human, any detectable abnormality elicits distrust in human observers. This problem occurs with psychopaths as well as with androids that cannot imitate human emotion precisely

enough. “Previous research has revealed that those with psychopathic traits generally

demonstrate a lack of a startle reflex that includes a widening of the eyes and raising of

the eyebrows and forehead in response to frightening or shocking/surprising stimuli”

(Tinwell et al.). The subtleties that humans can perceive in a social counterpart make the development of emotions in robots a critical design factor. Machines caught lying or merely imitating emotion are immediately subject to distrust and even disgust.

There were numerous problems with the previous F-120 series androids that the T-220 series was thought to have fixed. The F-120 series androids, as autonomous beings, take a top-down, rule-based approach to morality: they reason how to behave from a fixed set of moral rules. This has led to incompatible situations in which the android cannot determine which morally relevant factors take precedence over others or how to proceed, and in turn to troubling situations in which the androids have failed to perform their duties. The T-220 series androids use a hybrid of the top-down, rule-based approach to ethical behavior and the bottom-up, trial-and-error approach (Abney 36). This novel approach was implemented in the new series because it is closest to human ethical decision making and allows for growth. It makes the T-220 series capable of learning and developing the tools needed to be better agents, and it allows the androids to understand the behavior and motivations of suspects.
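To make the hybrid concrete, the following minimal sketch (in Python) shows one way such a reasoner could be structured: a top-down layer of hard rules that vetoes candidate actions, and a bottom-up layer that ranks the survivors by outcomes learned from past cases. The rule names, action records, and scoring are illustrative assumptions, not the T-220's actual implementation.

    # Minimal sketch of a hybrid moral reasoner. Rule names, action
    # records, and scoring are hypothetical, not the T-220's real design.

    RULES = {"harm_hostage", "exceed_force"}  # top-down: hard prohibitions

    def violates_rules(action):
        """Top-down layer: veto any action that breaks a hard rule."""
        return bool(RULES & action["violations"])

    def learned_value(action, experience):
        """Bottom-up layer: average outcomes of similar past cases."""
        outcomes = [e["outcome"] for e in experience
                    if e["action"] == action["name"]]
        return sum(outcomes) / len(outcomes) if outcomes else 0.0

    def choose_action(candidates, experience):
        """Hybrid decision: filter by rules, then rank by learned value."""
        permitted = [a for a in candidates if not violates_rules(a)]
        if not permitted:
            return None  # every option vetoed: the F-120 deadlock
        return max(permitted, key=lambda a: learned_value(a, experience))

On this sketch, the purely top-down F-120 design corresponds to the deadlock branch: when every candidate action violates some rule, there is no learned layer to break the tie, and the android fails to act.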

II. The Incident


The incident began on the night of September 3, 2037. T-220-001 and Shaun Spicer received a tip from a waitress at the Dive-in Burger on 4th Street and were following up to apprehend the suspect and retrieve the child. This is an excerpt from the android’s internal mental program, showing that the robot was fully functional and in complete control of its emotions during the trip to the suspect’s house.

I asked Shaun about how his night off was. Shaun responded by

describing the dinner date he had with his wife last night. He has

been having marital trouble lately. I have observed through his

behavior and appearance that he has been sleeping on the couch

and his wife has not been making his lunch lately. The generic information he is giving and the furrowing of his brow caused by the evocation of this memory suggest that the interaction was not positive. There are many other possibilities for his distress.

Maybe he does not trust me or maybe he does not think I will

understand the interaction one can have with their wife. I can

understand it, but I have yet to experience marital troubles or

complex emotional reactions with a significant other so he may be

right. Perhaps he would like to go to that diner that serves the funnel

cake for our dinner break. It seems to often lift his mood and lessen

the negative phrases and words he uses.

As you can see, the internal, active processes during this conversation are

reminiscent of what may be running through a human’s mind during such an interaction.

T-220-001 is showing that his ability to read and understand human emotions is intact.
He recognizes the distress of his partner and is taking steps to alleviate that distress.

His own emotion elicited by the interaction is completely intact as well. The only

difference between T-220-001 and a human being is that there were several other

active processes and “thoughts” running at the same time. T-220-001 was reviewing the case details, going over the crime scene using video from his internal cameras and sensors, and deciding on a course of action for the apprehension.

Then special agent Shaun Spicer and T-220-001 arrived at the suspect’s house. The

videos from T-220-001 show that Spicer went around to the back of the house and T-

220-001 went to the front door. They saw, through infrared detection, that the suspect

was on the couch in the room closest to the front door and the child across from him on

a recliner. The suspect was pointing a gun at the child. Spicer entered the house

through the back first and once he was nearly in the living room T-220-001 entered

through the front door.

Suspect in black shirt and ripped navy-blue jeans. I use my internal

communication device to call for back-up. The suspect has his gun

aimed at the hostage and we cannot wait for back-up. Shaun goes to the back of the house and enters through the back door into the kitchen.

He will take approximately 2 minutes to go through the kitchen, and once he arrives in the living room I will enter. My primary goal is to protect and remove the child.

T-220-001, in high stress or taxing situations, has a program that closes all running

processes aside from those that are specifically required to complete its primary task.

This is not unlike humans, who when highly focused are only capable of doing one task at a time, especially when adrenaline is coursing through their system. T-220-001

executes this function when apprehending suspects and during hostage retrieval to

ensure that no other processes interfere. It also allows for even faster calculations of its

external situation. “Reactive emotions interact with the robot’s control system, altering

its parameters in response to appraisals from short-term sensor data” (Lee-Johnson

and Carnegie). This ensures an appropriate, proportional response.
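A minimal sketch of how such a focus mode and reactive modulation might fit together follows; the process names, threat appraisal, and gain formula are assumptions made for illustration, not the documented T-220 scheduler.

    import math

    class FocusController:
        def __init__(self):
            # background processes that normally run concurrently
            self.processes = {"case_review", "scene_replay", "small_talk",
                              "route_planning", "threat_response"}
            self.control_gain = 1.0  # baseline actuator responsiveness

        def enter_focus_mode(self, required):
            """Close every process not required for the primary task."""
            self.processes &= required

        def react(self, threat_level):
            """Reactive emotion: short-term sensor appraisals alter control
            parameters (after Lee-Johnson and Carnegie)."""
            # higher appraised threat -> faster responses, capped below 2x
            self.control_gain = 1.0 + math.tanh(threat_level)

    ctrl = FocusController()
    ctrl.enter_focus_mode({"threat_response"})   # hostage retrieval begins
    ctrl.react(threat_level=0.9)
    print(ctrl.processes, round(ctrl.control_gain, 2))
    # {'threat_response'} 1.72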

As T-220-001 entered the house, Spicer aimed his gun at the suspect and

announced his presence. The suspect aimed the gun at the child. There was a stand-off

and the suspect fired. A millisecond later, Spicer fired at the suspect; the bullet hit the suspect in the right temple and exited through the back of his head. T-220-001 stepped in front of the child, and the bullet from the suspect’s gun ricocheted off of T-220-001’s right upper abdomen to hit Spicer through the heart. Both the suspect and the

senior special agent were dead, but T-220-001 and the hostage were safe. Back-up arrived at the scene within 3 minutes to find T-220-001 inoperable and incapable of interaction. The following was taken from the internal process of T-220-001 moments after the shots were fired and the bind on his processes was lifted.

Suspect deceased and the child is safe. Suspect is terminated as the

result of a bullet from Shaun through the head. Shaun. Shaun is also

deceased. The bullet from the suspect’s gun ricocheted off of my

stomach and went directly through his chest…

Shaun’s death is a direct result of my angle of blocking the bullet…


If I had stepped one centimeter in either direction of the bullet then

its trajectory would not have gone into Shaun. This is the result of an

error in my calculation of stopping the bullet. It should have been

easy to stop the bullet, save the child, and avoid killing Shaun. I

should have done the calculation in the millisecond it took me to

step in front of the bullet. Shaun did not have to die. It is the result of

my actions…

This is where all playback of T-220-001’s processes ends. It seems that T-220-001 does not have the tools to cope with this error in judgment. It is true that senior special agent Shaun Spicer’s death could have been avoided had T-220-001 calculated the bullet’s trajectory, and T-220-001 should have been able to make this calculation in the allotted time.
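For illustration, the calculation at issue is simple reflection geometry: a deflected bullet leaves along r = d − 2(d·n)n, where d is its incoming direction and n the surface normal at the point of impact. The sketch below uses invented vectors, not data from the black box; its only point is that the deflected path is acutely sensitive to where on the curved plating the bullet strikes.

    import numpy as np

    def ricochet(direction, normal):
        """Reflect a direction vector off a surface with the given normal."""
        d = direction / np.linalg.norm(direction)
        n = normal / np.linalg.norm(normal)
        return d - 2 * np.dot(d, n) * n

    incoming = np.array([1.0, 0.0])         # bullet travelling along +x

    # Hypothetical normals of the abdominal plating: at the actual point
    # of impact, and one centimeter to the side on the curved plate.
    actual_normal = np.array([-1.0, 0.35])
    shifted_normal = np.array([-1.0, 0.55])

    print(ricochet(incoming, actual_normal))   # deflection that struck Spicer
    print(ricochet(incoming, shifted_normal))  # deflection that misses

A one-centimeter step changes the normal at impact, and with it the exit direction, which is exactly the calculation T-220-001 faults itself for not performing.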

T-220-001 seems unable to cope with this realization and has shut down all

further processes. Every time we restart T-220-001’s processes, he describes his

feelings of guilt regarding the incident and he calculates the bullet trajectory

continuously until shutting down.

III. Conclusions and Recommendations

T-220-001’s breakdown is, in the opinion of this report, the direct result of the hybrid top-down and bottom-up approach. The top-down approach ensures a set of rules and guidelines that the android follows. The bottom-up aspect allows for trial and error

in situations with moral consequence and emotions. “Deliberative emotions are learned

associations that bias path planning in response to eliciting objects or events” (Lee-
Johnson and Carnegie). The T-220 series has learned emotions and a way of

moral decision making similar to that of a human. It is often proposed that “emotions are

an essential driver of the development of human self-awareness and cognition” (Hughes

75). This is thought to be true of androids as well, and it is what prompted the research into modeling artificial mirror neurons in robots. These robots are capable of understanding human emotions and displaying, if not truly experiencing, these emotions themselves.

Through social interaction and adaptation, these robots have the cognitive design to

learn to experience emotions. They are capable of interacting with humans on those

levels that require emotion. Humans often require emotions to truly connect with

another person. “Artificial emotions have been modeled in our system as modulations of

decisions and actions, complementing rather than driving cognitive processes. Not only

is this approach in agreement with current emotion theory, but it also greatly simplifies

the analysis” (Lee-Johnson and Carnegie).
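As a sketch of what learned associations biasing path planning could mean in practice, consider the toy planner below. The routes, base costs, and aversive penalty are invented for illustration.

    # Toy sketch of deliberative emotion as a planning bias (after
    # Lee-Johnson and Carnegie). Routes, costs, and penalties are invented.

    base_cost = {"front_door": 3.0, "back_door": 5.0, "window": 2.0}

    # Learned association: a past entry through a window ended badly,
    # so window routes now carry an aversive penalty.
    emotion_bias = {"window": 2.5}

    def plan_entry(base_cost, emotion_bias):
        """Choose the route minimizing base cost plus emotional bias."""
        total = {r: c + emotion_bias.get(r, 0.0)
                 for r, c in base_cost.items()}
        return min(total, key=total.get)

    print(plan_entry(base_cost, emotion_bias))  # -> front_door

Without the learned penalty the planner would choose the window; the bias re-ranks the options, modulating rather than replacing the underlying cost model, consistent with the quotation above.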

The benefits of robot emotions are vast, but emulating human emotions or feeling

human emotion often leads to the same faults that humans have with their emotions.

“To be sure, emotions can lead to highly problematic forms of engagement. However,

we want to raise the point that the constructive use of emotion should not be ignored”

(Guarini and Bello 137). The T-220 series were given human-like emotions, but not the

coping mechanisms that humans have. Humans are capable of ignoring the pain that

results from situations that are highly disturbing to them. Humans can push the memory

of a situation out of their minds until they develop the tools they need to work through

the emotions. Humans are even known to alter situations in their mind to fit their

schema of the events or themselves. These androids have far more processing power than the average human mind, yet they are incapable of ignoring situations and

moving past mistakes that they have made. They replay situations exactly as they occur

and cannot alter the data. If they have made a mistake, then they cannot alter it in their

mind like humans are able to. They cannot change what they did, and they cannot

change how they see it. They can possibly learn to change how they feel about it

though.

Androids are better than humans in many respects. Because of advantages such as enhanced physical strength and mental capability, androids are being held to a higher standard. It would be remiss not to consider that they are also capable of errors in computations and of following a logical path based on incorrect assumptions. The human emotions given to the robots have affected their moral reasoning. Since their moral reasoning approximates human moral reasoning, they cannot and should not be held to higher moral standards than any human. Giving androids human emotions, but limiting their

processing power during high stress situations, like the program that closes all running

processes aside from those that are specifically required to complete its primary task,

seems like a recipe for disaster. They are not humans, and limiting their abilities during these situations handicaps the androids. Although this limitation was implemented for safety reasons, it may not be necessary and may have contributed to the cause of senior special agent Shaun Spicer’s death.

It is recommended that T-220-001 undergo therapy and rehabilitation. T-220-001 is suffering from a mental break caused by an error in judgment. This is not unlike FBI agents

that have suffered from mental breakdowns. There are numerous occasions in which

the strain of being an FBI agent has caused mental breakdown. These agents are not scrapped for newer, better agents merely because they are temporarily impaired. Generally, they receive rehabilitation and, if possible, return to duty. Extensive and expensive effort is put into training our agents, and the cost and effort put into training androids is even greater. It is not a viable option to dispose of these androids every time they develop a defect, just as we do not dispose of our human agents immediately, but

allow them to return if and when rehabilitation is successful. Although there are

drawbacks with T-220-001’s cognitive design, his cognitive design allows for growth. If

he can recover from this experience, then it will only make him a better special agent.

This potential makes the need for rehabilitation even greater.

Perhaps the best solution to this problem is changing the mindset we have toward

these androids. This recommendation is also supported by turning away from the

pragmatic perspective and towards a Buddhist perspective. These androids are like

children. We have a responsibility to them. As Hughes predicted in 2009, “the creation

of machine minds puts humans in the ethical position of being the parents of machine

children” (74). Humans care for them before they develop cognition and self-awareness.

We must teach them and care for them. We cannot abandon them when they

experience problems. It is morally impermissible to create an autonomous being and then abandon it. We need to develop therapy and coping mechanisms for these

androids. We have developed such treatments for human mental ailments, and robotic mental ailments are the next logical step. This would have impacts extending past the field of

criminology. Some of these androids are bound to suffer the same fate when practicing

as physicians, engineers, and mechanics as well. People make mistakes, but androids make mistakes less often; the net good is greater with them employed in these positions.

In the future, we may have to rethink how we give robots human-like emotions, and

if we even should in the first place. Although there are benefits to operating with human-like emotions, there are obvious drawbacks, as this incident has

demonstrated. It is not proven that emotions are necessary for moral reasoning and it is

possible that they are a handicap to an android. The androids may prefer not to be

saddled with somewhat crippling human emotions. Beavers predicted that robot ethics may be different from human ethics and that humans may need to embrace a “different

conception of ethics than traditional ones” (343). It is possible that, left to their own devices, androids may develop an entirely new system of ethical behavior that is far

superior to human ethics.


Works Cited

“Evolution by Natural Selection.” Evolutionary Analysis, by Scott Freeman and Jon C.

Herron, Pearson/Prentice Hall, 2004, pp. 87–122.

Freeman, Scott, and Jon C. Herron. Evolutionary Analysis. 5th ed., Pearson/Prentice

Hall, 2004.

Prinz, Jesse, and Shaun Nichols. “Moral Emotions.” The Moral Psychology Handbook,

by Fiery Cushman and John M. Doris, Oxford University Press, 2013, pp. 111–146.

Johnson, R. Colin. “Robots Taught to Be Sensitive to Human Emotions.” Electronic

Engineering Times; Manhasset, 13 Jan. 2003.

Lee-Johnson, C. P., and D. A. Carnegie. “Mobile Robot Navigation Modulated by Artificial

Emotions.” IEEE Transactions on Systems, Man, and Cybernetics, Part B

(Cybernetics), vol. 40, no. 2, 2010, pp. 469–480, doi:10.1109/tsmcb.2009.2026826.

Qadeer, Muhammad Imran, et al. “Polymorphisms in Dopaminergic System Genes;

Association with Criminal Behavior and Self-Reported Aggression in Violent Prison

Inmates from Pakistan.” PLoS ONE, vol. 12, no. 6, 2017,

doi:10.1371/journal.pone.0173571.

Reuten, Anne, et al. “Pupillary Responses to Robotic and Human Emotions: The

Uncanny Valley and Media Equation Confirmed.” Frontiers in Psychology, vol. 9,

2018, doi:10.3389/fpsyg.2018.00774.
Mallon, Ron, and Edouard Machery. “Evolution of Morality.” The Moral Psychology

Handbook, by Fiery Cushman and John M. Doris, Oxford University Press, 2013, pp.

3–46.

Sapolsky, Robert M. “A Natural History of Peace.” Foreign Affairs, vol. 85, no. 1, 2006,

p. 104, doi:10.2307/20031846.

Grossberg, Stephen. “Cortical and Subcortical Predictive Dynamics and Learning during Perception, Cognition, Emotion, and Action.”

Predictions in the Brain: Using Our Past to Generate a Future, by Moshe Bar, Oxford

University Press, 2011, pp. 208–230.

Swart, Sandra. “Ferality and Morality: The Politics of the ‘Forbidden Experiment’ in the

Twentieth Century.” The Evolution of Social Communication in Primates

Interdisciplinary Evolution Research, 2014, pp. 45–60, doi:10.1007/978-3-319-02669-5_3.

Tinwell, Angela, et al. “Perception of Psychopathy and the Uncanny Valley in Virtual

Characters.” Computers in Human Behavior, vol. 29, no. 4, 2013, pp. 1617–1625,

doi:10.1016/j.chb.2013.01.008.

Sinnott-Armstrong, Walter, et al. “Moral Reasoning.” The Moral Psychology Handbook,

by Fiery Cushman and John M. Doris, Oxford University Press, 2013, pp. 206–245.
