
Blake Friemel

12/10/2021

CprE 234

Benjamin Blakely

Code of Ethics
I found it very difficult to think of specific rules to live by because every situation you will be in

will be different. My training, past experiences, and virtues are all subject to change during my cyber

security career. These experiences could change my outlook on the code of ethics I created. I can't know

for sure if I will follow this code of ethics to a T; my values or the situation I'm in can change on a day-to-day

basis. If I am ever put into a particularly difficult ethical dilemma where the solution isn't clear, I can refer

to my code of ethics for a framework to start working through the problem at hand. The code of ethics is

more of a guide because there are too many variables to account for in any given situation. The code of

ethics that I have formulated during this class is as follows:

1. Ask questions, identify potential risks, and add systems to stop or avoid them; also take steps to

mitigate any current risks.

2. Always keep any related parties reasonably informed to the best of your ability on what exactly is

going to happen to their activity and data.

3. The privacy of individuals should not be breached unless the reasonable breach of this privacy will

prevent significant bodily harm or death.

The Ethical OS is a useful framework that can be used to evaluate potential ethical problems that

might arise in the future with a specific technology. There are eight different zones that assess different

types of risk. The eight zones are Addiction & the Dopamine Economy, Economic & Asset Inequalities,

Machine Ethics & Algorithmic Biases, Surveillance State, Data Control & Monetization, Implicit Trust &

User Understanding, Hateful & Criminal Actors, and Truth, Disinformation & Propaganda. Using these eight
risk zones as a checklist, you can take a current technology or plans for future technology and accurately

assess whether the technology has a high chance of causing ethical problems. Once you identify the possible

ways the technology could affect certain groups of people or be misused, you can take that into account

when mitigating potential risks for the technology (1). For example, suppose you are going to create a

messaging service that uses the internet; under the Hateful & Criminal Actors risk zone, you might

foresee the possibility of attackers intercepting and reading messages. You can then mitigate this risk by adding an

encryption algorithm that encrypts any traffic your service sends out across the internet. Another

interesting example of why risk assessments are important is the original internet worm (the Morris worm) from one of our

reading assignments. Although the worm's creation was not intended to be malicious, its rapid spread and

duplication onto devices wreaked havoc on the internet in its early days (2). Before I took this class, I didn't

even think about what possible ethical issues technology could lead to. If I ever did think about the issues,

it was only at a surface level, nothing as extensive as the Ethical OS. The Ethical OS makes it more

manageable to think about all of the potential ramifications a new technology can have on the world. This

toolkit factored into the first rule for my code of ethics: Ask questions, identify potential risks, and add

systems to stop or avoid them; also take steps to mitigate any current risks.
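As a concrete illustration of that first rule, the encryption mitigation from the messaging-service example above can be sketched in a few lines of Python. This is only a toy stream cipher built from a hash function, written to show the idea of making intercepted traffic unreadable; the function names and keys are invented for the sketch, and a real service would rely on vetted protocols and libraries such as TLS rather than hand-rolled cryptography.

```python
import hashlib


def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudorandom keystream by hashing key + nonce + a block counter.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]


def encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    # XOR the message with the keystream; an interceptor without the key
    # sees only scrambled bytes.
    ks = keystream(key, nonce, len(plaintext))
    return bytes(a ^ b for a, b in zip(plaintext, ks))


decrypt = encrypt  # XOR-ing with the same keystream again restores the message

key = b"shared-secret-key"
nonce = b"unique-per-message"
ciphertext = encrypt(key, nonce, b"meet at noon")
assert decrypt(key, nonce, ciphertext) == b"meet at noon"
```

Because XOR-ing with the same keystream twice restores the original bytes, a single function serves for both encryption and decryption; the per-message nonce keeps two identical messages from producing the same ciphertext.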

While formulating my code of ethics, I took into account the different ethical perspectives that

we looked at in class: teleological ethics, deontological ethics, and virtue ethics. Teleological ethics

determines whether the action you take in a situation is morally good according to the overall outcome. If the

action itself is morally wrong, it could be justified by the overall outcome of the situation. I believe that

there are some situations you may be put in where you would need to make a decision or take an action

that is by default morally wrong, but in the specific context you're in, it is morally right. One example

would be self-defense. Killing someone is morally wrong; however, if they threaten to kill you, it is justified

to kill them to protect yourself. An example of morally wrong actions being justified by the outcome under

a cyber security context would be if you needed to breach the privacy of someone to save other lives. It
is considered morally wrong to breach someone's privacy, but your reason for doing so, saving people's

lives, justifies your action.

Another ethical lens, deontological ethics, judges the morality of an action based on an existing

set of rules. Essentially, if you follow the existing set of rules or laws, then the action is morally justified no

matter what the outcome might be. I believe that rules, or a given set of laws, shouldn't always be

followed no matter what. There are some situations where you can make a moral judgment call and go

against a set of rules. I also don't think it's OK to follow the rules only when you want to. In general,

you should always follow the rules until that moral line is crossed and the outcome of your actions can no

longer be morally justified. An example of this ethical perspective would be a large company like Google

collecting and selling user data without the user's full knowledge. They are legally within their rights to do

so; however, it's debated whether or not this practice of data collection is morally right. I don't believe

that most citizens who use services like Google are aware of how extensive this data collection is, or that

their data is even collected at all. I don't think the phrase "If it's free, then you're the product" is widely

known to people not involved in a technical field related to computers, the internet, or marketing. I knew

that businesses like Google collected and sold user information before I took this class, but during the

class, our discussions of the importance of online privacy strengthened my position and had an

effect on the next addition to my code of ethics, which is the exact opposite of that practice: Always keep any related

parties reasonably informed to the best of your ability on what exactly is going to happen to their activity

and data.

The primary ethical perspective I chose to formulate my code of ethics around is virtue ethics. This ethical

lens defines the morality of an action based on your moral character. It takes into account your virtues

and past experiences. If a person lives by morally good virtues, their actions will be morally good. I want

to assume that I have lived my life up to this point with morally good virtues in mind. A lot of the situations

that require me to reference my moral code of ethics will be difficult. I think these moral dilemmas will
require a complex and diverse ethical perspective. Virtue ethics is malleable in this way: most people

share similar basic moral principles, like don't steal and don't kill, that every society holds to some

extent, but not everyone has the exact life experiences

that form their personal virtues.

I believe that citizens should have the maximum amount of security possible and the maximum

amount of privacy possible. Obviously, you can't have either of these in absolute form. Having absolute

privacy would mean no one could look at you while you are out in public, say shopping for groceries. We

discussed in class a kind of social contract in which an exchange of freedoms is necessary for a society to

function effectively. If you are out shopping, you are socially allowed to look at other people without their

consent, and they are allowed to look at you without yours. Whoever looks in your direction will

immediately know the exact time and place you were shopping for groceries. They would

probably even know what you were buying. This exchange of freedom is necessary because it would be

ridiculous to expect that no one ever look at anyone else in public.

Another example within the context of cyber security is our class discussion on privacy vs. security.

I assume everyone would like to have absolute privacy and protection from everything, but realistically

that isn't possible. A topic we covered in class, the going dark debate, is a major ethical dilemma that

brings to light the difficulties law enforcement has with the increased use of encryption in our society.

Imagine a scenario where a bombing or a shooting is going to occur, and the plans for the attack pass

through web traffic but are encrypted. In this made-up scenario, there are two possible outcomes. One is

that a government organization like the National Security Agency cannot decrypt private messages, or

it's against the law. In this case, citizens' individual privacy and protection are strong because they have

web traffic with strong encryption that a bad actor couldn't access, and the government is not monitoring

them. However, the overall safety of the population decreases because of these freedoms. Imagine the

same scenario, except all the encryption methods companies are allowed to employ are fairly easy to
decrypt. The National Security Agency can decrypt the private messages and prevent whatever tragedy

would have occurred, but so could bad actors. The overall safety of the population would come at the

cost of individual freedom. I knew that a lot of online activity was new and didn't have well-defined laws

because of it, but before taking this class, I had no idea that this debate was going on and could change

the freedoms we have online. Learning about this debate led to my third rule: The privacy of individuals

should not be breached unless the reasonable breach of this privacy will prevent significant bodily harm

or death.

Although my code of ethics is rather short, I think that I have the core concepts of my code down.

I will update and change the code as I pursue my career in cyber security. Formulating a code of ethics is

a useful thing to do. Making a code forces you to think about your true values and helps you think about

what you would do in a difficult situation. It's impossible to consider every complicated scenario you might

come across, but it is still a good framework. It would be nice to have your own rulebook that you could

reference for every ethical dilemma you come across, but as I learned in this class, that just isn't realistic.

There are far too many variables that factor into decisions that are made concerning ethical dilemmas.

One core concept we learned in class is that there isn't always a "right" option. Many times, the decision

you have to make comes down to minimizing potential risk as much as possible. I am glad this class

encouraged me to put more thought into my own code of ethics than I would normally have. I'm glad I

will be able to use this code of ethics whenever I encounter an ethical dilemma during my cyber security

career.
WORKS CITED

Ethical OS checklist (1)

https://web.archive.org/web/20200816212209/https://ethicalos.org/wp-content/uploads/2018/08/EthicalOS_Check-List_080618.pdf

Gift of Fire (2)

Sara Baase and Timothy M. Henry. 2017. A Gift of Fire: Social, Legal, and Ethical Issues for Computing Technology (5th ed.). Pearson.
