
I. First Class

1. First Video

Welcome to the second video of the first week of our course on unethical decision making.
We would like to start our course on unethical decision making by sharing some reflections
on the history of evil with you. In this session, you will learn how philosophers explained evil
throughout history and you will see how our current understanding of evil developed over
time. This is a course on unethical decision making. It is based on one central argument.
When making decisions, we are embedded in contexts. These contexts can be so strong
that we might move to the dark side of the force despite our good intentions and values. We
might do the wrong thing without even realizing that what we do is wrong. We become
blind to the ethical dimension of our decisions. What does this dark side of the force look
like? What is evil? Where does it come from? Here, we can deliver only a rough sketch of
this very big debate, which has lasted for at least 2,000 years.

Let me take a look at these questions from my own cultural perspective, the context of
Europe and its history. If we start with the premodern perspective, we will see that the
world-view of the Middle Ages was a combination of Christianity and ancient Greek
philosophy. Human beings were perceived as being born into a world with two distinct
orders, the order of nature called cosmos, the order of society called polis. Cosmos and polis
were perceived as being in harmony. Simply put, polis followed cosmos. The structure of
nature reinforced a rational social order. So the premodern perception of evil discussed by the
philosopher Augustine, for instance, followed this logic. His argument can be summarized as
follows. God created the world and it was good. Man lived in the Garden of Eden. Adam and
Eve disobeyed God and had to leave paradise because of this, and evil is the consequence
of the fall of man. Two forms of evil exist, natural evil, understood as God's punishment.
God might send an earthquake or a tsunami to punish us for our sinful lives. And moral evil,
which results from our decisions led by our weak character and caused by our alienation from
God. Within this worldview, however, a logical problem soon appeared, the so-called
theodicy problem: three claims that are very difficult to
bring together. Evil exists, God is benevolent, and God is omnipotent.

Questions that drove philosophers crazy in the Middle Ages and beyond were the following.
God could have created the world with fewer crimes and misfortune. Why does crime exist?
God created eternal suffering in hell for limited bad action. Why does suffering exist? Is God
superior or inferior to reason? If reason is superior, God is weak. If God is superior, the link
between guilt and punishment, good and evil, is just random.

The philosopher Leibniz came up with a solution to this theodicy problem. According to him,
God's actions happen for the best of us. There must be a link between sin and suffering
because God has created the best of all possible worlds. Sin, in the sense of moral evil, is
linked to suffering in the sense of natural evil even if we cannot understand and see the
causality. This was the dominating worldview, and it was shaken at the latest in 1755, when an
earthquake struck Lisbon.

Lisbon at the time was one of the wealthiest cities in the world. It was the cosmopolitan
harbor for the exploration and colonization of the world. On November 1, 1755, Lisbon was
shaken by an earthquake. This earthquake shocked the Western civilization more than any
event since the fall of Rome. If cosmos and polis are connected, why do earthquakes happen?
Why did this horrible earthquake happen in Lisbon? Was it a punishment of God? The
earthquake occurred on the morning of November 1 and lasted for about ten minutes. Many
houses got destroyed. The sky turned dark with dust. After the earthquake, terrible fires raged
over the city. People desperately tried to flee to the harbor. However, the earthquake triggered
a tsunami and huge waves smashed the port. Those who were looking for shelter at the
waterfront died. All this looked like a destruction orchestrated by God. But how could God
do this on All Saints Day? In particular, people were puzzled by the fact that most churches
got destroyed while the quarter with the brothels remained more or less intact.

The earthquake of Lisbon sent intellectual shock waves through Europe, and after Lisbon the
belief that natural evil is connected to moral evil was more or less dropped. Society
focused on the evil it can reach: moral evil, evil done by human beings. And the French
philosopher Jean-Jacques Rousseau delivered a highly influential new idea on evil.
According to him, God created free will and we abuse it. Evil develops over time. It has a
history and we can influence it. Evil is an alienation from human nature and counter forces
against evil are first, a better self-knowledge. And second, better institutions of politics
and education. In other words, in order to fight evil, we need pedagogy, and psychology,
and politics.

With the philosopher Immanuel Kant, the clear separation between metaphysical arguments
about God and reason-based arguments got reinforced. A basic human challenge, according
to Kant, is the gap that exists between what is the case and what should be the case. But
despite all the horrible things that happen in the world, we have to be convinced that the
world, in principle, should work. So, for Kant, evil means not to follow the moral law within
me. Evil means to act against reason, to abuse reason. Since then, we have developed a
clear understanding of good and evil, moral and immoral, as connected to reason. Modernity
is the result of the rise of reason with all its good and bad consequences. We have modelled
human behaviour on the basis of our belief in reason as an ideal. In social sciences, what
dominates today is the concept of rational choice and the idea of the homo economicus, the
calculating individual decision maker who is maximizing his or her own utility. Evil is the
result of conscious decisions. It is the result of intentions. It is driven by doubtful
motivations. Reason is, at the same time, the driver and the cure of evil. So reason drives
us towards the dark side, but it helps us to cure ourselves and protect ourselves against
it.

Let us jump to the 20th century. What Lisbon was for the belief in the link between
cosmos and polis, Auschwitz was for our belief in the power of reason: it destroyed it. Up to
3 million people died in Auschwitz. And Auschwitz raised doubts about the sense of applying
moral categories to human decisions and explaining evil as a deviation from reason. Auschwitz was
not an event where reason was absent. It was, in effect, the result of a careful application
of reason and science.

Evil is closer to the proper application of reason, in the sense of Kant, than we want to believe.
Individual intentions and the magnitude of evil no longer connect after Auschwitz. The
philosopher Hannah Arendt's conclusion thus is that evil is banal and normal. It does not
need bad intentions, as it would in Kant's view. It does not need a demonic dimension. It spreads, as
she says, like a fungus on the surface.

The 20th century saw two immense wars fought with the best scientific knowledge available.
It saw two repressive political systems, fascism and communism, responsible for millions of
deaths. It comes as no surprise that the post-war intellectual debates show a deep
scepticism regarding the role of reason in human decision making.

Postmodern philosophy radically breaks with the idea of reason-based progress in human
history. Evil is not the opposite of reason anymore, not the opposite of progress. It gets
intensified with reason and progress. Post-war researchers became interested in the social
conditions that promote evil. Writers like George Orwell, in his dystopian novel 1984,
described how repressive contexts keep people in check and determine what they do and
think. Psychologists like Asch, Milgram, and Zimbardo started to examine various aspects of
contextual forces that drive evil.

This is where our course takes its start. While we do not deny the existence of intentional and
reason-based unethical decisions, we assume that most evil does not result from who we
are, but rather from the context in which we are embedded. Many things might happen
not because of who we are, but despite who we are. We thus need a better understanding of
context-driven evil.

To conclude this first session, in the premodern understanding, evil is the result of the fall of
man. Our suffering is the punishment of God. After Lisbon, evil gets disconnected from
theology and gets analysed as a social phenomenon. Evil in its modern conception is a
deviation from reason. After Auschwitz, evil gets disconnected even from reason. The
analysis of evil starts to focus on the context in which decisions occur.

2. Second Video

Welcome to this third video of the first week of our course on Unethical Decision Making. In our
last session, we had a look at the dark side of the force and we discussed how the concept of
evil developed in human history. In this video today, we will focus on the bright side of the
force and we will see how moral philosophy might help us to fight against the dark side. So
the main goals of our session are that you will learn about the links between ethics and
decision making. You will understand the idea of an ethical dilemma. And you will meet the
two main concepts of moral philosophy that offer us some help in solving dilemma situations.

Life in former times was much easier. Societies were homogeneous, people shared more or
less the same values, the same traditions. They were embedded in the same kind of context,
so they were more or less cruising on autopilot when they were making decisions. The dark
side of this kind of context is, of course, that there was no real freedom.
Modern society, in contrast, is pluralistic, is heterogeneous. The rules of the game are very
often unclear. In addition, we are going through a time of high-speed change, high-speed
transformation. We are facing innovations in information technology and globalization. We
are drowning in data and we have difficulty making sense of them. If you combine these
observations, unclear rules of the game because of heterogeneity, growing speed of
decision-making, and information overload, it is obvious why ethical questions become more
important.
The traditions and routines that we have developed over time lose their power to give us
orientation. The solutions we learned and the problems we face no longer fit. We're in the
middle of a crisis of orientation.

So what? We could ask: why do we need shared ideas, why do we need shared
understandings of what is right or wrong, or good or bad?
There are two answers that have been given by moral philosophers, that are interesting for us.
The first answer comes from Thomas Hobbes, a philosopher of the 17th century. He's asking
us to imagine a world in which there are no rules, in which everyone can do what he or she
wants to do. He calls this the state of nature. There are limited resources, but unlimited
desires to possess those resources. So what will happen? According to Thomas Hobbes, we
will fight for these resources because there are no rules of the game. There will be violence.
There will be fear. So, the whole society will be highly unstable. How do we get out of that
situation? Well, we make a contract with each other in which we renounce some of our
liberties and what we get for that is stability and peace. So, there are two ways basically of
organizing society. One is violence, so domination of the strongest. The second one is rules.
This is the first reason why we need shared understandings of good and bad, at least to a
certain degree.
The second answer comes from David Hume, a philosopher who lived roughly one century
later than Thomas Hobbes. According to him, the reason why we should engage in shared
rule making is that we need cooperation. Because through cooperation we can increase the
pie that we get out of these limited resources. But if we want to cooperate, we need trust. If
we want to trust each other, we need to be able to rely on each other, which means we need to
rely on the idea that the others follow the same rules that I do. So there are two reasons for
having rules in a society, avoiding violence and increasing cooperation.

This is also true for organizations, in particular, in a situation like ours in a crisis of
orientation. In homogeneous societies, right and wrong are pretty clear. In heterogeneous
societies, many decisions hang between right and wrong. They are in the grey area. When
we talk about ethics, at least in our course here, we mainly refer to situations in which we
make decisions in that grey area. The space between clearly good, clearly bad, clearly right,
clearly wrong, it's the space of uncertainty. It is unclear which decision is appropriate unless
we have thought it through carefully. We call these situations, these decisions, dilemmas.
So, a dilemma is a situation in which a decision has to be made and there are two or more
options on the table, and each of them looks similarly right and wrong. But we have to make a
decision nonetheless. Whatever we decide, we might have to violate some of our deepest
values, or the values of others, or the interests of others.

Let me give you an example of what a dilemma situation might mean in the context of an
organization. You are the district manager of a big insurance company. One member of your
team, Claire, is a very popular and successful sales-woman. Due to the serious illness of her
daughter Anna, her sales figures went down sharply. Your own boss already told you that it
will harm your own career if your team does not reach its sales targets.
Jean-Paul Sartre called these kinds of situations ones in which we have to make our hands
dirty, because whatever we decide, it will be right and wrong at the same time. Ethical
decisions are basically about deciding how much dirt we allow on our hands.
Ethics is rarely about clean hands because if it's values against values, something has to be
done at the expense of something else.

So, for our course, what's interesting is, how do we operate in these zones of grey where
there's ambivalence around decision making. In a dilemma, we have to choose between
values, principles and objectives that are all more or less equally important to us. But we
cannot enact all of them at the same time. However, there are shades of grey, so decisions
can be closer to the dark and closer to the light side. And looking back at the case
that we just saw, the case of Claire, you are now about to decide whether or not to fire
Claire. And this seems to be a case of a decision in such a grey area. How to make a good
decision in such a dilemma?
Let's assume for the sake of the argument that there are two options on the table: fire Claire
and replace her with someone else, or keep Claire and motivate the others
to work more to compensate for the loss that she brings. Once the options are on the table,
what we have to do is we have to evaluate the moral quality of the options. What are the
values at stake that collide here? For the option of firing Claire, the ethical dimension is that
we would have to punish her for something that is clearly not under her control, her daughter's
illness. She has been reliable and successful so far. You might think that you owe Claire some
loyalty. Firing her would be unfair.
What about not firing Claire? Well, in this case, the other team members have to work more.
You increase the pressure on them. You increase their stress for reasons, again, that have
nothing to do with their performance. It would be unfair towards the team. This is a clash of
duties towards Claire and towards the team, a clash between what you owe the individual
team member and the team at large. And whatever you decide, you will violate some of those
values that are important for you and for the others. There is no clear right and wrong in such
a situation.

Even if decisions are placed in the grey area between right and wrong, it does not follow
that we can make blind or random decisions. The bigger the dilemma, the greater our
duty to think through the situation, to use our brain to make a thoughtful and reason-based
decision. But how? We don't have the time here to summarize the last 2,000 years of moral
philosophy, but let me just pick two theories that have been dominating our thinking at least
in Europe and in the U.S.A., the duty ethics of Immanuel Kant and the utilitarian ethics of
Jeremy Bentham and John Stuart Mill.
Immanuel Kant gives us a simple rule for making decisions. Ask yourself, can I wish that
what I want to do becomes a rule for everyone? So, can I universalize my decision? Kant
called this the categorical imperative. If I can universalize a decision, then I have to do it. If
I cannot wish that everyone else does it, I should not do it. In his own words, we should act
only according to the maxim whereby you can at the same time will that it should become a
universal law without contradiction. If I might achieve what I want to achieve by lying, I
should ask myself, can I wish that everyone who wants to achieve something has the right to
lie? Can I turn this into a rule for everyone? The answer's clearly no, I cannot. But if
I cannot wish that lying becomes a rule for everyone, I should not lie. In a similar mode, I should
not kill, I should not steal. Regardless of the consequences, this is important for the Kantian
approach, regardless of the consequences. And this is where the Utilitarian ethics steps in
and finds this counterintuitive.
Jeremy Bentham, who was the first to think this through, argues that whether a decision
is right or wrong should depend on the consequences. The best decision is the one,
according to Utilitarian ethics, that brings the greatest benefit to the greatest number of
people. So whenever we make a decision, we should ask ourselves who is affected by that
decision. How are these persons affected? Is this effect strong or weak? Is it in the near future
or far away? And then we use all these factors and we make a utilitarian calculation. And then
we decide. So, we see Kant is focusing on the input of the decision. Utilitarians are
focusing on the output of the decision.
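As an aside, the utilitarian weighing described above can be sketched as a simple calculation. The following is only a hypothetical toy illustration, not part of the lecture: the stakeholders, impact scores, and weights for the Claire case are invented for the example, and no real ethical judgment reduces to such numbers.

```python
# Toy utilitarian comparison of the two options in the Claire case.
# All impact scores and weights below are invented for illustration.

def utility(effects):
    """Sum each affected party's benefit (+) or harm (-),
    weighted by how strong (0..1) and how near (0..1) the effect is."""
    return sum(impact * strength * proximity
               for impact, strength, proximity in effects)

# Each tuple: (impact on well-being, strength of effect, proximity in time)
fire_claire = [
    (-10, 1.0, 1.0),  # Claire loses her job during a family crisis
    (+3,  0.5, 0.5),  # the team keeps a reachable sales target
    (+2,  0.3, 0.3),  # your own career is protected
]
keep_claire = [
    (+8,  1.0, 1.0),  # Claire keeps her income and support
    (-4,  0.6, 0.8),  # team members absorb extra work and stress
    (-2,  0.4, 0.5),  # the sales target, and your standing, are at risk
]

print("fire:", utility(fire_claire))
print("keep:", utility(keep_claire))
```

On these invented numbers the calculation favours keeping Claire; with different weights it could just as easily favour the opposite, which shows how much such a calculation depends on the scores one chooses.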

Which approach is better? Well, both approaches have advantages and disadvantages. As we
said already, the Kantian approach is clearly counterintuitive when it comes to consequences.
It ignores harm that might occur if we do the right thing. Utilitarians, in contrast, they are
willing to sacrifice the well-being of someone if it increases the well-being of the greatest
number. So we can make one person unhappy if it makes most people happy. So, both
approaches are not perfect, but we do not have to take them to their extremes. For us, they're
just two valuable tools that we can use when we think through decisions.

When we are in a dilemma, we might run the decision we are going to make through both the
Kantian approach and the Utilitarian approach. And we should add an important question by
asking ourselves: What are my values? What am I standing for? What is more important to
me? If we filter decisions through these three processes, universalizability (Kant), utility
(the Utilitarians), and my values, then we're making an informed decision.
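The three filters just named can be summarized as a short checklist. The sketch below is an illustrative aid only; the function name and the wording of the questions are my own simplifications, not the lecturer's, and the filters support reflection rather than compute an answer.

```python
# Hypothetical checklist for the three decision filters named in the lecture.
# The question wordings are simplifications added for illustration.

def decision_filters():
    """Return the three questions to run a candidate decision through."""
    return {
        "universalizability (Kant)":
            "Can I wish that everyone in my situation acted this way?",
        "utility (Bentham, Mill)":
            "Does this option bring the greatest benefit to the greatest number?",
        "my values":
            "Can I stand behind this decision, given what matters most to me?",
    }

for name, question in decision_filters().items():
    print(f"{name}: {question}")
```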

The challenge here, however, is that all these theories assume that we can step out of our
context and take a kind of objective approach when making decisions, an objective
perspective, a view from nowhere from which we look at ourselves. And this is where the
problems start. What if the context in which we make our decisions is so strong that we
cannot leave it? What if this little fragment of reality that we see becomes the whole and
overwhelming reality, our only universe? What is obvious for others becomes invisible to us.
And our course is exactly about that kind of situation when what is obvious is not visible
to you. How do we make decisions under these conditions? How do we deal with the fact that
the ideal of philosophers very often doesn't fit our real decision making situations because we
are in contexts that are stronger than reason. In the following session, we will discuss how
context can often switch off the reason that we need to make informed decisions.

So, to summarize our session of today: we are increasingly confronted with
ethical dilemmas when we make decisions. And dilemma situations are situated in the grey
area between clearly right and clearly wrong. We have three tools in our tool box,
universalizability, utility, and values. But such reason-based decisions are not always
possible because we might be embedded in a strong context that switches off reason.

II. Second Class

a. First Video

The Emperor's New Clothes by Hans Christian Andersen.

Once upon a time there lived an Emperor whose only interest was to dress elegantly. He
changed clothes all the time and he loved showing them to the citizens of the kingdom. The
vanity of the Emperor was well known in the kingdom and beyond. Two criminals had heard
about the Emperor's passion for clothes and they decided to take advantage of it. They
travelled to the castle of the Emperor and introduced themselves at the gate. We are two
excellent tailors and we have invented an extraordinary method to weave a cloth so light and
so fine that it looks invisible.
It is, however, only invisible to those who are too stupid and incompetent to appreciate the
quality of our wonderful work. Knowing about the passion of the Emperor for clothes, the
guard led the two presumed tailors to the chief of the guards. The chief of the guards sent for
the chamberlain of the court, and the chamberlain finally notified the prime minister. The
prime minister ran to the Emperor to tell him about these amazing tailors.

The Emperor got curious and he decided to see the two tailors. This cloth, your highness, will
be woven in colours and patterns created especially for you, the two tailors told him. The
Emperor gave them a bag of gold coins and ordered them to immediately start working on the
fabric. The two criminals asked for a loom and for silk, gold thread, and silver. The Emperor
was excited. Besides getting a new wonderful suit, he would be able to find out who, among
his citizens, was stupid and incompetent.

After a few days, getting impatient, he sent his old and wise prime minister to the tailors in
order to get a report on the progress of the work. This prime minister was known throughout
the kingdom to be a man of common sense. Go and see how the work is progressing, the
Emperor told him and come back to me. The two tailors welcomed the prime minister. They
acted as if they were working on the fabric, cutting the air with scissors and sewing the
invisible cloth with their needles. We have almost finished our work but we need some more
silk and some more gold thread. Here, excellency, look at the colours, feel the softness of the
cloth. The old man bent over the loom and tried to see the fabric, but there was nothing. He
felt cold sweat on his forehead. I can't see anything, he thought. If I see nothing, that means I
am incompetent. Nobody should know this. Otherwise, I will lose my office. What a
wonderful work, he said, after a short hesitation. I will tell the Emperor about the great work
that you are doing and the two tailors were very happy. They had almost made it.

The Emperor decided to send another important councillor to evaluate the quality of the two
tailors' work. Upon his arrival in the workshop this poor councillor had the same problem as
the old prime minister. He couldn't see anything. Isn't this a wonderful fabric? The two crooks
asked him pointing at their imaginary work. I am stupid, the councillor thought. This is very
strange but nobody should know this. So, he praised the work of the two crooks, went back to
the Emperor and reported on the fine progress of the work. Finally, the Emperor received the
announcement that the two tailors had finished their work and that his new suit was ready.
The two tailors went to the Emperor moving forward slowly and bowed, pretending to hold
the fabric. Here it is your highness, the tailor said. We have worked day and night to produce
this beautiful fabric for you. Look at the colours, feel the softness. Of course, the Emperor
could not see or feel anything, and he panicked. I can't see it, he thought. This means I'm
stupid, or worse incompetent as an Emperor. But he soon realized that nobody could see that
he could not see the fabric, and he calmed down. The two tailors invited the Emperor to take
off his clothes, and to try the new ones, and they held up a mirror. The emperor felt very
embarrassed, but since none of the bystanders seemed to be uncomfortable, he felt
relieved. This is marvellous. This is beautiful. How good it looks on me. The Emperor said
trying to look comfortable. You've done a wonderful job. Your majesty, the prime minister
said, the people have heard about this wonderful fabric and they want to see you in your
new suit. The Emperor was not sure whether this would be a good idea, to show himself to
the people like this. But he could not say no. After all, only the stupid and incompetent would
see him naked. All right, he said, I will grant the people this privilege. He gathered the
dignitaries of the court around him and they formed a ceremonial parade. Then, the Emperor
walked in a procession through the main streets. Many people gathered along the street,
pushing and shoving to get a better look. A big applause welcomed the procession. All the
citizens were curious to find out how stupid and incompetent their neighbours were. But, when
the Emperor passed, a strange murmur rose from the crowd. First whispering from one citizen
to the next. Then in a loud choir, look at the Emperor’s new clothes. They are so beautiful.
What a marvellous procession. And the colours. These wonderful colours. We have never
seen such elegant clothes in our life.

Of course, they were all disappointed to see nothing, but they did not dare to admit their
stupidity and incompetence. They all behaved as the two scoundrels had predicted. A little
boy, however, who was standing in the crowd with his father, suddenly said, but he's wearing
no clothes, the Emperor is naked. His father grabbed the boy. Fool, he angrily shouted at him,
shut up and don't speak nonsense. But some people in the crowd had heard the boy's remark
and they realized that he was right. The boy is right. The Emperor is naked, it's true! They
repeated it over and over again, first whispering, then louder and louder. The Emperor realized
that the people were right but could not really admit it. He thought that the best he
could do was to keep up the illusion and to continue the procession. He walked on while a
page kept holding the imaginary mantle behind him.

b. Second Video

Welcome to this second video of the second week of our course on unethical decision
making. In our last video, you listened to the fairy tale of The Emperor's New Clothes by
Hans Christian Andersen. In this video I would like to share with you some thoughts on what
this fairy tale teaches us about organizations.
The main goals of this session: you will get familiar with the main idea of this course, the
power of strong contexts over reason, and you will meet some of the main psychological
forces that create such strong contexts. Many of us have read Andersen's fairy tale of The
Emperor's New Clothes to our own children already or we know it because our parents read it
to us when we were children. Most children find the story very funny, and they are surprised
by the strange behaviour of the actors, and they easily identify with the only seemingly
rational actor in the story, the little boy.
Reading this story to our children we normally emphasize that this is just one of those fairy
tales, like Sleeping Beauty or The Brave Tin Soldier, and we assume that in the real world
such a dynamic would never evolve. When we debrief our children on this fairy tale, we
explain to them that there are various moments in this story when reason would normally
interfere. The two tailors, for instance, would have been chased away by the
guards of the castle. The Prime Minister would have revealed the lie because he was an old,
wise man. The Emperor would never have walked the streets naked. The crowd would have
started laughing at him if he had decided to walk the streets in these imagined clothes.
Only in fairy tales, we explain to our children, can such absurdities be found.

Well, the power of reason should not be overestimated. When I use this fairy tale in courses
with managers, a very common reaction I receive is: this story reminds me of my
own organization. So it's useful to have a closer look at the dynamic of this story. Let us start
by asking ourselves what is the overall atmosphere that we can observe in this kingdom? You
look at the cold sweat on the forehead of the old man and you get a hint already: this strange
kingdom is governed by fear. The guard, the chief of the guards, and the court chamberlain
know about the love of the Emperor for clothes, so they don't dare to stop the crooks at
the gate of the castle. Their fear is to be punished. The Prime Minister turns pale.
He's uncertain about what he has really seen or not. He decides to lie because he doesn't want
to risk his job. People in the crowd fear the punishment of the Emperor, but they also
fear being ridiculed by the other people in the crowd if they reveal that they can't see it.
So all of them are terribly afraid of something, and what they show is a very
common reaction to fear, in organizations outside fairy tales as well.

Fear dominates many organizations. The fear of not living up to the expectations of superiors. The
fear of being marginalized by one's peers. The fear of time pressure. The fear of complexity,
the fear of decisions. The fear of being attacked, harassed, and expelled from one's social
context. And the two crooks, they play with that fear. And it's a common strategy to switch
off reason in people using fear. Who creates that fear? Well, the Emperor, because he's the
autocratic ruler of his kingdom. But interestingly, fear is contagious, so it tracks back on him
as well: he fears looking stupid too. So he becomes a victim of his own creation.

Fear is not the only driving force of the story. The two crooks play with another very
important element. We have seen at the very beginning of the story that this emperor is driven
by his vanity. To be more accurate, it's not just vanity in general. It's vanity that drives
him to love clothes and nothing else. Today we might describe him as a fashion victim. So, he
perceives the world only through clothes. The only thing that interests him is clothes. And the
two crooks describe their product, they craft their story, exactly within this frame through
which the Emperor perceives the world. That's why they are so powerful. It's the combination
of fear and the frame of the Emperor.

We all make use of frames when we act in the world. We don't act in an objectively given
world. We interpret the world based on our routines and experiences, and we frame
things based on interests, based on values, based on what we have perceived before. So we
have a frame of looking at the world and left and right of that frame there's darkness,
we don't see things. We reduce complexity by using frames. We restructure a highly
complex world and we make it easier for us both to make decisions as individuals but also to
collaborate with others.
But frames can be too narrow, and that's the leitmotif of our course. Frames can be too
narrow; they can give us too narrow a perception of what we should see when we make
decisions. So we run into risks if the frames are not appropriate.
The Prime Minister, what does he actually see? If we look at him as a key person in the story.
Well, he sees nothing, because there is nothing. But what does he believe? There's cold sweat
on his forehead. He gets uncertain. So he believes that there is something, but he cannot see
it. He believes in the story, he doesn't question and challenge the story of the two crooks. He
panics, because he believes he is stupid and incompetent, and he tries to hide that. He actually
feels incompetent in that very moment.
What about the boy? Some people argue that the boy has nothing to lose that's why he tells
the truth. Well, I think this is not the right way of interpreting his behaviour, because having
nothing to lose means he makes a calculation of what's in it for him and what the risk is. But
he doesn't make that kind of calculation; he's just shouting out what he sees. He is not framed,
like the others, by the fear that dominates the kingdom. He has nothing to lose in the sense of
the frame, but he has something to lose with regards to his father. Think of what his father does.
He grabs him. He shouts at him. He probably beats him up afterwards. So the boy is acting
irrationally in his own context, because he indeed risks something.

What is rationality? What is irrationality? If we assume a very simple model of rationality, it
would mean that we know which means we should use to achieve particular objectives. And
all actors in that story know exactly what they have to do to achieve their objectives. The
Prime Minister wants to stay Prime Minister, so he does what he does. The guards and the
Chief of the guards, they want to keep their jobs as well, so they do what they do. From
inside the story, what they're doing makes sense; it's rational, it is rewarded. Only for us outside
that story does it seem irrational, but they cannot see what we can see. That's an important
lesson that I would like to share with you at this point of our course. Maybe actors cannot
see what everyone else can see while they make their decisions.

What seems highly unethical, irrational, stupid from outside a context might seem
rational, ethical, and the normal thing to do, common sense, from inside the context. The context
can be stronger than reason. Look at the end of the fairy tale. For me this is one of the most
amazing elements of this story. Look what the emperor does. He realizes that he is naked but
he continues the procession. Reason does kick in but the routine is even stronger in that
very moment. You have this very same situation in corporations that are caught in a scandal.
Very often, people realize internally that something is wrong, but the routine is stronger.

Another interesting element is the dynamic that develops in this story. If you look at the
Prime Minister and the Emperor, we would assume they are both exposed to the same
kind of situation. On the surface, there is no big difference between them. They meet the crooks,
they see nothing, they try to hide it. But there's a difference between the Emperor and the
Prime Minister because the Prime Minister goes first and he goes back to the Emperor and
confirms the story of the crooks. In this very moment, the confirmation and the decision
of the Prime Minister becomes the context for the King. The more people confirm, the
more difficult it becomes for those who follow not to see the clothes or not to believe the
story. So commitment can escalate throughout such a dynamic. It becomes stronger and
stronger. So gradually, the reality is shifted towards the narration of the two crooks. And the
stronger the context, the more difficult it becomes for the individuals inside the context, to
escape from the logic of the narration. They get trapped.

Andersen's fairy tale is not a story about some stupid people caught in some stupid forms of
behaviour. It is a story about a pathological context. It tells us something about how
psychological forces can make a context so strong that it becomes stronger than reason,
that it even switches off reason. If you put people into a strong context, they might do what
the people in this fairy tale do as well. The fairy tale gives us some first ingredients of the
dynamics of strong situations that we will analyse in more detail further on in our course:
fear, authoritarian leadership, group pressure, uncertainty about one's own evaluations, the
use of too narrow frames, the escalation of commitment over time.
And if you look at your own organization, you might find at least some of these elements also
in your own context. Actors might behave irrationally not because of who they are, but
because of the context in which they are embedded.

So let me conclude this analysis by giving you five learnings. Learning number one, context
can be stronger than reason. Learning number two. Actors might get trapped in a very
narrow perception of reality. Learning number three, what looks irrational from outside a
context might look completely rational from inside the context. Learning number four,
fear is a key driving force of such irrational behaviour. And finally, learning number five,
modern organizations sometimes look very similar to this kingdom in the fairy tale.

c. Third Video

Welcome to this third video of the second week of our course on Unethical Decision Making.
In our previous session, we discussed the fairy tale of The Emperor's New Clothes, and we
saw how context can drive irrational behaviour.

In this video, we will discuss the story of the Ford Pinto, a real case, and you will see a
lot of parallels between the fairy tale and what is possible in organizations in the real world.
The learnings for this session: you will learn how unethical decisions result from strong
contexts. You will learn to connect our discussions on The Emperor's New Clothes to a real
organizational decision making case. And you will understand how decision makers can get
caught in contexts that remove ethics from their radar screen.

The Pinto is a car launched by Ford in the year 1970. And there has probably never been a car
as controversial as the Pinto. The car had a serious problem. The gas tank is at
the rear end of the car, and whenever another car crashes into the Pinto from behind, there's a
high risk of gas tank rupture. So the Pinto risks exploding in a ball of fire. And everyone in
the car would burn and die. The Pinto story has become a symbol of the cold-hearted profit
maximization attitude of companies who try to make money at any price even if human life is
at stake.

Ford engineers knew about the gas tank problem. They did more than 40 crash tests and in all
these crash tests the problem occurred. Ford even made a cost benefit analysis with regards to
the reduction of that risk. They calculated as follows. What will it cost to invest in a reduced
explosion risk? There are roughly 12,500,000 vehicles, you have to invest $11 per vehicle to
make the back part stronger. And this would cost roughly $137 million USD.
What would be the benefit of this investment? Well, statistically you can predict that there
will be roughly 180 person dying in the car. Which according to the statistics of the insurance
companies costs $200 thousand per person. There would be roughly 180 persons heavily
burned but still alive which costs a bit less $67 thousand per person. And there will be 2,100
burned cars. If you calculate $700 per car, you arrive at roughly $50 million and you see, this
is much cheaper.
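To make the arithmetic above concrete, here is a minimal sketch of that infamous cost-benefit calculation, using only the figures cited in this transcript (the variable names are ours, for illustration):

```python
# Sketch of Ford's cost-benefit arithmetic as described above,
# using the figures cited in this transcript.

# Cost of fixing the gas tank across the fleet
vehicles = 12_500_000
fix_per_vehicle = 11  # dollars per vehicle to strengthen the back part
cost_of_fix = vehicles * fix_per_vehicle  # = $137.5 million

# Predicted "benefit": damages avoided if the fix is made
deaths = 180
burn_injuries = 180
burned_cars = 2_100
damages_avoided = (deaths * 200_000          # insurance value per death
                   + burn_injuries * 67_000  # per person heavily burned
                   + burned_cars * 700)      # per burned vehicle
# = $49.53 million

print(f"Cost of fix:     ${cost_of_fix:,}")
print(f"Damages avoided: ${damages_avoided:,}")
print("Fix passes the cost-benefit test:", damages_avoided > cost_of_fix)
```

The numbers come out exactly as stated: roughly $50 million in avoided damages against roughly $137 million in costs. From inside this frame the fix looks "not worth it", and the ethical dimension never enters the spreadsheet.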

It is not worth investing in a reduction in the explosion risk from the perspective of a cost-
benefit analysis. It is estimated that between 500 and 900 people burnt in Pintos. And Ford
did not stop the production of the Pinto even when the dying started, even when these cars
came in with these accidents. Instead, they paid millions of dollars to settle damages out of
court, and they continued to produce the Pinto.
In August 1978, three teenagers burned when a truck crashed into their Pinto from behind,
and Ford was, for the first time, charged with reckless homicide. In the end, the jury voted
in favour of Ford, but the reputational damage was huge. And finally, they decided to stop the
production of the Pinto.
Why did the company do all this? Why didn't they stop it earlier? Why did they not interfere
with this high-risk of explosion? Looking at this case, we have the impression that this is a
bunch of greedy, unethical people. But, let us look at the case from the perspective of ethical
blindness. Ford, like all the car makers, had a call back team. The call back team basically had
the role of deciding when to call back a car to fix a problem. This team sees the first Pintos
coming in and they decide not to call back the Pinto. One of the engineers in this team is
Dennis Gioia, who later became a famous management professor and wrote down the
story of his experience in the 1970s, when he was an engineer at Ford in the call
back team. You remember what we said about ethical blindness. It is the temporary loss of
the ability to see the ethical dimension of a decision at stake. And here is what Dennis
Gioia later said, as a management professor, about his own position in this call back team in the
1970s: “After I left Ford I now argue and
teach that Ford had an ethical obligation to recall. But while I was there, I perceived no
strong obligations to recall and I remember no strong ethical overtones to the case
whatsoever”. Or, “why didn't I see the gravity of the problem and its ethical overtones? What
happened to the value system I carried with me into Ford?”.

11
So you see, this is a typical case of ethical blindness. You make the decision. You do not see
the ethical dimension. Before and afterwards, you are able to see it, but not in the very
moment of making the decision.
In 1973, Dennis Gioia became the call back coordinator at Ford, in charge of
tracking the information that might lead to call back decisions. He has the responsibility for
overseeing about 100 active recall campaigns. And he is responsible for monitoring numerous
other potential call back cases. So he is drowning in complexity. Comparatively few Pintos
come in with this horrible accident. Is this a candidate for a call back? Not really. The call back
team has a clear standard operating procedure with two criteria they should search for when
making this decision.
First of all, is there a high frequency of cases? So, do many cars have this problem? Second,
is there a clear traceability? So, can we clearly trace the problem to a certain part of the car?
Neither element was given in the Pinto case. The Pinto, as such, was the problem. And Gioia had
little time to reflect upon cases that did not meet these two criteria. As I said, high
complexity, immense time pressure. He had to rely, basically, on the standard operating
procedure to prototype information, to frame it, and to make decisions. And as we have
seen, frames help us to focus. Frames help us to make quick decisions in situations that come
repeatedly. So applying these standard operating procedures, Gioia is looking for particular
cues. He can't find them in the Pinto case. How can you go to your boss and tell him
you have to make a very expensive call back decision when you cannot even give legitimate
arguments according to the standards of your corporation? Gioia felt that he would look
ridiculous proposing a call back for the Pinto.
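The standard operating procedure described above can be sketched as a simple decision rule. This is a deliberate oversimplification for illustration (the function name and boolean inputs are ours, not Ford's), but it shows why the Pinto never triggered a recall under the team's own criteria:

```python
# Sketch of the call back team's standard operating procedure as a
# two-criteria decision rule. A deliberate simplification for
# illustration; the names here are ours, not Ford's.

def recall_candidate(high_frequency: bool, clear_traceability: bool) -> bool:
    """A case qualifies for a recall campaign only if the problem
    occurs in many cars AND can be traced to a specific part."""
    return high_frequency and clear_traceability

# The Pinto case: comparatively few incidents came in, and the
# ruptures could not be traced to one particular component.
pinto_qualifies = recall_candidate(high_frequency=False,
                                   clear_traceability=False)
print(pinto_qualifies)  # False
```

Seen this way, Gioia's frame did exactly what it was designed to do: it filtered the Pinto out.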
So, this is the immediate situation of the call back team. Clear frame, time pressure,
overwhelming complexity. They're caught in this routine. And there's group pressure, of
course, because there are several engineers and none of them sees a problem. You look left,
you look right. Everyone is feeling comfortable with the decision you make; why should
you feel differently?

There are other elements of the story that made this frame of the engineers even more narrow.
Look at the organizational layer of the context. The Pinto is the baby of the CEO of this
company, Lee Iacocca. He was in urgent need of a small car at that time. Why? Well, there's
the oil crisis. Your competitors, Volkswagen, Japanese car makers, they aggressively win
market shares in your market because they have small cars and customers suddenly want
small cars that consume less. There's a pressure to produce such a small car as fast as
possible. Ford engineers are given 25 months to plan the car. Normally, they had 43. So,
there's an extremely short production planning schedule. Time pressure is not just on the
engineers in the call back team, time pressure is already on the engineers planning and
developing the car.
Safety was not a popular issue at Ford at that time. Nobody would dare to harass the CEO
with safety issues. He was famous for always saying safety doesn't sell. And he had
rigid objectives, known as the limits of 2,000. First of
all, he told his team the Pinto should not weigh more than 2,000 pounds, and second, the Pinto
should not cost more than $2,000. Do you want to be the guy who goes to him and tells him it will
cost more than $2,000? This might be a career-terminating move for you.
The word problem, by the way, was forbidden in this organization; the legal
department had declared that it should be avoided. And here you have a classic case
of self-censorship. If you cannot say it, how can you see it as a problem? CEOs like Lee
Iacocca were pretty much told what they wanted to hear, like in the fairy tale of The
Emperor's New Clothes.

Crash tests start in 1968. The Pinto is put on the market in 1970. So, there's not much data
available. And crash tests only become obligatory in 1977. So, in the year the Pinto is brought to
the market, you can completely fail a crash test and still market the car. It's legal. Engineers
are still struggling to interpret the data of the crash tests they have started to do. How would
the crash test data apply to real crashes, real accidents? There was no consensus
about that. Of course, the Pinto performed worse than the other small cars at that time in
crash tests, but only a bit worse than the others. So it seemed to be acceptable. Engineers at
Ford were driving Pintos themselves.

So, the impression overall was, the Pinto is more or less as safe as all the other cars on the
market. And maybe, the explosion was perceived as an acceptable risk. It had a low
probability. And driving cars as such, at that time, was perceived as a risk that customers
accept. Finally, beyond these two layers, the immediate pressure on the engineers and the
call back team and the organization, we have a third layer: the societal,
institutional context. What does that context look like? As I said, in the early 1970s there's the
oil crisis. There's the crisis of the automotive industry, not because of the oil crisis alone, but
also because of the aggressive competition from the car makers who arrive with their small
cars. People want smaller cars that consume less. The only trump card Ford had was the Pinto.
There was no other small car available. Do you want to risk your only trump card in the market in
a situation of crisis if there's nothing else you have to offer?

There's a growing regulatory pressure that started at the time. Ralph Nader, the activist, has
written a book, Unsafe at Any Speed, in which he explains in detail how the car makers
disregarded safety issues and were responsible for many deaths in accidents. This
external pressure created an atmosphere of us against them inside Ford. So there were all
these external threats, all these enemies: governments who wanted to regulate, aggressive
competition, customers changing their habits and expectations. If you have a very
strong perception of us against them, the us, the inside people, they close ranks like in a medieval
castle, and they feel besieged by the outside world. If you develop strong inside-outside
group feelings, this is often the beginning of rule breaking. Outside, the idiots. Inside, the
guys who know everything. This is the atmosphere at that moment.

Finally, drivers as I said, are well aware of the risk of driving cars. The overall perception is
that accidents happen for two reasons, because of drivers and because of streets. Because
drivers are not well-trained and streets are not well-built. So, if you're in this call back team,
you see these Pintos coming in with these devastating accidents. And if you believe that
accidents are caused by drivers and streets, you might not be surprised that the engineers did
not engage in more self-reflection in this situation. Ford even held patents for safer gas tanks.
But safer gas tanks were larger, they would reduce the space in the trunk. And if you asked
customers in the early 70s, would you prefer more safety or a larger trunk? Most of them
would prefer the larger trunk.

Iacocca gave the order, as I said, that the car should not cost more than $2,000. Why did he
give this order? Because he was a well-experienced automotive manager. He knew that there
was a very high price sensitivity in the market. If we increase the price, customers switch to
another model. In a crisis situation, that's the last thing you want.

So is it a case of greed and intended evil? Is it a case of bad apples? As you might agree
with us, Ford's behaviour is less a case of deviance. It is more a case of conforming to
the dominating rules of the game, the dominating practices and beliefs that are there in this
industry at this very moment. Safety is systematically de-emphasized by everyone in that
industry.

So, if you put together this constellation of these three contexts: the situation, the
organization, the societal level, what you get is a very strong context that seems to remove
the ethical dimension from the decision making of the managers. And this does explain why
someone like Dennis Gioia can say, well why didn't I see it when I was making this decision?
Of course, we do not want to excuse the behaviour of these managers. We just want to
understand them better so that we can, in the future, prevent these kinds of strong contexts
from forming around people who make decisions.

So let me conclude with three observations. The first one, the idea that bad things are done
by bad people, so-called bad apples, does not sufficiently explain unethical
decisions in organizations. Second, different layers of context, situation, organization,
society, can build powerful constellations that switch off reason in decision makers. And
third, in such contexts, decision makers might get blinded for the ethical dimension of
their decisions. They behave unethically, but they cannot see it.

d. Fourth Video

Welcome to this fourth video of the second week of our course on unethical decision
making. In this video, we introduce the idea of ethical blindness, which builds the backbone
of our course.
In this session you will understand the concept of ethical blindness, and you will be
familiarized with the key theoretical elements of the concept.

Almost every day we hear about a new scandal of corporate misbehaviour in the news and we
see the photos of arrested managers. We are wondering, why do people and organizations
break the legal and moral rules of the game time and again? Normally, we attribute what
people do to their character.

Our basic assumption is that bad things are done by bad people. Crooks, criminals, so-called
bad apples. This is based on the idea that decisions are made by rational actors. People who
break moral and legal rules thus make a calculation. We assume they compare the advantages
and risks of rule breaking. We assume that rule breaking thus is intentional. This assumption,
however, contrasts with one observation. People who break the rules are often shocked and
surprised by themselves. How could I ever do this? We will talk about some experiments
done by social psychologists between the 1950s and 1970s later on in our course. These
experiments demonstrate how easily normal people can be manipulated into highly
questionable behaviour, such as giving electric shocks to others.

One of those experiments was done by the psychologist Philip Zimbardo, who took young
male students and put them in the roles of prisoners and prison guards. He gave them uniforms.
And his idea was to observe them for two weeks to see what they would do. How would
they behave? The experiment was stopped after six days. Prisoners mentally collapsed, and
prison guards became ever more sadistic from day to day. One of these prison guards, in the
debriefing of the experiment right afterward, said the following: “while I was doing it, I
didn't feel any regret, I didn't feel any guilt, it was only afterwards, when I began to
reflect on what I had done, that this behaviour began to dawn on me”.

This is a phenomenon that returns time and time again in situations where people break the
rules, get corrupted, harass their colleagues, manipulate information, or steal from their
employers. When confronted with their own behaviour, they are often shocked. But while
they are doing it, they do not see it; they do not see that what they do is wrong. They do not
see that they do harm to others. They do not see that they act against their own values. In
some situations, people seem to behave unethically without being aware of it. So, the ethical
dimension of their decision seems to be removed from their radar screen. We call the
decision maker's temporary inability to see the ethical dimension of a decision at stake
ethical blindness.

Ethical blindness has three key aspects. 1. When ethically blind, actors deviate from their
own values and principles. When making a decision, they cannot access those values.
2. Ethical blindness is context-bound and thus a temporary state. When the situation
changes, actors will likely return to practicing their original values and principles.
3. Ethical blindness is unconscious. Actors are not aware of deviating from the rules of the
game when making a decision.

It is important to understand that ethical blindness is not the same as unethical behaviour. It
is just the inability to see the ethical dimension of what one is deciding. But ethical
blindness increases the probability of unethical behaviour. So, in many cases, unethical
decision making is less rational and less deliberate than we think, and more intuitive and
automatic. And in specific circumstances, the ethical aspect of a decision might fade away.
How is that possible? Our main assumption is that many unethical decisions in
organizations have less to do with the person making the decision and more with the
context in which they make their decision. Contexts can be stronger than reason, stronger
than values and good intentions.

How can ethical blindness be explained? Together with our colleague, Professor Franciska
Krings, also here at the University of Lausanne, we have developed a model that explains
how and when context might overpower reason. Our first question is, how do we make
decisions about ethics? From our discussion in the first week you have learnt that in the ideal
world of moral philosophers, we make decisions based on reason. For instance, think of
Kant with his categorical imperative, or the utilitarian calculation. We process data almost
like a computer. In reality, however, decisions are less rational. We decide within a reduced
perception of reality. Indeed, reality is not just out there, perceived as it is. Consider
this picture here. What do you see? Many people see a woman. In fact, many of you might
have seen the face of Marilyn Monroe, or Che Guevara, in a similar style. But there is something
else that you might see: a saxophone player. We interpret what we see; just turn the picture
around and you see something else. What we see is our interpretation. This picture is in fact a
very powerful demonstration that we construct reality, our reality.
Social scientists have established the term cognitive frames to refer to this process.
Frames are mental structures that we use to construct reality. They are like cognitive maps
of our environment. We use these maps to navigate the complexity of our world. They focus
our attention on one thing, obscuring other things.

To again use an example from vision: I have my office on the sixth floor of our building,
with a wonderful view of the Alps on the other side of Lake Geneva. When I look at the
mountains, I do not see the window. I do not see whether it's dirty or clean. I could also
look at the window. But then the Alps would disappear for me.

So frames are mental structures; they focus our attention. They limit our perspective. They
filter what we see and what we do not see. Frames have blind spots. Some information does
not pass through our frame.

And if this is the case, you might imagine that there's a risk of a too narrow framing of reality.
Just think about the discussion we had on The Emperor's New Clothes, the fairy tale.
You remember the very rigid frame of the emperor, the way he perceived the world. He was
only interested in clothes. And the two criminals, they played with his frame. But the story
also shows that rigid framing only develops its destructive power in a particular context.
This context was characterized by fear, by an escalation of commitment, the pressure of the
group, and an authoritarian leadership style. In the fairy tale, what developed was a kind of
collective interpretation of the world. It was completely irrational from outside, but those
inside the context couldn't see it. So these actors created their own special rationality. Their
own little microcosm. Disconnected from a broader access to reality.

So a decision that might look irrational, unethical, pathological from outside a context might
be perceived as rational, ethical, and completely normal from inside the context. As we said,
context can be stronger than reason.

Coming to the conclusions. Good people can do bad things without being aware of it. We call
this state ethical blindness. Ethical blindness is context-bound. It is created by contextual
pressures that impose a too narrow frame of world perception on a decision maker.

e. Fifth Video

Welcome to this fifth video of the second week of our course on unethical decision making.
In this video, we discuss the role of social contexts for the idea of ethical blindness. In this
session, you will understand the concept of ethical blindness and you will be familiarized
with the key theoretical elements of the concept.

So far we've introduced the idea of framing and how we can run into a decision making
situation where our frame becomes too narrow, too rigid. We are embedded in contexts, and
contexts influence how we frame the world. There are three different layers of context, and
they can eventually reinforce rigid framing. And if they do so, decision makers might become
blinded for the ethical dimension of their decision. These three context layers are the immediate
situation, the organizational context, and the institutional or societal context.

In the coming weeks of our course, we will analyse these three contexts in more detail. Here
we will just give you a rough idea of what the basic idea of ethical blindness is and what
kind of importance these contexts have for this model.
The first layer of the concept of ethical blindness is the immediate decision making
situation. Some situations are so powerful that they elicit very similar, very specific
behaviour in many people. Independently of their intentions, levels of morality, values, or
reasoning, people act in a very similar way. In fact, they do not act, they react.
Experiments by social scientists in the 50s, 60s, and 70s provided a lot of illustrations of the
power of the situation. One example is the series of experiments on authority pressure
initiated by Stanley Milgram. For instance, if the leader points to the right and the code
of conduct points to the left, what would people do? Would they follow the leader? Or
would they follow the code of conduct? Experiments show that in most cases people are
obedient. They follow the authority.

A second situational force that we will look at is peer pressure, majority pressure, group
pressure. This kind of study was initiated by Solomon Asch in the 50s. He put people in a
situation where they should make a judgement. And their own judgement would say,
this is correct. But then they were surrounded by others who obviously had other
opinions, other views, other judgements. And they were very irritated and finally did not
trust their own judgements, but sided with what their peers said. We have also seen an
example of this in the story of the emperor. Just remember the cold sweat on the forehead of
the prime minister in our fairy tale, when he could not see the clothes. He did not question the
story of the two crooks. What he questioned was his own intelligence. In groups, we might
behave like sheep.
Another kind of pressure we will consider is time pressure. Time pressure is in fact a very
powerful situational factor. It affects individuals' framing: under time pressure, people
behave differently than when they have time, and they may eventually ignore ethical
dimensions.

The second contextual layer we would like to look at with you is the organizational context.
Organizations tend to simplify our world perception. They tend to create increasingly
homogeneous and simple world views among their members. In particular, this is the case in
successful organizations. Why? Well, we develop routines over time based on the
positive feedback we receive for previous decisions. So the more successful we are, the more
we believe that we've found the way of doing things. And the higher we climb in an
organization, the more we believe we are right, and the less we are able and willing to learn.
Some aspects of the organizational context that push us towards rigid framing are, for
instance, the aggressive competition that you can create among team members inside the
organization, or the aggressive competition in which your industry is embedded.
There's a growing risk of getting focused on a very narrow set of objectives if you are under
the aggressive pressure of competition. You create a climate of Darwinist struggle for survival,
and this is a key driving force of rigid framing.

The last layer in our model is the institutional context. We are surrounded by societal
institutions. There are beliefs out there, values, common practices, and our organizations, in
which we are embedded, are also embedded in these wider contexts. If these norms, values,
and practices are very strong, they can increase rigid framing. They can turn into dogmatic
ideologies. For instance, rule breaking in corporations might be supported by the ideologies
taught in business schools. We will devote some of the following sessions to discussing the
situational, organizational, and institutional contexts in more detail.

In our model of ethical blindness, there's an interaction between our frames, the inside, and
the pressures that come from outside, the contextual pressures. And these two mutually reinforce
each other. If our own frames and the pressure from outside point in the same direction
and both contribute to removing the ethical dimension from our radar screen, we may
become ethically blind. And this may happen even to people with high levels of integrity.
They may act unethically.

Let us come to the conclusion of this session. Three points to take home. First, ethical blindness results from a too narrow framing of a decision-making situation. Second, the frame we use to interpret the world and make our decisions is strongly influenced by the context in which we are making our decisions. Third, three contextual layers can be differentiated: the situation, the organization, and the institutions.

III. Third Class

a. First Video

Welcome to the first video of week three of our course on unethical decision making. In our previous session, we introduced you to a central concept of this course: ethical blindness.
In this video we will focus on one important aspect of ethical blindness: framing. That is, we will discuss how people look at the world and how they construct reality, their reality. In this session you will learn what frames are. You will understand why they are useful and dangerous at the same time. And you will learn how you can protect yourself against narrow frames.

Let me show you this painting. In fact, you do not only see the painting, but also a rather huge and very stylish frame around it. Why are there such frames around paintings, pictures, and photos? What is their function? A frame, in particular this one here, separates inside and outside, the painting and the environment. It focuses your attention on the painting and it separates the painting from the whole environment out there. That is exactly what a photographer does when taking a picture. He selects and thereby determines what can be seen in the picture and what cannot. For this purpose, he can also use a zoom. Sometimes it is important to see the details. At other times, it is important to see the big picture. Such a switch of focus is also central to artworks like the following. Here we have several parts and you can focus on these parts. But with a wider frame, you see something else.

Now consider this problem. Here are nine dots. Your task is to connect them with four straight lines. If you want, you can now try it yourself. Back? Fine. Let me show you one attempt to solve it. The first line. The second. The third. Oops. Okay, one more. First. Second. Hm. Third. Oh. I failed because I tried to find a solution within this area here. That does not work. The only way to solve this problem is to widen the frame. Let's do it together. Here is my first line. The second. The third, and here you go. Maybe you have heard the expression, to be able to think outside of the box. This example nicely illustrates how useful it can be to adopt a wider perspective and to extend boundaries. And note: often these boundaries are self-imposed.

Here's a new task. You will see a short video clip with three people in white shirts passing a
basketball to each other and three people with black shirts doing the same thing. Your task is
to count how often a player with a black shirt passes the ball to another player with a black
shirt. Was there something unusual in this video? If you do not know what I mean, please go
back to the video, but this time you don't count, but you just lean back and watch. There are
20 passes and a woman with an umbrella walking through the scene. If you have not seen her,
watch the clip again. The phenomenon that I am trying to demonstrate has been termed
inattentional blindness. It is the failure to notice an unexpected stimulus that is in one's field
of vision when other attention-demanding tasks are being performed. It was studied by
Ulric Neisser and his colleagues in the 1970s, who also produced the video with the umbrella
woman. These researchers and others, such as Daniel Simons, reported that about 50% of
people failed to see these unexpected stimuli.
How is this phenomenon linked to our present topic, framing? If our attention is focused, we
may fail to see things that are not in the focus of attention. And now, you can also understand
that frames are not only out there around paintings. There was no physical frame around the
nine dots. There was no physical frame around the black players. When social scientists use
the word frames, they use it as a metaphor to refer to mental structures that simplify and
guide our understanding of a complex reality. They focus our attention. They force us to view
the world from a particular and limited perspective.

As an illustration, consider the following story. A Sultan was once attending a meeting of his advisers, a meeting during which they had heated discussions and could not agree on how they should see the issue at hand, and what to do. To illustrate what he, the Sultan, had seen by watching their dispute, he told them about several men who had been blindfolded and brought to an elephant with the task of finding out what it was. One was standing near one of the legs and claimed it was a tree. Another touched the tail and said it was a rope, and so on. The point the Sultan wanted to make: disagreement can result from limited perspectives, that is, from narrow frames.

Let us now consider a real case that once happened on the paediatric ward of a hospital. A six-year-old child had just had an important surgical intervention, and the physician prescribed, as a painkiller, five milligrams of morphine every four hours. The child received it correctly two times, at 8 a.m. and at 12 p.m. Then the nurse made a severe mistake: at 4 p.m. she gave him five milligrams of methadone, a different drug that should not be used in this situation. At about 5 p.m. the patient became very sleepy and finally stopped breathing. Fortunately, he could be resuscitated by the reanimation team, intubated, and transferred to the intensive care unit, from where he could be released after three days without lasting damage.
Now the question: what to do with the nurse? Imagine you have to make this decision. Think about it for a while before you continue. Most members of the Paediatric Executive Board who actually had to make this decision pointed out that this error was really, really serious and that the patient almost died. They concluded that the nurse should be fired. So when asking what contributed to this mistake, they focused their attention on the nurse. After all, she made this mistake.
Now, in the university hospital, there is an internal procedure for error and risk analysis. When unexpected serious events happen, a group of experts is formed. They analyse the situation in detail, conduct interviews, and publish an internal report on the situation. The Paediatric Executive Board agreed to wait with its decision until this group published its report. This group adopted a wider frame and took the whole context into consideration. Through interviews with stakeholders and experts, and based on their literature review, they identified the following contributing factors. The patient: the parents put high pressure on the nurses and the doctors to alleviate the pain of their child. And we will talk about pressure next week. The environment: one nurse was sick, and there were only three nurses for 22 patients, so there was significant stress for every nurse. The material: the morphine pills and boxes and the methadone pills and boxes look very similar and, on top of that, sit beside each other in the pharmacy. The nurse was young and did not have much experience with this kind of drug. There was no double check before drug administration was performed. The team: the young nurse did not get any support from other nurses to prepare and check the drug. And finally, the institutional context: there was no automated pharmacy in the wards, there was permission to have methadone in the paediatric division, and there was no pool of nurses able to replace sick staff.
Based on this systematic analysis, the panel of experts concluded that the nurse was only one part of the drug administration mistake. So they gave the following recommendations to the Paediatric Executive Board. Limit the number of patients per nurse to six and create a pool of nurses that can replace sick ones on a day-to-day basis. Do not allow methadone on paediatric wards, as it is not a commonly used drug in this setting. Change the label of the methadone box to make it clearly different from the one on the morphine box. Use a systematic double-check procedure, with either two nurses or one nurse and the physician, before administering any opiate drug. And do not fire the nurse, but offer her additional training.

The board decided to follow these recommendations and did not fire the nurse. So they ultimately took the complete opposite of the decision they initially came up with. They learned that framing is really important before making any major decision.

Let us now consider how frames distort reality. Frames filter what we see. They control what information is attended to and, just as important, what is obscured. Frames themselves are often hard to see. We construct reality with our mental structures, but we see only the result, which we call reality, not the underlying construction process.
Frames appear complete. Usually we do not realize that there are different frames that we might also use. Seen from the outside, our view may miss important aspects, but seen from our own inside view, there are no holes or gaps.
Frames are exclusive. It is hard to hold different views or interpretations of the world at the same time.
Frames can be sticky and hard to change. Once we are locked into a frame, it can be difficult to switch, especially without conscious effort. When people have emotional attachments to their frames, changing frames can seem threatening.

How to become aware of your frames? How to evaluate their fit? How to generate new frames? Do not only look through your frame, but into the mirror, and try to find out how you construct your reality. Start by considering the possibility that there are multiple ways to see the world, and that your frame is just one way to construct reality. Observe the symptoms of frame misfit. Role-play your adversaries and stakeholders. Put yourself in the other's shoes. Try to anticipate how they see the world, which aspects they focus on, what they would have done in a particular situation. Ask others for their views and opinions, but make sure that they can speak openly and without fear. If you cannot exclude the possibility that they are afraid of negative consequences, think about how they can express their views anonymously. Use subgroups. In an organizational setting, each group is likely to notice and highlight different things. Embrace your opponents. The best devil's advocates are those people you continually disagree with. Approach them and face their challenges, not with a defensive mindset, but rather appreciating their potential to widen your frames. Seek opportunities to meet people from other cultures and find out how they think and what they consider to be important. Marcel Proust once said, “the real voyage of discovery consists not in seeing new landscapes but in having new eyes”. So travelling will not only allow you to see other places, but also to learn about yourself. You will see your own environment differently after you return.
Now, coming to the conclusions. Frames are mental structures. We use them to construct reality. They help us focus our attention and navigate a complex world. This beneficial effect comes at a price: frames have blind spots. Usually we are not aware of the frames we use, and so we do not realize our blind spots. A central question of this course is: do our frames allow us to see the ethical dimensions?

b. Second Video

Welcome to the second video of the third week of our course on Unethical Decision Making.
In this video we will discuss the Enron scandal through the lens of our concept of ethical
blindness.
The main goal of this session: you will learn how organizational cultures can drive ethical blindness. Enron was a company that for a few years turned everything it touched into gold. They were the invincible masters of the universe. They were the most desired employer, year after year, until they collapsed in a huge scandal of fraudulent accounting in 2001.
It is often discussed as a case of a few criminals at the top of a corporation who create a criminal system, so-called bad apples. You have already learned that our course takes a slightly different perspective on unethical behaviour, and we would like to share with you some thoughts on how you can interpret the Enron story through the lens of ethical blindness.

Enron was the result of a merger of two corporations in 1985. Until the early 90s, Enron was the leading gas infrastructure provider in the U.S. Their business was to operate pipelines that move gas across the U.S. They were the biggest owner of pipelines ever in the U.S., and they were called the kings of the American pipeline business. Their business model was very simple and very profitable: they were paid to transport energy from A to B. The pipeline business of Enron was heavily regulated until 1988, when the government decided to deregulate the industry. From that point on, Enron had two problems. First, they were still in massive debt because of the merger they had gone through. And second, because of the deregulation, their profitability shrank.
So they were in search of a new and more innovative business model. Kenneth Lay, the CEO of the company, did what many companies do when they have to change strategy but have no clue what to do: he called McKinsey. One member of this McKinsey team was Jeff Skilling, and he had a fascinating idea. He proposed to turn Enron into a gas bank. What does that mean? Well, instead of just transporting gas from A to B, the new business model was about buying gas, transporting it, and selling it. So Enron could gain control over the whole supply chain of gas in the country. They would charge a fee for the transportation, but they would, in addition, charge fees for selling the gas.
In 1990, Kenneth Lay created a new division, Enron Finance Corporation, and hired Jeff Skilling to lead it. Over time, they increased their power over the market. They bought large parts of the gas market and thereby became the dominant actor, not just in the transportation, but also in the buying and selling of gas. And they made superior profits.
In November 1999, Enron entered the new economy. You might remember that the late 1990s and early 2000s were the time of the first wave of the new economy. They created the e-trading platform Enron Online. Within a few weeks, this platform became the biggest e-commerce platform that had ever existed in the world, and immediately afterwards it became a standard for online trading platforms in general.
They started to sell all kinds of commodities. They entered the electricity market, and soon after entering this market, they became the biggest energy marketer in the USA. They bought the biggest metal trader in the world. So step after step, Enron turned into one of the biggest corporations in the USA. The share value of Enron grew by 1,400% in ten years. In August 2000, Enron stock hit an all-time high of roughly $90. The market was fascinated by the success of Enron.

Goldman Sachs called them literally unbeatable in whatever they do. Fortune magazine selected Enron as the most admired and most innovative company in the world. And the CEO of Enron, Kenneth Lay, was praised as an energetic messiah.
For quite a while, whatever Enron touched became a success story. The time gap between buying and selling energy, however, soon became a problem for Enron, because they had to buy the product, invest, and hold it until they found a customer. By mid-2000 they were trading several billion dollars every day. Employees were always encouraged to do their own trades, to invent new products, and to buy and sell new commodities.
By mid-2000, Enron was trading more than 800 different products. The dilemma was that the more successful they became, the more cash they needed to bridge this gap between buying and selling the commodity. So they were exposed to high credit costs. In June 2000, for instance, they needed $2 million every day just to service their bank credits. Higher credit costs meant lower profits. Lower profits meant a lower share price.
So Enron needed more cash without more debt. Andy Fastow, the CFO of the company, had an idea how to solve this problem. He created so-called special purpose entities. What are they? They are external partnerships, and as external partnerships they could be removed from the balance sheet of the corporation. In order to be entitled to label something an external partnership, a special purpose entity, 3% of the property has to be owned by an external investor. This became a very fascinating tool for the CFO of Enron. He could put the debts of the corporation into these special purpose entities and thereby remove debts from the balance sheet. Removing them from the balance sheet means that the performance of Enron looks much better, and the positive effect on the stock value is easy to imagine.
The problem was that the 3% external partner was Andy Fastow himself in most cases. So these entities looked independent, but in reality they were Enron. Enron continued buying commodities at an ever faster pace and selling them to customers, but no longer had the problem of the gap between buying and selling, because this was shifted to these outside entities.
We do not want to go into a detailed analysis of the fraudulent techniques at Enron, or of the criminal behaviour of the accounting specialists in the company. Our focus is a different one. But let me just summarize the steps that led to the Enron collapse. There was increasing scepticism in the market, as there was for all new economy companies from a certain point in time on. On October 16, 2001, the SEC announced that it was investigating the special purpose entities. Roughly one month later, on November 28, Enron's shares were downgraded to junk status. The value of the company dropped at high speed. On December 2, Enron filed for Chapter 11. They were bankrupt. The house of cards had collapsed. As I said, we are more interested in looking into the culture of this company, to understand how this contagious environment of cheating could emerge.

The top managers of Enron were, of course, aggressive and greedy individuals. They were driven by self-interest. They were cheating. They were taken by the hubris of being above the rules, a hubris that affected many companies of the new economy at that time. They were probably bad apples in the strict sense. But we will not understand the Enron scandal if we just look at these people at the top and zoom in on their character deficiencies.
What is interesting in the Enron case is the large scale of deviant behaviour that took hold of the whole organization, across all levels of hierarchy. The whole barrel was rotten, not just some apples. Many people in many places at Enron became corrupted. The traders, amazingly, came from top universities. Enron hired mainly from Harvard and Wharton. They never dreamed of becoming criminals. But the atmosphere at Enron might have pushed them towards behaviour that they did not expect to get entangled with when they started to work for this company.

So what did it mean to work for Enron? The Enron hype was strongly connected to the overall new economy hype of the late 1990s. There was a debate on the old economy versus the new economy. Old economy meant slow, bureaucratic, big corporations. New economy meant innovative, high-speed start-ups. And Enron was an example of how you could turn an old economy corporation into a new economy model. The broadly shared impression of that time, the mood, was that something very exciting was happening right now, and that the rules we had learned and applied in the past did not count any more. Rules were for the old economy. The new economy was making its own rules along the way, for this new type of organization.
So the behaviour of Enron managers was pretty much in line with the overarching ideology of deregulation, rule breaking, transformation, the new economy. Markets were perceived as good, governments as a problem.

The shareholder value ideology dominated the belief system at Enron. Jeff Skilling once said, we are doing God's work. So, we are representatives of a God-like mechanism that promotes the common good for everyone through our own self-interested behaviour. This was the spirit of deregulation, the spirit of profit maximization, that had caught hold of the early 2000s. Enron created its own reality, building on the hubris of being superior. We are up here, everyone else is down there. This is another quotation from Skilling. Tom Wolfe, in his novels, calls this kind of behaviour the Master of the Universe attitude. But in fact, it reflected exactly the values and beliefs that characterized society and the economy in general at that point in time.

So Enron was, from that point of view, not an exception. They pretty much embodied the belief systems of their time. What we perceive as clearly wrong, Enron managers might have perceived as clever: you beat the system, you tried to beat the system. Cleverness is probably the term that best describes the overall culture at Enron. It is this culture of the company that we would like to analyse next, to show you that there is the hubris of the overall societal context, but there is also the cleverness that drives the organizational context towards what we would call ethical blindness.

As I mentioned, Enron hired mainly graduates from top business schools in the US, and they were hired basically as traders. They were in their early 20s. They were embedded in a context of entrepreneurial aggressiveness and competition, of creative destruction, of fast growth, of you can do what you want as long as you bring in trades. There were flat hierarchies, with few layers between the top managers and the traders. There was a meritocratic system: instead of being promoted or rewarded because of your seniority, you were rewarded because of the trades you brought in. High bonuses and high stock options for young traders, based on success, based on deals. The objective was to bring in as many deals as possible at an ever higher speed. The young traders were only loosely controlled. They had a large space for making their own decisions, even for big projects. And they were inexperienced. You can imagine what happens if you give broad decision-making freedom to people who do not have much experience. The reward system at Enron was, as I said, to pay according to the trades that you brought in. They created a kind of Darwinistic culture around the traders, and this culture manifested itself in the reward system and in the evaluation system.
What did the evaluation system at Enron look like? There was a group of 20 managers who evaluated their peers every year according to their performance. The Enron traders were categorized into two main groups: the high performers and the low performers. The high
performers, 5% of the traders overall, received huge bonuses, sometimes even Ferraris. When bonus day came, Ferraris piled up in front of the headquarters. The low performers, roughly 15% of the traders, were fired the same day. So, what do you do if you want to survive in such an environment? You had better not make problems. You do whatever is expected of you, and you do not criticize your superiors. You bring in business, because you know that when bonus day comes, you might be humiliated. Just imagine, put yourself into the shoes of one of these traders. You come from a top elite business school. You have been trained to believe that you are among the best and brightest. You work for the company that is perceived as the business model of the future. Do you want to be fired after six months because you are a low performer? You cannot afford that. It would be the end of your career, or at least you might perceive it as the end of your career. This is Darwinism, a struggle for survival in a very aggressive context.

So if we want to understand this corporation, there are a few conclusions that we can draw in the context of our concept of ethical blindness. The Enron scandal is not simply the result of the criminal behaviour of a few people at the top of the company. It is not just the result of the behaviour of so-called bad apples. There were bad apples, and they drove the corporation in this direction, but you only understand the scandal if you look at the whole culture of the organization. The deviant behaviour of the top leaders became contagious. The culture of the corporation was characterized by a dangerous mix of cleverness, arrogance, vanity, aggressiveness, greed, and fear. If you create such a culture and promote it with your evaluation system, your bonus system, and your career paths, then you should not be surprised to get something like Enron out of it. In addition, this culture of greed and cleverness was embedded in a particular historic moment, the internet bubble of the early 2000s, in which everyone was convinced that the whole economy was changing and turning into something completely different. This culture led to systematic rule breaking on all levels of the organization.

c. Third Video

Welcome to this third video of week three of our course on unethical decision making. In this
video, we will discuss how the words we use influence what we think and what we do, and
how language can drive ethical blindness.
In this session, you will understand the role of language for framing processes. And you will
learn about the potentially negative impact of two types of vocabularies on the ethical climate
in organizations. Vocabularies of war and of gaming.

When it collapsed in September 2008, Lehman Brothers was one of the largest investment banks in the US. It has since become a symbol of the financial crisis that hit the world in the first decade of the 21st century: because of the hubris, the greed, and the arrogance of bankers who were only interested in their bonuses, the financial system was pushed to the brink of a global meltdown. Well, following our course, you will realize that the story is probably a bit more complex than that.
Many different actors have contributed to the crisis. Governments who did not regulate the
industry sufficiently. People buying houses they could not afford. Shareholders who wanted
higher profits. Business schools, exclusively teaching ideological illusions of self-regulatory
markets. Or journalists not asking critical questions, and many other actors.
But of course, decisions inside investment banks like Lehman Brothers played a key role. Here, we do not want to analyse the technical side of the company's collapse. In a nutshell, the bank collapsed because it had invested too deeply in high-risk financial products. For a while, Lehman Brothers and all the other banks made huge profits by packaging high-risk investments into seemingly low-risk financial products, which they sold to their customers around the world. These profits turned into huge losses when the financial industry ran into massive defaults, in particular because of its mortgage-backed securities. Lehman Brothers understood too late the abrupt change from a housing boom to a country-wide and then global recession and decline in home prices. They had simply taken on too much risk.

We also do not want to discuss the details of the bank's bonus system. We have already analysed incentives and performance evaluation as potential driving forces of ethical blindness in our session on the Enron story. The situation at Lehman and all the other banks was pretty similar, and not much has changed in the financial industry since then, by the way. Instead, we want to pick out one small but very powerful detail in our analysis: the influence of language on ethical blindness.
In the first video of this week, we discussed the influence of frames on our decisions. As you have learned, framing is important for our efficiency: we automatically determine what kind of information counts when we make a particular decision. However, you also learned that frames can become too narrow. In such a case, we do not see what we should see to make a good decision.

Framing is something we do with words, and thus the basic material we use when framing the world is our language. As we know from the philosopher Wittgenstein, words and their meanings group themselves into language games. These language games do not just reflect what we believe; they determine and limit what we can believe. As the management scholar Karl Weick once wrote, “how can I know what I think until I see what I say?” The way we speak reveals what we think. What we think influences what we do. Thinking, speaking, and acting are strongly connected. We understand our reality through language. We use it to share meaning. Our words store meaning for us. Manipulating language, therefore, means manipulating thinking. Just assume you want to cut the budget for public schools, but you do not want the citizens to really understand the consequences: call it a No Child Left Behind policy. You pollute the environment with your factories and fear public criticism? Frame your activities as sustainable production and paint your website green. You want to ignore the rules of the game? Call yourself a member of the new economy, for which the old rules do not count. You want to avoid tough regulation and higher taxes? Move production to Bangladesh and pay tax in Luxembourg, but package it in a story about free markets. Who can be against free markets? Who can be against freedom? Your soldiers fight in a war and accidentally kill each other instead of the enemy? Just call it friendly fire and it triggers fewer questions at home.

In his book 1984, George Orwell develops a dystopian vision of a future in which we are all ruled by the totalitarian dictator Big Brother. One of the key elements of Big Brother's repressive regime is the control of our thoughts. How do you get control over thoughts? By controlling and reducing the vocabulary the citizens can use. You abolish what they call Oldspeak and create Newspeak, a simple language without any ambivalence and without any metaphorical power. This is how one of the people working on the new language praises his work in a discussion with a colleague: “Don't you see that the whole aim of Newspeak is to narrow the range of thought? In the end we shall make thoughtcrime literally impossible, because there will be no words in which to express it. Every concept that can ever be needed will be expressed by exactly one word, with its meaning rigidly defined and all its subsidiary meanings rubbed out and forgotten.”

If we move back from George Orwell to Lehman Brothers, what does language tell us about
the belief system and the cognitive limitations of decision makers inside the corporation?
Larry McDonald, one of the managers on the trading floor at Lehman Brothers, has written a
book on his experience of the corporation's collapse entitled “A Colossal Failure of Common
Sense”. What fascinates me about this book is less the story itself where the author presents
himself as one of the few heroes who saw it coming but couldn't stop it. What fascinates me
is the revealing language the author is using when telling his story. McDonald describes
Lehman as an organization that was run by a junta of platoon officers. Traders spent a lot of
time in combat like battle-hardened regulars. He himself worked on the gun deck of the ship
where financial cannons roared. He liked to be where the bullets fly, where people drop their
hand grenades and where the traders got dispatches from the front line of a war zone. He
describes his colleagues as soothsayers with an AK-47, a Navy SEAL, as a battlefield
commander, and an old battle-zone warrior. Do you feel the testosterone in this language?
This is the language of war as if Lehman Brothers was an army, not a company. If you frame
your reality in a language of war, it is obvious what happens. You create an atmosphere of
war. Full of stress, pressure, fear and aggression. And in a war, the rules that count in times of
peace, don't count anymore. You're surrounded by enemies. Sooner or later, people start to do
things they would not do in a peaceful environment.
Next to the war vocabulary, there's a second type of words used by McDonald. One colleague
seems to have the instinct of a gambler, and the trading room is cooled down like a casino in
Las Vegas. He and his colleagues are in the finance game, the brokerage game, or the
subprime mortgage game, with a CEO playing his usual poker game. Here the world of
traders at Lehman Brothers is framed like a game. And it is not difficult to imagine those
traders sitting in front of numerous computer screens filled with numbers playing the game of
winning and losing on financial markets, beating the gamblers of the competitors. And like
the war, the game is a special situation that is disconnected from normal reality. These
bankers operate within their own bubble, which they perceive as the whole reality. They
cannot look beyond.

Language reinforces this effect of being locked in one particular but much too narrow frame.
Language of war and gaming is a warning signal that something might be wrong in the
organization.
To conclude, there are three takeaways from this session. First, the way we talk reveals and
influences the way we think and the way we think influences our decisions. Second, the
words we use can limit what we can see and think. Third, ethical blindness can be reinforced
by vocabularies of war and gaming.

d. Fourth Video

Welcome to this fourth video of week three, of our course on unethical decision making. In
the last video, we argued that language can drive ethical blindness. In this video, we will
elaborate on this idea and share evidence from various scientific disciplines with you that
support our claim.
In this session, you will learn about research in various disciplines that demonstrates the
power of language over decisions. You will understand in particular how the selection of
metaphors and labels influences what we believe and do. And you will learn that language is
not just a powerful instrument for the manipulation of perceptions, but also for revealing
manipulation and creating mutual understanding.

In our last video, we made a very strong claim. The corruption of behaviour might start with,
or might get reinforced by, the corruption of language. It is easier to engage in bribery if you
call it a facilitation payment, or in accounting fraud if you call it creative accounting. Language
can distort our thinking. We have argued that narrow framing, which drives ethical
blindness, is promoted by the use of aggressive language. For example, when managers speak
as if they were in a war with their competitors. If competition is perceived as war, it requires
a behaviour appropriate to the special situation.
We illustrated this by the story of Lehman Brothers and its CEO. But Lehman Brothers is not
an exception. Of course, it has been well known for millennia that language is very powerful when
it comes to influencing others. Rhetoric was a key domain of philosophy for ancient
philosophers like Aristotle who knew that what you say can be less important than how you
say it. One of the key abilities of great leaders is to find the appropriate words to craft strong
motivational and visionary messages. Just listen to Martin Luther King's powerful I Have A
Dream speech of 1963. Or Barack Obama's Yes We Can election victory speech of 2008, and
you understand the motivational power of words. Or read the speech Shakespeare imagined for
Mark Antony to the people of Rome after the assassination of Julius Caesar. Language
can influence what others think and eventually do.

What we claim here, however, goes deeper. Language is a representation of thinking, and it
goes deep into our belief systems that shape who we are and what we do, not just ad hoc
when we hear an inspirational speech, but constantly. We build imaginary worlds in our
minds and then we enact them. Depending on the metaphors we choose, these worlds can be
corrupted.

Coming back to our critique of war language, this kind of verbal aggression is widespread
in corporations. Jack Welch, the former CEO of General Electric, used war metaphors when
communicating with shareholders and employees. War rhetoric often appears in markets
characterized by strong competition, so-called cutthroat competition. It appears in hostile
takeovers, or in emerging markets with high uncertainty. Textbooks for strategic management
use war metaphors. In fact, corporate strategy emerges from an adaptation of military
strategy. Key approaches like Michael Porter's approach for strategic management are based
on the assumption that companies are in a constant struggle for domination, fighting not only
against their competitors but against governments, customers, employees, suppliers. It is the
nightmarish situation of a war of everyone against everyone that Thomas Hobbes describes
which builds the context for management strategy. And war metaphors promote a negative,
even a hostile perception of others.

Neuroscientists like Alice Flaherty have shown that metaphors create a powerful
physiological connection between reason and emotions in our brain. Metaphors can make one
feel. They give, as Flaherty says, emotional resonance to abstract ideas. Making someone
feel is often the pre-condition for making someone act. The cognitive linguist George Lakoff
has argued that the purpose of a metaphor is to transfer inference patterns from one domain to
another. And war metaphors transfer inferences between the practice of warfare and the
practice of management. They implicitly or explicitly structure our understanding and our
evaluation of managerial decision making, along the thinking of army leaders in war
situations.
Metaphors, therefore, create frames. We have talked about framing already on several
occasions in this course. Here, we focus on the linguistic aspect of it. In this sense, framing
can be understood as the use of text for the promotion of particular perspectives, evaluations
or facts.

The formulation of a statement thus consciously or unconsciously manipulates perception and
interpretation of a statement by particular audiences. This has been called the framing effect, and
psychologists like Daniel Kahneman have widely examined it. While metaphors are very
powerful in putting our mind on a particular track, framing effects always result from
seemingly harmless decisions on how to name or label something. One interesting example of
such a framing effect can be found in the discussion on global warming. While 97% of all
scientists agree on the evidence of man-made climate change, the public opinion in some
countries is split. Many people deny the phenomenon. Whether we believe in global warming
or not does not depend on the exposure to scientific evidence. It depends on how the
discussion resonates with our values and beliefs. If there's a contradiction between our beliefs
and scientific evidence, we stick to the beliefs not the evidence. And what is the role of
language in this game? You might have noticed that I used two different terms here. Global
warming, and climate change. Research shows that people react differently to these words,
depending on what they believe. The term global warming is preferred by those who deny the
phenomenon, because it somehow seems to neutralize the fear that the word climate change
creates. If you want to convince people who are torn between believing and denying the
phenomenon, as a denier you will use the word global warming, as a believer, climate
change.

Our beliefs reveal themselves in the language we use. They are embedded in what the
philosopher Ludwig Wittgenstein calls language games, and his colleague Wilfrid Sellars
calls battery of concepts. Beliefs about global warming build on beliefs about God or science.
Our perception of the world comes from inferences of beliefs leading to new beliefs. And all
this is expressed in our own speech acts. As the philosopher Richard Rorty has argued, we act
in a world with our own particular vocabulary. He calls this a final vocabulary. In his words,
“all human beings carry about a set of words which they employ to justify their actions, their
beliefs, and their lives. These are the words in which we formulate praise for our friends and
contempt for our enemies, our long-term projects, our deepest self-doubts and our highest
hopes. They are the words, in which we tell, sometimes prospectively, and sometimes
retrospectively, the story of our lives”.

Beyond philosophical speculation, there is, indeed, plenty of scientific evidence for the power
of vocabularies over decisions. Let me give you a few examples. As argued, the use of a
single metaphor can guide the way people think about social phenomena. The Stanford
psychologists Paul Thibodeau and Lera Boroditsky have demonstrated this. They confronted
participants with crime statistics, framing crime either as a beast that is
lurking in the neighborhood or as a virus that has infected the neighborhood. Participants
reacted with different proposals with regard to potential solutions. Those who heard that
crime is a beast, were more likely to propose law and order solutions. Those who heard that
crime is a virus rather supported social reform measures.
In another experiment, a Harvard law professor asked students what they were willing to
pay to insure against a risk. For one group, he framed the risk as dying of cancer. For a
second group, he argued that the death would be very gruesome and intensely painful, as the
cancer eats away the internal organs of the body. It comes as no surprise. Participants of
group two were willing to pay much more for the insurance.
Research shows that people smoke more cigarettes if you label the cigarettes light. And
teenagers smoke more, not less, if they are told that smoking is for adults and dangerous.
When fast food companies drown customers in tons of incomprehensible nutritional
information, customers react by evaluating the food as rather healthy. Genocide in Germany
and in Rwanda is strongly connected to the creation of labels that highlight differences and
invent threats where such threats and differences do not exist: Tutsis and Hutus, Aryans and
Jews. Labels like this create the inside-outside perception that is an important first step in the
process of dehumanizing others.
The second step also relates to language. The psychologist Albert Bandura showed that an
escalation of violence is strongly connected to the use of dehumanizing language with regard
to the victims. Once they are dehumanized, they are no longer human beings but obstacles,
animals, filth, vermin. And independent of our opinion on global warming, our general
perception of sustainability, and the debate around it, will shift with the metaphors we
grab when we talk about nature. Is it a machine we can use and manipulate? Is it a living
system, a web of life with delicate, complex feedback loops? Is it our mother? Is it God's
creation intended to serve us? To test the power of words over thinking, just look at the
following formulation. I assume most of you have already driven a car, maybe you own one,
and you certainly have an opinion about driving. Here comes the sentence. Due to
technological progress we could all sit in self-driving cars, soon. Traffic systems will be
developed around such cars. This will reduce the human risk. Millions of lives will be saved.
How do you react to that? Positively? In favour of the change? Let me rephrase it. Due to technological
progress we could all sit in self-driving cars, soon. Traffic systems will be developed around
such cars. We will lose our autonomy as drivers. Machines take control over our cars. What
now? You still like it? Or does your scepticism grow? Self-driving cars are a great invention,
but they come with a big change that deeply influences our beliefs and values. If you want to
convince someone that this change is bad, let the person imagine that they are controlled by a
machine and that their freedom disappears.
Richard Rorty described this as the most important philosophical discovery of our time, and
he called it the linguistic turn. Language is not just a neutral tool we use to understand
reality. It profoundly influences and shapes our reality. According to Rorty, we do not
advance our understanding of reality by looking at the world outside in order to find a
timeless truth. We better try to understand how we use language to make sense of reality and
create reality in particular historic contexts.
French post-modern philosophers like Michel Foucault have also advocated a linguistic turn
in philosophy and started to critically deconstruct the historically grown meaning of terms such
as madness, discipline or punishment. Foucault demonstrated that our understanding of those
terms is not neutral. These terms do not describe an objective reality but result from a social
construction over time. And their meaning promotes particular interests now and then. This is
very similar to the ideas we already discussed around George Orwell's dystopian novel 1984
in our previous video. Language, however, is not only a means to distort meaning and to
manipulate others, but also a powerful means for revealing manipulation, removing
distortion, and finding common ground. Ethics, thus, has also taken a linguistic turn. The
philosopher Jurgen Habermas has argued that language has the in-built function of
convincing others of our arguments and the authenticity of what we claim. He calls this
approach discourse ethics. Decision-making, according to him, should not result from
isolated thinking about the right principles, as Kant or the utilitarians did. Instead, it should
result from the intersubjective exchange of arguments in a context where the influence of
power and manipulative techniques is reduced. And ideally, the better argument will convince
all participants of a discourse. Here, language is seen as the source, maybe the only remaining
source of ethical decision making in a world where we no longer share the same traditions,
religion, or way of life, but nonetheless have to organize our living together in a way that
avoids violence.

So let me conclude this session. Frames are constructed linguistically. There is strong
evidence from research in psychology, law and neuroscience that linguistically constructed
frames deeply impact what we do if they resonate strongly with our deep beliefs and values.
Metaphors and labels are particularly strong frame makers. Language is at the same time the
best source for critically examining manipulation and finding common ethical ground with
others.

IV. Fourth week

a. First Video

Welcome to the first video of week four of our course on unethical decision making. In the
last week, we have introduced the concept of framing and looked at two cases that illustrated
how people construct their reality.
In this video, we will focus on how people can make and do make decisions by using simple
heuristics. In this session, you will learn what simple heuristics are and you will see how
effective and efficient these decision strategies are.

How do people make decisions and how should they make decisions? Well, if you ask
economists, most would say when making decisions, people should strive to be rational.
Economists are, of course, framed. When they look at someone making a decision, they see
homo economicus. Homo economicus cares about economic outcome, cares about subjective
expected utility. Here we have John von Neumann and Oskar Morgenstern, who
formulated the axioms of this theory and who made the notion of rational
behaviour operational. For them, being rational meant being consistent with logical schemes and it meant
choosing the alternative that maximizes subjective expected utility. To compute which course
of action maximizes subjective expected utility, homo economicus takes all relevant and
available information into account to update probability estimates of various events.
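The maximization rule just described can be sketched in a few lines of code. This is only a hypothetical illustration: the actions, outcomes, probabilities and utilities below are invented for the example, not taken from the course.

```python
# Hypothetical sketch of subjective expected utility (SEU) maximization.
# All actions, probabilities and utilities here are invented examples.

def expected_utility(action):
    """Sum of probability * utility over the action's possible outcomes."""
    return sum(p * u for p, u in action["outcomes"])

actions = [
    {"name": "risky project", "outcomes": [(0.3, 100), (0.7, -20)]},
    {"name": "safe project",  "outcomes": [(1.0, 10)]},
]

# Homo economicus picks the action with the highest expected utility.
best = max(actions, key=expected_utility)
print(best["name"])  # → risky project (EU = 0.3*100 + 0.7*(-20) = 16 > 10)
```

Note how much this rule demands: a complete list of actions and outcomes, plus a probability and a utility for each outcome, which is exactly the knowledge requirement Herbert Simon criticized.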
And here we have Leonard Savage, a proponent of this statistical, or say number-crunching,
approach to decision making. This view has soon been challenged by Herbert Simon, who
basically said, calculating expected utilities in order to optimize behaviour may be possible in
a small and stable world. But it's not feasible in the real world, which is large. That is, it is
described on many more dimensions than one can handle computationally. It is dynamic and
it contains a lot of interdependencies and uncertainties.

So Herbert Simon questioned the psychological possibility of SEU theory. In his Nobel
Prize lecture, he said that the classical model calls for knowledge of all the alternatives that
are open to choice. It calls for complete knowledge of the consequences that will follow on
each of the alternatives. It calls for certainty in the decision-maker's present and future
evaluation of these consequences. And it calls for the ability to compare consequences in
terms of some consistent measure of utility.

We can contrast these two views here in this illustration. We have two visions of cognition.
One is unbounded rationality, which includes optimization under constraints. On the
other hand, we have the view of Herbert Simon, who coined the term bounded rationality. In
Herbert Simon's view, people do not optimize, but they use simple heuristics. And here we
have an illustration of what we call the adaptive toolbox. This adaptive toolbox consists of
simple heuristics that people can use to make decisions. It is not optimization under
constraint and it doesn't mean that people are irrational when they use these heuristics. On the
contrary, as we will see, these heuristics do a fairly good job.

Let me now use one of these heuristics to illustrate the notion of bounded rationality in more
detail. Imagine you're standing on a field or some playground and your task is to catch a ball. It
comes high through the air, and you want to catch it. How do you do this? It's simple, you may
say, you just go there, catch it, that's it. But if you tried to figure out what's actually going on
in the human brain, you see it's not so simple. Here we have a quote of Richard Dawkins.
When a man throws a ball high in the air and catches it again, he behaves as if he had solved
a set of differential equations in predicting the trajectory of the ball. At some subconscious
level, something functionally equivalent to the mathematical calculation is going on.

Do people use mathematics? Do they solve differential equations when catching balls? Here's
what Peter McLeod and Zoltan Dienes found out in their research. Fielders, the players
catching the ball, actually use a very simple heuristic. They look at the ball as the person throws it
up in the air, and they start running immediately. They do not do any calculation to predict
where the ball will land. They start running before they even know where the ball will
land. And they adjust their running speed so that they fixate the ball at a constant angle
and, at some point, they'll have it, even though they never knew where it would land. But
that's not the goal. The goal is not to predict where the ball will land; the goal is to
catch it. There's a lot of empirical evidence that people and animals actually use this gaze
heuristic.
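The gaze heuristic can be sketched as a simple feedback rule: the fielder never predicts a landing point, she only reacts to the change in the angle of gaze. The sketch below is a minimal, hypothetical illustration with simplified one-dimensional geometry; the function names and the gain parameter are our own inventions, not part of the research cited above.

```python
import math

def gaze_angle(ball_x, ball_y, fielder_x):
    """Elevation angle (radians) at which the fielder sees the ball."""
    return math.atan2(ball_y, abs(ball_x - fielder_x))

def adjust_speed(speed, angle_prev, angle_now, gain=5.0):
    """The heuristic's only rule: if the gaze angle rises, speed up;
    if it falls, slow down. No landing point is ever computed."""
    return speed + gain * (angle_now - angle_prev)

# The ball gets closer and higher, so the gaze angle rises: run faster.
a1 = gaze_angle(ball_x=10.0, ball_y=5.0, fielder_x=0.0)
a2 = gaze_angle(ball_x=9.0, ball_y=6.0, fielder_x=0.0)
print(adjust_speed(2.0, a1, a2) > 2.0)  # True
```

The point of the sketch is what is *absent*: no trajectory model, no differential equations, just one observable quantity fed back into one control variable.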
Let me give you another very powerful illustration of the gaze heuristic. That's actually an
event that made it to the front pages of all major newspapers worldwide. It's an event that
happened on January 15th, 2009. It was US Airways flight 1549. Two minutes after take-off
at LaGuardia Airport, birds hit the engines. The engines failed and the airplane
basically turned into a glider. And the pilots had ten, fifteen seconds to make a decision
where to bring down the airplane. They considered Teterboro Airport, or at some point they
wondered, should we go back to LaGuardia? How did they make this decision? Here's
what Jeffrey Skiles later said in a talk show. It is not so much a mathematical calculation as
visual, in that when you're flying in an airplane, a point that you can't reach will actually rise
in your windshield. A point that you're going to overfly will descend in your windshield. So
it's more of a visual calculation.
To better understand this, let's look at the following animation. This is the perspective of the
pilots, as they would have seen it from the cockpit. And here we see the airport. Question: will they
make it? If they look at a scratch in their windshield, they see the airport is rising. So at that
moment, they knew they would not be able to make it. And they actually used the gaze heuristic.
Fielders adjust their running speed so that they always keep the same angle. The
pilots were no longer in a position to manipulate the speed of the airplane,
it was a glider, but they could watch the angle. And they could figure out: was the
angle stable? Was it the same? And they saw that, no, the angle changed. So they knew their
speed was not fast enough, and this is why they decided to go down into the Hudson, and this
later became known as the miracle of the Hudson River.

The gaze heuristic is a very powerful illustration that a simple heuristic can guide behaviour
so that an organism can reach its goal. And note that most of the information and world
knowledge out there can be ignored. No complex calculation is needed. No differential equations
need to be solved. Something similar can be said about other simple heuristics that people have or
may have in their adaptive toolbox. When one has to make a choice between a recognized
and an unrecognized object, the recognition heuristic goes for the one that is recognized. No
information about the alternative is processed. In a choice task, Take The Best considers one
attribute after the other, starting with the best one, and stops once an attribute is found that
discriminates between the alternatives; it makes the decision only based on this attribute. All
others are ignored.
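The lexicographic stopping rule of Take The Best can be sketched in a few lines. This is a hypothetical illustration: the cities, cues and their ordering below are invented for the example, not data from the original research.

```python
# Hypothetical sketch of the Take The Best heuristic. Cues are ordered
# by assumed validity (best first); the first cue that discriminates
# decides, and all remaining cues are ignored. Example data is invented.

def take_the_best(a, b, cues):
    """Return the alternative favoured by the first discriminating cue.
    `cues` is a list of functions, each returning 1 (present) or 0 (absent)."""
    for cue in cues:
        va, vb = cue(a), cue(b)
        if va != vb:          # this cue discriminates: decide now
            return a if va > vb else b
    return a                  # no cue discriminates: guess (here: a)

# Toy task: which city is larger?
cities = {
    "Hamburg": {"capital": 0, "top_league_team": 1, "airport": 1},
    "Bonn":    {"capital": 0, "top_league_team": 0, "airport": 0},
}
cues = [
    lambda c: cities[c]["capital"],
    lambda c: cities[c]["top_league_team"],
    lambda c: cities[c]["airport"],
]
print(take_the_best("Hamburg", "Bonn", cues))  # → Hamburg
```

In this toy run the capital cue ties, the second cue discriminates, and the third is never even looked at, which is exactly the frugality described above.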
QuickEst is a simple heuristic to make numerical estimates. Fast and frugal trees can be used
to categorize objects or to decide among courses of actions. And often, one piece of
information may be enough for this purpose. All other information is ignored.
Tallying sums up the arguments speaking for each alternative, but does not weight them.
Elimination-by-aspects simply eliminates alternatives based on thresholds until only one is
left. Often, we satisfice (the satisficing heuristic), that is, we are satisfied with something that is
sufficiently good. Most often, we do not make the effort to find the optimum. And in most
cases, it is not even clear what the optimum is. Often, we do not make much effort to find out
what is best. We simply imitate others, hoping that this is not too bad. And often, we do not
make decisions each time anew, but follow routines and go with what we did in the past in
a similar situation. Often, we do not decide at all, but adopt defaults set by others.

So let us conclude this session. Economists tend to frame decision making in terms of
unbounded rationality. Homo economicus is an optimizer. The classical approach has been
challenged by the concept of bounded rationality. People have a repertoire of simple
heuristics in their Adaptive Toolbox. These heuristics allow them to make good decisions
even though people have limited knowledge, limited memory, and limited computational
capacity. These heuristics may also be applied in situations that can be evaluated from an
ethical point of view. And here comes the danger. These heuristics ignore information. If
ethical dimensions are ignored, outcomes may be unethical. We will come to this in the next
sessions.

Welcome to the second part of the first video of week four of our course on unethical decision
making. In the last video we discussed simple heuristics that people can and do use when
making decisions. We have seen that these heuristics allow people to make decisions very
efficiently in the sense that they can ignore a lot of information out there and still make
good decisions.
In the present video we will discuss another aspect of efficiency, one that comes into play if
people make similar decisions repeatedly. So we basically add a temporal dimension to
decision making. And we will talk about routines.
In this session you will learn what routines are, what functions they have, and why they can
be dangerous when it comes to ethics.

How many decisions do you make per day on average? What do you think? And how many
decisions do you make per year? Now, I haven't done this yet, but I suspect if I ask the first
question per day to one group of people and the second, per year, to another group, the ratio
between the average responses will not be 1 to 365. Why not? With the question, I communicate
in a very subtle way what I mean by decision. If someone answers with hundreds or even
thousands per day, I would not find this strange. At the same time, I would not find it strange if
someone answered, well last year, I made three decisions. We make big decisions like whom
to marry, whether to accept a certain job offer, or even where to spend vacations. These are
decisions which require some cognitive effort and involvement, some deliberation. And then
we make many, many small decisions like how to get to the workplace in the morning or
which bread to buy in the bakery or in the supermarket. Or where to place the key after
having entered the apartment in the evening.

The same is true for companies or organizations. Some decisions are big, strategic. You may
also call them basic in the sense that they build the basis for much else that follows and that
is built on these big decisions. They are usually carefully prepared, often by many people
and over long periods of time. For example, should a company invest in the research and
development of a certain new product? Should it outsource some important activities? Should
a city council or a parliament pass a certain law for the community?

And then there are many, many small decisions that we often may not even perceive as
decisions like filling out forms, ordering new papers for the printer, redistributing work after
a colleague called in the morning and informed the team that he is sick, and so on. These are
routine decisions. What is a routine? It comes from the French la route. The direction, the
way to go. Here's a more formal definition. A routine is a set of customary or unchanging
and often mechanically performed activities or procedures. Or on the level of
organizations, routines are repetitive patterns of interdependent actions in the organization.
To illustrate, let's maybe go back to the origin of the word, and consider a wide open space
during wintertime. It has been snowing overnight and in the morning everything is white.
The first person comes; she wants to cross from one side to the other and she has to make a
decision which way to go. Now imagine a second person coming. Which path do you think
the second person will take? Probably the same. It requires less effort, the snow is already trodden
down and it is easier. One does not need to spend any effort on making decisions. One simply
copies what others did. That is, the second person will imitate. And you can, of course, also
imitate yourself. You do what you did yesterday in the same way. Whenever we move into a
new environment, new apartment, new working place, have a new spouse, we have to make
decisions. And usually people deliberate about these decisions. For instance, the decision how
to get to the working place. By bus, car, bike, foot. So we collect information. We compare
options. We try out something. We evaluate, we learn, we adapt. And at some point the whole
thing stabilizes and we can switch on autopilot. And we take the same way every morning.
Likewise, we will develop some repetitive interaction patterns with our neighbours or
colleagues. We will go to the same places to buy the same products, and so on. So, routines
are recurrent patterns of behaviour or, when considering groups or organizations, recurrent
patterns of interactions.

How are routines related to rules or standard operating procedures? Rules and standard
operating procedures are more formal, official, and they are often written down somewhere.
Sometimes routines emerge bottom up. Someone somewhere starts something and this
leads ultimately to routines through processes such as imitation, variation, evaluation,
adaptation. Maybe, and quite late in the process, someone writes it down as a standard
operating procedure. But many routines are not put in writing.
And then there are also rules that start in written form. As orders that are formulated at some
level in the hierarchical organization with the purpose of regulating behaviour at lower levels,
as a top down process. Such written rules will then be read, interpreted, and executed by the
members of an organization, which is another source of routines.

What are the functions and effects of routines, in particular in organizations? First, they
economize on our cognitive resources. Deliberate decisions require cognitive effort. Once
we have invested this effort, we no longer need to invest it when we are in a similar situation.
We just imitate our own past behaviour, and this can be done via semi-conscious
mechanisms, which in turn frees up cognitive resources for other tasks that may require our
attention and awareness. Second, routines ensure stability and they reduce uncertainty.
Sometimes we can predict the decisions of others, but sometimes not. If I know, however, that
you do not deliberate at the moment, but simply imitate what you observed or what others did in
the past, you are predictable. Sometimes it is not good to be predictable. But if we work
within the same organization, and if goals are aligned, then it is usually good to know what
the others are doing. This gives us planning security, and it facilitates coordination, which in
turn can increase efficiency. A third aspect of routines is that they help groups of people to
stabilize their relationships. That is, to create micro-political stability that strikes a balance
between the interests of the participants in the routines. They can be seen as results of many,
many little negotiations. I give you this, you give me that. And these solutions are repeatedly
implemented. Another aspect that we want to mention here is that they store knowledge.
They can be seen as an important part of the memory of an organization. Routines bind
knowledge, including tacit knowledge. Individuals and organizations constantly get feedback.
They learn and they adjust their behaviour. Different behaviour patterns can be seen as being
in competition with each other, and the feedback plays an important role when it comes to
selecting the way to go. Now, if the feedback is used to select a path and if this then becomes
a routine, one can say that these routines build on the past and reflect previous experiences.

This is not to say that routines cannot change, but they bring some conservatism into the
system, into an organization. The positive aspect is, as we already mentioned, stability,
predictability, and efficiency gains through the ability to coordinate. But the negative aspects, or
the dangers, must also be seen: rigidity and inertia. That means routines are a starting
point for, and at the same time also an obstacle to learning. Learning means changing.
And in order to change something, this something has to be there in the first place. In order to
change routines, you have to have some. But routines can be tricky to change, in particular if
they are shared, and if many other individuals would need to change a particular routine at
the same time. This would also require coordination effort.

Let us now turn to ethics. Routines play an eminent role when it comes to the ethical culture
of an organization. The ethical culture of an organization comprises all those aspects and
elements of an organization which influence the ethical conduct of its members. It is
important to understand that the ethical culture of an organization has several layers. At the
heart are the shared values and assumptions of the organization. These provide the overall
direction for the behaviour of the organization and its members. Those core values can also
be implicit, especially in smaller organizations. In order to make those core values explicit
and to show how they translate into behaviour in the daily business, organizations establish
formal norms. These include standard operating procedures, codes of conduct, policies and
guidelines. The members of an organization then apply those norms. And as a result, practices
and routines emerge. This is the top-down process that we just discussed as one of the ways
routines enter an organization.
The other way was that common practices and routines emerge bottom-up through the
organization members, independent of the values, norms, and the directions communicated
by the leaders of the organization.

The final layer of culture comprises the artefacts and symbols. These are, for instance, value
statements or slogans printed on marketing material, reports about the organization's
philanthropic actions, volunteer days, or speeches by the organization's executives about the
ethical culture of the organization.

In an earlier video, we already established that ethics is about taking others into account. It is
unethical to ignore one's social environment and instead exclusively follow one's own
interests and maximize one's own utility. In contrast, it is ethical, according to Immanuel
Kant, if we behave such that the rules underlying our behaviour can be universalized. That is,
if everyone behaved according to the same rules, we would not have war and fighting, but
peace and understanding. But to create such a world, one must be able to put oneself into the
shoes of others. Something similar can be said about Bentham, according to whom we
should behave such that we achieve the greatest good for the greatest number. This, too, can
only be achieved if we consider the consequences that our behaviour has on others.

Keeping this in mind, let us look again at the functions of routines. One was that they allow
us to save cognitive resources. We contrasted deliberate decisions and routine decisions.
What do you think? What kind of decisions are more ethical? Those for which you take time
and effort and look at various aspects, or those you make on cognitive autopilot? Think about it.
If you ignore others in a deliberate decision, chances are that you at least realize that
you are doing so. With routinized behaviour, you may not even notice. This would correspond to our
concept of ethical blindness. Routines ensure stability and they reduce uncertainty. Whether
this is for the good or for the bad, when it comes to ethics, depends on who you are, and for
an organization, on its ethical culture. If this culture is very ethical, then stability is good.
Chances are that people help each other to behave ethically, and that new people who enter an
organization will be absorbed and infected by this culture. But it can of course also work in
the opposite direction, if the level is low. The same is true for the micro-political stability that strikes a
balance between the interests of the participants in the routines: to the extent that people
voice their interests, chances are that the interests of the people who are involved will
be considered. But note, some may be shyer than others about voicing their interests. And even
if all the interests are on the table, they can still be ignored by others who are more ruthless
and more powerful.
When it comes to knowledge and learning, the same can be said as for stability. Routines can
be good or bad, depending on the level of ethicality that has been established in an organization.
They can be protective if an organization is already on a high level. But they can also be an
obstacle if the company is on a very low level when it comes to ethics, and if routines are
rigid and hard to change, which they are.

It is important to note the power of routines. Speeches and appeals to being more ethical may
not be enough, as long as the daily routines are not changing as well. When people make
decisions, they simplify all the time. We have seen this when we talked about framing.
Framing simplifies perceiving the world. We talked about heuristics. Heuristics simplify
information processing. And also routines simplify our lives. We just follow what others did,
or what we did in the past. So routines are beneficial. They help us a great deal in our daily
lives, but they are a double-edged sword. And you should, from time to time, step back,
review your routines, and check to what extent you may behave unethically when being on
autopilot, and hence when being vulnerable to ethical blindness.

b. Second Video

Welcome to the session on how organizations contribute to ethical blindness. This session is
split into two videos. In the first video, I will familiarize you with the forces that create a
strong organizational context and thus contribute to ethical blindness. In the second video, we
will see how organizations try to fight compliance risks by creating control systems
which, given the knowledge about ethical blindness that we have developed so far, are obviously
not sufficient to keep this risk under control.

The main goal of this session on organizations is to familiarize you with the organizational
factors that drive the risk of ethical blindness. You will learn how they can form dangerous
constellations around you when you make decisions. And you will learn how you should not
manage that risk in organizations.

Let us first start by looking at the risk of narrow framing in organizations. How do they
contribute to ethical blindness? You might know the Encyclopaedia Britannica, a global
standard work for information, created as early as 1768. So it was a success model for more
than 200 years: a few metres of leather-bound books with the best available information you
can imagine. In 2012, the production of these books was stopped, and the reason is quite
simple: Wikipedia. It is, in my view, one of the many cases where corporations did not
understand the changes around them in an appropriate way and crashed after many
years of success. So the question I would like to think about with you here is this: if you are a
global expert in the production and dissemination of knowledge, like the Encyclopaedia
Britannica was, why is it someone else who invents Wikipedia and not you? Why can you
not see it coming? Why do you not see this radical change around you that threatens your
existence? Why do you behave like a dinosaur in a market where everyone else sees the change
coming except you?
We have already seen in our session on ethical blindness that organizations tend toward
simplicity. They have their routines, they have developed their standard operating procedures, and they
more or less operate on autopilot most of the time. Why? Because it makes them highly
efficient. As soon as they have the right routines for the right kind of decisions, they can,
more or less, proceed without thinking much. We have seen that, therefore, learning in
organizations is very difficult. The more you succeed with your routines, the less you learn.
So transcending the practices that you once established is very difficult.

The Encyclopaedia Britannica became blinded by its success, and by its old routines as well.
This routine perception of the world, their frame, was based on two things. First, people are
interested in knowledge and therefore buy books. Second, they buy the Encyclopaedia
Britannica because it is the highest quality available on the market for information. And this
was true for more than 200 years. But Wikipedia changed the rules of the game. People today
want highly mobile information. They want it wherever they are. They want it more or less
correct, so they believe in the auto-correction of a system of shared knowledge. Thus the
Encyclopaedia Britannica got attacked by an actor from outside its system. The killer application
arrived from the periphery of the system, and they didn't see it coming. Will they survive? Well,
they now put their content online for money, but it's not clear whether people will invest in it in
the future.

The Ford Pinto that we have seen already is a similar case, where people developed their
strong and narrow frames and got blinded to the change around them. They got blinded to
the things they should see, but they couldn't see them. How do we manage these kinds of risks? How do
we build them up in the first place? How do organizations develop strong contexts? How do
they contribute to this? This is what we are going to discuss today. We have seen pressures
from the immediate situation when we went through the experiments of Milgram and Asch, and
organizations can add to that and make this context even stronger. We will see in this
session some tools they have at their disposal that promote ethical blindness. These tools are
basically three: setting objectives, designing incentives, and evaluating performance. Let us
look at them in a bit more detail, starting with the objectives.

Sometimes managers believe that if they create very tough objectives, ones that are in principle not
achievable, they will motivate people to go for them. Well, objectives can be too tough, and
research shows that if you confront people with unrealistic objectives, you push them towards
their limit. They might do whatever it takes to achieve these objectives, even if it is not ethical
or not legal. So you increase the risk of rule breaking by setting unrealistic objectives.
Combine these unrealistic objectives with a very simple bonus system where you just reward
individuals. You drop team bonus. You create a highly individualistic competition where
people run for their own particular objectives. And then you add an evaluation system where
you categorize people into low performers and high performers, as we have seen in our
discussion on Enron. What you will get is a fight for survival, the fear of being humiliated.
Losers get punished, and you don't want to be a loser. So if you set highly unrealistic
targets, combine them with individualistic incentives, humiliate people in the
evaluation system, and then start the cycle again, you create an organizational context
in which the risk of ethical blindness increases, because people develop this tunnel vision.
That's what they have to do, that's what they go for, and they don't see left and right, aspects
that they might need to see to make appropriate decisions. Some of you might think, now,
wait a minute. This is a description of my organization. Well, you have all the warning signals
here. You can see it coming in your own context.

If you want to increase this effect as a leader, what you can do is send ambivalent signals
about the rules of the game, so that the rules become unclear. Think about how at the beginning of
the 2000s people started to talk about the old economy and the new economy. You are new economy;
the rules of the old economy don't apply to you. And then, don't punish people if they break
the rules; promote them. Create aggressiveness. Use language that is aggressive. Talk about
warfare with your competitors, about bloodshed, about the survival of a few, about killing. Then
you create the culture of fear that the emperor creates in the fairy tale at the very beginning of our
course. If you add to this a very authoritarian leadership style, what you create is a situation
where people do not feel in control of what they do. They feel that someone else, you the
leader, controls the situation. They can easily disconnect from responsibility. They can shift
the blame on others. This is what has been called the locus of control effect. As long as you
don't feel that you are in control of the situation, you can do horrible things. But you still feel
well because you are convinced that someone else is responsible.

Just think about how corporations often announce mass layoffs. They will say things like, the
market situation forces us to lay off people, or globalization is responsible for this. You will
rarely find managers who say: because of the horrible strategic decisions I made in
the last five years, we have to fire people. Because then it is you who is responsible. In the
other case it is globalization, and you can disconnect. It's the market, so you can disconnect. So as
long as you create a situation where people easily disconnect from responsibility, by
being an authoritarian leader, by creating tunnel vision, by using aggressive language, you
promote ethical blindness. This might sound like a cartoon version of an organization to you.
But it is the situation that you found at Enron, that you can find at Lehman Brothers, at
Ford, and in many of the other scandals that we examined in this course and that you read about in
the news every day. And it is a constellation that you find, at least partly, in many
organizations that have not yet had a scandal of unethical or illegal behaviour. Maybe you
find that constellation, partly, even in your own organization. And as we have seen already in
other sessions, corporations move slowly in that direction. A bad culture does not fall from
heaven; it develops as a constellation, slowly, over time.

So here you have the recipe for disaster. You take the three tools that we met in this session:
unrealistic objectives, one-dimensional incentives, and Darwinist evaluation systems that
humiliate people. You combine them with a specific leadership style, an aggressive one with
war rhetoric, with the rule ambivalence that you create around your employees, and with the
tendency to disconnect from responsibility, and you have very powerful contexts that, in
combination with the situational forces that we saw in an earlier session, will push
organizations towards ethical blindness. Thank you for listening to this little story on
organizations and ethical blindness. We will continue our session with a second video, where
we see how organizations react to compliance risks.

c. Third Video

Welcome to this second video of our session on the organizational impact on ethical blindness.
In the first video, we saw how factors of the organization promote ethical blindness. In
this second video, we will deal with the question of how organizations try to fight
compliance risks by creating what they normally call a compliance programme. Because
managers are not stupid. They understand this risk. They observe the chaos around them, and
they know that doing the wrong thing can be heavily punished. Just think about the case of
Siemens, the company that was caught in a systematic and large-scale corruption scandal.
They had to pay $1.6 billion in fines. So acting against the rules can be very expensive.
Corporations, therefore, create compliance systems. These compliance systems, however,
are highly ineffective. They are normally designed from a legal perspective; they do not
cover the risks that we have seen in this course on unethical decision making. They are made
to catch the bad apples.

So, compliance systems, as you find them in most organizations, are based on the
assumption that bad things are done by bad people, bad apples. It is people who make a
calculation of risks against benefits. If the risk is perceived as low, they go for the rule
breaking. So, how do we keep these kinds of criminals in check? Well, you have two tools at
your disposal: you control and you punish. You sensitize them in trainings. You create a
code of conduct. You show them the torture instruments, the punishments. You show them the
duties, the risks. You surround them with an effective network of control mechanisms. And here
you have the standard approach to compliance management in organizations. And you
already understand, after four weeks of discussing unethical behaviour committed by
normal people, that this is obviously not the appropriate strategy to keep the risks of ethical
blindness under control.

But it is based on some deep-seated beliefs that we have in modern organizations. It
comes from a very old tradition of thinking, from the liberal writings of philosophers like
Thomas Hobbes, who argued that human beings basically are wild beasts: they
would kill each other, they are greedy, and you have to keep them in check through rules, powerful
leaders, and punishment. It is, secondly, based on a very reductive understanding
of what organizations are: it is based on the idea of the organization being a kind of machine, also
something that comes from very early philosophical writing in the Enlightenment
period. The whole Book of Nature, as we learned from Galileo Galilei, is written in
mathematical symbols. So nature basically is a kind of machine, a mechanism that we can
understand by reading it through mathematics. Society is a machine as well; we apply this
metaphor to society. Human beings are machines. Frederick Winslow Taylor, at the
beginning of industrialization, tried to make the machinery of production more efficient by
deconstructing what workers do into ever smaller steps and by improving the movements of
workers in a way that the process of production could be sped up. So, basically, we treated
workers like machines. Charlie Chaplin's film Modern Times shows the result: factories
basically are huge clockworks.

Well, since then, psychology has helped us to refine our understanding of human motivation,
and our course basically builds on this refined understanding of modern psychology.
However, it seems that compliance management very often still builds on this old
metaphor of organizations and humans being machines: homo economicus, the rational
actor, the living calculator. You can keep them on track with two types of motivation. You
punish, with negative incentives. You reward, with positive incentives. Carrots and sticks. Jeremy
Bentham, the founding father of utilitarian ethics whom we met already in the first week,
had a dream, a dream of the perfectly controlled system, which he called the Panopticon
and which he applied to a prison environment. He suggested that we could design the
architecture of a prison in which one guard, from a point in the middle of the system, can
control all the prisoners, all the time, but they cannot see that he is controlling them. What
will they do? Well, they will assume that they are controlled, because they do not know
whether or not they are controlled. They will behave properly, as if they were controlled all the
time.
Bentham was convinced that this principle could be applied to all other kinds of
organizations: schools, corporations. And modern corporations, when they engage
in compliance, often apply this idea of the Panopticon, unconsciously of course. With codes
of conduct, with tough compliance monitoring, you create a kind of internalized Panopticon.
You feel observed all the time, you feel controlled all the time, so you might be kept in check
as a criminal just by this internalized Panopticon.

But what we have seen in this course, what we can see if we observe the scandals around us,
is that this is not enough. Most of the time, people do not consciously commit crimes. They
are driven by context into unconscious routines that then move in the wrong direction.
Compliance systems take a legal perspective. Our course takes the psychological
perspective, and we assume that the real risk of deviant behaviour, of acting against the
rules, does not come from criminals. It comes from the psychology created by strong
contexts.

And there are important side effects of carrot-and-stick systems. The first one is that you destroy
intrinsic motivation if you focus too much on extrinsic motivation. So people will stop
doing the right thing because they believe it is the right thing; they will just follow the
rules. If the rules are relaxed, they will go in the wrong direction. Look at the Enron reward system
that we have seen already. You'll see the link very clearly. Second, strong control systems
send signals of distrust. Distrust has a negative impact on motivation. People will start to look
for the holes in the system, and they will take revenge to somehow rebalance giving and
taking. They will maybe start to steal in the name of justice. So, paradoxically, compliance
systems that are designed to keep people in check might create the very behaviour they are
meant to avoid.

The conclusion of today's session is the following. Organizations can embed decision
makers in very powerful contexts that push them toward ethical blindness. Knowing that your
employees might break the rules, you might create a compliance system around them. But
this compliance system is not effective, because it is designed for criminals. It is not designed
for good people. And ethical blindness describes this risk of good people doing bad things.

d. Fourth Video

Welcome to this fourth video of week four of our course on unethical decision making. In our
previous video, we discussed how organizations can contribute to ethical blindness. In this
video today, we want to zoom in on one particular aspect of strong contexts, namely the power
of routines and habits over reason.
So in this session, you will learn how powerful habits can develop, and how they can blind us
to obvious things. Decision makers can be amazingly ignorant in the face of disruptive changes
in their environment. Since I was a little boy, I have been fascinated by stories of knights and
Vikings and pirates, and the case I would like to share with you today comes from that context. It
is about gunfire at sea.
The first thing I learned when I dealt with this story is that pirate films operate with big
exaggerations. Think about the battles between ships that you see in these films. The cannons
are always amazingly accurate. In reality, however, the accuracy of gunfire is extremely poor.
Just to give you an illustration: in a training session on cannon shooting in 1899, five ships of
the US Navy fired for 25 minutes at a shipwreck and scored only two hits. And they only
hit the sails. So it was a pretty harmless attack. Now, this was the rule: gunfire at sea is
highly ineffective. Why? Because the sea is rolling and the shooter has to wait for the right
moment in the rolling sea, when he can see the rolling target. But it is even more complicated:
you have to fire before you can see the target, because you have to calculate the rolling sea
into your shooting. You shoot before the enemy is in sight. If you shoot when you see the
target, you might already be too late. So when you shoot, you include this interval, and this is
highly dependent on your experience; it depends on how good you are as a person, as a
shooter. Gunfire at sea struggles with this uncertainty and this inefficiency. The best
shooting takes place when you have the best shooters, the best people on the cannons.

This system was innovated in 1898 by an English officer by the name of Percy Scott. He made
three little changes. Cannons on ships have an elevating gear, so they can be moved up and down
to fix a position for firing. Scott made this mechanism more flexible: he changed the
gear ratio, which means that the gunner can move the cannon up and down more flexibly.
Second, he combined this with an adapted use of the telescope. The telescope, as you can
imagine, is quite useless if you have to fire before you see the enemy. But now, when you can
wait longer and adapt to the rolling sea, the telescope becomes highly effective. The
third thing Scott added is that he rigged a small item at the mouth of the cannon so that you can
aim more effectively. With these three little innovations, Percy Scott was able to increase the
efficiency of cannon fire by 3,000% within six years. So from that point on, continuous-aim
firing was invented and possible. None of this is the result of a new technology. It is the
result of combining three small changes with existing technology. Why do I tell you
this story? Well, the interesting part comes now. It's about what happens next.

In 1900, Scott gets transferred to China, and there he meets a young officer of the US Navy,
William S. Sims. Sims learns from him all these little changes and adapts them to his own ships,
so he can also train his people in continuous-aim firing, with the same explosive effect on
accuracy. After a few months of training he makes remarkable progress, and within the
following two years he sends one report after another to his headquarters in Washington to
describe the amazing success that he has had with his changes to the cannons. And he proposes
to roll this out for the whole Navy's gunnery. He reports on the data that shows the
improvement. He describes the technical details of the innovation and the weaknesses of the
current system. What is the reaction of the headquarters in Washington? Deadly silence. The
reports are filed and forgotten. And Sims gets frustrated. So he changes his strategy. He
reports in a much more aggressive tone. He sends numerous reports to other officers, in other
offices of the Navy. And now he receives a response from Washington. This response
basically includes three things. First, the argument of the Navy is that the equipment of the US
Navy is good enough; it is as good as the British equipment, so there is no need for change. Second,
the lack of accuracy has nothing to do with technology; it has to do with the training of the
shooters. And third, for these two reasons, Sims's data must be wrong.

The conclusion of the U.S. Navy at that point is that continuous-aim firing is not possible. Sims
insists on his position, he sends more aggressive reports to Washington, and he gets
attacked: his loyalty gets questioned, he is called an egoist and a falsifier of evidence. So Sims
decides to become a whistle-blower. He writes a letter directly to the U.S. President Theodore
Roosevelt, who invites him back to the U.S. and makes him the inspector of target practice of
the U.S. Navy, and then he can roll this practice out across the Navy. How is it possible that
you come up with such a powerful innovation, improving efficiency by 3,000%, and you get
ignored or even attacked?

Before we look into some preliminary answers to that question, let me give you a second
example. Again it comes from a military context; the military is rich in examples of
strong routines that resist reality. This second example comes from the war between France
and Prussia in 1870-71. This war was won by the Prussians, and one of the reasons why they
won it was the overwhelming power of their cavalry. In this war, a technological
innovation was already in use, but it played no decisive role for the war itself: the
machine gun. The machine gun had been invented roughly a decade before that war, and it was an
invention that was regarded with suspicion at that time, and still with suspicion at the
beginning of the 20th century. In 1915, the British General Haig called the machine gun an
overrated invention. And similar thoughts must have been the normal perception of
machine guns across various armies at that time. This manifests itself in an event in 1914, the
Battle of Lagarde. There was one strong routine in the armies of that time, and it was
established and trained and became a habit across centuries: if you want to conquer a hill,
you need a cavalry attack. On the 11th of August 1914, in the First World War, the German
army tried to conquer a hill that was held by the French army
at Lagarde, and they used their cavalry, the First Royal Bavarian Uhlan Regiment. They
knew at that point in time that there were machine guns on that hill. And they knew what
machine guns do. But they could not see what is obvious to us, and what should have been
obvious to them at that point in time. It turned out to be probably the last attack of
cavalry on a hill in modern warfare. And it ended in disaster for the Germans, because if
there are machine guns on a hill and you attack with cavalry, you can imagine what
happens. You never win an uphill fight using horses against machine
guns. Refusing a new technology that improves your efficiency by 3,000%: is this madness
of decision-makers? Well, that would be too simple an explanation. But how can we
explain such blatantly irrational decisions?

Let us look more into the role of context in these situations. First of all, we have to realize
that anticipating the future is not that easy. If we are caught in powerful routines
that have proven to be effective and appropriate across many situations in the past, it is
not evident that people are willing to change these routines. The psychologist who
described the event at Lagarde in one of his books called this a problem of structural
extrapolation. We predict the future on the basis of past experience, and as a result the future
looks pretty much like the past. We have difficulty imagining a disruptive future. Even if
we see it in front of us, even if its impact, like that of the machine gun, is so obvious, we
might lack the imagination to include the innovation in our picture of the future.

Jared Diamond, who has written a book on the collapse of civilizations, made a
fascinating observation. In many cases where civilizations face a crisis that
they do not understand, what they do in reaction is reinforce their routines. And
these routines are the very ones that put them into that crisis. That's why many civilizations
collapsed. Improving your routines is even more difficult when there is no crisis. Why should
you improve, for instance, your gunnery when there is no necessity? The U.S. and the British
navies had fought their battles successfully. The Americans had just come from the Spanish-
American War, where they had won all the battles against the Spaniards. Granted, out of 9,500
shots in one of the decisive battles, only 121 hit the target. But it was enough to beat the
Spaniards. So why change routines? Think about our first week, when we talked about the
fairy tale of the Emperor. The most amazing detail for me in this story is always the fact
that the Emperor continues with his procession despite the fact that he has understood that his
perception of reality is wrong. Routine is stronger than reason.
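The figures in this story are worth making explicit. A short back-of-the-envelope sketch (my own illustration; only the 9,500 shots, 121 hits, and the 3,000% improvement come from the lecture) shows both how poor the baseline hit rate was and what a 3,000% increase actually means:

```python
# Back-of-the-envelope arithmetic for the gunnery figures quoted above.
shots, hits = 9_500, 121

# Hit rate in the decisive Spanish-American War battle: roughly 1.3%.
hit_rate = hits / shots
print(f"hit rate: {hit_rate:.1%}")

# "Increased by 3,000%" means new = old + 30 * old, i.e. a factor of 31,
# not a factor of 30 (a common misreading of percentage increases).
factor = 1 + 3_000 / 100
print(f"accuracy factor after Scott's changes: {factor:.0f}x")

# Hypothetically applying that factor to the 1.3% baseline still yields
# under 40%, which shows how weak the starting point really was.
print(f"hypothetical improved hit rate: {hit_rate * factor:.1%}")
```

The point of the sketch is simply that even a 31-fold improvement on such a baseline leaves most shots missing, which makes the Navy's refusal all the more striking.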
A second reason for the resistance against change is that technological change is not just the
introduction of a new technology; it is disruptive for all kinds of social, cultural, and political
aspects of a situation. Book printing ended the 1,000 years of the Middle Ages that were more or
less without change. The internet is doing the same for us today. Change can be disruptive
and can destroy and transform a whole society. And that's why innovations often face resistance.
People have something to lose if they hold good positions in the system as it is right now.
The cavalry is at the top of the army until the machine gun is invented; they lose their power
afterwards. So why should you embrace something that threatens your reputation, your
power, your interests, your resources? The same goes for continuous-aim firing on ships. It will
change the routines afterwards. It will change the organization of gunnery, the design of
ships, the strategies of ship battles. It brings a lot of insecurity. So you resist, and you
keep your routines.

In both cases, the radical change couldn't be stopped. But we don't like change. When we feel
that our stability, the stability of our system, is threatened, we also feel that our identity
is threatened, because we are someone in that system, and we try to maintain the existing
system. Our decisions build on previous experience, and these previous experiences do not
include the changes that we see around us. So we run into all kinds of misinterpretations,
and we defend the context we have against better reasons, especially if the context in which
we have developed our routines and habits has been built over long periods. There is an essay
by George Orwell on the bad quality of political language that applies very well to the
kind of observation we are making right now. George Orwell writes in this essay that the words
of politicians, "like cavalry horses answering the bugle, group themselves automatically into
the familiar dreary pattern." This metaphor illustrates the phenomenon that we are discussing
right now. Experiences, too, group themselves automatically into the familiar dreary pattern
when we make our decisions. Routines can be dangerous. They impose interpretations on
us that are very narrow, because they build on past experience, and they might be
too narrow if the world around us is changing. Where we perceive business as usual,
something might have changed dramatically, but we cannot see it. Or worse, we can see it,
but not understand it in the context of our own perception of the world. We ride up the hill
into the machine gun, blinded to the risk that we face. So routines contribute
strongly to the phenomenon of ethical blindness that we have described so far in this
course.

So let me conclude this session with four observations. First, routines result from the
experiences we make and the positive feedback we get for our decisions. Second, we build strong
habits. Third, routines switch off reason, because we do not need to think when executing them;
we are cruising on autopilot. Fourth, in times of disruptive change, routines become a trap. So we
risk making the wrong decisions without even realising it, because of these strong
routines.

V. Fifth Week

a. First Video

Welcome to the first video of the fifth week of our course on unethical decision making. In
this video, we will demonstrate the power of what psychologists call strong situations.
In this session, you will understand what a strong situation is, and you will get an overview of
some classic studies from social psychology. These studies demonstrate the power of
situations over individual intentions and character.

Some situations are so powerful that they elicit a specific behaviour in many people,
independently of their intentions, level of moral development, values, or reasoning. Typically,
these situations are characterized by pressure. In this video, we will look at four types of
pressure: authority pressure, peer pressure, role pressure, and time pressure.

What you see here is probably the most cruel, ruthless and dangerous species on this planet.
It has killed millions of people so far. And you also see some sharks peacefully gliding their way.
There are many reasons why people kill other people. One is because they are told to do so.
Soldiers usually do not seek personal revenge or anything like that. In most cases, they do not
even know their victims. They just follow their orders.
This picture here has been taken at Auschwitz, a German concentration camp during the
Second World War, in which many people were executed. Those who performed these crimes
were later asked, how could you have done this? And many responded, I received my
orders, and I felt I was caught in a system from which I could not escape. Stanley Milgram,
whose parents were Holocaust survivors, took such responses very, very seriously. And he
asked an important question: is "bad people do bad things" really the full story? To what
extent do external pressures contribute to what happened? Would one see a similar pattern of
behaviour if one set up a similar situation and put normal people in it? Of course, Milgram
could not set up a concentration camp just to answer these questions. But he set up an
experimental environment in which he isolated and manipulated one important factor: he
studied the effect of authority pressure on obedience.
Participants were told that the experiment was about learning, in particular, about the effect of
punishment on learning performance. The participant was assigned to the role of the teacher
and seated in front of a shock generator, with the task of administering an electric shock to the
learner for every wrong response. The learner was, in fact, an actor seated in a nearby room.
The leftmost button on the shock generator was labelled 15 volts. This shock should be given
for the first mistake. For the next mistake, 30 volts should be given, and so on, in 15-volt
increments up to 450 volts. How many teachers would continue to the maximum? Milgram
asked colleagues and psychiatrists before the experiment what they predicted. Their estimates
were in the order of 1%. In fact, 26 of the 40 participants in Milgram's study, which is 65%,
went all the way up to 450 volts. Indeed, they continued to administer these shocks even
long after the learner had stopped crying out. There was silence in the other room, but the
experimenter explained that no answer counted as a wrong answer and that the procedure
required that the teacher go on with the shocks. And 65% went to the very end. This
experiment has been replicated many times and in many variations. My shortest comment on
all these results: they are shocking.

Milgram, "the man who shocked the world," as Thomas Blass entitled his book, demonstrated
that normal people are likely to administer fatal shocks. One just needs to ask them in a
specific context and insist that they do it. After such bad news, you may want to see a
rather funny application. And I'm sure you will have a great laugh.

Pressure may not only come from above, from some authority; it may also be imposed by our
peers. Solomon Asch brought participants into a situation in which they should, one after the
other, state whether a given line equalled the length of line A, B, or C. The correct answer was
quite obvious, but before the participant could say it, several others, who were actually actors
for the experiment, gave the same wrong answer. Now, who would you trust? Your own
judgment, or the others'? And even if you are 100% confident that you are right, would you
dare to say what you think? Note that deviating from the others can be dangerous. It may lead
to social exclusion.
In his first study, Asch found that 75% of his participants gave an incorrect answer on at least
one of the trials. The conclusion from many replications, with many variations, is that people
often yield to peer pressure and conform to the majority, even against their own
convictions. For the fun part, I invite you to watch this short video here.

We are all individuals, right? But at the same time, we are in many roles. We have a certain
function in our job, a role within our family, coach of a soccer club, what have you. Roles
come with expectations, and expectations may translate into pressure, often self-imposed
pressure. Remember what we said about frames: if you are put into a specific role, you are
likely to look at the world and to behave accordingly, more precisely, consistent with the
stereotype that you have for this role. Philip Zimbardo studied, in the 1970s, what happens
if one puts people into different roles. This research made it into the textbooks of social
psychology as the Stanford Prison Experiment. Instead of summarizing this research in my
own words, I'd like to ask you to watch this short video in which he presents it himself.

To illustrate how time pressure can impact people's behaviour, I would now like to present
a study conducted by Darley and Batson. The participants were seminary students of
religious studies, so, prospective priests. They were told to deliver a speech about the Good
Samaritan. For those who do not know this parable, it is from the New Testament, and Jesus
used it to explain that it does not count who you are, but what you do. Here's the story
in a nutshell. A man had been robbed and needed help. A priest came. Didn't help. Then an
aristocrat came. Didn't help. And finally, a Samaritan came. The Samaritans were the lowest
class at that time, outcasts. This person helped, and Jesus made it clear that the Samaritan, but
not the priest and not the aristocrat, will be rewarded after death.
The participants in this experiment had one hour to prepare their speech on this parable. And
then they were asked to go to the church on the other side of the street to give it. Now, on their
way, a person broke down on the street and asked for help. So the participants, remember, the
prospective priests, found themselves in exactly the situation for which they had prepared their
speech. How many helped? 65%. Not all, but the majority.
In another condition, the researchers interrupted the participants after 30 minutes and said,
sorry, we had to change our schedule, take your legs and run, you must give the speech right
now. These participants received the same treatment as their peers. The same man broke down
in front of them. But they were running to give their speech about the Good Samaritan. How
many behaved as this Samaritan did in the parable? What do you think? 10%.
Time pressure focuses our attention and may remove some dimensions, here the ethical
dimension, from our radar screen. And it may hence increase the risk of unethical behaviour.

So, to conclude: a strong situation exerts pressure such that most people will behave in a
similar way. Beware of situations characterized by authority pressure, peer pressure, role
pressure, and time pressure. These pressures can overpower ethical considerations or make us
blind to them.

b. Second Video

Welcome to the second video of the fifth week of our course on Unethical Decision Making.
In this video, we will illustrate the power of strong situations with a case study: the explosion
of Challenger, one of NASA's space shuttles, in January 1986. In this session, you will
understand how pressure may lead to disaster, learn what happened during and before the
Challenger flight, and see how these events can be interpreted from the viewpoint of our
model of ethical blindness.

1986 was a very bad year for big technologies. On April 26, there was a meltdown at the
nuclear power plant in Chernobyl. Earlier in the same year, on January 28th, the space
shuttle Challenger broke apart only 73 seconds after lift-off, resulting in the death of all seven
crew members. We will now look at the Challenger case in more detail.

Initially, the shuttle was supposed to take off on January 22nd. For various reasons, the start
was then moved to the 23rd, then to the 24th, then to the 25th and 27th. It is important to
realize that there were various delays and that they may have led to some nervousness and
probably also to some impatience. In the night prior to the launch date, from January 27th to
28th, a new problem emerged. The temperatures dropped tremendously. The forecasted
temperature for the morning of the launch was 31 degrees Fahrenheit, that is, -1 degree
Celsius, which was the absolute minimum temperature permitted for launch. Engineers of
Rockwell International, the shuttle's prime contractor, were horrified when they saw the
amount of ice that had accumulated. The launch was delayed by some more hours. Finally,
the shuttle was cleared for launch and took off with its crew of seven astronauts on board. 73
seconds after ignition and 15 kilometres above ground, the spacecraft disintegrated and
exploded. What caused this failure? Look at this picture, taken when the shuttle was just
about to start. From about 600 milliseconds after lift-off until three and a half seconds, black
smoke came out of the booster. At this very spot, some 59 seconds after lift-off, one could
then see this plume here. These were hot gases that burned a hole in the right solid rocket
booster, which then led to the explosion. What about the smoke that came out of the booster
right after ignition? The booster is not one big piece; it comprises four segments that were
assembled together. Needless to say, these segments needed to be connected such that
no gas could leak at the sides. To prevent this from happening, the segments had been sealed
with so-called O-rings. In fact, there were two: the primary O-ring that could do the job
alone, and a secondary O-ring as a backup. Richard Feynman, the famous physicist, Nobel
laureate and member of the Rogers Commission, the commission that analysed this case,
gave a very powerful demonstration of the problem. First, he showed that the rubber is elastic at
room temperature, but when he put it into ice water, everyone could see that it loses its
pliability and hence its ability to seal the segments of the Challenger shuttle.

So now we understand why this mission failed, or at least its immediate cause in the world
of physics. But of course, you cannot blame the laws of physics or nature for this accident.
Why did NASA's flight control center give the command to launch under these conditions?
To answer this question, it is important to go back many years. And what do we see
there? It appears that the pressure of the hot burning gas that accelerated this huge
machinery was at the end of a very long chain of events. Back on July 20th, 1969, Apollo 11
made it to the moon. This was celebrated as a huge success for NASA. But where do you go
from there? Now that the mission is accomplished, what do you do with all the equipment,
all the know-how, the staff, thousands of jobs? Should you send everybody home? A kind of
vacuum emerged. Now, vacuum is the opposite of pressure, and yet it led to the self-imposed
pressure to come up with a new vision. It was a question of survival, the survival of this
institution, NASA. The new vision was to create a manned orbiting space station and to use
it as a transfer point for exploring Mars. This new goal, in turn, required constructing a space
transportation system for astronauts: the Space Shuttle Program. With the worsening of the
economic situation, just remember the oil crisis in the 1970s, and with its biggest
achievements now behind it, NASA's very expensive program was losing political support.
NASA had to reduce its vision. Mars exploration and the orbiting station were deemed too
expensive. There was pressure to reduce costs and to find alternative uses for the shuttle
fleet. In response, they reduced research and started to cooperate with the Defense
Department, which was interested in using the shuttles for launching its satellites. A redesign
was then necessary to enable the shuttle to transport heavy satellites. And here, they had to
make compromises. The new design of the shuttle greatly limited its safety. And then there
was the growing competition with the European Ariane space program. And competition
leads to pressure. NASA had to prove that its shuttle program was useful, necessary, working
and affordable. There was only one way to prove it: increase the number of flights.
Ironically, due to the large number of flights, public interest was decreasing. To regain public
attention and to forge a new reputation, NASA invented the Teacher in Space program. There
was a teacher in the crew, Christa McAuliffe, who was to teach her class from space.
President Ronald Reagan was expected to give his State of the Union speech with the
Challenger being one of the key issues. Taking everything together, 1986 was regarded as the
decisive year for the continuation of the shuttle program. And this is where we started this
video, with a report of the situation on January 28th. As you can see, there was already a lot
of pressure imposed on various levels: society, politics, business competition, all that.
During the previous launches of the shuttle, there had already been some incidents with the O-
rings. Nothing severe, but the engineers in charge knew about this problem. And by the
engineers, I mean the engineers at Morton Thiokol, the supplier that produced these solid rocket
boosters. In particular, it was Roger Boisjoly, an engineer at Morton Thiokol, who was very
concerned, and he raised a flag several times. He tried to warn his management. But
management was also under pressure. They wanted to have a contract with NASA. They
wanted to extend the contract. And the company passed on this pressure to its engineers, in
particular Boisjoly. Top management tried to calm him down. Then, in September 1985, Roger
Boisjoly was invited to NASA to make a presentation on the O-ring problem. He was given
instructions by his own management at Morton Thiokol to present it not as urgent and
dangerous, but as an aspect for improvement. Seeing the ice on the ground, Space Flight
Center administrator Larry Mulloy called Morton Thiokol. It was clear he wanted to launch,
but he did not dare to overrule the experts who had produced the boosters. So it came to a
teleconference between NASA and Morton Thiokol, with a group of engineers of Morton
Thiokol participating. The engineers of Morton Thiokol in that conference recommended that
NASA not launch the shuttle until a safe temperature for the O-rings was reached. NASA did not
accept it. I am appalled, said Larry Mulloy. Do you want me to launch in April? To
understand this reaction, you just have to put yourself in his shoes. The machine was fully
tanked. This was a serious commitment. It was waiting at the ramp, delayed already several
times. And you should also know that the discussions about the O-ring problems had been
internal. Morton Thiokol had made sure that NASA wouldn't discover them in their full severity. So
Larry Mulloy's reaction was understandable: it cannot be that you confront me with this just
the day before take-off. So what did he do? He reacted by demanding proof of the risk,
which was the complete reversal of the normal procedure. Normally, the engineers had to
prove that something was safe. Morton Thiokol managers asked to go offline for a few
minutes. During this offline discussion, which in the end took about 30 minutes, Morton
Thiokol's Senior Vice President, Jerry Mason, pointed out that the possibility of O-ring
erosion had always been present in earlier flights and had been considered an acceptable
risk. He underlined that there was a primary and a secondary O-ring, and that even in the
case of the first O-ring eroding, the second one would seal.

At that very moment, they were under time pressure. Boisjoly could not prove that it was not
safe; at least, he could not convince the others. So basically, Morton Thiokol management asked
the engineers to reconsider their position. And when they refused to change their mind, but
also could not prove their case, the management excluded the engineers from further
discussions. So the managers conferred among themselves. Three managers
immediately supported the position of the vice president, while Lund, the superior of the
engineers, hesitated. In this situation, Mason said to Lund: it is time to take off your
engineering hat and put on your management hat. So basically, what Mason said here was
that the technical decision had to be turned into a management decision. Remember what we
said earlier about framing. The frame of an engineer is safety first. This is how they look at
the world, at these problems. Safety is above everything. A manager has a wider frame; he
also looks at the business, the money involved. And for him, it is a calculated risk that at
some point you may want to take. And also remember what we said about simple heuristics
and about one-reason decision making. One-reason decision-making heuristics put one
reason in the foreground: basically, you make a decision based on one reason. And so the
question is, which gets priority? In our case here, safety or business opportunity.
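The lexicographic logic of one-reason decision making can be sketched in a few lines of code. This is only a minimal illustration, not material from the lecture; the cue names, orderings, and scores below are hypothetical.

```python
# A minimal sketch of one-reason decision making: go through the cues
# in order of priority and decide on the FIRST cue that discriminates
# between the options, ignoring all remaining cues.
# The cue names and values are hypothetical illustrations.

def one_reason_decision(cues, option_a, option_b):
    for cue in cues:
        a, b = cue(option_a), cue(option_b)
        if a != b:
            # This single discriminating cue decides; the rest are ignored.
            return option_a if a > b else option_b
    return option_a  # no cue discriminates: default to the first option

# Two options, scored 1 (acceptable) or 0 (not acceptable) on each cue:
launch = {"safety_ok": 0, "business_ok": 1}
delay = {"safety_ok": 1, "business_ok": 0}

# The engineer's frame puts safety first; the manager's frame puts
# the business consideration first.
engineer_cues = [lambda o: o["safety_ok"], lambda o: o["business_ok"]]
manager_cues = [lambda o: o["business_ok"], lambda o: o["safety_ok"]]

print(one_reason_decision(engineer_cues, launch, delay) is delay)   # True
print(one_reason_decision(manager_cues, launch, delay) is launch)   # True
```

The same two options yield opposite decisions depending only on which reason is put first, which is exactly the switch from an "engineering hat" to a "management hat."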

So, that was the discussion. The engineers were not able to convince the management. At
some point, after 30 minutes, the teleconference was reopened. Mason summarized Morton
Thiokol's position: the data are inconclusive. Some tests had shown that the first O-ring could
sustain three times more erosion than experienced in the previous worst case. Furthermore,
the second O-ring could serve as a back-up.
One of the NASA managers asked whether there was any disagreement or further comment
from Morton Thiokol. That was a very tense situation. Nobody spoke up, not even the
engineers. And if you are in a meeting like this and you do not speak up when you receive this
question, it means you agree to launch. Even Roger Boisjoly, who was dead set against it. But
he had yielded to the pressure that had come before from his own management. Morton
Thiokol was then asked to fax a copy of the recommendation to NASA, and with this approval,
NASA okayed the launch. The result is known.

So, to conclude: at first inspection, the Challenger exploded because the O-rings didn't seal
at this cold temperature. But upon closer inspection, it became clear that various kinds of
pressures could be seen as important contributors. In combination with some bad luck, these
pressures led to a situation in which there was not enough margin to prioritize safety over
interests.

c. Third Video

Welcome to this third video of Week 5 of our course on unethical decision making. In this
video, we will discuss the role of fear as a driving force of unethical decision making.
In this session, you will understand how fear provokes irrational behaviour. You will learn
how fear emerges in organizations and how it can even become a dominating emotion. And
you will understand how a culture of fear increases the risk of unethical decision making in
organizations.

In our analysis of The Emperor's New Clothes, we have argued that fear can lead to irrational
behaviour. And fear can be found as a dominating emotion in many corporations. It is
certainly present in all the corporate scandals we have discussed in this course so far: the fear
at Enron of being perceived and punished as a low performer, the engineers' fear of resisting
group pressure in the Challenger case, the fear of an aggressive CEO at Ford. And fear plays
a role in the current discussion of the corporate culture at Amazon as well. We have
seen how fear influenced the behaviour of participants in the experiments on authority, group,
role or time pressure that we presented in the first video of this week.

Of course, fear is not just an irrational emotion, it can be very important. It can save our life.
When we perceive a risk, we may be more concentrated. Our attention may be more focused
and we may be more cautious in how we move and what we say. However, fear also has a
negative aspect and we want to focus on one, in particular. Fear increases the risk of
provoking unethical behaviour. And it may ultimately even lead to ethical blindness.

There is a particular type of fear we want to critically examine in the context of our course: the
fear that is created by other people. Fear that emerges from social interaction switches off
reason and promotes irrational behaviour. In our fairy tale, the Emperor created a context of
fear in which the others were more concerned with hiding their presumed stupidity than with
critically challenging the perspective imposed on them by the two crooks. Paradoxically, as we
have seen, this also includes the Emperor himself. He creates the fear and becomes a victim
of it. So let's look at those two aspects first: the irrational behaviour that follows fear and the
social context that creates fear.

One example of the irrational behaviour fear can trigger has been discussed by the German
psychologist Gerd Gigerenzer. After two planes crashed into the Twin Towers in New York
on September 11, 2001, people were not only shocked and terrified, they immediately
developed a very concrete fear of using airplanes. Of course, nobody wants to sit in a
hijacked airplane. For a while, even for long distances, many Americans who before
September 11 would have taken an airplane used their car instead. Driving a car, however, is
much riskier than taking an airplane and, as a result, the probability of being killed in a car
accident is much higher than dying in an airplane accident. As you can easily imagine, dying
in an accident of a hijacked airplane is even less probable. These facts about the risks linked
to traffic accidents are very well known and there are numerous statistics demonstrating the
relative safety of airplanes and the relative risk of cars. But, despite statistical evidence, we
often have the illusion of being safer in a car, in particular, if we drive ourselves. It is this
feeling of being in control of the situation that we do not have when we take an airplane.

When examining the consequences of September 11, Gerd Gigerenzer compared the number
of people dying in car accidents five years prior to September 11 and five years after. For
one year after the terrorist attack, people changed their traveling habits; then they went back
to their normal travel routines. Gigerenzer wondered how many additional people died
in car accidents as a direct result of this change of habits, this panic that followed September
11th. The model used by Gigerenzer suggests that an estimated 1,595 people died
in car accidents because they were afraid of dying in an airplane crash.
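The logic of such an excess-fatality estimate can be sketched as follows. This is only an illustration of the method; the monthly figures below are hypothetical placeholders, not Gigerenzer's actual data.

```python
# Illustrative sketch of an excess-fatality estimate: compare observed
# road fatalities after an event with a baseline derived from the years
# before, month by month, and sum the excess.
# All numbers here are HYPOTHETICAL placeholders.

def excess_deaths(baseline_monthly, observed_monthly):
    # Sum of month-by-month differences between observed fatalities
    # and the pre-event baseline for the same month.
    return sum(o - b for b, o in zip(baseline_monthly, observed_monthly))

baseline = [3400] * 12  # hypothetical pre-9/11 monthly average
observed = [3520] * 12  # hypothetical elevated counts in the year after

print(excess_deaths(baseline, observed))  # 12 * 120 = 1440
```

The actual estimate requires a careful baseline model; the point of the sketch is only that the excess is the sum of observed-minus-expected differences over the affected period.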

In the case of September 11, the overall fear of airplane accidents was less a fear of
dysfunctional technology and more a fear of the uncivilized behaviour of others: terrorists
who could hijack the airplane. The philosopher Thomas Hobbes examined this kind of
fear centuries ago, and we briefly mentioned this already in our video on ethical dilemmas. In
his book Leviathan, he proposes a thought experiment. What will happen if we remove all
rules and regulations from society? What will a world look like in which we are free to do
what we want? Well, according to Thomas Hobbes, this radical freedom will not create the
best of all possible worlds. Given that resources are limited and our desire to own them is
unlimited, in a world without rules, it will be the strong and violent people who will be
successful. They take away the property of the weaker people. Power will be abused, and such
abuse creates fear, first among the weaker members of the community. However, the
stronger people can never be safe either, because sooner or later someone might arrive who is
stronger or more intelligent or able to create a coalition of actors. The result of the situation is
obvious. In the absence of rules, life, according to Hobbes, is brutish and nasty. It is
dominated by fear. This is the amazing effect Hobbes assumes for such a world without rules.
It is not so much the people who become victims of violence who have fear; everybody has
fear. And fear does not result from bad experience; it already results from the risk of violence
we imagine, the violence that we perceive as lurking behind the next corner. Fear thus is
strongly connected to the expectation that the thin soil of civilization might erode
quickly. Fear results from our imagination of what might happen. What is fear? Fear is a
negative and potentially uncontrolled emotion linked to the perception that a situation is
risky, threatening or dangerous.

Many fears are hardwired into our brain, and this has been important for the survival of our
species. From a neuroscience perspective, fear is a reaction to a stimulus that is processed in a
part of the temporal lobe of our brain called the amygdala. Fear is an emotion that works at high
speed, without involving reason. It is a quick and conditioned reaction to a pattern that we
perceive in our context, and that rings our alarm bells immediately. It is based on previous
experience. We can consciously or unconsciously use the power of fear to get control over
others, to win and keep power, and to impose on others a particular way of thinking and
behaving which they would not show without fear.

There are many sources and forms of fear. For instance, it seems that people across many
cultures share a fear of snakes, even if they have never seen a snake before. They are
intuitively gripped by fear. As I already emphasized in the context of our course, we are more
interested in fear as the result of social dynamics, and in particular, those that emerge in
organizations and drive ethical blindness. Just imagine, you are sitting in a team meeting. You
disagree with the analysis of your boss and you try to speak up. Your boss interrupts you
aggressively, bullying you, making fun of your argument and even questioning your
competence in front of the others. Nobody in the room supports you; they look down at the
table and remain silent. You feel isolated, humiliated, threatened. Like the minister in the
fairy tale of The Emperor's New Clothes, you might start to question yourself. Maybe you are
stupid. Maybe your critique is inappropriate. Your blood pressure increases, your heart is
beating fast, you feel out of control, both of the situation and your emotions. You feel that
you must protect yourself somehow. You have the impulse to flee, to run away from the
situation, but you can't. So instead of running away, you freeze. When you think about the
situation afterwards, you're not even surprised about what happened because you have seen
your boss shouting at others in previous meetings already. You do not want to be exposed to
such a situation again. From that day on, you already have stomach problems before you get
into the next meeting. Fear becomes a physical pain. You want to minimize the risk of being
exposed to the same situation again, so you will remain silent in the meeting, or at least you
will avoid any critique of your boss, and so do all your colleagues.

Fear in organizations may have many sources. It can result from a threatening and
humiliating leadership style. It can result from group pressure, from the overall
aggressiveness of a corporate culture in which the weaker people are bullied, ridiculed and
pushed aside. It can result from a lack of openness to critique, and it will be reinforced by
reward and punishment structures as in the Enron case. Fear can be magnified by situations
of uncertainty such as in a merger of two companies where your future is unclear. As we
have learned from Thomas Hobbes, fear does not necessarily need the experience of a
threatening situation. It can already be activated by storytelling in an organization. What do I
learn about others who dared to take a risk? You do not need the unpleasant confrontation
with the CEO yourself, you only need the unpleasant stories about other colleagues'
experience to feel the fear. If you're one of the people sitting in the room while your boss
shouts at someone else, you might even feel the physical pain yourself. You will fear becoming
a victim of a similar exposure to verbal violence later on. You do not want to be stigmatized
as the troublemaker.

Fear is often reinforced by threat appeals, which are messages designed to scare you and to
urge you to abstain from certain activities. Don't challenge me, I will fire you. Sadly
enough, it is a widespread phenomenon that people in organizations have the perception that
it is not safe to speak up at work and in front of their superiors. The dynamics we have seen in
the Emperor's story, in the Enron case or at Lehman Brothers, are only possible because fear
keeps critical thinking in check.

The consequences of a culture of fear can be devastating for organizations. Research has
shown numerous side effects of fear. Fear makes it difficult to learn. Who can learn while
having panic attacks? It isolates people. Fear inhibits risk taking. It blocks innovation and it
kills your imagination. Fear damages your self-esteem. We wonder whether we are a bad or
an incompetent person. Fear encourages avoidance behaviour and pessimistic interpretations
of the future; we disconnect. Fear promotes a narrow perception, a focus of our
cognition on a perceived threat. We develop a tunnel vision of our context as soon as our
heart starts beating faster. Fear promotes counterproductive silence in situations where people
in organizations should speak up. Repeated situations of fear create a routine, a habit of doing
or not doing certain things, for instance criticizing your boss, talking about problems, or
disagreeing with your team. You will internalize an understanding of particular situations as
being threatening and this will activate a culture of silence. We will construct the kind of
protective structure around us that includes among others the justifications for why we do
what we do, and two effects come together at this point. First, an effect called confirmation
bias. Once we have started to experience the world in a particular way, we start to hear and
see only those pieces of information that confirm what we believe already. This effect can be
complemented by a second effect, which is called group polarisation. The more we get our
beliefs confirmed by the group to which we belong, the stronger the beliefs become, and the
less we are willing and able to see the evidence that might contradict our beliefs.

If we collectively have the same experience with our CEO, a particular behaviour, for
instance, not challenging him or her, might be routinized intersubjectively. We all avoid it.
Social fear is contagious. We are all afraid of the temper and the often uncontrolled rage of
our CEO. Worse, when fear gets habitualized, we might apply the same reaction to
other superiors, or even within the teams, just because we have learned that speaking up in
general, in this organization, will harm us. In the worst of all cases, we perceive this
interaction style, shouting boss, silent subordinates as normal and get used to it. And we will
shout ourselves once we've climbed up the ladder in the hierarchy and become bosses
ourselves. We will discuss these temporal dynamics of ethical and unethical decision-making
next week.

In our course, we've argued that unethical decision-making does not require conscious intent. It
often results from automatic reactions to particular contextual signals. Fear is a very
powerful emotion that can promote such unconscious behaviours and beliefs that switch off
conscious decision making. Fear is also a considerable obstacle for those who might have the
feeling that particular decisions or routines are inappropriate from a moral point of view.
They might not have the courage to speak up and disrupt the unconscious routines of their
colleagues or superiors and over time, they become ethically blind as well. As one of the
Enron traders stated, you do it once, and it smells. You do it again, and it smells less.

Let me conclude this session. In most of the corporate scandals where good people were
involved in bad practices, fear was a dominating emotion. Fear is contagious, it gets mutually
reinforced in an aggressive corporate culture and it switches off reason when people make
decisions. A key driver of a culture of fear is the behaviour of leaders. Leaders create cultures
of fear, but this culture will then affect them as well.

VI. Sixth Week

a. First Video

Welcome to this first video of the sixth week of our course on unethical decision making. In
this video, we're going to discuss the role of time in the development of ethical blindness.
In this session, you will understand that ethical blindness is often the result of a process
that unfolds over time. And you will be familiarized with the notion of shifting baselines.

The ancient Greek philosophers were smart people. One of them was Heraclitus, and one of his
insights that has been conveyed for millennia is panta rhei: everything is moving, changing.
It's in a flow. It's in transition. And in the same spirit he also noted that, and here I quote, you
cannot step twice into the same river. Why not? What is a river? A river has water and the
water is, of course, different today and in a year. But the river has also a water bed. And this
water bed also changes and it's the water that changes the water bed. And it is the water bed
that determines where the water is flowing. So they mutually influence each other, and this is
quite an interesting observation.

And, in fact, you can also look at an organization using this metaphor. You can consider an
organization to be like a river. An organization has a structure; that's the most stable part, and it
corresponds to the water bed. But there are also a lot of processes going on
within an organization; that's the more fluid part, corresponding to the water. And the
processes in an organization will determine its structure, will change the
structure of the organization over time, and vice versa: the structure also pretty much
determines what processes occur. And both change. That's the important point.

So, you cannot step into the same river because the river changes. But there's a second reason
why you cannot step into the same river twice. You yourself change. And the same happens
again if you now consider an organization. If you go to an organization, you will change. You
will dive into this organization and the organization will ensure that you will be a different
one afterwards.

How do we recognize change? We need a reference system. We need an outside observer. An
outside observer can easily realize that there's some change, but how can we do it for
ourselves? So, change can only be established through noticing differences with respect
to a reference point. That's the important point here.

Talking about the differences, there's an important concept in psychology called just
noticeable difference. Question: would you notice a difference of ten grams? If you compare a
weight of 30 grams with one of 40 grams, you would notice the difference. But the difference
between one kilo plus ten grams and one kilo plus 20 grams you would not notice. So these
differences are relative.
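This relativity of perception is captured in psychophysics by Weber's law: the just noticeable difference grows roughly in proportion to the baseline stimulus. A minimal sketch, using an illustrative Weber fraction of 0.05 (the exact fraction depends on the stimulus and is an assumption here, not an empirical constant):

```python
# Sketch of Weber's law: the just noticeable difference (JND) is roughly a
# constant fraction k of the baseline stimulus intensity.
# k = 0.05 is only an illustrative value.

def jnd(baseline_grams: float, k: float = 0.05) -> float:
    """Smallest weight change (in grams) typically noticed at this baseline."""
    return k * baseline_grams

def is_noticeable(baseline_grams: float, change_grams: float, k: float = 0.05) -> bool:
    return change_grams >= jnd(baseline_grams, k)

# 10 g on a 30 g baseline is noticeable; the same 10 g on a 1 kg baseline is not.
print(is_noticeable(30, 10))    # True  (JND is 1.5 g)
print(is_noticeable(1000, 10))  # False (JND is 50 g)
```

The same absolute change can thus fall above or below the detection threshold, which is exactly why these differences are relative.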
What about time? Time is, of course, something we can measure on an absolute scale; we call
it a calendar or a clock. That's so-called Newtonian time. But seen from the inside, we
realize that we construct time, and what I just explained about reference points also holds
for our construction of time. And this kind of time that we construct relative to other
things is called Einsteinian time. Parents and children spend a lot of time together, and
they both get older, but they get older at the same pace. So with our friends, with our
parents, with our kids, we do not realize that they change over time. We change with them.
At some point, the doorbell rings, grandparents come in and they may say to their grandchild,
oh gosh, you have changed so much. We, the parents, are surprised; we wouldn't even have
noticed. But the grandparents, who haven't seen the kid for a year, realize it immediately.

What I just explained to you with this example of the parents and the grandparents is closely
related to what environmental scientists call shifting baselines. People perceive changes in
their environment relative to their own background of experience, and these experiences
serve as a baseline for determining what is normal and natural. The term shifting baselines was
coined by the marine biologist Daniel Pauly. When Pauly attempted to determine how fish
population changed over time and how this was affected by commercial fishing, he needed as
a reference point the natural population. That is, the population prior to the influence of
human activity. However, while he was interviewing fishermen, he found that each generation
of fishermen considered the natural fish stock to be the stock at the time they themselves
started their fishing careers. But almost nobody had a perspective spanning more than a
single generation.

And it is exactly this overarching perspective that allows one to recognize dramatic changes,
which cannot be observed by an individual bound to a specific generation. By examining the
oldest available historical reports and comparing them with present numbers, the following
pictures emerged. A century ago, 20 to 30 different species of fish, including large specimens,
could be readily caught in the Gulf of California with a single simple rod and reel. But today,
a mere handful of species remain and most can only be caught by trawling hundreds of
kilometres off the coast. Between these two points in time, only a century apart, many
fishermen acknowledged that the fish stock had indeed decreased. But none was really
alarmed about the changes, except for a very few old fishermen with many decades of
experience.
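The logic of shifting baselines can be sketched numerically. The numbers below are invented for illustration (they are not Pauly's data): the stock halves with every generation of fishermen, yet each generation measures the decline only against the stock at the start of its own career:

```python
# Toy illustration of shifting baselines (invented numbers, not Pauly's data).
# Each generation compares the current stock only to the stock at the
# beginning of its own career, never to the original, pre-fishing stock.

stocks = [1000, 500, 250, 125]  # stock at the start of four successive careers

for gen in range(1, len(stocks)):
    perceived = 1 - stocks[gen] / stocks[gen - 1]  # decline vs. own baseline
    actual = 1 - stocks[gen] / stocks[0]           # decline vs. original stock
    print(f"generation {gen}: perceived decline {perceived:.0%}, "
          f"actual decline {actual:.0%}")
```

Every generation perceives the same moderate 50% decline, while the cumulative decline reaches 88% by the third generation. Without the overarching perspective, nobody is alarmed.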

Let us now consider a case that you already know. Guido walked you through this case. It's
the case of Ford Pinto. Let me add some aspects that are relevant for our issue at hand here,
the temporal dynamics of ethical blindness. It is important and interesting to note that Dennis
Gioia started out full of idealism. He had very high ethical standards. He said, and here I
quote, I had a strongly-held value system that led me to question many of the perspectives
and practices I observed in the world around me. I had a profound distaste for the Vietnam
war. I was participating in various demonstrations against its conduct. I held my principles
high. I espoused my intention to help a troubled world. I wore my hair long. By any measure,
I was a prototypical Child of the 60s.
Now you may wonder how it could be that such a person enters a for-profit
company. But when Gioia was challenged by his family and friends, he had a very
effective defence strategy. He said: by accepting this job at Ford, I could
change the world. I could, with my values, make a difference. That's the ideal place for it.
So then he entered this river, Ford, and he changed. He became part of the
organization and he started to talk about us versus them. He accepted the company's
values and frames. He changed over time, slightly. He became a different person from
day to day. At some point, he cut his hair and he may not have even noticed how much
he changed.

You cannot step into the same river twice. Gioia stepped into Ford's corporate culture,
working day after working day. Then he returned home each evening. He had changed
slightly. And when he stepped into the river again the following morning, he was no longer
the same person. This process ultimately led to his perception that exploding Pintos were just
a technical problem with no ethical relevance. Interactions between teams of engineers,
managers, economists and people with other backgrounds, who all shared a functional
perspective and who all tried to increase Ford's profits, that's important, they
contributed to the narrowing of his perspective. And just as he was part of his colleagues'
environment, they were part of his. If, however, our team members change and adapt
simultaneously, one thing remains stable, the perception of what is normal. If several people
step into the same environment dominated by technical and functional perspectives, and
if all those people change at the same pace, then chances are that none of them will
notice how they have changed. In this way, even people with high ethical standards or in
Gioia's case, with long hair, they ultimately enter a state in which they are no longer able to
see the ethical dimensions of a decision.

Is the occurrence of creeping change inevitable? I hope I've shaken the faith of those who
believe they are immune to such processes of adaptation and change. As social beings, we
cannot avoid encounters, contacts, and the resulting influences of our social
environments. Are creeping change processes necessarily unconscious and therefore
undetectable as such? Here, my answer is a clear no. One day, Gioia stood in front of a
crumpled, burned car at a Ford depot, a place known as the chamber of horrors by some of the
people who work there. Perhaps this terrible sight of the scorched car catapulted him out of
the reference system that had ordered his thoughts and actions as a manager until then. And
he suddenly realized how much he had changed over the years.

Similarly, amid their daily routines, parents fail to notice how their children grow. But one day,
while handling a child's long-forgotten toy, the memories wash over them and they may
murmur to themselves: how the kids have grown. And maybe this experience is followed by
the reflection: how much older have I grown in the meantime?

To conclude, people change over time. That is, we change over time. With a short time
horizon, these changes may not be noticed, that is, we change without being aware of this
process. Depending on the contexts we are embedded in and to which we adapt, we may
change such that the risk of ethical blindness increases. And at the end, we may do unethical
things without noticing it.

b. Second Video

Welcome to the second video of the sixth week of our course on unethical decision making.
In our previous session, we discussed the impact of time perception on decision making. In
the session of today, we would like to share with you some thoughts on the power of
institutions over decision making.
So, in this session you will get familiarized with the sociological concept of institutions.
You will learn how institutions drive behaviour. And you will understand the importance
of institutions for our concept of ethical blindness.

Let me start by telling you a joke. Two tuna fish swimming in the ocean meet a dolphin. Hi
guys, says the dolphin. How's the water today? The tunas remain silent. When the dolphin is
gone, one of the tunas turns to the other one and asks: what is water? Often we are not aware
of the most important things that shape our context. We take them for granted. We depend on
them, but we do not even realize that they're there. We are born into a context; that's an
observation that we owe to the Greek philosopher Aristotle, who said that we are, by nature,
a zoon politikon, a social animal. We are born into a community. The community is there when
we arrive, with all its traditions, and habits, and rules. So, our context determines, to a certain
degree, what we do, what we believe, and what we strive for. The founding father of
sociology, Emile Durkheim, brought us some more evidence about the power of context
over our decisions. At the end of the nineteenth century he was examining the phenomenon of
suicide. So he was asking, why do people kill themselves? And as you might imagine, there is
a variety of reasons, of individual suffering, that leads to suicide. Love sickness, depression,
poverty, pain, and so on. So we can make statistics about types of individual motivations
across the cases. And you can examine them scientifically. But Durkheim made a strange
observation. And this is the beginning of our understanding of institutions. What he observed
was that between 1856 and 1878, the suicide rate in France doubled, not gradually but in a
big jump. However, the proportions of the individual motivations remained exactly the same.
From poverty, to jealousy, to pain, whatever. And this was statistically highly improbable. So
he started to dig into the data. And what he found was very surprising for him. The number
of suicides radically differed between Catholics and Protestants. Between men and women,
between summer and winter, between married and single. So it seems that in a lot of cases
suicide could not be explained by different, by individual motivations and traits, but by the
social categories to which people belonged. And this went beyond the understanding of
motivation and action of that time. In his book Suicide, which Durkheim published in 1897,
he called those forces social facts. So, social facts are reasons that drive behaviour coming
from the context in which we are embedded.
Why does that happen? Why are individuals driven by such large and shared norms and
beliefs and values? Where do they come from? About 2,500 years ago, the Greek philosopher
Heraclitus argued that we can never step into the same river twice. The reason is obvious:
next time we step into the river, the water will be different, and we will be someone different,
because we will have changed over time and the water continues to flow. So life moves
forward. Every moment is different from the previous one. Every experience is different from
the previous one. Even if this difference might be very small, reality never stands still.
Experience thus forever is in motion. And therefore, behaviour is so difficult to predict.
The challenge is: how do we make decisions in a constantly changing environment? We
need some predictability, some stability to know what we should do next. Somehow we have
to freeze the world around us when we make decisions. We have to find patterns of
similarities in the sea of change. We already discussed that we rely on cognitive scripts,
on frames through which we interpret the world. We develop them in organizations, as
individuals. But we develop them also on a large scale, in large-scale social contexts. Whole
populations can share the same or similar mental maps to align their behaviours and beliefs in
order to make social behaviour possible. And these large scale mental maps can be called
institutions. We argued already that ethical blindness results from a too narrow perception
of reality. This narrowed perception can be reinforced through various contexts: the
situation, the organization, and the overarching institutional context in which we are
embedded. And this is what we want to talk about today.

But what are institutions? Let me propose a definition for you. Institutions can be
understood as the norms, values, beliefs and practices that we take for granted in the
various larger social contexts in which we are embedded. You can imagine an institution
like an iceberg. What you can see on the surface is the behaviour. What you find under the
water, invisible to us and often unconscious, are the values and beliefs that drive that
behaviour and that make that behaviour legitimate.

How do these institutions emerge? Well, following Aristotle, we get socialized into a context.
Imagine yourself as a little child. When you're sitting at the table and eating with your
parents, and you spill your soup all over the table, you will learn that this makes your mom
and dad angry. You spilled the soup again? You understand that parents get angry every time
when you spill the soup. And then you learn that even other people get angry, if you spill the
soup. You will learn that you better don't spill the soup on the table. So a specific observation
in a specific context, you and your parents, is transformed into a generalized attitude. One
does not spill soup. And over time, this sinks down into your unconscious. You become a
non-soup-spiller. It's taken for granted. And everyday reality is built of myriads of such
taken-for-granted routines that we learn over time and forget.

Two sociologists, DiMaggio and Powell, applied this idea to modern organizations. How do
shared beliefs and values and practices emerge in the contexts where organizations enact their
decisions? What we normally assume, in particular if we look at corporations, is that they
try to be different. They try to be different from the others, to beat them in the competition for
customers. However, in reality what we can find is that corporations in the same
industry tend to be very similar: same belief systems, same values, same practices.
Institutions are broadly shared among those organizations. They build up what might be
called an institutional field. So individual organizations that enter into such a field, they have
to adapt to the rules of the game. They copy the behaviour and the values and the beliefs of
the others. Why do they do this? Well, they reduce uncertainty. They create legitimacy for
what they do. DiMaggio and Powell called this adaptation process isomorphism.
And they differentiate between three types of isomorphism. Coercive isomorphism, which
means we are exposed to formal pressure, by legal rules for instance. An organization or
corporation might publish an annual financial report simply because the law prescribes it.
Or mimetic isomorphism: I'm a newcomer in an industry. I don't know
the rules of the game. I face uncertainty. To reduce uncertainty, I adapt to the behaviour of
others. Just think of all these websites that popped up at the beginning of the 2000s when the
new economy started. Everyone had to have a website. And third, there is normative
isomorphism. We are trained to have the same beliefs and values and behaviours, for
instance in business schools, where we learn what corporations have to do and what
managers have to do when they make decisions in corporations. We learn, basically, that they
should maximize profits.

So people in particular contexts, let's say managers in a particular industry like banking or
mining, tend to show the same patterns of behaviour and mind-sets. But you can apply
this to all kinds of contexts: doctors in hospitals, people living in New York, school
teachers in Italy or China. Coherent patterns appear across the behaviour of different people
who share the same field. And these strong institutions do not just guide us. They put us
on a track. They impose specific behaviours on us. They define what is right and wrong,
appropriate or inappropriate. They define the rules of the game. What we said about ethical
blindness so far is very much aligned with this: pressure to show a particular behaviour
across three types of contexts: the immediate, the organizational, and the overarching
institutional context. Think about the fairytale we talked about in our first week. The
institutional context here might be the authoritarian structure of the absolute monarchy that
we have in this empire. Or the dominating feeling of fear that is there everywhere in this
kingdom. As the tuna fish in the joke that I told you at the beginning, the citizens in this
empire, they are very much aligned in their mental maps about the world, what they can see,
what they cannot see. Combine this with the organizational context, the hierarchy in the
castle, for instance. Who gives the orders? What is the chain of command? In the
immediate situation, you are the prime minister. You're in front of these two crooks. You
have to make a decision right now. You have to say yes or no: do they see the clothes or not?
So strong contexts can be created across these three types of forces, and they can make us blind
to the broader reality. The overarching institutional context can become overwhelming. It can be
so strong that there's really nothing else we can do but the one thing that is prescribed for a
specific decision making situation. We would call such an institutional context totalitarian. It
leaves no space for alternatives, no space for critique, no space for interpretations. It turns
into dogmas and ideologies. So, institutions can switch off reason by turning into
ideologies. And ethical blindness becomes highly probable if we are surrounded by dogmas
about our larger social context. In our course, we focus on corporations mainly, and people in
corporations. So the powerful ideology that we see around corporations is the one on
maximizing shareholder value.

We will analyse this context, this particular ideology in our next session, and we believe that
this is necessary to understand why scandals happened in the last 10, 20 years in modern
corporations.

Let me conclude by giving you three insights from this session. First, what we believe and
what we do is under the strong influence of the institutions in which we are embedded.
Second, institutions therefore set behavioural and cognitive limits for us. And third, as a
result, they might reinforce situational and organizational pressures that drive us towards
ethical blindness.

c. Third Video

Welcome to this third video of the sixth week of our course on unethical decision-making. In
our previous video, we discussed the influence of institutions on unethical decision-making.
In this video now, we would like to zoom into one particular form of institutions, namely
those that have become rigid, dogmatic and ideological.
In this session, you will learn how institutions might morph into ideologies. You will
understand the moral foundations of the free market ideology. That's the one that interests us
most in this session. And you will get familiarized with a critical perspective on free
market ideology. And finally, you will understand the supportive role of institutions as
ideologies in the process of becoming ethically blind.

In our discussion on institution theory we highlighted the fact that a lot of decisions are
copies of the decisions of others. They are imitations. Isomorphism was the term
coined for this. People imitate others even if those copies are strategically irrational, even if
those copies might be largely inefficient, even if those copies might be morally doubtful. We
simply follow what we perceive as the rules of the game. Institutions can become too
dogmatic. They can deliver too narrow interpretations of the world. They become ideologies.
Ideologies can be understood as structured simplifications. What they do, is they position
certain beliefs and practices and values as objective and incontestable. We might tend to
perceive the prevailing rules of the game, as framed by ideologies, as natural and without
alternatives. You remember our discussion of Václav Havel in the forum, his essay on the
power of the powerless. One key element of his reflections is that the power of ideologies lies
in their thoughtless acceptance. When we look back at business scandals like Enron,
Lehman Brothers, Siemens, our first reaction often is to say, well, these are deviations from
the norm, bad apples. But maybe these are not deviations, but on the contrary, these are
over-stretched interpretations of the rules of the game. So they're very much in line with
the institutional context in which these organizations are embedded. Or to argue more
carefully, they are too rigid interpretations of that institutional order. Unethical behaviour
might fall on very fertile ideological ground. If we talk about ideology in the context of our
course, which focuses mainly on corporations, there is one that sticks out as the dominating
theory of what markets and organizations do or should do: the shareholder value ideology of
Milton Friedman. In September 1970, Milton Friedman, the Chicago economist, published
an article in the New York Times Magazine with the provocative title, The Social
Responsibility of Business is to Increase Its Profits. And it had a tremendous impact on
the theory and practice of management and on how we designed our economies, in the US
under Ronald Reagan, in the UK under Margaret Thatcher, and afterwards around the
world, because this became the dominating model of how markets are designed. In
this article, Milton Friedman makes the following statement. In a free-enterprise, private-
property system, a corporate executive is an employee of the owners of the business. He has
direct responsibility to his employers. That responsibility is to conduct the business in
accordance with their desires, which generally will be to make as much money as possible.
So Friedman harshly criticizes those who defend a broad understanding of corporate
responsibility. Arguing that those who, for instance, fight for a responsibility of the
corporation beyond profits are just socialists who threaten the freedom of our society. Why?
Because managers are agents of shareholders. They have to align their own decisions with the
interests of the shareholders. You cannot spend shareholder money on your own
decisions, on your own deviating interests. If you, for instance, invest in a better pollution
filter for your factories that goes beyond the law, according to Milton Friedman, you are
stealing money from the shareholders, because they didn't give you the right to do that. It's
their money that you take for your own decisions. As managers, we have just one moral
duty, which is to maximize profits. Other moral duties exist, but they are reserved for
other roles and other identities.

You have other obligations in your role as a church goer, as a mother, as a father, as a good
citizen. But all of these moral obligations are strictly separate from those of the manager.
How could Friedman make such an argument? And how could such an argument fall on such
fertile ground and find such a strong support? We have to understand Friedman's position
from his particular historical context and from his belief system. We look at Friedman with
the advantage of hindsight, and what might look very immoral to us might
make a lot of sense in his particular value system. Milton Friedman wrote down his theory in
the late 60s, early 70s. This was the climax of the contest between the systems, communism and
capitalism, and it was far from clear who would win this fight. So in both camps,
defenders of the respective ideologies defended them without compromise. And a key pillar
of the Western model, the capitalist model, is the belief in property rights.
Where does that come from? In the Middle Ages, there was a saying in Europe, mainly in
the German part of Europe: town air makes free (Stadtluft macht frei). Why does town
air make free? Well, most people were serfs of feudal landlords. And they were condemned
to lead a miserable life. There was one chance to get out of this situation: they could flee to
one of the cities that popped up in the Middle Ages, cities which had, to a certain degree, their
own rules of the game already established. If they could flee to one of these cities and live
there for one year and one day, they received citizen rights which, basically at that time,
meant property rights. The right that nobody could come and take away your property
arbitrarily. So human rights in Europe developed as property rights. And the capitalist
system as such is deeply shaped by this belief that property rights are a key element of how
we understand human rights. The government is the enemy, because it is the government
that reduces or threatens your property rights by arbitrary rule-making. This is deeply
ingrained in our belief systems. And it is connected to another strong element of our capitalist
belief system.

It's the belief in the power of the free markets. Markets are, according to this belief, the
best instrument to protect property rights. But not just that, they are also superior to all other
economic systems that we know with regards to the promotion of the common good. That's
what we believed for many decades. And it goes back to the 18th century philosopher Adam
Smith, who made an amazing observation when he looked at markets. He said, by pursuing
his own interest, the individual promotes that of society more effectually than when he really
intends to promote it. So on the market, people meet as egoists, only interested in their own
projects. If I'm a seller, I want to make money; if I'm a buyer, I want to have a product. I'm only
interested in that. But by meeting, by doing this transaction, we do not just satisfy each other.
The more of these transactions there are, the higher the level of the production of goods, and the
higher the level of prosperity in a country. Human beings are calculating, egoistic actors.
We are homo economicus, but the market is able to neutralize that egoism and transform it
into the common good. Adam Smith calls this the invisible hand of the market. Ronald
Reagan later on called this the magic of the market, so abracadabra, the market turns egoism
into common good.

Milton Friedman combines these two elements, the property rights and the market
efficiency. And he argues that the market, therefore, is the best way of promoting both my
interest and the interest of everyone else. Markets are the solution. Governments are the
problem. Egoism is good. Greed is good.

Over the following decades, this has become a rigid belief system that we teach in business
courses, that we enact in corporations, and that we build into the legal frameworks around markets around
the world. When the financial industry was criticized for its role in the financial crisis, the
near collapse of the banking system, the CEO of Goldman Sachs defended
himself by saying, well, I am just doing God's work. Doing God's work? This sounds like the
hubris of a CEO who has lost his connection to the real world. But if you look at this
profound belief in the efficiency of markets to promote the common good, you might get
a better understanding of why a CEO can dare to say this. The invisible hand of the market
promotes the common good much better than anything else. So it's a divine mechanism.
Doing God's work. Doesn't that ring a bell? Yes, bingo. Another powerful CEO has used the
same expression, Jeff Skilling from Enron. And greed is good is a sentence we know from
Gordon Gekko, the rogue trader from the film Wall Street. Greed is healthy is a sentence for
which Ivan Boesky has become famous. A real rogue trader, he said this in a
commencement ceremony at the University of California at Berkeley in 1986. So we trained
generations of managers in this idea that something that normally is perceived as bad,
greed, egoism, is a good thing. As a manager, you should be greedy, you should be egoistic,
because that's the best way to promote the common good. If we all believe in this, it is easy to
imagine how we can disconnect from a broader social context, how we can focus just on
maximizing profits, regardless of the consequences for society in general. It does not
necessarily lead to ethical or to unethical decisions. But it supports an atmosphere of
rule breaking, it supports this effect that we have seen already in all the stories that we
shared, where managers disconnect from a broad understanding of what their role is.

So let me conclude with four observations. First, institutions can turn into rigid ideologies
that are perceived as true in their time. Second, ideologies are structured
simplifications. Third, shareholder value ideology promotes greed as a virtue and
perceives profit maximization as the only moral responsibility
of corporations. And finally, therefore, it can promote ethical blindness when it is aligned
with organizational and situational forces that push managers in exactly the same direction.

iv. Fourth Video

Welcome to the present session of our course on unethical decision making in organizations.
Our course is focused on unethical behaviour that results from ethical blindness. This concept
describes situations in which decision makers are not aware of the ethical dimensions of their
decision because they are embedded in an overwhelmingly strong context. We have, at the
very beginning of the course, argued that unconscious, unethical behaviour is not the only
type of unethical behaviour. In many situations, actors are well aware of the fact that
their behaviour is wrong.

In this session, you will learn what types of unethical behaviour exist, how these types are
related to each other. What moral disengagement is and how it can lead to unethical
behaviour and eventually also to ethical blindness. And you will learn how an ethical
dilemma may develop over time such that it may result in ethical blindness, and hence,
possibly also in unethical behaviour.

In our course we have adopted a rather descriptive take on unethical behaviour and on ethical
blindness. We argued that actors are ethically blind if they violate their own values, rules, and
principles, often in strong situations. When people are ethically blind, non-ethical aspects of
their decisions may overpower and overshadow ethical ones. Most importantly, because of
the strong situation, people do not see that what they do is wrong. In a more normative sense,
we have in the first week shown you the toolbox of philosophers who try to evaluate ethical
or unethical behaviour. From a more general perspective and independent from particular
decision makers individual ethical standpoint. As you might recall, according to Immanuel
Kant, a decision has to run through the universalizability test. Can I wish that my rule
becomes the rule for everyone? If not, I shouldn't do it. According to Jeremy Bentham from
a utilitarian perspective, the right behaviour is the one that aims at achieving the greatest
utility for the greatest number of people. The common denominator of both approaches is that
you care about others, and actually, you even care for others.

How you care about or for others provides for us the roadmap for this session. Specifically,
we will propose the following types of unethical behaviour. You care about others in a
negative sense. That is, you want to harm them intentionally. You do not care at all
about them, and any harm-doing is just a by-product. Or you do care about them, but
still do some harm despite your good intentions.

Let us start by looking at the first type. When someone breaks the rules, we often assume
automatically that the harm that has been caused by such a rule breaking was intended.
In extreme cases, we may even have the impression that some individuals derive pleasure
from harming others. Often those people are psychopaths. Psychopathy is a clinical
phenomenon that describes a personality disorder, which manifests in antisocial behaviour of
people who lack the ability to imagine themselves in the shoes of someone who suffers
from the consequences of their behaviour. Psychopaths also lack the remorse that might
prevent them from repeating such behaviour. The psychiatrist Robert Hare has developed a
psychological assessment tool to measure whether or not an individual is a psychopath. The
protagonists of some of the corporate scandals which we have discussed in our course, such
as Jeff Skilling from Enron, have been labelled corporate psychopaths in the mass media.
While we must be careful with using personality disorder labels from a distance
when evaluating managers in organizations, we have already highlighted in our session on
the impact of fear that aggressive antisocial behaviour might be a widespread phenomenon
in organizations. There might also be good reasons to assume that modern corporations
promote such behaviour by role-modelling aggressiveness as a key element of a successful

career path. We may be able to understand such attempts to do harm to others intentionally.
Maybe such psychopaths have not received enough love or attention, and now they might
want to let some innocent people suffer, so that they are not alone. Or they want to
demonstrate to themselves or others that they have power. There are also reliable insights
from neuroscience about the potential for antisocial behaviour being hard-wired in some
people's brains. Whatever the explanation might be, there is no doubt that the way a
psychopath treats others is morally wrong, regardless of the psychological reasons behind
such behaviour. Psychology is not excusology, as Philip Zimbardo has highlighted.
Another variant of intentional unethical behaviour is revenge. Paradoxically, someone who is
taking revenge might be driven by motives that he or she perceives as highly ethical,
namely fairness and justice. This phenomenon exists in societies with deep seated histories
of injustice, or with traditional forms of social interaction where clans and families fight
revenge games sometimes over centuries.
More interesting for us, it also exists in organizations. Unethical behaviour motivated by
perceived injustice is not motivated by greed and the pleasure of hurting others. In his study,
Stealing in the Name of Justice, the psychologist Jerald Greenberg showed that people who
feel treated unfairly might look for revenge while having the feeling of being morally
entitled to do so. People might, for instance, steal or sabotage or even harass others in their
respective organizational context because they feel treated unfairly. Such behaviour is
widespread in organizations. In the case of revenge, it is even easier to understand why
someone wants to hurt someone else intentionally. Remember, it is a justice motive that is
underlying the desire to take revenge. And who of us does not want to be just? One may in
addition even argue that such practices, if established in some cultures, ensure that people
think twice before they, say, kill someone else. They may in fact not dare to do this if they
anticipate the mindset of the relatives of their potential victims. But again, in modern
societies, we do not consider self-justice to be right. And there are also good reasons why it is
abolished and even punished. Actions driven by self-justice are more error-prone than
those endorsed by neutral institutions. These actions more often affect innocents and
easily lead to vicious circles.

This was our first category, harming others intentionally, and maybe even deriving pleasure
or satisfaction from it. The category we want to present next is ubiquitous. Very often,
unethical decisions are a kind of collateral damage. We harm others as a side effect of
pursuing our own interests. Ideally, we do care about others. No question about this. But we
also care about ourselves. After all, we are the center of our world. This is true for every one
of us. We are the center of the world surrounding us, and the center is a very privileged place,
isn't it? As long as it doesn't cost us anything, it is easy to care about others and to be nice.
One may, and in particular many economists do, argue that being nice to others is also in
our own interest. Simply because it will increase the chance that those others will also be
nice to us. So being nice has advantages. And even paying some costs here can be conceived
as a good investment. This week, we have discussed how ideology reinforces ethical
blindness. And we have zoomed into one particular ideology which has shaped the education
at business schools around the world. Shareholder value maximization. Greed has been
turned into a value. Because one of the basic assumptions of the capitalist ideology is that this
is how we are as human beings, homo economicus. We maximize our own utility, or as
organizations, our profit. And in turn, free markets transform this egoism into welfare for
everyone. Equipped with such a narrative, we might focus on our own interest, and
develop a significant tolerance for the collateral damages it creates. As the CEO of
Goldman Sachs stated when his company was criticized during the financial crisis of 2008,
I'm doing God's work. Unethical behaviour thus might result not from bad intentions, not
from a failure to balance things, but from a deeply seated conviction that it is
best to focus on self-interest even when there are side effects. So far, we have discussed
two types of unethical behaviour. The first type, people want to harm others intentionally.
And in the second, they do not care about them at all, and any harm doing is just a side
effect. In the next video, we will discuss the third type where people do care about others,
want to do good, but still behave unethically.

v. Fifth Video

Welcome to the second part of our session on types of unethical behaviour. The category of
unethical behaviour on which we will focus in this session is of particular relevance for our
course, since it represents the first step on the slippery slope towards ethical blindness. Since
we teach this course here on Coursera, we had discussions with you, our participants, on
whether ethical blindness is a binary concept. You either are or you're not ethically blind. Our
video on shifting baselines already pointed at a very different understanding of the
phenomenon. We understand it as a process in which we become habitually accustomed to not
seeing the ethical dimension of our decisions. As we already cited in our Enron video, one of
the traders of the company once stated, you do it once, it smells. You do it again, it smells
less. The awareness for the ethical dimension of a decision seems to fade away over time and
our shifting baseline session already described the mechanisms that trigger this process.

Instead of arguing that someone is or is not ethically blind, we would argue that the decision-
maker can be more or less blind or more or less aware of the fact that a particular decision
smells. Over time, awareness may decrease more and more, until it's finally entirely gone. If
this is the case, it is important to understand the entry point of the process and here we draw
from the work of the psychologist Albert Bandura, who proposed the concept of moral
disengagement. The concept of moral disengagement has stimulated a lot of research, mostly
in social psychology. The bottom line of all those studies is the following, people who
manage to morally disengage from some unethical action are more likely to take this action.
So what is moral disengagement?

We all have moral standards and we use these standards to regulate our behaviour. Any
behaviour that would violate our standards would typically be identified as such and our self-
regulatory control mechanisms would ensure that we do not commit such an action. Most people
would, for instance, condemn any action that would harm other people. This would be against
their moral standards. They are not ethically blind. They see that a particular action, such as
harming someone else, would be wrong and so they would not do it. They do care about
others. But then there are many situations in which behaving according to our moral
standards would incur costs or would be against our own interests or would force us to act
against our in-group or against some authority. Note that there are many situations in which it
is not easy to determine which course of action would be consistent with our moral standards.
These are ethical dilemmas. We already discussed them in the first week of this course.
Whatever you do in such a situation, you realize that you need to violate one of your values
in order to live another value. You're forced to get your hands dirty, one way or the other.
Ethical blindness is typically not an issue here. On the contrary, people are fully aware of
the conflict and they suffer or rationalize to cope with the situation. Like the poor
participants in the Milgram experiment, who were torn between disobeying authorities and
giving an electric shock to someone else.

Moral disengagement can be seen as a way out of such a situation, as a way of stopping the
suffering. According to Albert Bandura, there are several mechanisms we can use to
morally disengage. Each of them allows us to take actions that violate our moral standards.
The first three mechanisms concern how the behaviour is seen and evaluated. Moral justification
basically means that unethical behaviour is seen as having a moral purpose, which in turn,
makes it socially acceptable. Examples would be torturing in order to get some information
that's necessary to protect others or justifying holy terror by religious principles.
Euphemistic language can be used to make harmful behaviour respectable, and reduce
responsibility for it. For instance, military attacks are labelled as clean surgical strikes. The
victims are referred to as collateral damage, and terrorists call themselves freedom
fighters. An advantageous comparison contrasts one's own harmful behaviour with the clearly
harmful behaviour of someone else, thereby trivializing one's own immoral behaviour. For
instance, the American military interventions during the Vietnam war led to massive
destruction and these actions were advantageously portrayed as saving the local population
from communist enslavement.

The next mechanism is about the detrimental effects: minimizing, ignoring, or
misconstruing the consequences. It is relatively easy to harm others if the harmful
consequences of one's actions are ignored, if they are not visible, if they are not linked to
one's own action, or if they are realized at a very remote place. Psychologically, it makes a
difference whether someone kills a sleeping victim with a knife, or whether he is using a
computer mouse and a screen to navigate a drone in order to kill this person.

The next two mechanisms for moral disengagement are about the link between action and
effect. Displacement of responsibility distorts the link between actions and the effects they
cause. People are eventually willing to execute orders, if a legitimate authority takes over the
responsibility for the consequences. In such a case, an executor of an inhumane action may
not perceive it as his or her own action anymore. He or she is just the functionary. He or she
is only producing the effect, but the true actor is someone else. Remember what we
said about Hannah Arendt's banality of evil and the Eichmann trial. Eichmann argued that he
was just a bureaucrat, executing orders.

Diffusion of responsibility: when everyone is responsible, no one is responsible. This is
especially apparent in very large groups. Collective action provides anonymity, which allows
weakening of moral control. In a large group, it is, in fact, very easy for each group member
to perceive their own share and impact as minimal. So why exactly should you be the one
who does anything or stop doing something unethical, if there are so many in the same
position as you? You may have heard of Kitty Genovese, who was stabbed to death in 1964 at
night-time on her way to her apartment. This event became famous for an article that
appeared two weeks later in The New York Times, which claimed that many people
witnessed what had happened, but no one called the police or an ambulance, presumably
because everyone believed that at least one of the others had done this already. This special
kind of diffusion of responsibility has since then also been called the bystander effect or the
Genovese syndrome.

The last two mechanisms are related to the victim. Blaming the victim makes it easier
to do harm because, after all, the victim somehow deserves it. A perpetrator may later say, for
instance, that the other one started the whole fight with some provocations. This triggered
some justifiable defence reactions and at the end, the initial aggressor was suddenly and

somehow dead. So the victim is portrayed as the bad guy and what happened was his own
fault.

Dehumanizing the victim means that victims are no longer
seen as individuals with feelings, hopes, and concerns, but more like objects or animals.
This process of animalistic dehumanization is most commonly established through the use of
a metaphor. For example, the Nazis regularly compared the Jews to rats, and the Hutus used the
terms Tutsi and cockroach interchangeably in much of their propaganda. Killing a
human being is certainly harder than killing a disgusting animal.

It appears that unethical behaviour should not be observable as long as people do care about
others, but these mechanisms of moral disengagement do the trick. Normal people who have
intact moral standards and who care about others may eventually behave unethically
and they can even do so without having to change their moral standards. These
mechanisms may just offset the self-regulatory processes, which normally ensure that people
behave in accordance with their moral standards. We said that moral disengagement
typically starts with an awareness of some conflict. Something similar can be said when a
decision-maker perceives an ethical dilemma. As we have explained in the very first week of
this course, actors experience an ethical dilemma if they are in a situation in which values
clash with each other. That is, a situation in which people cannot behave without violating at
least one of their values. Acting against any of their values brings people in conflict with their
moral standards and is typically unacceptable for them. For instance, you might be expected
to do something in your organization that violates your feeling of justice, but at the same
time, your in-group expects you to do as they do and you want to be a loyal group member.
Once you decide to go in one or the other direction, a dynamic process is triggered. You
gravitate towards one of the options and you do so repeatedly, neglecting the other more and
more over time. Think of what we said about temporary dynamics and shifting baselines.
Every action you take will have an impact on you, who you are, how others perceive and treat
you, and what you will see. If you take sides, chances are that you, at some point, will no
longer be aware of the other value that constituted an important element of the initial conflict.
At some point you do not see it anymore and you may take actions that you perceived long
time ago as unethical, because these actions were inconsistent with one of your values. You
can become ethically blind, with respect to this particular dimension. First, it smells. Then, it
stops smelling.

Examples include journalists lying in their articles in an attempt to protect the environment,
activists destroying property when fighting for animal rights, or people killing others when
fighting for freedom or for the right religion or for some -ism, that is, some ideology.
Probably all experienced some conflict at the beginning. Some managed to always see the
conflict and the dimensions and values involved and they always tried to strike a balance.
Others managed to suppress one side, to morally disengage from one dimension and from one
value, and ultimately, behave unethically with regard to this dimension, in an attempt to do
good with respect to something else they focus on.

As we see, it is not easy to navigate through the complexities that modern life presents.
Ethical and unethical behaviour is an interesting topic, but also a thorny territory. It is often
not easy to evaluate the actions and omissions of others who see the world surrounding them
from their point of view, which is definitely not ours. That being said, tolerance is a value we
shouldn't forget.

To summarize, there are various types of unethical decisions. Some people sometimes have
bad intentions, and often they know that what they do is wrong, unethical, or illegal. Another type
that we discussed is that people just don't care for others. Such egoists typically produce
allocations of resources that others may find unethical, but that will not bother the egoists.
They don't care what others think about them. The type of unethical behaviour that we
find most interesting in the context of this course is the one in which people do care about others.
They have good intentions and moral standards. Here we looked at the continuum that starts
with awareness that one's own behaviour is unethical, coupled with attempts to nevertheless
continue with this behaviour. This requires, though, that the self-regulatory system is offset.
We discussed various mechanisms that may help us to morally disengage, as Bandura has put
it. Something similar can be said for an ethical dilemma. An ethical dilemma is characterized
by awareness that there is an ethical issue. If we consider the temporal dimension, we can see
that both the process of moral disengagement and also the way we handle a dilemma could
eventually result in ethical blindness. That is, in a mental state in which the awareness is
gone, the conflict is no longer perceived as such, and people behave unethically without
noticing it.

VII. Week Seven

a. First Video

Welcome to Week 7 of our course on Unethical Decision Making. In the last six weeks we
have discussed unethical behaviour in organizations. We have tried to understand why such
behaviour may occur. Thereby, we focused on one possible explanation, ethical blindness.
People do not see that what they are doing is unethical. After so many videos on unethical
behaviour, on ethical blindness, and on the dark side, we now, finally, in this last week want to
turn to ethical behaviour. More precisely, we want to address the question: how can we promote
ethical behaviour, and how can we fight against ethical blindness?

A good therapy requires an accurate diagnosis. So to summarize, what were the main risk
factors for ethical blindness that we discussed? One was located first and foremost within
ourselves. We discussed framing, the way we see the world, and decision making, the way we
process information. In both cases, we have discussed people's tendency to simplify and to
reduce complexity. Another family of risk factors are external pressures. And we can clearly
locate their sources outside of us. We discussed external pressures in a given, proximal,
situation. We looked at pressures at a meso-level, the organization in which people are
embedded. And finally we considered the distal context, institutions and ideologies. All three
layers, the immediate situation, the organization, and institutions and ideologies constitute
our external environment and are hence closely related to each other. This is, by the way,
why I refer to them as a family of risk factors that could blind us.

It seems straightforward to adopt this distinction between mind (framing, decision
making) and environment (external pressures) as we now address the question of how to
promote ethical behaviour and how to reduce the risk of ethical blindness.

We will start by focusing on the environment with the present video entitled, Nudging. In the
next video entitled Mindfulness, we will focus on the mind. And subsequently in the last
video of this week, we will address this question from an organizational point of view. In this
session you will learn what nudges are. You will learn how nudges can be used to promote

ethical behaviour, and finally, understand some limitations of nudges and problems of this
approach.

So what are nudges? A nudge is any aspect of the choice architecture that alters people's
behaviour in a predictable way, without forbidding any options or significantly changing the
economic incentives. To count as a mere nudge, the intervention must be easy and cheap to
avoid. Nudges are not mandates. Putting fruit at eye level counts as a nudge. Banning junk
food does not. Increasing prices does not. In short, nudges are interventions that change
the decision-making environment to affect behaviour in systematic ways without
changing the economic incentives and without constraining choices. The concept has been
made popular by Richard Thaler, a professor of behavioural science and economics, and by Cass
Sunstein, who is a law school professor. These two see nudges as tools that can be used in
what they refer to as libertarian paternalism. Paternalism builds on the distinction between
parents and child. A child is someone who should, and possibly also needs to, be educated and
protected so that it cannot harm itself or others. A parent is usually and ideally already
educated, knows better, has more power, good intentions and uses this power and knowledge
for the best of the child. So this is clearly a hierarchical relationship. Liberalism, in contrast,
emphasizes freedom, for instance, freedom of choice. And it is not very fond of hierarchy,
power, control, and influence. Liberal thinking is against strong governments and authorities.
The term libertarian paternalism combines the two isms, and hence, appears to be at first
glance a contradiction in itself, but only at first glance. An example makes it obvious that the
combination is in fact not so contradictory. Imagine you are an owner of a little grocery store
or a manager in a big supermarket, that doesn't matter. Research, and your own experience
tell you that putting items at certain places will influence people's decisions and increase
these items' chances of being picked. For instance, by placing them at eye level, or at the
cashier, where people are forced to wait and often purchase items spontaneously and
impulsively.
This example shows that customers can freely choose. They can ignore those items at the cashier,
and they are entirely free to pick up the items somewhere in the corner, down in the shelf.
This is the libertarian aspect. But at the same time, the choices will be made in an
environment that you, the shop owner, control and design. And you know that placement has
an effect, and you can hence use your knowledge to steer customers' purchase behaviour. This
is the paternalistic aspect.

You cannot not decide where to place which product. You have to place the stuff somewhere.
Whatever you do, you nudge. You create your customers' choice environment. And this is
why you find this term choice architect quite often in the corresponding literature. How is this
related to ethics? Imagine you, the shop owner, know, or at least believe, that some products
are more ethical than others. They are maybe produced in a more sustainable way by
companies that have higher social standards, are members of the Fair Labour Association,
signed on to obey fair trade agreements, whatever. Your customers may not have this
background information about the companies and their products, or they may not care. And
some may not even see the ethical dimension involved in purchase decisions. They are
ethically blind in this regard. But you have the information. You do care. And you are
convinced that producing, selling, buying and using product A compared to B makes the
world a better place. In this situation, you as the shop owner and choice architect can use this
nudge, where to place these products, to increase the rate of ethical choices among your customers.

Let's now look at some other nudges. They all follow the same scheme. Some people may not
see the ethical dimension of a decision. But someone else, who has control over the choice

environment of this person, does. This choice architect then uses his or her power to control
the choice environment, and also his or her knowledge on how this best should be done to
make those people behave more ethically. An example of a nudge that is often given in the
literature is the use of defaults. And here a prime example is organ donation. Every year,
many people who need an organ transplant die because there are not enough organs. There
are various reasons for this bottleneck: logistics, transportation, timing, the functioning of
matching procedures. But before all those issues become relevant, there is an important legal
obstacle related to the question: who owns the organs of a dead person? Are the
physicians allowed to take the organs after death for this purpose? The answer is definitely yes
if the person, before death, gave his or her consent to be a donor. This figure displays for
various countries the proportion of people who gave their consent. One may have different
views here. But assume for the moment that someone asked you for your consent and argues
as follows. After death, you don't need your organs anyway, but for someone else they could
make the difference between life and death. Hence giving consent to organ donation is
ethical, and not giving it is unethical. To the extent that we are willing to adopt this
argument, how can it be that a vast majority of people in countries like Austria, France,
or Portugal are ethical, whereas most people in Denmark, the UK, or Germany are
unethical?
The difference between these countries is the legal default. In France, for instance, you are an
organ donor by default, and you would have to opt out if you disagree. In contrast, the
default in Denmark is that physicians may not use your organs unless you actively opted in
before death.
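To make the default effect concrete, here is a toy model (our own illustration, not from the course or from the underlying studies): assume a fixed fraction of people simply stay with whatever the default is, while only the rest make an active choice. The function name and all numbers are hypothetical.

```python
# Toy model: how a legal default shapes the effective donor-consent rate.
# Assumption (ours, for illustration): a fraction of people never change
# the default ("inertia"); only the rest actively decide for themselves.

def consent_rate(default_is_donor: bool,
                 stick_with_default: float,
                 active_consent: float) -> float:
    """Effective share of registered donors under a given default."""
    passive = stick_with_default if default_is_donor else 0.0
    active = (1.0 - stick_with_default) * active_consent
    return passive + active

# Hypothetical parameters: 80% inertia, 50% consent among active choosers.
opt_out = consent_rate(True, 0.8, 0.5)   # opt-out regime (France-style)
opt_in = consent_rate(False, 0.8, 0.5)   # opt-in regime (Denmark-style)
print(f"opt-out: {opt_out:.0%}, opt-in: {opt_in:.0%}")  # opt-out: 90%, opt-in: 10%
```

With identical preferences in both populations, the sketch yields 90% versus 10% registered donors; only the default changed, which mirrors the country differences described above.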

Another nudge, related to the last one, makes use of people's herding behaviour. People tend
to follow not only legal defaults but also social norms. Earlier, in week five, on the
power of strong situations, we already introduced you to the experiments of Solomon Asch on
group conformity. If people are informed about what the majority of people does or chooses, this
functions like a default. Many just imitate and stay within the herd, rather than opting
out, so to speak, leaving the herd and becoming the black sheep.
In a series of simple but very cleverly designed experiments, Goldstein, Cialdini, and
Griskevicius compared the effectiveness of various messages that aimed at getting hotel
guests to reuse their towels rather than sending them to the laundry. Nudging the guests
towards reuse is not only better for the environment, and hence more ethical, but it also
reduces costs for the hotel. What were these messages? One was an appeal: Please help us
preserve natural resources by reusing your towel. Reuse rate with this appeal: 35%.
Another message was a social norm: 75% of our guests use their towels more than once.
Reuse rate: 44%. So, interestingly, social norms appear to have a larger impact on behaviour
than appeals.
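As a quick arithmetic check on the rates just quoted, the social-norm message beats the appeal by 9 percentage points, which is roughly a quarter more reuse in relative terms; a minimal sketch:

```python
# Reuse rates as quoted in the towel study discussed above.
appeal_rate = 0.35        # "Please help us preserve natural resources..."
social_norm_rate = 0.44   # "75% of our guests use their towels more than once."

absolute_lift = social_norm_rate - appeal_rate   # in percentage points
relative_lift = absolute_lift / appeal_rate      # relative improvement

print(f"absolute lift: {absolute_lift:.0%}")  # absolute lift: 9%
print(f"relative lift: {relative_lift:.1%}")  # relative lift: 25.7%
```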

In a related study, members of a research team led by persuasion researcher Robert Cialdini
went door to door and placed hangers on the doorknobs of houses in San Diego, California, with
messages about energy conservation. There were four different messages. One urged the
homeowners to save energy to protect the environment. A second message said to conserve to
benefit future generations. A third pointed to the resulting cost savings. And the fourth used,
again, a social norm, by saying that most of the neighbours were taking steps to save energy every
day. So the first two messages made an ethical argument, the third a monetary one, and the last a
social one. At the end of the month, the team came to read the meters, and they found
that the only message that reduced consumption was the one with a social norm.

Still another study, again on energy consumption in households, revealed a very interesting
finding. Households were accurately informed about their own consumption relative to
that of their neighbours. Some were informed that their energy use was above average. Guess what
happened? Energy use went down. Others were informed that their use was below average.
What happened here? It went up. So there was a boomerang effect; the information backfired.
This effect has been discussed in terms of licensing: if someone knows that he fulfils some
norm, that he has done some good, this gives him the license to, let's say, lower his standards.
Interestingly, this pattern changed in another experimental condition in which people not
only received the numerical information but, in addition, also some emoticons that were
simply placed next to the numbers. For instance, sad emoticons for those whose consumption
was above average. The effect: consumption dropped even more compared to those who only saw
the numbers. Even more interestingly, the boomerang effect disappeared for those whose
consumption was below average. A simple smiley next to the number carried an evaluation,
call it social approval, you are doing great, and there was no licensing anymore.

I found this very remarkable. Simply providing people with information that can be
interpreted as "you are better than average" with respect to something that has an ethical
dimension can be detrimental and can backfire. But once you approve it, once you tell
them "this is great, you can be proud of yourself, we certainly are", something like this,
then it will not backfire.

Consider still another example, also related to conformity. In an experiment, the effectiveness
of four kinds of information to increase tax compliance was compared. One group of
taxpayers was informed that their taxes were used for charity work. The second group was
threatened with punishment for non-compliance. Group number three received
information about how to get help filling in the forms. And the last group was informed that
90% of the population in their city had already paid their taxes. Guess which kind of
information was most effective at increasing compliance? Again, the last one, the one that
involved a social norm. The implication of the study is obvious. If you want to promote ethical behaviour,
then talk about ethical behaviour. Communicate how many are already on this track, though it
only works if this is already the majority, of course. Create positive messages. Set role
models. Do not talk so much about unethical behaviour, or, in the present example, about those
who did not pay their taxes. Some people who hear or read this may think: aha, there are
such people, so obviously this is an option. Instead, use the power of social norms, of herding
and conformity, and spread messages about the behaviour that you want to be imitated.

Time only allowed us to present and discuss some nudges. But it is very easy to retrieve
literature, examples, and studies on nudging on the Internet and in other databases. Let us
finally address two criticisms. One has been formulated by Elizabeth Kolbert as follows: If
people can't be trusted to make the right choice for themselves, how can they possibly be
trusted to make the right decision for the rest of us? Let us add a follow-up question: If
choice architects control decision makers' environments, who is controlling these controllers?
If we all agree what is ethical and what is not, and if we all want to promote ethical behaviour,
then who is left to be nudged into ethical behaviour? If, however, there is profound
disagreement about what is ethical in a given situation, then the issue gets thorny.
Who is nudging whom? Nudges are interventions; they are tools. It all hinges on the question
of who is using these tools and for what purposes. So here we run into problems of
legitimacy, of control, and of checks and balances.
The second criticism, or maybe better call it a limitation, that we want to mention here is the
generalizability and stability of ethical behaviour. Let us go back to the example of the
shop owner who has background knowledge about companies and products and who nudges his
customers into what he believes to be ethical purchase behaviour. Granted, these customers,
in this shop, might have bought more ethical products after his intervention. But they did so
because they were nudged, and not because they took ethical dimensions into account.
Hence, they remained ethically blind, and will most likely no longer buy these more
ethical products in another shop without such a nudge. Lesson: changing choice
environments without changing the people may not be very sustainable. Once the people
are in a new choice environment, the benefits of the nudge may be lost. And this is
exactly the topic of our next video: how to promote ethical behaviour and how to reduce the
risk of ethical blindness, not via changing the environment, but via changing the people.

To conclude. According to our model on ethical blindness, unethical behaviour may result
from factors both inside the decision maker (framing and information processing) and outside
(various kinds of pressures in the environment). Ethical behaviour can be promoted by
focusing on factors inside as well as those outside the decision maker. Nudges are
interventions that change the decision making environment to affect behaviour in systematic
ways (the paternalistic aspect) while leaving people free to still choose any of the options
without any material costs (the libertarian aspect). Examples of nudges are information displays,
defaults, and social norms. But there are many more. Two major criticisms are legitimacy and
control on the one hand, and limited generalizability and robustness due to lack of insight on
the other hand.

Welcome to the first video of week seven. Over the last weeks we analysed the forces that
drive ethical blindness. We learned that contexts can be stronger than people, stronger than
values. We learned that we can do the wrong things despite the values we have and despite the
good intentions that might drive us. Today, we will learn about possible defence
strategies. We will learn how we can defend ourselves against ethical blindness.
Let us start by looking at the main goal of this video first. As I said, contexts can be stronger
than people. They can push us in the wrong directions. But we are not robots. We can defend
ourselves against ethical blindness. So in this session, you will learn about individual defence
strategies, and you will be familiarized with basically four defence lines that you have at your
disposal to fight against ethical blindness.

This course basically assumes that the things we do depend to a certain degree on the context
in which we operate. Strong contexts can switch off reason. They can impose particular
routines of perceiving the world on us. And over time, we might perceive the world through
an ever narrower frame, so we see less and less of what we should see. Decisions thus are
often made on autopilot. They are mindless. They are unconscious. They are action without
thinking. And as a result, we might run into ethical blindness. We might lose the ability to see
the ethical dimension of a decision that we are making. So, because of that, we are highly
vulnerable to making unethical decisions.

Solutions or defence strategies against ethical blindness have to be analysed on all the levels
that we have seen, the individual, the organization, society. In this video here, we will focus
on what we can do as individuals. As individuals, we are in a kind of difficult situation. We are
not just victims of context; we create context. As leaders, for instance, we are one of the main
drivers of narrow contexts for others; we push others into strong contexts. So, the
difference between being a victim and being a perpetrator is often very fuzzy. You remember
the fairy tale that we discussed at the very beginning of this course, where we saw that the
emperor creates the atmosphere of fear, but then he becomes the victim of that very same fear
himself. Leaders that trigger a culture of aggressiveness in their teams, in their organizations,
often will, at a certain point, struggle to keep that aggressiveness that they created under
control. It's a bit like the scientist Frankenstein, who creates a monster in Mary Shelley's
novel and then loses control over that monster. This difference between being a victim and
being a perpetrator is even less clear when we look at group pressure. Who makes the
pressure? Who is the victim? These can often be the very same persons.

Let me tell you a story to illustrate the power of strong frames on our own perception of the
world and our own behaviour. Recently, my two boys, aged nine and 13, were shouting like
crazy. I couldn't stand it anymore, so I was shouting at them to stop shouting. And then I
remembered my own work on blindness, and it dawned on me that something was wrong with
the way I reacted in the situation. I was embarrassed by my own stupid reaction. But the good
thing here is: even I, as someone who has worked on ethical blindness for quite a while
already, am not completely protected against it. I fall into this trap as well. But being
sensitized to it, I now have the option to escape from it, to perceive my own blindness and to
develop strategies for getting out of that trap. I could change my frame when I was reacting
to my two boys.

So, if rigid framing is the problem, flexible framing must be part of the solution. What is
flexible framing? Flexible framing is the ability to apply a holistic, a broader view, to a
decision at stake. As we've seen, the problem is narrow frames. We see less and less of the
world. So, making our frames broader, opening up the horizon of what we can see, is very
important to find a solution or a defence strategy against ethical blindness. Flexible framing
basically requires the combination of various abilities. The first thing is being mindful.
What does it mean to be mindful? Mindfulness means, in our case here, to step out of a
routine and to decide consciously. Our course gave you some tools to know about the forces
that create strong contexts, so we might understand better in the future how these contexts
emerge, how these constellations slowly build up around us. Constellations of factors like
leadership, group pressure, time pressure, and aggressive language can form a whole that
then pushes us towards behaviour that we might not want. So you can see it coming, in
principle.

But we should always be aware of the fact that knowledge alone does not protect us, as you
could see in the story with my two boys. But you can also see this if you look at the
experiments of Milgram, Zimbardo, and others. The Milgram experiment was recently
repeated on French TV. We as scientists can no longer do this; it is forbidden for good
reasons, because it exposes participants to risks, but television can. So there was a reality
show in which exactly the same setting of the Milgram experiment, with the electric shocks,
was repeated, and obedience went up to roughly 80%. One of the participants later
explained that she knew about the Milgram experiment. But while she was in this TV show,
she forgot about it. She was sucked into the context. The context was stronger than she was,
despite the knowledge she had about exactly that kind of context. So what can we do? We
should always consciously observe our own decision-making situations. We should check, on
a regular and systematic basis, things like the shifts in the culture around us. Is it moving in a
dangerous direction? Do we have elements of the drivers of ethical blindness that we have
seen in our course, such as an aggressive leadership style, highly individualized bonus
systems, or humiliating performance measurement systems? Do we have combinations of
those that push us towards a certain behaviour? Do we see the
emergence of in-group/out-group perceptions in the people around us? Do we have leaders with
a master-of-the-universe attitude? Do we approach the world and our team with
oversimplified interpretations?

Furthermore, what we realized when we discussed the temporal dynamics of ethical
blindness is that the very important thing is how it starts. Remember that the first step is the
decisive one. Mind the beginning. We have seen that we do not start to behave in a blind
way from one day to the next. It slowly develops over time. It comes in small steps. And
since we only compare the last two or three steps when we move forward, we might forget
about the beginning, when we were still full of integrity. So, the very important thing is that
we should not make compromises on what we believe is the right thing to do, from the very
beginning, even if it is about very small compromises. We move onto a slippery slope if we do
this. One sharp weapon against ethical blindness therefore is to stick to the rules even if
we have the impression that the transgression is harmless. We systematically overestimate
our power to stop what we have started. So, narrow frames develop slowly over time. They
creep into our unconscious. We have to be frame-vigilant. We have to understand what
kind of perspective our frame imposes on us. What can we not see when we apply a particular
frame? For instance, if you are a manager and you make a decision on outsourcing,
say outsourcing a production activity to another country, what you normally do is
make an economic analysis of that outsourcing decision. So you frame it as a purely
economic decision. But if you frame it as a purely economic decision, you might not see the
risk of human rights violations to which you might become connected through this decision
later on.

The second defence line that we have at our disposal is the ability to imagine a broader
set of options when we make decisions. Coming back to this outsourcing decision, what we
might want to do is to frame it also, systematically, through a moral lens. We might look at
other corporations that made similar decisions in the past. How did they do it? We
might enter into a dialogue with non-governmental organizations that are specialized in these
kinds of challenges. We might join a multi-stakeholder initiative, which is an initiative of NGOs,
corporations, unions, and sometimes governments that deals with the problems that might occur,
for instance, when you outsource to countries where human rights are not protected as they are in
your own context. We might apply a broader time horizon to our analysis, which already
changes the frame considerably. So, what we should keep in mind is that if we frame a
decision as a purely economic decision, as a purely engineering decision, or as a purely legal
decision, we run into the trap of ethical blindness. We need a broader lens. We have to look at
the decision from a cultural perspective, from a political one, as broadly as we can, and then we
might make a more mindful decision in the end. Being frame-vigilant means we are able to
break the frame that drives our routines. We have to create a culture around us that makes this
possible. We have to invite dissent, even if dissent is not always easy to bear. Hannah
Arendt, the philosopher, called this ability moral imagination: the ability to imagine
as broadly as possible what we are going to decide. The two colleagues Russo and
Schoemaker cite one of the most famous CEOs in the history of American business,
Alfred P. Sloan Jr., who was at one point the CEO of General Motors, as follows:
Gentlemen, I take it we are all in complete agreement on the decision
here. Then I propose we postpone further discussion of this matter until our next meeting, to
give ourselves time to develop disagreement and perhaps gain some understanding of what
the decision is all about. So, in other words, he tries to break two elements that create a
narrow frame in his particular team: group conformity and time pressure. He gives them
time, and he invites them to dissent from the consensus, to find other ideas around the question
at stake. So up to here, we have seen two of the four defence lines against ethical blindness,
and I will be happy to continue our discussion with the next two lines of defence.

b. Second Video

Welcome back to our session on mindfulness, on defence strategies against ethical blindness.
And we will now continue with our third line of defence against ethical blindness. The third
defence line has to do with our weakness of overestimating our power to control what
we do. We overestimate our integrity; we overestimate the positive perception we have of
ourselves. We might look into the mirror and perceive ourselves as beautiful princesses when,
in reality, we are monsters; we just do not realize it. So, developing a better knowledge of
ourselves and of our values is a key element of defending ourselves against ethical blindness.

You might remember the session we had on institutions, where we saw that large parts of what
we do, large parts of our decisions, are driven by unconscious routines, by autopilot. But
behind our routines there is a set of values and beliefs about the world that
created these routines in the first place. In some contexts, the beliefs that built up these
routines might get distorted. They might get buried under more salient values that the
immediate situational context pushes, like greed, like competition with others. But these
buried values are still there. You can see them when someone wakes you up from your
state of ethical blindness, when you are taken out of your context, when you realize that what
you did was wrong. You remember, in the session on ethical blindness, we discussed the
phenomenon that blindness is just a temporary effect driven by the situation. If you take the
person out of the context, he or she might realize that what he or she did was wrong. And one
of the effects by which you can see this is that, when people are taken out of the context, they
often ask themselves: how could I ever do this? This is so against the values I feel
inside. And they have no answer. You have some answers now, after these seven weeks on
ethical blindness, of course. Our values are our moral compass, even if we don't see them,
even if they are buried, even if we might struggle to list them when asked what our values
are, because they are unconscious.

So, it is true that some people don't have values; they don't have that compass. They make
decisions in the wrong direction by intention, and in our course we do not want to deny that
option; we do not want to deny the existence of bad apples in organizations. Indeed, they are
there all the time. The bigger an organization, the higher the probability that you have
criminals in it. However, what we have claimed in this course so far is that this does not explain
many of the large scandals that we have seen. It does not explain how whole cultures can
get corrupted by wrong practices. So in our course, we do not think about solutions for the
bad apples. We think about solutions for people who do the wrong things against their good
values, against their good intentions, being sucked into a context that then takes control over
what they do and what they think.

So, what we have to do, is we have to strengthen those values in our decisions. Think about
the session we had on dilemmas in our first week. One of the aspects that we highlighted
there is that we should know our values. We should analyse our options in the decision
making situation against the background of the values that we hold that are important
for us. So we should ask ourselves from time to time: what are my values? What
is important for me in my life? What is not negotiable? What kind of compromise am I
willing to make, and where would I want to stop? If you want to understand what your
deep values are, think about critical decisions that you made in the past, where you had to go
in one or the other direction, where you felt more or less comfortable with the choice you
made in the end. Or look at your own biography and the direction it took. Are you happy with
what you decided in the past? If yes, why? If not, why not? Where are the points at which you
would have made different decisions if you could do it again? Or imagine you are about to die.
You look back at your life. Ask yourself: at what kind of life do I want to look back when I am
about to die? What is my idea of a good life, summarized from the end of it? What is my
vision of a good life? And how do I want to get there from here, once I understand what my
vision is?

Or another way of getting at your values is to think about a situation where you were too weak
to really do what you thought was the right thing to do. You just obeyed. You followed the
group. How did you feel about this? What decision would you make instead today, and
why? Why do you feel that this decision was wrong? Normally, we avoid asking these kinds of
questions, stuck in our routines. But very often we avoid doing the things we believe are the
right things to do because we fear. We fear losing something. We fear being humiliated. We
fear being marginalized by others. We fear not fitting the mold of other people's
expectations. Sometimes, even, we fear violence. So if you were to ask me what the main
driver of ethical blindness is, my answer would be: it is fear. Therefore, when we think about
fear, when we think about decision-making situations where we were driven by fear, we
should ask ourselves: how would I decide if I had no fear? This simple question might wake
us up to what is the right thing to do. In situations where we are not clear about where to go,
where we are unsure about whether or not a compromise is worth making, we should just ask
ourselves this simple question: how would I decide if I had no fear? And you will see, if you ask
yourself this question, it will reveal what is really important to you. It will wake you up. It
will release you from all the pressures of your context, at least in your imagination. And then
you might still do the wrong thing, but then you do it consciously, and you have no excuse;
you cannot shift the blame to the power of the context anymore. So it is our values that provide
the material for the ethical framing of our decisions. It is my own character, my own
identity, where I should start to think about change. When I want to deal with a strong situation,
it is not the situation where I should start. Think about what the famous business scholar Karl
Weick once said: If people want to change their environment, they need to change themselves
and their own actions, not someone else's.
In ancient Greece, at the Temple of Apollo at Delphi, the inscription read: know thyself.
We have to understand ourselves as weak actors in strong contexts. We have to understand
how contexts can overpower our goodwill. We have to know our own weaknesses in order to
better deal with them and to defend ourselves against strong contexts.

So if we know our values better, if we have a clear idea of our ideals, if we know which kind
of path we want to choose for our life, we can defend ourselves against strong contexts in a
decent way. Finally, imagine the construction of your defence line against ethical blindness as
a permanent activity. The ancient Greek philosopher Aristotle understood morality not as a
set of values, not as something that you can put into a code of conduct and then read. For him,
it was profoundly about the training of your character. So we can lose our morality if we stop
practicing. You must imagine morality as a kind of sports ground on which we exercise
every day to routinize ethical decisions, against the context that very often
pushes us towards routines that exclude ethics as an element.
The philosopher Günther Anders once called this moral stretching. So, we have to stretch
our moral muscles all the time to keep ourselves fit in the contexts where we may need them. If
we don't exercise our moral muscles, they get weaker.

So our contexts push us towards narrow frames and mindless routines. But what we can do is
defend ourselves by creating at least islands of mindfulness, islands of mindful
decisions in this ocean of mindless routines. Mindlessness is the problem. Mindfulness is part
of the solution.

So let me conclude this video by summarizing the four defence strategies that we have at our
disposal against ethical blindness. The first one is mindfulness: try to step out of your
routines and decide consciously. The second one is moral imagination: try to imagine a
broader set of consequences of your decision and a broader set of options. The third one is
self-knowledge: develop a deeper knowledge, a deep understanding, of yourself, your values,
your beliefs, your vision of life. And finally, the fourth element is moral stretching: the
right behaviour results from the constant training of your character. If you follow these four
pieces of advice, you might still fall into the trap of ethical blindness, but you will be better
equipped to defend yourself than others are.

c. Third Video

Welcome to this third video of week 7 of our course on Unethical Decision Making. In our
previous video, we discussed how individuals can resist the pressures that may lead to ethical
blindness. In this video, we want to give you some idea of how you can influence your
organizational context in order to reduce the risk of ethical blindness.

In this session, you will understand how leaders can evaluate the risk of ethical blindness in
their organizations. You will get familiarized with the key questions you might want to ask
when designing an organizational context that promotes integrity.

It can be extremely difficult for an individual to resist those psychological and sociological
forces that we have discussed in our previous sessions. But we have also looked at some
possible solutions from the perspective of an individual actor. What about organizations?
What can be done within an organization to protect its members against ethical blindness?
Implementing changes on the organizational level is of course easier for leaders than it is for
subordinates. Therefore, the following recommendations are mainly formulated for leaders.
But this does not mean that you, if you are just a team member, should stop this video here.
Team members can also take on responsibility. Granted, they may need to be a bit more
careful when it comes to initiating changes, but some bosses actually appreciate it when they
get support from team members who act with integrity.

The first and most important step to fight ethical blindness is to increase awareness of the
dynamics of strong contexts. Watch out for signals of ethical blindness in your context. As a
leader, you may want to analyse the situation in your team using the following checklist. But
keep in mind what I just said: basically everybody, also team members, could do that.
- Our time pressure is intense.
- We are completely absorbed by our work.
- The pressure to perform is very strong.
- Our objectives are not realistic.
- If somebody does not fit in here, they usually leave soon.
- The language in our company is very aggressive.
- Fear is a widespread feeling in our company.
Your answers to this checklist will give you an idea about the overall dynamics
in your organization or in your team. And they also give you an idea of what you might want to
change, change in order to promote a broader perspective among the people in your organization.
We will now zoom into some selected types of context, starting with time pressure.

If you're a leader in an organization with huge time pressure, help your people to press the
pause button when making decisions. Research clearly shows: if you take three minutes of
reflection, you increase the probability of ethical decisions. So motivate others in your
organization to take a deep breath, to bring themselves into the present moment, to investigate
what their automatic routine behaviour would have been, and to reflect on the alternatives they
could consider. Try this kind of conscious break-taking before decision making so that it
becomes a habit for you, and ideally across your organization.

Leaders have to be authorities in their organizations, but they should not abuse their power.
Rather, they should lead as role models. And in particular, they should empower their team
members to speak without fear. Fear, as you remember, is one of the key drivers of ethical
blindness. So as a leader, you should communicate clearly on the values of the organization.
You should be open to the critical statements of others. You should encourage reason-based
dissent. You should be clear about the rules of the game. You should not lead with vague and
ambivalent messages. And you should not leave your people confused about the rules of the game.
With regard to obedience, as a manager of an organization, you should ask yourself: am I
communicating clearly and regularly on questions of ethics and compliance? Do I act as a
role model for integrity? Do I make it clear to my team that integrity is important for me? Do
I respond promptly and decisively when compliance failures occur?

Next, investigate the organization's processes and management systems. It is important to
promote management systems that clearly align target setting, evaluation, and incentives
with integrity. Don't push decision makers into the perception that they have to choose
between success and integrity. Pressure is of course an important lever for motivation;
however, too much pressure is a main source of ethical blindness. It is important to manage
wisely the balance between too much and not enough pressure in your organization.

Here's another issue: promote clear role expectations. As you have seen in the discussion of
the prison experiment, unclear role expectations can promote ethical blindness. A challenge in
organizations is that they're often dominated by the feeling, or the experience, that it is
inappropriate to raise ethical issues, to talk about them. We do not want to be perceived as
demoralizers and thus often avoid talking about ethics. It contradicts our perception of how
we believe others see the role of a manager. Generations of managers have gone through a
similar socialization at business schools: managers have to be tough, and they have to be
focused on profits. This narrow understanding of management responsibility is clearly
outdated today. Good managers need a broad understanding of roles and responsibilities in
order to avoid unethical decisions. And this is not only important from an ethical perspective:
unethical decisions can be very expensive for organizations. So, with regard to role
expectations, as a manager of an organization you should ask yourself: what are the qualities
and characteristics of a successful leader in our organization? Who gets promoted in our
organization, the aggressive but successful person, or the reflective person who looks at
decisions from a broader perspective?

Another key driver of ethical blindness is the locus of control effect. If we feel that we are not
in control of the situation, we might disconnect from consequences. In such a case, we can do
horrible things without feeling responsible. It is thus important to help people develop this
feeling of being in control of a situation. If you empower your team members by giving them
the feeling that their contributions are relevant, and by showing them that they can make a
difference, they will have the perception of being in control. For instance, look at the impact
of a democratic leadership style: it increases the feeling of being in control of one's decisions
and of the consequences of those decisions. So, with regard to the locus of control effect, as a
manager of your organization you should ask yourself: do people in my organization have the
feeling that they can influence things, or do they feel driven by someone or something else?
Do our people tend to say things like, there is nothing I can do about it, this is not my
responsibility?

One aspect of organizational dynamics you have to understand better in order to avoid ethical
blindness is the so-called slippery slope effect. Evil will enter your organization in very
small steps, steps that might look pretty harmless in themselves. Therefore, you must insist
on the rules of the game, even if transgressions seem harmless. As Cat Stevens once sang, the
first cut is the deepest. You might remember from our session on temporal dynamics that the
commitment to a particular belief or behaviour can escalate over time. We move forward on a
slippery slope. We lose the ability to stop a dynamic which might have started with
something very small. Therefore, small compromises on values and rules are already
dangerous. Be attentive to your own compromises on rules, and to those that you observe in
your organization. Ring the alarm bell when you see slippery slopes around you. So, with
regard to the slippery slope, as a manager of an organization you should ask yourself: are we
maintaining the same level of integrity standards, or are we relaxing them over time? Is
ethical misconduct made transparent and corrected, or do we rather not talk about it? Do we
apply the Code of Conduct without any exceptions? Do we deal with compliance issues early
and thoroughly?

Let us wrap up this session. As a leader in an organization, you can influence the context in
which others make decisions. We recommend that you analyse your organization for
factors that may increase the risk of ethical blindness, such as time pressure, obedience,
management systems, role expectations, and locus of control. Manage these factors in a
way that reduces the pressures on the team and the organization.
