6. Advances in Behavioural Economics and Finance

René Doff

Chapter 3 introduced the concept of homo economicus: a fully rational actor in the financial
markets able to reach their investment objectives in an optimal way. We will show in this
chapter that humans are not so fully rational, but rather suffer from all kinds of human biases
and heuristics. A bias is a consistent and systematic behaviour (ie, not random) that deviates
from rational behaviour as defined in traditional economic theory. Heuristics are problem-
solving approaches such as rules of thumb or irrational shortcuts that result in biases. This
chapter will assess the key insights of a large body of behavioural economic sciences. Since
this body of science encompasses entire libraries of books and journals, it is impossible to do
justice to it all, so we will focus on the most crucial areas and try to link our insights to the
extent they are relevant for the risk manager. Interested readers should consult the
bibliography at the end of this book.

This chapter, and the following, will investigate the theoretical foundations that we will use
later in the book. Some of these theories might seem quite complex, which is why examples
will help illustrate the essence of each. The application of the behavioural economic
framework to risk management processes will follow in Chapters 11 and 12.

We will first discuss the development of behavioural economics as a branch of science and
its critiques of the traditional neoclassical framework. The chapter will show that there are
human biases that can result in suboptimal decisions, and will identify those biases relating to
individual behaviour and group behaviour separately. One of the main contributions of
behavioural economics that is essential in the risk management domain relates to risk
aversion. On that basis, we will discuss the relevance of prospect theory, and examine how
emotions also drive behaviour and why this is important for a risk manager to understand.
Contrary to the neoclassical paradigm, the chapter will explore how economic systems are
prone to booms and busts, and why it is important for risk managers to identify areas where
booms (and consequent busts) might occur. This important contribution to behavioural
finance leads on to an investigation of entire financial systems in Chapter 8.

BEHAVIOURAL ECONOMICS AS A SCIENCE


As discussed in Chapter 3, economics as a science developed significantly during the 20th
century. The cornerstone of many theories is the assumption of the homo economicus: that
agents in the economy are rational and wealth-maximising, and are perfectly able to process
all available information instantaneously. As a consequence, economic markets will
eventually reach an equilibrium state. Since the late 1970s, however, that assumption has
been challenged by counterintuitive observations reported in many academic journals. Economic
scientists have argued that the equilibria of neoclassical economic theories are not likely to
occur because:

economic systems are too complex to understand, something that neoclassical theories
oversimplify,1 and as a consequence are full of booms and busts; and

economic actors are not as rational as neoclassical theories suggest, but rather their behaviour is
determined by all kinds of psychological aspects that are overlooked by neoclassical theories.

We will address the first item later in this chapter. The behavioural aspect eventually evolved
into a field known as behavioural economics (or behavioural finance when applied to the
financial industry and capital markets). For example, the Nobel Prize-winning Thaler
explained how even a group of economists behaved contrary to what the assumption of homo
economicus would suggest.2 Ground-breaking research on human behaviour was carried out
by two psychologists, Daniel Kahneman and Amos Tversky. Jointly with a large number of co-
researchers, they developed a framework of which we will here describe two crucial elements
and their consequences: bounded rationality and our changing attitude towards being risk-
averse or risk-seeking.

The first element of behavioural economics is that people are not fully rational and cannot
digest all available information, contrary to the homo economicus assumption. This effect was
first broadly described as the “bounded rationality” of human actors, although it was
elaborated later. The fact that people are not as rational as traditional economists thought has
an impact on the decisions they make when facing risks. Thaler makes a distinction between
Econs (homo economicus) and Humans (mere mortals such as ourselves). Latterly,
Kahneman described the human brain using a metaphor based on it having two separate
systems. Although not based on medical foundations, this metaphor helps explain human
behaviour. While the so-called System 1 is quick to analyse the problem at hand and swift to
make decisions, it is also crude in both its assessment of the problem and its solution. An
example of System 1 is that we know how to drive while also having a conversation with
passengers, even in a non-native language. System 2 in the brain is much slower in its
assessment and problem solving but assesses matters more thoroughly and potentially more
rationally. Examples of System 2 activities are complex mathematical computations or
learning to count in a new foreign language. However, System 1 often bypasses System 2,
and as a result the human mind takes certain psychological shortcuts in reasoning and
analysis. Other psychologists, such as Walter Mischel, differentiate between this “hot system”
(System 1) and a “cold system” (System 2), or in more general terms “emotion” versus “ratio”
– although the latter is actually too general a reference for such a complex matter.

The interaction between the two systems causes surprising and sometimes undesirable
behaviour. While unexpected, this behaviour is relatively consistent and therefore predictable,
as we will see below. The remainder of this book will discuss how to avoid the undesirable
effects by explicitly activating System 2, or by creating a setting in which System 1 is likely to
respond in an adequate way. These traits have become known as human biases, determining
how we deal with information and decisions. As we will show, decision-making influences our
risk profile. Research has found that the older we get, the more important System 1 becomes.
As most strategic decisions are made by senior people, System 1 influences these decisions
to a greater extent, which makes an understanding of the various human biases crucial
for risk managers.

The second element of behavioural economics is a specific case of human biases. We all
know that people are risk-averse: if faced with a choice between a certain and an
uncertain outcome, most people prefer certainty over risk unless there is a compensating
reward for taking the risk. Apparently, when faced with risky decisions people make
decisions in a different way if a loss or a gain is involved. This crucial insight of behavioural
economics has become known as prospect theory, which will be discussed later in the
chapter.

The field of behavioural economics has received much criticism from traditional economists
for various reasons. As these insights were new and, for some, counterintuitive, there was
resistance since they contradicted so much of the existing worldview. Also, many of the initial
insights were gained from simple laboratory experiments, and it was more difficult to test their
implications in the real world. However, later experiments in capital markets and labour
markets did prove the value of behavioural aspects in the economy.

Why is this field of behavioural economics so relevant for the risk management profession?
First, the observation that traditional economic theory fails the test of practice and is being
replaced by another paradigm is interesting in itself. Much of the quantitative tooling of the risk
manager is based on traditional finance, such as option theory, capital pricing and, ultimately,
VaR. Chapter 3 observed that the VaR paradigm suffered in some risk management areas,
and showed how regulators worked around those issues in Basel II/III and Solvency II. As
discussed in Chapter 2, risk financing has been seen as the main risk management tool, and
this chapter will build the foundation for the remainder of the book by arguing the
assumptions of VaR do not hold and presenting what some new assumptions could be.
Second, as we saw in Chapter 5, governance or risk culture issues equally lie at the core of
financial problems in terms of capital adequacy. While capital adequacy has been crucial for many
risk managers, governance assessments are less dominant in the toolboxes of most risk
managers. Dealing with the governance issues covered in the previous chapter will be
increasingly important for the risk managers of the future. The same holds for instruments
that deal with risk aversion, a dominant issue in many decisions involved in the day-to-day
processes of managing a financial institution, at both the strategic and the operational
level.
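
To make the quantitative tooling concrete, below is a minimal sketch of historical-simulation VaR in Python. It is an illustration only: the 99% confidence level, the simulated return series and the function name are assumptions, not a specification from this book.

```python
import numpy as np

def historical_var(returns, confidence=0.99):
    """Historical-simulation VaR: the loss level exceeded in only
    (1 - confidence) of the observed scenarios."""
    losses = -np.asarray(returns)            # express losses as positive numbers
    return np.percentile(losses, confidence * 100)

# Illustrative daily portfolio returns. They are drawn from a normal
# distribution, which is precisely the kind of assumption that the
# behavioural critique of traditional tooling calls into question.
rng = np.random.default_rng(seed=1)
daily_returns = rng.normal(loc=0.0005, scale=0.01, size=1000)
print(f"99% one-day VaR: {historical_var(daily_returns):.4f}")
```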

HUMAN BIASES ON AN INDIVIDUAL LEVEL


This section will explain how the human brain is biased, and sometimes makes false
assessments of the problems it faces and the solutions it comes up with. There are two groups of
biases: at the individual level and at the group level. At the individual level, the biases are
based on implicit beliefs, while at the group level they are socially originated and are
influenced by group dynamics. The distinction is relevant because, when looking at day-to-
day decisions made in a financial institution, a large number are made by individuals. An
equally large number of decisions are made by groups of people, be it the board at the top
level of an organisation, the risk committee or the management team at a lower level in the
organisation. Therefore, we will assess how decision-making works individually as well as in
a group.

Let’s get started. Most of us have already heard variants of the statement “this is how we sell
our products, because we have always done it like this”. The hesitation to change is called
status quo bias, or inertia. Examples abound: after choosing an investment mix in a particular
mutual fund, clients hardly ever change their preferences; after determining what health
insurance cover best fits one’s personal situation, most people stick to that particular cover
despite changes in age, family composition, etc; in class, students often pick the same seat in
the classroom during the entire course. This list of examples could go on endlessly.

Richard Thaler once designed a widely repeated experiment in which he distributed a specific
gift to a group of people in a random fashion, such that half of the people in the group
received the gift of a coffee mug. He then organised a market in such a way that people could
trade the mugs if they wanted. More specifically, mug owners stated their selling prices, while
the non-owners stated their buying prices. Theory based on the homo economicus would
predict that half of the coffee mugs would be traded and that an equilibrium would arise. In
practice, however, the mug owners were reluctant to sell and non-owners were not as
interested to buy as predicted. Thaler’s experiment is a specific case of status quo bias called
the “endowment effect”: people attach higher value to a good once they have it. As a result,
people are reluctant to dispose of an item once they have it, even if they can sell it for more
than the value at which they received it.
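
The mismatch between owners' asking prices and buyers' bids lends itself to a toy simulation. This is not Thaler's actual protocol; the factor-of-two markup on owners' valuations and all other figures below are assumptions, chosen only to be roughly in line with commonly reported endowment-effect magnitudes.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
valuations = rng.uniform(2.0, 8.0, size=100)  # private value each person puts on a mug

wtp = valuations[:50]          # non-owners bid their true valuation
wta = 2.0 * valuations[50:]    # owners ask roughly double: the endowment effect

# Maximum number of trades: match the highest bids with the lowest asks
# and count the pairs where the bid exceeds the ask.
bids = np.sort(wtp)[::-1]
asks = np.sort(wta)
trades = int(np.sum(bids > asks))
print(f"Mugs traded: {trades} out of 50")
# With rational agents (asking price equal to the private valuation) roughly
# half of the 50 mugs would change hands; the markup shrinks that sharply.
```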

How do scientists explain the status quo bias? The following reasons are mentioned in
theories, which we will describe one by one: risk aversion; sunk costs; regret avoidance; and
cognitive dissonance. Risk aversion is intuitively easy to explain. When someone is
faced with a potential change, the unknown alternative is new and risky, whereas the current
status is known, relatively well-understood and comfortable. According to risk aversion, a
person rates the loss of the well-known situation higher than the potential improvement over
the current situation. Sunk costs are incurred costs or already-made expenses that are
actually irrelevant to the choice at hand. For example, a company started an IT project and,
after the first year and €2.0 million of spending, the project had yielded no tangible results
whatsoever, not even partial completion of the intended results. During the board meeting to
decide whether or not to stop the project, these costs and time spent should be considered as
wasted and irrelevant. However, many boards conclude that so many resources had already
been spent that the project should be prolonged and a new budget should be released. When
put in relation to status quo bias, the sunk costs fallacy leads people to stick to the less-
preferred current situation because sunk costs are involved somewhere. As stated, these sunk costs
are actually irrelevant to the decision at hand, but they still partly drive the decision.

Regret avoidance is the phenomenon that people are afraid to regret a change, and hence
stick to the same situation. In the above example of the coffee mugs, participants were afraid
they would later regret having sold the mug. As a result, they wanted to keep them or asked a
much higher selling price. Cognitive dissonance is the psychological effect one faces when a
new observation contradicts one's held beliefs or thoughts. One of the strategies to cope with
this feeling is to ignore the new observation (for instance, the fact that a new health insurance
policy or investment mix might be cheaper or better) and to stick to the current situation.

Why is status quo bias relevant for risk managers? Risk managers often make proposals for
changes to business processes or policies with the ultimate objective of limiting a certain type
of risk. Part of the resistance to such proposals can be traced back to status quo bias. One
solution to overcome this bias is to make the costs of the status quo more explicit (see Panel
6.1).

PANEL 6.1: STATUS QUO AND REVERSING THE QUESTION


Assume you are considering relocating to another city to accept a new job. The new
job will involve a US$10,000 raise. The status quo bias would induce you to turn down
the job offer and the move. Alternatively, you could imagine you are already living in
the new city, and ask whether you would consider moving to your current town for a
US$10,000 pay cut. In this case, the cost of the status quo is US$10,000.

Suppose during a coffee break a colleague expresses their concern about a certain stock
investment they made, which happens to be part of your asset portfolio as well. When
analysing your investment portfolio in more detail that evening at home, you are likely to find
all kinds of evidence online that show the riskiness of that particular stock. Apparently, many
stock analysts have given “sell” advice on that stock. You decide to sell the stock immediately.
Later that week, the stock recovers and hits an all-time high. This is an example of
confirmation bias: you looked online and only found news that this particular stock would
decline, confirming your initial beliefs following the conversation with your colleague. News
that contradicted your belief either did not show up or you considered it non-valuable.

Here’s another example: assume that John is a committed vegetarian. He is likely to look for
information explaining the harmful effects of meat consumption to the human body, as well as
animal suffering and the environmental impact of meat-eating. At the same time, Sue, who is
a farmer’s daughter and enthusiastic meat eater, is likely to look for data confirming the
positive health effects of meat in a human's diet. Not only will John and Sue look for different
sources of information matching their ideologies and ignore the rest, but even if they were to find
identical sources they would interpret them differently.

A related concept is selective perception: you see only what you already know or what you
want to see. This creates blind spots for so-called black swans. Kahneman explains that the
way in which the human brain interprets complex data is to confirm rather than disconfirm
hypotheses. Even worse, in a couple of variants Kahneman explains how the human brain is
easily fooled into confirming an initial thought even if counter-information is just as readily available. Even
the ordering of data is relevant. Confirmation bias is relevant for risk managers because
board members and other managers are likely to ignore information or events that could
potentially harm their current strategies (ie, risks). These risks are potential black swans.
Undesired information about such risks is often given insufficient attention or simply ignored
by decision-makers. Moreover, the way in which risks are presented determines their
acceptance. Hence, risk managers will need to take confirmation bias into account when
presenting their risk reports on individual risks (see Panel 6.2).

PANEL 6.2: CONFIRMATION BIAS


Kahneman explains an experiment where people are asked to rate the personality of
two people, Alan and Ben. Whom would you like better as a person?

Alan: intelligent, industrious, impulsive, critical, stubborn, envious.

Ben: envious, stubborn, critical, impulsive, industrious, intelligent.

If you are like most people, you attributed more positive character traits to Alan than
to Ben. A closer look at the list of traits conveys that actually the list is identical for
both except for the order of presentation. However, because you (as most people do)
probably started reading from left to right, you looked for character traits that
confirmed your first impression of Alan and Ben. Due to the order of the words, you
started with positive traits for Alan and negative traits for Ben.

A related bias is hindsight bias. According to this phenomenon, people believe that a certain
event or crisis was unavoidable – after the fact! In many cases, however, the event or crisis
was neither predictable nor obvious in advance. After a major financial market crisis, many experts explain how
the crisis could have happened, and in hindsight it all makes sense. However, if it was so
obvious in advance, the crisis would not have occurred. While this sounds logical in the case
of a crisis or negative event, it is also true for positive events or major successes. Imagine the
CEO of a very successful start-up that is now ready for the company’s IPO. In hindsight, the
success of the company makes sense and is obvious. However, the path to the IPO was
paved with uncertainties and difficulties. Was it due to skill or luck that the company
succeeded?

Interestingly, Kahneman explains in considerable detail that, in the majority of cases
where luck is actually involved, we – as human beings – attribute the outcome to skill. The
logic of regression to the mean implies that in sports an athlete has winning as well as off
days, and the same holds for a stockbroker or CEO. If someone were to bet on a race on a
and the same holds for a stock broker or CEO. If someone were to bet on a race on a
particular day and win a significant amount because “their” horse won the race, they would
run the risk of attributing this success to their own skill, and become overly optimistic the next
time and increase their bets. This specific case is called “attribution bias”, and is related to the
illusion of control or illusion of skill, in which events that were random actually feel like they
are in one’s personal control. Both the illusion of control and the attribution bias can lead to
overconfidence. In the example of horse betting, this overconfidence is obviously dangerous
to someone’s finances. We will now discuss overconfidence in more detail.

The concept of overconfidence has been much discussed by scientists, and all agree that it is
a result of a series of human biases. Overconfidence implies that we, as human beings,
believe we are able to achieve better results than predicted, and run relatively high risks in
trying to achieve this. First, the human mind makes us see the world as neat and predictable,
whereas it actually is not. As an implication, we think we understand causal relations where
there might not be any. Also, we believe certain observations are valid and generalisable,
whereas they might not be. As a result, we feel we are on top of a problem that we are only
beginning to grasp. This makes us overly optimistic. Add to this the fact that we suffer from
the planning fallacy and an illusion of control, and we are bound to be overconfident in our
approach.

We are all aware of the traps of the planning fallacy. Think of the latest IT implementation
project in your company or a large road construction project in your area. Both projects were
probably completed with significant overruns in either time or money compared to the initial
budget. This is not because the project managers were extremely poor planners, but because
they – as humans – suffer from the planning fallacy. Humans make overly optimistic plans and
have a hard time learning to make realistic ones. Related to this is the illusion of control,
which states that we believe we are in full control of a project and that uncertainties will not
damage it. Examples abound: despite high divorce rates, people still get married because
they think that marriage problems will not apply to them. Start-up companies continue to be
set up despite the extremely high failure rate of such projects. Despite the odds,
entrepreneurs think that the rule does not apply to them. All these examples show that human
beings are overconfident and over-optimistic. The difference between them is that
overconfidence relates to someone’s own skill (see Panel 6.3) while over-optimism relates to
external circumstances. This over-optimism serves a purpose, however, as it helps
encourage innovations. Many times, it will also lead us to take risks or undervalue risks that
are imminent, which is why the effect is relevant for risk managers. In spite of all the warning
signals coming from the risk managers on the sideline, business line managers are
overconfident by nature. The trick is to take this into account when preparing a new risk
analysis or board proposal.

PANEL 6.3: OVERCONFIDENCE AND THE ILLUSION OF VALIDITY


Consider a pair of almost identical large international companies, with one major
difference: company A has a very successful CEO, whereas company B has a
normally successful CEO. Which of the two companies is the more successful? The
illusion of understanding tricks us into concluding that the successful CEO must be
able to make company A more successful than company B. In practice, the
contribution of CEOs to companies’ successes can hardly be academically proven.
Therefore, we cannot infer any conclusions from the observations. Nevertheless, the illusion
of validity makes it very tempting for the human mind to believe that company A must be the
more successful of the two.
Assume your neighbour is a trader on the stock market. Over a Sunday afternoon
drink, he explains the method he uses for stock picking and illustrates this with a 25%
increase in the value of his portfolio over the previous six months. We, as non-experts, might
be tempted to turn this single observation into a general truth. Again, we are trapped
by a human bias – in this case the illusion of validity, because we think the
neighbour’s strategy will be generally valid for all investment problems. In practice,
however, the neighbour could equally well have had extreme luck rather than superb
skill.

Representation bias is another bias that humans suffer from. It might best be explained by
borrowing an example from Kahneman. Assume your sister-in-law speaks of a mutual friend
in these terms: “He won’t go far as an academic; he has too many tattoos”. While we
realistically understand that tattoos have no relation whatsoever to academic capability, we
quickly compare the mutual friend with the standard image we have of an academic and draw
conclusions from the friend's appearance. In this example, a person with many
tattoos does not match our standard representation of academic professors. However, it can
be more subtle too, such as in the example in Panel 6.4 where statistical probability is
explained not so much by representation but by other phenomena.

PANEL 6.4: LIBRARIAN OR FARMER?


In a well-known experiment, Kahneman and Tversky describe a hypothetical person
as follows: “Steve is very shy and withdrawn, invariably helpful but with little interest in
people or the world of reality. A meek and tidy soul, he has a need for order and
structure and a passion for detail.” Now, they pose the question: Is Steve more likely
to be a librarian or a farmer?
Most readers will recognise the traits of the stereotypical librarian in the description of
Steve. At the same time, there are many more farmers than librarians in the world.
Statistically, the chance of Steve being a farmer is therefore much higher. This is an
example of representation bias.

When we observe joint effects, we can easily fall into the trap of assuming causality where
mere correlation might be the better assumption. For example, assume that older cars are more often
involved in traffic accidents. Would it be correct to charge all older cars a higher insurance
premium? There might be a correlation without a causal relation. In fact, older cars are
perhaps more likely to be driven by younger and inexperienced drivers who do not have the
financial resources to afford newer cars. The higher accident rate could actually be explained
not so much by the inferior design of older cars, but by the inexperience of their drivers.
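
A small simulation makes this confounding story concrete. Every number below is invented: accident probability depends only on driver experience, yet car age still correlates with accidents because inexperienced drivers tend to own older cars.

```python
import numpy as np

rng = np.random.default_rng(seed=3)
n = 10_000
experience = rng.uniform(0, 40, size=n)   # years of driving experience

# The confounder: less experienced drivers own older cars on average.
car_age = np.clip(20 - 0.4 * experience + rng.normal(0, 4, n), 0, None)

# Accident risk depends on experience only, not on the car itself.
p_accident = np.clip(0.15 - 0.003 * experience, 0.01, None)
accident = rng.random(n) < p_accident

print("corr(car age, accident), all drivers:",
      round(np.corrcoef(car_age, accident)[0, 1], 3))        # clearly positive
young = experience < 5
print("corr(car age, accident), young drivers only:",
      round(np.corrcoef(car_age[young], accident[young])[0, 1], 3))  # near zero
```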

However, while it is tempting, we should not confuse a co-relation (correlation) with a causal
relation. A related example: still assuming that younger drivers cause relatively more
accidents than experienced drivers, would it be a good idea to give younger drivers that did
not cause an accident in a particular year a discount on their next year’s insurance premium?
Intuitively, that might sound like a good idea, rewarding good driving skills. But… is this a
matter of skill or luck? In the latter case, one would be rewarding a particular driver for mere
luck that is outside of their influence (illusion of control). In addition, one year of luck might
imply that in the next year they could be more unfortunate because of “regression to the
mean”. Regression to the mean is a statistical phenomenon that applies when outcomes are
scattered around the mean: one time above the mean, another time below it. Extremes are
logically followed by observations closer to the mean. As the concept is difficult to recognise
in practice, we fall into the trap of explaining dispersion around the mean by causal relations.
In the case of overconfidence, the trap is even deeper: we normally explain the positive
dispersion by our personal skill rather than by random effects. Likewise, we explain the
negative outcomes as due to bad luck rather than unwise decisions. Figure 6.1 shows this in
more detail.
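
A short sketch can show regression to the mean at work and why it masquerades as changing skill. The even split of observed performance between stable skill and one-off luck is an assumption made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=4)
n = 1_000
skill = rng.normal(0, 1, n)            # stable ability, unchanged across years
year1 = skill + rng.normal(0, 1, n)    # observed result = skill + luck
year2 = skill + rng.normal(0, 1, n)    # same skill, fresh luck

top = year1 > np.percentile(year1, 90)  # last year's star performers
print(f"Top decile, year 1 mean result: {year1[top].mean():.2f}")
print(f"Same people, year 2 mean result: {year2[top].mean():.2f}")
# The year-2 figure is noticeably lower although skill did not change:
# the above-average luck that put these people on top is not repeated.
```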

Why is this relevant for risk managers? Risk managers are quite often quantitatively educated
or skilled people. When carrying out risk analysis, they analyse data and will most likely use
statistical techniques. This could lead them to be prone to the biases mentioned above. If not
the risk managers themselves, certainly decision-makers around them will fall into these
traps. Awareness and attention are often a first remedy to overcome them. At the same time,
a second pair of eyes looking specifically at the validity of conclusions would help avoid falling
into the traps.

HUMAN BIASES IN GROUP PROCESSES


Assume an important board meeting where you are the only opponent of a particular proposal
at hand. The decision to accept the proposal will be important for the board as well as the
company. When imagining the situation, you are likely to have an uncomfortable feeling.
Despite the content of the proposal, to which you are opposed, you would rather not stand out
against the rest of the board, or you would rather not oppose the authority of the board’s
chairperson, whom you very much respect and admire for their charm and intellect. As you
see, human biases are not limited to behaviour at an individual level. Often group processes
involve biases as well. In a certain way, group biases are logical and useful because human
beings are herd animals. In prehistoric times, belonging to the group helped to safeguard
against wild animals, famine and war. Although these hazards now belong to the past, the
biases are still present in our genes. This section will describe a few group biases. From
these examples, it will be obvious that the concept of countervailing power is hard to organise
in practice.

Peer pressure is present when we want to conform to the group, as can be seen from the
example above. We disagreed with the content of the proposal, but we considered voting for
it despite our concerns, often because we fear for our place and ranking in the group.

Groupthink is also well-known. A group will often follow a particular line of thinking and then
extend this line of reasoning further along the lines of the initial idea. Perspectives that
contradict the initial idea come to mind only with great difficulty, or they seem less and less
relevant when discussing the advantages of the proposal. On an individual level, this is called
confirmation bias (as described in the previous section). On a group level, this is called
groupthink or the “bandwagon effect”. It causes blindness to opposing views, and combined
with peer pressure not to articulate counter-opinions, can lead groups into disaster. Kodak is
a famous example here: the famous photographic film company disregarded the potential
risks of digital photography and continued to focus on film (despite the fact that it actually
invented digital photography in the mid-1970s). When analysing its poor performance in the
early 2000s, the company looked at various causes but failed to see (or ignored) the danger
of competing technologies. This ultimately led it to bankruptcy in 2012. In the financial
markets, groupthink can create herding behaviour, which occurs when some individuals think
that the market is over-valued and sell their securities in an attempt to limit their loss. Other
market players follow this example, and this creates an ever-growing oil spill pulling the
markets to rock bottom. We will analyse herding in the financial markets in more detail in
Chapter 12.

The social bias to conform to authority is dangerous because it risks destroying
creative thinking if the authority figure (eg, the CEO or chairperson) has a strong vision. People
around the authority figure are less likely to voice counter-opinions since: (i) they are likely to
want to protect their own position, making them afraid to stand up; or (ii) they consider the
authority figure to be all-knowing and all-understanding. The latter aspect might not always be
justified, but is difficult to avoid. Even more democratic and non-authoritative leaders can
suffer from this trap. Related to this is the concept of “group attribution error”, which is the
belief that a decision taken reflects the consent of the entire group. Due to this effect, group
members have an inherent difficulty in assessing whether other group members disagree with
a proposal.

Fundamental attribution error (attribution bias) in group processes arises when groups
attribute positive outcomes to their own skill while negative outcomes are viewed as due to
external circumstances. While this effect exists on the individual level, we also see it in group
behaviour. It is visible in many annual reports in which boards report on the execution of the
strategy. Successes of the strategy are due to deliberate choices and actions, while failures
are due to unforeseen circumstances that even in hindsight were difficult to predict.

The above biases may interact and reinforce each other. Consequently, they are not only
present in the boardroom where the major strategies are set, but throughout the entire
organisation. This makes the biases even riskier because they impact all day-to-day
processes. To supplement the biases described above, Table 6.1 provides an overview and
classifies the biases into three groups or causes.

Why is this relevant for the risk manager? As in the previous section, group biases will lead to
suboptimal behaviour, and incur the risk of losses when not adequately addressed. Moreover,
group behaviour that disregards risks tends to become even riskier than individual behaviour
since the critical mass increases. Finally, group biases influence the way in which boards and
other decision-making bodies interpret the risk analyses or risk reports that support an
intended decision. Properly structuring risk analyses can support risks being addressed more
adequately in the decision-making process.

Table 6.1 Human biases in a group setting

Overestimation of the group:
1. Illusion of invulnerability
2. Belief in the inherent morality of the group
3. Group attribution error

Closed-mindedness:
1. Groupthink
2. Bandwagon effect
3. Collective rationalisation
4. Stereotypes of out-groups

Pressures toward uniformity:
1. Self-censorship
2. Peer pressure
3. Conformation to authority
4. Illusion of unanimity
5. Direct pressure on dissenters
6. Self-appointed mind-guards

Source: Baron (2017)

RISK AVERSION
We work with risk every day: we cross the street on foot despite cars on the road, we drive
cars despite the risk of crashes, we put money into our savings accounts despite the risk of
bank failures… the list is endless. In some cases, we are simply unaware of the risk. In
others, we take the risk consciously and accept the potential impact of the risk. In again other
cases, we decide to avoid the risk – for instance, by not investing in a particular stock or
investment fund, or by deciding not to go on a skiing trip. In the 18th century, Daniel Bernoulli
revealed that people are, as it later became known, risk-averse (see Panel 6.5). If there is a
choice between a risky and a risk-less situation with the same expected outcome, he argued,
then people often choose the risk-less situation. In other words: if there is a risk involved, people
want to be compensated for taking that risk. This principle is also incorporated in the fair
valuation of assets and liabilities in the current International Financial Reporting Standards
(IFRS) guidance. For instance, under IFRS 17 (in force from 2021) an insurance company will
value its insurance liabilities as the sum of the NPV of the cashflows and a separate risk
margin to compensate for the risk surrounding these cashflows.
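
As a stylised illustration of that valuation principle, the sketch below adds a risk margin to the net present value of expected cashflows. The cashflows, discount rate and simple proportional margin are invented for illustration; the actual IFRS 17 risk adjustment is calculated quite differently.

```python
# Stylised liability value: NPV of expected cashflows plus a risk margin
# compensating for the uncertainty around them. All inputs are invented.
expected_cashflows = [100.0, 100.0, 100.0]   # expected claim payments per year
discount_rate = 0.02

npv = sum(cf / (1 + discount_rate) ** (t + 1)
          for t, cf in enumerate(expected_cashflows))

risk_margin = 0.06 * npv   # simple proportional loading, purely illustrative

print(f"NPV of cashflows: {npv:.2f}")
print(f"Risk margin:      {risk_margin:.2f}")
print(f"Liability value:  {npv + risk_margin:.2f}")
```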

PANEL 6.5: BERNOULLI’S NOTION OF RISK AVERSION


Let us illustrate Bernoulli’s idea with an example. If you are faced with a choice
between A and B below, which would you choose?

A: a toss of a fair coin. If heads, you will gain €100; if tails, you will gain €0.

B: there is no toss of the coin, you will automatically receive €50.

Most of us will choose option B. The expected outcome of option A is €50, which is
identical to the certain outcome of option B. An alternative version of this question is:
instead of €100, at what price would you prefer option A? The higher the price you
request, the more risk-averse you are.

Bernoulli rightfully noted that the value of a €10 increase in wealth varies for different people. For a
very poor person living on the street, an additional €10 would allow them to eat and sleep for a
couple of extra days without worries. For a millionaire, an additional €10 might merely be the
parking charge while dining at an expensive restaurant. This prompted Bernoulli to develop
the principle of utility: the value someone attaches to an increase or decrease in wealth
depends on the relative increase or decrease rather than the nominal amount. The marginal
utility decreases with the increase of wealth. Figure 6.2 explains this in more detail.
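
One classic way to formalise Bernoulli's diminishing marginal utility is a logarithmic utility function. The sketch below applies it to the choice in Panel 6.5; the starting wealth of €1,000 is an assumption, and a different level would change the numbers but not the conclusion.

```python
import math

wealth = 1_000.0   # assumed starting wealth

# Option A: fair coin, gain 100 or gain 0. Option B: certain gain of 50.
eu_a = 0.5 * math.log(wealth + 100) + 0.5 * math.log(wealth + 0)
u_b = math.log(wealth + 50)

# Certainty equivalent of the gamble: the sure gain with the same utility.
ce = math.exp(eu_a) - wealth

print(f"Expected utility of A: {eu_a:.6f}")
print(f"Utility of B:          {u_b:.6f}  (higher, so B is preferred)")
print(f"Certainty equivalent of A: {ce:.2f}  (below the expected value of 50)")
```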

Kahneman and Tversky also showed that while the Bernoulli utility curve is valid for increases
in wealth, it does not work the same for losses. They showed that it mattered whether the
change in wealth was formulated in terms of gains or losses. The reference point from which
the choice is taken is important. Kahneman and Tversky called this insight prospect theory
(see Panel 6.6).3 While most examples of prospect theory are phrased in terms of simple
gambles, the principle also holds in other cases – for instance, an employee who has been
promised a large raise at the annual appraisal will consider a smaller raise as a loss because
the large raise was already taken as a reference point.

PANEL 6.6: PROSPECT THEORY


Panel 6.5 showed that with a choice between option A and option B, most people will
choose B. Let’s now assume we have a choice between C and D.

C: a toss of a fair coin. If heads, you will gain €110; if tails, you will gain €0.

D: there is no toss of the coin, you will automatically receive €50.

If you are like most people, you will still prefer option D, although the expected value
of the bet (€55) is slightly higher than the sure win of €50. Now let us consider the
next example, in which you have to choose between options E and F.

E: a toss of a fair coin. If heads, you will lose €110; if tails, you will lose €0.

F: there is no toss of the coin; you will automatically lose €50.

Now, if you are like most people, you will prefer E rather than F in an attempt to limit
your losses. A closer look will reveal that problems C and D and E and F are identical
in structure, but the preferred choice differs. Now, let us consider another example, in
which you will have to choose between option G and option H.

G: we will give you €200 and then toss a fair coin; if heads, you will lose €90; if tails, you
will lose €200.

H: we will give you €200, but there is no toss of the coin; you will automatically lose
€150.

If you are like most people, you will prefer option G over option H. Although the
financial outcomes are mathematically identical to the C and D problem, you are
again likely to change your preference.

It follows that, when faced with losses, people are more ready to take risks. A similar effect is
known in casinos: when people have won a certain amount early in the evening, they
treat this as play money or the house's money and are ready to take more risks with it than
if it were their monthly paycheck. Figure 6.3 shows the utility curve that arises. It is clear that
the curve is much steeper on the left-hand side than on the right-hand side, which
demonstrates that people hate losing about twice as much as they like winning.
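
The curve in Figure 6.3 can be written down explicitly. The sketch below uses the value and probability-weighting functions of cumulative prospect theory with the parameter estimates commonly cited from Tversky and Kahneman's 1992 paper (alpha = 0.88, lambda = 2.25, gamma = 0.61); these values are assumptions here, not given in this chapter. Applied to Panel 6.6, it reproduces the preference reversal.

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: concave for gains, convex and
    steeper for losses (loss aversion factor lam)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def weight(p, gamma=0.61):
    """Probability weighting function from cumulative prospect theory."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# Choices from Panel 6.6, evaluated relative to the status quo.
C = weight(0.5) * value(110)    # risky gain: 50% chance of +110
D = value(50)                   # sure gain of 50
E = weight(0.5) * value(-110)   # risky loss: 50% chance of -110
F = value(-50)                  # sure loss of 50

print(f"C = {C:.1f} vs D = {D:.1f}: sure gain preferred (D > C)")
print(f"E = {E:.1f} vs F = {F:.1f}: gamble preferred (E > F), risk-seeking in losses")
```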

Panel 6.6 illustrated that it matters how the situation is framed, as choices C and D and G
and H are identical. The concept of framing became an important insight in behavioural
economics, as we will see below.

Why is this relevant for the risk manager? Companies take risks every day, and financial
institutions such as banks and insurers are no exception. In fact, financial institutions take
risks relatively more explicitly, with the policies around them being clearly formulated. In
risk-taking, losses can occur from time to time; that is part of the game. However, in loss-
making situations people turn from risk-averse into risk-seeking. A serious example comes to
mind here: rogue trading. There have been a couple of major rogue trading events in the last
decade or so. Chapter 4 discussed the case of Nick Leeson and Barings as an example.
While the position that Leeson took was fraudulent in the first place, he tried to make up for
the initial losses by increasing the risks. This principle resembles the sunk costs fallacy,
except that the higher risks are likely to result in even higher potential losses in the future
(as prospect theory predicts). Ultimately, the losses were so high that Barings defaulted. For the
risk manager, the biggest lesson here is that risks in calm waters (profit-making, regular state
of the business) are not the same risks in rough waters (loss-making, crisis state of a portfolio
or entire business). We will see in the remaining chapters how to deal with such situations
before they occur.

As stated, framing is an important element of behavioural economics. Framing of a situation
changes an individual's perception of the situation, be it intentional or unintentional. Some
years ago there was a heavy fog in the Dover straits. One UK newspaper ran the headline
“Heavy fogs: Continental Europe isolated”. The UK is, of course, much smaller than
continental Europe, yet the headline did not run “UK isolated”. An oft-used version of framing
is to compare your situation to the group, in order to organise group pressure. For example,
someone could state that: “90% of our neighbours mow the lawn every week, which gives the
neighbourhood a tidy and orderly appearance”. This creates the perception that you are a
bad neighbour if you do not maintain your lawn every week. In this area, Thaler and Sunstein
(2009) developed an entire framework around what they call nudges.

Nudges are examples of framing in which the setting of the choice architecture leads people
to choose preferred options. As stated, behavioural economics treats people as non-rational
actors in the economic process, from which Thaler and Sunstein derived the need to take on
a “libertarian paternalist” perspective4 for choices that people face in their daily lives. An
example is saving for retirement, which people normally are unlikely to do sufficiently without
being properly incentivised. By changing the framing with two important features, a default
option and an opt-out possibility, retirement savings increase drastically. In the example of a
retirement savings plan, people enrol into a particular pension plan as the default option – ie,
automatically and without their explicit consent – but they are adequately informed about this
status. The opt-out possibility allows people to change their preference on a well-informed
basis into a different savings plan. The advantage of such a structure is that, as a principle,
people are enrolled into a programme unless they deliberately and consciously choose
otherwise. This is to avoid human biases such as procrastination and analysis paralysis.

The phenomenon of procrastination is also well-known: people tend to postpone difficult tasks
(such as choosing a retirement savings plan). Analysis paralysis occurs when you analyse a
complex problem and hesitate to take a decision, continuing to analyse more aspects of the
problem instead. The more urgent and complex the decision, the harder it is to actually make
the decision. The matter at hand is: when do you have sufficient information to actually make
the decision?

The principles of risk-averse or risk-seeking behaviour are relevant for the risk manager as they
show how the board, the risk committee or a business line manager is likely to respond to
policy choices when framed in terms of potential loss or in terms of potential gain. Often, the
risk managers’ language is full of potential threats, which is why the risk profession as a
group sometimes has a negative reputation. Risk managers warn of potential losses, want to
avoid projects likely to fail, and put out signals on how to circumvent detrimental developments.
While this intention contributes to good corporate governance practices, the form in which it
takes place is crucial, as shown by the theories discussed above.

When the reference point is a loss-making situation, there is an increased likelihood of risk-
seeking behaviour by decision-makers. However, when the reference point is a profit-making
situation, there is an increased likelihood of risk-averse behaviour. Taking this principle into
account might require nudges in the style of Thaler and Sunstein in the framing of the
proposal. Prospect theory is even more important for risk managers in the way they put risk
controls (such as risk limits) in place to avoid risky behaviour in loss-making situations.
Normally, a series of risk limits is present, most detailed in the trading rooms of
investment banks, and risk managers design systems in such a way that risk limits are “hard”
limits. We will discuss the consequences of this in more detail in the next chapter, but the key
objective is that people should avoid situations where they can double their risks in an
attempt to compensate for initial losses.

SP/A THEORY AND BEHAVIOURAL PORTFOLIO THEORY


Prospect theory has gained much respect in behavioural economics, and Kahneman was
honoured with a Nobel Prize in 2002.5 The insights on the many behavioural pitfalls and the
consequences for dealing with risk were derived from various experiments. Other scientists,
such as Lola Lopes, designed a framework that begins with how individuals perceive
risk. A classical experiment was as follows. Participants were invited to throw rings over a
peg, but were allowed to choose the distance from the peg themselves. Those participants
that chose to stand close to the peg faced little risk, because it is not that difficult to hit the
target from nearby. However, they also enjoyed less satisfaction from hitting the mark than
those standing far away and taking more risk by so doing. The distance from the peg that
participants chose says something about their attitude towards risk-taking. This experiment
shows the three basic emotions that people face when dealing with risk: fear; hope; and
aspiration. Hope and fear relate to the chance of hitting the mark, while aspiration relates to
the need for satisfaction in one’s life. The emotion of fear relates to the need for security that
every person has in life. The emotion of hope relates to the need for potential. Security and
potential are two elements of the same continuum, and the feeling of aspiration is also
present for everybody.

This theory is referred to as the SP/A theory: security–potential/aspiration. The needs for
security and potential are very much personally driven and differ from person to person. The
aspiration element is much more situational, and is longer-term oriented. It depends on what
rewards are available in every situation and what constraints are present in the setting at
hand.

This theory has been the basis for a behavioural portfolio theory (see Shefrin and Statman,
2000). The “traditional” view of portfolio construction is based on the model of Markowitz and
assumes the homo economicus constructs the desired portfolio by looking at all asset classes
instantaneously and observing the correlations. Of course, in practice investors have difficulty
in observing all assets across the classes at the same time and in determining their
correlations. Hence, they look at classes on an individual basis and optimise the portfolio
within classes instead. Within each class, some assets serve to protect the value of the
portfolio (security), others to generate return (potential). The overall goal of the portfolio in its
entirety relates to the aspiration element. This has led to the following insights; a schematic sketch follows the list.

Investment portfolios are constructed in layers, with each layer designed for the purpose of
either portfolio protection or profit generation.

Investors who believe they have an informational advantage (real or imagined) about a particular
security will take more speculative positions in that security. At the same time, the information
can be labelled or framed by suppliers to exploit behavioural pitfalls. Moreover, due to labelling
there is a so-called home bias in the investment portfolio.

Investors who are aware of their risk-aversion tend to hold more cash, and will hold a portfolio
with more assets and with a wider spread. The latter is not so much a diversification effect but
rather an avoidance of liquidating loss-making assets.
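
The layered construction can be sketched schematically as below. The layer names map to the SP/A elements discussed above; the asset lists and weights are invented for illustration and carry no recommendation.

```python
# Schematic behavioural-portfolio layers: each layer serves its own goal,
# rather than one covariance-optimised whole. All figures are invented.
layers = {
    "security":   {"assets": ["cash", "government bonds"],         "weight": 0.50},
    "potential":  {"assets": ["equities", "high-yield bonds"],     "weight": 0.40},
    "aspiration": {"assets": ["options", "speculative positions"], "weight": 0.10},
}

wealth = 100_000.0
for name, layer in layers.items():
    amount = layer["weight"] * wealth
    print(f"{name:>10}: {amount:>9,.0f} in {', '.join(layer['assets'])}")
```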

What makes these insights relevant for the risk manager? The first insight for the risk
manager is that the attitude to risk is different from person to person. Hence, in approaching a
particular decision-maker with the outcomes of a risk analysis, the risk manager will need to
differentiate in their tactics of “delivering the message”. This in itself is already a useful
insight. Decision-makers who value security highly are likely to be triggered by items that
emphasise the way in which a risk analysis investigates value protection. Decision-
makers who value potential more are likely to be triggered by the aspects
that show how value will be increased without delay.

In portfolio selection, the recognition that portfolios are built in layers with different objectives
is already a helpful insight for the risk manager. For instance, the larger or more advanced pension funds have
often made this approach explicit in their investment strategy. However, challenging the
labelling of securities will be useful as it may avoid biases in the portfolio. A limit structure to
cap losses will prevent irrational allocations to loss-making assets. These are areas in which
risk managers can contribute to the decision-making process in order to achieve objectives.

THE STABILITY OF FINANCIAL MARKETS


In the financial industry, the study of financial markets is part of daily life and also a source of
the majority of risks. Looking at the risk report of a typical bank, investment firm or insurer, the
volatility of the financial markets is a dominant risk factor. Regulators force institutions to
observe financial markets by setting capital requirements, so it is logical that these firms keep
a close watch on them.

Traditional economists viewed financial markets as fully rational machines governed by the
invisible hand of the market (to paraphrase Adam Smith), and ultimately resulting in
equilibrium. Hence, markets are seen as relatively stable and calm although they could be
temporarily out of balance. Earlier in this chapter we argued that the traditional perspective of
homo economicus does not hold. This is also in line with what we see in practice: financial
markets suffer from severe crises and optimistic booms that often cannot be adequately
explained by general economic cycles. Even worse, the economy influences financial
markets and financial markets influence the “real” economy (as a mutually influencing
system). Robert Shiller is one contemporary scientist who has criticised the traditional
economic viewpoint, and consistently researched the emergence of bubbles in the financial
markets (Shiller, 2015). One of his key contributions was the importance of the housing
market as a driver of market bubbles. In addition, behavioural scientists have introduced the
concept of herding behaviour in the financial markets, resulting in over-responses in the
market to both ups and downs. There are two perspectives here:

either the financial markets are fully driven by emotion rather than ratio, which means that
markets will themselves structurally produce booms and busts; or

markets aim to achieve equilibrium states but structurally overshoot due to feedback and feed-
forward loops, and hence permanently oscillate around the equilibrium state.

Both perspectives have a group of followers roughly equal in size. Whichever perspective you
personally prefer, the conclusion is that bubbles are likely to continue occurring.

The economist Hyman Minsky argued that financial instability is inherent in free markets, and
also that credit drives booms and busts. According to Minsky, it is not necessarily stock
markets that cause instability but rather credit markets. He claimed the following three forms
of financing are present during a boom, leading to a “Minsky moment” (ie, the bursting of the
bubble); a classification sketch of these three forms follows the list.

Hedge financing: The underlying business model provides for sufficient cashflow to pay the
interest payments and the ultimate repayment of the notional. In other words, the risks of the
loan are fully hedged by the underlying business.

Speculative financing: The underlying business model enables sufficient cashflow for the interest
instalments, but assumes a certain future profit to enable the ultimate repayment of the notional.
In this form, there is some speculation on whether or not the notional repayment will be made.

Ponzi financing: The underlying business model requires such growth that both interest and
notional payments are dependent on future growth, or on intermediate refinancing. This scheme
clearly incurs significant risks.
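
Minsky's taxonomy maps naturally onto a simple classification rule: compare expected operating cashflow with the interest and principal falling due. The function below is a sketch of that rule; the thresholds and example figures are illustrative assumptions, not Minsky's own.

```python
def minsky_class(cashflow, interest_due, principal_due):
    """Classify a borrower per Minsky's taxonomy by whether operating
    cashflow covers interest and principal without refinancing."""
    if cashflow >= interest_due + principal_due:
        return "hedge"        # debt service fully covered by the business
    if cashflow >= interest_due:
        return "speculative"  # interest covered; principal needs rollover
    return "ponzi"            # even interest relies on new credit or asset gains

print(minsky_class(cashflow=120, interest_due=50, principal_due=60))  # hedge
print(minsky_class(cashflow=60,  interest_due=50, principal_due=60))  # speculative
print(minsky_class(cashflow=30,  interest_due=50, principal_due=60))  # ponzi
```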

Minsky’s main point was that “stability induces instability”. This means that crises do not
occur due to a certain exogenous development, but are inherent to the economic system due
to the relations within the system (in hindsight, Minsky emphasised analysing the economy as
a system, something we will also address in Chapter 8). He therefore called for governments
to intervene and for central banks to act as lender of last resort for banks as a safety net.
Although Minsky formulated his theories and solutions in the form of policy recommendations
in the mid-20th century, his ideas received renewed attention after the global financial crisis. The
concept is now dubbed the “financial instability hypothesis” (FIH).

The body of literature evolving under the FIH includes the following behavioural phenomena.6

Euphoria: Financial markets will show signs of euphoria, which increases risk-taking (see
discussion of the winner’s curse in Chapter 13). It encourages market participants to take more
risks and increase leverage (speculative and Ponzi finance). Moreover, euphoria draws more
participants to the market, something that was visible in the subprime crisis when non-traditional
actors entered the mortgage market (eg, general investors in mortgage-backed securities, MBS,
insurers providing CDS, and non-bank special purpose vehicles, SPVs, that did not fall under
banking regulations).

Ponzi finance: Due to euphoria, financial speculators engage in credit where the repayment of
the principal and interest is based on asset appreciation rather than amortisation. Also,
refinancing of existing debt by new debt is part of Ponzi finance. During the subprime crisis
many mortgage products bore aspects of this. An assumption of asset increases is crucial in
these products: major losses occur when the assumption does not come true. It was also
possible due to innovative products that effectively hid the true nature of the risks (including
negative amortisation mortgages and adjustable-rate mortgages).

New era thinking: The phrase “This time it will be different” says it all. In hindsight, it is easy to
spot the dangers, but during a boom many market participants believe that a new era has
started in which the lessons of previous crises are no longer considered relevant. For those
investors who might not fully believe that this time is different, a sound reason to enter the
market is the fear of missing out.

Too big to fail: Some events can trigger a market collapse or large side-effects for an economic
system (such as bank runs). Companies such as banks can take an explicit bet on being rescued
when large risks materialise, leaving them with a one-sided bet. The upside profits would be
for the bank, but the downside losses would fall on the regulator and society. This is an unfair
asymmetry. In acknowledging that regulators often lag behind financial innovations in the
market, Minsky advocated regulations that limited the size of systemically important financial
companies.

Based on these developments, Minsky described five critical stages through which a bubble
develops. Table 6.2 provides a short overview and summary of these stages during two
bubbles: the dot-com and the subprime bubbles. Both require further analysis to understand
the true lessons learnt, in both their run-up and aftermath.

It is not the objective here to predict financial markets, or to develop a framework to monitor
the likelihood of a future financial bubble that might burst soon. Rather, if we acknowledge
that bubbles exist, this section lays the groundwork to consider potential actions to limit the
risks involved. Also, the Minsky framework provides us with clues to identifying risks during a
boom phase. After this foundation, we will further explore these concepts in Chapter 12.

In assessing the extent to which a crisis could be forthcoming, human biases again limit our
ability to act swiftly and effectively. First, procrastination on difficult decisions occurs due to
status quo bias and the effect of heuristics. This is in line with the new era thinking mentioned
above. Also, the control illusion gives us the impression we can always intervene at a later
stage. Second, analysis paralysis kicks in: decision-makers continue to analyse the problem
at hand while losses accumulate because they just want to make one single decision that
could solve the problem. Instead one could choose to make a first attempt to at least move
the issue in the right direction, or limit the losses. Multiple iterations will then get it right. While
this seems simple in hindsight, it is very difficult to be aware of it while in a crisis situation.
Moreover, being courageous enough to solve a problem iteratively is understandably very
challenging. Third, when in a crisis situation, decision-makers often have the illusion that they
can still control and reverse the situation later. This sign of overconfidence is often seen, while in practice
individuals usually have very little room to manoeuvre when a crisis gets more severe. Even
worse, the room to manoeuvre decreases as the crisis advances. As a result, many investors
are stuck with losses amid the crisis.

Table 6.2 Minsky’s five stages of a bubble

Displacement
Explanation: New era thinking
Dot-com bubble: Internet to provide seemingly endless opportunities
Subprime crisis: Lower interest rates drove search for yield and mortgage innovation

Boom
Explanation: Price increases gain momentum (fear of missing out)
Dot-com bubble: Media attention
Subprime crisis: Subprime MBS increased, specialised subprime lenders received attention

Euphoria
Explanation: New era thinking, less stringent risk criteria
Dot-com bubble: Nasdaq PE ratios skyrocketing
Subprime crisis: Underlying credit quality of subprime SPVs decreases, similarly for primary subprime lending

Profit-taking
Explanation: Some (smart) investors start to cash in their profits
Dot-com bubble: First investors sold their stocks, despite continued growth of equity markets
Subprime crisis: Some banks withdrew from MBS markets or did not renew outstanding credit lines when due

Panic
Explanation: A minor (unrelated) event triggers a turnaround of markets
Dot-com bubble: Fed interest rate increase, Nasdaq started to slide
Subprime crisis: Liquidity crunch following the default of Lehmans

The relevance for risk managers is that the lessons of financial crises can be very helpful in
identifying the risks of potential new or emerging crises. The approach taken by Minsky is a systems
approach in the sense that it looks at the financial system in a holistic way and analyses all
interactions. For risk managers, such an approach is very useful. Minsky also identified clear
behavioural pitfalls that allow crises to worsen after taking off – acknowledging and
recognising these pitfalls creates awareness for future risks.

CONCLUSION
This chapter has described the key points of the body of knowledge that has become known
as behavioural economics or behavioural finance. The crucial insight is that human beings
are prone to behavioural biases. Our “System 1” traps us into hasty conclusions or actions.
The Appendix to this book provides an overview of the biases discussed here for ease of
reference, while Figure 6.5 offers an overview of the biases, something we will further
address in the next chapter.

Human biases are consistent, which means that they are predictable. By designing or framing
the environment, we can steer ourselves away from undesirable mistakes. However, we will
need to do this very consciously because our human mind leads us into pitfalls. Only very
active open-minded thinking helps us to avoid falling into the traps of our human biases. In
other words, we will need to activate “System 2” – for instance, by continuously assessing
whether we are analysing the correct problem and fairly weighing the potential alternative
solutions that we decide not to implement. Chapter 10 will argue that slowing down the
process for important solutions is a good approach to active open-minded thinking.

People are risk-averse, as we all learned during economics class. However, insights from
prospect theory show that people turn into risk-seekers when they are faced with losses. For
the risk manager, this has important implications for the design of risk controls in a financial
institution. Risk limits will be under pressure when they are needed most.

Beyond causing suboptimal decisions, human behaviour can lead us into devastating
crises in the financial markets, as we have seen in this chapter. Many large financial market
crises follow a predictable pattern, but the difficulty is to recognise the phases. This is
relevant for risk managers, who will have a role in watching out for crisis indicators. We have
seen in this chapter that a systems approach with multidisciplinary perspectives is best suited
to recognising such crises in progress. We will use these insights to present alternative tools
in the toolkit of the risk manager in financial institutions in Chapters 11 and 12. One of those
tools will be scenarios, which we will examine in Chapter 9.
