
Practical Utility, Risk Aversion, and Investing

(Draft)

James White
August 30, 2016

Abstract
Concepts of utility and optimizing expected marginal utility are
ubiquitous in economics, and also in the academic literature of gambling.
However, we have been surprised by how little these concepts have
impacted real-world investment practice, despite their providing, in many
cases, tools which are both helpful and practical. This seems
especially odd given the investment world’s ostensible focus on risk
management and generating superior risk-adjusted returns. We seek to
explain how the concept of utility is both central and practical for all
investors, and how tools arising from maximizing expected marginal
utility are central to the important question of investment sizing.

1 History
For a bravura, detailed description of the history behind the concept of economic
utility, we recommend Peter Bernstein’s excellent book “Against the Gods” [6].
The highly abridged version is that Daniel Bernoulli gave the first full exposition of the concept in his 1731 paper Specimen Theoriae Novae de Mensura Sortis (Exposition of a New Theory on the Measurement of Risk), in which he concludes that expected value alone cannot underlie a model for economic decision-making under uncertainty, as each individual has subjective risk preferences specific to their particular circumstances. Earlier, in 1728, another Swiss mathematician, Gabriel Cramer, had also described the core concept of utility, writing “...the mathematicians estimate money in proportion to its quantity, and men of good sense in proportion to the usage that they may make of it.”
Bernoulli describes the predominant feature of all utility functions: “The
utility resulting from any small increase in wealth will be inversely proportionate
to the quantity of goods previously possessed.” To illustrate this, he considers
a two-player game between Peter and Paul (also called the St. Petersburg Paradox), in which Paul will toss coins and Peter will pay Paul 2^N ducats if there are N consecutive “heads” before a non-heads toss. Prior to Bernoulli, the accepted value to Paul of playing this game might have been the expected value:
\mathrm{Value} = \sum_{n=1}^{\infty} 2^n\,P(n) = \sum_{n=1}^{\infty} \frac{2^n}{2^n} = \infty    (1)

Bernoulli rightly recognized that, contrary to this result, few real people would pay much money to play this game (even assuming an infinite number of potential throws and infinite wealth on Peter's part). Proposing a utility function of U = ln(w), Bernoulli shows that an individual with wealth w_0 who must pay an upfront amount V, and who seeks to maximize expected marginal utility instead of expected value, requires:

E[\Delta U] = \sum_{n=1}^{\infty} \frac{1}{2^n}\left[\ln\!\left(2^n - V + w_0\right) - \ln(w_0)\right] \ge 0    (2)

and thus, after doing some math:



V \le \prod_{n=1}^{\infty} \left(w_0 + 2^n\right)^{\frac{1}{2^n}} - w_0    (3)

Here we show V as a function of \log_{10} w_0:

Figure 1: V(w_0)
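As a quick numerical check (a minimal sketch in Python; the function name petersburg_value and the truncation level are our own choices, not part of the text), equation (3) can be evaluated directly by truncating the infinite product once the remaining terms are negligible:

import math

def petersburg_value(w0: float, n_terms: int = 60) -> float:
    # Evaluate equation (3): V = prod_{n>=1} (w0 + 2^n)^(1/2^n) - w0,
    # working in logs and truncating once further terms are numerically negligible.
    log_product = sum((0.5 ** n) * math.log(w0 + 2.0 ** n) for n in range(1, n_terms + 1))
    return math.exp(log_product) - w0

if __name__ == "__main__":
    for w0 in (10.0, 100.0, 1000.0, 10000.0):
        # The maximum acceptable price V grows only slowly (roughly with log w0).
        print(w0, round(petersburg_value(w0), 2))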
Bernoulli's concept of utility was far from the last word on economic decision making. Many have put forward critiques of a general expected utility theory, on grounds of circularity [1], failure to consistently agree with experiment (see the behavioral economics literature generally, and [2] for a discussion of how estimated utility parameters can vary considerably across experiments), and failure even to fully resolve the St. Petersburg Paradox (consider, for example, a payout of e^{2^n} instead of 2^n; [Menger, 1934] shows that for any unbounded utility function there exists a payout resulting in infinite expected marginal utility). Not surprisingly, modern economics has mostly moved on from pure expected marginal utility as the center of decision-making models, though many modern models do involve some form of extended utility. Fortunately, we are not seeking a General Theory of Economic Decision Making here; for our purposes Bernoulli's core utility concept will work nicely.

2 Utility and Risk Aversion


Consider an investor, whom we shall call Jack. Jack is offered a 50/50 bet at b : 1 net odds; the question is at what odds Jack will accept the bet. This can clearly be framed as a question of Jack's risk aversion: any odds over 1:1 give the bet a positive expected value, but Jack likely requires ‘extra’ expected value to compensate for the possibility of loss. If Jack is perfectly risk-averse, he would not take the bet at any odds, and if he is perfectly risk-taking he would take the bet at 1:1 odds. If we assume Jack has log-utility U = ln(w) and initial wealth w_0, and look at the expected marginal utility of the bet, we have:
E[\Delta U] = \frac{1}{2}\ln(w_0 + b) + \frac{1}{2}\ln(w_0 - 1) - \ln(w_0)    (4)
If Jack only takes the bet when it has zero or positive expected marginal utility, setting E[ΔU] = 0 in (4) gives the lowest net odds for which he would play, b = w_0/(w_0 - 1), shown in Figure 2:

Figure 2: Lowest acceptable net odds as a function of w_0

As expected, we see Jack will accept lower odds as his potential loss relative to his wealth becomes smaller. Figure 3 shows graphically the mechanics of the expected utility calculation:

Figure 3: Mechanics of the expected utility calculation
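To make the mechanics concrete, here is a minimal numerical sketch in Python (the function names and bisection bounds are ours; an illustration, not part of the original text) that solves equation (4) for the lowest acceptable net odds and checks it against the closed form b = w_0/(w_0 - 1):

import math

def expected_delta_u(b: float, w0: float) -> float:
    # Equation (4): expected marginal log-utility of a 50/50 bet risking 1 unit at b:1 net odds.
    return 0.5 * math.log(w0 + b) + 0.5 * math.log(w0 - 1.0) - math.log(w0)

def min_acceptable_odds(w0: float) -> float:
    # Find the b at which E[dU] = 0 by bisection (E[dU] is increasing in b).
    lo, hi = 1e-9, 1e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if expected_delta_u(mid, w0) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    for w0 in (2.0, 5.0, 10.0, 100.0):
        # Numerical root vs. the closed form w0 / (w0 - 1); wealthier Jack accepts lower odds.
        print(w0, round(min_acceptable_odds(w0), 4), round(w0 / (w0 - 1.0), 4))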

We see the odds Jack will accept (i.e. his level of risk aversion) are determined by the curvature of U(w) in the vicinity of w_0. Thus the utility function and its parameters effectively quantify risk aversion in a way that is very useful to us. While this approach of maximizing expected marginal utility has a number of potential drawbacks in the context of general economic decision making, there are several reasons to believe it is quite a good model of the risk preferences of professional investors:
• From the standpoint of a practitioner, we are not trying to model the risk-adjusted decisions of an arbitrary investor, but rather to correctly and consistently capture our own risk aversion, about which we have much more information and insight.

• For many professional investors risk aversion is not just an abstract concept, but a concrete, quantifiable reality built into their style, mandate, capital structure, etc.

• The process by which real investors evaluate many kinds of investments is in general genuinely close to the process of maximizing expected marginal utility, though often without a formal concept of utility, e.g. by computing expected value within a scenario-analysis framework with a secondary weighting to adjust for ‘riskiness’.

• Most professional investors explicitly aspire to a decision model resulting in optimal risk-adjusted returns, and are strongly incentivized to act as optimally as possible.

3 Choosing Your Idiosyncratic Utility Function


Here we discuss how a real investor may choose and parameterize their own utility function; in the next section we discuss how to use it. Though there are a very large number of potential utility function forms in the academic literature, we advocate for two large classes which we think meet the needs of most investors, and which have several benefits in terms of ease of use.
The largest family we discuss is called the HARA (Hyperbolic Absolute Risk
Aversion) family, for which:
U(w) = \frac{1-\gamma}{\gamma}\left(\frac{\alpha w}{1-\gamma} + \beta\right)^{\gamma}    (5)

where α, β, and γ are free parameters. The HARA family is very flexible, but may have too many parameters for most users, and it is also scale-dependent (unlike the following family).
A special case of the HARA family is the CRRA (Constant Relative Risk
Aversion) family, for which:
U(w) = \frac{w^{1-\gamma}}{1-\gamma}    (6)
This is the only family of utility functions that is scale-independent (hence the alternative name isoelastic utility). The technical definition of the CRRA property is that:

\frac{-w\,U''(w)}{U'(w)} = \gamma    (7)
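For the CRRA form in (6) this is easy to verify directly:

U'(w) = w^{-\gamma}, \qquad U''(w) = -\gamma\,w^{-\gamma-1}, \qquad \frac{-w\,U''(w)}{U'(w)} = \frac{\gamma\,w^{-\gamma}}{w^{-\gamma}} = \gamma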

But we focus on a more practical manifestation of this feature: consider a 50/50 binary bet risking 10% of initial wealth w_0. Given net odds for which E[ΔU] = 0, the expected marginal utility of that bet for any other initial wealth w will also be 0. Put another way, for a 50/50 bet risking a fixed fraction of initial wealth, the net odds required to make the bet worthwhile are independent of the level of wealth; this is true for CRRA utility functions but not for other families. This property of scale-independence makes thinking about and working with the CRRA family much easier than with other families, and we believe it also accurately captures the behavior and preferences of many professional investors. If at my current wealth I need 2:1 odds to incentivize me to risk 10% of wealth on a 50/50 bet, that requirement is probably independent of changes in my wealth over reasonable (and possibly even very large) ranges.
So to parameterize our utility function, we suggest starting with two natural questions:

1. To risk 10% of total wealth (or whatever increment seems most natural given the investor's specific circumstances) on a single 50/50 bet, what odds are needed to incentivize me to make the bet? (Asking “what upside probability is needed to make the bet at 1:1 odds?” is equivalent, and may be more intuitive for some.)

2. What is the lowest X for which there are no odds that would incentivize me to risk X% of wealth on a single 50/50 bet?

We believe most investors can, intuitively or quantitatively, come to answers for these questions which accurately express their own risk preferences. Certainly professional investors should be able to. Because of the CRRA properties discussed above, for CRRA-utility each answer can be uniquely mapped to a value of γ; we shall say γ_1 and γ_2 correspond to the γ-values implied by the answers to questions 1) and 2) respectively.
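To illustrate the mapping (a minimal sketch in Python, assuming the CRRA form in equation (6); the function names, search bounds, and example inputs are our own assumptions), γ_1 can be found numerically from the answer to question 1, while for γ > 1 the boundedness of CRRA utility gives a closed form for γ_2:

import math

def crra_u(w: float, gamma: float) -> float:
    # CRRA utility, equation (6); gamma = 1 is the log-utility limit.
    return math.log(w) if abs(gamma - 1.0) < 1e-12 else w ** (1.0 - gamma) / (1.0 - gamma)

def gamma_from_required_odds(b: float, x: float = 0.10) -> float:
    # Question 1: gamma_1 such that a 50/50 bet risking fraction x of wealth at b:1
    # net odds has exactly zero expected marginal utility (wealth scales out, so w0 = 1).
    def e_delta_u(g: float) -> float:
        return 0.5 * crra_u(1.0 + b * x, g) + 0.5 * crra_u(1.0 - x, g) - crra_u(1.0, g)
    lo, hi = 0.0, 100.0   # E[dU] switches sign from + to - as gamma rises
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if e_delta_u(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def gamma_from_max_loss(x: float) -> float:
    # Question 2: for gamma > 1, CRRA utility is bounded above by 0, so no odds justify
    # risking fraction x once 0.5 * (1 - x)^(1 - gamma) >= 1, i.e. gamma_2 = 1 + ln(2) / (-ln(1 - x)).
    return 1.0 + math.log(2.0) / (-math.log(1.0 - x))

if __name__ == "__main__":
    print(gamma_from_required_odds(b=2.0, x=0.10))   # gamma_1 implied by "I need 2:1 odds to risk 10%"
    print(gamma_from_max_loss(0.25))                 # gamma_2 implied by "no odds justify risking 25%"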
If γ_1 and γ_2 are relatively close, then our risk preferences are internally consistent within CRRA-utility and we might pick γ = (γ_1 + γ_2)/2 to define our utility. If however they are not relatively close, we suggest two major options:
1. Use the CRRA family with a weighted-average of the γs, weighted toward
whichever is subjectively more important
2. Answer both questions for two different levels of wealth, then use the
more flexible HARA family and solve for the parameters which best fit all
4 answers.
We strongly favor using the CRRA family if at all possible, as the single parameter and scale-independence give it a robustness and intuitive character which make consistent use easier and more appealing. (We also note that log-utility is the γ → 1 limit of the CRRA family, since lim_{γ→1} (w^{1−γ} − 1)/(1−γ) = ln(w), and subtracting the constant 1/(1−γ) from (6) does not change the implied preferences.)

4 Investment Scaling
Too often in our experience investments are sized based on factors having little to do with their underlying risk-adjusted returns, and especially are not sized in a way which is both consistent across investments and consistent with the investor's idiosyncratic risk profile. The aspiration to consistency in this regard is not just an academic or philosophical quibble: as we discuss later, systematically mis-scaling relative to an investment's underlying risk/reward can be a significant drag on total performance. Using the utility function to quantify that risk profile is how we shall rationalize the investment sizing process.

4.1 The Kelly Criterion


The “Kelly Criterion” or “Kelly Bet” is a well-known gambling result describing
the optimal percentage of a gambler’s bankroll which should be risked on a single
wager. For a single wager at b : 1 odds with probability p of winning, the simple result is:
\text{Bankroll}\% = \frac{p(b+1) - 1}{b}    (8)
This implicitly assumes the gambler has log-utility, and is easy to prove using
the mechanics of expected marginal utility. We have the expected marginal
utility as:

E[\Delta U] = p\ln(w_0 + \kappa w_0 b) + (1 - p)\ln(w_0 - \kappa w_0) - \ln(w_0)    (9)


where κ is the percentage of bankroll wagered. To maximize E[∆U ] with
respect to κ, we have:

\frac{\partial E[\Delta U]}{\partial \kappa} = \frac{pb}{1 + \kappa b} - \frac{1 - p}{1 - \kappa} = 0    (10)
so that:
\kappa = \frac{p(b+1) - 1}{b}    (11)
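As a quick sanity check (a minimal sketch in Python; the function names and example inputs are ours), equation (11) can be verified against a direct grid search over equation (9):

import math

def kelly_fraction(p: float, b: float) -> float:
    # Equation (11): optimal fraction of bankroll for a bet at b:1 net odds won with probability p.
    return (p * (b + 1.0) - 1.0) / b

def expected_delta_log_u(kappa: float, p: float, b: float) -> float:
    # Equation (9) with w0 scaled to 1 (the wealth level drops out under log-utility).
    return p * math.log(1.0 + kappa * b) + (1.0 - p) * math.log(1.0 - kappa)

if __name__ == "__main__":
    p, b = 0.55, 1.0
    kappa_closed_form = kelly_fraction(p, b)            # 0.10 for these inputs
    grid = [i / 1000.0 for i in range(0, 999)]
    kappa_grid = max(grid, key=lambda k: expected_delta_log_u(k, p, b))
    print(kappa_closed_form, kappa_grid)                # the grid maximum sits at (or next to) the closed form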

4.2 Sizing Single Investments


The process of sizing a single continuous-price investment over a fixed horizon
is just a generalization of the process described above which yields the Kelly
Criterion for single bets. Given an investor with an arbitrary utility function
U evaluating an investment with a continuous probability distribution P (x) of
potential returns x over a fixed time horizon, we have:
E[\Delta U] = \int P(x)\,U(w_0 + \kappa x)\,dx - U(w_0)    (12)

Closed-form solutions for the optimal κ may exist for special combinations of P and U, but in general we would expect to solve for κ numerically. The sampled partial derivative with respect to κ will be well-behaved for most reasonable P and U, so the numerical process will generally be straightforward.
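For concreteness, here is a minimal Monte Carlo sketch of that numerical optimization in Python (the lognormal return distribution, the CRRA parameter, and the function names are illustrative assumptions; with w_0 = 1, κ is the fraction of wealth allocated and x the investment's return over the horizon):

import math
import random

def crra_u(w: float, gamma: float) -> float:
    # CRRA utility, equation (6); gamma = 1 recovers log-utility.
    return math.log(w) if abs(gamma - 1.0) < 1e-12 else w ** (1.0 - gamma) / (1.0 - gamma)

def expected_delta_u(kappa: float, returns: list, gamma: float, w0: float = 1.0) -> float:
    # Sampled version of equation (12): average U(w0 + kappa * x) over draws of x, minus U(w0).
    return sum(crra_u(w0 + kappa * x, gamma) for x in returns) / len(returns) - crra_u(w0, gamma)

if __name__ == "__main__":
    rng = random.Random(0)
    # Illustrative return distribution: lognormal, so a return x can never fall below -100%.
    returns = [math.exp(rng.gauss(0.06, 0.20)) - 1.0 for _ in range(20_000)]
    gamma = 3.0
    kappas = [i / 100.0 for i in range(0, 101)]
    kappa_star = max(kappas, key=lambda k: expected_delta_u(k, returns, gamma))
    print(kappa_star)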

4.3 Sizing Within A Portfolio


Most real-world investment scenarios will involve sizing a new investment in the context of an existing portfolio, not in isolation. There are two main options here:

1. Given a potential investment I and existing portfolio P, assume we have good modeling for the portfolio and know the probability distribution of portfolio returns P_P(x) over some relevant time horizon, where presumably x is an aggregate state variable derived from a more complicated set of drivers underlying P. Assume we also know the probability distribution P_I(y) for the returns of I, and can estimate the correlation ρ between x and y. Then we can write, as an approximation:
E[\Delta U] = (1-\rho)\iint P_P(x)\,P_I(y)\,U(x, y)\,dx\,dy \;+\; \rho \int P_P(x)\,U(x, x)\,dx \;-\; U(w_0)    (13)
This looks daunting, but numerically maximizing against this objective is no more difficult than in (12). For most cases with 0 < ρ < 1 this will be just an approximation to the actual joint distribution, but our testing indicates it is generally a good approximation, and as shown in [3] the process is much less sensitive to errors in variance/covariance than to errors in mean.

2. Given a portfolio P with drivers {x_i} and additional investment I with driver y, we can directly simulate:

E\left[P(x_i, y)\,U(x_i, y)\right]    (14)

then optimize against the size of I. For special cases in which investments
within the portfolio are independent, this simplifies considerably and likely
can be calculated without simulation.
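A minimal simulation sketch of this second option in Python (the joint lognormal driver model, the correlation, the CRRA parameter, and the simplifying assumption that the new investment is funded from cash and so adds on top of the existing portfolio are all illustrative assumptions rather than a prescription):

import math
import random

def crra_u(w: float, gamma: float) -> float:
    # CRRA utility, equation (6); gamma = 1 recovers log-utility.
    return math.log(w) if abs(gamma - 1.0) < 1e-12 else w ** (1.0 - gamma) / (1.0 - gamma)

def simulate_joint_returns(n: int, rho: float, seed: int = 0):
    # Toy joint model: portfolio return x and investment return y are lognormal,
    # driven by normal shocks with correlation rho.
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        x = math.exp(0.05 + 0.15 * z1) - 1.0   # existing portfolio P
        y = math.exp(0.08 + 0.30 * z2) - 1.0   # candidate investment I
        draws.append((x, y))
    return draws

def optimal_size(draws, gamma: float = 3.0, w0: float = 1.0) -> float:
    # Maximize sampled expected utility over the size kappa of I, assuming for illustration
    # that final wealth is W = w0 * (1 + x + kappa * y).
    def expected_u(kappa: float) -> float:
        total = 0.0
        for x, y in draws:
            w = max(w0 * (1.0 + x + kappa * y), 1e-9)   # floor keeps U defined in extreme joint tails
            total += crra_u(w, gamma)
        return total / len(draws)
    kappas = [i / 100.0 for i in range(0, 101)]
    return max(kappas, key=expected_u)

if __name__ == "__main__":
    print(optimal_size(simulate_joint_returns(20_000, rho=0.3)))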

4.4 Caveats and Discussion


For an excellent, detailed discussion of the good and bad aspects of the Kelly methodology from which we have generalized, we highly recommend [3]. A few points of our own:

1. Why have these concepts gained little traction with the professional investing community (with some exceptions, of course), despite extensive discussion in the academic finance literature? Von Neumann et al. were discussing the use of utility to measure risk aversion as early as 1944 [5]. We have two working theories:

• Many of the quantitative people in finance who do the modeling work tend to come from the derivatives side of the business, where calculations are generally done in the risk-neutral measure, in which context expected value really is the correct calculation.

• Utility in economics is often taught as a highly abstract concept, so it feels to many investors too much like “Econ 101 theory” to be of practical use in day-to-day investing.
2. In practice the “Kelly bet” sizing arrived at above should be viewed as a
ceiling on the potential bet or investment size, not an exact prescription.
There are two main reasons:

• As shown in [3], full-Kelly sizing may be statistically optimal but can still result in very severe drawdowns. In general a complete sizing framework may want to take the potential drawdown effect into consideration as well, depending on the number, correlation, and average size of the investments being made.

• The Kelly bet is sensitive to the mean forecast return: greater degrees of uncertainty about the distribution of potential returns require greater degrees of caution relative to full-Kelly sizing.
3. Thorp et al. show in [3] the dangers of over-betting relative to the Kelly bet, demonstrating that for a geometric Wiener process betting exactly twice the Kelly bet results in capital growth equal to the risk-free rate.

4. Outside of very specific situations, it is rare in the investment world to have definitive information about the distribution of potential returns for any given investment. Clearly this framework favors investments with return distributions which are estimable with at least some confidence, but we would argue all frameworks should likely favor such investments. It seems to us hard to argue for allocating significant relative capital to an investment whose returns are so uncertain that even their distribution cannot be reasonably estimated.
5. One potential objection we have seen to related frameworks is that they are only appropriate for liquid-markets investments with significant historical data. We do not agree. Estimating the return distributions may in some sense be easier in markets with high-frequency data, but we believe it is possible in all situations which merit an actual investment. Even in private, highly illiquid markets, the investment process for idiosyncratic credit and equity instruments very often includes some kind of scenario analysis, whether heuristic or by simulation. Creating a full return distribution is a natural extension of that process, and not especially burdensome. After all, what is the investment process if not, in some sense, an exploration of the potential return profile of an investment? Moreover, Kahneman in [4] powerfully demonstrates that adding an explicitly quantitative element can significantly reduce cognitive bias even in cases where the ultimate decision is made qualitatively.

6. This approach is central to the Merton Portfolio Problem, which we discuss in detail in the note “Constructing a Long-Term Investment Portfolio: Part I”.
7. Just a few interesting extensions include:

• Interactions between Kelly-style criteria and starting/stopping/scaling rules in continuous-time decision making (in contrast to the treatment above, which assumes fixed-horizon investment periods)

• Using Kelly-style criteria to define a stack-sizing strategy for market-making (to be effective, the strategy would also need at minimum a mechanism for filtering “noise” vs. “information” orders)

References
[1] Joan Robinson. Economic Philosophy. Penguin Books, 1993.

[2] Babcock, Choi, Feinerman. Risk and Probability Premiums for CARA Utility Functions. Journal of Agricultural and Resource Economics, 17-24, 1993.

[3] MacLean, Thorp, Ziemba. Good and Bad Properties of the Kelly Criterion. http://edwardothorp.com/sitebuildercontent/sitebuilderfiles/Good Bad Paper.pdf, 2010.

[4] Daniel Kahneman. Thinking, Fast and Slow. Macmillan, 2011.

[5] Von Neumann, Morgenstern. Theory of Games and Economic Behavior. Princeton University Press, 1944.

[6] Peter Bernstein. Against the Gods: The Remarkable Story of Risk. Wiley, 2008.
