
Edward O. Thorp
From Wikipedia, the free encyclopedia

Edward O. Thorp
Born August 14, 1932 (age 86)
Chicago, Illinois, U.S.
Residence United States
Citizenship American
Alma mater UCLA
Scientific career
Fields Probability theory, Linear operators
Institutions UC Irvine, New Mexico State University, MIT
Thesis Compact Linear Operators in Normed Spaces (1958)
Doctoral advisor Angus E. Taylor
Influences Claude Shannon
Edward Oakley Thorp (born August 14, 1932) is an American mathematics professor,
author, hedge fund manager, and blackjack player. He pioneered the modern
applications of probability theory, including the harnessing of very small
correlations for reliable financial gain.[citation needed]

Thorp is the author of Beat the Dealer, which mathematically proved that the house
advantage in blackjack could be overcome by card counting.[1] He also developed and
applied effective hedge fund techniques in the financial markets, and collaborated
with Claude Shannon in creating the first wearable computer.[2]

Thorp received his Ph.D. in mathematics from the University of California, Los
Angeles in 1958, and worked at the Massachusetts Institute of Technology (MIT) from
1959 to 1961. He was a professor of mathematics from 1961 to 1965 at New Mexico
State University, and then joined the University of California, Irvine where he was
a professor of mathematics from 1965 to 1977[citation needed] and a professor of
mathematics and finance from 1977 to 1982.

Contents
1 Computer-aided research in blackjack
1.1 Applied research in Reno, Lake Tahoe and Las Vegas
2 Stock market
3 Bibliography
4 See also
5 References
6 Sources
7 External links
Computer-aided research in blackjack
Thorp used the IBM 704 as a research tool to investigate the probabilities of
winning while developing his blackjack game theory, which was based on the Kelly
criterion, which he learned about from Kelly's 1956 paper.[3][4][5][6] He
learned Fortran in order to program the equations needed for his theoretical
research model of the probabilities of winning at blackjack. Thorp analyzed the
game of blackjack in great depth this way, devising card-counting schemes with
the aid of the IBM 704 to improve his odds,[7] especially near the end of a
card deck that is not being reshuffled after every deal.
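The core mechanic of card counting can be illustrated with a short sketch. This is not Thorp's original Ten-Count system; it is a simplified high-low style point count, with illustrative function names, showing how the shifting composition of an unshuffled deck is tracked:

```python
def card_value(rank):
    """Point value of a seen card under a simple high-low style count:
    low cards (2-6) count +1, middle cards (7-9) count 0,
    tens, face cards and aces count -1."""
    if rank in (2, 3, 4, 5, 6):
        return 1
    if rank in (7, 8, 9):
        return 0
    return -1  # tens, face cards, aces

def true_count(seen_cards, decks_remaining):
    """Running count normalized by the number of decks left to be dealt."""
    running = sum(card_value(r) for r in seen_cards)
    return running / decks_remaining

# Example: many low cards have already been dealt, so the remaining cards
# are rich in tens and aces, and the count turns strongly positive,
# signaling a player advantage.
seen = [2, 3, 4, 5, 6, 6, 9, 10]
print(true_count(seen, decks_remaining=0.5))
```

A positive count near the end of an unshuffled deck is exactly the situation the text describes: the player raises their bet when the remaining cards favor them.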

Applied research in Reno, Lake Tahoe and Las Vegas


Thorp decided to test his theory in practice in Reno, Lake Tahoe, and Las Vegas.[5]
[7][8] He started his applied research with $10,000 of venture capital provided by
Manny Kimmel, a wealthy professional gambler and former bookmaker.[9] They first
visited Reno and Lake Tahoe establishments, where they tested Thorp's theory at the
local blackjack tables.[8] The experiment was a success: Thorp won $11,000 in a
single weekend, verifying his theory.[5] Casinos now shuffle well before the end of
the deck as a countermeasure to his methods. During his Las Vegas casino visits
Thorp frequently wore disguises such as wraparound glasses and false beards.[8] In
addition to his blackjack activities, Thorp assembled a baccarat team, which was
also winning.[8]

News quickly spread throughout the gambling community, which was eager for new
methods of winning, and Thorp became an instant celebrity among blackjack
aficionados. To meet the demand for his research results among a wider gambling
audience, he wrote the book Beat the Dealer in 1966, widely considered the original
card-counting manual.[10] It sold over 700,000 copies, a huge number for a
specialty title, which earned it a place on the New York Times bestseller list,
much to the chagrin of Kimmel, whose identity was thinly disguised in the book as
Mr. X.[5]

Thorp's blackjack research[11] is one of the very few examples where results from
such research reached the public directly, completely bypassing the usual academic
peer-review cycle. He has also stated that he considered the whole experiment an
academic exercise.[5]

In addition, while a professor of mathematics at MIT, Thorp met Claude Shannon
and took him and his wife Betty Shannon as partners on weekend forays to Las Vegas
to play roulette and blackjack, at which Thorp was very successful.[12] His team's
roulette play was the first instance of using a wearable computer in a casino,
something which has been illegal since May 30, 1985, when the Nevada devices law
came into effect as an emergency measure targeting blackjack and roulette devices.
[2][12] The wearable computer was co-developed with Claude Shannon in 1960–61.
The final operating version of the device was tested in Shannon's basement home
lab in June 1961.[2] His achievements have led him to become an inaugural
member of the Blackjack Hall of Fame.[13]

He also devised the "Thorp count", a method for calculating the likelihood of
winning in certain endgame positions in backgammon.[14]

Stock market
Since the late 1960s, Thorp has applied his knowledge of probability and statistics
to the stock market, discovering and exploiting a number of pricing anomalies in
the securities markets, and he has made a significant fortune.[4] Thorp's first hedge
fund was Princeton/Newport Partners. He is currently the President of Edward O.
Thorp & Associates, based in Newport Beach, California. In May 1998, Thorp reported
that his personal investments yielded an annualized 20 percent rate of return
averaged over 28.5 years.[15]

Bibliography
(Autobiography) Edward O. Thorp, A Man for All Markets: From Las Vegas to Wall
Street, How I Beat the Dealer and the Market, 2017. [1]
Edward O. Thorp, Elementary Probability, 1977, ISBN 0-88275-389-4
Edward Thorp, Beat the Dealer: A Winning Strategy for the Game of Twenty-One, ISBN
0-394-70310-3
Edward O. Thorp, Sheen T. Kassouf, Beat the Market: A Scientific Stock Market
System, 1967, ISBN 0-394-42439-5 (online pdf, retrieved 22 Nov 2017)
Edward O. Thorp, The Mathematics of Gambling, 1984, ISBN 0-89746-019-7 (online
version part 1, part 2, part 3, part 4)
Les Golden, Never Split Tens!: A Biographical Novel of Blackjack Game Theorist
Edward O. Thorp, 2017. ISBN 3-319-63485-2
Fortune's Formula: The Untold Story of the Scientific Betting System That Beat the
Casinos and Wall Street by William Poundstone
The Kelly Capital Growth Investment Criterion: Theory and Practice (World
Scientific Handbook in Financial Economic Series), ISBN 978-9814293495, February
10, 2011 by Leonard C. MacLean (Editor), Edward O. Thorp (Editor), William T.
Ziemba (Editor)

Kelly criterion
In probability theory and intertemporal portfolio choice, the Kelly criterion,
Kelly strategy, Kelly formula, or Kelly bet is a formula for bet sizing that leads
almost surely to higher wealth compared to any other strategy in the long run (i.e.
the limit as the number of bets goes to infinity). The Kelly bet size is found by
maximizing the expected logarithm of wealth which is equivalent to maximizing the
expected geometric growth rate.

It was described by J. L. Kelly, Jr, a researcher at Bell Labs, in 1956.[1] The
practical use of the formula has been demonstrated.[2][3][4]

The Kelly criterion is to bet a predetermined fraction of assets, and it can be
counterintuitive. In one study,[5][6] each participant was given $25 and asked to
bet on a coin that would land heads 60% of the time. Participants had 30 minutes to
play, so could place about 300 bets, and the prizes were capped at $250. Behavior
was far from optimal. "Remarkably, 28% of the participants went bust, and the
average payout was just $91. Only 21% of the participants reached the maximum. 18
of the 61 participants bet everything on one toss, while two-thirds gambled on
tails at some stage in the experiment." Using the Kelly criterion and the odds in
the experiment, the right approach would be to bet 20% of the pot on each throw
(see first example below). After a loss, the size of the bet is cut; after a win,
the stake increases.
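The study's setup is easy to simulate. The sketch below assumes the parameters described above ($25 starting stake, a 60% coin, roughly 300 bets, a $250 cap); the function name and trial count are illustrative:

```python
import random

def simulate(fraction, n_bets=300, p=0.6, start=25.0, cap=250.0):
    """Bet a fixed fraction of current wealth on a p-biased coin,
    with winnings capped as in the study."""
    wealth = start
    for _ in range(n_bets):
        bet = fraction * wealth
        wealth += bet if random.random() < p else -bet
        wealth = min(wealth, cap)  # prizes were capped at $250
    return wealth

random.seed(42)
trials = 200
kelly = sum(simulate(0.20) for _ in range(trials)) / trials    # Kelly bet
overbet = sum(simulate(0.90) for _ in range(trials)) / trials  # wild overbetting
# The Kelly bettor reaches the cap in almost every trial, while the
# overbettor's wealth shrinks toward zero.
print(kelly, overbet)
```

Proportional betting cannot go exactly bust, but overbetting drives wealth toward zero almost surely, mirroring the ruin the study observed among participants who bet everything.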

Although the Kelly strategy's promise of doing better than any other strategy in
the long run seems compelling, some economists have argued strenuously against it,
mainly because an individual's specific investing constraints may override the
desire for optimal growth rate.[7] The conventional alternative is expected utility
theory which says bets should be sized to maximize the expected utility of the
outcome (to an individual with logarithmic utility, the Kelly bet maximizes
expected utility, so there is no conflict; moreover, Kelly's original paper clearly
states the need for a utility function in the case of gambling games which are
played finitely many times[1]). Even Kelly supporters usually argue for fractional
Kelly (betting a fixed fraction of the amount recommended by Kelly) for a variety
of practical reasons, such as wishing to reduce volatility, or protecting against
non-deterministic errors in their advantage (edge) calculations.[8]

In recent years, Kelly has become a part of mainstream investment theory[9] and the
claim has been made that well-known successful investors including Warren
Buffett[10] and Bill Gross[11] use Kelly methods. William Poundstone wrote an
extensive popular account of the history of Kelly betting.[7]

The second-order Taylor polynomial can be used as a good approximation of the main
criterion. Primarily, it is useful for stock investment, where the fraction devoted
to investment is based on simple characteristics that can be easily estimated from
existing historical data: expected value and variance. This approximation leads to
results that are robust and similar to those of the original criterion.[12]

Contents
1 Statement
2 Proof
3 Bernoulli
4 Multiple horses
5 Application to the stock market
5.1 Single asset
5.2 Many assets
6 See also
7 References
8 External links
Statement
For simple bets with two outcomes, one involving losing the entire amount bet, and
the other involving winning the bet amount multiplied by the payoff odds, the Kelly
bet is:

f^{*} = \frac{bp - q}{b} = \frac{bp - (1 - p)}{b} = \frac{p(b + 1) - 1}{b}
where:

f * is the fraction of the current bankroll to wager, i.e. how much to bet;
b is the net odds received on the wager ("b to 1"); that is, you could win $b (on
top of getting back your $1 wagered) for a $1 bet
p is the probability of winning;
q is the probability of losing, which is 1 - p.
As an example, if a gamble has a 60% chance of winning (p = 0.60, q = 0.40), and
the gambler receives 1-to-1 odds on a winning bet (b = 1), then the gambler should
bet 20% of the bankroll at each opportunity (f* = 0.20), in order to maximize the
long-run growth rate of the bankroll.

If the gambler has zero edge, i.e. if b = q / p, then the criterion recommends the
gambler bets nothing.

If the edge is negative (b < q / p) the formula gives a negative result, indicating
that the gambler should take the other side of the bet. For example, in standard
American roulette, the bettor is offered an even money payoff (b = 1) on red, when
there are 18 red numbers and 20 non-red numbers on the wheel (p = 18/38). The Kelly
bet is -1/19, meaning the gambler should bet one-nineteenth of their bankroll that
red will not come up. Unfortunately, the casino doesn't allow betting against
something coming up, so a Kelly gambler cannot place a bet.
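The two worked examples above can be checked with a short calculation (a sketch; the function name is ours):

```python
def kelly_fraction(p, b):
    """Kelly bet f* = (bp - q)/b for win probability p and net odds b-to-1."""
    q = 1 - p
    return (b * p - q) / b

# 60% chance of winning at even odds: bet 20% of the bankroll.
print(kelly_fraction(0.6, 1))      # ≈ 0.2
# American roulette on red: p = 18/38 at even money.
print(kelly_fraction(18 / 38, 1))  # ≈ -1/19, i.e. the edge is on the other side
```

A zero result means no bet, and a negative result means the bettor would need to take the other side of the wager, as the roulette example illustrates.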

The top of the first fraction is the expected net winnings from a $1 bet, since the
two outcomes are that you either win $b with probability p, or lose the $1 wagered,
i.e. win $-1, with probability q. Hence:

f^{*} = \frac{\text{expected net winnings}}{\text{net winnings if you win}}
For even-money bets (i.e. when b = 1), the first formula can be simplified to:

f^{*} = p - q.


Since q = 1-p, this simplifies further to

f^{*} = 2p - 1.


A more general problem relevant for investment decisions is the following:

1. The probability of success is p.

2. If you succeed, the value of your investment increases from 1 to 1 + b.

3. If you fail (for which the probability is q = 1 - p), the value of your
investment decreases from 1 to 1 - a. (Note that the previous description above
assumes that a is 1.)

In this case, the Kelly criterion turns out to be the relatively simple expression

f^{*} = p/a - q/b.


Note that this reduces to the original expression for the special case above
(f^{*} = p - q) for b = a = 1.

Clearly, in order to decide in favor of investing at least a small amount
(f^{*} > 0), you must have

pb > qa,

which is nothing more than the fact that your expected profit must exceed the
expected loss for the investment to make any sense.

The general result clarifies why leveraging (taking a loan to invest) decreases the
optimal fraction to be invested, as in that case a > 1. Obviously, no matter how
large the probability of success p is, if a is sufficiently large, the optimal
fraction to invest is zero. Thus, using too much margin is not a good investment
strategy, no matter how good an investor you are.
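The general formula, and its behavior under leverage-style losses, can be sketched as follows (the function name and parameter values are illustrative):

```python
def kelly_general(p, a, b):
    """f* = p/a - q/b: the investment gains fraction b with probability p
    and loses fraction a with probability q = 1 - p."""
    q = 1 - p
    return p / a - q / b

# Reduces to the simple formula f* = p - q when a = b = 1:
print(kelly_general(0.6, 1, 1))  # ≈ 0.2
# With a > 1 (leveraged losses) the optimal fraction shrinks; here it is
# negative, so the criterion says not to invest at all:
print(kelly_general(0.6, 2, 1))  # ≈ -0.1
```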

Proof
Heuristic proofs of the Kelly criterion are straightforward.[13] For a symbolic
verification with Python and SymPy one would set the derivative y'(x) of the
expected value of the logarithmic bankroll y(x) to 0 and solve for x:

>>> from sympy import *
>>> x, b, p = symbols('x b p')
>>> y = p*log(1 + b*x) + (1 - p)*log(1 - x)
>>> solve(diff(y, x), x)
[-(1 - p - b*p)/b]
The Kelly criterion maximises the expectation of the logarithm of wealth (the
expectation value of a function is given by the sum of the probabilities of
particular outcomes multiplied by the value of the function in the event of that
outcome). We start with 1 unit of wealth and bet a fraction f^{*} of that wealth
on an outcome that occurs with probability p and offers odds of b. The probability
of winning is p, and in that case the wealth is equal to 1 + f^{*}b. The
probability of losing is 1 - p, and in that case the wealth is equal to 1 - f^{*}.
Therefore our expectation value for log wealth E is given by:

E = p\log(1 + f^{*}b) + (1 - p)\log(1 - f^{*})

To find the value of f^{*} for which the expectation value is maximised, we
differentiate the above expression and set it equal to zero. This gives:

\frac{dE}{df^{*}} = \frac{pb}{1 + f^{*}b} - \frac{1 - p}{1 - f^{*}} = 0

Rearranging this equation for f^{*} gives the Kelly criterion:

f^{*} = \frac{pb + p - 1}{b}

For a rigorous and general proof, see Kelly's original paper[1] or some of the
other references listed below. Some corrections have been published.[14]

We give the following non-rigorous argument for the case b = 1 (a 50:50 "even
money" bet) to show the general idea and provide some insights.[1]

When b = 1, the Kelly bettor bets 2p - 1 times initial wealth, W, as shown above.
If they win, they have 2pW. If they lose, they have 2(1 - p)W. Suppose they make N
bets like this, and win K of them. The order of the wins and losses doesn't matter,
so they will have:

2^{N} p^{K} (1 - p)^{N-K} W.


Suppose another bettor bets a different amount, (2p - 1 + \Delta)W for some
positive or negative \Delta. They will have (2p + \Delta)W after a win and
[2(1 - p) - \Delta]W after a loss. After the same wins and losses as the Kelly
bettor, they will have:

(2p + \Delta)^{K} [2(1 - p) - \Delta]^{N-K} W
Take the derivative of this with respect to \Delta and get:

K(2p + \Delta)^{K-1}[2(1 - p) - \Delta]^{N-K}W - (N - K)(2p + \Delta)^{K}[2(1 - p) - \Delta]^{N-K-1}W
The turning point of the original function occurs when this derivative equals zero,
which occurs at:

K[2(1 - p) - \Delta] = (N - K)(2p + \Delta)
which implies:

\Delta = 2\left(\frac{K}{N} - p\right)
but:

\lim_{N \to +\infty} \frac{K}{N} = p
so in the long run, final wealth is maximized by setting \Delta to zero, which
means following the Kelly strategy.

This illustrates that Kelly has both a deterministic and a stochastic component. If
one knows K and N and wishes to pick a constant fraction of wealth to bet each time
(otherwise one could cheat and, for example, bet zero after the Kth win knowing
that the rest of the bets will lose), one will end up with the most money if one
bets:

\left(2\frac{K}{N} - 1\right)W
each time. This is true whether N is small or large. The "long run" part of Kelly
is necessary because K is not known in advance, just that as N gets large, K will
approach pN. Someone who bets more than Kelly can do better if K > pN for a
stretch; someone who bets less than Kelly can do better if K < pN for a stretch,
but in the long run, Kelly always wins.
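The argument above can be verified numerically for fixed K and N (a sketch; the names are ours):

```python
def final_wealth(f, K, N, W=1.0):
    """Wealth after K wins and N - K losses at even odds (b = 1),
    betting a fixed fraction f of wealth each time."""
    return (1 + f) ** K * (1 - f) ** (N - K) * W

K, N = 60, 100
f_best = 2 * K / N - 1  # 0.2, i.e. the Kelly fraction when K/N equals p = 0.6
# Any fixed deviation from this fraction ends with less wealth:
assert final_wealth(f_best, K, N) > final_wealth(0.1, K, N)
assert final_wealth(f_best, K, N) > final_wealth(0.3, K, N)
```

This matches the text: for known K and N the best constant fraction is 2(K/N) - 1, and the Kelly bet 2p - 1 coincides with it only in the long run, when K/N approaches p.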
The heuristic proof for the general case proceeds as follows.[citation needed]

In a single trial, if you invest the fraction f of your capital, then if your
strategy succeeds, your capital at the end of the trial increases by the factor
1 - f + f(1 + b) = 1 + fb, and, likewise, if the strategy fails, you end up having
your capital decreased by the factor 1 - fa. Thus at the end of N trials (with pN
successes and qN failures), the starting capital of $1 yields

C_{N} = (1 + fb)^{pN} (1 - fa)^{qN}.


Maximizing \log(C_{N})/N, and consequently C_{N}, with respect to f leads to the
desired result

f^{*} = p/a - q/b.


For a more detailed discussion of this formula for the general case, see [15].
There, it can be seen that substituting p for the ratio of the number of
"successes" to the number of trials implies that the number of trials must be very
large, since p is defined as the limit of this ratio as the number of trials goes
to infinity. In brief, betting f^{*} each time will likely maximize the wealth
growth rate only in the case where the number of trials is very large, and p and b
are the same for each trial. In practice, this is a matter of playing the same
game over and over, where the probability of winning and the payoff odds are
always the same. In the heuristic proof above, pN successes and qN failures are
highly likely only for very large N.

Bernoulli
In a 1738 article, Daniel Bernoulli suggested that, when one has a choice of bets
or investments, one should choose that with the highest geometric mean of outcomes.
This is mathematically equivalent to the Kelly criterion, although the motivation
is entirely different (Bernoulli wanted to resolve the St. Petersburg paradox).

The Bernoulli article was not translated into English until 1954,[16] but the work
was well-known among mathematicians and economists.

Multiple horses
Kelly's criterion may be generalized[17] to gambling on many mutually exclusive
outcomes, such as in horse races. Suppose there are several mutually exclusive
outcomes. The probability that the k-th horse wins the race is p_{k}, the total
amount of bets placed on the k-th horse is B_{k}, and

\beta_{k} = \frac{B_{k}}{\sum_{i} B_{i}} = \frac{1}{1 + Q_{k}},
where Q_{k} are the pay-off odds, D = 1 - tt is the dividend rate, where tt is the
track take or tax, and D/\beta_{k} is the revenue rate after deduction of the
track take when the k-th horse wins. The fraction of the bettor's funds to bet on
the k-th horse is f_{k}. Kelly's criterion for gambling with multiple mutually
exclusive outcomes gives an algorithm for finding the optimal set S^{o} of
outcomes on which it is reasonable to bet, and it gives an explicit formula for
the optimal fractions f_{k}^{o} of the bettor's wealth to be bet on the outcomes
included in the optimal set S^{o}. The algorithm for the optimal set of outcomes
consists of four steps.[17]
Step 1. Calculate the expected revenue rate for all possible (or only for several
of the most promising) outcomes:

er_{k} = \frac{D}{\beta_{k}} p_{k} = D(1 + Q_{k}) p_{k}.

Step 2. Reorder the outcomes so that the new sequence er_{k} is non-increasing.
Thus er_{1} will be the best bet.

Step 3. Set S = \varnothing (the empty set), k = 1, R(S) = 1. Thus the best bet
er_{k} = er_{1} will be considered first.

Step 4. Repeat:

If er_{k} = \frac{D}{\beta_{k}} p_{k} > R(S), then insert the k-th outcome into
the set: S = S \cup \{k\}, recalculate R(S) according to the formula

R(S) = \frac{1 - \sum_{i \in S} p_{i}}{1 - \sum_{i \in S} \frac{\beta_{i}}{D}}

and then set k = k + 1.

Else set S^{o} = S and stop the repetition.

If the optimal set S^{o} is empty, then do not bet at all. If the set S^{o} of
optimal outcomes is not empty, then the optimal fraction f_{k}^{o} to bet on the
k-th outcome may be calculated from this formula:

f_{k}^{o} = \frac{er_{k} - R(S^{o})}{\frac{D}{\beta_{k}}} = p_{k} - \frac{R(S^{o})}{\frac{D}{\beta_{k}}}.

One may prove[17] that

R(S^{o}) = 1 - \sum_{i \in S^{o}} f_{i}^{o},

where the right-hand side is the reserve rate.[clarification needed] Therefore the
requirement er_{k} = \frac{D}{\beta_{k}} p_{k} > R(S) may be interpreted[17] as
follows: the k-th outcome is included in the set S^{o} of optimal outcomes if and
only if its expected revenue rate is greater than the reserve rate. The formula
for the optimal fraction f_{k}^{o} may be interpreted as the excess of the
expected revenue rate of the k-th horse over the reserve rate, divided by the
revenue after deduction of the track take when the k-th horse wins, or,
equivalently, as the excess of the probability of the k-th horse winning over the
reserve rate, divided by that same revenue. The binary growth exponent is

G^{o} = \sum_{i \in S} p_{i} \log_{2}(er_{i}) + \left(1 - \sum_{i \in S} p_{i}\right) \log_{2}(R(S^{o})),
and the doubling time is

T_{d} = \frac{1}{G^{o}}.


This method of selection of optimal bets may also be applied when probabilities
p_{k} are known only for the several most promising outcomes, while the remaining
outcomes have no chance to win. In this case it must be that \sum_{i} p_{i} < 1
and \sum_{i} \beta_{i} < 1.
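The four steps above can be sketched in Python. This is an illustrative implementation under the stated assumptions; the input values are hypothetical and ties in the ordering are ignored:

```python
def kelly_horses(p, Q, tt):
    """Optimal set and betting fractions for mutually exclusive outcomes.
    p[k]: win probability of outcome k; Q[k]: pay-off odds; tt: track take."""
    D = 1 - tt                                         # dividend rate
    beta = [1 / (1 + q) for q in Q]
    er = [D / beta[k] * p[k] for k in range(len(p))]   # Step 1: revenue rates
    order = sorted(range(len(p)), key=lambda k: -er[k])  # Step 2: sort, best first
    S, R = [], 1.0                                     # Step 3: empty set, R(S) = 1
    for k in order:                                    # Step 4: repeat
        if er[k] > R:                                  # include outcome k
            S.append(k)
            R = (1 - sum(p[i] for i in S)) / (1 - sum(beta[i] / D for i in S))
        else:
            break                                      # S^o found
    f = {k: p[k] - R * beta[k] / D for k in S}         # optimal fractions
    return S, R, f

# Three horses with a 20% track take; only the long shot (index 2) has an
# expected revenue rate above the reserve rate.
S, R, f = kelly_horses(p=[0.5, 0.3, 0.2], Q=[1, 2, 9], tt=0.2)
print(S, f)
```

The identity R(S^o) = 1 - sum of the optimal fractions holds for the returned values, which is a convenient sanity check on the computation.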

Application to the stock market


In mathematical finance, a portfolio is called growth optimal if security weights
maximize the expected geometric growth rate (which is equivalent to maximizing log
wealth).[citation needed]

Computations of growth optimal portfolios can suffer tremendous garbage-in,
garbage-out problems.[citation needed] For example, the cases below take as given
the expected return and covariance structure of various assets, but these
parameters are at best estimated or modeled with significant uncertainty. Ex-post
performance of a supposed growth optimal portfolio may differ fantastically from
the ex-ante prediction if portfolio weights are largely driven by estimation
error. Dealing with parameter uncertainty and estimation error is a large topic
in portfolio theory.[citation needed]

Single asset
Considering a single asset (stock, index fund, etc.) and a risk-free rate, it is
easy to obtain the optimal fraction to invest through geometric Brownian motion.
The value of a lognormally distributed asset S at time t (S_{t}) is

S_{t} = S_{0} \exp\left(\left(\mu - \frac{\sigma^{2}}{2}\right)t + \sigma W_{t}\right),
from the solution of the geometric Brownian motion, where W_{t} is a Wiener
process, and \mu (percentage drift) and \sigma (the percentage volatility) are
constants. Taking expectations of the logarithm:

\mathbb{E}\log(S_{t}) = \log(S_{0}) + \left(\mu - \frac{\sigma^{2}}{2}\right)t.
Then the expected log return R_{s} is

R_{s} = \left(\mu - \frac{\sigma^{2}}{2}\right)t.
For a portfolio made of an asset S and a bond paying risk-free rate r, with
fraction f invested in S and (1 - f) in the bond, the expected rate of return
G(f) is given by

G(f) = f\mu - \frac{(f\sigma)^{2}}{2} + (1 - f)r.
Solving \max(G(f)) we obtain

f^{*} = \frac{\mu - r}{\sigma^{2}}.

f^{*} is the fraction that maximizes the expected logarithmic return, and so is
the Kelly fraction.

Thorp[15] arrived at the same result but through a different derivation.

Remember that \mu is different from the asset log return R_{s}. Confusing the two
is a common mistake made by websites and articles discussing the Kelly criterion.
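The single-asset result is a one-line calculation, and the claim that f^{*} maximizes G(f) is easy to verify numerically (a sketch; the parameter values are hypothetical):

```python
def kelly_stock(mu, r, sigma):
    """Kelly fraction f* = (mu - r) / sigma^2 for a GBM asset and risk-free bond."""
    return (mu - r) / sigma ** 2

def G(f, mu, r, sigma):
    """Expected growth rate with fraction f in the asset, 1 - f in the bond."""
    return f * mu - (f * sigma) ** 2 / 2 + (1 - f) * r

mu, r, sigma = 0.08, 0.02, 0.2  # hypothetical drift, risk-free rate, volatility
f_star = kelly_stock(mu, r, sigma)
print(f_star)  # ≈ 1.5: the criterion can recommend leverage
# f* is the maximizer of G:
assert G(f_star, mu, r, sigma) > G(f_star - 0.1, mu, r, sigma)
assert G(f_star, mu, r, sigma) > G(f_star + 0.1, mu, r, sigma)
```

Note that a fraction above 1, as here, means borrowing at the risk-free rate to hold a leveraged position in the asset.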

Many assets
Consider a market with {\displaystyle n} n correlated stocks {\displaystyle S_{k}}
S_k with stochastic returns {\displaystyle r_{k}} r_{k}, {\displaystyle k=1,...,n,}
{\displaystyle k=1,...,n,} and a riskless bond with return {\displaystyle r} r. An
investor puts a fraction {\displaystyle u_{k}} u_{k} of their capital in
{\displaystyle S_{k}} S_k and the rest is invested in the bond. Without loss of
generality, assume that investor's starting capital is equal to 1. According to the
Kelly criterion one should maximize

\mathbb{E}\left[\ln\left((1 + r) + \sum_{k=1}^{n} u_{k}(r_{k} - r)\right)\right].

Expanding this with the Taylor series around \vec{u_{0}} = (0, \ldots, 0) we
obtain

\mathbb{E}\left[\ln(1 + r) + \sum_{k=1}^{n} \frac{u_{k}(r_{k} - r)}{1 + r} - \frac{1}{2}\sum_{k=1}^{n}\sum_{j=1}^{n} u_{k} u_{j} \frac{(r_{k} - r)(r_{j} - r)}{(1 + r)^{2}}\right].

Thus we reduce the optimization problem to quadratic programming, and the
unconstrained solution is

\vec{u^{\star}} = (1 + r)(\widehat{\Sigma})^{-1}(\widehat{\vec{r}} - r),

where \widehat{\vec{r}} and \widehat{\Sigma} are the vector of means and the
matrix of second mixed noncentral moments of the excess returns.

There is also a numerical algorithm for the fractional Kelly strategies and for the
optimal solution under no leverage and no short selling constraints.[18]

See also
Risk of ruin
Gambling and information theory
Proebsting's paradox
