
Arbitrage

The practice of taking advantage of a price differential between two or more
markets: combinations of matching deals are struck that capitalize upon the
imbalance, the profit being the difference between the market prices. An
arbitrage is a transaction that involves no negative cash flow at any
probabilistic or temporal state and a positive cash flow in at least one state;
in simple terms, a risk-free profit.
If the market prices do not allow for profitable arbitrage, the prices are said
to constitute an arbitrage equilibrium or arbitrage free market.
Conditions for arbitrage
Arbitrage is possible when one of three conditions is met:
The same asset does not trade at the same price on all markets ("the
law of one price").
Two assets with identical cash flows do not trade at the same price.
An asset with a known price in the future does not today trade at its
future price discounted at the risk-free interest rate (or, the asset does
not have negligible costs of storage; as such, for example, this
condition holds for grain but not for securities).
In the simplest example, any good sold in one market should sell for the
same price in another. Traders may, for example, find that the price of wheat
is lower in agricultural regions than in cities, purchase the good, and
transport it to another region to sell at a higher price. This type of price
arbitrage is the most common, but this simple example ignores the cost of
transport, storage, risk, and other factors. "True" arbitrage requires that there
be no market risk involved. Where securities are traded on more than one
exchange, arbitrage occurs by simultaneously buying in one and selling on
the other.
Examples
Suppose that the exchange rates (after taking out the fees for making the
exchange) in London are £5 = $10 = ¥1000 and the exchange rates in Tokyo
are ¥1000 = £6 = $12. Converting ¥1000 to $12 in Tokyo and converting
that $12 into ¥1200 in London, for a profit of ¥200, would be arbitrage. In
reality, this "triangle arbitrage" is so simple that it almost never occurs. But
more complicated foreign exchange arbitrages, such as the spot-forward
arbitrage (see interest rate parity) are much more common.
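As a quick check, the London/Tokyo figures above can be worked through in a few
lines of Python; this is purely an illustration of the arithmetic, using the
rates quoted in the example.

```python
# Illustration of the London/Tokyo example above, using the quoted rates.
# London: £5 = $10 = ¥1000, so $1 buys ¥100 in London.
# Tokyo:  ¥1000 = £6 = $12, so ¥1000 buys $12 in Tokyo.

def yen_round_trip(yen_start: float) -> float:
    """Convert yen to dollars in Tokyo, then dollars back to yen in London."""
    usd = yen_start * (12 / 1000)      # Tokyo leg
    yen_end = usd * (1000 / 10)        # London leg
    return yen_end - yen_start         # arbitrage profit in yen

print(yen_round_trip(1000))  # 200.0, the ¥200 profit from the example
```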

Exchange-traded fund arbitrage - Exchange Traded Funds allow
authorized participants to exchange back and forth between shares in
underlying securities held by the fund and shares in the fund itself,
rather than allowing the buying and selling of shares in the ETF directly
with the fund sponsor. ETFs trade in the open market, with prices set by
market demand. An ETF may trade at a premium or discount to the
value of the underlying assets. When a significant enough premium
appears, an arbitrageur will buy the underlying securities, convert them
to shares in the ETF, and sell them in the open market. When a discount
appears, an arbitrageur will do the reverse. In this way, the arbitrageur
makes a low-risk profit, while fulfilling a useful function in the ETF
marketplace by keeping ETF prices in line with their underlying value.
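A simplified sketch of the create/redeem decision described above is given
below; the cost threshold and the function name are illustrative assumptions,
not the rules of any actual fund, and real creation and redemption happens in
large unit blocks rather than single shares.

```python
# Simplified sketch of the ETF arbitrage decision described above.
# The cost threshold is an illustrative assumption.

def etf_arbitrage_action(etf_price: float, nav_per_share: float,
                         cost_per_share: float) -> str:
    """Decide the authorized participant's action, if any."""
    premium = etf_price - nav_per_share
    if premium > cost_per_share:
        # ETF rich: buy the underlying basket, create ETF shares, sell them.
        return "buy underlying, create ETF shares, sell ETF"
    if -premium > cost_per_share:
        # ETF cheap: buy ETF shares, redeem them for the basket, sell the basket.
        return "buy ETF, redeem for underlying, sell underlying"
    return "no trade"

print(etf_arbitrage_action(etf_price=101.00, nav_per_share=100.00,
                           cost_per_share=0.25))
```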
Some types of hedge funds make use of a modified form of arbitrage to
profit. Rather than exploiting price differences between identical assets, they
will purchase and sell securities, assets and derivatives with similar
characteristics, and hedge any significant differences between the two assets.
Any difference between the hedged positions represents any remaining risk
(such as basis risk) plus profit; the belief is that there remains some
difference which, even after hedging most risk, represents pure profit. For
example, a fund may see that there is a substantial difference between U.S.
dollar debt and local currency debt of a foreign country, and enter into a
series of matching trades (including currency swaps) to arbitrage the
difference, while simultaneously entering into credit default swaps to protect
against country risk and other types of specific risk.
Price convergence
Arbitrage has the effect of causing prices in different markets to converge.
As a result of arbitrage, the currency exchange rates, the price of
commodities, and the price of securities in different markets tend to
converge to the same prices, in all markets, in each category. The speed at
which prices converge is a measure of market efficiency. Arbitrage tends to
reduce price discrimination by encouraging people to buy an item where the
price is low and resell it where the price is high, as long as the buyers are not
prohibited from reselling and the transaction costs of buying, holding and
reselling are small relative to the difference in prices in the different
markets.
Arbitrage moves different currencies toward purchasing power parity. As an
example, assume that a car purchased in America is cheaper than the same
car in Canada. Canadians would buy their cars across the border to exploit
the arbitrage condition. At the same time, Americans would buy US cars,
transport them across the border, and sell them in Canada. Canadians would
have to buy American Dollars to buy the cars, and Americans would have to
sell the Canadian dollars they received in exchange for the exported cars.
Both actions would increase demand for US Dollars, and supply of Canadian
Dollars, and as a result, there would be an appreciation of the US Dollar.
Eventually, if unchecked, this would make US cars more expensive for all
buyers, and Canadian cars cheaper, until there is no longer an incentive to
buy cars in the US and sell them in Canada. More generally, international
arbitrage opportunities in commodities, goods, securities and currencies, on
a grand scale, tend to change exchange rates until the purchasing power is
equal.
In reality, of course, one must consider taxes and the costs of travelling back
and forth between the US and Canada. Also, the features built into the cars
sold in the US are not exactly the same as the features built into the cars for
sale in Canada, due, among other things, to the different emissions and other
auto regulations in the two countries. In addition, our example assumes that
no duties have to be paid on importing or exporting cars from the USA to
Canada. Similarly, most assets exhibit (small) differences between countries,
and transaction costs, taxes, and other costs provide an impediment to this
kind of arbitrage.

Risks
Arbitrage transactions in modern securities markets involve fairly low risks.
Generally it is impossible to close two or three transactions at the same
instant; therefore, there is the possibility that when one part of the deal is
closed, a quick shift in prices makes it impossible to close the other at a
profitable price. There is also counter-party risk that the other party to one of
the deals fails to deliver as agreed; though unlikely, this hazard is serious
because of the large quantities one must trade in order to make a profit on
small price differences. These risks become magnified when leverage or
borrowed money is used.
Another risk occurs if the items being bought and sold are not identical and
the arbitrage is conducted under the assumption that the prices of the items
are correlated or predictable. In the extreme case this is risk arbitrage,
described below. In comparison to the classical quick arbitrage transaction,
such an operation can produce disastrous losses.
In the 1980s, risk arbitrage was common. In this form of speculation, one
trades a security that is clearly undervalued or overvalued, when it is seen
that the wrong valuation is about to be corrected by events. The standard
example is the stock of a company, undervalued in the stock market, which
is about to be the object of a takeover bid; the price of the takeover will
more truly reflect the value of the company, giving a large profit to those
who bought at the current price, if the merger goes through as predicted.
Traditionally, arbitrage transactions in the securities markets involve high
speed and low risk. At some moment a price difference exists, and the
problem is to execute two or three balancing transactions while the
difference persists (that is, before the other arbitrageurs act). When the
transaction involves a delay of weeks or months, as above, it may entail
considerable risk if borrowed money is used to magnify the reward through
leverage. One way of reducing the risk is through the illegal use of inside
information, and in fact risk arbitrage with regard to leveraged buyouts was
associated with some of the famous financial scandals of the 1980s such as
those involving Michael Milken and Ivan Boesky.
Types of arbitrage
Merger arbitrage
Also called risk arbitrage, merger arbitrage generally consists of buying the
stock of a company that is the target of a takeover while shorting the stock of
the acquiring company.
Usually the market price of the target company is less than the price offered
by the acquiring company. The spread between these two prices depends
mainly on the probability and the timing of the takeover being completed as
well as the prevailing level of interest rates.
The bet in a merger arbitrage is that such a spread will eventually be zero, if
and when the takeover is completed. The risk is that the deal "breaks" and
the spread massively widens.
Convertible bond arbitrage
A convertible bond is a bond that an investor can return to the issuing
company in exchange for a predetermined number of shares in the company.

A convertible bond can be thought of as a corporate bond with a stock call
option attached to it. The price of a convertible bond is sensitive to three
major factors:
Interest rate. When rates move higher, the bond part of a convertible
bond tends to move lower, but the call option part of a convertible
bond moves higher (and the aggregate tends to move lower).
Stock price. When the price of the stock the bond is convertible into
moves higher, the price of the bond tends to rise.
Credit spread. If the creditworthiness of the issuer deteriorates (e.g.
rating downgrade) and its credit spread widens, the bond price tends
to move lower, but, in many cases, the call option part of the
convertible bond moves higher (since credit spread correlates with
volatility).
Given the complexity of the calculations involved and the convoluted
structure that a convertible bond can have, an arbitrageur often relies on
sophisticated quantitative models in order to identify bonds that are trading
cheap versus their theoretical value.
Convertible arbitrage consists of buying a convertible bond and hedging two
of the three factors in order to gain exposure to the third factor at a very
attractive price.
For instance an arbitrageur would first buy a convertible bond, then sell
fixed income securities or interest rate futures (to hedge the interest rate
exposure) and buy some credit protection (to hedge the risk of credit
deterioration). Eventually what he'd be left with is something similar to a
call option on the underlying stock, acquired at a very low price. He could
then make money either selling some of the more expensive options that are
openly traded in the market or delta hedging his exposure to the underlying
shares.
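A rough sketch of sizing those two hedges is shown below, assuming simple
linear sensitivities (dollar value of a basis point); the function name, the
sensitivity figures and the flat-DV01 treatment are illustrative assumptions,
not a real convertible pricing model.

```python
# Rough sketch of sizing the rate and credit hedges described above.
# All names and numbers are illustrative assumptions; dv01 values are
# dollar changes per one basis point move.

def hedge_sizes(pos_rate_dv01: float, future_rate_dv01: float,
                pos_credit_dv01: float, cds_credit_dv01_per_notional: float):
    """Return (futures contracts to sell, CDS notional to buy) that offset
    the convertible position's rate and credit-spread sensitivities."""
    futures_to_sell = pos_rate_dv01 / future_rate_dv01
    cds_notional = pos_credit_dv01 / cds_credit_dv01_per_notional
    return futures_to_sell, cds_notional

# Position loses $80 per bp of rates and $50 per bp of credit spread;
# one bond future moves ~$65 per bp; CDS protection gains ~$0.00045
# per bp per $1 of notional.
print(hedge_sizes(80.0, 65.0, 50.0, 0.00045))
```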
Depository receipts
A depository receipt is a security that is offered as a "tracking stock" on
another foreign market. For instance a Chinese company wishing to raise
more money may issue a depository receipt on the New York Stock
Exchange, as the amount of capital on the local exchanges is limited. These
securities, known as ADRs (American Depositary Receipt) or GDRs (Global
Depositary Receipt) depending on where they are issued, are typically
considered "foreign" and therefore trade at a lower value when first released.
However, they are exchangeable into the original security (known as
fungibility) and actually have the same value. In this case there is a spread
between the perceived value and real value, which can be extracted. Since
the ADR is trading at a value lower than what it is worth, one can purchase
the ADR and expect to make money as its value converges on the original.
However there is a chance that the original stock will fall in value too, so by
shorting it you can hedge that risk.
Regulatory arbitrage
Regulatory arbitrage is where a regulated institution takes advantage of the
difference between its real (or economic) risk and the regulatory position.
For example, if a bank, operating under the Basel I accord, has to hold 8%
capital against default risk, but the real risk of default is lower, it is
profitable to securitise the loan, removing the low risk loan from its
portfolio. On the other hand, if the real risk is higher than the regulatory risk
then it is profitable to make that loan and hold on to it, provided it is priced
appropriately.
This process can increase the overall riskiness of institutions under a risk
insensitive regulatory regime, as described by Alan Greenspan in his
October 1998 speech on The Role of Capital in Optimal Banking
Supervision and Regulation.
In economics, regulatory arbitrage (sometimes, tax arbitrage) may be used to
refer to situations when a company can choose a nominal place of business
with a regulatory, legal or tax regime with lower costs. For example, an
insurance company may choose to locate in Bermuda due to preferential tax
rates and policies for insurance companies. This can occur particularly
where the business transaction has no obvious physical location: in the case
of many financial products, it may be unclear "where" the transaction
occurs.
Arbitrage pricing theory
Arbitrage pricing theory (APT), in Finance, is a general theory of asset
pricing that has become influential in the pricing of shares.
APT holds that the expected return of a financial asset can be modeled as a
linear function of various macro-economic factors or theoretical market
indices, where sensitivity to changes in each factor is represented by a
factor-specific beta coefficient. The model-derived rate of return will then be
used to price the asset correctly - the asset price should equal the expected
end of period price discounted at the rate implied by model. If the price
diverges, arbitrage should bring it back into line.
The theory was initiated by the economist Stephen Ross in 1976.
The APT model
If APT holds, then a risky asset can be described as satisfying the following
relation:

E(r_j) = r_f + b_j1 RP_1 + b_j2 RP_2 + ... + b_jn RP_n
r_j = E(r_j) + b_j1 F_1 + b_j2 F_2 + ... + b_jn F_n + ε_j

where

E(r_j) is the risky asset's expected return,
RP_k is the risk premium of factor k,
r_f is the risk-free rate,
F_k is the macroeconomic factor,
b_jk is the sensitivity of the asset to factor k, also called the factor
loading,
and ε_j is the risky asset's idiosyncratic random shock with mean zero.

That is, the uncertain return of an asset j is a linear relationship among n
factors. Additionally, every factor is also considered to be a random variable
with mean zero.
Note that there are some assumptions and requirements that have to be
fulfilled for the latter to be correct: There must be perfect competition in the
market, and the total number of factors may never surpass the total number
of assets (in order to avoid the problem of matrix singularity).
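A minimal numeric sketch of the expected-return relation above; the loadings
and risk premia are made-up illustrative values.

```python
# Sketch of E(r_j) = r_f + sum_k b_jk * RP_k with illustrative numbers.
import numpy as np

r_f = 0.03                                   # risk-free rate
betas = np.array([1.2, 0.5, -0.3])           # b_jk, one loading per factor
risk_premia = np.array([0.04, 0.02, 0.01])   # RP_k, one premium per factor

expected_return = r_f + betas @ risk_premia
print(expected_return)  # 0.03 + 0.048 + 0.01 - 0.003 = 0.085
```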
Arbitrage and the APT
Arbitrage is the practice of taking advantage of a state of imbalance between
two (or possibly more) markets and thereby making a risk-free profit; see
Rational pricing.
Arbitrage in expectations
The APT describes the mechanism whereby arbitrage by investors will bring
an asset which is mispriced, according to the APT model, back into line with
its expected price. Note that under true arbitrage, the investor locks in a
guaranteed payoff, whereas under APT arbitrage as described below, the
investor locks in a positive expected payoff. The APT thus assumes
"arbitrage in expectations" - i.e. that arbitrage by investors will bring asset
prices back into line with the returns expected by the model portfolio theory.
Arbitrage mechanics
In the APT context, arbitrage consists of trading in two assets with at least
one being mispriced. The arbitrageur sells the asset which is relatively too
expensive and uses the proceeds to buy one which is relatively too cheap.
Under the APT, an asset is mispriced if its current price diverges from the
price predicted by the model. The asset price today should equal the sum of
all future cash flows discounted at the APT rate, where the expected return
of the asset is a linear function of various factors, and sensitivity to changes
in each factor is represented by a factor-specific beta coefficient.
A correctly priced asset here may be in fact a synthetic asset - a portfolio
consisting of other correctly priced assets. This portfolio has the same
exposure to each of the macroeconomic factors as the mispriced asset. The
arbitrageur creates the portfolio by identifying x correctly priced assets (one
per factor plus one) and then weighting the assets such that portfolio beta per
factor is the same as for the mispriced asset.
When the investor is long the asset and short the portfolio (or vice versa) he
has created a position which has a positive expected return (the difference
between asset return and portfolio return) and which has a net-zero exposure
to any macroeconomic factor and is therefore risk free (other than for firm
specific risk). The arbitrageur is thus in a position to make a risk free profit:
Where today's price is too low:
The implication is that at the end of the period the portfolio would
have appreciated at the rate implied by the APT, whereas the
mispriced asset would have appreciated at more than this rate. The
arbitrageur could therefore:
Today:
1 short sell the portfolio
2 buy the mispriced-asset with the proceeds.
At the end of the period:
1 sell the mispriced asset
2 use the proceeds to buy back the portfolio
3 pocket the difference.
Where today's price is too high:
The implication is that at the end of the period the portfolio would
have appreciated at the rate implied by the APT, whereas the
mispriced asset would have appreciated at less than this rate. The
arbitrageur could therefore:
Today:
1 short sell the mispriced-asset
2 buy the portfolio with the proceeds.
At the end of the period:
1 sell the portfolio
2 use the proceeds to buy back the mispriced-asset
3 pocket the difference.
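The construction of the synthetic portfolio can be sketched as a small linear
system: match the mispriced asset's beta on each factor, plus a budget
constraint so the weights sum to one (the budget constraint and all the
numbers below are illustrative assumptions).

```python
# Sketch: weights on n + 1 correctly priced assets that replicate the
# mispriced asset's factor betas. Betas here are made-up illustrative values.
import numpy as np

asset_betas = np.array([[1.0, 0.2],    # factor betas of asset 1
                        [0.3, 1.1],    # factor betas of asset 2
                        [0.8, 0.8]])   # factor betas of asset 3
mispriced_betas = np.array([0.9, 0.7])

# Solve: weights @ asset_betas = mispriced_betas, with weights summing to 1.
A = np.vstack([asset_betas.T, np.ones(3)])
b = np.append(mispriced_betas, 1.0)
weights = np.linalg.solve(A, b)

print(weights)                # portfolio weights
print(weights @ asset_betas)  # [0.9, 0.7], same exposures as the mispriced asset
```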
Relationship with the capital asset pricing model
The APT along with the capital asset pricing model (CAPM) is one of two
influential theories on asset pricing. The APT differs from the CAPM in that
it is less restrictive in its assumptions. It allows for an explanatory (as
opposed to statistical) model of asset returns. It assumes that each investor
will hold a unique portfolio with its own particular array of betas, as
opposed to the identical "market portfolio". In some ways, the CAPM can be
considered a "special case" of the APT in that the securities market line
represents a single-factor model of the asset price, where Beta is exposed to
changes in value of the Market.
Additionally, the APT can be seen as a "supply side" model, since its beta
coefficients reflect the sensitivity of the underlying asset to economic
factors. Thus, factor shocks would cause structural changes in the asset's
expected return, or in the case of stocks, in the firm's profitability.
On the other side, the capital asset pricing model is considered a "demand
side" model. Its results, although similar to those in the APT, arise from a
maximization problem of each investor's utility function, and from the
resulting market equilibrium (investors are considered to be the "consumers"
of the assets).

Using the APT


Identifying the factors
As with the CAPM, the factor-specific Betas are found via a linear
regression of historical security returns on the factor in question. Unlike the
CAPM, the APT, however, does not itself reveal the identity of its priced
factors - the number and nature of these factors is likely to change over time
and between economies. As a result, this issue is essentially empirical in
nature. Several a priori guidelines as to the characteristics required of
potential factors are, however, suggested:
their impact on asset prices manifests in their unexpected
movements
they should represent undiversifiable influences (these are, clearly,
more likely to be macroeconomic rather than firm-specific in
nature)
timely and accurate information on these variables is required
the relationship should be theoretically justifiable on economic
grounds
Chen, Roll and Ross identified the following macro-economic factors as
significant in explaining security returns:
surprises in inflation;
surprises in GNP as indicated by an industrial production index;
surprises in investor confidence due to changes in default premium
in corporate bonds;
surprise shifts in the yield curve.
As a practical matter, indices or spot or futures market prices may be used in
place of macro-economic factors, which are reported at low frequency (e.g.
monthly) and often with significant estimation errors. Market indices are
sometimes derived by means of factor analysis. More direct "indices" that
might be used are:
short term interest rates;
the difference in long-term and short term interest rates;
a diversified stock index such as the S&P 500 or NYSE Composite Index;
oil prices;
gold or other precious metal prices;
currency exchange rates.
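As noted at the start of this subsection, the factor-specific betas are
estimated by linear regression of historical returns on the factor series. The
sketch below uses randomly generated placeholder data purely to show the
mechanics.

```python
# Sketch of estimating factor loadings by ordinary least squares.
# The data are random placeholders standing in for historical observations.
import numpy as np

rng = np.random.default_rng(0)
T, n_factors = 120, 4                        # e.g. 120 months, 4 factors
factors = rng.normal(size=(T, n_factors))    # factor surprises
true_betas = np.array([1.1, -0.4, 0.6, 0.2])
returns = factors @ true_betas + rng.normal(scale=0.05, size=T)

X = np.column_stack([np.ones(T), factors])   # intercept + factor columns
coeffs, *_ = np.linalg.lstsq(X, returns, rcond=None)
print(coeffs[1:])                            # estimated factor-specific betas
```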

APT and asset management


Covered interest arbitrage
Covered interest arbitrage is the investment strategy where an investor
buys a financial instrument denominated in a foreign currency, and hedges
his foreign exchange risk by selling a forward contract in the amount of the
proceeds of the investment back into his base currency. The proceeds of the
investment are only known exactly if the financial instrument is risk-free
and only pays interest once, on the date of the forward sale of foreign
currency. Otherwise, some foreign exchange risk remains.
Similar trades using risky foreign currency bonds or even foreign stock may
be made, but the hedging trades may actually add risk to the transaction, e.g.
if the bond defaults the investor may lose on both the bond and the forward
contract.
Example
In this example the investor is based in the United States and assumes the
following prices and rates: spot USD/EUR = $1.2000, forward USD/EUR
for 1 year delivery = $1.2300, dollar interest rate = 4.0%, euro interest rate =
2.5%.
Exchange USD 1,200,000 into EUR 1,000,000
Buy EUR 1,000,000 worth of euro-denominated bonds
Sell EUR 1,025,000 via a 1 year forward contract, to receive USD
1,260,750, i.e. agree to exchange the euros back into US dollars in 1
year at today's forward price.
This set of transactions can be viewed as having an effective dollar
interest rate of (1,260,750/1,200,000)-1 = 5.1%
Alternatively, if the USD 1,200,000 were borrowed at 4%, USD
1,248,000 would be owed in 1 year, leaving an arbitrage profit of
1,260,750 - 1,248,000 = USD 12,750 in 1 year.
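The arithmetic of this example can be reproduced directly; the sketch below
simply restates the figures given above.

```python
# Reproducing the USD/EUR covered interest arbitrage arithmetic above.
spot = 1.2000          # USD per EUR
forward = 1.2300       # USD per EUR, one-year delivery
usd_rate, eur_rate = 0.040, 0.025

usd_start = 1_200_000
eur_at_maturity = (usd_start / spot) * (1 + eur_rate)   # EUR 1,025,000
usd_at_maturity = eur_at_maturity * forward             # USD 1,260,750

effective_usd_rate = usd_at_maturity / usd_start - 1    # about 5.1%
arbitrage_profit = usd_at_maturity - usd_start * (1 + usd_rate)
print(round(effective_usd_rate, 4), round(arbitrage_profit, 2))  # 0.0506 12750.0
```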
Models

Financial models such as interest rate parity and the cost of carry model
assume that no such arbitrage profits could exist in equilibrium, thus the
effective dollar interest rate of investing in any currency will equal the
effective dollar rate for any other currency, for risk-free instruments.
Efficient market hypothesis
In finance, the efficient market hypothesis (EMH) asserts that financial
markets are "informationally efficient", or that prices on traded assets, e.g.,
stocks, bonds, or property, already reflect all known information and
therefore are unbiased in the sense that they reflect the collective beliefs of
all investors about future prospects. Professor Eugene Fama at the
University of Chicago Graduate School of Business developed EMH as an
academic concept of study through his published Ph.D. thesis in the early
1960s at the same school.
The efficient market hypothesis states that it is not possible to consistently
outperform the market by using any information that the market already
knows, except through luck. Information or news in the EMH is defined as
anything that may affect prices that is unknowable in the present and thus
appears randomly in the future.
Assumptions
Beyond the normal utility maximizing agents, the efficient market
hypothesis requires that agents have rational expectations; that on average
the population is correct (even if no one person is) and whenever new
relevant information appears, the agents update their expectations
appropriately.
Note that it is not required that the agents be rational (which is different
from rational expectations; rational agents act coldly and achieve what they
set out to do). EMH allows that when faced with new information, some
investors may overreact and some may underreact. All that is required by the
EMH is that investors' reactions be random and follow a normal distribution
pattern so that the net effect on market prices cannot be reliably exploited to
make an abnormal profit, especially when considering transaction costs
(including commissions and spreads). Thus, any one person can be wrong
about the market (indeed, everyone can be), but the market as a whole is
always right.
There are three common forms in which the efficient market hypothesis is
stated: weak-form efficiency, semi-strong-form efficiency,
and strong-form efficiency, each of which has different implications for
how markets work.
Weak-form efficiency
No excess returns can be earned by using investment strategies
based on historical share prices or other financial data.
Weak-form efficiency implies that Technical analysis techniques
will not be able to consistently produce excess returns, though
some forms of fundamental analysis may still provide excess
returns.
In a weak-form efficient market current share prices are the best,
unbiased, estimate of the value of the security. Theoretical in
nature, weak form efficiency advocates assert that fundamental
analysis can be used to identify stocks that are undervalued and
overvalued. Therefore, keen investors looking for profitable
companies can earn profits by researching financial statements.
Semi-strong form efficiency
Share prices adjust within an arbitrarily small but finite amount of
time and in an unbiased fashion to publicly available new
information, so that no excess returns can be earned by trading on
that information.
Semi-strong form efficiency implies that Fundamental analysis
techniques will not be able to reliably produce excess returns.
To test for semi-strong form efficiency, the adjustments to
previously unknown news must be of a reasonable size and must
be instantaneous. To test for this, consistent upward or downward
adjustments after the initial change must be looked for. If there are
any such adjustments it would suggest that investors had
interpreted the information in a biased fashion and hence in an
inefficient manner.

Strong-form efficiency

Share prices reflect all information and no one can earn excess
returns.
If there are legal barriers to private information becoming public,
as with insider trading laws, strong-form efficiency is impossible,
except in the case where the laws are universally ignored. Studies
on the U.S. stock market have shown that people do trade on inside
information.[citation needed]
To test for strong form efficiency, a market needs to exist where
investors cannot consistently earn excess returns over a long period
of time. Even if some money managers are consistently observed
to beat the market, no refutation even of strong-form efficiency
follows: with tens of thousands of fund managers worldwide[citation needed],
even a normal distribution of returns (as efficiency predicts)
should be expected to produce a few dozen "star" performers.
Arguments concerning the validity of the hypothesis
Some observers dispute the notion that markets behave consistently with the
efficient market hypothesis, especially in its stronger forms. Some
economists, mathematicians and market practitioners cannot believe that
man-made markets are strong-form efficient when there are prima facie
reasons for inefficiency including the slow diffusion of information, the
relatively great power of some market participants (e.g. financial
institutions), and the existence of apparently sophisticated professional
investors. The way that markets react to surprising news is perhaps the most
visible flaw in the efficient market hypothesis. For example, news events
such as surprise interest rate changes from central banks are not
instantaneously taken account of in stock prices, but rather cause sustained
movement of prices over periods from hours to months.
Only a privileged few may have prior knowledge of laws about to be
enacted, new pricing controls set by pseudo-government agencies such as
the Federal Reserve banks, and judicial decisions that affect a wide range of
economic parties. The public must treat these as random variables, but actors
with such inside information can correct the market, though usually in a
discreet manner to avoid detection.
Another observed discrepancy between the theory and real markets is that at
market extremes what fundamentalists might consider irrational behaviour is
the norm: in the late stages of a bull market, the market is driven by buyers
who take little notice of underlying value. Towards the end of a crash,
markets go into free fall as participants extricate themselves from positions
regardless of the unusually good value that their positions represent. This is
indicated by the large differences in the valuation of stocks compared to
fundamentals (such as forward price to earnings ratios) in bull markets
compared to bear markets. A theorist might say that rational (and hence,
presumably, powerful) participants should always immediately take
advantage of the artificially high or artificially low prices caused by the
irrational participants by taking opposing positions, but this is observably
not, in general, enough to prevent bubbles and crashes developing. It may be
inferred that many rational participants are aware of the irrationality of the
market at extremes and are willing to allow irrational participants to drive
the market as far as they will, and only take advantage of the prices when
they have more than merely fundamental reasons that the market will return
towards fair value. Behavioural finance explains that when entering
positions market participants are not driven primarily by whether prices are
cheap or expensive, but by whether they expect them to rise or fall. To
ignore this can be hazardous: Alan Greenspan warned of "irrational
exuberance" in the markets in 1996, but some traders who sold short new
economy stocks that seemed to be greatly overpriced around this time had to
accept serious losses as prices reached even more extraordinary levels. As
John Maynard Keynes succinctly commented, "Markets can remain
irrational longer than you can remain solvent."[citation needed]
The efficient market hypothesis was introduced in the late 1960s. Prior to
that, the prevailing view was that markets were inefficient. Inefficiency was
commonly believed to exist e.g. in the United States and United Kingdom
stock markets. However, earlier work by Kendall (1953) suggested that
changes in UK stock market prices were random. Later work by Brealey and
Dryden, and also by Cunningham found that there were no significant
dependences in price changes suggesting that the UK stock market was
weak-form efficient.
Further to this evidence that the UK stock market is weak form efficient,
other studies of capital markets have pointed toward them being semi
strong-form efficient. Studies by Firth (1976, 1979 and 1980) in the United
Kingdom have compared the share prices existing after a takeover
announcement with the bid offer. Firth found that the share prices were fully
and instantaneously adjusted to their correct levels, thus concluding that the
UK stock market was semi-strong-form efficient. The market's ability to
efficiently respond to a short-term and widely publicized event such as a
takeover announcement cannot, however, necessarily be taken as indicative of a
market efficient at pricing more long-term and amorphous factors.
Other empirical evidence in support of the EMH comes from studies
showing that the return of market averages exceeds the return of actively
managed mutual funds. Thus, to the extent that markets are inefficient, the
benefits realized by seizing upon the inefficiencies are outweighed by the
internal fund costs involved in finding them, acting upon them, advertising
etc. These findings gave inspiration to the formation of passively managed
index funds.[1]
It may be that professional and other market participants who have
discovered reliable trading rules or stratagems see no reason to divulge them
to academic researchers. It might be that there is an information gap between
the academics who study the markets and the professionals who work in
them. Some observers point to seemingly inefficient features of the markets
that can be exploited, e.g. seasonal tendencies and divergent returns to assets
with various characteristics. E.g. factor analysis and studies of returns to
different types of investment strategies suggest that some types of stocks
may outperform the market long-term (e.g. in the UK, the USA and Japan).
Skeptics of EMH argue that there exists a small number of investors who
have outperformed the market over long periods of time, in a way which is
difficult to attribute to luck, including Peter Lynch, Warren Buffett, George
Soros, and Bill Miller. These investors' strategies are to a large extent based
on identifying markets where prices do not accurately reflect the available
information, in direct contradiction to the efficient market hypothesis which
explicitly implies that no such opportunities exist. Among the skeptics is
Warren Buffett who has argued that the EMH is not correct, on one occasion
wryly saying "I'd be a bum on the street with a tin cup if the markets were
always efficient"[citation needed] and on another saying "The professors who
taught Efficient Market Theory said that someone throwing darts at the stock
tables could select stock portfolio having prospects just as good as one
selected by the brightest, most hard-working securities analyst. Observing
correctly that the market was frequently efficient, they went on to conclude
incorrectly that it was always efficient."[citation needed] Adherents to a stronger
form of the EMH argue that the hypothesis does not preclude - indeed it
predicts - the existence of unusually successful investors or funds occurring

through chance. They also argue that presentation of anecdotal evidence of
star performers to cast doubt on the hypothesis is rife with survivorship bias.
However, importantly, in 1962 Warren Buffett wrote: "I present this data to
indicate the Dow as an investment competitor is no pushover, and the great
bulk of investment funds in the country are going to have difficulty in
bettering, or... even matching, its performance. Our portfolio is very
different from that of the Dow. Our method of operation is substantially
different from that of mutual funds." [2]
The EMH and popular culture
Despite the best efforts of EMH proponents such as Burton Malkiel, whose
book A Random Walk Down Wall Street (ISBN 0-393-32535-0) achieved
best-seller status, the EMH has not caught the public's imagination. Popular
books and articles promoting various forms of stock-picking, such as the
books by popular CNBC commentator Jim Cramer and former Fidelity
Investments fund manager Peter Lynch, have continued to press the more
appealing notion that investors can "beat the market." The theme was further
explored in the recent The Little Book That Beats The Market (ISBN 0-471-73306-7) by Joel Greenblatt.
One notable exception to this trend is the recent book Wall Street Versus
America (ISBN 1-59184-094-5), by investigative journalist Gary Weiss. In
this caustic attack on Wall Street practices, Weiss argues in favor of the
EMH and against stock-picking as an investor self-defense mechanism.
EMH is commonly rejected by the general public due to a misconception
concerning its meaning. Many believe that EMH says that a security's price
is a correct representation of the value of that business, as calculated by what
the business's future returns will actually be. In other words, they believe
that EMH says a stock's price correctly predicts the underlying company's
future results. Since stock prices clearly do not reflect company future
results in many cases, many people reject EMH as clearly wrong.
However, EMH makes no such statement. Rather, it says that a stock's price
represents an aggregation of the probabilities of all future outcomes for the
company, based on the best information available at the time. Whether that
information turns out to have been correct is not something required by
EMH. Put another way, EMH does not require a stock's price to reflect a
company's future performance, just the best possible estimate of that
performance that can be made with publicly available information. That
estimate may still be grossly wrong without violating EMH.
An alternative theory: Behavioral Finance
Opponents of the EMH sometimes cite examples of market movements that
seem inexplicable in terms of conventional theories of stock price
determination, for example the stock market crash of October 1987 where
most stock exchanges crashed at the same time. It is virtually impossible to
explain the scale of those market falls by reference to any news event at the
time. The explanation may lie either in the mechanics of the exchanges (e.g.
no safety nets to discontinue trading initiated by program sellers) or the
peculiarities of human nature.
Behavioural psychology approaches to stock market trading are among some
of the more promising alternatives to EMH (and some investment strategies
seek to exploit exactly such inefficiencies). A growing field of research
called behavioral finance studies how cognitive or emotional biases, which
are individual or collective, create anomalies in market prices and returns
that may be inexplicable via EMH alone. However, how and if individual
biases manifest inefficiencies in market-wide prices is still an open question.
Indeed, the Nobel Laureate co-founder of the programme - Daniel
Kahneman - announced his skepticism of resultant inefficiencies: "They're
[investors] just not going to do it [beat the market]. It's just not going to
happen."[3]
Ironically, the behavioural finance programme can also be used to
tangentially support the EMH - or rather it can explain the skepticism drawn
by EMH - in that it helps to explain the human tendency to find and exploit
patterns in data even where none exist. Some relevant examples of the
Cognitive biases highlighted by the programme are: the Hindsight Bias; the
Clustering illusion; the Overconfidence effect; the Observer-expectancy
effect; the Gambler's fallacy; and the Illusion of control.
Immunization (finance)
In finance, interest rate immunization is a strategy that ensures that a
change in interest rates will not affect the value of a portfolio. Similarly,
immunization can be used to insure that the value of a pension fund's or a
firm's assets will increase or decrease in exactly the opposite amount of their
liabilities, thus leaving the value of the pension fund's surplus or firm's
equity unchanged, regardless of changes in the interest rate.
Interest rate immunization can be accomplished by several methods,
including cash flow matching, duration matching, and volatility and
convexity matching. It can also be accomplished by trading in bond
forwards, futures, or options.
Other types of financial risks, such as foreign exchange risk or stock market
risk, can be immunized using similar strategies. If the immunization is
incomplete, these strategies are usually called hedging. If the immunization
is complete, these strategies are usually called arbitrage.
Cash flow matching
Conceptually, the easiest form of immunization is cash flow matching. For
example, if a financial company is obliged to pay 100 dollars to someone in
10 years, then it can protect itself by buying and holding a 10 year zero
coupon bond that matures in 10 years and has a redemption value of $100.
Thus the firm's expected cash inflows exactly match its expected cash
outflows, and a change in interest rates will not affect the firm's ability to
pay its obligations. Nevertheless, a firm with many expected cash flows can
find that cash flow matching is difficult or expensive to achieve in practice.
Volatility matching
A more practical alternative immunization method is duration matching.
Here the duration of the assets, or first derivative of the asset's price function
with respect to the interest rate, is matched with the duration of the
liabilities. To make the match more accurate, the convexity, or second
derivative, of the assets and liabilities can also be matched.
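A small sketch of the duration-matching idea, using two bonds and a single
liability; the durations are illustrative and convexity matching is ignored.

```python
# Sketch of duration matching: split assets between a short- and a
# long-duration bond so the portfolio duration equals the liability duration.

def duration_match(dur_short: float, dur_long: float,
                   dur_liability: float) -> tuple[float, float]:
    """Return weights on the short- and long-duration bonds (summing to one)."""
    w_long = (dur_liability - dur_short) / (dur_long - dur_short)
    return 1.0 - w_long, w_long

w_short, w_long = duration_match(dur_short=2.0, dur_long=10.0, dur_liability=7.0)
print(w_short, w_long)   # 0.375 in the 2-year bond, 0.625 in the 10-year bond
```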
Immunization in practice
Immunization can be done in a portfolio of a single asset type, such as
government bonds, by creating long and short positions along the yield
curve. It is usually possible to immunize a portfolio against the risk factors
that are most prevalent. A principal component analysis of changes along the
U.S. Government Treasury yield curve reveals that more than 90% of the
yield curve shifts are parallel shifts, followed by a smaller percentage of
slope shifts, and a very small percentage of curvature shifts. Using that
knowledge, an immunized portfolio can be created by creating long
positions with durations at the long and short end of the curve, and a
matching short position with a duration in the middle of the curve. These
positions protect against parallel shifts and slope changes, in exchange for
exposure to curvature changes.
Difficulties
Immunization, if possible and complete, can protect against term mismatch
but not against other kinds of financial risk such as default by the borrower
(of a bond).
Users of this technique include banks, insurance companies, pension funds,
and bond brokers.
The disadvantage associated with duration matching is that it assumes the
durations of assets and liabilities remain unchanged, which is not true.
Interest rate parity
The interest rate parity is the basic identity that relates interest rates and
exchange rates. The identity is theoretical, and usually follows from
assumptions imposed in economics models. There is evidence that supports
as well as rejects interest rate parity.
Interest rate parity is an arbitrage condition, which says that the returns from
borrowing in one currency, exchanging that currency for another currency
and investing in interest-bearing instruments of the second currency, while
simultaneously purchasing futures contracts to convert the currency back at
the end of the investment period should be equal to the returns from
purchasing and holding similar interest-bearing instruments of the first
currency. If the returns are different, investors could theoretically arbitrage
and make risk-free returns.
Looked at differently, interest rate parity says that the spot and future prices
for currency trades incorporate any interest rate differentials between the
two currencies.
Two versions of the identity are commonly presented in academic literature:
covered interest rate parity and uncovered interest rate parity.
Covered interest rate parity
The basic covered interest parity (also called interest parity condition) is

(1 + i_$) = (F / S) (1 + i_c)

where

i_$ is the domestic interest rate, i_c is the interest rate in the foreign country,
F is the forward exchange rate between domestic currency ($) and foreign
currency (c), i.e. $/c, and
S is the spot exchange rate between domestic currency ($) and foreign currency
(c), i.e. $/c.
The covered interest parity states that the interest rate difference between
two countries' currencies is equal to the percentage difference between the
forward exchange rate and the spot exchange rate. The parity condition
assumes that financial assets are perfectly mobile and similarly risky. If the
parity condition does not hold, there exists an arbitrage opportunity. (see
covered interest arbitrage and an example below).
Another way to express the interest rate parity is:

F = S (1 + i_$) / (1 + i_c)

A more approximate version is sometimes given, although it is less correct
for countries with highly volatile exchange rates:

F ≈ S (1 + i_$ - i_c)
An implication of this equation is that when the domestic interest rate is
lower than the foreign interest rate, the forward price of the foreign currency
will be below the spot price. Conversely, if the domestic interest rate is
above the foreign interest rate, then the forward price of the foreign currency
will be above the spot price.
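The relation can be sketched as a check on quoted forwards: the parity-implied
forward is F = S (1 + i_$) / (1 + i_c), and a quoted forward away from this
value signals a covered-interest arbitrage. The rates below reuse the USD/EUR
figures from the earlier covered interest arbitrage example.

```python
# Sketch: forward implied by covered interest parity versus a quoted forward.
def implied_forward(spot: float, i_domestic: float, i_foreign: float) -> float:
    return spot * (1 + i_domestic) / (1 + i_foreign)

spot, quoted_forward = 1.20, 1.23      # USD per EUR, as in the earlier example
parity_forward = implied_forward(spot, i_domestic=0.04, i_foreign=0.025)
print(round(parity_forward, 4))        # about 1.2176
# The quoted 1.23 forward exceeds the parity value, which is exactly the gap
# the earlier USD/EUR arbitrage exploited.
```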
Covered interest arbitrage example
In short, assume that

(1 + i_$) < (F / S) (1 + i_c).

This would imply that one dollar invested in the US < one dollar converted
into a foreign currency and invested abroad. Such an imbalance would give
rise to an arbitrage opportunity, wherein one could borrow at the lower
effective interest rate in the US, convert to the foreign currency and invest
abroad.
The following is a rudimentary example to understand covered interest rate
arbitrage (CIA).
Consider the interest rate parity (IRP) equation,

(1 + i_$) = (F / S) (1 + i_c)
Assume,
the 12-month interest rate in the US is 5% per annum
the 12-month interest rate in the UK is 8% per annum
the current spot exchange rate is 1.5 $/£
the current forward exchange rate is 1.5 $/£
From the given conditions it is clear that UK has a higher interest rate than
the US. Thus the basic idea of covered interest arbitrage is to borrow in the
country with lower interest rate and invest in the country with higher interest
rate. All else being equal, this would yield a riskless profit. Thus,

Per the LHS of the interest rate parity equation above, a dollar
invested in the US at the end of the 12-month period will be

$1 × (1 + 5%) = $1.05

Per the RHS of the interest rate parity equation above, a dollar
invested in the UK (after conversion into £ and back into $ at
the end of 12 months) will be

$1 × (1.5/1.5) × (1 + 8%) = $1.08
Thus, one could carry out a covered interest arbitrage (CIA) as follows:
1. Borrow $1 from the US bank at a 5% interest rate.
2. Convert the $ into £ at the current spot rate of 1.5 $/£, giving £0.67.
3. Invest the £0.67 in the UK for the 12-month period.
4. Purchase a forward contract on the 1.5 $/£ rate (i.e. cover your position
against exchange rate fluctuations).

At the end of 12 months:
1. £0.67 becomes £0.67 × (1 + 8%) = £0.72
2. Convert the £0.72 back to $ at 1.5 $/£, giving $1.08
3. Pay off the initially borrowed amount of $1 to the US bank with 5%
interest, i.e. $1.05

Making an arbitrage profit of $1.08 - $1.05 = $0.03, or 3 cents per
dollar.

Obviously, any such arbitrage opportunities in the market will close out
almost immediately.
In the above example, any one or combination of the following may occur to
re-establish the equilibrium of the IRP to close out the arbitrage opportunity,
US interest rates will go up
Forward exchange rates will go down
Spot exchange rates will go up
UK interest rates will go down
Uncovered interest rate parity
The uncovered interest rate parity postulates that

(1 + i_$) = (E_t[S_t+1] / S_t) (1 + i_c)
The equality assumes that the risk premium is zero, which is the case if
investors are risk-neutral. If investors are not risk-neutral then the forward
rate (F_t,t+1) can differ from the expected future spot rate (E_t[S_t+1]), and
covered and uncovered interest rate parities cannot both hold.
The uncovered parity is not directly testable in the absence of market
expectations of future exchange rates. Moreover, the above rather simple
demonstration assumes no transaction cost, equal default risk over foreign
and domestic currency denominated assets, perfect capital flow and no
simultaneity induced by monetary authorities. Note also that it is possible to
construct the UIP condition in real terms, which is more plausible.
Uncovered interest parity example
An example for the uncovered interest parity condition: Consider an initial
situation, where interest rates in the US (home country) and a foreign
country (e.g. Japan) are equal. Except for exchange rate risk, investing in the
US or Japan would yield the same return. If the dollar depreciates against the
yen, an investment in Japan would become more profitable than a US
investment - in other words, for the same amount of yen, more dollars can be
purchased. By investing in Japan and converting back to the dollar at the
favorable exchange rate, the return from the investment in Japan, in the
dollar term, is higher than the return from the direct investment in the US. In
order to persuade an investor to invest in the US nonetheless, the dollar
interest rate would have to be higher than the yen interest rate by an amount
equal to the devaluation (a 20% depreciation of the dollar implies a 20% rise
in the dollar interest rate).
Note: Technically, a 20% depreciation in the dollar only results in an
approximate rise of 20% in U.S. interest rates. The exact form is as follows:
Change in spot rate (Yen/Dollar) equals the dollar interest rate minus the yen
interest rate, with this expression being divided by one plus the yen interest
rate.
Uncovered vs. covered interest parity example
Let's assume you wanted to pay for something in Yen in a month's time.
There are two ways to do this.
(a) Buy Yen forward 30 days to lock in the exchange rate. Then you may
invest in dollars for 30 days until you must convert dollars to Yen in a
month. This is called covering because you now have covered
yourself and have no exchange rate risk.
(b) Convert spot to Yen today. Invest in a Japanese bond (in Yen) for 30
days (or otherwise loan Yen for 30 days) then pay your Yen
obligation. Under this model, you are sure of the interest you will
earn, so you may convert fewer dollars to Yen today, since the Yen
will grow via interest. Notice how you have still covered your
exchange risk, because you have simply converted to Yen
immediately.
(c) You could also invest the money in dollars and change it for Yen in
a month.

According to the interest rate parity, you should get the same number of Yen
in all methods. Methods (a) and (b) are covered while (c) is uncovered.
In method (a) the higher (lower) interest rate in the US is offset by the
forward discount (premium).
In method (b) The higher (lower) interest rate in Japan is offset by the
loss (gain) from converting spot instead of using a forward.
Method (c) is uncovered, however, according to interest rate parity, the
spot exchange rate in 30 days should become the same as the 30 day
forward rate. Obviously there is exchange risk because you must see if
this actually happens.
General Rules: If the forward rate is lower than what the interest rate parity
indicates, the appropriate strategy would be: borrow Yen, convert to dollars
at the spot rate, and lend dollars.
If the forward rate is higher than what interest rate parity indicates, the
appropriate strategy would be: borrow dollars, convert to Yen at the spot
rate, and lend the Yen.
Cost of carry model
A slightly more general model, used to find the forward price of any
commodity, is called the cost of carry model. Using continuously
compounded interest rates, the model is:

F = S e^((r + s - c) t)

where F is the forward price, S is the spot price, e is the base of the natural
logarithms, r is the risk free interest rate, s is the storage cost, c is the
convenience yield, and t is the time to delivery of the forward contract
(expressed as a fraction of 1 year).
For currencies there is no storage cost, and c is interpreted as the foreign
interest rate. The currency prices should be quoted as domestic units per
foreign units.
If the currencies are freely tradeable and there are minimal transaction costs,
then a profitable arbitrage is possible if the equation doesn't hold. If the
forward price is too high, the arbitrageur sells the forward currency, buys the
spot currency and lends it for time period t, and then uses the loan proceeds
to deliver on the forward contract. To complete the arbitrage, the home
currency is borrowed in the amount needed to buy the spot foreign currency,
and paid off with the home currency proceeds of forward contract.
Similarly, if the forward price is too low, the arbitrageur buys the forward
currency, borrows the foreign currency for time period t and sells the foreign
currency spot. The proceeds of the forward contract are used to pay off the
loan. To complete the arbitrage, the home currency from the spot transaction
is lent and the proceeds used to pay for the forward contract.
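A minimal sketch of the relation, using the formula reconstructed above with
illustrative inputs; for a currency the storage cost is zero and the
convenience yield is read as the foreign interest rate.

```python
# Sketch of the cost of carry forward price F = S * exp((r + s - c) * t).
import math

def cost_of_carry_forward(spot: float, r: float, storage: float,
                          convenience: float, t: float) -> float:
    return spot * math.exp((r + storage - convenience) * t)

# Currency case: no storage cost, convenience yield = foreign interest rate.
print(cost_of_carry_forward(spot=1.20, r=0.04, storage=0.0,
                            convenience=0.025, t=1.0))   # about 1.218
```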
Political arbitrage
A trading strategy which involves using knowledge or estimates of future
political activity to forecast and discount security values. For example, the
major factor in the values of some foreign government bonds is the risk of
default, which is a political decision taken by the country's government. The
values of companies in war-sensitive sectors such as oil and arms are
affected by political decisions to make war. In the UK, the government's
decision to commission new housing to the east of London is likely to affect
housebuilder company values.[citation needed]
Legal trading must be based on publicly available information. However
there is a grey area involving lobbyists and market rumours. Like insider
trading there is scope for conflicts of interest when political decision makers
themselves are in positions to profit from private investments whose values
are linked to their own public political actions.
TANSTAAFL
TANSTAAFL is an acronym for the adage "There Ain't No Such Thing As
A Free Lunch," popularized by science fiction writer Robert A. Heinlein in
his 1966 novel The Moon Is a Harsh Mistress, which discusses the problems
caused by not considering the eventual outcome of an unbalanced economy.
This phrase and book are popular with libertarians and economics textbooks.
In order to avoid a double negative, the acronym "TINSTAAFL" is
sometimes used instead, meaning "There Is No Such Thing As A Free
Lunch".
The phrase refers to the once-common tradition of saloons in the United
States providing a "free" lunch to patrons, who were required to buy at least
one drink.[citation needed]
Details
TANSTAAFL means that a person or a society cannot get something for
nothing. Even if something appears to be free, there is always a cost to the
person or to society as a whole even though that cost may be hidden or
distributed. [1] For example, you may get complimentary food at a bar during
"happy hour," but the bar owner bears the expense of your meal and will
attempt to recover that expense somehow. Some goods may be nearly free,

such as fruit picked in the wilderness, but usually some cost such as labor is
incurred.
The idea that there is no free lunch at the societal level applies only when all
resources are being used completely and appropriately, i.e., when economic
efficiency prevails. If one individual or group gets something at no cost,
somebody else ends up paying for it. If there appears to be no direct cost to
any single individual, there is a social cost. Similarly, someone can benefit
for "free" from an externality or from a public good, but someone has to pay
the cost of producing these benefits.
To a scientist, TANSTAAFL means that the system is ultimately closed:
there is no magic source of matter, energy, light, or indeed lunch, that cannot
be eventually exhausted. Therefore the TANSTAAFL argument may also be
applied to natural physical processes.
In mathematical finance, the term is also used as an informal synonym for
the principle of no-arbitrage. This principle states that a combination of
securities that has the same cash flows as another security must have the
same net price.
TANSTAAFL is sometimes used as a response to claims of the virtues of
free software. Supporters of free software often counter that the use of the
term "free" in this context is primarily a reference to a lack of constraint
rather than a lack of cost.
TANSTAAFL is the name of a snack bar in the Pierce dormitory of the
University of Chicago. The name references the fact that the use of the term
was popularized by Milton Friedman, the Nobel Prize-winning former
University of Chicago professor.
Citations

In 1950, a New York Times columnist ascribed the phrase to
economist (and Army General) Leonard P. Ayres of the Cleveland
Trust Company. "It seems that shortly before the General's death [in
1946]... a group of reporters approached the general with the request
that perhaps he might give them one of several immutable economic
truisms which he had gathered from his long years of economic
study... 'It is an immutable economic fact,' said the general, 'that there
is no such thing as a free lunch.'"[2]

"Oh, 'tanstaafl'. Means 'There ain't no such thing as a free lunch.' And
isn't," I added, pointing to a FREE LUNCH sign across room, "or
these drinks would cost half as much. Was reminding her that
anything free costs twice as much in the long run or turns out
worthless."
o Manuel in The Moon Is a Harsh Mistress (1966), chapter 11, p.
162, by Robert A. Heinlein[3]
"There's no such thing as a free lunch."
o popularized by economist Milton Friedman[4];
o Contrary to rumor, New York Mayor Fiorello LaGuardia did not
say it in Latin in 1934; what he really said, in Italian, was "No
more free lunch" (current references: linguistlist and a speech
by George H. W. Bush; more references needed).
The book TANSTAAFL, the economic strategy for environmental
crisis, by Edwin G. Dolan (Holt, Rinehart and Winston, 1971, ISBN
0-03-086315-5) may be the first published use of the term in the
economics literature.
Malcolm Fraser, prime minister of Australia, was a fond user of this
phrase [citation needed].
Spider Robinson's 2001 book 'The Free Lunch' draws its name from
the TANSTAAFL concept.

Triangle arbitrage
Triangle arbitrage (also known as triangular arbitrage) refers to taking
advantage of a state of imbalance between three markets: combinations of
matching deals are struck that exploit the imbalance, the profit being the
difference between the market prices.
Triangular arbitrage offers a risk-free profit (in theory), so opportunities for
triangular arbitrage usually disappear quickly, as many people are looking
for them.
Example
Suppose the exchange rate between:

the Canadian Dollar (CD$) and the US dollar (US$) is
CD$1.13/US$1.00 in Canada (1 USD gets you CD$1.13)
the Australian Dollar (AU$) and the US dollar (US$) is
AU$1.33/US$1.00 in Australia (1 USD gets you AU$1.33)
the Australian Dollar (AU$) and the Canadian Dollar (CD$) is
AU$1.18/CD$1.00 (1 CD gets you AU$1.18)

Assuming that a US investor has US$10,000 to invest, he will:

1st) Buy Canadian Dollars with his US Dollars: US$10,000 *
(CD$1.13/US$1.00) = CD$11,300
2nd) Buy Australian Dollars with his Canadian Dollars: CD$11,300 *
(AU$1.18/CD$1.00) = AU$13,334
3rd) Buy US Dollars with his Australian Dollars: AU$13,334 /
(AU$1.33/US$1.00) = US$10,025
4th) Profit: approximately US$25, risk free
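The round trip can be verified with a short calculation. The following is a
minimal Python sketch using the illustrative rates above (the function and
parameter names are ours); with fees and bid/ask spreads ignored, it
reproduces the roughly US$25 profit:

def triangular_arbitrage(start_usd, cad_per_usd, aud_per_cad, aud_per_usd):
    cad = start_usd * cad_per_usd   # 1st) buy Canadian Dollars with US Dollars
    aud = cad * aud_per_cad         # 2nd) buy Australian Dollars with Canadian Dollars
    usd = aud / aud_per_usd         # 3rd) buy US Dollars back with Australian Dollars
    return usd - start_usd          # 4th) profit, if positive

profit = triangular_arbitrage(10_000, cad_per_usd=1.13,
                              aud_per_cad=1.18, aud_per_usd=1.33)
print(f"Arbitrage profit: US${profit:,.2f}")   # approximately US$25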

Uncovered interest arbitrage


Uncovered interest arbitrage is a form of arbitrage where funds are
transferred abroad to take advantage of higher interest in foreign monetary
centers. It involves the conversion of the domestic currency to the foreign
currency to make the investment, and the subsequent re-conversion of the funds from
the foreign currency to the domestic currency at the time of maturity. A
foreign exchange risk is involved due to the possible depreciation of the
foreign currency during the period of the investment.
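As a hedged numerical sketch (all figures below are hypothetical), the round
trip can be written as follows; note that the exchange rate at maturity is
not known in advance, which is precisely the risk described:

def uncovered_carry(domestic_amount, spot, foreign_rate, rate_at_maturity):
    # spot and rate_at_maturity are quoted as foreign units per domestic unit
    foreign = domestic_amount * spot                  # convert to the foreign currency
    foreign_matured = foreign * (1 + foreign_rate)    # invest at the higher foreign rate
    return foreign_matured / rate_at_maturity         # re-convert at maturity

# A 6% foreign rate is eroded if the foreign currency depreciates
# from 1.30 to 1.36 per unit of domestic currency over the period.
print(uncovered_carry(100_000, spot=1.30, foreign_rate=0.06,
                      rate_at_maturity=1.36))         # about 101,323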
Value investing
Value investing is a style of investment strategy from the so-called "Graham
& Dodd" School. Followers of this style, known as value investors,
generally buy companies whose shares appear underpriced by some forms of
fundamental analysis; these may include shares that are trading at, for
example, high dividend yields or low price-to-earnings or price-to-book
ratios.
The main proponents of value investing, such as Benjamin Graham and
Warren Buffett, have argued that the essence of value investing is buying
stocks at less than their intrinsic value.[1] The discount of the market price to
the intrinsic value is what Benjamin Graham called the "margin of safety".
The intrinsic value is the discounted value of all future distributions.
However, the future distributions and the appropriate discount rate can only
be assumptions. Warren Buffett has taken the value investing concept even
further: as his thinking has evolved, his focus for the last 25 years or so
has been on "finding an outstanding company at a sensible price" rather than
generic companies at a bargain price. This distinction matters because, in
buying a share, you are actually buying into a business.
History
Value investing was established by Benjamin Graham and David Dodd, both
professors at Columbia University and teachers of many famous investors.
In Graham's book The Intelligent Investor, he advocated the important
concept of margin of safety first introduced in Security Analysis, a 1934
book he co-authored with David Dodd, which calls for a cautious approach
to investing. In terms of picking stocks, he recommended defensive
investment in stocks trading below their tangible book value as a safeguard
against adverse future developments often encountered in the stock market.
Further evolution
However, the concept of value (as well as "book value") has evolved
significantly since the 1970s. Book value is most useful in industries where
most assets are tangible. Intangible assets such as patents, software, brands,
or goodwill are difficult to quantify, and may not survive the break-up of a
company. When an industry is going through fast technological
advancements, the value of its assets is not easily estimated. Sometimes, the
production power of an asset can be significantly reduced due to competitive
disruptive innovation and therefore its value can suffer permanent
impairment. One good example of decreasing asset value is a personal
computer. An example of where book value does not mean much is the
service and retail sectors. One modern model of calculating value is the
discounted cash flow model (DCF). The value of an asset is the sum of its
future cash flows, discounted back to the present.
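A minimal DCF sketch in Python, with purely illustrative cash flows and
discount rate, reads:

def discounted_cash_flow(cash_flows, discount_rate):
    # Sum of future cash flows, each discounted back to the present.
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# Five years of forecast free cash flow, with a terminal value in year 5.
flows = [100, 110, 120, 130, 140 + 2_000]
print(round(discounted_cash_flow(flows, 0.10), 2))   # about 1689.5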
Value Investing Performance
Performance, value strategies
Value investing has proven to be a successful investment strategy. There are
several ways to evaluate its success. One way is to examine the performance
of simple value strategies, such as buying low PE ratio stocks, low
price-to-cash-flow ratio stocks, or low price-to-book ratio stocks. Numerous
academics have published studies investigating the effects of buying value
stocks. These studies have consistently found that value stocks outperform
growth stocks and the market as a whole.[2][3][4]
Performance, value investors

Another way to examine the performance of value investing strategies is to
examine the investing performance of well-known value investors. Simply
examining the performance of the best known value investors would not be
instructive, because investors do not become well known unless they are
successful. This introduces a selection bias. A better way to investigate the
performance of a group of value investors was suggested by Warren Buffett,
in his May 17, 1984 speech that was published as The Superinvestors of
Graham-and-Doddsville. In this speech, Buffett examined the performance
of those investors who worked at Graham-Newman Corporation and were
thus most influenced by Benjamin Graham. Buffett's conclusion is identical
to that of the academic research on simple value investing strategies--value
investing is, on average, successful in the long run.
During about a 25-year period (1965-90), published research and articles in
leading journals of the value ilk were few. Warren Buffett once commented,
"You couldn't advance in a finance department in this country unless you
thought that the world was flat."[5]
Well Known Value Investors
Benjamin Graham is regarded by many to be the father of value investing.
Along with David Dodd, he wrote Security Analysis, first published in 1934.
The most lasting contribution of this book to the field of security analysis
was to emphasize the quantifiable aspects of security analysis (such as the
evaluations of earnings and book value) while minimizing the importance of
more qualitative factors such as the quality of a company's management.
Graham later wrote The Intelligent Investor, a book that brought value
investing to individual investors. Aside from Buffett, many of Graham's
other students, such as William J. Ruane, Irving Kahn and Charles Brandes
have gone on to become successful investors in their own right.
Graham's most famous student, however, was Warren Buffett, who ran
successful investing partnerships before closing them in 1969 to focus on
running Berkshire Hathaway. Charlie Munger joined Buffett at Berkshire
Hathaway in the 1970s and has since worked as Vice Chairman of the
company. Buffett has credited Munger with encouraging him to focus on
long-term sustainable growth rather than on simply the valuation of current
cash flows or assets.[6]

Another famous value investor is John Templeton. He first achieved investing
success by buying shares of a number of companies in the aftermath of the
stock market crash of 1929.
Martin J. Whitman is another well-regarded value investor. His approach is
called safe-and-cheap, which was formerly referred to as the
financial-integrity approach. Martin Whitman focuses on acquiring common
shares of companies with an extremely strong financial position at a price
reflecting a meaningful discount to the estimated NAV of the company
concerned. Martin Whitman believes it is ill-advised for investors to pay
much attention to the trend of macro-factors (like employment, movement of
interest rates, GDP, etc.), not so much because they are not important as
because attempts to predict their movement are almost always futile. Martin
Whitman's letters to shareholders of his Third Avenue Value Fund (TAVF) are
described as valuable resources "for investors to pirate good ideas" by
another famous investor, Joel Greenblatt, in his book on special-situation
investment, You Can Be a Stock Market Genius (ISBN 0-684-84007-3, p. 247).
Joel Greenblatt achieved annual returns at the hedge fund Gotham Capital of
over 50% per year for 10 years from 1985 to 1995 before closing the fund
and returning his investors' money. He is known for investing in special
situations such as spin-offs, mergers, and divestitures. Edward Lampert is
the chief of ESL Investments. He is best known for buying large stakes in
Sears and Kmart and then merging the two companies.
Volatility arbitrage
Volatility arbitrage (or vol arb) is a type of statistical arbitrage that is
implemented by trading a delta neutral portfolio of an option and its
underlier. The objective is to take advantage of differences between the
implied volatility of the option, and a forecast of future realized volatility of
the option's underlier. In volatility arbitrage, volatility is used as the unit of
relative measure rather than price - that is, traders attempt to buy volatility
when it is low and sell volatility when it is high.
Overview
To an option trader engaging in volatility arbitrage, an option contract is a
way to speculate in the volatility of the underlying rather than a directional
bet on the underlier's price. If a trader buys options as part of a delta-neutral
portfolio, he is said to be long volatility. If he sells options, he is said to be
short volatility. So long as the trading is done delta-neutral, buying an

option is a bet that the underlier's future realized volatility will be high,
while selling an option is a bet that future realized volatility will be low.
Because of put-call parity, it doesn't matter if the options traded are calls or
puts. This is true because put-call parity posits a risk neutral equivalence
relationship between a call, a put and some amount of the underlier.
Therefore, being long a delta neutral call results in the same returns as being
long a delta neutral put.
Forecast Volatility
To engage in volatility arbitrage, a trader must first forecast the underlier's
future realized volatility. This is typically done by computing the historic
daily returns for the underlier for a given past sample such as 252 days, the
number of trading days in a year. The trader may also use other factors, such
as whether the period was unusually volatile, or if there are going to be
unusual events in the near future, to adjust his forecast. For instance, if the
current 252-day volatility for the returns on a stock is computed to be 15%,
but it's known that an important patent dispute will likely be settled in the
next year, the trader may decide that the appropriate forecast volatility for
the stock is 18%. That is, based on past movements and upcoming events,
the stock is most likely to be 18% higher or lower from its current price one
year from today.
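A minimal sketch of the historical volatility computation, in Python with
placeholder prices (any adjustment of the raw figure toward a forecast, such
as the 15% to 18% revision above, remains a judgment call):

import math

def realized_volatility(prices, trading_days=252):
    # Standard deviation of daily log returns, annualised with sqrt(252).
    returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    mean = sum(returns) / len(returns)
    var = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)
    return math.sqrt(var) * math.sqrt(trading_days)

prices = [100.0, 101.2, 100.7, 102.1, 101.5, 103.0]   # hypothetical closing prices
print(f"annualised volatility: {realized_volatility(prices):.1%}")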
Market (Implied) Volatility
As described in option valuation techniques, there are a number of factors
that are used to determine the theoretical value of an option. However, in
practice, the only two inputs to the model that change during the day are the
price of the underlier and the volatility. Therefore, the theoretical price of an
option can be expressed as:

C = f(S, σ)

where S is the price of the underlier and σ is the estimate of future
volatility. Because the theoretical price function f is a monotonically
increasing function of σ, there must be a corresponding monotonically
increasing function g that expresses the volatility implied by the option's
market price C, or

σimplied = g(C, S)

Or, in other words, when all other inputs including the stock price are held
constant, there exists no more than one implied volatility for each market
price for the option.
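The implied-volatility function g has no closed form, so in practice it is
obtained numerically. The sketch below inverts the Black-Scholes call price
by bisection; the strike, maturity and interest rate are assumptions added
for illustration and are not taken from the text:

import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # Black-Scholes price of a European call.
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(market_price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-8):
    # Bisection works because the call price is monotonically increasing in sigma.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < market_price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Figures loosely echoing the example below (a call near $1.90, spot $45.50).
print(f"{implied_vol(1.90, S=45.50, K=45.00, T=0.25, r=0.02):.1%}")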
Because implied volatility of an option can remain constant even as the
underlier's value changes, traders use it as a measure of relative value rather
than the option's market price. For instance, if a trader can buy an option
whose implied volatility is 10%, it's common to say that the trader can
"buy the option for 10%". Conversely, if the trader can sell an option whose
implied volatility is 20%, it is said the trader can "sell the option at 20%".
For example, assume a call option is trading at $1.90 with the underlier's
price at $45.50, yielding an implied volatility of 17.5%. A short time later,
the same option might trade at $2.50 with the underlier's price at $46.36,
yielding an implied volatility of 16.8%. Even though the option's price is
higher at the second measurement, the option is still considered cheaper
because the implied volatility is lower. This is because the trader can sell
the stock needed to hedge the long call at a higher price.
Mechanism
Armed with a forecast volatility, and capable of measuring an option's
market price in terms of implied volatility, the trader is ready to begin a
volatility arbitrage trade. A trader looks for options where the implied
volatility is either significantly lower than, or higher than, the forecast
realized volatility for the underlier. In the first case, the trader buys the
option and hedges with the underlier to make a delta neutral portfolio. In the
second case, the trader sells the option and then hedges the position.
Over the holding period, the trader will realize a profit on the trade if the
underlier's realized volatility is closer to his forecast than it is to the market's
forecast (i.e. the implied volatility). The profit is extracted from the trade
through the continual re-hedging required to keep the portfolio delta neutral.
Fixed income arbitrage
Fixed-income arbitrage is an investment strategy generally associated with
hedge funds, which consists of the discovery and exploitation of
inefficiencies in the pricing of bonds, i.e. instruments from either public or
private issuers yielding a contractually fixed stream of income.
Most arbitrageurs who employ this strategy trade globally.

In pursuit of their goal of both steady returns and low volatility, the
arbitrageurs can focus upon interest rate swaps, US/non-US government bond
arbitrage (see US Treasury security), forward yield curves, and/or
mortgage-backed securities.
The practice of fixed-income arbitrage in general has been compared to that
of running in front of a steam roller to pick up nickels lying on the street [1].
Rational pricing
Rational pricing is the assumption in financial economics that asset prices
(and hence asset pricing models) will reflect the arbitrage-free price of the
asset as any deviation from this price will be "arbitraged away". This
assumption is useful in pricing fixed income securities, particularly bonds,
and is fundamental to the pricing of derivative instruments.
Arbitrage mechanics
Arbitrage is the practice of taking advantage of a state of imbalance between
two (or possibly more) markets. Where this mismatch can be exploited (i.e.
after transaction costs, storage costs, transport costs, dividends etc.) the
arbitrageur "locks in" a risk free profit without investing any of his own
money.
In general, arbitrage ensures that "the law of one price" will hold; arbitrage
also equalises the prices of assets with identical cash flows, and sets the
price of assets with known future cash flows.
The law of one price
The same asset must trade at the same price on all markets ("the law of one
price"). Where this is not true, the arbitrageur will:
buy the asset on the market where it has the lower price, and
simultaneously sell it (short) on the second market at the higher
price
deliver the asset to the buyer and receive that higher price
pay the seller on the cheaper market with the proceeds and pocket
the difference.
Assets with identical cash flows
Two assets with identical cash flows must trade at the same price. Where this
is not true, the arbitrageur will:

sell the asset with the higher price (short sell) and simultaneously
buy the asset with the lower price
fund his purchase of the cheaper asset with the proceeds from the
sale of the expensive asset and pocket the difference
deliver on his obligations to the buyer of the expensive asset, using
the cash flows from the cheaper asset.
An asset with a known future-price
An asset with a known price in the future must today trade at that price
discounted at the risk free rate.
Note that this condition can be viewed as an application of the above, where
the two assets in question are the asset to be delivered and the risk free asset.
(a) where the discounted future price is higher than today's price:
1. The arbitrageur agrees to deliver the asset on the future date (i.e. sells
forward) and simultaneously buys it today with borrowed money.
2. On the delivery date, the arbitrageur hands over the underlying, and
receives the agreed price.
3. He then repays the lender the borrowed amount plus interest.
4. The difference between the agreed price and the amount owed is the
arbitrage profit.
(b) where the discounted future price is lower than today's price:
1. The arbitrageur agrees to pay for the asset on the future date (i.e. buys
forward) and simultaneously sells (short) the underlying today; he
invests the proceeds.
2. On the delivery date, he cashes in the matured investment, which has
appreciated at the risk free rate.
3. He then takes delivery of the underlying and pays the agreed price
using the matured investment.
4. The difference between the maturity value and the agreed price is the
arbitrage profit.
It will be noted that (b) is only possible for those holding the asset but not
needing it until the future date. There may be few such parties if short-term
demand exceeds supply, leading to backwardation.
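A minimal numerical sketch of case (a), in Python with hypothetical figures:
the agreed future price exceeds today's price grown at the risk-free rate, so
buying today with borrowed money and selling forward locks in the difference.

def cash_and_carry_profit(spot, agreed_future_price, r, years=1.0):
    repay = spot * (1 + r) ** years        # borrow the spot price, repay with interest
    return agreed_future_price - repay     # received on delivery, minus the debt

profit = cash_and_carry_profit(spot=100.0, agreed_future_price=107.0, r=0.05)
print(f"risk-free profit at delivery: {profit:.2f}")   # 107 - 105 = 2.00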

Fixed income securities


Rational pricing is one approach used in pricing fixed rate bonds. Here, each
cash flow can be matched by trading in some multiple of a "risk free"
government issue zero coupon bond with the corresponding maturity, or in a
corresponding strip and ZCB.
Given that the cash flows can be replicated, the price of the bond must
today equal the sum of each of its cash flows discounted at the same rate as
the corresponding government securities - i.e. the corresponding risk free
rate (here, assuming similar creditworthiness). Were this not the case,
arbitrage would be possible and would bring the price back into line with the
price based on the government issued securities.
The pricing formula is as below, where each cash flow Ct is discounted at
the rate rt which matches that of the corresponding government zero coupon
instrument:

Price = Σt Ct / (1 + rt)^t

Often, the formula is expressed as Price = Σt Ct × P(t), using the prices
P(t) of the corresponding zero coupon bonds instead of rates, as prices are
more readily available.
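As a simple illustration of the formula, the zero rates and coupon schedule
below are placeholders rather than market data:

def bond_price(cash_flows, zero_rates):
    # cash_flows[t-1] is paid at year t and discounted at the matching zero rate.
    return sum(cf / (1 + r) ** t
               for t, (cf, r) in enumerate(zip(cash_flows, zero_rates), start=1))

# A 3-year bond with a 5% annual coupon and 100 face value.
flows = [5, 5, 105]
zeros = [0.030, 0.035, 0.040]               # hypothetical government zero-coupon rates
print(round(bond_price(flows, zeros), 2))   # about 102.87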

Pricing derivatives
A derivative is an instrument which allows for buying and selling of the
same asset on two markets: the spot market and the derivatives market.
Mathematical finance assumes that any imbalance between the two markets
will be arbitraged away. Thus, in a correctly priced derivative contract, the
derivative price, the strike price (or reference rate), and the spot price will be
related such that arbitrage is not possible.

Futures
In a futures contract, for no arbitrage to be possible, the price paid on
delivery (the forward price) must be the same as the cost (including interest)
of buying and storing the asset. In other words, the rational forward price

represents the expected future value of the underlying discounted at the risk
free rate. Thus, for a simple, non-dividend paying asset, the value of the
future/forward, F0, will be found by accumulating the present value S0 at
time t to maturity T at the rate of risk-free return r:

F0 = S0 × (1 + r)^(T - t)

This relationship may be modified for storage costs, dividends, dividend
yields, and convenience yields; see futures contract pricing.
Any deviation from this equality allows for arbitrage as follows.
In the case where the forward price is higher:
1. The arbitrageur sells the futures contract and buys the underlying
today (on the spot market) with borrowed money.
2. On the delivery date, the arbitrageur hands over the underlying, and
receives the agreed forward price.
3. He then repays the lender the borrowed amount plus interest.
4. The difference between the two amounts is the arbitrage profit.

In the case where the forward price is lower:

1. The arbitrageur buys the futures contract and sells the underlying
today (on the spot market); he invests the proceeds.
2. On the delivery date, he cashes in the matured investment, which has
appreciated at the risk free rate.
3. He then receives the underlying and pays the agreed forward price
using the matured investment. [If he was short the underlying, he
returns it now.]
4. The difference between the two amounts is the arbitrage profit.
Options
As above, where the value of an asset in the future is known (or expected),
this value can be used to determine the asset's rational price today. In an
option contract, however, exercise is dependent on the price of the
underlying, and hence payment is uncertain. Option pricing models therefore
include logic which either "locks in" or "infers" this value; both approaches
deliver identical results. Methods which lock-in future cash flows assume

arbitrage free pricing, and those which infer expected value assume risk
neutral valuation.
To do this, (in their simplest, though widely used form) both approaches
assume a Binomial model for the behavior of the underlying instrument,
which allows for only two states - up or down. If S is the current price, then
in the next period the price will either be S up or S down. Here, the value of
the share in the up-state is S × u, and in the down-state is S × d (where u
and d are multipliers with d < 1 < u and assuming d < 1 + r < u; see the binomial
options model). Then, given these two states, the "arbitrage free" approach
creates a position which will have an identical value in either state - the cash
flow in one period is therefore known, and arbitrage pricing is applicable.
The risk neutral approach infers expected option value from the intrinsic
values at the later two nodes.
Although this logic appears far removed from the Black-Scholes formula
and the lattice approach in the Binomial options model, it in fact underlies
both models; see The Black-Scholes PDE. The assumption of binomial
behaviour in the underlying price is defensible as the number of time steps
between today (valuation) and exercise increases, and the period per time-step is increasingly short. The Binomial options model allows for a high
number of very short time-steps (if coded correctly), while Black-Scholes, in
fact, models a continuous process.
The examples below have shares as the underlying, but may be generalised
to other instruments. The value of a put option can be derived as below, or
may be found from the value of the call using put-call parity.
Arbitrage free pricing
Here, the future payoff is "locked in" using either "delta hedging" or the
"replicating portfolio" approach. As above, this payoff is then discounted,
and the result is used in the valuation of the option today.
Delta hedging
It is possible to create a position consisting of Δ calls sold and 1 share,
such that the position's value will be identical in the S up and S down
states, and hence known with certainty (see Delta hedging). This certain
value corresponds to the forward price above, and as above, for no arbitrage
to be possible, the present value of the position must be its expected future
value discounted at the risk free rate, r. The value of a call is then found
by equating the two.

1) Solve for Δ such that:
value of position in one period = S × u - Δ × (S × u - strike price) = S × d
- Δ × (S × d - strike price)

2) Solve for the value of the call, using Δ, where:
value of position today = value of position in one period / (1 + r) = S
current - Δ × value of call
The replicating portfolio
It is possible to create a position consisting of Δ shares and $B borrowed at
the risk free rate, which will produce identical cash flows to one option on
the underlying share. The position created is known as a "replicating
portfolio" since its cash flows replicate those of the option. As shown, in
the absence of arbitrage opportunities, since the cash flows produced are
identical, the price of the option today must be the same as the value of the
position today.

1) Solve simultaneously for Δ and B such that:
i) Δ × S × u - B × (1 + r) = MAX(0, S × u - strike price)
ii) Δ × S × d - B × (1 + r) = MAX(0, S × d - strike price)

2) Solve for the value of the call, using Δ and B, where:
call = Δ × S current - B
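A one-step numerical sketch of the replicating portfolio, with illustrative
inputs (S = 100, strike = 100, u = 1.2, d = 0.9, r = 5%):

def replicating_call(S, K, u, d, r):
    up, down = max(0.0, S * u - K), max(0.0, S * d - K)   # call payoffs in each state
    delta = (up - down) / (S * u - S * d)                 # shares held
    B = (delta * S * d - down) / (1 + r)                  # amount borrowed
    return delta * S - B                                  # call value today

print(round(replicating_call(S=100, K=100, u=1.2, d=0.9, r=0.05), 4))   # 9.5238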
Risk neutral valuation
Here the value of the option is calculated using the risk neutrality
assumption. Under this assumption, the expected value (as opposed to
"locked in" value) is discounted. The expected value is calculated using the
intrinsic values from the later two nodes: Option up and Option down,
with u and d as price multipliers as above. These are then weighted by their
respective probabilities: probability p of an up move in the underlying,
and probability (1-p) of a down move. The expected value is then
discounted at r, the risk free rate.

1) Solve for p:
for no arbitrage to be possible in the share, today's price must represent
its expected value discounted at the risk free rate:

S = [ p × (up value) + (1 - p) × (down value) ] / (1 + r)
  = [ p × S × u + (1 - p) × S × d ] / (1 + r)

then, p = [ (1 + r) - d ] / [ u - d ]

2) Solve for the call value, using p:
for no arbitrage to be possible in the call, today's price must represent
its expected value discounted at the risk free rate:

Option value = [ p × Option up + (1 - p) × Option down ] / (1 + r)
             = [ p × (S × u - strike) + (1 - p) × (S × d - strike) ] / (1 + r)
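The same illustrative one-step example, priced with the risk-neutral formula;
it reproduces the value given by the replicating-portfolio sketch above:

def risk_neutral_call(S, K, u, d, r):
    p = ((1 + r) - d) / (u - d)                         # risk-neutral probability of an up move
    up, down = max(0.0, S * u - K), max(0.0, S * d - K)
    return (p * up + (1 - p) * down) / (1 + r)          # discounted expected payoff

print(round(risk_neutral_call(S=100, K=100, u=1.2, d=0.9, r=0.05), 4))   # 9.5238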

The risk neutrality assumption


Note that, above, the risk neutral formula does not refer to the volatility
of the underlying; p, as solved, relates to the risk-neutral measure as
opposed to the actual probability distribution of prices. Nevertheless, both Arbitrage
free pricing and Risk neutral valuation deliver identical results. In fact, it can
be shown that Delta hedging and Risk neutral valuation use identical
formulae expressed differently. Given this equivalence, it is valid to assume
risk neutrality when pricing derivatives.
Swaps
Rational pricing underpins the logic of swap valuation. Here, two
counterparties "swap" obligations, effectively exchanging cash flow streams
calculated against a notional principal amount, and the value of the swap is
the present value (PV) of both sets of future cash flows "netted off" against
each other.
Valuation at initiation
To be arbitrage free, the terms of a swap contract are such that, initially, the
Net present value of these future cash flows is equal to zero; see swap
valuation. For example, consider a fixed-to-floating Interest rate swap where
Party A pays a fixed rate, and Party B pays a floating rate. Here, the fixed
rate would be such that the present value of future fixed rate payments by
Party A is equal to the present value of the expected future floating rate

payments (i.e. the NPV is zero). Were this not the case, an Arbitrageur, C,
could:
assume the position with the lower present value of payments, and
borrow funds equal to this present value
meet the cash flow obligations on the position by using the
borrowed funds, and receive the corresponding payments - which
have a higher present value
use the received payments to repay the debt on the borrowed funds
pocket the difference - where the difference between the present
value of the loan and the present value of the inflows is the
arbitrage profit.
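A hedged sketch of setting the fixed rate so that the swap's NPV is zero at
initiation. It uses the common textbook simplification that the floating leg,
projected and discounted off the same curve, is worth 1 - Dn per unit of
notional, so the par fixed rate is (1 - Dn) / ΣDi; the discount factors below
are hypothetical.

def par_swap_rate(discount_factors):
    # discount_factors[i] applies to the (i+1)-th annual payment date.
    return (1 - discount_factors[-1]) / sum(discount_factors)

dfs = [0.97, 0.94, 0.90]                 # hypothetical 1y, 2y, 3y discount factors
rate = par_swap_rate(dfs)
print(f"par fixed rate: {rate:.3%}")     # about 3.559%

# Check that the two legs have equal present value at this rate.
pv_fixed = rate * sum(dfs)
pv_floating = 1 - dfs[-1]
print(round(pv_fixed - pv_floating, 12))   # ~0, i.e. zero NPV at initiation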
Subsequent valuation
Once traded, swaps can also be priced using rational pricing. For example,
the Floating leg of an interest rate swap can be "decomposed" into a series of
Forward rate agreements. Here, since the swap has identical payments to the
FRA, arbitrage free pricing must apply as above - i.e. the value of this leg is
equal to the value of the corresponding FRAs. Similarly, the "receive-fixed"
leg of a swap can be valued by comparison to a Bond with the same
schedule of payments.

Pricing shares
The Arbitrage pricing theory (APT), a general theory of asset pricing, has
become influential in the pricing of shares. APT holds that the expected
return of a financial asset can be modelled as a linear function of various
macro-economic factors, where sensitivity to changes in each factor is
represented by a factor-specific beta coefficient:

E(rj) = rf + bj1 × F1 + bj2 × F2 + ... + bjk × Fk + εj

where
E(rj) is the risky asset's expected return,
rf is the risk free rate,
Fk is the macroeconomic factor,
bjk is the sensitivity of the asset to factor k,
and εj is the risky asset's idiosyncratic random shock with mean zero.
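As a minimal numeric sketch of the relation, with each factor's contribution
taken as the asset's sensitivity times that factor's expected premium; the
factor names and numbers are purely illustrative:

def apt_expected_return(risk_free, sensitivities, factor_premiums):
    return risk_free + sum(b * f for b, f in zip(sensitivities, factor_premiums))

# Hypothetical sensitivities to, say, inflation, GDP-growth and credit factors.
betas = [0.8, 1.2, 0.5]
premiums = [0.01, 0.03, 0.02]
print(f"{apt_expected_return(0.04, betas, premiums):.2%}")   # 9.40%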
The model derived rate of return will then be used to price the asset correctly
- the asset price should equal the expected end of period price discounted at
the rate implied by model. If the price diverges, arbitrage should bring it
back into line. Here, to perform the arbitrage, the investor creates a
correctly priced asset (a synthetic asset) being a portfolio which has the same
net-exposure to each of the macroeconomic factors as the mispriced asset
but a different expected return; see the APT article for detail on the
construction of the portfolio. The arbitrageur is then in a position to make a
risk free profit as follows:
Where the asset price is too low, the portfolio should have appreciated at
the rate implied by the APT, whereas the mispriced asset would have
appreciated at more than this rate. The arbitrageur could therefore:
1. Today: short sell the portfolio and buy the mispriced-asset with the
proceeds.
2. At the end of the period: sell the mispriced asset, use the proceeds to
buy back the portfolio, and pocket the difference.
Where the asset price is too high, the portfolio should have appreciated at
the rate implied by the APT, whereas the mispriced asset would have
appreciated at less than this rate. The arbitrageur could therefore:
1. Today: short sell the mispriced-asset and buy the portfolio with the
proceeds.
2. At the end of the period: sell the portfolio, use the proceeds to buy
back the mispriced-asset, and pocket the difference.
Note that under "true arbitrage", the investor locks-in a guaranteed payoff,
whereas under APT arbitrage, the investor locks-in a positive expected
payoff. The APT thus assumes "arbitrage in expectations" - i.e that arbitrage
by investors will bring asset prices back into line with the returns expected
by the model.
The Capital asset pricing model (CAPM) is an earlier, (more) influential
theory on asset pricing. Although based on different assumptions, the CAPM
can, in some ways, be considered a "special case" of the APT; specifically,

the CAPM's Securities market line represents a single-factor model of the
asset price, where Beta is exposure to changes in value of the Market.
Fundamental theorem of arbitrage-free pricing
In a general sense, the fundamental theorem of arbitrage/finance is a way to
relate arbitrage opportunities with risk neutral measures that are equivalent
to the original probability measure.
The fundamental theorem in a finite state market
In a finite state market, the fundamental theorem of arbitrage has two parts.
The first part relates to existence of a risk neutral measure, while the second
relates to the uniqueness of the measure (see Harrison and Pliska):
1. The first part states that there is no arbitrage if and only if there exists
a risk neutral measure that is equivalent to the original probability
measure.
2. The second part states that a market is complete if and only if there is
a unique risk neutral measure that is equivalent to the original
probability measure.
The fundamental theorem of pricing is a way for the concept of arbitrage to
be converted to a question about whether or not a risk neutral measure
exists.
The fundamental theorem in more general markets
When stock price returns follow a single Brownian motion, there is a unique
risk neutral measure. When the stock price process is assumed to follow a
more general semi-martingale (see Delbaen and Schachermayer), then the
concept of arbitrage is too strong, and a weaker concept such as no free
lunch with vanishing risk must be used to describe these opportunities in an
infinite dimensional setting.
Capital asset pricing model

[Figure: The Security Market Line, describing the relation between the beta
and the asset's expected rate of return.]
[Figure: An estimation of the CAPM and the Security Market Line (purple) for
the Dow Jones Industrial Average over 3 years of monthly data.]
The Capital Asset Pricing Model (CAPM) is used in finance to determine
a theoretically appropriate required rate of return (and thus the price if
expected cash flows can be estimated) of an asset, if that asset is to be added
to an already well-diversified portfolio, given that asset's non-diversifiable
risk. The CAPM formula takes into account the asset's sensitivity to
non-diversifiable risk (also known as systematic risk or market risk), in a
number often referred to as beta (β) in the financial industry, as well as the expected
return of the market and the expected return of a theoretical risk-free asset.
The model was introduced by Jack Treynor, William Sharpe, John Lintner
and Jan Mossin independently, building on the earlier work of Harry
Markowitz on diversification and modern portfolio theory. Sharpe received
the Nobel Memorial Prize in Economics (jointly with Harry Markowitz and
Merton Miller) for this contribution to the field of financial economics.

The formula
The CAPM is a model for pricing an individual security (asset) or a
portfolio. For the individual security perspective, we make use of the security
market line (SML) and its relation to expected return and systematic risk
(beta) to show how the market must price individual securities in relation to
their security risk class. The SML enables us to calculate the reward-to-risk
ratio for any security in relation to that of the overall market. Therefore,
when the expected rate of return for any security is deflated by its beta
coefficient, the reward-to-risk ratio for any individual security in the market
is equal to the market reward-to-risk ratio, thus:
Individual security's reward-to-risk ratio = Market's (portfolio)
reward-to-risk ratio:

[ E(Ri) - Rf ] / βi = E(Rm) - Rf
The market reward-to-risk ratio is effectively the market risk premium;
rearranging the above equation and solving for E(Ri), we obtain the Capital
Asset Pricing Model (CAPM):

E(Ri) = Rf + βi × ( E(Rm) - Rf )

Where:
E(Ri) is the expected return on the capital asset
Rf is the risk-free rate of interest
βi (the beta coefficient) is the sensitivity of the asset returns to market
returns, or also βi = Cov(Ri, Rm) / Var(Rm)
E(Rm) is the expected return of the market
E(Rm) - Rf is sometimes known as the market premium or risk premium (the
difference between the expected market rate of return and the risk-free rate
of return). Note 1: the expected market rate of return is usually measured
by looking at the arithmetic average of the historical returns on a market
portfolio (e.g. the S&P 500). Note 2: the risk-free rate of return used for
determining the risk premium is usually the arithmetic average of historical
risk-free rates of return and not the current risk-free rate of return.
For the full derivation see Modern portfolio theory.
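A minimal numeric sketch of the formula and its use as a discount rate; the
4% risk-free rate, beta of 1.3, 10% expected market return and $56 expected
price are assumptions for illustration:

def capm_expected_return(risk_free, beta, market_return):
    return risk_free + beta * (market_return - risk_free)

r_f, beta, r_m = 0.04, 1.3, 0.10
e_r = capm_expected_return(r_f, beta, r_m)
print(f"required return: {e_r:.2%}")                 # 4% + 1.3 x 6% = 11.80%

# The rate can then be used to discount an expected end-of-period price.
expected_price_in_one_year = 56.0
print(f"implied fair price today: {expected_price_in_one_year / (1 + e_r):.2f}")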
Asset pricing
Once the expected return, E(Ri), is calculated using CAPM, the future cash
flows of the asset can be discounted to their present value using this rate
(E(Ri)), to establish the correct price for the asset.
In theory, therefore, an asset is correctly priced when its observed price is
the same as its value calculated using the CAPM derived discount rate. If the
observed price is higher than the valuation, then the asset is overvalued (and
undervalued when the observed price is below the CAPM valuation).
Alternatively, one can "solve for the discount rate" for the observed price
given a particular valuation model and compare that discount rate with the
CAPM rate. If the discount rate in the model is lower than the CAPM rate
then the asset is overvalued (and undervalued for a too high discount rate).

Asset-specific required return


The CAPM returns the asset-appropriate required return or discount rate,
i.e. the rate at which future cash flows produced by the asset should be
discounted given that asset's relative riskiness. Betas exceeding one signify
more than average "riskiness"; betas below one indicate lower than average.
Thus a more risky stock will have a higher beta and will be discounted at a
higher rate; less sensitive stocks will have lower betas and be discounted at a
lower rate. The CAPM is consistent with intuition - investors (should)
require a higher return for holding a more risky asset.
Since beta reflects asset-specific sensitivity to non-diversifiable, i.e. market
risk, the market as a whole, by definition, has a beta of one. Stock market
indices are frequently used as local proxies for the market - and in that case
(by definition) have a beta of one. An investor in a large, diversified
portfolio (such as a mutual fund) therefore expects performance in line with
the market.
Risk and diversification
The risk of a portfolio comprises systematic risk, also known as
undiversifiable risk, and unsystematic risk which is also known as
idiosyncratic risk or diversifiable risk. Systematic risk refers to the risk
common to all securities - i.e. market risk. Unsystematic risk is the risk
associated with individual assets. Unsystematic risk can be diversified away
to smaller levels by including a greater number of assets in the portfolio
(specific risks "average out"); systematic risk (within one market) cannot.
Depending on the market, a portfolio of approximately 30-40 securities in
developed markets such as the UK or US (more in the case of developing
markets, because of higher asset volatilities) will render the portfolio
sufficiently diversified to limit exposure to systematic risk only.
A rational investor should not take on any diversifiable risk, as only non-diversifiable risks are rewarded within the scope of this model. Therefore,
the required return on an asset, that is, the return that compensates for risk
taken, must be linked to its riskiness in a portfolio context - i.e. its
contribution to overall portfolio riskiness - as opposed to its "stand alone
riskiness." In the CAPM context, portfolio risk is represented by higher
variance i.e. less predictability. In other words the beta of the portfolio is the
defining factor in rewarding the systematic exposure taken by an investor.
The efficient frontier

[Figure: The (Markowitz) efficient frontier.]


The CAPM assumes that the risk-return profile of a portfolio can be
optimized - an optimal portfolio displays the lowest possible level of risk for

its level of return. Additionally, since each additional asset introduced into a
portfolio further diversifies the portfolio, the optimal portfolio must
comprise every asset (assuming no trading costs), with each asset
value-weighted to achieve the above (assuming that any asset is infinitely
divisible). All such optimal portfolios, i.e., one for each level of return,
comprise the efficient frontier.
Because the unsystematic risk is diversifiable, the total risk of a portfolio can
be viewed as beta.
The market portfolio
An investor might choose to invest a proportion of his or her wealth in a
portfolio of risky assets with the remainder in cash - earning interest at the
risk free rate (or indeed may borrow money to fund his or her purchase of
risky assets in which case there is a negative cash weighting). Here, the ratio
of risky assets to risk free asset does not determine overall return - this
relationship is clearly linear. It is thus possible to achieve a particular return
in one of two ways:
1. By investing all of one's wealth in a risky portfolio,
2. or by investing a proportion in a risky portfolio and the remainder in
cash (either borrowed or invested).
For a given level of return, however, only one of these portfolios will be
optimal (in the sense of lowest risk). Since the risk free asset is, by
definition, uncorrelated with any other asset, option 2 will generally have the
lower variance and hence be the more efficient of the two.
This relationship also holds for portfolios along the efficient frontier: a
higher return portfolio plus cash is more efficient than a lower return
portfolio alone for that lower level of return. For a given risk free rate, there
is only one optimal portfolio which can be combined with cash to achieve
the lowest level of risk for any possible return. This is the market portfolio.
Assumptions of CAPM

All investors have rational expectations.
There are no arbitrage opportunities.
Returns are distributed normally.
Fixed quantity of assets.
Perfectly efficient capital markets.
Investors are solely concerned with the level and uncertainty of future
wealth.
Separation of financial and production sectors; thus, production plans are
fixed.
Risk-free rates exist with limitless borrowing capacity and universal
access.
The risk-free borrowing and lending rates are equal.
No inflation and no change in the level of interest rates exists.
Perfect information, hence all investors have the same expectations about
security returns for any given time period.

Shortcomings of CAPM
The model assumes that asset returns are (jointly) normally distributed
random variables. It is however frequently observed that returns in
equity and other markets are not normally distributed. As a result,
large swings (3 to 6 standard deviations from the mean) occur in the
market more frequently than the normal distribution assumption
would expect.
The model assumes that the variance of returns is an adequate
measurement of risk. This might be justified under the assumption of
normally distributed returns, but for general return distributions other
risk measures (like coherent risk measures) will likely reflect the
investors' preferences more adequately.
The model does not appear to adequately explain the variation in stock
returns. Empirical studies show that low beta stocks may offer higher
returns than the model would predict. Some data to this effect was
presented as early as a 1969 conference in Buffalo, New York in a
paper by Fischer Black, Michael Jensen, and Myron Scholes. Either
that fact is itself rational (which saves the efficient markets hypothesis
but makes CAPM wrong), or it is irrational (which saves CAPM, but
makes EMH wrong; indeed, this possibility makes volatility
arbitrage a strategy for reliably beating the market).
The model assumes that given a certain expected return investors will
prefer lower risk (lower variance) to higher risk and conversely given
a certain level of risk will prefer higher returns to lower ones. It does
not allow for investors who will accept lower returns for higher risk.

Casino gamblers clearly pay for risk, and it is possible that some stock
traders will pay for risk as well.
The model assumes that all investors have access to the same information
and agree about the risk and expected return of all assets.
(Homogeneous expectations assumption)
The model assumes that there are no taxes or transaction costs, although
this assumption may be relaxed with more complicated versions of the
model.
The market portfolio consists of all assets in all markets, where each
asset is weighted by its market capitalization. This assumes no
preference between markets and assets for individual investors, and
that investors choose assets solely as a function of their risk-return
profile. It also assumes that all assets are infinitely divisible as to the
amount which may be held or transacted.
The market portfolio should in theory include all types of assets that are
held by anyone as an investment (including works of art, real estate,
human capital...) In practice, such a market portfolio is unobservable
and people usually substitute a stock index as a proxy for the true
market portfolio. Unfortunately, it has been shown that this
substitution is not innocuous and can lead to false inferences as to the
validity of the CAPM, and it has been said that due to the
inobservability of the true market portfolio, the CAPM might not be
empirically testable. This was presented in greater depth in a paper by
Richard Roll in 1977, and is generally referred to as Roll's Critique.
Theories such as the Arbitrage Pricing Theory (APT) have since been
formulated to circumvent this problem.
Because CAPM prices a stock in terms of all stocks and bonds, it is really
an arbitrage pricing model which throws no light on how a firm's beta
gets determined.
Modern portfolio theory

[Figure: Capital Market Line.]


Modern portfolio theory (MPT) proposes how rational investors will use
diversification to optimize their portfolios, and how a risky asset should be
priced. The basic concepts of the theory are Markowitz diversification, the
efficient frontier, capital asset pricing model, the alpha and beta coefficients,
the Capital Market Line and the Securities Market Line.
MPT models an asset's return as a random variable, and models a portfolio
as a weighted combination of assets; the return of a portfolio is thus the
weighted combination of the assets' returns. Moreover, a portfolio's return is
a random variable, and consequently has an expected value and a variance.
Risk, in this model, is the standard deviation of the portfolio's return.
Risk and reward
The model assumes that investors are risk averse. This means that given two
assets that offer the same expected return, investors will prefer the less risky
one. Thus, an investor will take on increased risk only if compensated by
higher expected returns. Conversely, an investor who wants higher returns
must accept more risk. The exact trade-off will differ by investor based on
individual risk aversion characteristics. The implication is that a rational
investor will not invest in a portfolio if a second portfolio exists with a more
favourable risk-return profile - i.e. if for that level of risk an alternative
portfolio exists which has better expected returns.
Mean and variance
It is further assumed that investors' risk / reward preferences can be described
via a quadratic utility function. The effect of this assumption is that only the
expected return and the volatility (i.e. mean return and standard deviation)

matter to the investor. The investor is indifferent to other characteristics of


the distribution of returns, such as its skew. Note that the theory uses a
historical parameter, volatility, as a proxy for risk, while return is an
expectation on the future.
Recent innovations in portfolio theory, particularly under the rubric of
Post-Modern Portfolio Theory (PMPT), have exposed many flaws in this total
reliance on standard deviation as the investor's risk proxy.
Under the model:

Portfolio return is the proportion-weighted combination of the constituent
assets' returns.
Portfolio volatility is a function of the correlation of the component
assets. The change in volatility is non-linear as the weighting of the
component assets changes.

Mathematically
In general:

Expected return:
E(Rp) = Σi wi × E(Ri)
where Rp is the portfolio return, Ri is the return on asset i, and wi is the
weighting of asset i (its proportion in the portfolio).

Portfolio variance:
σp² = Σi Σj wi × wj × σi × σj × ρij
where ρij is the correlation coefficient between the returns on assets i and
j (with ρii = 1).

Portfolio volatility:
σp = √(σp²)

For a two asset portfolio:

Portfolio return: E(Rp) = wA × E(RA) + wB × E(RB)
Portfolio variance: σp² = wA²σA² + wB²σB² + 2 wA wB σA σB ρAB

For a three asset portfolio, the variance is:
σp² = wA²σA² + wB²σB² + wC²σC²
      + 2 wA wB σA σB ρAB + 2 wA wC σA σC ρAC + 2 wB wC σB σC ρBC
As can be seen, as the number of assets (n) in the portfolio increases, the
calculation becomes computationally intensive - the number of covariance
terms = n (n-1) /2. For this reason, portfolio computations usually require
specialized software. These values can also be modeled using matrices; for a
manageable number of assets, these statistics can be calculated using a
spreadsheet.
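A minimal sketch of the calculation for a small portfolio, with placeholder
weights, volatilities and correlations; for larger portfolios the same double
sum is usually written with a covariance matrix:

def portfolio_volatility(weights, vols, corr):
    # Double sum over all asset pairs: w_i * w_j * sigma_i * sigma_j * rho_ij.
    n = len(weights)
    var = sum(weights[i] * weights[j] * vols[i] * vols[j] * corr[i][j]
              for i in range(n) for j in range(n))
    return var ** 0.5

weights = [0.5, 0.3, 0.2]
vols = [0.20, 0.15, 0.10]
corr = [[1.0, 0.3, 0.1],
        [0.3, 1.0, 0.4],
        [0.1, 0.4, 1.0]]
print(f"portfolio volatility: {portfolio_volatility(weights, vols, corr):.2%}")   # about 12.7%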
Diversification
An investor can reduce portfolio risk simply by holding instruments which
are not perfectly correlated. In other words, investors can reduce their
exposure to individual asset risk by holding a diversified portfolio of assets.
Diversification will allow for the same portfolio return with reduced risk.
For diversification to work the component assets must not be perfectly
correlated, i.e. correlation coefficient not equal to 1.
As the formula above shows, if all asset pairs have a correlation coefficient
of 1 (perfectly correlated), the portfolio volatility is simply the weighted
average of the individual instruments' volatilities. Whenever the correlations
are below 1, and especially when assets are inversely correlated, the
portfolio's volatility is less than that weighted average; the lower the
correlations, the greater the reduction.
Capital allocation line
The Capital Allocation Line (CAL) is the line of expected return plotted
against risk (standard deviation) that connects all portfolios that can be
formed using a risky asset and a riskless asset. It can be proven that it is a
straight line and that it has the following equation:

E(RC) = RF + σC × ( E(RP) - RF ) / σP

In this formula P is the risky portfolio, F is the riskless portfolio and C
is a combination of portfolios P and F.
The efficient frontier

[Figure: The efficient frontier.]
Every possible asset combination can be plotted in risk-return space, and the
collection of all such possible portfolios defines a region in this space. The
line along the upper edge of this region is known as the efficient frontier
(sometimes the Markowitz frontier). Combinations along this line
represent portfolios for which there is lowest risk for a given level of return.
Conversely, for a given amount of risk, the portfolio lying on the efficient
frontier represents the combination offering the best possible return.
Mathematically the Efficient Frontier is the intersection of the Set of
Portfolios with Minimum Variance and the Set of Portfolios with Maximum
Return.
The efficient frontier is illustrated above, with return μp on the y axis
and risk σp on the x axis; an alternative illustration from the diagram in
the CAPM article is at right.
The efficient frontier will be convex - this is because the risk-return
characteristics of a portfolio change in a non-linear fashion as its component
weightings are changed. (As described above, portfolio risk is a function of
the correlation of the component assets, and thus changes in a non-linear
fashion as the weighting of component assets changes.) The efficient frontier
is a parabola (hyperbola) when expected return is plotted against variance
(standard deviation).
The region above the frontier is unachievable by holding risky assets alone.
No portfolios can be constructed corresponding to the points in this region.
Points below the frontier are suboptimal. A rational investor will hold a
portfolio only on the frontier.

The risk-free asset


The risk-free asset is the (hypothetical) asset which pays a risk-free rate - it
is usually proxied by an investment in short-dated Government securities.
The risk-free asset has zero variance in returns (hence is risk-free); it is also
uncorrelated with any other asset (by definition: since its variance is zero).
As a result, when it is combined with any other asset, or portfolio of assets,
the change in return and also in risk is linear.
Because both risk and return change linearly as the risk-free asset is
introduced into a portfolio, this combination will plot a straight line in
risk-return space. The line starts at 100% in cash and weight of the risky portfolio
= 0 (i.e. intercepting the return axis at the risk-free rate) and goes through
the portfolio in question where cash holding = 0 and portfolio weight = 1.
Mathematically
Using the formulae for a two asset portfolio as above:
Return is the weighted average of the risk free asset, f, and the risky
portfolio, p, and is therefore linear:

Return = wf × Rf + wp × E(Rp)

Since the asset is risk free, portfolio standard deviation is simply a
function of the weight of the risky portfolio in the position. This
relationship is linear.

Standard deviation = √( wf²σf² + wp²σp² + 2 wf wp σf σp ρfp )
                   = √( wp²σp² )      (since σf = 0)
                   = wp × σp
Portfolio leverage
An investor can add leverage to the portfolio by borrowing the risk-free
asset. The addition of the risk-free asset allows for a position in the region
above the efficient frontier. Thus, by combining a risk-free asset with risky
assets, it is possible to construct portfolios whose risk-return profiles are
superior to those on the efficient frontier.
An investor holding a portfolio of risky assets, with a holding in cash, has
a positive risk-free weighting (a de-leveraged portfolio). The return
and standard deviation will be lower than the portfolio alone, but since
the efficient frontier is convex, this combination will sit above the
efficient frontier - i.e. offering a higher return for the same risk as the
point below it on the frontier.
The investor who borrows money to fund his/her purchase of the risky
assets has a negative risk-free weighting - i.e. a leveraged portfolio.
Here the return is geared to the risky portfolio. This combination will
again offer a return superior to those on the frontier.
The market portfolio
The efficient frontier is a collection of portfolios, each one optimal for a
given amount of risk. A quantity known as the Sharpe ratio represents a
measure of the amount of additional return (above the risk-free rate) a
portfolio provides compared to the risk it carries. The portfolio on the
efficient frontier with the highest Sharpe Ratio is known as the market
portfolio, or sometimes the super-efficient portfolio; it is the tangency
portfolio in the above diagram.
This portfolio has the property that any combination of it and the risk-free
asset will produce a return that is above the efficient frontier - offering a
larger return for a given amount of risk than a portfolio of risky assets on the
frontier would.
Capital market line
When the market portfolio is combined with the risk-free asset, the result is
the Capital Market Line. All points along the CML have superior risk-return
profiles to any portfolio on the efficient frontier. (The market portfolio with
zero cash weighting is on the efficient frontier; additions of cash or leverage
with the risk-free asset in combination with the market portfolio are on the
Capital Market Line. All of these portfolios represent the highest Sharpe
ratios possible.)

The CML is illustrated above, with return μp on the y axis and risk σp on
the x axis.
One can prove that the CML is the optimal CAL and that its equation is:

E(RC) = RF + σC × ( E(RM) - RF ) / σM
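A small numeric sketch of the CML, with a hypothetical market portfolio (10%
expected return, 18% volatility) and a 4% risk-free rate:

def cml_expected_return(sigma_c, r_f, market_return, market_sigma):
    return r_f + sigma_c * (market_return - r_f) / market_sigma

for sigma_c in (0.00, 0.09, 0.18, 0.27):     # cash only, half, full, leveraged
    e_r = cml_expected_return(sigma_c, 0.04, 0.10, 0.18)
    print(f"risk {sigma_c:.0%} -> expected return {e_r:.2%}")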

Asset pricing
A rational investor would not invest in an asset which does not improve the
risk-return characteristics of his existing portfolio. Since a rational investor
would hold the market portfolio, the asset in question will be added to the
market portfolio. MPT derives the required return for a correctly priced asset
in this context.
Systematic risk and specific risk
Specific risk is the risk associated with individual assets - within a portfolio
these risks can be reduced through diversification (specific risks "cancel
out"). Systematic risk, or market risk, refers to the risk common to all
securities - except for selling short as noted below, systematic risk cannot be
diversified away (within one market). Within the market portfolio, asset
specific risk will be diversified away to the extent possible. Systematic risk
is therefore equated with the risk (standard deviation) of the market
portfolio.
Since a security will be purchased only if it improves the risk / return
characteristics of the market portfolio, the risk of a security will be the risk it
adds to the market portfolio. In this context, the volatility of the asset, and its
correlation with the market portfolio, is historically observed and is therefore
a given (there are several approaches to asset pricing that attempt to price
assets by modelling the stochastic properties of the moments of assets'
returns - these are broadly referred to as conditional asset pricing models).
The (maximum) price paid for any particular asset (and hence the return it
will generate) should also be determined based on its relationship with the
market portfolio.
Systematic risks within one market can be managed through a strategy of
using both long and short positions within one portfolio, creating a "market
neutral" portfolio.

Security characteristic line


The Security Characteristic Line (SCL) represents the relationship
between the market return (rM) and the return of a given asset i (ri) at a
given time t. In general, it is reasonable to assume that the SCL is a straight
line and can be illustrated as a statistical equation:

r_it = α_i + β_i r_Mt + ε_it

where α_i is called the asset's alpha coefficient and β_i the asset's beta
coefficient.
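A minimal sketch of estimating the SCL by ordinary least squares is shown below. The return series are invented purely for illustration, and the cross-check uses the familiar identity that the regression slope equals cov(r_i, r_M) / var(r_M).

import numpy as np

# Hypothetical monthly returns for the market (r_M) and for asset i (r_i).
r_m = np.array([0.021, -0.013, 0.035, 0.008, -0.022, 0.017, 0.004, 0.026])
r_i = np.array([0.030, -0.020, 0.041, 0.012, -0.035, 0.022, 0.001, 0.033])

# Ordinary least squares fit of r_i = alpha_i + beta_i * r_M + epsilon
beta_i, alpha_i = np.polyfit(r_m, r_i, 1)    # polyfit returns slope, then intercept
print("alpha:", round(alpha_i, 4), "beta:", round(beta_i, 3))

# The same beta also equals cov(r_i, r_M) / var(r_M):
beta_check = np.cov(r_i, r_m, ddof=1)[0, 1] / np.var(r_m, ddof=1)
print("beta via covariance:", round(beta_check, 3))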
Capital asset pricing model
The asset's return depends on the amount paid for the asset today. The price paid
must ensure that the market portfolio's risk / return characteristics improve
when the asset is added to it. The CAPM is a model which derives the
theoretical required return (i.e. discount rate) for an asset in a market, given
the risk-free rate available to investors and the risk of the market as a whole.
The CAPM is usually expressed:

E(ri) = Rf + βi (E(Rm) − Rf)

βi (beta) is the measure of asset sensitivity to a movement in the overall
market; beta is usually found via regression on historical data. Betas
exceeding one signify more than average "riskiness"; betas below one
indicate lower than average.

(E(Rm) − Rf) is the market premium, the historically observed
excess return of the market over the risk-free rate.

Once the expected return, E(ri), is calculated using CAPM, the future cash
flows of the asset can be discounted to their present value using this rate to
establish the correct price for the asset. (Here again, the theory accepts in its
assumptions that a parameter based on past data can be combined with a
future expectation.)
A riskier stock will have a higher beta and will be discounted at a higher
rate; less sensitive stocks will have lower betas and will be discounted at a lower
rate. In theory, an asset is correctly priced when its observed price is the
same as its value calculated using the CAPM-derived discount rate. If the
observed price is higher than the valuation, the asset is overvalued; it is
undervalued if the observed price is too low.
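A small worked sketch of the pricing step just described, with invented numbers: derive E(ri) from the CAPM inputs and discount hypothetical cash flows at that rate.

# Hypothetical inputs, not taken from the text.
rf = 0.04          # risk-free rate
e_rm = 0.09        # expected market return
beta = 1.3         # asset's beta

# CAPM required return: E(ri) = rf + beta * (E(rm) - rf)
e_ri = rf + beta * (e_rm - rf)               # 0.105 with these inputs

# Discount a stream of expected cash flows at that rate to get a model price.
cash_flows = [5.0, 5.0, 105.0]               # e.g. two coupons and a final payment
price = sum(cf / (1 + e_ri) ** t for t, cf in enumerate(cash_flows, start=1))
print("required return:", round(e_ri, 4))
print("model price:", round(price, 2))

If the asset's observed price sits above this model price it would be labelled overvalued under the logic above, and undervalued if it sits below.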
Mathematically
(1) The incremental impact on risk and return when an additional risky asset,
a, is added to the market portfolio, m, follows from the formulae for a two-asset
portfolio. These results are used to derive the asset-appropriate
discount rate. (Here wm and wa are the weights of m and a, σm and σa their standard deviations, ρam the correlation between them, σam their covariance, and σmm the market variance.)

Market portfolio's risk = sqrt( (wm σm)^2 + [ (wa σa)^2 + 2 wm wa ρam σa σm ] )
Hence, risk added to portfolio = (wa σa)^2 + 2 wm wa ρam σa σm
but since the weight of the asset will be relatively low, wa^2 ≈ 0,
i.e. additional risk = 2 wm wa ρam σa σm
Market portfolio's return = wm E(Rm) + wa E(Ra)
Hence additional return = wa E(Ra)

(2) If an asset, a, is correctly priced, the improvement in its risk-to-return
ratio achieved by adding it to the market portfolio, m, will at least match the gains
of spending that money on an increased stake in the market portfolio. The
assumption is that the investor will purchase the asset with funds borrowed
at the risk-free rate, Rf; this is rational if E(Ra) > Rf.
Thus:
wa (E(Ra) − Rf) / (2 wm wa ρam σa σm) = wa (E(Rm) − Rf) / (2 wm wa σm σm)
i.e.: E(Ra) = Rf + (E(Rm) − Rf) (ρam σa σm) / (σm σm)
i.e.: E(Ra) = Rf + (E(Rm) − Rf) (σam / σmm)
(σam / σmm) is the beta, β: the covariance between the asset and
the market divided by the variance of the market, i.e. the sensitivity
of the asset price to movement in the market portfolio.

Securities market line


The SML essentially graphs the results from the
capital asset pricing model (CAPM) formula. The
X-axis represents the risk (beta), and the Y-axis
represents the expected return. The market risk
premium is determined from the slope of the
The Security Market Line
SML.
The relationship between Beta & required return is plotted on the securities
market line (SML) which shows expected return as a function of . The
intercept is the risk-free rate available for the market, while the slope is
. The Securities market line can be regarded as representing
a single-factor model of the asset price, where Beta is exposure to changes in
value of the Market. The equation of the SML is thus:

It is a useful tool in determining whether an asset being considered for a portfolio
offers a reasonable expected return for its risk. Individual securities are plotted
on the SML graph. If the security's risk versus expected return is plotted
above the SML, it is undervalued, since the investor can expect a greater
return for the inherent risk. A security plotted below the SML is
overvalued, since the investor would be accepting less return for the amount
of risk assumed.
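A short sketch of the screening rule just described, using invented betas and analyst-expected returns: a security whose expected return exceeds the return required by the SML is flagged as undervalued, and overvalued otherwise.

# Hypothetical inputs: risk-free rate, market premium, and per-security
# (beta, expected return) pairs. None of these figures come from the text.
rf, market_premium = 0.03, 0.06

securities = {"A": (0.8, 0.10), "B": (1.4, 0.10)}

for name, (beta, expected) in securities.items():
    sml = rf + beta * market_premium          # return required by the SML
    verdict = "undervalued" if expected > sml else "overvalued"
    print(f"{name}: required {sml:.3f}, expected {expected:.3f} -> {verdict}")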
Comparison with arbitrage pricing theory
The SML and CAPM are often contrasted with the Arbitrage pricing theory
(APT), which holds that the expected return of a financial asset can be
modeled as a linear function of various macro-economic factors, where
sensitivity to changes in each factor is represented by a factor specific beta
coefficient.
The APT is less restrictive in its assumptions: it allows for an explanatory
(as opposed to statistical) model of asset returns, and assumes that each
investor will hold a unique portfolio with its own particular array of betas, as
opposed to the identical "market portfolio". Unlike the CAPM, however, the APT
does not itself reveal the identity of its priced factors; the number
and nature of these factors are likely to change over time and between
economies.
Super-diversification
The highest degree of diversification occurs when institutional asset class
funds are used to construct a financial portfolio. The term was first
introduced in Wealth Without Worry by Jim Whiddon and Lance Alston
(Brown Books, 2005) who apply the fundamental academic research of
Eugene Fama and Professor Kenneth French. See also: diversification,
efficient market hypothesis and market portfolio theory.
A super-diversified, asset class portfolio holds somewhere between 10,000
and 12,000 securities through a smaller number of institutional asset class
funds.
Earnings response coefficient
Introduction
In financial economics, arbitrage pricing theory describes the theoretical
relationship between information that is known to market participants about
a particular equity (e.g., a common stock share of a particular company) and
the price of that equity. Under the efficient market hypothesis, equity prices
are expected in the aggregate to reflect all relevant information at a given
time. Market participants with superior information are expected to exploit
that information until share prices have effectively impounded the
information. Therefore, in the aggregate, a portion of changes in a
company's share price is expected to result from changes in the relevant
information available to the market.
The earnings response coefficient, or ERC, expresses the relationship
between equity returns and the unexpected portion (i.e., new information) of
companies' earnings announcements.
The ERC is expressed mathematically as follows:
R = a + b(ern − u) + e

where:
R = the expected return
a = benchmark rate
b = earnings response coefficient
(ern − u) = (actual earnings less expected earnings) = unexpected
earnings
e = random movement
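A minimal sketch of how an ERC of this form could be estimated: regress announcement-window returns on unexpected earnings by ordinary least squares. The numbers below are invented purely for illustration.

import numpy as np

# Hypothetical cross-section of announcement returns (R) and unexpected
# earnings (actual minus expected, scaled); illustrative values only.
unexpected = np.array([0.02, -0.01, 0.04, 0.00, -0.03, 0.015])
returns    = np.array([0.018, -0.006, 0.031, 0.002, -0.022, 0.012])

# OLS fit of R = a + b * (ern - u) + e; the slope b is the ERC.
b, a = np.polyfit(unexpected, returns, 1)
print("benchmark rate a:", round(a, 4))
print("earnings response coefficient b:", round(b, 3))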
Use & Debate
ERCs are used primarily in research in Accounting and Finance. In
particular, ERCs have been used in research in Positive Accounting, a
branch of Financial Accounting research, as they theoretically describe how
markets react to different information events. Research in Finance has used
ERCs to study, among other things, how different investors react to
information events. (Hotchkiss & Strickland 2003)
There is some debate concerning the true nature and strength of the ERC
relationship. As demonstrated in the above model, the ERC is generally
considered to be the slope coefficient of a linear equation between
unexpected earnings and equity return. However, certain research results
suggest that the relationship is nonlinear (Freeman & Tse 1992).
Fair value
Fair value, also called fair price, is a concept used in finance and
economics, defined as a rational and unbiased estimate of the potential
market price of a good, service, or asset, taking into account such factors as:
relative scarcity
perceived utility (economist's term for subjective value based on personal
needs)
risk characteristics
replacement costs, or costs of close substitutes
production/distribution costs, including a cost of capital
In accounting, fair value is used as an estimate of the market value of an
asset (or liability) for which a market price cannot be determined (usually
because there is no established market for the asset). This is used for assets
whose carrying value is based on mark-to-market valuations; for assets
carried at historical cost, the fair value of the asset is not used.
Fair value vs market price
There are two schools of thought about the relation between the market price
and fair value in any kind of market, but especially with regards to tradable
assets:
The efficient market hypothesis asserts that, in a well organized,
reasonably transparent market, the market price is generally equal to
or close to the fair value, as investors react quickly to incorporate new
information about relative scarcity, utility, or potential returns in their
bids; see also Rational pricing.
Behavioral finance asserts that the market price often diverges from
fair value because of various, common cognitive biases among buyers
or sellers. However, even proponents of behavioral finance generally
acknowledge that behavioral anomalies that may cause such a
divergence often do so in ways that are unpredictable, chaotic, or
otherwise difficult to capture in a sustainably profitable trading
strategy, especially when accounting for transaction costs.
Fair Value Measurements (US markets): Exposure Draft
The Financial Accounting Standards Board (FASB) issued Exposure Draft
1201-100 on June 23, 2004, to provide proposed guidance about how
entities should determine fair value estimations for financial reporting
purposes. The draft would apply broadly to financial and nonfinancial assets
and liabilities measured at fair value under other authoritative accounting
pronouncements. The absence of a single consistent framework for applying
fair value measurements, and for developing a reliable estimate of fair value in
the absence of quoted prices, has created inconsistencies and incomparability.
The purpose of this exposure draft is to eliminate the inconsistencies by
developing a solid framework to be used in any fair value measurements.
The draft suggests the following definition for fair value: the price at which
an asset or liability could be exchanged in a current transaction between
knowledgeable, unrelated willing parties. It notes that the price is an
estimate in the absence of an actual exchange.
The exposure draft emphasizes the use of market inputs in valuing an asset
or liability. The specific market inputs mentioned include quoted prices,
interest rates, yield curves, credit data, etc. Fair value is, by definition,
derived from a current transaction which happens in an active market with
knowledgeable and unrelated parties. When fair value is not available due to
the lack of an actual transaction, it is logical to use information from an
active market. However, sometimes quoted prices might not represent the
best estimate of fair value.
The basis of the framework in the exposure draft centers on a fair value
hierarchy. The hierarchy is suggested as a guide to determining what inputs
to include in valuing an asset or liability at fair value. The hierarchy is
broken down into three levels. Level One requires the use of quoted prices
from an active market for identical assets or liabilities. To use this level, the
entity must have immediate access to the market (could exchange in current
condition). If more than one market is available, the exposure draft requires
the use of the most advantageous market. Both the price and costs to do
the transaction must be considered.
Level Two requires the use of quoted prices for similar assets or liabilities in
active markets. While in Level One an entity is not permitted to make any
change to the quoted price, an entity may make price adjustments, as
necessary, in Level Two since the assets or liabilities are only similar, not
identical. It is stated, however, that any adjustment must be objective. If the
adjustment is not objective or there are no similar goods in the active
market, an entity must measure the fair value based on Level Three. This
level requires the use of valuation techniques. The draft suggests the use of
the market, income, and cost approach, unless the use of all three produces
undue costs and effort. If that is the case, an entity is to use the approach that
produces the best approximation of the fair value. Inputs used to determine
the value should be external to the entity. The entity may only rely on
internal information if the cost and effort to obtain external information is
too high.
A working draft has been established for the fair value exposure draft. The
working draft was released for comment on October 21, 2005. One of the
noticeable changes in the working draft compared to the exposure draft is
the addition of two more levels in the fair value hierarchy. Level three has
been adjusted to only include assets or liabilities that have observable inputs
other than quoted prices. It is also explained that financial instruments must
have an input that is observable over the entire term of the instrument. The
addition of the additional levels helps to eliminate the ambiguity associated
with the first exposure draft. Instruments that have inputs that are not
directly observable, but have corroboration through other data, are
considered Level Four. Level Five encompasses any remaining valuation that
requires all entity inputs and no market inputs.
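The hierarchy described above can be read as a simple selection rule. The sketch below paraphrases it in code; the function name and boolean flags are hypothetical labels of my own, not terms from the exposure draft or working draft.

# Rough sketch of the five-level hierarchy from the working draft,
# expressed as a selection rule; the level descriptions are paraphrased.
def fair_value_level(quoted_identical, quoted_similar, other_observable,
                     corroborated_unobservable):
    if quoted_identical:
        return 1   # quoted prices in an active market for identical items
    if quoted_similar:
        return 2   # quoted prices for similar items, objective adjustments allowed
    if other_observable:
        return 3   # observable inputs other than quoted prices
    if corroborated_unobservable:
        return 4   # inputs corroborated by other market data
    return 5       # entity inputs only, no market inputs

print(fair_value_level(False, True, False, False))   # -> 2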
Homo economicus
Homo economicus, or Economic man, is the concept in some economic
theories of man (that is, a human) as a rational and self-interested actor who
desires wealth, avoids unnecessary labor, and has the ability to make
judgments towards those ends.
History of the term
The term Economic Man was used for the first time in the late nineteenth
century by critics of John Stuart Mill's work on political economy.[1][2] Below
is a passage from Mill's work that those 19th-century critics were referring
to:
"[Political economy] does not treat the whole of man's nature as modified by
the social state, nor of the whole conduct of man in society. It is concerned
with him solely as a being who desires to possess wealth, and who is capable
of judging the comparative efficacy of means for obtaining that end."
Later in the same work, Mill goes on to write that he is proposing "an
arbitrary definition of man, as a being who inevitably does that by which he
may obtain the greatest amount of necessaries, conveniences, and luxuries,
with the smallest quantity of labour and physical self-denial with which they
can be obtained."
Although the term did not come into use until the 19th century, it is often
associated with the ideas of 18th century thinkers like Adam Smith and
David Ricardo. In The Wealth of Nations, Smith wrote:
"It is not from the benevolence of the butcher, the brewer, or the baker that
we expect our dinner, but from their regard to their own interest."
This suggests the same sort of rational, self-interested, labor-averse
individual that Mill proposes. Aristotle's Politics discussed the nature of self
interest in Book II, Part V.

"Again, how immeasurably greater is the pleasure, when a man feels a thing
to be his own; for surely the love of self is a feeling implanted by nature and
not given in vain, although selfishness is rightly censured; this, however, is
not the mere love of self, but the love of self in excess, like the miser's love
of money; for all, or almost all, men love money and other such objects in a
measure. And further, there is the greatest pleasure in doing a kindness or
service to friends or guests or companions, which can only be rendered when
a man has private property."
A wave of economists in the late 19th century (Francis Edgeworth, William
Stanley Jevons, Léon Walras, and Vilfredo Pareto) built mathematical
models on these assumptions. In the 20th century, Lionel Robbins' rational
choice theory came to dominate mainstream economics, and the term
Economic Man took on a more specific meaning: a person who acts
rationally on complete knowledge out of self-interest and the desire for
wealth.
The model
Homo economicus is a term used for an approximation or model of Homo
sapiens that acts to obtain the highest possible well-being for himself given
available information about opportunities and other constraints, both natural
and institutional, on his ability to achieve his predetermined goals. This
approach has been formalized in certain social science models, particularly
in economics.
Homo economicus is seen as "rational" in the sense that well-being as
defined by the utility function is optimized given perceived opportunities.
That is, the individual seeks to attain very specific and predetermined goals
to the greatest extent with the least possible cost. Note that this kind of
"rationality" does not say that the individual's actual goals are "rational" in
some larger ethical, social, or human sense, only that he tries to attain them
at minimal cost. Only naïve applications of the Homo economicus model
assume that this hypothetical individual knows what is best for his long-term
physical and mental health and can be relied upon to always make the right
decision for himself. See rational choice theory and rational expectations for
further discussion; the article on rationality widens the discussion.
As in social science in general, these assumptions are at best
approximations. The term is often used derogatorily in academic literature,
perhaps most commonly by sociologists, many of whom tend to prefer
structural explanations to ones based on rational action by individuals.
The use of the Latin form Homo economicus is certainly long established;
Persky (1995) traces it back to Pareto (1906) but notes that it may be older.
The English term economic man can be found even earlier, in John Kells
Ingram's A History of Political Economy (1888). The Oxford English
Dictionary (O.E.D.) does not mention Homo economicus, but it is one of a
number of phrases that imitate the scientific name for the human species.
According to the O.E.D., the human genus name Homo is
"Used with L. or mock-L. adjs. in names imitating Homo sapiens, etc., and
intended to personify some aspect of human life or behaviour (indicated by
the adj.). Homo faber [H. Bergson, L'Évolution créatrice (1907) ii. 151], a
term used to designate man as a maker of tools." Variants are often
comic: Homo insipiens; Homo turisticus. (This is from the CD edition of
2002.)
Note that such forms should logically keep the capital for the "genus" name
i.e., Homo economicus rather than homo economicus. Actual usage is
inconsistent.

Criticisms
Homo economicus bases his choices on a consideration of his own personal
"utility function". Economic man is also amoral, ignoring all social values
unless adhering to them gives him utility. Some believe such assumptions
about humans are not only empirically inaccurate but unethical.
Consequently, the "homo economicus" assumptions have been criticized not
only by economists on the basis of logical arguments, but also on empirical
grounds by cross-cultural comparison. Economic anthropologists such as
Marshall Sahlins[5], Karl Polanyi[6], Marcel Mauss[7] or Maurice Godelier[8]
have demonstrated that in traditional societies, the choices people make
regarding production and exchange of goods follow patterns of reciprocity
that differ sharply from what the "homo oeconomicus" model postulates. Such
systems have been termed gift economy rather than market economy.
Criticisms of the "homo oeconomicus" model put forward from the

standpoint of Christian ethics usually refer to ths traditional ethics of


kinship-based reciprocity that held together traditional societies. They
typically tend to view the egoistic and amoral behavior of "homo
oeconomicus" as unethical conduct that may be functional within a
competitive market economy but is not in line with, but fundamentally
running against human nature and ethics.
Economists Thorstein Veblen, John Maynard Keynes, Herbert Simon, and
many of the Austrian School criticise Homo economicus as an actor with too
great an understanding of macroeconomics and economic forecasting in
his decision making. They stress uncertainty and bounded rationality in the
making of economic decisions, rather than relying on the rational man who
is fully informed of all circumstances impinging on his decisions. They
argue that perfect knowledge never exists, which means that all economic
activity implies risk.
Empirical studies by Amos Tversky questioned the assumption that investors
are rational. In 1995, Tversky demonstrated the tendency of investors to
make risk-averse choices in gains, and risk-seeking choices in losses. The
investors appeared as very risk-averse for small losses but indifferent for a
small chance of a very large loss. This violates economic rationality as
usually understood. Further research on this subject, showing other
deviations from conventionally-defined economic rationality, is being done
in the growing field of experimental or behavioral economics. Some of the
broader issues involved in this criticism are studied in Decision Theory of
which Rational Choice Theory is only a subset.
Other critics of the Homo economicus model of humanity, such as Bruno
Frey, point to the excessive emphasis on extrinsic motivation (rewards and
punishments from the social environment) as opposed to intrinsic
motivation. For example, it is difficult if not impossible to understand how
Homo economicus would be a hero in war or would get inherent pleasure
from craftsmanship. Frey and others argue that too much emphasis on
rewards and punishments can "crowd out" (discourage) intrinsic motivation:
paying a boy for doing household tasks may push him from doing those
tasks "to help the family" to doing them simply for the reward.
Altruistic economics rejects the model as unrealistically selfish, arguing that
real people have friends to whom they are to a greater or lesser degree
altruistic, so it relaxes the restriction that people's utility functions must be
independent.
Other critics argue that the purely self-interested behavior of Homo
economicus reflects the behavior of the psychopath rather than that of the
average participant in the economy. As such, Homo economicus can be
considered a purely theoretical construct.
Another weakness is highlighted by sociologists, who argue that Homo
economicus ignores an extremely important question: the shaping of
tastes and of the parameters of the utility function by social influences,
training, education, and the like. The exogeneity of tastes (preferences) in
this model is the major distinction from Homo sociologicus, in which tastes
are taken as partially or even totally determined by the societal environment
(see below).
Further critics, learning from the broadly-defined psychoanalytic tradition,
criticize the Homo economicus model as ignoring the inner conflicts that
real-world individuals suffer, as between short-term and long-term goals
(e.g., eating chocolate cake and losing weight) or between individual goals
and societal values. Such conflicts may lead to "irrational" behavior
involving inconsistency, psychological paralysis, neurosis, and/or psychic
pain.
One criticism contends that the Homo economicus model works as a self-fulfilling prophecy if a group of people (a company, a society) accepts its
premises, particularly the idea that individuals only ever consider their
personal utility function and that, as is often claimed, the "Invisible Hand"
works to make these purely self-interested decisions promote the interest of
society. Governance structures and social norms of such a group will
effectively reward selfishness and discourage or ridicule deviant behavior
like altruism, fairness, or teamwork; its idols will be those who most
ruthlessly maximize their own utility function. This aspect has risen to wider
attention in disciplines like organization science where extrinsic motivation
has been found to be not nearly as effective with knowledge workers as it
had been for traditional industries, creating a renewed interest in forms of
motivation that do not fit into the Homo economicus model. This view
however does not account for the fact that acting selfishly is not necessarily
the same as acting in a self-interested manner, especially in social units in
which altruistic and unselfish behavior is expected.

The clearest case of a self-fulfilling prophecy concerning Homo economicus
has been in the teaching of economics. Several research studies have
indicated that students who take economics courses end up being more
self-centered than before they took the courses. For example, they are less
willing to co-operate with the other player in a "prisoner's-dilemma"-type
game. See, for example, the article by Robert Frank et al. (1993), cited
below.
Some critics conclude from this that the "homo oeconomicus" construct is
not so much a result of empirical research into human nature (which would
have to take into account results of comparative, cross-cultural and historical
research such as provided by economic anthropologists), but rather an
implicitly prescriptive construct in line with the amoral rationality of modern
monetary economies, in other words: not a result of empirical scientific
inquiry but rather part of the ideology of liberalism. According to this view,
traditionally held by many Marxists, "homo oeconomicus" functions as a
complement to the idea of a liberal natural law of life, liberty, and property:
it attributes social and individual features that are products of history and
belong to a specific social structure (civil society) to human nature itself,
thereby making them look inborn, natural, ahistorical, and therefore
unchangeable. Some Marxists consider this false
generalization as an ideological form necessary for maintaining the basic
social structure of civil society, just as other forms of social organizations
have their own form of ideology that places taboos on certain basic features
of social organization in order to stabilize overall social structure.
It is also worth noting that the economists believing in
"homo oeconomicus" have been remarkably unsuccessful in creating "homo
oeconomicus" in many so-called developing countries, where people just
don't seem to be ready to behave as the economists would like them to, and
would have expected them to, according to their model.
Responses
Economists tend to disagree with these critiques, arguing that it may be
relevant to analyze the consequences of enlightened egoism just as it may be
worthwhile to consider altruistic or social behavior. Others argue that we
need to understand the consequences of such narrow-minded greed even if
only a small percentage of the population embraces such motives. Free
riders, for example, would have a major negative impact on the provision of

public goods. However, economists' supply and demand predictions might
obtain even if only a significant minority of market participants act like
Homo economicus. In this view, the assumption of Homo economicus can
and should be simply a preliminary step on the road to a more sophisticated
model.
Yet others argue that Homo economicus is a reasonable approximation for
behavior within market institutions, since the individualized nature of human
action in such social settings encourages individualistic behavior. Not only
do market settings encourage the application of a simple cost/benefit
calculus by individuals, but they reward and thus attract the more
individualistic people. It can be difficult to apply social values (as opposed
to following self-interest) in an extremely competitive market; a company
that refuses to pollute (for example) may find itself bankrupt.
Defenders of the Homo economicus model see many critics of the dominant
school as using a straw-man technique. For example, it is common for critics
to argue that real people do not have cost-less access to infinite information
and an innate ability to instantly process it. However, in advanced-level
theoretical economics, scholars have found ways of addressing these
problems, modifying models enough to more realistically depict real-life
decision-making. For example, models of individual behavior under
bounded rationality and of people suffering from envy can be found in the
literature. It is primarily when targeting the limiting assumptions made in
constructing undergraduate models that the criticisms listed above are valid.
These criticisms are especially valid to the extent that the professor asserts
that the simplifying assumptions are true and/or uses them in a
propagandistic way.
The more sophisticated economists are quite conscious of the empirical
limitations of the Homo economicus model. In theory, the views of the
critics can be combined with the Homo economicus model to attain a more
accurate model.
One problem with making the Homo economicus model more sophisticated
is that sometimes the model becomes tautologically true, i.e., true by
definition. If someone has a "taste" for variety, for example, it becomes
difficult if not impossible to distinguish economic rationality from
irrationality. In this case, the Homo economicus model may not add any new
information at all to our economic understanding.

Homo sociologicus
Comparisons between economics and sociology have resulted in a
corresponding term Homo sociologicus (introduced by German Sociologist
Ralf Dahrendorf in 1958), to parody the image of human nature given in
some sociological models that attempt to limit the social forces that
determine individual tastes and social values. (The alternative or additional
source of these would be biology.) Hirsch, Michaels, and Friedman (1990, p.
44) say that Homo sociologicus is largely a tabula rasa upon which societies
and cultures write values and goals; unlike economicus, sociologicus acts
not to pursue selfish interests but to fulfill social roles. This "individual"
may appear to be all society and no individual. This suggests the need to
combine the insights of Homo economicus models with those of Homo
sociologicus models in order to create a synthesis, rather than rejecting one
or the other.
Rational choice theory
Rational choice theory, also known as rational action theory, is a
framework for understanding and often formally modeling social and
economic behavior. It is the dominant theoretical paradigm in
microeconomics. It is also central to modern political science and is used by
scholars in other disciplines such as sociology. The 'rationality' described by
rational choice theory is different from colloquial and most philosophical
uses of rationality. Models of rational choice are very diverse but they share
one thing in common. They all assume that individuals choose the best
action according to stable preference functions and constraints facing them.
Most models have additional assumptions. Proponents of rational choice
models do not claim that a model's assumptions are a full description of
reality, only that good models can aid reasoning and provide help in
formulating falsifiable hypotheses, whether intuitive or not. Successful
hypotheses are those that survive empirical tests.
Rational choice theory is a successor of much older descriptions of rational
behavior.[citation needed] It is widely used as an assumption of the behavior of
individuals in microeconomic models and analysis. Although rationality
cannot be directly empirically tested, empirical tests can be conducted on
some of the results derived from the models. Over the last decades it has
also become increasingly employed in social sciences other than economics,
such as sociology and political science.[1] It has had far-reaching impacts on

the study of political science, especially in fields like the study of interest
groups, elections, behaviour in legislatures, coalitions, and bureaucracy
(Dunleavy, 1991).
Models that rely on rational choice theory often adopt methodological
individualism, the assumption that social situations or collective behaviors
are the result of individual actions.
Actions, Assumptions, and Individual Preferences
Rational decision making entails choosing an action given one's preferences,
the actions one could take, and expectations about the outcomes of those
actions. Actions are often expressed as a set, for example a set of j
exhaustive and exclusive actions:

A = {a1, a2, ..., aj}

For example, if a person is to vote for either Roger or Sara, or to abstain, her set
of possible voting actions is:
A = {Roger, Sara, abstain}
Individuals can also have similar sets of possible outcomes.
Rational choice theory makes two assumptions about individuals'
preferences for actions. First is the assumption of completeness: all actions
can be ranked in an order of preference (indifference between two
or more is possible). Second is transitivity: the assumption that if action
a1 is preferred to a2, and action a2 is preferred to a3, then a1 is preferred to a3.
Together these assumptions imply that, given a set of exhaustive and
exclusive actions to choose from, an individual can rank them in terms of her
preferences, and that her preferences are consistent.
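These two assumptions can be checked mechanically for a small action set. The sketch below uses invented strict preferences over the voting example and tests completeness and transitivity of the stated relation.

from itertools import combinations, permutations

# Invented preference relation over a small action set.
# prefers[(x, y)] == True means x is preferred to y.
actions = ["Roger", "Sara", "abstain"]
prefers = {("Sara", "Roger"): True, ("Roger", "Sara"): False,
           ("Roger", "abstain"): True, ("abstain", "Roger"): False,
           ("Sara", "abstain"): True, ("abstain", "Sara"): False}

# Completeness: every pair of actions is ranked in at least one direction.
complete = all(prefers.get((x, y)) or prefers.get((y, x))
               for x, y in combinations(actions, 2))

# Transitivity: if x is preferred to y and y to z, then x is preferred to z.
transitive = all(not (prefers.get((x, y)) and prefers.get((y, z))) or prefers.get((x, z))
                 for x, y, z in permutations(actions, 3))

print("complete:", complete, "transitive:", transitive)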
An individual's preferences can also take forms:

Strict preference occurs when an individual prefers a1 to a2, but not
a2 to a1. In some models, a weak preference can be held in which an
individual has a preference for at least aj, similar to the mathematical
operator ≥.

Indifference occurs when an individual does not prefer a1 to a2, or a2
to a1.

In more complex models, other assumptions are often incorporated, such as
the independence axiom. Also, with dynamic models that
include decision-making over time, time inconsistency may affect an
individual's preferences.
Other Assumptions
Often, to simplify calculation and facilitate testing, some possibly unrealistic
assumptions are made about the world. These can include:

An individual has full or perfect information about exactly what will
occur under any choice made. More complex models rely on
probability to describe outcomes.
An individual has the cognitive ability and time to weigh every choice
against every other choice. Studies into the limitations of this
assumption are included in theories of bounded rationality.

Utility Maximization
Often preferences are described by their utility function or payoff function.
This is an ordinal number that an individual assigns over the available actions,
such as:

u(ai)

The individual's preferences are then expressed as the relation between these
ordinal assignments. For example, if an individual prefers the candidate Sara
over Roger over abstaining, their preferences would have the relation:

u(Sara) > u(Roger) > u(abstain)
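A one-line illustration of the same ordering expressed as ordinal utilities, with the utility-maximizing action selected programmatically; the numeric values are arbitrary ordinals chosen only to respect the stated ranking.

# Arbitrary ordinal utilities consistent with u(Sara) > u(Roger) > u(abstain).
utility = {"Sara": 3, "Roger": 2, "abstain": 1}

choice = max(utility, key=utility.get)   # pick the action with the highest utility
print("chosen action:", choice)          # -> Sara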

Criticism
Both the assumptions and the behavioral predictions of rational choice
theory have sparked criticism from various camps. Some people have
developed models of bounded rationality, which hope to be more
psychologically plausible without completely abandoning the idea that
reason underlies decision-making processes. For a long time, a popular
strain of critique was a lack of empirical basis, but experimental economics
and experimental game theory have largely changed that critique (although
they have added other critiques, mainly by demonstrating some human
behavior that consistently deviates from rational choice theory).[citation needed]
In their 1994 piece, Pathologies of Rational Choice Theory, Green and
Shapiro argue that the empirical outputs of rational choice theory have been
limited. They contend that much of the applicable literature, at least in
Political Science, was done with weak methods and that when corrected
many of the empirical outcomes no longer hold. Taken in this
perspective, rational choice theory has contributed very little to the overall
understanding of political interaction, an amount certainly
disproportionately weak relative to its prominence in the literature (Green
and Shapiro, 1994).
Benefits of rational choice theory
Describing the decisions made by individuals as rational and utility-maximizing
may seem to be a tautological explanation of their behavior that
provides very little new information. While there may be many reasons for a
rational choice theory approach, two are important for the social sciences.
First, assuming humans make decisions in a rational, rather than stochastic
manner implies that their behavior can be modeled and thus predictions can
be made about future actions. Second, the mathematical formality of rational
choice theory models allows social scientists to derive results from their
models that may have otherwise not been seen.
Rationality
Rationality as a term is related to the idea of reason, a word which
following Webster's may be derived as much from older terms referring to
thinking itself as from giving an account or an explanation. This lends the
term a dual aspect. One aspect associates it with comprehension,
intelligence, or inference, particularly when an inference is drawn in ordered
ways (thus a syllogism is a rational argument in this sense). The other part
associates rationality with explanation, understanding or justification,
particularly if it provides a ground or a motive. 'Irrational', therefore, is
defined as that which is not endowed with reason or understanding.
Rationality contra logic

A logical argument is sometimes described as "rational" if it is logically
valid. However, rationality is a much broader term than logic, as it includes
"uncertain but sensible" arguments based on probability, expectation,
personal experience and the like, whereas logic deals principally with
provable facts and demonstrably valid relations between them. For example,
ad hominem arguments are logically unsound, but in many cases they may
be rational. A simple philosophical definition of rationality refers to one's
use of a "practical syllogism". For example,
I am cold
If I close the window I will not be cold
Therefore, I closed the window
We should note that standard form practical syllogisms follow a very
specific format and are always valid if constructed correctly though they are
not necessarily sound. There are several notable implications of such a
definition. First, rationality is objective - it exists only when a valid practical
syllogism is used. Second, a choice is either rational or it is not - there is no
gradation since there is no gradation between valid and invalid arguments.
Third, rationality only applies to actions - i.e., shutting the window is a
rational thing to do if you are cold (assuming it is cold outside). Evidence
bears on belief but not on rationality. All that is required for an action to be
rational is that you believe that X, and that if X then Y, so you do Y.
Arguments about belief are couched in the terms valid and sound - logically
you must believe something if the argument supporting it is sound. In some
cases, such as religious belief, the argument may be valid but its soundness
cannot be known for the truth of its premises cannot be known.
Rationality in the humanities and social sciences
In philosophy, rationality and reason are the key methods used to analyse the
data gathered through systematic observation. In economics,
sociology, and political science, a decision or situation is often called
rational if it is in some sense optimal, and individuals or organizations are
often called rational if they tend to act somehow optimally in pursuit of their
goals. Thus one speaks, for example, of a rational allocation of resources, or
of a rational corporate strategy. In this concept of "rationality", the
individual's goals or motives are taken for granted and not made subject to
criticism, ethical or otherwise. Thus rationality simply refers to the success
of goal attainment, whatever those goals may be. Sometimes, in this context,
rationality is equated with behavior that is self-interested to the point of
being selfish. Sometimes rationality implies having complete knowledge
about all the details of a given situation. It might be said that because the
goals are not important in the definition of rationality, it really only demands
logical consistency in choice making. See rational choice theory.
Debates arise in these three fields about whether or not people or
organizations are "really" rational, as well as whether it make sense to model
them as such in formal models. Some have argued that a kind of bounded
rationality makes more sense for such models. Others think that any kind of
rationality along the lines of rational choice theory is a useless concept for
understanding human behavior; the term homo economicus (economic man:
the imaginary logically consistent but amoral being assumed in economic
models) was coined largely in honor of this view.
Rationality is a central principle in artificial intelligence, where a rational
agent is specifically defined as an agent which always chooses the action
which maximises its expected performance, given all of the knowledge it
currently possesses.
Theories of rationality
The German sociologist Max Weber proposed an interpretation of social
action that distinguished between four different types of rationality. The first,
which he called Zweckrational or purposive/instrumental rationality, is
related to the expectations about the behavior of other human beings or
objects in the environment. These expectations serve as means for a
particular actor to attain ends, ends which Weber noted were "rationally
pursued and calculated." The second type, Weber called Wertrational or
value/belief-oriented. Here the action is undertaken for what one might call
reasons intrinsic to the actor: some ethical, aesthetic, religious or other
motive, independent of whether it will lead to success. The third type was
affectual, determined by an actor's specific affect, feeling, or emotion, of
which Weber himself said that it was a kind of rationality on the
borderline of what he considered "meaningfully oriented." The fourth was
traditional, determined by ingrained habituation. Weber emphasized that it
was very unusual to find only one of these orientations: combinations were
the norm. His usage also makes clear that he considered the first two as more
significant than the others, and it is arguable that the third and fourth are
subtypes of the first two. These kinds of rationality were ideal types.

The advantage in this interpretation is that it avoids a value-laden
assessment, say, that certain kinds of beliefs are irrational. Instead, Weber
suggests that a ground or motive can be given (for religious or affect
reasons, for example) that may meet the criterion of explanation or
justification even if it is not an explanation that fits the Zweckrational
orientation of means and ends. The opposite is therefore also true: some
means-ends explanations will not satisfy those whose grounds for action are
'Wertrational'.
Based on the premise that 'feelings of worthlessness' are a maladaptive
byproduct of the evolution of rationality, Phil Roberts, Jr. has proposed a
theory in which the rationality of an end is presumed to correlate with the
comprehensiveness of its underlying considerations, and in which no
concrete objective is presumed to be rational in any but a relative sense of
the term. In addition to its ability to explain what morality is (a shared
subconscious theory of rationality), Roberts has also demonstrated how his
theory can be employed to address a number of rationality paradoxes,
including the paradox of rational irrationality, cognitive versus practical
rationality conflict, the "rationality debate" (Cohen vs. Kahneman and
Tversky) and the paradox of the Prisoner's Dilemma.[1]
Behavioral finance
Behavioral finance and behavioral economics are closely related fields
which apply scientific research on human and social cognitive and emotional
biases to better understand economic decisions and how they affect market
prices, returns and the allocation of resources. The fields are primarily
concerned with the rationality, or lack thereof, of economic agents.
Behavioral models typically integrate insights from psychology with neoclassical economic theory. Behavioral Finance has become the theoretical
basis for technical analysis.[1]
Behavioral analyses are mostly concerned with the effects of market
decisions, but also those of public choice, another source of economic
decisions with some similar biases.
History
During the classical period, economics had a close link with psychology. For
example, Adam Smith wrote The Theory of Moral Sentiments, an important
text describing psychological principles of individual behavior; and Jeremy
Bentham wrote extensively on the psychological underpinnings of utility.

Economists began to distance themselves from psychology during the
development of neo-classical economics as they sought to reshape the
discipline as a natural science, with explanations of economic behavior
deduced from assumptions about the nature of economic agents. The concept
of homo economicus was developed, and the psychology of this entity was
fundamentally rational. Nevertheless, psychological explanations continued
to inform the analysis of many important figures in the development of neoclassical economics such as Francis Edgeworth, Vilfredo Pareto, Irving
Fisher and John Maynard Keynes.
Psychology had largely disappeared from economic discussions by the mid
20th century. A number of factors contributed to the resurgence of its use and
the development of behavioral economics. Expected utility and discounted
utility models began to gain wide acceptance, generating testable hypotheses
about decision making under uncertainty and intertemporal consumption
respectively. Soon a number of observed and repeatable anomalies
challenged those hypotheses. Furthermore, during the 1960s cognitive
psychology began to describe the brain as an information processing device
(in contrast to behaviorist models). Psychologists in this field such as Ward
Edwards, Amos Tversky and Daniel Kahneman began to compare their
cognitive models of decision making under risk and uncertainty to economic
models of rational behavior. In mathematical psychology, there is a
longstanding interest in the transitivity of preference and in what kind of
measurement scale utility constitutes (Luce, 2000).
Perhaps the most important paper in the development of the behavioral
finance and economics fields was written by Kahneman and Tversky in
1979. This paper, 'Prospect Theory: An Analysis of Decision under Risk', used
cognitive psychological techniques to explain a number of documented
divergences of economic decision making from neo-classical theory. Further
milestones in the development of the field include a well attended and
diverse conference at the University of Chicago (see Hogarth & Reder,
1987), a special 1997 edition of the Quarterly Journal of Economics ('In
Memory of Amos Tversky') devoted to the topic of behavioral economics
and the award of the Nobel prize to Daniel Kahneman in 2002 "for having
integrated insights from psychological research into economic science,
especially concerning human judgment and decision-making under
uncertainty."

Prospect theory is an example of generalized expected utility theory.
Although not commonly included in discussions of the field of behavioral
economics, generalized expected utility theory is similarly motivated by
concerns about the descriptive inaccuracy of expected utility theory.
Behavioral economics has also been applied to problems of intertemporal
choice. The most prominent idea is that of hyperbolic discounting, in which
a high rate of discount is used between the present and the near future, and a
lower rate between the near future and the far future. This pattern of
discounting is dynamically inconsistent (or time-inconsistent), and therefore
inconsistent with some models of rational choice, since the rate of discount
between time t and t+1 will be low at time t-1, when t is the near future, but
high at time t when t is the present and time t+1 the near future. As part of
the discussion of hyperbolic discounting there has been animal and human work
on melioration theory and the matching law of Richard Herrnstein. They
suggest that behavior is not based on expected utility but on previous
reinforcement experience.
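A small sketch contrasting exponential discounting with a simple hyperbolic discount function makes the time-inconsistency point concrete; the parameter values (delta, k) are illustrative only and not taken from the text.

# Compare discount factors under exponential and hyperbolic discounting.
def exponential(t, delta=0.95):
    return delta ** t

def hyperbolic(t, k=0.25):
    return 1.0 / (1.0 + k * t)

for t in [0, 1, 2, 10, 11]:
    print(t, round(exponential(t), 3), round(hyperbolic(t), 3))

# Under exponential discounting the ratio between periods t and t+1 is a
# constant (delta); under hyperbolic discounting the one-period ratio is much
# steeper near the present than in the far future, which is exactly the
# dynamic inconsistency described above.
print(round(hyperbolic(1) / hyperbolic(0), 3),    # 0.8: steep near the present
      round(hyperbolic(11) / hyperbolic(10), 3))  # ~0.933: milder far away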
Methodology
At the outset behavioral economics and finance theories were developed
almost exclusively from experimental observations and survey responses,
though in more recent times real world data has taken a more prominent
position. fMRI has also been used to determine which areas of the brain are
active during various steps of economic decision making. Experiments
simulating market situations such as stock market trading and auctions are
seen as particularly useful as they can be used to isolate the effect of a
particular bias upon behavior; observed market behavior can typically be
explained in a number of ways, so carefully designed experiments can help
narrow the range of plausible explanations. Experiments are designed to be
incentive compatible, with binding transactions involving real money the
norm.
Key observations
There are three main themes in behavioral finance and economics (Shefrin,
2002):

Heuristics: People often make decisions based on approximate rules
of thumb, not strictly rational analyses. See also cognitive biases and
bounded rationality.

Framing: The way a problem or decision is presented to the decision
maker will affect his action.
Market inefficiencies: There are explanations for observed market
outcomes that are contrary to rational expectations and market
efficiency. These include mispricings, non-rational decision making,
and return anomalies. Richard Thaler, in particular, has written a long
series of papers describing specific market anomalies from a
behavioral perspective.

Recently, Barberis, Shleifer, and Vishny (1998), as well as Daniel,
Hirshleifer, and Subrahmanyam (1998), have built models based on
extrapolation (seeing patterns in random sequences) and overconfidence to
explain security market over- and underreactions, though such models have
not been used in the money management industry. These models assume that
errors or biases are correlated across agents so that they do not cancel out in
aggregate. This would be the case if a large fraction of agents look at the
same signal (such as the advice of an analyst) or have a common bias.
More generally, cognitive biases may also have strong anomalous effects in
aggregate if there is a social contamination with a strong emotional content
(collective greed or fear), leading to more widespread phenomena such as
herding and groupthink. Behavioral finance and economics rests as much on
social psychology within large groups as on individual psychology.
However, some behavioral models explicitly demonstrate that a small but
significant anomalous group can also have market-wide effects (e.g. Fehr and
Schmidt, 1999).
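A quick simulation sketch, not drawn from the models cited above, of why correlation matters: independent errors across many agents average out, while errors loaded on a common signal (such as one analyst's advice) do not.

import numpy as np

# Illustrative setup: 10,000 agents, each with an error term.
rng = np.random.default_rng(0)
n_agents = 10_000

independent = rng.normal(0.0, 1.0, n_agents)       # idiosyncratic errors
common_bias = rng.normal(0.0, 1.0)                  # one shared signal/bias
correlated = 0.8 * common_bias + 0.2 * rng.normal(0.0, 1.0, n_agents)

print("mean independent error:", round(independent.mean(), 4))   # near zero
print("mean correlated error: ", round(correlated.mean(), 4))    # near 0.8 * common_bias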
Behavioral finance topics
Key observations made in behavioral finance literature include the lack of
symmetry between decisions to acquire or keep resources, called
colloquially the "bird in the bush" paradox, and the strong loss aversion or
regret attached to any decision where some emotionally valued resources
(e.g. a home) might be totally lost. Loss aversion appears to manifest itself
in investor behavior as an unwillingness to sell shares or other equity, if
doing so would force the trader to realise a nominal loss (Genesove &
Mayer, 2001). It may also help explain why housing market prices do not
adjust downwards to market clearing levels during periods of low demand.
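A sketch of a prospect-theory-style value function with loss aversion, illustrating the asymmetry behind the behavior described above; the curvature and loss-aversion parameters used here are the estimates commonly quoted in the literature, not values given in this text.

# Prospect-theory-style value function: concave for gains, convex and
# steeper for losses (loss aversion), relative to a reference point of 0.
def value(x, alpha=0.88, lam=2.25):
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

# A gain and a loss of the same size do not feel the same:
print(round(value(100), 2))    # subjective value of a 100 gain
print(round(value(-100), 2))   # a 100 loss weighs roughly 2.25x as heavily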

Applying a version of prospect theory, Benartzi and Thaler (1995) claim to
have solved the equity premium puzzle, something conventional finance
models have been unable to do.
Presently, some researchers in experimental finance use experimental
methods, e.g. creating an artificial market with simulation
software, to study people's decision-making processes and behavior in financial
markets.
Behavioral finance models
Some financial models used in money management and asset valuation use
behavioral finance parameters, for example
Thaler's model of price reactions to information, with three phases,
underreaction-adjustment-overreaction, creating a price trend
One characteristic of overreaction is that the average return of asset prices
following a series of announcements of good news is lower than the average
return following a series of bad announcements. In other words, overreaction
occurs if the market reacts too strongly, or for too long (a persistent trend), to
news, so that it subsequently needs to be corrected in the opposite direction.
As a result, assets that were winners in the past should not be seen as an
indication to invest in, as their risk-adjusted returns in the future are
relatively low compared to stocks that were defined as losers in the past.

The stock image coefficient

Criticisms of behavioral finance


Critics of behavioral finance, such as Eugene Fama, typically support the
efficient market theory (though Fama may have reversed his position in
recent years). They contend that behavioral finance is more a collection of
anomalies than a true branch of finance and that these anomalies will
eventually be priced out of the market or explained by appealing to market
microstructure arguments. However, a distinction should be noted between
individual biases and social biases; the former can be averaged out by the
market, while the latter can create feedback loops that drive the market
further and further from the equilibrium of the "fair price".

A specific example of this criticism is found in some attempted explanations
of the equity premium puzzle. It is argued that the puzzle simply arises due
to entry barriers (both practical and psychological) which have traditionally
impeded entry by individuals into the stock market, and that returns between
stocks and bonds should stabilize as electronic resources open up the stock
market to a greater number of traders (See Freeman, 2004 for a review). In
reply, others contend that most personal investment funds are managed
through superannuation funds, so the effect of these putative barriers to entry
would be minimal. In addition, professional investors and fund managers
seem to hold more bonds than one would expect given return differentials.
Behavioral economics topics
Models in behavioral economics are typically addressed to a particular
observed market anomaly and modify standard neo-classical models by
describing decision makers as using heuristics and being affected by framing
effects. In general, behavioural economics sits within the neoclassical
framework, though the standard assumption of rational behaviour is often
challenged.
Criticisms of behavioral economics
Critics of behavioral economics typically stress the rationality of economic
agents (see Myagkov and Plott (1997) amongst others). They contend that
experimentally observed behavior is inapplicable to market situations, as
learning opportunities and competition will ensure at least a close
approximation of rational behavior.
Others note that cognitive theories, such as prospect theory, are models of
decision making, not generalized economic behavior, and are only
applicable to the sort of once-off decision problems presented to experiment
participants or survey respondents.
Traditional economists are also skeptical of the experimental and survey
based techniques which are used extensively in behavioral economics.
Economists typically stress revealed preferences over stated preferences
(from surveys) in the determination of economic value. Experiments and
surveys must be designed carefully to avoid systemic biases, strategic
behavior and lack of incentive compatibility, and many economists are
distrustful of results obtained in this manner due to the difficulty of
eliminating these problems.

Rabin (1998) dismisses these criticisms, claiming that results are typically
reproduced in various situations and countries and can lead to good
theoretical insight. Behavioral economists have also incorporated these
criticisms by focusing on field studies rather than lab experiments. Some
economists look at this split as a fundamental schism between experimental
economics and behavioral economics, but prominent behavioral and
experimental economists tend to overlap techniques and approaches in
answering common questions. For example, many prominent behavioral
economists are actively investigating neuroeconomics, which is entirely
experimental and cannot be verified in the field.
Other proponents of behavioral economics note that neoclassical models
often fail to predict outcomes in real world contexts. Behavioral insights can
be used to update neoclassical equations, and behavioral economists note
that these revised models not only reach the same correct predictions as the
traditional models, but also correctly predict some outcomes where the
traditional models failed.[verification needed]
Eugene Fama
Eugene Fama (born February 14, 1939) is an American economist, known
for his work on portfolio theory and asset pricing, both theoretical and
empirical.
He earned his undergraduate degree in French from Tufts University in 1960
and his MBA and Ph.D. from the Graduate School of Business at the
University of Chicago in economics and finance. He has spent all of his
teaching career at the University of Chicago.
His Ph.D. thesis, which concluded that stock price movements are
unpredictable and follow a random walk, was published as the entire
January, 1965 issue of the Journal of Business, entitled The Behavior of
Stock Market Prices. That work was subsequently rewritten into a less
technical article, Random Walks in Stock Market Prices, which was
published in Financial Analysts Journal in 1966 and Institutional Investor in
1968.
His article The Adjustment of Stock Prices to New Information in the
International Economic Review, 1969 (with several co-authors) was the first
event study that sought to analyze how stock prices respond to an event, using price data from the newly available CRSP database. This was the first
of literally hundreds of such published studies.
Fama is most often thought of as the father of efficient market theory. In a
ground-breaking article in the May, 1970 issue of the Journal of Finance,
entitled Efficient Capital Markets: A Review of Theory and Empirical Work,
Fama proposed two crucial concepts that have defined the conversation on
efficient markets ever since. First, Fama proposed three types of efficiency:
(i) strong-form; (ii) semi-strong-form; and (iii) weak-form efficiency. Second,
Fama demonstrated that the notion of market efficiency could not be rejected
without an accompanying rejection of the model of market equilibrium (e.g.
the price setting mechanism). This concept, known as the "joint hypothesis
problem," has ever since vexed researchers.
In recent years, Fama has become controversial again, for a series of papers,
co-written with Kenneth French, that cast doubt on the validity of the
Capital Asset Pricing Model (CAPM), which posits that a stock's "beta"
alone should explain its average return. These papers describe two factors
above and beyond a stock's market beta which can explain differences in
stock returns: market capitalization and "value". They also offer evidence
that a variety of patterns in average returns, often labeled as "anomalies" in
past work, can be explained with their three-factor model.
Additionally, Fama co-authored the textbook The Theory of Finance with
Nobel Memorial Prize in Economics winner Merton H. Miller. He is also the
director of research of Dimensional Fund Advisors, Inc., an investment
advising firm with $126 billion under management (as of 2006). One of his
children, Eugene F. Fama Jr., is a vice president of the company.
Fama and French Three Factor Model
In the portfolio management field, Fama and French developed the highly successful three-factor model to describe market behavior.
CAPM uses a single factor, beta, to compare a portfolio with the market as a
whole. But it oversimplifies the complex market. Fama and French started
with the observation that two classes of stocks have tended to do better than
the market as a whole: (i) small caps and (ii) stocks with a high book-value-to-price ratio (customarily called value stocks, to be differentiated from growth stocks). They then added two factors to CAPM to reflect a portfolio's exposure to these two classes:

r = R_f + \beta_3 (K_m - R_f) + b_s \cdot SMB + b_v \cdot HML + \alpha

Here r is the portfolio's return rate, R_f is the risk-free return rate, K_m is the return of the whole stock market, and \alpha is the portion of the return left unexplained by the three factors. The three-factor \beta_3 is analogous to the classical CAPM \beta but not equal to it, since there are now two additional factors to do some of the work. SMB and HML stand for "small [cap] minus big" and
"high [book/price] minus low"; they measure the historic excess returns of
small caps and "value" stocks over the market as a whole. By the way SMB
and HML are defined, the corresponding coefficients bs and bv take values
on a scale of roughly 0 to 1: bs = 1 would be a small cap portfolio, bs = 0
would be large cap, bv = 1 would be a portfolio with a high book/price ratio,
etc. The Fama-French three-factor model explains over 90% of the returns of diversified portfolios. The signs of the coefficients suggest that small cap and value portfolios have higher expected returns, and arguably higher expected risk, than those of large cap and growth portfolios.[1]
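To see how the two extra factors enter in practice, the loadings b_s and b_v (along with beta and alpha) can be estimated by an ordinary least-squares regression of a portfolio's excess returns on the market excess return, SMB, and HML. The sketch below is a minimal illustration using invented monthly figures and numpy's least-squares solver; it is not Fama and French's own estimation code, and the variable names are assumptions made for the example.

    import numpy as np

    # Hypothetical monthly data (decimal returns); in practice these series would
    # come from a returns database such as CRSP or a public factor library.
    portfolio_excess = np.array([0.021, -0.013, 0.034, 0.008, -0.022, 0.017])  # r - Rf
    market_excess    = np.array([0.015, -0.010, 0.028, 0.005, -0.018, 0.012])  # Km - Rf
    smb              = np.array([0.004, -0.002, 0.006, 0.001, -0.003, 0.002])
    hml              = np.array([0.003,  0.001, 0.005, 0.000, -0.001, 0.002])

    # Design matrix: a column of ones for alpha, then the three factors.
    X = np.column_stack([np.ones_like(market_excess), market_excess, smb, hml])

    # Solve X @ [alpha, beta3, bs, bv] ~= portfolio_excess by least squares.
    (alpha, beta3, bs, bv), *_ = np.linalg.lstsq(X, portfolio_excess, rcond=None)
    print(f"alpha={alpha:.4f}  beta3={beta3:.2f}  b_s={bs:.2f}  b_v={bv:.2f}")

With real data, an estimated b_s near 1 would indicate a small-cap tilt and b_v near 1 a value tilt, matching the interpretation of the coefficients given above.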
The three factor model is gaining recognition in portfolio management.
Morningstar.com classifies stocks and mutual funds based on these factors.
Many studies show that the majority of actively managed mutual funds
underperform broad indexes based on three factors if classified properly.
This leads to more and more index funds and ETFs being offered based on
the three factor model.
Finance
Finance is a field that studies and addresses the ways in which individuals,
businesses, and organizations raise, allocate, and use monetary resources
over time, taking into account the risks entailed in their projects. The term
finance may thus incorporate any of the following:

The study of money and other assets;
The management and control of those assets;
Profiling and managing project risks;
The science of managing money;
As a verb, "to finance" is to provide funds for business or for an individual's large purchases (car, home, etc.).

The activity of finance is the application of a set of techniques that individuals and organizations (entities) use to manage their financial affairs, particularly the differences between income and expenditure and the risks of their investments.
An entity whose income exceeds its expenditure can lend or invest the
excess income. On the other hand, an entity whose income is less than its
expenditure can raise capital by borrowing or selling equity claims,
decreasing its expenses, or increasing its income. The lender can find a borrower directly, lend through a financial intermediary such as a bank, or buy notes or bonds in the bond market. The lender receives interest, the borrower pays a higher interest than the lender receives, and the financial intermediary pockets the difference.
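As a toy numerical illustration of that spread (the deposit amount and both rates are invented for the example):

    deposit = 10_000         # amount a saver lends to the bank
    deposit_rate = 0.03      # interest rate the bank pays the lender (3%)
    loan_rate = 0.07         # interest rate the bank charges the borrower (7%)

    interest_paid_to_lender = deposit * deposit_rate           # 300
    interest_received_from_borrower = deposit * loan_rate      # 700
    spread = interest_received_from_borrower - interest_paid_to_lender
    print(spread)  # 400: the intermediary's gross margin before its own costs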
A bank aggregates the activities of many borrowers and lenders. A bank
accepts deposits from lenders, on which it pays the interest. The bank then
lends these deposits to borrowers. Banks allow borrowers and lenders of different sizes to coordinate their activity, acting as intermediaries that channel money flows between them.
A specific example of corporate finance is the sale of stock by a company to
institutional investors like investment banks, who in turn generally sell it to
the public. The stock gives whoever owns it part ownership in that company.
If you buy one share of XYZ Inc, and it has 100 shares outstanding (held by investors), you own 1/100 of that company. Of course, in
return for the stock, the company receives cash, which it uses to expand its
business in a process called "equity financing". Equity financing mixed with
the sale of bonds (or any other debt financing) is called the company's
capital structure.
Finance is used by individuals (personal finance), by governments (public
finance), by businesses (corporate finance), etc., as well as by a wide variety
of organizations including schools and non-profit organizations. In general,
the goals of each of the above activities are achieved through the use of
appropriate financial instruments, with consideration to their institutional
setting.
Finance is one of the most important aspects of business management.
Without proper financial planning a new enterprise is unlikely to be
successful. Managing money (a liquid asset) is essential to ensure a secure
future, both for the individual and an organization.

Personal finance
Questions in personal finance revolve around

How much money will be needed by an individual (or by a family) at various points in the future?
Where will this money come from (e.g. savings or borrowing)?
How can people protect themselves against unforeseen events in their
lives, and risk in financial markets?
How can family assets be best transferred across generations
(bequests and inheritance)?
How do taxes (tax subsidies or penalties) affect personal financial
decisions?

Personal financial decisions may involve paying for education, financing durable goods such as real estate and cars, buying insurance (e.g. health and property insurance), and investing and saving for retirement.
Personal financial decisions may also involve paying for a loan.
Business finance
In the case of a company, managerial finance or corporate finance is the task of providing the funds for the corporation's activities. For small businesses,
this is referred to as SME finance. It generally involves balancing risk and
profitability. Long term funds would be provided by ownership equity and
long-term credit, often in the form of bonds. These decisions lead to the
company's capital structure. Short term funding or working capital is mostly
provided by banks extending a line of credit.
On the bond market, borrowers package their debt in the form of bonds. The
borrower receives the money it borrows by selling the bond, which includes
a promise to repay the value of the bond with interest. The purchaser of a
bond can resell the bond, so the actual recipient of interest payments can
change over time. Bonds allow lenders to recoup the value of their loan by
simply selling the bond.

Another business decision concerning finance is investment, or fund management. An investment is an acquisition of an asset in the hopes that it will maintain or increase its value. In investment management - in choosing a portfolio - one has to decide what, how much and when to invest. In doing so, one needs to

Identify relevant objectives and constraints: institution or individual goals - time horizon - risk aversion - tax considerations
Identify the appropriate strategy: active vs passive - hedging strategy
Measure the portfolio performance

Financial management overlaps with the financial function of the accounting profession. However, financial accounting is more concerned with the reporting of historical financial information, while financial management is directed toward the future of the firm.
Shared Services
There is currently a move towards converging and consolidating Finance
provisions into shared services within an organization. Rather than an organization having a number of separate Finance departments performing the same tasks from different locations, a more centralized version can be created.
Finance of states
Country, state, county, city or municipality finance is called public finance.
It is concerned with

Identification of required expenditure of a public sector entity
Source(s) of that entity's revenue
The budgeting process
Debt issuance (municipal bonds) for public works projects

Financial economics
Financial economics is the branch of economics studying the interrelation of
financial variables, such as prices, interest rates and shares, as opposed to
those concerning the real economy. Financial economics concentrates on
influences of real economic variables on financial ones, in contrast to pure
finance.

It studies:

Valuation - Determination of the fair value of an asset (see the discounted-cash-flow sketch after this list)
o How risky is the asset? (identification of the appropriate discount rate for the asset)
o What cash flows will it produce? (discounting of relevant cash flows)
o How does the market price compare to similar assets? (relative valuation)
o Are the cash flows dependent on some other asset or event? (derivatives, contingent claim valuation)

Financial markets and instruments
o Commodities - topics
o Stocks - topics
o Bonds - topics
o Money market instruments - topics
o Derivatives - topics

Financial institutions and regulation
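To make the valuation bullet above concrete, a minimal discounted-cash-flow sketch is shown here: an asset's fair value is estimated as the sum of its expected cash flows discounted at a rate chosen to reflect their riskiness. The cash flows and the 8% discount rate are invented purely for illustration.

    def present_value(cash_flows, discount_rate):
        """Discount a list of future annual cash flows back to today."""
        return sum(cf / (1 + discount_rate) ** t
                   for t, cf in enumerate(cash_flows, start=1))

    # An asset expected to pay 100 per year for five years, discounted at 8%.
    fair_value = present_value([100, 100, 100, 100, 100], 0.08)
    print(round(fair_value, 2))  # about 399.27

A riskier asset would be discounted at a higher rate, lowering its estimated fair value.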

Financial mathematics
Financial mathematics is a branch of applied mathematics concerned with the financial markets. It is the study of financial data with the tools of mathematics, mainly statistics. Such data can be movements of securities (stocks, bonds, etc.) and their relations. Another large subfield is insurance mathematics.
Experimental finance
Experimental finance aims to establish different market settings and
environments to observe experimentally and analyze agents' behavior and
the resulting characteristics of trading flows, information diffusion and
aggregation, price setting mechanisms, and returns processes. Researchers in
experimental finance can study to what extent existing financial economics
theory makes valid predictions, and attempt to discover new principles on
which such theory can be extended. Research may proceed by conducting
trading simulations or by establishing and studying the behaviour of people
in artificial competitive market-like settings.

Insider trading
Insider trading is the trading of a corporation's stock or other securities
(e.g. bonds or stock options) by corporate insiders such as officers, key
employees, directors, or holders of more than ten percent of the firm's
shares.[1] Insider trading may be perfectly legal, but the term is frequently
used to refer to a practice, illegal in many jurisdictions, in which an insider
or a related party trades based on material non-public information obtained
during the performance of the insider's duties at the corporation, or
otherwise misappropriated.[2]
All insider trades must be reported in the United States. Many investors
follow the summaries of insider trades, published by the United States
Securities and Exchange Commission (SEC), in the hope that mimicking
these trades will be profitable. Legal "insider trading" may not be based on
material non-public information. Illegal insider trading in the US requires
the participation (perhaps indirectly) of a corporate insider or other person
who is violating his fiduciary duty or misappropriating private information,
and trading on it or secretly relaying it.
Insider trading is believed to raise the cost of capital for securities issuers,
thus decreasing overall economic growth.[3]
Illegal insider trading
Rules against insider trading on material non-public information exist in
most jurisdictions around the world, though the details and the efforts to
enforce them vary considerably. The United States, the United Kingdom,
and Canada are viewed as the countries that have the strictest laws and
make the most serious efforts to enforce them.[4]
According to the U.S. SEC, corporate insiders are a company's officers,
directors and any beneficial owners of more than ten percent of a class of the
company's equity securities. Trades made by these types of insiders in the
company's own stock, based on material non-public information, are
considered to be fraudulent since the insiders are violating the trust or the
fiduciary duty that they owe to the shareholders. The corporate insider,
simply by accepting employment, has made a contract with the shareholders
to put the shareholders' interests before their own, in matters related to the
corporation. When the insider buys or sells based upon company owned
information, he is violating his contract with the shareholders.

For example, illegal insider trading would occur if the chief executive
officer of Company A learned (prior to a public announcement) that
Company A will be taken over, and bought shares in Company A knowing
that the share price would likely rise.
Liability for insider trading violations cannot be avoided by passing on the
information in an "I scratch your back, you scratch mine" or quid pro quo
arrangement, as long as the person receiving the information knew or should
have known that the information was company property. For example, if
Company A's CEO did not trade on the undisclosed takeover news, but
instead passed the information on to his brother-in-law who traded on it,
illegal insider trading would still have occurred.[5]
A newer view of insider trading, the "misappropriation theory" is now part
of US law. It states that anyone who misappropriates (steals) information
from their employer and trades on that information in any stock (not just the
employer's stock) is guilty of insider trading. For example, if a journalist
who worked for Company B learned about the takeover of Company A while
performing his work duties, and bought stock in Company A, illegal insider
trading might still have occurred. Even though the journalist did not violate
a fiduciary duty to Company A's shareholders, he might have violated a
fiduciary duty to Company B's shareholders (assuming the newspaper had a
policy of not allowing reporters to trade on stories they were covering).[6]
Proving that someone has been responsible for a trade can be difficult,
because traders may try to hide behind nominees, offshore companies, and
other proxies. Nevertheless, the U.S. Securities and Exchange Commission
prosecutes over 50 cases each year, with many being settled administratively
out of court. The SEC and several stock exchanges actively monitor trading,
looking for suspicious activity.
Not all trading on information is illegal insider trading, however. For example, if, while dining at a restaurant, you hear the CEO of Company A at the next table telling the CFO that the company will be taken over, and you then buy the stock, you would not be guilty of insider trading unless there was some closer connection between you and the company or the company officers.
Since insiders are required to report their trades, others often track these
traders, and there is a school of investing which follows the lead of insiders.
This is of course subject to the risk that an insider is making a buy specifically to increase investor confidence, or making a sell for reasons unrelated to the health of the company (e.g. a desire to diversify or buy a house).
As of December 2005, companies are required to announce to their employees the times when they can safely trade without being accused of trading on inside information.
American insider trading law
The United States has been the leading country in prohibiting insider trading
deemed illegal. Thomas Newkirk and Melissa Robertson of the SEC summarize the development of U.S. insider trading laws.[7]
U.S. insider trading prohibitions are based on English and American
common law prohibitions against fraud. In 1909, well before the Securities
Exchange Act was passed, the United States Supreme Court ruled that a
corporate director who bought that company's stock when he knew it was
about to jump up in price committed fraud by buying while not disclosing
his inside information.
Section 17 of the Securities Act of 1933[8] contained prohibitions of fraud in
the sale of securities which were greatly strengthened by the Securities
Exchange Act of 1934.[9]
Section 16(b) of the Securities Exchange Act of 1934 prohibits short-swing
profits (from any purchases and sales within any six month period) made by
corporate directors, officers, or stockholders owning more than 10% of a firm's shares. Under Section 10(b) of the 1934 Act, SEC Rule 10b-5 prohibits fraud related to securities trading.
The Insider Trading Sanctions Act of 1984 and the Insider Trading and
Securities Fraud Enforcement Act of 1988 provide for penalties for illegal
insider trading to be as high as three times the profit gained or the loss
avoided from the illegal trading.[10]
S.E.C. regulation FD ("Full Disclosure") requires that if a company
intentionally discloses material non-public information to one person, it
must simultaneously disclose that information to the public at large. In the
case of an unintentional disclosure of material non-public information to one
person, the company must make a public disclosure "promptly."[11]

Insider trading, or similar practices, are also regulated by the SEC under its
rules on takeovers and tender offers under the Williams Act.
Much of the development of insider trading law has resulted from court
decisions. In SEC v. Texas Gulf Sulphur Co. (1966), a federal circuit court
stated that anyone in possession of inside information must either disclose
the information or refrain from trading.
In 1983, the Supreme Court of the United States ruled in the case of Dirks v.
SEC that tippees (receivers of second-hand information) are liable if they
had reason to believe that the tipper had breached a fiduciary duty in
disclosing confidential information and the tipper received any personal
benefit from the disclosure. (Since Dirks disclosed the information in order
to expose a fraud, rather than for personal gain, nobody was liable for insider
trading violations in his case.)
The Dirks case also defined the concept of "constructive insiders," who are
lawyers, investment bankers and others who receive confidential information
from a corporation while providing services to the corporation. Constructive
insiders are also liable for insider trading violations if the corporation
expects the information to remain confidential, since they acquire the
fiduciary duties of the true insider.
In United States v. Carpenter (1986) the U.S. Supreme Court cited an earlier
ruling while unanimously upholding mail and wire fraud convictions for a
defendant who received his information from a journalist rather than from
the company itself.
"It is well established, as a general proposition, that a person who acquires
special knowledge or information by virtue of a confidential or fiduciary
relationship with another is not free to exploit that knowledge or information
for his own personal benefit but must account to his principal for any profits
derived therefrom."
However, in upholding the securities fraud (insider trading) convictions, the
justices were evenly split.
In 1997 the U.S. Supreme Court adopted the misappropriation theory of insider trading in United States v. O'Hagan, 521 U.S. 642, 655 (1997).
O'Hagan was a partner in a law firm representing Grand Met, while it was
considering a tender offer for Pillsbury Co. O'Hagan used this inside information by buying call options on Pillsbury stock, resulting in profits of over $4 million. O'Hagan claimed that neither he nor his firm owed a
fiduciary duty to Pillsbury, so that he did not commit fraud by purchasing
Pillsbury options.[12]
The Court rejected O'Hagan's arguments and upheld his conviction.
The "misappropriation theory" holds that a person commits fraud "in
connection with" a securities transaction, and thereby violates 10(b) and
Rule 10b-5, when he misappropriates confidential information for securities
trading purposes, in breach of a duty owed to the source of the information.
Under this theory, a fiduciary's undisclosed, self-serving use of a principal's
information to purchase or sell securities, in breach of a duty of loyalty and
confidentiality, defrauds the principal of the exclusive use of the
information. In lieu of premising liability on a fiduciary relationship
between company insider and purchaser or seller of the company's stock, the
misappropriation theory premises liability on a fiduciary-turned-trader's
deception of those who entrusted him with access to confidential
information.
The Court specifically recognized that a corporation's information is its property: "A company's confidential information...qualifies as property to which the company has a right of exclusive use. The undisclosed misappropriation of such information in violation of a fiduciary duty...constitutes fraud akin to embezzlement, the fraudulent appropriation to one's own use of the money or goods entrusted to one's care by another."
In 2000, the SEC enacted Rule 10b5-1, which defined trading "on the basis of" inside information as any time a person trades while aware of material nonpublic information, so that it is no defense for one to say that she would have made the trade anyway. This rule also created an affirmative
defense for pre-planned trades.
In May 2007, Representatives Brian Baird and Louise Slaughter introduced a bill entitled the "Stop Trading on Congressional Knowledge Act" (STOCK Act), which would hold congressional and federal employees liable for stock trades they made using information they gained through their jobs. The bill would also seek to regulate so-called "Political Intelligence"
firms that research government activities and sell the information to
financial managers.[13]

Security analysis and insider trading


Security analysts gather and compile information, talk to corporate officers
and other insiders, and issue recommendations to traders. Thus their
activities may easily cross legal lines if they are not especially careful. The
CFA Institute in its code of ethics states that analysts should make every
effort to make all reports available to all the broker's clients on a timely
basis. Analysts should never report material nonpublic information, except
in an effort to make that information available to the general public.
Nevertheless, analysts' reports may contain a variety of information that is
"pieced together" without violating insider trading laws, under the mosaic
theory. This information may include non-material nonpublic information as
well as material public information, which may increase in value when
properly compiled and documented.
Arguments for legalizing insider trading
Some economists and legal scholars (e.g. Henry Manne, Milton Friedman,
Thomas Sowell, Daniel Fischel, Frank H. Easterbrook) argue that laws
making insider trading illegal should be revoked. They claim that insider
trading based on material nonpublic information benefits investors, in
general, by more quickly introducing new information into the market.
Milton Friedman, laureate of the Nobel Memorial Prize in Economics, said:
"You want more insider trading, not less. You want to give the people most
likely to have knowledge about deficiencies of the company an incentive to
make the public aware of that." Friedman did not believe that the trader
should be required to make his trade known to the public, because the
buying or selling pressure itself is information for the market.[14]
Other critics argue that insider trading is a victimless act: A willing buyer
and a willing seller agree to trade property which the seller rightfully owns,
with no prior contract (according to this view) having been made between
the parties to refrain from trading if there is asymmetric information.
Legalization advocates also question why activity that is similar to insider
trading is legal in other markets, such as real estate, but not in the stock
market. For example, if a geologist knows there is a high likelihood of the
discovery of petroleum under Farmer Smith's land, he may be entitled to
make Smith an offer for the land, and buy it, without first telling Farmer
Smith of the geological data. Of course there are also circumstances when the geologist could not legally buy the land without disclosing the
information, e.g. when he had been hired by Farmer Smith to assess the
geology of the farm.
Advocates of legalization make free speech arguments. Punishment for
communicating about a development pertinent to the next day's stock price
might seem to be an act of censorship [1]. Nevertheless, if the information
being conveyed is proprietary information and the corporate insider has
contracted to not expose it, he has no more right to communicate it than he
would to tell others about the company's confidential new product designs,
formulas, or bank account passwords.
There are very limited laws against "insider trading" in the commodities markets, if for no other reason than that the concept of an "insider" is not immediately analogous to commodities themselves (e.g., corn, wheat, steel,
etc.). However, analogous activities such as front running are illegal under
U.S. commodity and futures trading laws. For example, a commodity broker
can be charged with fraud if he or she receives a large purchase order from a
client (one likely to affect the price of that commodity) and then purchases
that commodity before executing the client's order in order to benefit from
the anticipated price increase.
Legal differences among jurisdictions
The US and the UK vary in the way the law is interpreted and applied with
regard to insider trading.
In the UK, the relevant laws are the Financial Services Act 1986 and the
Financial Services and Markets Act 2000, which defines an offense of Market Abuse.[15] It is not illegal to fail to trade based on inside information (where without the inside information the trade would have taken place), since from a practical point of view this is too difficult to enforce. It is often legal to deal ahead of a takeover bid, where a party deliberately buys shares in a company in the knowledge that it will be launching a takeover bid.[citation needed]

Japan enacted its first law against insider trading in 1988. Roderick Seeman
says: "Even today many Japanese do not understand why this is illegal.
Indeed, previously it was regarded as common sense to make a profit from
your knowledge."[16]

In accordance with EU Directives, Malta enacted the Financial Markets Abuse Act in 2002, which effectively replaced the Insider Dealing and Market Abuse Act of 1994.
The "Objectives and Principles of Securities Regulation"[17] published by the
International Organization of Securities Commissions (IOSCO) in 1998 and
updated in 2003 states that the three objectives of good securities market
regulation are (1) investor protection, (2) ensuring that markets are fair,
efficient and transparent, and (3) reducing systemic risk. The discussion of
these "Core Principles" state that "investor protection" in this context means
"Investors should be protected from misleading, manipulative or fraudulent
practices, including insider trading, front running or trading ahead of
customers and the misuse of client assets." More than 85 percent of the
world's securities and commodities market regulators are members of
IOSCO and have signed on to these Core Principles.
The World Bank and International Monetary Fund now use the IOSCO Core
Principles in reviewing the financial health of different countries' regulatory systems as part of these organizations' financial sector assessment programs,
so laws against insider trading based on non-public information are now
expected by the international community. Enforcement of insider trading
laws varies widely from country to country, but the vast majority of
jurisdictions now outlaw the practice, at least in principle.
Market anomaly
A market anomaly (or inefficiency) is a price and/or return distortion on a
financial market.
It is usually related to:

either structural factors (unfair competition, lack of market transparency, ...)
or behavioral biases by economic agents (see behavioral economics)

It sometimes refers to phenomena contradicting the efficient market hypothesis. There are anomalies in relation to the economic fundamentals of the equity, technical trading rules, and economic calendar events.
Microeconomics

Microeconomics (or price theory) is a branch of economics that studies how individuals, households, and firms make decisions to allocate limited resources,[1] typically in markets where goods or services are being bought
and sold.
Microeconomics examines how these decisions and behaviours affect the
supply and demand for goods and services, which determines prices, and
how prices, in turn, determine the supply and demand of goods and services.
[2][3]

Macroeconomics, on the other hand, involves the "sum total of economic activity, dealing with the issues of growth, inflation, and unemployment and with national economic policies relating to these issues"[4] and the effects of government actions (e.g., changing taxation levels) on them.[5] Particularly in the wake of the Lucas critique, much of modern macroeconomic theory has been built upon 'microfoundations', i.e. based upon basic assumptions
about micro-level behaviour.
Overview
One of the goals of microeconomics is to analyze market mechanisms that
establish relative prices amongst goods and services and allocation of
limited resources amongst many alternative uses. Microeconomics analyzes
market failure, where markets fail to produce efficient results, as well as
describing the theoretical conditions needed for perfect competition.
Significant fields of study in microeconomics include markets under
asymmetric information, choice under uncertainty and economic
applications of game theory. Also considered is the elasticity of products
within the market system.
Assumptions and definitions
The theory of supply and demand usually assumes that markets are perfectly
competitive. This implies that there are many buyers and sellers in the
market and none of them have the capacity to significantly influence prices
of goods and services. In many real-life transactions, the assumption fails
because some individual buyers or sellers or groups of buyers or sellers do
have the ability to influence prices. Quite often a sophisticated analysis is
required to understand the demand-supply equation of a good. However, the
theory works well in simple situations.

Mainstream economics does not assume a priori that markets are preferable
to other forms of social organization. In fact, much analysis is devoted to
cases where so-called market failures lead to resource allocation that is
suboptimal by some standard (highways are the classic example, profitable
to all for use but not directly profitable for anyone to finance). In such cases,
economists may attempt to find policies that will avoid waste directly by
government control, indirectly by regulation that induces market participants
to act in a manner consistent with optimal welfare, or by creating "missing
markets" to enable efficient trading where none had previously existed. This
is studied in the field of collective action. It must also be noted that "optimal welfare" usually takes on a Paretian norm, which in its mathematical application (the Kaldor-Hicks method) does not stay consistent with the utilitarian norm within the normative side of economics, which studies collective action, namely public choice. Market failure in positive economics (microeconomics) is limited in implications without mixing the beliefs of the economist and his or her theory.
The demand for various commodities by individuals is generally thought of
as the outcome of a utility-maximizing process. The interpretation of this
relationship between price and quantity demanded of a given good is that,
given all the other goods and constraints, this set of choices is that one
which makes the consumer happiest.
Modes of operation
It is assumed that all firms are following rational decision-making, and will
produce at the profit-maximizing output. Given this assumption, there are
four categories in which a firm's profit may be considered.
A firm is said to be making an economic profit when its average total cost is less than the price of each additional product at the profit-maximizing output. The economic profit is equal to the quantity of output multiplied by the difference between the average total cost and the price.
A firm is said to be making a normal profit when its economic profit equals zero. This occurs where average total cost equals price at the profit-maximizing output.
If the price is between average total cost and average variable cost at the profit-maximizing output, then the firm is said to be in a loss-minimizing condition. The firm should still continue to produce, however, since its loss would be larger if it were to stop producing. By continuing production, the firm can offset its variable cost and at least part of its fixed cost, but by stopping completely it would lose the entirety of its fixed cost.
If the price is below average variable cost at the profit-maximizing
output, the firm should go into shutdown. Losses are minimized by
not producing at all, since any production would not generate returns
significant enough to offset any fixed cost and part of the variable
cost. By not producing, the firm loses only its fixed cost. By losing
this fixed cost the company faces a challenge. It must either exit the
market or remain in the market and risk a complete loss.
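The four cases above reduce to a comparison of price against average total cost and average variable cost at the profit-maximizing output. A minimal sketch of that decision rule, with hypothetical numbers:

    def profit_condition(price, avg_total_cost, avg_variable_cost):
        """Classify a firm's situation at its profit-maximizing output."""
        if price > avg_total_cost:
            return "economic profit"
        if price == avg_total_cost:
            return "normal profit"
        if price >= avg_variable_cost:
            return "loss-minimizing: keep producing"
        return "shutdown: stop producing"

    print(profit_condition(price=12, avg_total_cost=10, avg_variable_cost=7))  # economic profit
    print(profit_condition(price=8,  avg_total_cost=10, avg_variable_cost=7))  # loss-minimizing
    print(profit_condition(price=6,  avg_total_cost=10, avg_variable_cost=7))  # shutdown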
Market failure
In microeconomics, the term "market failure" does not mean that a given
market has ceased functioning. Instead, a market failure is a situation in
which a given market does not efficiently organize production or allocate
goods and services to consumers. Economists normally apply the term to
situations where the inefficiency is particularly dramatic, or when it is
suggested that non-market institutions would provide a more desirable
result. On the other hand, in a political context, stakeholders may use the
term market failure to refer to situations where market forces do not serve
the "public interest," a subjective assessment that is often made on social or
moral grounds.
The four main types or causes of market failure are:
Monopolies or other cases of abuse of market power, where a "single buyer or seller can exert significant influence over prices or output". Abuse of market power can be reduced by using antitrust regulations.[6]

Externalities, which occur in cases where the "market does not take into
account the impact of an economic activity on outsiders." There are
positive externalities and negative externalities.[6] Positive
externalities occur in cases such as when a television program on
family health improves the public's health. Negative externalities
occur in cases such as when a company's processes pollute air or waterways. Negative externalities can be reduced by using government regulations, taxes, or subsidies, or by using property rights to force companies and individuals to take the impacts of their economic activity into account.
Public goods such as national defence[6] and public health initiatives such
as draining mosquito-breeding marshes. For example, if draining
mosquito-breeding marshes was left to the private market, far fewer
marshes would probably be drained. To provide a good supply of
public goods, nations typically use taxes that compel all residents to
pay for these public goods (due to scarce knowledge of the positive
externalities to third parties/social welfare); and
Cases where there is asymmetric information or uncertainty (information
inefficiency).[6] Information asymmetry occurs when one party to a
transaction has more or better information than the other party.
Typically it is the seller that knows more about the product than the
buyer, but this is not always the case. Buyers in some markets have
better information than the sellers. For example, used-car salespeople
may know whether a used car has been used as a delivery vehicle or
taxi, information that may not be available to buyers. An example of a
situation where the buyer may have better information than the seller
would be an estate sale of a house, as required by a last will and
testament. A real estate broker purchasing this house may have more
information about the house than the family members of the deceased.
This situation was first described by Kenneth J. Arrow in a seminal
article on health care in 1963 entitled "Uncertainty and the Welfare
Economics of Medical Care," in the American Economic Review.
George Akerlof later used the term asymmetric information in his
1970 work The Market for Lemons. Akerlof noticed that, in such a
market, the average value of the commodity tends to go down, even
for those of perfectly good quality, because the buyer has no way of
knowing whether the product they are buying will turn out to be a
"lemon" (a defective product).
Opportunity cost
Although opportunity cost can be hard to quantify, the effect of opportunity
cost is universal and very real on the individual level. In fact, this principle
applies to all decisions, not just economic ones. Since the work of the Austrian economist Friedrich von Wieser, opportunity cost has been seen as
the foundation of the marginal theory of value.
Opportunity cost is one way to measure the cost of something. Rather than
merely identifying and adding the costs of a project, one may also identify
the next best alternative way to spend the same amount of money. The
forgone profit of this next best alternative is the opportunity cost of the
original choice. A common example is a farmer that chooses to farm his land
rather than rent it to neighbors, wherein the opportunity cost is the forgone
profit from renting. In this case, the farmer may expect to generate more
profit himself. Similarly, the opportunity cost of attending university is the
lost wages a student could have earned in the workforce, rather than the cost
of tuition, books, and other requisite items (whose sum makes up the total
cost of attendance). The opportunity cost of a vacation in the Bahamas might
be the down payment money for a house.
Note that opportunity cost is not the sum of the available alternatives, but rather the benefit of the single best alternative. Consider, for example, a city that decides to build a hospital on its vacant land. Possible opportunity costs of that decision are the loss of the land for a sporting center, or the inability to use the land for a parking lot, or the money that could have been made from selling the land, or the loss of any of the various other possible uses, but not all of these in aggregate. The
true opportunity cost would be the forgone profit of the most lucrative of
those listed.
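Because the opportunity cost is the value of the single best forgone alternative rather than the sum of them all, it can be computed as a maximum over the alternatives. A toy sketch with invented dollar values for the hospital example:

    # Hypothetical values of the alternatives forgone by building the hospital.
    forgone_alternatives = {
        "sporting center": 2_000_000,
        "parking lot": 500_000,
        "sale of the land": 3_000_000,
    }

    # Opportunity cost = value of the best alternative forgone, not the sum.
    opportunity_cost = max(forgone_alternatives.values())
    print(opportunity_cost)  # 3000000: the forgone proceeds from selling the land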
One question that arises here is how to assess the benefit of dissimilar
alternatives. We must determine a dollar value associated with each
alternative to facilitate comparison and assess opportunity cost, which may
be more or less difficult depending on the things we are trying to compare.
For example, many decisions involve environmental impacts whose dollar
value is difficult to assess because of scientific uncertainty. Valuing a human
life or the economic impact of an Arctic oil spill involves making subjective
choices with ethical implications.

[Figure: The supply and demand model describes how prices vary as a result of a balance between product availability at each price (supply) and the desires of those with purchasing power at each price (demand). The graph depicts a right-shift in demand from D1 to D2 along with the consequent increase in price and quantity required to reach a new market-clearing equilibrium point on the supply curve (S).]
Applied microeconomics
Applied microeconomics includes a range of specialized areas of study,
many of which draw on methods from other fields. Industrial organization
and regulation examines topics such as the entry and exit of firms,
innovation, role of trademarks. Law and economics applies microeconomic
principles to the selection and enforcement of competing legal regimes and
their relative efficiencies. Labor economics examines wages, employment,
and labor market dynamics. Public finance (also called public economics)
examines the design of government tax and expenditure policies and
economic effects of these policies (e.g., social insurance programs). Political
economy examines the role of political institutions in determining policy
outcomes. Health economics examines the organization of health care
systems, including the role of the health care workforce and health insurance
programs. Urban economics, which examines the challenges faced by
cities, such as sprawl, air and water pollution, traffic congestion, and
poverty, draws on the fields of urban geography and sociology. The field of
financial economics examines topics such as the structure of optimal
portfolios, the rate of return to capital, econometric analysis of security
returns, and corporate financial behavior. The field of economic history
examines the evolution of the economy and economic institutions, using methods and techniques from the fields of economics, history, geography, sociology, psychology, and political science.
Random walk hypothesis
The random walk hypothesis is a financial theory stating that stock market
prices evolve according to a random walk and thus the prices of the stock
market cannot be predicted. It has been described as 'jibing' with the
efficient market hypothesis. Investors, economists, and other financial
behaviorists have historically accepted the random walk hypothesis. They
have run several tests and continue to believe that stock prices are
completely random because of the efficiency of the market.
The term was popularized by the 1973 book, A Random Walk Down Wall
Street, by Burton Malkiel, currently a Professor of Economics and Finance
at Princeton University.
Testing the hypothesis
Burton G. Malkiel, an economics professor at Princeton University and
writer of A Random Walk Down Wall Street, performed a test where his
students were given a hypothetical stock that was initially worth fifty
dollars. The closing stock price for each day was determined by a coin flip.
If the result was heads, the price would close a half point higher, but if the
result was tails, it would close a half point lower. Thus, each time, the price
had a fifty-fifty chance of closing higher or lower than the previous day.
The resulting charts appeared to show cycles and trends. Malkiel then took the
results in a chart and graph form to a chartist (a person who seeks to predict
future movements by seeking to interpret past patterns on the assumption
that history tends to repeat itself) (Keane 11). The chartist told Malkiel
that they needed to immediately buy the stock. When Malkiel told him it
was based purely on flipping a coin, the chartist was very unhappy. This
indicates that the market and stocks could be just as random as flipping a
coin.
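Malkiel's classroom exercise is easy to reproduce. The sketch below simulates a price that starts at fifty dollars and moves half a point up or down on a fair coin flip each day; the resulting charts often appear to contain trends even though every step is independent. This is a reconstruction of the idea, not Malkiel's original procedure.

    import random

    random.seed(1)            # fixed seed so the run is repeatable
    price = 50.0
    history = [price]
    for day in range(250):    # roughly one trading year
        # Heads: close half a point higher; tails: half a point lower.
        price += 0.5 if random.random() < 0.5 else -0.5
        history.append(price)

    print(f"final price after {len(history) - 1} days: {history[-1]:.1f}")
    print(f"high {max(history):.1f}, low {min(history):.1f}")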
The random walk hypothesis was also applied to NBA basketball.
Psychologists made a detailed study of every shot the Philadelphia 76ers
made over one and one-half seasons of basketball. The psychologists found
no positive correlation between the previous shots and the outcomes of the
shots afterwards. Economists and believers in the random walk hypothesis
apply this to the stock market. The actual lack of correlation of past and present can be easily seen. If a stock goes up one day, no stock market
participant can accurately predict that it will rise again the next. Just as a
basketball player with the hot hand can miss his or her next shot, the stock
that seems to be on the rise can fall at any time, making it completely
random.
A non-random walk hypothesis
There are other economists, professors, and investors who believe that the
market is predictable to some degree. These people believe that there are
trends and incremental changes in the prices and when looking at them, one
can determine whether the stock is on the rise or fall. There have been key
studies done by economists and a book has been written by two professors
of economics that try to prove the random walk hypothesis wrong.
Martin Weber, a leading researcher in behavioral finance, has done many
tests and studies on finding trends in the stock market. In one of his key
studies, he observed the stock market for ten years. Over those ten years, he
looked at the market prices and looked for any kind of trends. He found that
stocks with high price increases in the first five years tended to become
under-performers in the following five years. Weber and other believers in the non-random walk hypothesis cite this as a key contradiction of the random walk hypothesis.
Another test that Weber ran that contradicts the random walk hypothesis was finding that stocks with an upward revision for earnings outperform other stocks in the following six months. With this knowledge, investors can have an edge in predicting which stocks to pull out of the market and which stocks (those with the upward revision) to leave in. Martin Weber's studies detract from the random walk hypothesis, because according to Weber there are trends and other tips to predicting the stock market.
Professors Andrew W. Lo and A. Craig MacKinlay, professors of Finance at
the MIT Sloan School of Management and the University of Pennsylvania,
respectively, have also tried to prove the random walk theory wrong. They
wrote the book A Non-Random Walk Down Wall Street, which goes through
a number of tests and studies that try to prove there are trends in the stock
market and that they are somewhat predictable. They try to prove it with
what is called the simple volatility-based specification test, which is an
equation that states:

X_t = \mu + X_{t-1} + \epsilon_t

where
X_t is the price of the stock at time t,
\mu is an arbitrary drift parameter, and
\epsilon_t is a random disturbance term.
With this equation, Lo and MacKinlay have been able to put in stock prices over a number of years and figure out the trends that have unfolded (Non-Random 19). They have found small incremental changes in the stocks throughout the years. Through these changes, Lo and MacKinlay believe that the stock market is predictable, thus contradicting the random walk hypothesis.
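Lo and MacKinlay's volatility-based tests rest on a simple implication of the specification above: if prices follow a random walk, the variance of q-period returns should be roughly q times the variance of one-period returns. The sketch below computes that plain variance ratio for a simulated log-price series; it illustrates the idea only and is not the authors' published test statistic, which adds finite-sample and heteroskedasticity corrections.

    import numpy as np

    def variance_ratio(log_prices, q):
        """Variance of q-period returns divided by q times the 1-period variance.
        Values near 1 are consistent with a random walk; persistent trends push it above 1."""
        one_period = np.diff(log_prices)
        q_period = log_prices[q:] - log_prices[:-q]
        return q_period.var(ddof=1) / (q * one_period.var(ddof=1))

    # Example on a simulated random walk in log prices (0.05% drift, 1% noise per step).
    rng = np.random.default_rng(0)
    log_prices = np.cumsum(rng.normal(0.0005, 0.01, size=2000))
    print(round(variance_ratio(log_prices, q=5), 3))  # close to 1 for a random walk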
Random walk hypothesis vs. market trends
The hypothesis does have its detractors. Research in behavioral finance has
shown that some phenomena, for example market trends, might in some
cases contradict that hypothesis.
Profs. Andrew W. Lo of MIT and A. Craig MacKinlay set out to prove the theory wrong with their paper and book of the same name, A Non-Random Walk Down Wall St., published in 1999 by the Princeton University Press. They
argue that the random walk does not exist and that even the casual observer
can look at the many stock and index charts generated over the years and see
the trends. If the market were random, it is argued, there would never be the
many long rises and declines so clearly evident in those charts. Subscribers
to the random walk hypothesis counter-argue that past performance cannot be indicative of future performance in a semi-strong-form efficient market.
Prediction Company, started by chaos physicists Norman Packard and
Doyne Farmer, has been attempting to predict the stock market since 1991.
So far, they have proved moderately successful.[1]
Post-modern portfolio theory
This article discusses in detail the application of post-modern portfolio
theory1 (PMPT) to risk/return analysis and describes its theoretical and
practical benefits over Modern Portfolio Theory (MPT), also referred to as
Mean-Variance Analysis (MVA). And like MPT, PMPT proposes how rational investors will use diversification to optimize their portfolios, and how a risky asset should be priced.
Overview
It has been a generation since Harry Markowitz laid the foundations and
built much of the structure of what we now know as MPT, the greatest
contribution of which is the establishment of a formal risk/return framework
for investment decision-making. By defining investment risk in quantitative
terms, Markowitz gave investors a mathematical approach to asset selection
and portfolio management. But as Markowitz himself and William F. Sharpe, the other giant of MPT, acknowledge, there are important limitations to the original MPT formulation.
"Under certain conditions the MVA can be shown to lead to unsatisfactory
predictions of (investor) behavior. Markowitz suggests that a model based on
the semi-variance would be preferable; in light of the formidable
computational problems, however, he bases his (MVA) analysis on the mean
and the standard deviation2."
The causes of these unsatisfactory aspects of MPT are the assumptions
that 1) variance of portfolio returns is the correct measure of investment risk,
and 2) the investment returns of all securities and portfolios can be
adequately represented by the normal distribution. Stated another way, MPT
is limited by measures of risk and return that do not always represent the
realities of the investment markets.
Recent advances in portfolio and financial theory, coupled with today's
increased electronic computing power, have overcome these limitations. The
resulting expanded risk/return paradigm is known as Post-Modern Portfolio
Theory, or PMPT. Thus, MPT becomes nothing more than a (symmetrical)
special case of PMPT.
Introduction to PMPT
As already stated, standard deviation3 and the normal distribution are a
major practical limitation: they are symmetrical. Using standard deviation
implies that better-than-expected returns are just as risky as those returns
that are worse than expected. Furthermore, using the normal distribution to
model the pattern of investment returns makes investment results with more
upside than downside returns appear more risky than they really are, and vice-versa for returns with a predominance of downside returns. The result is that using traditional MPT techniques for measuring investment portfolio construction and evaluation frequently distorts investment reality.
It has long been recognized that investors typically do not view as risky
those returns above the minimum they must earn in order to achieve their
investment objectives. They believe that risk has to do with the bad
outcomes (i.e., returns below a required target), not the good outcomes (i.e.,
returns in excess of the target) and that losses weigh more heavily than
gains. This view has been noted by researchers in finance, economics and
psychology, including Sharpe (1964). "Under certain conditions, the mean-variance approach can be shown to lead to unsatisfactory predictions of
behavior. Markowitz suggests that models based on semi-variance would be
preferable; in light of the formidable computational problems, however, he
bases his analysis on the variance and standard deviation."
The Tools of PMPT
In 1987 The Pension Research Institute at San Francisco State University
developed the practical mathematical algorithms of PMPT that are in use
today. These methods provide a framework that recognizes investors'
preferences for upside over downside volatility. At the same time, a more
robust model for the pattern of investment returns, the three-parameter
lognormal distribution4, was introduced.

Downside risk
Downside risk (DR) is measured by target semi-deviation (the square root of
target semi-variance) and is termed downside deviation. It is expressed in
percentages and therefore allows for rankings in the same way as standard
deviation.
An intuitive way to view downside risk is the annualized standard deviation
of returns below the target. Another is the square root of the probability-weighted squared below-target returns. The squaring of the below-target
returns has the effect of penalizing large failures much more severely than
small failures. This is consistent with observations made on the behavior of
individual decision-making under uncertainty.

d = \sqrt{\int_{-\infty}^{t} (t - r)^2 \, f(r) \, dr}

where
d is the downside deviation (commonly known in the financial community as "downside risk"); by extension, d^2 = downside variance,
t is the annual target return, or MAR (minimum acceptable return),
r is the random variable representing the return for the distribution of annual returns f(r), and
f(r) is the three-parameter lognormal distribution.
For the reasons provided below, this continuous formula is preferred over a simpler discrete version that determines the standard deviation of below-target periodic returns taken from the return series.
1. The continuous form permits all subsequent calculations to be made using
annual returns which is the natural way for investors to specify their
investment goals. The discrete form requires monthly returns for there to be
sufficient data points to make a meaningful calculation, which in turn
requires converting the annual target into a monthly target. This significantly
affects the amount of risk that is identified. For example, a goal of earning
1% each and every month results in greater risk than the (apparently)
equivalent goal of earning 12% each and every year.
2. A second reason for strongly preferring the continuous form to the discrete
form has been proposed by Sortino & Forsey (1996): "Before we make an
investment, we don't know what the outcome will be... After the investment
is made, and we want to measure its performance, all we know is what the
outcome was, not what it could have been. To cope with this uncertainty, we
assume that a reasonable estimate of the range of possible returns, as well as
the probabilities associated with estimation of those returns...In statistical
terms, the shape of [this] uncertainty is called a probability distribution. In
other words, looking at just the discrete monthly or annual values does not
tell the whole story."
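A minimal Python sketch of the simpler discrete calculation mentioned above, assuming a hypothetical series of monthly returns and a monthly target; it is shown only for contrast with the continuous, distribution-fitted form that PMPT prefers.

import math

def downside_deviation(returns, target):
    # Discrete target semi-deviation: square root of the mean squared
    # shortfall below the target (above-target returns contribute zero).
    shortfalls = [min(r - target, 0.0) ** 2 for r in returns]
    return math.sqrt(sum(shortfalls) / len(returns))

# Hypothetical monthly returns and a 1%-per-month target (MAR)
monthly_returns = [0.021, -0.013, 0.004, 0.017, -0.025, 0.009]
print(downside_deviation(monthly_returns, 0.01))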
Using the observed points to create a distribution is a staple of conventional
performance measurement. For example, monthly returns are used to

calculate a fund's mean and standard deviation. Using these values and the
properties of the normal distribution, we can make statements such as the
likelihood of losing money (even though no negative returns may actually
have been observed), or the range within which two-thirds of all returns lie
(even though the returns identified in this way need not actually have
occurred). Our ability to make these statements comes from
the process of assuming the continuous form of the normal distribution and
certain of its well-known properties.
In PMPT an analogous process is followed:
1. Observe the monthly returns.
2. Fit a distribution that permits asymmetry to the observations.
3. Annualize the monthly returns, making sure the shape characteristics of the distribution are retained.
4. Apply integral calculus to the resultant distribution to calculate the appropriate statistics.
Sortino ratio
The Sortino ratio measures returns adjusted for the target and downside risk.
It is defined as:

S = \frac{r - t}{d}

where
r = the annualized rate of return,
t = the target return,
d = downside risk.
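As a minimal illustration, the ratio reduces to one line of Python; the inputs below (a 12% annualized return, a 9% target, and a 6% downside deviation) are hypothetical and are not taken from the table that follows.

def sortino_ratio(annual_return, target, downside_dev):
    # (r - t) / d, per the definition above
    return (annual_return - target) / downside_dev

print(sortino_ratio(0.12, 0.09, 0.06))  # ≈ 0.5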
This ratio replaces the traditional Sharpe ratio as a means for ranking
investment results. The following table shows risk-adjusted ratios for several
major indexes using both Sortino and Sharpe ratios. The data cover the five
years 1992-1996 and are based on monthly total returns. The Sortino ratio is
calculated against a 9.0% target.
Index               Sortino Ratio   Sharpe Ratio
90-day T-bill           -1.00           0.00
Lehman Aggregate        -0.29           0.63
MSCI EAFE               -0.05           0.30
Russell 2000             0.55           0.93
S&P 500                  0.84           1.25

As an example of the different conclusions that can be drawn using these
two ratios, notice how the Lehman Aggregate and MSCI EAFE compare: the
Lehman ranks higher using the Sharpe ratio, whereas EAFE ranks higher
using the Sortino ratio. In many cases, manager or index rankings will be
different, depending on the risk-adjusted measure used. These patterns will
change again for different values of t. For example, when t is close to the
risk-free rate, the Sortino ratio for T-bills will be higher than that for the
S&P 500, while the Sharpe ratio remains unchanged.
Volatility skewness
Volatility skewness is another portfolio-analysis statistic introduced by Rom
and Ferguson under the PMPT rubric. It measures the ratio of a distribution's
percentage of total variance from returns above the mean, to the percentage
of the distribution's total variance from returns below the mean. Thus, if a
distribution is symmetrical (i.e., normal, as is assumed under MPT), it has a
volatility skewness of 1.00. Values greater than 1.00 indicate positive
skewness; values less than 1.00 indicate negative skewness. While closely
correlated with the traditional statistical measure of skewness (viz., the third
moment of a distribution), the authors of PMPT argue that their volatility
skewness measure has the advantage of being intuitively more
understandable to non-statisticians who are the primary practical users of
these tools.
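A rough discrete approximation of the statistic, assuming a hypothetical series of periodic returns, might look like the sketch below; Rom and Ferguson's actual calculation works from the fitted three-parameter lognormal distribution rather than from raw observations.

def volatility_skewness(returns):
    # Ratio of the variance contributed by above-mean returns to the
    # variance contributed by below-mean returns.
    mean = sum(returns) / len(returns)
    upside = sum((r - mean) ** 2 for r in returns if r > mean)
    downside = sum((r - mean) ** 2 for r in returns if r < mean)
    return upside / downside

returns = [0.03, -0.02, 0.05, -0.01, 0.02, -0.04, 0.06]
print(volatility_skewness(returns))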

The importance of skewness lies in the fact that the more non-normal (i.e.,
skewed) a return series is, the more its true risk will be distorted by
traditional MPT measures such as the Sharpe ratio. Thus, with the recent
advent of hedging and derivative strategies, which are asymmetrical by
design, MPT measures are essentially useless, while PMPT is able to capture
significantly more of the true information contained in the returns under
consideration. This being said, as the following table shows, many of the
common market indices and the returns of stock and bond mutual funds
cannot themselves always be assumed to be accurately represented by the
normal distribution. This fact is also not well understood by many
practitioners.

Index              Upside Skewness (%)   Downside Skewness (%)   Volatility Skewness
Lehman Aggregate          32.35                  67.65                  0.48
Russell 2000              37.19                  62.81                  0.59
S&P 500                   38.63                  61.37                  0.63
90-day T-Bill             48.26                  51.74                  0.93
MSCI EAFE                 54.67                  45.33                  1.21

Conclusion
PMPT enables investment practitioners to more accurately create optimal
investment strategies and to evaluate the true performance of investment
managers, mutual funds and other portfolios, without the restrictions
imposed by MPT.
Sharpe ratio

The Sharpe ratio or Sharpe index or Sharpe measure or reward-to-variability
ratio is a measure of the mean excess return per unit of risk in an
investment asset or a trading strategy. Since its revision made by the original
author in 1994, it is defined as:

S = \frac{E[R - R_f]}{\sigma},

where R is the asset return, Rf is the return on a benchmark asset, such as the
risk-free rate of return, E[R - Rf] is the expected value of the excess of the
asset return over the benchmark return, and σ is the standard deviation of the
excess return (Sharpe 1994).
Note that if Rf is a constant risk-free return throughout the period, then
\sigma = \sqrt{\operatorname{var}[R - R_f]} = \sqrt{\operatorname{var}[R]}. Sharpe's 1994 revision acknowledged that
the risk-free rate changes with time. Prior to this revision the definition
assumed a constant Rf.
The Sharpe ratio is used to characterize how well the return of an asset
compensates the investor for the risk taken. When comparing two assets
each with the expected return E[R] against the same benchmark with return
Rf, the asset with the higher Sharpe ratio gives more return for the same risk.
Investors are often advised to pick investments with high Sharpe ratios.
Sharpe ratios, along with Treynor ratios and Jensen's alphas, are often used
to rank the performance of portfolio or mutual fund managers.
This ratio was developed by William Forsyth Sharpe in 1966. Sharpe
originally called it the "reward-to-variability" ratio before it began being
called the Sharpe ratio by later academics and financial professionals.
Recently, the (original) Sharpe ratio has often been challenged with regard to
its appropriateness as a fund performance measure during evaluation periods
of declining markets (Scholz 2007).
Examples
Suppose the asset has an expected return of 15%. We typically do not know
the asset will have this return; suppose we assess the risk of the asset,

defined as the standard deviation of the asset's excess return, as 10%. Finally,
suppose the risk-free rate of return, Rf, has been constant at 4%. Then the
Sharpe ratio will be 1.10 (R = 0.15, Rf = 0.04, and σ = 0.10).
As a guidepost, one could substitute the longer-term return of the
S&P 500, about 10%. The risk-free return of bonds could be about 4%, and the
average standard deviation of the S&P 500 is about 16%. Doing the math,
we get that the average, long-term Sharpe ratio of the US market is about
0.375. But we should note that if one were to calculate the ratio over, for
example, three-year rolling periods, then the Sharpe ratio would vary
dramatically.
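Both worked examples reduce to the same one-line calculation; a small Python sketch using the figures above:

def sharpe_ratio(asset_return, risk_free_rate, excess_return_std):
    # E[R - Rf] / sigma, with Rf treated as a constant
    return (asset_return - risk_free_rate) / excess_return_std

print(sharpe_ratio(0.15, 0.04, 0.10))  # ≈ 1.10 (first example)
print(sharpe_ratio(0.10, 0.04, 0.16))  # ≈ 0.375 (long-run S&P 500 guidepost)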
Sortino ratio
The Sortino ratio measures the risk-adjusted return of an investment asset,
portfolio or strategy. It is a modification of the Sharpe ratio but penalizes
only those returns falling below a user-specified target, or required rate of
return, while the Sharpe ratio penalizes both upside and downside volatility
equally. It is thus a more realistic measure of risk-adjusted returns than the
Sharpe.
The ratio is calculated as

S = \frac{R - T}{DR},

where R is the asset or portfolio return; T is the target or required rate of
return for the investment strategy under consideration (T was originally
known as the minimum acceptable return, or MAR); DR is the downside
risk. The downside risk is the target semideviation, i.e., the square root of the
target semivariance (TSV). TSV is the return distribution's lower partial
moment of degree 2 (LPM2).
Thus, the ratio is the actual rate of return in excess of the investor's target
rate of return, per unit of downside risk.
The ratio was created by Brian M. Rom in 1986 as an element of
Investment Technologies' Post-Modern Portfolio Theory portfolio
optimization software.
Cost of carry

The cost of carry is the cost of "carrying" or holding a position. For a long
position it may be the interest paid on a margin account; for a short position,
the cost of paying dividends; or it may be the opportunity cost of purchasing a
particular security rather than an alternative. For most investments, the cost of carry generally
refers to the risk-free interest rate that could be earned by investing currency
in a theoretically safe investment vehicle such as a money market account
minus any future cash-flows that are expected from holding an equivalent
instrument with the same risk (generally expressed in percentage terms and
called the convenience yield). Storage costs (generally expressed as a
percentage of the spot price) should be added to the cost of carry for
physical commodities such as corn, wheat, or gold.
The cost of carry model expresses the forward price (or, as an
approximation, the futures price) as a function of the spot price and the cost
of carry:

F = S e^{(r + s - c) t}
where F is the forward price, S is the spot price, e is the base of the natural
logarithms, r is the risk-free interest rate, s is the storage cost, c is the
convenience yield, and t is the time to delivery of the forward contract
(expressed as a fraction of 1 year).
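A minimal Python sketch of the formula, with all inputs hypothetical (spot price 100, 3% rate, 1% storage cost, 0.5% convenience yield, six months to delivery):

import math

def forward_price(spot, r, storage_cost, convenience_yield, t_years):
    # F = S * e^((r + s - c) * t)
    return spot * math.exp((r + storage_cost - convenience_yield) * t_years)

print(forward_price(100.0, 0.03, 0.01, 0.005, 0.5))  # ≈ 101.77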
The same model in currency markets is known as interest rate parity.
For example, a US investor buying a Standard and Poor's 500 e-mini futures
contract on the Chicago Mercantile Exchange could expect the cost of carry
to be the prevailing risk-free interest rate (around 3% as of June, 2005)
minus the expected dividends that one could earn from buying each of the
stocks in the S&P 500 and receiving any dividends that they might pay, since
the e-mini futures contract is a proxy for the underlying stocks in the S&P
500. Since the contract is a futures contract and settles at some forward date,
the actual values of the dividends may not yet be known so the cost of carry
must be estimated.
No-arbitrage bounds
In financial mathematics, no-arbitrage bounds are mathematical
relationships specifying simple limits on derivative prices. Normally, these
are found by simple arguments based on the payouts of the security in
question, without specifying any sort of distribution for the asset
returns involved.

Lack of arbitrage explains some rather obvious facts in option pricing, such
as why the value of a call option will never rise above the underlying stock
price itself. However, the most frequent nontrivial example of no-arbitrage
bounds is put-call parity for option prices.
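As an illustration of how such a bound is used, the sketch below applies put-call parity, assuming European options on a non-dividend-paying stock (an assumption not spelled out above), to back out the put price implied by a quoted call price; all inputs are hypothetical.

import math

def put_from_call(call, spot, strike, r, t_years):
    # Put-call parity: C - P = S - K * e^(-r * t), so P = C - S + K * e^(-r * t)
    return call - spot + strike * math.exp(-r * t_years)

# Hypothetical: call at 7.50, spot 100, strike 95, 5% rate, three months
print(put_from_call(7.50, 100.0, 95.0, 0.05, 0.25))  # ≈ 1.32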
The Arbitrage Pricing Theory
Introduction
The economist Stephen Ross PhD developed a more generalized Modern
Portfolio Theory [MPT] model called Arbitrage Pricing Theory (APT).
Definition
APT is based upon somewhat less restrictive assumptions than CAPM and
results in the conclusion that there are multiple factors representing
systematic risk. The APT incorporates the fact that different securities react
in varying degrees to unexpected changes in systematic factors other than
just beta to the market portfolio.
The expected rate of return for a given security is the risk-free return plus,
for each source of systematic risk, the security's beta coefficient to that risk
times the expected return (risk premium) for exposure to that source of risk.
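Read literally, that statement is a simple linear sum across factors; the sketch below assumes a hypothetical 4% risk-free rate, two unnamed systematic factors with risk premiums of 3% and 1%, and betas of 1.2 and 0.5 to those factors.

def apt_expected_return(risk_free, factor_premiums, betas):
    # E[R] = rf + sum over factors of beta_i * premium_i
    return risk_free + sum(b * p for b, p in zip(betas, factor_premiums))

print(apt_expected_return(0.04, [0.03, 0.01], [1.2, 0.5]))  # ≈ 0.081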
An important point for physicians to keep in mind is that the APT focuses on
unexpected changes for its systematic risk factors. The financial markets are
viewed as a discounting mechanism, with prices established for various
securities reflecting investors' expectations about the future, so any excess
return for an expected change will be arbitraged away (i.e., the price of that
risk will be bid down to zero).
For example, market prices already reflect physician and other investors'
expectations about GNP growth, so prices of assets should only react to the
extent that GNP growth either exceeds or falls short of expectations (i.e., an
unexpected change in GNP growth).
A Rhetorical Interrogative?
And so - we can ask - why do medical professionals and their advisors go
wrong in making passive asset allocation decisions using MPT? The
problem has less to do with the limitations of CAPM or APT as theories and
more to do with how these theories are applied in the real world.
The basic premise behind the various MPT models is that both return and
risk measures are the expectations assessed by the investor. Too often,
however, decisions are made based on what investors see in their rear view
mirror rather than what lies on the road ahead of them.

In other words, while modern portfolio theory is geared towards assessing


expected future returns and risk, investors and financial professionals all too
often simply rely on historical data rather than develop a forecast of
expected future returns and risks.
While it is clearly difficult for physicians and all investors to accurately
forecast future returns or betas, whether they are for the market as a whole or
an individual security, there is no reason to believe that simply using
historical data will be any more accurate. One major shortcoming of modern
portfolio theory as it is commonly applied today is the fact that historical
relationships between different securities are unstable. And, it would seem
that a physician or other healthcare provider should not rely on historical
averages to establish a passive asset allocation.
Of course, the use of unstable historical returns in modern portfolio theories
clearly violates the rule-of-thumb related to the dangers of projecting
forward historical averages; MPT is nonetheless an important concept for
medical professionals to understand as a result of its frequent use by
investment professionals.
Furthermore, MPT has helped focus investors on two extremely critical
elements of investing that are central to successful investment strategies:
First, MPT offers the first framework for investors to build a diversified
portfolio.
Second, the important conclusion that can be drawn from MPT is that
diversification does in fact help reduce portfolio risk.
Conclusion
MPT approaches are generally consistent with the first investment rule of
thumb, understand and diversify risk to the extent possible.
Additionally, the risk/return tradeoff (i.e., higher returns are generally
consistent with higher risk) central to MPT based strategies has helped
investors recognize that if it looks too good to be true, it probably is.
What are your thoughts - and experiences - in the matter?
CAPM - Another Portfolio Pricing Model to Consider
Introduction
CAPM is an economic model based upon the idea that there is a single
portfolio representing all investments (i.e., the market portfolio) at the point
of the optimal portfolio on the CML and a single source of systematic risk,
beta, to that market portfolio. The resulting conclusion is that there should
be a fair return physician investors should expect to receive given the level
of risk (beta) they are willing to assume.

Thus, the expected return on an asset is equal to the risk-free return plus the
excess return of the market portfolio (its return above the risk-free rate)
times the sensitivity of the asset's excess return to the market portfolio's
excess return.
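A minimal sketch of that relationship, with hypothetical inputs (4% risk-free rate, beta of 1.3, 10% expected market return):

def capm_expected_return(risk_free, beta, market_return):
    # E[R] = Rf + beta * (E[Rm] - Rf)
    return risk_free + beta * (market_return - risk_free)

print(capm_expected_return(0.04, 1.3, 0.10))  # ≈ 0.118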
Beta, then, is a measure of the sensitivity of an asset's returns to the market
as a whole. A particular security's beta depends on the volatility of the
individual security's returns relative to the volatility of the market's returns,
as well as the correlation between the security's returns and the market's
returns.
Thus, while a stock may have significantly greater volatility than the market,
if that stock's returns are not highly correlated with the returns of the overall
market (i.e., the stock's returns are independent of the overall market's
returns) then the stock's beta would be relatively low.
A beta in excess of 1.0 implies that the security is more exposed to
systematic risk than the overall market portfolio, and likewise, a beta of less
than 1.0 means that the security has less exposure to systematic risk than the
overall market.
The CAPM uses beta to determine the Security Market Line, or SML. The
SML determines the required or expected rate of return given the security's
exposure to systematic risk, the risk-free rate, and the expected return for the
market as a whole. The SML is similar in concept to the Capital Market
Line, although there is a key difference.
Both concepts capture the relationship between risk and expected returns.
However, the measure of risk used in determining the CML is standard
deviation, whereas the measure of risk used in determining the SML is
beta.
Conclusion
The CML estimates the potential return for a diversified portfolio relative to
an aggregate measure of risk (i.e., standard deviation), while the SML
estimates the return of a single security relative to its exposure to systematic
risk.
Now, if this is the essence of the Capital Asset Pricing Model, what are the
arguments against CAPM?
Understanding Modern Portfolio Theory
Modern Portfolio Theory (MPT) is the basic economic model that
establishes a linear relationship between the return and risk of an
investment. The tools of MPT are used as the basis for the passive asset

mix, which involves setting a static mix of various types of investments or


asset classes and rebalancing to that allocation target on a periodic basis.
Introduction
According to MPT, when building a diversified investment portfolio, the
goal should be to obtain the highest expected return for a given level of risk.
A key assumption underlying modern portfolio theory is that higher risk
generally translates to higher expected returns.
From the perspective of MPT, risk is defined simply as the variability of an
investment's returns. While MPT is based upon the idea of using the expected
volatility of returns, in practice risk is measured by the standard deviation of
historical returns.
Standard deviation is a measure of the dispersion of a security's returns, X1,
..., Xn, around its mean (or average) return.
Often, standard deviation is calculated using monthly or quarterly data
points, but is represented as an annualized number to correspond with
annualized returns of various investments.
Now, let us assume Stock A has a mean return of 10.0 percent and a standard
deviation of 7.5 percent.
Then, approximately 68 percent of Stock A's returns are within one standard
deviation of the mean return, and 95 percent of Stock A's returns are within
two standard deviations.
In other words, 68 percent of Stock A's returns should be between 2.5
percent and 17.5 percent, and 95 percent of the returns for Stock A should be
between negative 5.0 percent and 25.0 percent.
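These one- and two-standard-deviation ranges fall out of simple arithmetic; a quick sketch using the same figures for Stock A:

def return_ranges(mean, std_dev):
    # Under a normal assumption, roughly 68% of returns fall within one
    # standard deviation of the mean and roughly 95% within two.
    one_sd = (mean - std_dev, mean + std_dev)
    two_sd = (mean - 2 * std_dev, mean + 2 * std_dev)
    return one_sd, two_sd

one_sd, two_sd = return_ranges(0.10, 0.075)
print(one_sd)  # ≈ (0.025, 0.175), i.e., 2.5% to 17.5%
print(two_sd)  # ≈ (-0.05, 0.25), i.e., -5.0% to 25.0%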
However, a key assumption underlying this logic is that the returns for Stock
A are normally distributed (i.e., that the distribution curve of Stock A's
returns is symmetrical around the mean).
Unfortunately, in reality security returns may not be symmetrically
distributed and both the mean return and standard deviation of returns may
shift dramatically over time.
Sources of Risk
There are many different sources of risk, but the two forms of risk
hypothesized by Harry Markowitz Ph.D., father of MPT, were systematic
risk and unsystematic risk.
Systematic risk is sometimes referred to as non-diversifiable risk, since it
affects the returns on all investments.
In alternate theories like the Capital Asset Pricing Model [CAPM],
systematic risk is defined as sensitivity to the overall market, while
Arbitrage Pricing Theory considers several common macroeconomic and
market factors to be sources of systematic risk.
Investors are generally unable to diversify systematic risk, since they cannot
reduce their portfolio's exposure to systematic risk by increasing the number
of securities in their portfolio.
In contrast, physicians diversifying an investment portfolio can reduce
unsystematic risk, or the risk specific to a particular investment. Sources of
unsystematic risk include a stock's company-specific risk and industry risk.
For example, in addition to the risk of a falling stock market, physician
investors in Merck also are exposed to risks unique to the pharmaceutical
industry (e.g., healthcare reform), as well as the risks specific to Merck's
business practices (e.g., success of research and development efforts, patent
time frames, etc.).
A physician investor can reduce unsystematic risk by building a portfolio of
securities from numerous industries, countries, and even asset classes.
Thus, portfolio risk in MPT refers to both systematic (non-diversifiable)
and non-systematic (diversifiable) risk, but a basic conclusion of MPT is that
no investor would be rational to take on non-systematic risk, since this risk
can be diversified away.
Conclusion
MPT is the philosophy that higher returns correspond to higher risk, and that
doctor investors typically desire to earn the highest return for a given level
of risk.
The tradeoff between expected return and volatility of returns used to make
investment decisions is known as the mean-variance framework and is a
central concept in many of today's passive asset allocation portfolio
management principles.
Now, is MPT as viable today as when it was originally proposed?
Equity Price Influencers
The equity markets react to the business cycle as it moves through standard
phases.
For example, coming out of a recession, when gross domestic product
(GDP) is increasing, cyclicals do best, since consumers are fulfilling pent-up
demand for big-ticket items that could be deferred during tough economic
times. Conversely, as the economy turns down, so do cyclicals, often
slightly ahead of the overall economy.
As inflation heats up in a rising economy, companies can raise prices and
profit at first as expenses stay constant. But ultimately inflation raises

interest rates and capital becomes more expensive, so companies have to


spend more to borrow capital to finance growth. Gentle interest rate
increases do not always make the stock market fall, but it will rise more
slowly.
However, high interest rates and high inflation ultimately are negatives for
the stock market.
A bull market in stocks generally consists of three consecutive phases:
Monetary: Interest rates are falling, either naturally as inflation eases or
with the help of a central bank, like the Federal Reserve, which can
artificially lower short-term interest rates.
Earnings-Driven: Companies have been able to borrow capital cheaply and
have spent the down-market time practicing efficiencies, so now they are
geared up for growth. Consumers are buying, so earnings are beginning to
flow through to the bottom line.
Speculative blowout: The markets are responding to the good earnings
reports, sometimes beyond what is justified. P/E ratios begin to get very
high relative to a normal market, and markets are overbought. Wary
physicians and canny medical investors may want to sell stocks to take
profits.
According to Goldman Sachs Research, the stock market may peak while
the overall economy is still in a growth phase. Since 1952, the S&P 500 peak
has led the overall economy's peak by about seven months. During down
markets, high-dividend-paying stocks and stocks of companies that sell
necessary goods or services, like utilities and food companies, tend to hold
their value. These are called defensive stocks.
Conclusion
Fundamental analysis takes into consideration economic factors such as
consumers' ability to buy a company's goods or services or the company's
borrowing needs at current rates.
How has the recent economic and medical business cycle affected your
investments?
Arbitrage
Buy Low, Sell High
Arbitrage is a method, or concept, that has been around for thousands of
years, but there is still no set definition of what it is. Your stock broker may
give you one definition, while a commodities broker may tell you it's
something else. Heck, most investors have no clue what it's about.

The basic premise of the arbitrage theory is that investors (or


speculators) force a profit making opportunity to exist. In its most simple
form, the definition should look something like, "Buy in a cheap market and
immediately sell in a more expensive market."
A good example would be the farmers' markets found in two different
villages. John, an arbitrage junky, goes to the Cheap Village and sees that
oranges are selling for $3 per bushel. Through the grapevine, he's heard that
oranges sell for $3.50 in Expensive Village. John takes all his cash and buys
oranges at $3 per bushel in Cheap Village, then walks to Expensive Village
and immediately sells the oranges for $3.50 per bushel. This is basic
arbitrage--John has created a profit making opportunity of $0.50 per bushel.
This theory has been used for a long time, at least conceptually.
There are four keys to arbitrage, outlined as follows...
Information: John had to know that the oranges were selling for $3 in
Cheap Village and $3.50 in Expensive Village.
Profit Opportunity: John had to see a profit making opportunity, that's the
key motivation to arbitrage.
Judgment: John had to use his judgment and determine the risk/reward
factor.
Decision: John had to make the decision whether to actually carry out his
arbitrage scheme.
These four keys seem obvious, but they're the result of many years of
testing, discussion, thought, and evaluation. Stephen Ross initiated an
important, and now famous, arbitrage study in 1976. His study began with
comparisons to the Capital Asset Pricing Model, where he pointed out that
the CAPM only takes market risk into account when pricing securities. The
obvious problem with the CAPM is that there are other considerable risks to
securities pricing, such as the industry, sector, interest rates, and so on.
Ross argued that the Arbitrage Pricing Theory is a multi-factor model and
that it does account for non-market risks. The problem with the APT theory
is that we don't know exactly what risks of what magnitude should be
identified. For example, when pricing a stock, one investor may put heavy
weight on interest rate risk, while another investor puts heavy weight on
industry risk. For the theory to be validated, all investors would have to
consider the same factors at the same magnitude.
So in conclusion, there is still no set definition of arbitrage. Anyone will find
several different definitions when checking dictionaries, encyclopedias, and
financial glossaries. But to have a basic understanding of what the arbitrage
theory represents, there are three important things to remember.
Create a profit making opportunity.

Buy in a cheap market, sell in an expensive market.


Garbage in, garbage out; this relates to the APT and the fact that investors
will put different weights on different factors.
Demystifying Hedge Funds
Hedge funds may have an aura of exoticism and modernism, but their goals
are as old as the art of investing itself. They seek a positive annual return
(the higher the better), limited swings in value, and, above all else, capital
preservation. They do so by using the best of what modern financial science
can provide: rapid price discovery; massive mathematical and statistical
processing; risk measurement and control techniques; and leverage and
active trading in corporate equities, bonds, foreign exchange, futures,
options, swaps, forwards, and other derivatives.
Because of their nature, hedge funds are restricted to large-scale investors.
Historically, they have attracted high-net-worth individuals and institutional
investors, and the array of the latter has widened significantly in recent years
to include pension funds, charities, universities, endowments, and
foundations. Funds of funds are starting to introduce hedge funds to retail
markets, but on a rather limited scale. Currently, there are about 8,500 hedge
funds operating worldwide, managing over $1 trillion in assets. Quite a leap
from the 2,800 hedge funds, managing $2.8 billion in assets in 1995, not to
mention the amounts involved in the earliest hedge fund-type investments in
the days of Aristotle (see Box 1).
Box 1
It all began with olives
The first recorded hedge fund-style investment was a "call option" trade
and appears to have occurred about 2,500 years ago. Aristotle told the story
of a poor philosopher, Thales, who proved to doubters that he had
developed a "financial device, which involves a principle of universal
application," by making a profit from negotiating with owners of olive
presses for the exclusive rights to use their equipment in the upcoming
harvest. Olive press owners were happy to pass on the risks of future olive
prices and to accept payment now as a hedge against a bad harvest later. As
it turns out, Thales correctly predicted a bountiful harvest, and the demand
for olive presses rose. He sold his rights to use the presses and made a
profit. Thales's "call option" risked only his down payment. Although he
did not invest in fields, workers, or olive presses, he participated actively
in olive production by taking on a kind of risk olive growers and press

owners were unable or unwilling to take, in the process enabling them to


concentrate on growing and processing olives. They made a profit from
their work, and he made a profit from his.
Modern hedge fund history began with Alfred Winslow Jones, a sociologist
and journalist who wrote about market behavior in the 1930s and 1940s
and founded one of the first hedge funds in 1949. Jones's fund used
leverage and short selling to "hedge" its stock portfolio against drops in
stock prices. There was little widespread interest until 1966, when an
article in Fortune magazine generated considerable interest by pointing out
that Jones was earning 44 percent higher returns than the best-performing
equity asset fund, even though he charged a fee equaling 20 percent of the
fund's gain. By 1968, there were about 200 hedge funds, although many
failed in the 1969-70 and 1973-74 market downturns. Hedge fund
business really picked up in the 1990s, fueled mainly by new wealth
generated during the 1990s equity bull market.
What is hedging?
A few key features distinguish hedge funds from other investment vehicles:
the focus on absolute returns, and the use of hedging, arbitrage, and
leverage.
Absolute versus relative returns. Over very long periods, buy-and-hold
strategies almost always do well. The problem is the length of time and
starting point: it can matter enormously when you buy. For example, over
hundreds of years, stock returns have averaged about 8 percent a year, but
there can be several decades when stock prices don't increase in value at all.
The Standard & Poor's (S&P) index, which fell sharply after reaching a peak
in 1968, failed to return to its 1968 level in inflation-adjusted terms until
1992! Back in the 1970s, these swings in value prompted investment
managers to focus on returns relative to benchmarks, such as the S&P 500
stock index. That way, good performance was expressed in terms of asset
managers' performance relative to standard asset-class indices, and the better
the relative performance, the more investors were attracted. In other words,
managers attracted more investors, and were paid more money, even if the
fund declined in value, so long as it did not decline as much as the
benchmark index.
In contrast, hedge fund managers focus on risk-adjusted absolute returns;
that is, their objective is to maximize the increase in investment value per
year rather than to simply perform better than the average. Consequently,
most hedge fund managers are paid based on how much they increase

investors' wealth (a percent of the return), not on how well they do relative


to a benchmark, thus focusing their performance exclusively on positive
returns. Although managers are also paid a 1 or 2 percent commission a year
for assets under management, most of their compensation depends on
delivering a positive absolute return. In addition, managers typically invest
significant amounts of their own capital in the fund, which aligns their
interests with the investors and discourages reckless risk taking. And, when
used, a "high-water mark"whereby capital losses have to be made up
before a performance fee is paidintroduces a strong incentive toward
capital preservation. In this context, minimizing swings in value and
immunizing the hedge fund's portfolio from general swings in market
values, through hedging, become key to long-term return maximization.
Hedging, arbitrage, and leverage. What is hedging? It is a technique aimed
at protecting a portfolio against sharp movements in market values. It
essentially implies buying and holding assets that have good long-term
prospects while simultaneously selling assets that have doubtful prospects.
The latter technique, short selling, involves borrowing someone else's shares
of stock and selling them, with the intent to buy the shares back at a lower
price and return them to the original lender. The difference between what the
shares are sold for and what needs to be paid to buy them back is the profit.
The development of market-traded stock and stock index futures, options,
and related derivatives over the past half-century has created a near-infinite
number of ways to engage in short selling and hedging (see Box 2).
Box 2
Basic hedge fund arbitrage strategies
Convertible arbitrage. A strategy in which managers purchase a portfolio of
securities that are convertible into other kinds of securities. For example,
corporate bonds are often convertible into equity shares of the issuing
companies. Normally, the prices of the bonds and shares trade in a close
relationship. Sometimes bond and stock market conditions cause the prices to
get out of line. Hedge funds buy and sell the bonds and stocks
simultaneously, pushing the prices back into line and profiting from market
mispricing.
Distressed securities. A strategy in which managers use borrowed funds to
invest in the debt and equity of companies that are currently or recently in
bankruptcy reorganization, or may declare bankruptcy in the near future.
Event-driven/merger. A strategy in which managers invest in opportunities

created by significant transactional events, such as spin-offs, mergers and


acquisitions, bankruptcy reorganizations, recapitalizations, and share
buybacks.
Global macro. A strategy in which managers employ "top down" global
approaches and may invest in any markets using any instruments to profit
from inaccurately priced market movements resulting from shifts in world
economies, geopolitical conditions, global supply and demand balances, or
other large-scale changes.
Long/short. Strategies in which managers take long positions in securities
expected to rise in value and short positions in securities expected to fall in
value in an effort to insulate the portfolio from market volatility. One
example of this strategy is to build a portfolio made up of long positions in
the strongest companies in an industry and corresponding short positions in
the weakest companies.
Market neutral. A category of long/short strategies in which managers invest
the same amount of capital in offsetting long and short positions, maintaining
a portfolio with zero or near-zero net market exposure.
Volatility arbitrage. A strategy in which managers sell short-term call and put
options to profit from option premium decay and volatility mean-reverting
tendencies using index options and/or options on futures contracts.
The technique of arbitrage tries to profit from the fact that sometimes an
asset trades at a different price in different markets at the same time.
Because an asset should have the same price in all markets at the same time,
a way to capture a low-risk profit is to sell the higher-priced asset in one
market (sell it short) and buy the lower-priced asset (buy it long) in the other
market. When the prices converge, an arbitrage profit can be captured by
selling the formerly low-priced asset and buying back the formerly high-priced asset. A typical example of potential arbitrage opportunities is
company bonds that are convertible into equity shares of the company.
A hedge fund manager focuses on achieving absolute returns by finding as
many profit opportunities as possible that are immune to market gyrations
in industry lingo, generating alpha (returns uncorrelated to market
performance) rather than beta. Because these opportunities often involve
small trading margins, the use of leverage and prudent risk management
seeks to achieve good returns with lower volatility. The key performance
variable is risk-adjusted returns. The most widely used measure is the
Sharpe ratio, the ratio of the mean return to its standard deviation.

Higher values mean the risk-adjusted return is higher, given a particular


measure of risk.
The bottom line is that hedge fund profits arise not from accurately
predicting the direction of prices but from being able to identify transient
pricing opportunities. To believe in the ability of hedge funds to be
successful, one must disbelieve, or at least relax, the well-known efficient
markets hypothesis, which says that on average no model projecting
directional movements in asset prices will be significantly superior to
tossing a coin. In fact, however, the operations of hedge funds may be
viewed as promoting efficient markets: by actively seeking to eliminate
market mispricing, hedge funds contribute to a faster and more efficient
convergence of prices toward a market equilibrium, diminish market pricing
errors, reduce price extremes, and can help to stabilize markets, "buying low
and selling high."
Debunking hedge fund myths
In recent years, there has been a lot of debate and hand wringing about
hedge funds, their effects on global financial stability (especially since a
major U.S. hedge fund had to be bailed out in 1998; see Box 3) and the
degree of regulation or supervision to which they are subject. The reality is
that these worries are overblown. Take two of the biggest myths.
Box 3
The LTCM debacle
In all markets there are failures, and hedge funds are not immune to them.
The most famous case is that of Long-Term Capital Management (LTCM), a
well-known hedge fund that lost all its capital in the fall of 1998. A sudden
spike in market volatility in the summer of 1998 led to a very rapid increase
in LTCM's losses, forcing its liquidation. In addition to the doubtful
soundness of some of its strategies, LTCM's failure happened for two main
reasons: risk management systems in LTCM and its banks were weak; and
LTCM's investment positions had become too large relative to the total
market volume in those assets. When prices turned against it, LTCM could
not sell its holdings quickly enough. And as it engaged in fire-selling to
adjust its portfolio, its losses snowballed. Because its positions were so large
and were linked to so many other financial institutions, LTCM became a
potential systemic risk, convincing the authorities to intervene.
As a result of a thorough review of the hedge fund business following the
LTCM failure, companies that interact with hedge funds have tightened

counterparty risk management. And national and foreign financial regulatory


institutions have upgraded their supervisory oversight of hedge funds. But
perhaps even more important, there has been a rethinking within the hedge
fund industry itself, with leading hedge fund companies establishing best
practice guidelines for the industry. And follow-up evaluations, such as the
one by the Financial Stability Forum in 2002, show that risk management
discipline has increased and leverage has fallen.
Myth 1: Hedge funds can move financial markets for their own gain or
cause market turmoil. After exhaustive analysis, the U.S. Securities and
Exchange Commission (SEC) recently determined that there is little
evidence that hedge funds can move markets, and several research studies
have found no evidence that hedge funds were a cause of the Asian crisis or
other world economic turmoil (Eichengreen and others, 1998). The
unwinding of "carry trades" (borrowing at a low interest rate and lending at a
higher one) did contribute to Europe's 1993 exchange rate mechanism crisis,
the 1994-95 peso crisis, and the 1997-98 Asian crisis. But the key problem
underlying these events was the misalignment of exchange rates with respect
to their fundamentals, not the intervention of financial market participants.
In fact, the IMF study led by Eichengreen found that hedge funds, by being
willing to take the risk of buying some of the assets that had already fallen
significantly in price, contributed to limiting the downfall during the Asian
crisis and advancing the recovery.
The reality is that hedge fund activity makes financial markets more efficient
and, in many cases, more liquid, as has been widely recognized by the U.S.
Federal Reserve, the SEC, and the IMF. Not only do hedge funds contribute
to the adjustments of markets when they overshoot, they also help banks and
other creditors unbundle risks related to real economic activity by actively
participating in the market of securitized financial instruments. And because
hedge fund returns in many cases are less correlated with broader debt and
equity markets, hedge funds offer more traditional investment institutions a
way to reduce risk by providing portfolio diversification.
Myth 2: Hedge funds are unregulated and unsupervised. The fact is that
hedge funds in the United States are regulated and supervised directly or
indirectly by seven U.S. government agencies (the Federal Reserve, the
Department of Treasury, the SEC, the Commodity Futures Trading
Commission, the National Futures Association, the Comptroller of the
Currency, and the Federal Deposit Insurance Corporation) and by numerous
international agencies.
Here to stay

In sum, hedge funds are called hedge funds because they use a full array of
hedging techniques to reduce portfolio volatility. They are becoming
increasingly popular, as private ownership of capital expands worldwide and
large-scale capital owners seek to preserve their wealth in volatile markets.
In an effort to soothe worries about transparency and supervision, public
authorities are trying to develop new approaches to meet the public's need
for financial system stability and investor protection while enabling
investors to enjoy the benefits that hedge funds bring to financial markets.

Sportsbook Arbitrage - The Basics

I. Arbitrage with Sportsbook Bonuses - The Basics
What is arbitrage?
Basically, sportsbook arbitrage is exploiting price differences in a market in
order to guarantee a profit.
Sports betting offers one of the easiest, lowest-risk ways of using arbitrage to
guarantee yourself a nice return on investment (ROI), an ROI that can be
much higher than can be obtained from investing in the stock market. First,
arbitrage opportunities only exist when a book offers a line that differs
greatly from that offered at another book. This can occur when the line first
comes out, because the sports manager at one book disagrees with
the linesmaker at another book. More commonly, however, the
perceived value of a wager, like that of a stock, is ever changing. Lines are
constantly changing, and the books that are slowest to bring their lines in line
with the rest of the market will be most vulnerable to sportsbook arbitrage
opportunities. Second, sportsbook arbitrage opportunities do not last long, as
there are many people trying to profit from the same mistake. Once a book
realizes it is taking too much action on the bad side of a line, it will adjust
that line and the sportsbook arbitrage will no longer exist. Lastly, if you are
fortunate enough to be able to place a bet on the bad line, you should have
no trouble selling it by betting on the other side of the event at another
book. The greatest part about sportsbook arbitrage is that you can take a
negative-return arb and still make a nice profit; we will go over this later.
How do you know if an arbitrage exists?
It can be difficult at first to identify which lines at different sites will have a
sportsbook arbitrage opportunity. If you are using American odds (as
opposed to Decimal or Fractional odds), lines will be portrayed using a plus

or minus sign, followed by a number of 100 or higher. A line with a plus sign
is used for the underdog. The number that follows indicates how much you
will win if you place a $100 bet. For example, a line of +200 means that for
every $100 you wager you will win $200. Similarly, a minus sign is used for
the favorite. The number that follows indicates how much you must bet to
win $100. For example, a line of -200 means that you must wager $200 to
win $100.
A sportsbook arbitrage exists whenever the positive line (on the
underdog) is greater than the negative line (on the favorite).
For example, let's say Detroit plays Chicago and the line for which team is
going to win (known as the moneyline, or ML) at book A is -150 on Detroit
and +140 on Chicago. At book B, they have -170 Detroit and +155 Chicago.
Therefore, we have a situation where the positive line is greater than the
negative one. In this case, -150 with Detroit at book A and +155 with
Chicago at book B. If we placed a $500 bet on Detroit (-150) at book A, we
would win $333.33. If we place a $325 bet on Chicago (+155) at book B, we
would win $503.75. Therefore, if Detroit wins we won $333.33 at book A
but lost $325 at book B, for a total profit of $8.33. If Chicago wins, we win
$503.75 at book B but lost $500 at book A, for a total profit of $3.75. So no
matter what the result of the game, we guarantee we make something. Now,
3 or 8 dollars doesn't seem like it would be worth your time. However, we
will always be playing with bonus money given to us by the various
sportsbooks. We will go over this later, also.
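The Detroit/Chicago example boils down to converting each American line into a payout and checking that neither outcome loses money overall; a Python sketch using the same numbers:

def win_amount(stake, american_line):
    # Winnings (excluding the returned stake) for a winning bet at an American-odds line
    if american_line > 0:
        return stake * american_line / 100.0
    return stake * 100.0 / abs(american_line)

stake_a, line_a = 500.0, -150   # Detroit at book A
stake_b, line_b = 325.0, +155   # Chicago at book B

profit_if_detroit_wins = win_amount(stake_a, line_a) - stake_b
profit_if_chicago_wins = win_amount(stake_b, line_b) - stake_a
print(round(profit_if_detroit_wins, 2), round(profit_if_chicago_wins, 2))  # 8.33 3.75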
How many sportsbook arbs can be found a day?
Generally, you will be limited by the number of events available to bet on
that day. You will mostly place bets on the larger, well-known sports, like
baseball, basketball, football, hockey, and soccer (which is a little trickier
than the others).
Basically, it depends on:
Number of books used
Experience
Bankroll size
Events on given day
Expect to have no troubles finding all the sportsbook arbs you need in a
given day. Generally, only your bankroll will be your limiting factor.
What time is best for finding sportsbook arbs?
You can arb nearly 24 hours a day. You are slightly limited during the period
from 11pm (Eastern Time) until late morning (Eastern Time). These are the
times in which the betting exchanges don't have as many available bets.
Betting exchanges will be a major source of finding arbs.
How many books should I use?
Generally, you will use a couple of main books for covering your sportsbook
arbs. These include Matchbook and Pinnacle. After these two, you will be
obtaining bonuses from various online sportsbooks. Generally, you will
attempt to work 1-3 bonuses, depending on your bankroll size.
How much money do I need to get started?
You could get started with a very small amount, as low as $500. This will
allow you to work a lot of the smaller match-play bonuses at a lot of the UK
sportsbooks. However, a better figure for getting started would be $5,000.
This would allow you to go after bonuses that require a deposit in the
range of $500-$1,000. Once you are comfortable, a better figure to work
with would be $10,000-$15,000. Then, you will be able to work off several
bonuses for the max bonus amounts at the same time.
How much money can I make and how much time will I need to invest?
This is strongly dependent on bankroll size. At first, it will be slower,
because you will be learning and spending more time double checking and
being cautious. Once you catch on to the basic concepts, you will find
yourself spending less and less time working on sportsbook arbitrage. It will
get to the point where you run out of things to do fairly quickly. By the time
you get the hang of it, you will rarely invest more than one hour a day and
be able to consistently bring in an extra $200-$500 a week. I have seen
people make over $3,000 in their first month. This won't bring in enough to
do it for a living, but it can provide great extra income for those who don't
have a tremendous amount of free time.
Why not just try and pick the winning bets instead?
Sportsbook arbitrage isn't very exciting because you are never risking any
money on the games you bet on; you are basically doing a bunch of simple
math problems to place your bets. Most people who try to bet on just one
side of an event are rarely successful long-term. The ones that are invest
nearly all of their time researching lines and statistics to make their bets.
Even they can have losing years.
So, why sportsbook arbitrage over handicapping? The reason is that
sportsbook arbitrage gives you a guaranteed profit. While this isn't as
exciting as trying to pick the winner of a game, trust me, it feels good to win
money on sports betting week-in and week-out. It is very nice to be able to
bet on a game and not even care who wins or loses and yet still make money
either way.
Arbitrage

Definition: Arbitrage is the practice of taking advantage of price differences
between markets. Stock brokers do this by buying a security in one country
and immediately selling it in another for a profit.
Arbitrage is used by some Internet entrepreneurs to take advantage of the
price difference between some advertising keywords in AdWords and
AdSense.
The basic system is that someone buys an inexpensive AdWords campaign,
such as "cheap widgets" for ten cents. The ads direct anyone clicking on
them to a Web page that is optimized for a more expensive keyword, such as
"expensive widgets" for five dollars per click. If even a fraction of people
visiting "expensive-widgets.com" click on the ads, the arbitrageur has turned
a reasonable profit.
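As a back-of-the-envelope sketch (every number here is hypothetical, and in practice AdSense pays the publisher only a share of the advertiser's bid), the economics hinge on how many visitors click the higher-priced ads:

def arbitrage_profit(visitors, cost_per_click_in, revenue_per_click_out, click_through_rate):
    # Pay for every inbound click; earn only on the fraction of visitors
    # who click an outbound ad on the landing page.
    cost = visitors * cost_per_click_in
    revenue = visitors * click_through_rate * revenue_per_click_out
    return revenue - cost

# Hypothetical: 1,000 visitors bought at $0.10 each, $5.00 earned per
# outbound click, 10% of visitors click an ad on the landing page
print(arbitrage_profit(1000, 0.10, 5.00, 0.10))  # ≈ 400.0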
Although there is nothing explicitly forbidden about using arbitrage to profit
from AdSense, it is often a technique employed by low quality content
producers, and Google has shut down some very profitable accounts that
were using this technique.
ARBITRAGE STRATEGIES:
1. Convertible Arbitrage involves purchasing a portfolio of convertible
securities, generally convertible bonds, and hedging a portion of the equity
risk by selling short the underlying common stock. Certain managers may
also seek to hedge interest rate exposure under some circumstances. Most
managers employ some degree of leverage, ranging from zero to 6:1. The
equity hedge ratio may range from 30 to 100 percent. The average grade of
bond in a typical portfolio is BB-, with individual ratings ranging from AA
to CCC. However, as the default risk of the company is hedged by shorting
the underlying common stock, the risk is considerably better than the rating
of the unhedged bond indicates.
2. Distressed Securities strategies invest in, and may sell short, the
securities of companies where the security's price has been, or is expected to
be, affected by a distressed situation. This may involve reorganizations,
bankruptcies, distressed sales and other corporate restructurings. Depending
on the manager's style, investments may be made in bank debt, corporate
debt, trade claims, common stock, preferred stock and warrants. Strategies
may be sub-categorized as "high-yield" or "orphan equities." Leverage may
be used by some managers. Fund managers may run a market hedge using
S&P put options or put options spreads.

3. Equity Market Neutral investing seeks to profit by exploiting pricing


inefficiencies between related equity securities, neutralizing exposure to
market risk by combining long and short positions. Typically, the strategy is
based on quantitative models for selecting specific stocks with equal dollar
amounts comprising the long and short sides of the portfolio. One example
of this strategy is to build portfolios made up of long positions in the
strongest companies in several industries and taking corresponding short
positions in those showing signs of weakness. Another variation is investing
long stocks and selling short index futures.
4. Fixed Income: Arbitrage is a market neutral hedging strategy that seeks
to profit by exploiting pricing inefficiencies between related fixed income
securities while neutralizing exposure to interest rate risk. Fixed Income
Arbitrage is a generic description of a variety of strategies involving
investment in fixed income instruments, and weighted in an attempt to
eliminate or reduce exposure to changes in the yield curve. Managers
attempt to exploit relative mispricing between related sets of fixed income
securities. The generic types of fixed income hedging trades include: yield-curve arbitrage, corporate versus Treasury yield spreads, municipal bond
versus Treasury yield spreads and cash versus futures.
5. Merger Arbitrage, sometimes called Risk Arbitrage, involves investment
in event-driven situations such as leveraged buy-outs, mergers and hostile
takeovers. Normally, the stock of an acquisition target appreciates while the
acquiring company's stock decreases in value. These strategies generate
returns by purchasing stock of the company being acquired, and in some
instances, selling short the stock of the acquiring company. Managers may
employ the use of equity options as a low-risk alternative to the outright
purchase or sale of common stock. Most Merger Arbitrage funds hedge
against market risk by purchasing S&P put options or put option spreads.
6. Relative Value Arbitrage attempts to take advantage of relative pricing
discrepancies between instruments including equities, debt, options and
futures. Managers may use mathematical, fundamental, or technical analysis
to determine misvaluations. Securities may be mispriced relative to the
underlying security, related securities, groups of securities, or the overall
market. Many funds use leverage and seek opportunities globally. Arbitrage
strategies include dividend arbitrage, pairs trading, options arbitrage and
yield curve trading.

7. Volatility Arbitrage: Many derivatives, particularly options, are sensitive


to the levels of volatility in the market prices of securities. Volatility
arbitrage strategies aim to directly exploit mis-pricings in volatility between
options or between the relative volatility of options versus their underlying
securities. Many of these strategies aim to be long volatility in order to make
money when volatility increases but this must be carefully balanced against
the cost of the option and the potential to lose money if volatility decreases.
Managers typically employ sophisticated modeling and simulation tools to
quantify, optimise and/or hedge their exposures. Funds in the sector may
focus on single stock options, index options/futures and/or equity dispersion
trading. Some funds may also focus on non-equity volatility or a
combination of all of these.
8. Credit Long/Short: Credit long/short funds aim to achieve returns by
identifying fundamental opportunities expressed through long or short
positions in credit instruments. Returns, which can be correlated with
movements in credit spread, are generated through both carry (income) and
capital appreciation/depreciation.
9. Emerging Markets Credit: Emerging market credit funds focus on
investing in credit risks in emerging markets. Political risk is often a major
factor in determining the performance of these strategies. Funds may invest
in sovereign focused risk, dollar denominated or Brady bonds only or
include local sovereign debt, emerging markets corporate credit or distressed
debt and trade finance.
Asset Backed Securities / Leveraged Finance
10. Mortgage and Asset-Backed Credit: Mortgage- and asset-backed
securities are secured by either high value collateral, usually hard assets like
real estate, or high confidence cash flows, such as those arising from senior
secured liabilities like bank loans. Hedge funds investing in these areas
generally find opportunities either from analysis of the underlying credit
characteristics of the assets or from the complexity of the structures which
govern the coupon payments. Collateral types can include residential and
commercial mortgages, pools of receivables or bank loans.
11. Distressed: Funds employing this strategy invest primarily in the debt of
companies in financial distress or bankruptcy. Such securities typically trade
at substantial discounts to par as existing investors often sell the debt of
companies which start to experience financial distress or file for bankruptcy.

These investors either cannot or do not wish to hold debt that is undergoing
the procedural complexities of the bankruptcy or reorganisation process.
Distressed managers seek to profit by buying debt below what they estimate
to be its ultimate recovery value during or upon finalisation of the
reorganisation process.
12. Event Driven: Event driven strategies focus on capturing price
movements or anomalies generated by corporate events. Many funds are
equity-oriented but more diversified funds may invest in credit as well as
equities, although they typically hold more than 25% of their portfolios in
equities.
Relative value arbitrage encompasses a number of sub-strategies. Generally,
relative value managers seek to profit from the mis-pricings of related
financial instruments; they use quantitative and qualitative analysis to
identify securities, or spreads between securities, that deviate from their
perceived fair value and/or historical norms. Relative value sub-strategies
mainly include fixed income strategies.
Fixed Income Relative Value
Fixed income relative value funds trade a broad range of government bonds,
swaps, money markets and swaption instruments. These funds sometimes
have a small exposure to mortgages and credit but it is not their primary
focus.
13. Mortgage Relative Value: Mortgage relative value funds focus on
liquid mortgage securities. Hedge funds investing in these securities
typically model the impact of changes in interest rates and other factors on
the repayment or prepayment characteristics of an underlying pool of assets,
and attempt to identify securities that are mis-priced relative to other
mortgages in the market. They may hedge out exposure to interest rate
fluctuations using Treasuries, swaps or other fixed income derivatives. These
funds do not generally take any credit risk.
14. Municipal Bond Arbitrage: These funds focus on the municipal bond
market in the US. Municipal bonds are issued by US states, municipalities or
counties, in order to finance capital expenditures. Municipal bonds are
exempt from federal taxes and from most state and local taxes. Municipal
bond arbitrage funds seek to profit from tax rate arbitrage and non-economic
selling, often by retail investors who make up the majority of the investors in
the asset class.

15. Relative Value Diversified: The relative value diversified sector tends
to include strategies that invest outside fixed income. Examples include
commodity relative value funds, funds pursuing ADR/GDR arbitrage
strategies and closed fund discount arbitrage. It should be noted that
statistical arbitrage equity funds are typically categorised as either
systematic non-trend under Trading or equity long/short under Equity
Hedge.
