
Asset Allocation Methodologies

Now that you understand the importance of asset allocation to your long-term investing results, let's take a
closer look at the different approaches that can be used to determine your asset allocation strategy.

The first approach is basically a "rule of thumb" (or, to use the more formal term, a "heuristic"). Three "rule of
thumb" weightings are often cited in news stories and other popular media: a mix of 80% equities and
20% debt (for a high risk/high return portfolio); a mix of 60% equities and 40% debt (for a moderate
risk/moderate return portfolio); and a mix of 20% equities and 80% debt (for a low risk/low return portfolio).
Using different terminology, somebody else might call these three portfolios aggressive, balanced, and
conservative.

Now, let's say you wanted to maximize your chances of outperforming one of these benchmarks over a
single-year holding period. You could specify the goal of this portfolio as either delivering more return than
the heuristic benchmark portfolio while taking on no more risk, or delivering the same level of return while
taking on less risk. The most common approach to constructing this type of portfolio is a methodology
known as "mean/variance optimization" or MVO. This approach uses three variables for each asset class (its
expected return, standard deviation of returns, and correlation of returns with other asset classes) to
construct different combinations of portfolios which maximize return per unit of risk (another way of looking
at this is that they minimize risk per unit of return). In other words, for each of these portfolios, there is no
way to get more return for the same risk, or less risk for the same level of expected return. For that reason,
these portfolios are called "efficient", and the set that comprises all of them is called "the efficient frontier"
(because when they are plotted on a graph based on their risk and return, they form a line). The estimated
asset class returns, risks, and correlations can be derived from historical data, from the outputs of a
forward-looking asset pricing model, or from some combination of the two.
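
To make the mechanics concrete, here is a simplified sketch, in Python, of how an MVO calculation can be set up. The asset classes, expected returns, standard deviations, and correlations are illustrative assumptions only (not our estimates), and the generic scipy optimizer simply stands in for whatever solver a practitioner might actually use.

```python
# A minimal mean/variance optimization (MVO) sketch. All inputs below are
# illustrative assumptions, not recommendations or forecasts.
import numpy as np
from scipy.optimize import minimize

asset_names = ["Domestic bonds", "Domestic equities", "International equities"]
exp_ret = np.array([0.03, 0.07, 0.08])       # assumed expected real returns
stdev   = np.array([0.06, 0.18, 0.20])       # assumed standard deviations
corr    = np.array([[1.0, 0.2, 0.2],
                    [0.2, 1.0, 0.7],
                    [0.2, 0.7, 1.0]])         # assumed correlations
cov = np.outer(stdev, stdev) * corr           # covariance matrix

def min_risk_for_target(target_return):
    """Lowest-variance, long-only, fully invested portfolio hitting a return target."""
    n = len(exp_ret)
    constraints = [
        {"type": "eq", "fun": lambda w: w.sum() - 1.0},               # fully invested
        {"type": "eq", "fun": lambda w: w @ exp_ret - target_return}, # return target
    ]
    result = minimize(lambda w: w @ cov @ w, x0=np.full(n, 1.0 / n),
                      bounds=[(0.0, 1.0)] * n, constraints=constraints)
    return result.x, np.sqrt(result.fun)

# Tracing out targets between the lowest and highest expected returns sketches
# the "efficient frontier".
for target in np.linspace(exp_ret.min(), exp_ret.max(), 5):
    weights, risk = min_risk_for_target(target)
    print(f"target return {target:.1%}: risk {risk:.1%}, weights {np.round(weights, 2)}")
```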

However, the MVO methodology has some significant limitations. While it is a good approach to single-year
portfolio optimization problems, in multiyear settings (e.g., a typical investor's time horizon) it fails to
adequately take into account the fact that poor portfolio performance in early years can substantially reduce
the probability of achieving long-term goals. It also fails to adequately capture most people's intuitive
understanding of risk: what matters isn't standard deviation (the dispersion of annual returns around their
mean), but rather the chance of falling short of one's long-term goals. Finally, it has trouble handling the
fact that people often pursue multiple goals (e.g., risk minimization alongside a minimum probability of
achieving their long-term target rate of return) and/or set constraints on their portfolios (e.g., limits on the
percentage invested in different asset classes), particularly in a multi-period analysis.

This brings us to the third major approach to asset allocation, which is called simulation optimization, or SO
(which is also known as "stochastic optimization"). We use this approach to develop our model portfolios,
which we call our "target return" portfolios. Our goal in employing the simulation optimization methodology is
to develop a multi-period asset allocation solution that is robust enough to achieve an investor's objectives
under a range of possible future asset class return scenarios. In practice, SO works like this: First, you
define your investment objectives. For example, you could have two objectives: (a) given historical data,
ensure that the investor's minimum required rate of return will be met (on a compound annual rate of return
basis) over a twenty year time horizon with a minimum level of probability (e.g., "I want to be at least 90%
sure my portfolio will earn a real compound annual rate of return of at least 5% over the twenty years I've
got left before I retire"); and (b) achieve this while taking on as little risk as possible ("and I don't want to ride
a roller coaster on the way there!"). The next step is to define the average annual return, standard deviation,
and correlation assumptions for the asset classes under consideration. The third step is to define the
maximum amounts that can be invested in any single asset class. So far, this sounds just like the MVO
approach, doesn't it? The difference lies in the way the SO approach uses these inputs. Rather than using
them to construct a single period optimization solution (for the technically minded, to calculate an efficient
frontier -- that is, the set of different portfolios that maximizes return for any given level of risk), SO uses
these inputs as the basis for a complex Monte Carlo simulation analysis.

Here's how it works. SO starts with a "candidate" asset allocation solution (e.g., 30% domestic bonds, 40%
domestic equities, and 30% international equities). It then uses the asset class inputs to calculate a scenario
covering the twenty-year holding period. For example, if you had seven asset classes and a holding period
of twenty years, the scenario would contain 140 different annual returns (7 x 20). Once the scenario has been
defined, the SO approach checks to see how well the candidate asset allocation strategy satisfies the
investor's objective. It then repeats this process many times (we use from 2,000 to 5,000 scenarios per
asset allocation strategy) to develop a clear picture of the range of outcomes the candidate asset allocation
strategy might produce. But the SO approach doesn't stop here. It repeats the process over and over again
to test other asset allocation strategies that might do a better job of achieving the investor's goals.
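
Here is a much-simplified sketch of this scenario-testing step for a single candidate allocation. It assumes normally distributed annual returns, annual rebalancing back to the candidate weights, and illustrative inputs; the point is simply to show how simulating many twenty-year scenarios yields an estimate of the probability of hitting a target compound return.

```python
# A simplified sketch of testing one candidate allocation against many
# simulated twenty-year scenarios. All inputs are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

weights = np.array([0.30, 0.40, 0.30])   # candidate: bonds / domestic eq / intl eq
exp_ret = np.array([0.03, 0.07, 0.08])   # assumed expected real returns
stdev   = np.array([0.06, 0.18, 0.20])
corr    = np.array([[1.0, 0.2, 0.2],
                    [0.2, 1.0, 0.7],
                    [0.2, 0.7, 1.0]])
cov = np.outer(stdev, stdev) * corr

years, n_scenarios, target_cagr = 20, 5000, 0.05
successes = 0
for _ in range(n_scenarios):
    # One scenario = a full set of annual returns (years x asset classes).
    annual_returns = rng.multivariate_normal(exp_ret, cov, size=years)
    portfolio_returns = annual_returns @ weights      # rebalanced each year
    cagr = np.prod(1.0 + portfolio_returns) ** (1.0 / years) - 1.0
    successes += (cagr >= target_cagr)

print(f"P(compound annual return >= {target_cagr:.0%} over {years} years): "
      f"{successes / n_scenarios:.1%}")
```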

It is important to understand how these alternative asset allocation strategies are determined. Because of
the nature of the optimization problem (multiple years of outcomes, with multiple possible asset class
combinations and constraints on which ones are permissible), it is not possible to use the single-period
quadratic programming algorithm that solves the MVO problem. Moreover, given the large number of possible solutions, a "brute force"
solution ("test all of them") is, from a computational perspective, out of the question. What is needed instead
is a process that intelligently searches the landscape of possible asset allocation solutions for one which is
likely (but not guaranteed) to be at least one of the best available (in terms of its probability of achieving the
investor's goals under a wide range of future asset class return scenarios). Our SO model uses a stochastic
search process (technically, scatter/tabu search with a neural network accelerator) to identify different asset
allocation strategies to test. The result of our SO methodology cannot be said to be "optimal" in the same
sense that a one year MVO approach produces an optimal solution. Instead, the goal of our SO approach is
to produce a robust solution -- that is, one that, in comparison to other strategies, has a relatively higher
probability of achieving the investor's goals under a wide range of future conditions.
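
Our actual search machinery (scatter/tabu search with a neural network accelerator) is well beyond the scope of this article. The sketch below substitutes a deliberately crude alternative -- random perturbation ("hill climbing") of candidate weight vectors against a stand-in scoring function -- purely to illustrate the idea of searching the landscape of allocations rather than enumerating it.

```python
# A deliberately crude stand-in for the search step: random perturbation of
# candidate weight vectors, keeping any change that improves the score. The
# scoring function below is a toy placeholder; in a real SO run it would be
# the Monte Carlo estimate of the probability of meeting the investor's goals.
import numpy as np

rng = np.random.default_rng(1)
n_assets = 7
exp_ret = np.linspace(0.02, 0.08, n_assets)   # illustrative expected returns
stdev   = np.linspace(0.04, 0.20, n_assets)   # illustrative volatilities

def score(weights):
    # Toy objective: reward expected return, penalize (uncorrelated) variance.
    return weights @ exp_ret - 2.0 * np.sum((weights * stdev) ** 2)

def normalize(w):
    w = np.clip(w, 0.0, None)                 # no short positions
    return w / w.sum()

best = np.full(n_assets, 1.0 / n_assets)      # start from equal weights
best_score = score(best)

for _ in range(10_000):
    candidate = normalize(best + rng.normal(scale=0.05, size=n_assets))
    candidate_score = score(candidate)
    if candidate_score > best_score:          # keep the better allocation
        best, best_score = candidate, candidate_score

print("best allocation found:", np.round(best, 3))
```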

The advantages of the SO approach are that it handles time, uncertainty, and multiple investor objectives
much better than either the heuristic or MVO methodologies. However, it is also subject to some of the
limitations that also bedevil the MVO approach. Let's look at those in more detail.

All quantitative approaches to asset allocation (e.g., MVO and SO) suffer from some shortcomings. First, the
benefits of diversification are not infinite. Statistically, they begin to fall off sharply after relatively few
different asset classes have been included in a portfolio. For example, where the average correlation
between asset classes is 0.6, the benefits fall off after five asset classes; with an average correlation of 0.4,
that number only rises to eight. After these points have been reached, the way to achieve further risk
reduction is by taking short positions in certain assets, which is an approach far beyond the skills of most
retail investors. Despite this, many investors insist on "diversifying" their portfolios across too many highly
correlated assets. For example, you get far more diversification benefit from combining bonds and
equities than you do from combining small-capitalization value stocks with large-capitalization growth
stocks -- yet this remains a mistake that many investors continue to make.
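
The arithmetic behind this falloff is easy to check. For an equally weighted portfolio of n asset classes that (purely for illustration) all share the same volatility and the same average pairwise correlation, portfolio volatility levels off quickly as n grows:

```python
# Volatility of an equally weighted portfolio of n asset classes, each with an
# assumed 15% volatility and a common average pairwise correlation.
import numpy as np

sigma = 0.15
for avg_corr in (0.6, 0.4):
    print(f"average correlation {avg_corr}:")
    for n in (1, 2, 5, 8, 20, 100):
        # variance = sigma^2 * (1/n + (1 - 1/n) * avg_corr) for equal weights
        port_vol = sigma * np.sqrt(1.0 / n + (1.0 - 1.0 / n) * avg_corr)
        print(f"  {n:3d} asset classes -> portfolio volatility {port_vol:.1%}")
```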

The second major issue that many people don't fully appreciate is that the inputs used in asset allocation
processes are themselves only statistical estimates of the "true" values for these variables. Techniques such
as resampling (essentially, using Monte Carlo simulation to make these statistical estimates explicit) show
that, because of the possibility of estimation error, many portfolios with different asset allocations are
statistically indistinguishable from one another in terms of their expected risk and return. (Another point is
that mean/variance optimizers overweight asset classes with higher than average expected returns, lower
than average expected variance, and lower estimated correlations of returns with other assets -- and these
are exactly the assets most likely to carry the greatest estimation error.) Practically, the lesson here is
that rebalancing a portfolio should only be done when the asset weights get significantly out of line with the
long-run asset allocation (see our site for more details about this).
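
The sketch below illustrates the basic resampling idea with made-up inputs: draw repeated samples from one assumed "true" return distribution, re-estimate means and covariances from each sample, re-run a simple mean/variance optimization, and observe how widely the "optimal" weights swing purely because of estimation error.

```python
# A minimal resampling sketch: the spread of "optimal" weights across
# resampled inputs is a direct picture of estimation error. Inputs are
# illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
true_mean  = np.array([0.03, 0.07, 0.08])
true_stdev = np.array([0.06, 0.18, 0.20])
corr = np.array([[1.0, 0.2, 0.2], [0.2, 1.0, 0.7], [0.2, 0.7, 1.0]])
true_cov = np.outer(true_stdev, true_stdev) * corr

def optimal_weights(mean, cov, risk_aversion=4.0):
    """Long-only portfolio maximizing mean - (risk_aversion / 2) * variance."""
    n = len(mean)
    res = minimize(lambda w: -(w @ mean - 0.5 * risk_aversion * w @ cov @ w),
                   x0=np.full(n, 1.0 / n), bounds=[(0.0, 1.0)] * n,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    return res.x

weight_sets = []
for _ in range(500):
    sample = rng.multivariate_normal(true_mean, true_cov, size=20)  # 20 "years"
    weight_sets.append(optimal_weights(sample.mean(axis=0),
                                       np.cov(sample, rowvar=False)))

weight_sets = np.array(weight_sets)
print("average weights across resamples:", np.round(weight_sets.mean(axis=0), 2))
print("std. dev. of weights            :", np.round(weight_sets.std(axis=0), 2))
```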

The third issue that affects asset allocation models is the fact that the historical returns for many asset
classes are not normally distributed, and have "fatter tails" than would be the case in a normal distribution.
Statistically, this means extreme events are more likely to happen than would be the case if the returns were
normally distributed. How much more likely? Fortunately, a 19th century Russian mathematician named
Pafnuty Chebyshev worked out an upper bound. In the case of a normal distribution, the range defined as the mean
(average) plus or minus two standard deviations covers about 95 percent of possible outcomes, while
plus or minus three standard deviations covers more than 99 percent. Chebyshev showed that if you cannot
assume the distribution is normal, you could need (at most) about four and a half standard deviations to cover
95 percent of the possible outcomes, and ten standard deviations to capture 99 percent. Unfortunately, the
assumption of normality is often needed to make asset allocation models computationally feasible, and most
investors aren't told that, as a result, those models can provide a false sense of confidence about the "worst
case" outcomes that could occur. Practically, this means that there is more risk inherent in high-volatility
asset classes (like equities) than people may realize, and that more conservative asset allocations are
probably more effective in the long term (a point we've taken to heart in the construction of our target return
model portfolios).
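
For the curious, the Chebyshev arithmetic behind those figures is a one-liner:

```python
# Chebyshev's inequality: at least 1 - 1/k^2 of any distribution with finite
# variance lies within k standard deviations of the mean, whatever its shape.
import math

for coverage in (0.95, 0.99):
    k = math.sqrt(1.0 / (1.0 - coverage))   # solve 1 - 1/k^2 = coverage
    print(f"{coverage:.0%} of outcomes lie within {k:.1f} standard deviations")
# Roughly 4.5 and 10.0, versus about 2.0 and 2.6 for a normal distribution.
```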

The fourth issue that affects asset allocation models is the fact that the underlying economic processes that
generate the return distributions they use as inputs are not themselves stable (or, as they say in statistics,
they are not "stationary"). The evidence in support of this observation is quite strong: for example, standard
deviations (also known as volatility) are not stable across time; rather, they tend to cluster in "regimes" of
high and low volatility. The same is true for the correlations of returns between asset classes:
there is a lot of data showing that correlations tend to increase during bad times and then fall back
during good times. Anyone looking for an example of this need look no further than the almost uniformly
dismal performance of the world's major equity markets in 2002. Equally important (as described in the
book Iceberg Risk by Kent Osband), the correlation of returns between different asset classes does not
capture the risk posed by the fact that these returns could be driven, to an important degree, by exposure to
a common factor or factors. We undoubtedly saw this with respect to the performance of the world's equity
markets in 2002, with their collective exposure to the U.S. economy as the engine of the world's economic
growth. And we may see it again in the future if deflationary conditions in the global economy cause the
wheels to come off the world's derivative markets.

Developing new ways to deal with this "non-stationarity" risk has become quite a hot topic in the financial
world. Practically, there are relatively few results so far, apart from "stress testing" one's portfolio by setting
all the correlation coefficients equal to one and seeing if one could live with the result. Going forward, this is
an approach we will be incorporating into our future modeling efforts. In sum, for many reasons, asset
allocation still remains at best an imperfect (and imperfectly understood) science, if not an art. Despite the
apparent precision of the models that are often used for this purpose, at best they can only increase the
probability of achieving your goals -- they cannot guarantee it.
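
For illustration, the simplest version of that stress test just recomputes portfolio volatility with every pairwise correlation forced to 1.0 (the weights and volatilities below are made up):

```python
# Correlation stress test: compare portfolio volatility under "normal" assumed
# correlations with the case where every correlation is set to 1.0.
import numpy as np

weights = np.array([0.30, 0.40, 0.30])
stdev   = np.array([0.06, 0.18, 0.20])
corr_normal = np.array([[1.0, 0.2, 0.2], [0.2, 1.0, 0.7], [0.2, 0.7, 1.0]])
corr_stress = np.ones((3, 3))                 # diversification assumed away

for label, corr in (("assumed correlations", corr_normal),
                    ("stress test (corr = 1)", corr_stress)):
    cov = np.outer(stdev, stdev) * corr
    vol = np.sqrt(weights @ cov @ weights)
    print(f"{label}: portfolio volatility {vol:.1%}")
# With all correlations at 1.0, portfolio volatility collapses to the weighted
# average of the individual volatilities -- the diversification benefit is gone.
```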

However, there are steps one can take to minimize the impact of the limitations that bedevil all asset
allocation models.

With respect to estimation errors, the first important point was made by Chopra and Ziemba in their 1993
Journal of Portfolio Management article entitled "The Effect of Errors in Means, Variances, and Covariances
on Optimal Portfolio Choice." Their key finding was that the estimation error of the mean is 10x as important
as the estimation error of the variance, which in turn is 2x as important as the estimation error of the
covariance (except where the number of assets is large, in which case the covariance error becomes more
important). The second key point is that resampling is only one way of taking estimation errors into account
in the asset allocation process. There are also other qualitative and quantitative approaches one can take.

First, there are a number of simple heuristic approaches. These include giving equal weights to all asset
classes, or excluding return estimates altogether and simply optimizing to minimize risk. Another heuristic
approach is to put constraints on the maximum weight that can be given to any asset class. The balance of
theoretical argument on the merits of this approach seems to favor the view that in general, it does more
good than harm. This view is also reinforced by the finding that resampling analysis typically concludes that
many "intuitive" portfolios (which generally include either explicit or implicit asset class constraints) are within
the "efficient region" of statistically equivalent portfolios.

Resampling is one of three more quantitative approaches to dealing with estimation risk. Its primary benefit
seems to be that it results in less rebalancing, due to the statistical equivalence it demonstrates between
many efficient portfolios. On the other hand, resampling has two key shortcomings. First, portfolios with
statistically equal risk/return trade-offs can have very different asset weights, which leaves more room for
discretion than some (but certainly not all) people might prefer. More important, because all resampled
returns are drawn from the same distribution, resampling implicitly assumes that the underlying return
generating process is stationary. Unfortunately, a number of research papers (not to mention the existence
of clustered volatility in most asset classes) have demonstrated that this is not the case. For example, in
"Structural Change and the Predictability of Stock Returns", Rapach and Wohar "find, in the period since
World War Two, evidence of structural breaks in seven of the eight predictive models for the S&P 500" that
they study. They note that these breaks occur for many reasons, including changes in political conditions
(e.g., war), economic conditions (e.g., monetary or tax policy), and financial market conditions (e.g.,
bubbles). The net result is "significant parameter uncertainty in the use of predictive models."

The second, more quantitative approach to dealing with estimation errors has been proposed by Horst, de
Roon, and Werker in their paper "Incorporating Estimation Risk in Portfolio Choice." In essence, they
propose using a higher-than-actual level of risk aversion when determining mean/variance efficient portfolios. You
can see the relationship between this approach and the heuristic one of simply setting a maximum constraint on
the allocation to certain asset classes, which is another way of increasing your de facto risk aversion.

The third quantitative approach is using Bayesian estimators that combine prior beliefs with sample returns
to generate posterior beliefs. The challenge here is deciding what prior belief about the distribution of returns
one should use. A number of different alternatives have been proposed, including a grand mean (that is, the
average of all the sample means for all the asset classes under consideration), the same mean for all asset
classes, and the outputs from a theoretically sound asset pricing model (which also introduces potential
model error into the estimating process). This choice matters, because the nature of the Bayesian
approach is to shrink the sample means toward whichever prior is chosen.
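
A minimal sketch of that shrinkage idea, using a grand-mean prior and an arbitrary 50/50 shrinkage factor:

```python
# Bayesian-style shrinkage of sample mean returns toward a grand-mean prior.
# The sample means and the shrinkage factor are illustrative assumptions.
import numpy as np

sample_means = np.array([0.03, 0.07, 0.08])   # e.g., historical average returns
prior = np.full(3, sample_means.mean())       # grand-mean prior
shrinkage = 0.5                               # weight placed on the prior

posterior_means = shrinkage * prior + (1.0 - shrinkage) * sample_means
print("prior     :", np.round(prior, 3))
print("sample    :", np.round(sample_means, 3))
print("posterior :", np.round(posterior_means, 3))
# The noisier the sample means, the more weight a Bayesian estimator would
# place on the prior (i.e., the larger the shrinkage factor).
```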

We use this Bayesian approach when developing the asset class inputs for our model portfolios. We first
derive two different sets of future asset class return estimates. The first is based on historical real returns.
The second set is derived from a forward-looking asset pricing model. Both the historical and the model-
based approaches have their strengths and weaknesses; combining them should theoretically produce a
better estimate of future returns. An interesting point here is the weight we gave to the return estimates
from each approach. We chose 0.50 for the historical estimates and 0.50 for the model-based estimates. This is
consistent with academic research findings that simple averaging often outperforms more complex forecast
combination techniques. We note, however, that reasonable people can and do disagree on such matters.
As we've written many times, science, even with clearly explained theory, can only take you so far; at some
point in the asset allocation process, you cannot escape the need for informed judgment in the face of
unavoidable uncertainty.

Now that we have discussed the different methodologies, let's
