
Length: 684 pages

Mathematical finance is a prolific scientific domain in which there exists a particular characteristic of developing both advanced theories and practical techniques simultaneously. *Mathematical Modelling and Numerical Methods in Finance* addresses the three most important aspects in the field: mathematical models, computational methods, and applications, and provides a solid overview of major new ideas and results in the three domains.
Coverage of all aspects of quantitative finance including models, computational methods and applications
Provides an overview of new ideas and results
Contributors are leaders of the field

Publisher: Elsevier Science • Released: Jun 16, 2009 • ISBN: 9780080931005 • Format: book


Alain Bensoussan

*International Center for Decision and Risk Analysis (ICDRiA), School of Management, University of Texas at Dallas, SM 30, Richardson, TX 75083-0688, USA *

Qiang Zhang

*Department of Mathematics and Department of Economics and Finance, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon, Hong Kong *

ISSN 1570-8659

Volume 15 • 2009

**Cover image **

**Title page **

**Handbook of Numerical Analysis **

**Front Matter **

**Special Volume: Mathematical Modeling and Numerical Methods in Finance **

**Copyright page **

**General Preface **

**Model Risk in Finance: Some Modeling and Numerical Analysis Issues **

**1 Introduction **

**2 Limitations of statistical procedures based on historical data **

**3 On calibration methods in finance **

**4 On Monte Carlo approximations of the VaR of model risk P&Ls **

**5 A stochastic game to face model risk **

**6 Model risk and technical analysis **

**Theorem 6.1 **

**7 Conclusion **

**Robust Preferences and Robust Portfolio Choice **

**1 Introduction **

**2 Robust preferences and monetary risk measures **

**3 Robust portfolio choice **

**4 Portfolio choice under robust constraints **

**Stochastic Portfolio Theory: an Overview **

**Abstract **

**Contents **

**Introduction **

**CHAPTER I Basics **

**CHAPTER II Diversity & Arbitrage **

**CHAPTER III Functionally Generated Portfolios **

**CHAPTER IV Abstract Markets **

**Asymmetric Variance Reduction for Pricing American Options **

**Abstract **

**1 Introduction **

**2 Primal and dual formulations of American option prices **

**3 Numerical results I: one-dimensional case **

**4 Numerical results II: stochastic volatility **

**5 Conclusion **

**Appendix A **

**Downside and Drawdown Risk Characteristics of Optimal Portfolios in Continuous Time **

**Abstract **

**1 Introduction **

**2 Literature review **

**3 Summary of dynamic strategies **

**4 Various risk measures **

**5 More on mean-downside-risk models **

**6 Below-mean SV **

**7 VaR **

**8 Conditional VaR **

**9 Average drawdown **

**10 Maximum drawdown **

**11 Correlations between different risk measures **

**12 Conclusions **

**Appendix A: Derivation of average drawdown **

**Appendix B: Proofs related to SV **

**Investment Performance Measurement Under Asymptotically Linear Local Risk Tolerance1 **

**Abstract **

**1 Introduction **

**2 The model and its investment performance measurement **

**3 Asymptotically linear local risk tolerance functions **

**4 At the optimum **

**5 Special cases: CARA and CRRA forward performance processes **

**Malliavin Calculus for Pure Jump Processes and Applications to Finance **

**Abstract **

**1 Introduction **

**2 Malliavin calculus for simple functionals **

**3 Integration by parts formula for pure jump processes **

**4 Sensitivity analysis for European options using integration by parts formula **

**5 Sensitivity analysis for European options using the Bismut approach **

**6 American option pricing **

**On the Discrete Time Capital Asset Pricing Model **

**Abstract **

**1 Introduction **

**2 A probability setup **

**3 Description of the market **

**4 Optimal portfolio and consumption **

**5 Dynamic programming approach **

**6 Markovian framework **

**7 Bellman equation **

**8 Continuous time framework **

**Numerical Approximation by Quantization of Control Problems in Finance Under Partial Observations **

**Abstract **

**1 Introduction **

**2 Problem setup **

**3 Filtering and dynamic programming **

**4 Approximation by quantization and error analysis **

**5 Financial application: European option hedging in a partially observed stochastic volatility model **

**Recombining Binomial Tree Approximations for Diffusions **

**Abstract **

**1 The methodology **

**2 One-dimensional examples **

**3 A two-dimensional example **

**Partial Differential Equations for Option Pricing **

**Contents **

**Introduction **

**CHAPTER I One-Dimensional Partial Differential Equations For Option Pricing **

**CHAPTER II Multidimensional Partial Differential Equations For Option Pricing **

**CHAPTER III Sensitivity and Calibration **

**Advanced Monte Carlo Methods for Barrier and Related Exotic Options **

**Abstract **

**1 Introduction **

**2 Brownian bridge techniques **

**3 Shifting the barrier **

**4 Numerical tests **

**Appendix A. Numerical approximation of the expected overshoot of a Gaussian random walk **

**Real Options **

**Abstract **

**1 Introduction **

**2 Tradable assets **

**3 Valuation of contingent claims **

**4 Valuation of a project **

**5 Valuation of an option to invest **

**6 Extensions **

**7 Uncertainties on investment **

**8 Uncertainties due to incentives **

**9 The option of abandonment **

**10 The option of mothballing **

**11 Reflecting barriers **

**12 Equilibrium model **

**Anticipative Stochastic Control for Lévy Processes With Application to Insider Trading **

**Abstract **

**1 Introduction **

**2 Framework: forward anticipating calculus **

**3 Optimal portfolio problem for a large insider **

**4 Examples **

**5 Acknowledgment **

**Optimal Quantization for Finance: From Random Vectors to Stochastic Processes **

**Abstract **

**1 Introduction **

**2 What is quadratic quantization? **

**3 Optimal (quadratic) quantization **

**4 Cubature formulae: conditional expectation and numerical integration **

**5 Vector quantization **

**6 Optimal quadratic functional quantization of Gaussian processes **

**7 Constructive functional quantization of diffusions **

**8 Applications to path-dependent option pricing **

**9 Universal quantization rate and mean regularity **

**10 About lower bounds for functional quantization **

**11 Toward new applications: a guided Monte Carlo method **

**12 Acknowledgment **

**Stochastic Clock and Financial Markets **

**Abstract **

**1 Introduction **

**2 Time changes in mathematics **

**Analytical Approximate Solutions to American Barrier and Lookback Option Values **

**Abstract **

**1 Introduction **

**2 Barrier options **

**3 Lookback options **

**4 Validation of analytical approximate solutions **

**5 Acknowledgments **

**Asset Prices With Regime-Switching Variance Gamma Dynamics **

**Abstract **

**1 Introduction **

**2 The model for asset price returns **

**3 VG processes under absolutely continuous changes of measures **

**4 Estimating model parameters **

**5 Robust statistics **

**6 Simulation **

**7 Statistical results for the S&P500 **

**8 Conclusion **

**Index **

*General Editor: *

**P.G. Ciarlet **

*Laboratoire Jacques-Louis Lions *

*Université Pierre et Marie Curie *

*4 Place Jussieu *

*75005 PARIS, France *

*and *

*Department of Mathematics *

*City University of Hong Kong *

*Tat Chee Avenue *

*KOWLOON, Hong Kong *

AMSTERDAM • BOSTON • HEIDELBERG • LONDON • NEW YORK • OXFORD • PARIS • SAN DIEGO • SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO

North-Holland is an imprint of Elsevier

**Front Matter **

**Volume XV **

*Guest Editors: *

**Alain Bensoussan **

*International Center for Decision and Risk Analysis (ICDRiA)*,

*School of Management, University of Texas at Dallas*,

*SM 30, Richardson, TX 75083-0688, USA *

**Qiang Zhang **

*Department of Mathematics and Department of Economics and Finance*,

*City University of Hong Kong*,

*83 Tat Chee Avenue, Kowloon, Hong Kong *

North-Holland is an imprint of Elsevier

The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, UK

Radarweg 29, PO Box 211, 1000 AE Amsterdam, The Netherlands

Copyright © 2009 Elsevier B.V. All rights reserved.

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means electronic, mechanical, photocopying, recording or otherwise without the prior written permission of the publisher.

Permissions may be sought directly from Elsevier's Science & Technology Rights Department in Oxford, UK: phone (+44) (0) 1865 843830; fax (+44) (0) 1865 853333; email: permissions@elsevier.com. Alternatively you can submit your request online by visiting the Elsevier web site at http://elsevier.com/locate/permissions, and selecting *Obtaining permission to use Elsevier material*.

**Notice **

No responsibility is assumed by the publisher for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions or ideas contained in the material herein. Because of rapid advances in the medical sciences, in particular, independent verification of diagnoses and drug dosages should be made.

**British Library Cataloguing in Publication Data **

A catalogue record for this book is available from the British Library

**Library of Congress Cataloging-in-Publication Data **

A catalog record for this book is available from the Library of Congress

ISBN: 978-0-444-51879-8

For information on all North-Holland publications visit our website at **elsevierdirect.com **

Printed and bound in Great Britain

08 09 10 10 9 8 7 6 5 4 3 2 1

**P.G. Ciarlet **

July 2002

In the early eighties, when Jacques-Louis Lions and I considered the idea of a *Handbook of Numerical Analysis*, we carefully laid out specific objectives, outlined in the following excerpts from the General Preface, which has appeared at the beginning of each of the volumes published so far:

*During the past decades, giant needs for ever more sophisticated mathematical models and increasingly complex and extensive computer simulations have arisen. In this fashion, two indissociable activities, mathematical modeling and computer simulation, have gained a major status in all aspects of science, technology and industry. *

*In order that these two sciences be established on the safest possible grounds, mathematical rigor is indispensable. For this reason, two companion sciences, Numerical Analysis and Scientific Software, have emerged as essential steps for validating the mathematical models and the computer simulations that are based on them. *

*Numerical Analysis is here understood as the part of Mathematics that describes and analyzes all the numerical schemes that are used on computers; its objective consists in obtaining a clear, precise, and faithful, representation of all the information contained in a mathematical model; as such, it is the natural extension of more classical tools, such as analytic solutions, special transforms, functional analysis, as well as stability and asymptotic analysis. *

*The various volumes comprising the Handbook of Numerical Analysis will thoroughly cover all the major aspects of Numerical Analysis, by presenting accessible and in-depth surveys, which include the most recent trends. *

*More precisely, the Handbook will cover the basic methods of Numerical Analysis, gathered under the following general headings: *

– Solution of Equations in *Rn*,

– Finite Difference Methods,

– Finite Element Methods,

– Techniques of Scientific Computing.

*It will also cover the numerical solution of actual problems of contemporary interest in Applied Mathematics, gathered under the following general headings: *

– Numerical Methods for Fluids,

– Numerical Methods for Solids.

In retrospect, it can be safely asserted that Volumes I to IX, which were edited by both of us, fulfilled most of these objectives, thanks to the eminence of the authors and the quality of their contributions.

After Jacques-Louis Lions' tragic loss in 2001, it became clear that Volume IX would be the last one of the type published so far, i.e., edited by both of us and devoted to some of the general headings defined above. It was then decided, in consultation with the publisher, that each future volume will instead be devoted to a single "*specific application*" and called for this reason a *Special Volume*.

*Specific applications* will include Mathematical Finance, Meteorology, Celestial Mechanics, Computational Chemistry, Living Systems, Electromagnetism, Computational Mathematics, etc. It is worth noting that the inclusion of such "specific applications" in the *Handbook of Numerical Analysis* was part of our initial project.

To ensure the continuity of this enterprise, I will continue to act as Editor of each Special Volume, whose conception will be jointly coordinated and supervised by a Guest Editor.

**Part 1 **

Mathematical Models

**Model Risk in Finance: Some Modeling and Numerical Analysis Issues **

**Denis Talay, INRIA, 2004 Route des Lucioles, B.P. 93, 06902 Sophia-Antipolis, France **

The impact of erroneous models and measurements is an important issue in all scientific and technological fields: equations and measurement devices provide approximate descriptions of our real world so that one needs to estimate and possibly control the effects of misspecifications during the modeling and calibration process.

In fields such as physics, conservation laws constrain the models and the values of the model parameters, even when some stochasticity is involved to take uncertainties into account. Likewise, to solve numerically a partial differential equation (PDE) describing macroscopic quantities whose state space is unbounded, one needs to introduce artificial boundary conditions that allow one to compute the solution within a bounded domain. The design of these boundary conditions is a difficult issue, but one may be helped by intuitive considerations on the physical phenomenon under study. For example, to compute turbulent flows around airplane wings, one may assume that, away from the airplane, the velocity of the flow is equal to the wind velocity, and one may thus derive reasonable approximate Dirichlet conditions from a reasonable physical model.

In finance, modeling issues are much more complex than in physics for, at least, the following reasons.

First, no physical law helps the modeler choose a particular dynamics to describe the time evolution of market prices or indices. The real market is incomplete and arbitrages occur. Moreover, no stationarity argument can justify that parameters estimated from historical data will keep the same values in the near future. Therefore, the modeler has a high degree of freedom in describing the market mathematically in order to compute optimal portfolio allocations or risk measures. For example, authors propose to model the volatility of a stock as a deterministic function of the stock (and possibly of exogenous factors) or as a stochastic process; the stochastic differential equations involved in the models may be driven by Brownian motions or by discontinuous Lévy processes; the bond market is modeled by short-term dynamics or by Heath–Jarrow–Morton (HJM) equations. In addition, to compute option prices and deltas, practitioners and quants find it convenient to suppose that the no-arbitrage and completeness hypotheses prevail: in diffusion models, this assumption constrains the dimension and the algebraic structure of the volatility matrix, so that the model used to hedge may not exactly fit the market data.

Second, statistical procedures issued from the theory of statistics of random processes and based upon historical data may be extremely inaccurate because of the lack of data. For example, an accurate parametric estimation of a volatility matrix requires that the asset price is observed at very high frequencies. Likewise, the parametric estimators of a drift parameter may need long-time observations to provide reliable results (see our illustration in Section 2.1). In such a case, one needs to assume that, during the whole period, the model remains relevant and its parameters remain constant. Of course, it would be unwise to use historical data only to calibrate financial models: in order to calibrate a stock price model, practitioners actually consider not only the past prices of the stock but also other available information such as past prices of derivatives on this stock (see, e.g., papers and references in AVELLANEDA [2001]). However, the stationarity of the market during the observation period remains questionable, and error estimates for complex calibration methods are not available in the literature.
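This asymmetry between volatility and drift estimation can be made concrete with a small Monte Carlo sketch (the parameter values MU, SIGMA, and T below are illustrative assumptions, not taken from the text): over a fixed one-year window, refining the sampling grid sharpens the realized-volatility estimate, but leaves the drift estimate's error essentially unchanged.

```python
# Illustrative sketch: over a fixed horizon T, volatility estimation improves
# with the sampling frequency, while drift estimation does not.
import math
import random
import statistics

random.seed(0)
MU, SIGMA, T = 0.08, 0.2, 1.0   # assumed "true" model parameters

def simulate_log_returns(n_steps):
    """Log-returns of a geometric Brownian motion on [0, T] with n_steps steps."""
    dt = T / n_steps
    return [(MU - 0.5 * SIGMA**2) * dt + SIGMA * math.sqrt(dt) * random.gauss(0, 1)
            for _ in range(n_steps)]

def estimate(n_steps, n_paths=2000):
    """Monte Carlo standard deviations of the realized-vol and drift estimators."""
    vol_est, drift_est = [], []
    for _ in range(n_paths):
        r = simulate_log_returns(n_steps)
        rv = sum(x * x for x in r)                 # realized variance, close to sigma^2 * T
        sigma_hat = math.sqrt(rv / T)
        mu_hat = sum(r) / T + 0.5 * sigma_hat**2   # drift from the total log-return
        vol_est.append(sigma_hat)
        drift_est.append(mu_hat)
    return statistics.stdev(vol_est), statistics.stdev(drift_est)

vol_err_lo, drift_err_lo = estimate(n_steps=12)     # monthly observations
vol_err_hi, drift_err_hi = estimate(n_steps=1200)   # "high-frequency" observations
print(f"vol estimator std:   monthly {vol_err_lo:.4f}  high-freq {vol_err_hi:.4f}")
print(f"drift estimator std: monthly {drift_err_lo:.4f}  high-freq {drift_err_hi:.4f}")
```

A hundredfold increase in the number of observations divides the volatility error by roughly ten, while the drift error stays near σ in both cases.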

Third, in finance one can neither use data issued from experiments repeated independently nor assume a kind of ergodicity in order to increase the set of available observations. The modeler needs to design and calibrate models using one single history of the market.

Finally, model uncertainties also occur in the numerical resolution of PDEs related to option pricing or optimal portfolio allocation. Commonly used stochastic models in finance lead one to consider processes whose time marginal laws have unbounded supports. Consequently, the PDEs are posed in unbounded domains, and artificial boundary conditions are necessary. The situation is quite different from the above example in fluid mechanics: usually one has little knowledge of the behavior of the solution when the norm of the state variable increases, and one finds estimates by working with simplified models. For an example of a rigorous procedure to design artificial boundary conditions for European options, see COSTANTINI, GOBET and EL KAROUI [2006]; for an analysis of the error induced by misspecified boundary conditions on American option prices, see BERTHELOT, BOSSY and TALAY [2004].

Consequently, model misspecifications cannot be avoided, which leads to model risk. The specificity and definitions of model risk are not universally agreed upon (see the extended introduction in CONT [2006] for an interesting discussion on this point and an extended list of references). In the present notes, we limit ourselves to a particular restricted family of questions: how to evaluate, and possibly control, the impact of certain model uncertainties on the profit and losses (P&Ls) of hedging portfolios or on portfolio management strategies? We do not examine axiomatic questions on risk measures at all, for which we refer to CHERIDITO, DELBAEN and KUPPER [2005], BARRIEU and EL KAROUI [2005], and FÖLLMER and SCHIED [2002]. We rather adopt a pragmatic point of view and seek computational means to evaluate the impact of model uncertainties.

We start by illustrating the difficulty of constructing a reliable market model, presenting recent results on one of the very first steps of the modeling process, namely the design of the driving noise of the dynamics of the assets under consideration. We then present some results concerning the numerical approximation of measures of model risk such as the Value at Risk (VaR) in diffusion environments. We also present a stochastic game problem related to model risk control. Finally, we propose a tentative methodology to compare the performances of financial strategies derived from (misspecified) mathematical models with strategies which, being derived from technical analysis, avoid modeling and calibration issues.

In the literature, one can find a huge number of papers that propose and analyze parametric and nonparametric estimators for the coefficients of stochastic differential equations. A more specific literature also exists on the statistics of stochastic models in finance (for a survey, see AïT-SAHALIA and KIMMEL [2007]). Our purpose here is not to provide a summary of these works, even partially: we limit ourselves to referring to PRAKASA RAO [1999a] and PRAKASA RAO [1999b] and the references therein for the reader interested in an overview of the subject, and to JACOD [2000] for an advanced result on the identification of the volatility function with kernel estimators. In the latter reference, it is shown that, if a diffusion process is observed at times i/n and if the diffusion coefficient has regularity r, then the accuracy of the estimator is of order 1/n^(r/(1+2r)). Such a convergence rate is low and illustrates that the design of stochastic models for asset prices or indices from historical data necessarily leads to model risk. We give a few other illustrations below: we will start with an elementary observation showing that the time scales necessary to calibrate stochastic models with good accuracy are often incoherent with the time scales at which the market evolves. We will then examine two questions which, to our knowledge, have only recently been tackled in the literature in spite of the fact that they should arise before calibration. They concern the driving noise, more precisely its continuous or discontinuous nature, and (in the Brownian case) its dimension.

Our elementary example concerns maximum likelihood estimators for drift parameters of diffusion processes and therefore the calibration of historical probability measures (e.g., in order to solve optimal portfolio management problems or to simulate benchmark histories of the market).

Consider a parameter set Θ and a family of real-valued functions {*b*(θ, ·), θ ∈ Θ}. Suppose that, for each θ ∈ Θ, the function *b*(θ, ·) is Lipschitz, and consider the model

**(2.1) **

where (*Bt*) is a standard Brownian motion; our situation covers the models with a strictly positive continuous volatility function σ(*x*).

Denote by P*X*θ the law of (*X*θ*t*, 0 ≤ *t *≤ *T*), and let E*X*θ denote the corresponding expectation. Suppose that the function *b*(θ, *x*) is continuously differentiable w.r.t. θ for all *x *and that

Then, for any estimator of θ based upon an observation between times 0 and *T *such that the function

**(2.2) **

is bounded on compact sets, the quadratic estimation error is bounded from below:

The right-hand side is the Cramér–Rao lower bound. For a proof of this classical result, see KUTOYANTS [1984].

For example, consider the model

set

that is,

with

The Cramér–Rao lower bound implies that any estimator of θ based upon the observation of one trajectory of (*St*θ), equivalently of (*Xt*θ), on the time interval [0, *T*] has a quadratic estimation error larger than σ²/*T*. If the unit of time is 1 year and if one observes the stock prices during 1 year, then the standard deviation of the error cannot be lower than σ.
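A quick numerical check of this bound (a sketch; the drift and volatility values are assumptions chosen for illustration): for the model dS/S = θ dt + σ dB, the maximum likelihood estimator of θ from one path on [0, T] depends only on the endpoint log(S_T/S_0), and its mean squared error matches σ²/T, so only a longer observation window, not a finer sampling grid, reduces the error.

```python
# Monte Carlo check that the drift MLE attains the Cramér–Rao bound sigma^2 / T.
import math
import random
import statistics

random.seed(1)
THETA, SIGMA = 0.05, 0.2   # assumed "true" drift and volatility

def mle_errors(T, n_paths=4000):
    """Mean squared error of the drift MLE over [0, T], estimated by Monte Carlo."""
    errs = []
    for _ in range(n_paths):
        # exact simulation of log(S_T / S_0) for dS/S = theta dt + sigma dB
        log_ret = (THETA - 0.5 * SIGMA**2) * T + SIGMA * math.sqrt(T) * random.gauss(0, 1)
        theta_hat = log_ret / T + 0.5 * SIGMA**2   # continuous-observation MLE
        errs.append((theta_hat - THETA) ** 2)
    return statistics.mean(errs)

for T in (1, 4, 25):
    print(f"T = {T:>2} years: MSE {mle_errors(T):.5f}  vs bound sigma^2/T = {SIGMA**2 / T:.5f}")
```

With T = 1 the error's standard deviation is indeed about σ = 0.2, an enormous uncertainty for a drift of a few percent, and dividing it by five requires a 25-year observation window.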

In an impressive recent paper, AïT-SAHALIA and JACOD [2008] constructed and analyzed a rule to decide whether a price process observed at discrete times is continuous or jumps at least once during the observation time interval. Their paper substantially improves previous works mentioned in its list of references.

The observed process (*Xt*) is supposed to belong to a fairly general class of models, namely, it is supposed to satisfy

Here, *B *is a Brownian motion, μ is a Poisson random measure with an intensity measure of the form ν(d*s*, d*x*) = d*s *⊗ d*x*; the function *k *is continuous and locally equal to *x *around the origin; the processes (*bs*) and (σ*s*) are optional, and the random function δ(*s*, *x*) is predictable. The authors require a few technical conditions that are not restrictive for applications in finance (e.g., the process (σ*t*) is supposed to be of the same type as (*Xt*) itself).

Now, denote by Δ*n *a sequence of observation time steps decreasing to 0. Aït-Sahalia and Jacod's test statistic is

**Theorem 2.1: ***Under the above assumptions, for all t *> 0 *and p *> 2, *the variables *(*p*, Δ*n*)*t converge in probability when n goes to infinity to *

Therefore, the decision rule consists in accepting the hypothesis "the process (*Xt*

The authors prove several limit theorems that allow them to construct levels of tests based on their test statistics. In particular, they show the following theorem.

**Theorem 2.2: ***For p *− 1), *when restricted to the set of discontinuous paths, converges stably in law. *

*If X is continuous, for p *− 2*p*/2-1) *converges stably in law. *

In both cases, the limits are constructed on an extension of the original probability space, but their conditional distributions w.r.t. the original filtration are Gaussian; the two conditional variances are given explicitly in terms of, respectively,

and

These two asymptotic variances can be estimated by means of the discrete time observations of *X. *

It is consequently possible to construct real tests for the null hypothesis that *X *is discontinuous as well as for the null hypothesis that *X *is continuous. For precise critical regions, asymptotic levels, and power functions, we refer to AïT-SAHALIA and JACOD [2008]. Simulation studies reported in the paper illustrate that observations at high frequencies actually allow one to discriminate between continuous and discontinuous models. Similarly, when applied to real historical data (Dow Jones Industrial Average stock prices in 2005), observations every 5 seconds lead to the conclusion that most of the prices should be modeled with jumps. However, as predicted by the theoretical results, observations every 30 seconds do not allow one to extract significant information from the test.

In conclusion, although Brownian models are commonly used to compute prices and deltas, it seems that driving noises with jumps should also be considered, especially for prices or physical variables observed at low frequencies since, in such a case, it is impossible to test the (dis)continuity hypothesis.
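The idea behind such tests can be sketched with a toy power-variation ratio (this is a simplified illustration, not the authors' exact statistic or critical regions; all parameter values below are assumptions): with B(p, Δ)_t the sum of p-th absolute powers of the increments at mesh Δ and p > 2, the ratio B(p, kΔ)_t / B(p, Δ)_t tends to 1 when the path jumps and to k^(p/2−1) when it is continuous.

```python
# Toy power-variation ratio: jumps dominate the p-th power variation for p > 2,
# making the statistic insensitive to the sampling mesh; a continuous path is not.
import math
import random

random.seed(2)

def power_variation(increments, p):
    return sum(abs(x) ** p for x in increments)

def ratio_statistic(path, p=4, k=2):
    """B(p, k*dt) / B(p, dt) from a discretely observed path."""
    d1 = [path[i + 1] - path[i] for i in range(len(path) - 1)]
    dk = [path[i + k] - path[i] for i in range(0, len(path) - k, k)]
    return power_variation(dk, p) / power_variation(d1, p)

def simulated_path(n, sigma=0.3, T=1.0, jumps=0, jump_size=0.05):
    """Brownian path with an optional handful of jumps at random times."""
    dt = T / n
    jump_times = set(random.sample(range(1, n), jumps)) if jumps else set()
    x, path = 0.0, [0.0]
    for i in range(1, n + 1):
        x += sigma * math.sqrt(dt) * random.gauss(0, 1)
        if i in jump_times:
            x += jump_size * random.choice((-1, 1))
        path.append(x)
    return path

n = 20000
s_cont = ratio_statistic(simulated_path(n))
s_jump = ratio_statistic(simulated_path(n, jumps=5))
print(f"continuous path: ratio = {s_cont:.2f}  (limit k^(p/2-1) = 2)")
print(f"path with jumps: ratio = {s_jump:.2f}  (limit 1)")
```

As in the theorems above, the discrimination only works because n is large: at low frequencies the Brownian and jump contributions to the power variation are of comparable size and the two limits cannot be told apart.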

Suppose now that one observes prices of a basket of *d *assets and that these prices are Itô processes driven by a *q*-dimensional Brownian motion. If no arbitrage and completeness are assumed, then *d *= *q*. However, it is sometimes useless to calibrate a volatility matrix of dimension *d*: for example, some components of the noise may play a very small role in the dynamics of the price, and, consequently, setting them to zero may not change much the prices of options on the basket under consideration. More generally, one may have to calibrate models for families of processes that do not model prices but indices, meteorological or economic variables, etc., for which the number of random sources is not constrained by no-arbitrage or completeness conditions. In all cases, by eliminating small noises in the dynamics, one simplifies the calibration of the volatility matrix and decreases the number of operations in the simulations of the model.

JACOD, LEJAY and TALAY [2008] have tackled the question of estimating the explicative Brownian dimension of an Itô process from a discrete time observation. By "explicative Brownian dimension *rB*", we (informally) mean that a model driven by an *rB*-dimensional Brownian motion satisfactorily fits the information conveyed by the observed path, whereas increasing the Brownian dimension does not bring a better fit.

More precisely, suppose that we observe a path of the process

where *B *is a standard *q*-dimensional Brownian motion, (*bs*) is an ℝ*d*-valued locally bounded process, and σ is a *d *× *q *matrix-valued adapted and càdlàg process. Set *cs *:= σ*s*σ*s**. Our aim is to estimate the maximal explicative rank of *cs *on the basis of the observation of *XiT/n *for *i *= 0, 1, …, *n*. Of course, a natural candidate should resemble the integer such that, if λ(1)*s*, …, λ(*d*)*s *are the eigenvalues of *cs *in decreasing order, then λ(*rB*)*s *is significantly larger than λ(*rB *+ 1)*s*. However, this sole definition does not lead to a tractable test, since one observes a trajectory of (*Xt*) and not of (*ct*); in particular, this implies that we cannot hope to approximate the eigenvalues of *cs *with good accuracy. Therefore, we need to define estimators of the maximal explicative rank, or tests, based upon observations of (*Xt*). Notice also that, as in the preceding section, these observations are at discrete times only.

We start with a linear algebra observation. Let *Ar *be the family of all subsets of {1, …, *d*} with *r *elements. For all *K *∈ *Ar *and every *d *× *d *symmetric nonnegative matrix Σ, let det*K*(Σ) be the determinant of the *r *× *r *submatrix (Σ*kl *: *k, l *∈ *K*) and set

It is easy to prove that the eigenvalues

of Σ satisfy for all *r *= 1, …, *d*:

In addition,

and

Now, set

In view of the preceding inequalities, for choosing an explicative Brownian dimension, this quantity plays a role similar to

We approximate *L*(*r*)*t *by means of our observations of *X*: denoting by [*x*] the integer part of *x*, we set

where

**Theorem 2.3: ***The variables **converge in probability to *L(*r*)*t uniformly in t *∈ [0, *T*]. *The processes *

*converge stably in law to a limiting process *(*V*(*r*)*t*)1≤*r*≤*d*, *which is defined on an extension of the original space and is a nonhomogeneous Wiener process with an explicit quadratic variation process. *

Set

We define a scale invariant estimator of *Rt *by

The preceding theorem allows one to propose a test based on a scale invariant relative threshold for which we have the following consistency result under reasonably weak assumptions on the coefficients (*bs*) and (σ*s*) (more or less similar to those made in the preceding section):

**Theorem 2.4: ***For all r, r′ in *{1, …, *d*}, *provided *P(*Rt *= *r*′) > 0, *we have *

Empirical studies for this test and a couple of other tests applied to simulations of models with stochastic volatilities are reported in JACOD, LEJAY and TALAY [2008]. They illustrate that, under circumstances such as observations at low frequencies or systems with strongly oscillating components, the tests may lead to very erroneous conclusions. In any case, the transformation of the real Brownian dimension into an explicative one induces a specific model risk.
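A stripped-down illustration of the eigenvalue idea (a sketch with assumed volatility matrices, not the estimators of the paper): simulate a two-asset Itô process whose volatility has rank 1 or rank 2, and inspect the eigenvalues of the realized covariance matrix built from the discrete increments.

```python
# Eigenvalues of the realized covariance matrix reveal how many Brownian
# factors the observed two-asset path actually needs (illustrative parameters).
import math
import random

random.seed(3)

def simulate_increments(n, rank, T=1.0):
    """Increments of X = sigma * B for an assumed 2x2 volatility of the given rank."""
    dt = T / n
    sigma = [[0.3, 0.0], [0.24, 0.0 if rank == 1 else 0.18]]
    incs = []
    for _ in range(n):
        db = [math.sqrt(dt) * random.gauss(0, 1) for _ in range(2)]
        incs.append([sum(sigma[i][j] * db[j] for j in range(2)) for i in range(2)])
    return incs

def realized_cov_eigenvalues(incs):
    """Eigenvalues (descending) of the 2x2 realized covariance matrix."""
    c11 = sum(d[0] * d[0] for d in incs)
    c22 = sum(d[1] * d[1] for d in incs)
    c12 = sum(d[0] * d[1] for d in incs)
    tr, det = c11 + c22, c11 * c22 - c12 * c12
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    return (tr + disc) / 2, (tr - disc) / 2

for rank in (1, 2):
    lam1, lam2 = realized_cov_eigenvalues(simulate_increments(20000, rank))
    print(f"volatility rank {rank}: eigenvalue ratio lam2/lam1 = {lam2 / lam1:.4f}")
```

When the second factor is genuinely absent, the smaller eigenvalue collapses to numerical noise; the hard part, addressed by the theorems above, is deciding at which threshold a small but nonzero ratio should still be declared "explicative".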

Practitioners do not only use estimators based on historical observations of primary assets: they also use all the information available on the market, for example, prices of derivatives on the asset under consideration, prices of correlated assets, and forward contracts. Their data set is thus a sample χ of a random vector ξ, which represents the market prices of all such products.

Various approaches have been developed by various authors: inverse problem techniques applied to the PDEs for option prices, numerical resolution of Dupire’s PDE for the volatility function, optimization techniques to fit the data, entropy minimization techniques, etc.

We first briefly describe the Avellaneda–Friedman–Holmes–Samperi approach to the calibration of volatilities (for more details on this approach and on other approaches, see AVELLANEDA, FRIEDMAN, HOLMES and SAMPERI [1997], the volume edited by AVELLANEDA [2001], and references therein). Consider an asset whose volatility process (σ*t*) is progressively measurable and satisfies

The set of all such processes is denoted by *H*. Suppose that the market is complete and that various European options are priced on the market, all the maturities belonging to the time interval [0, *T*]. Avellaneda's approach consists in choosing a smooth and strictly convex function on ℝ+ with minimal value 0 at a given value σ0 (resulting from statistics based on historical data) and then searching for the process (σ*t*) which solves

Denote the observed option prices by *Pk*, their maturities by *Tk*, and their payoff functions by Φ*k*. Then, set

The calibration procedure consists in solving

For a discussion on the corresponding numerical procedures and a survey on other numerical techniques for calibration, see **ACHDOU and PIRONNEAU [2005]. **
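To fix ideas, here is a minimal, self-contained sketch of penalized calibration in the spirit of the approach above, drastically simplified: a single constant volatility is fitted to Black–Scholes quotes by minimizing the squared price mismatch plus a convex penalty *H*(σ) = (σ − σ0)² vanishing at the historical estimate σ0. The pricer, the synthetic quotes, and the grid search are all illustrative assumptions, not the numerical procedures of **ACHDOU and PIRONNEAU [2005]. **

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # standard Black-Scholes European call price
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def calibrate(quotes, S, r, sigma0, eta=1e-4):
    # quotes: list of (strike, maturity, observed price).
    # Objective: squared price mismatch + eta * H(sigma), with the convex
    # penalty H(sigma) = (sigma - sigma0)^2 attaining its minimum 0 at the
    # historical estimate sigma0.  A coarse grid search keeps the sketch
    # dependency-free; any one-dimensional optimizer would do.
    def objective(sig):
        mismatch = sum((bs_call(S, K, T, r, sig) - P) ** 2 for K, T, P in quotes)
        return mismatch + eta * (sig - sigma0) ** 2
    grid = [0.01 + 0.001 * i for i in range(990)]
    return min(grid, key=objective)

# synthetic quotes generated at sigma = 0.25; calibration should recover it
S0, r0 = 100.0, 0.02
quotes = [(K, T, bs_call(S0, K, T, r0, 0.25))
          for K in (90.0, 100.0, 110.0) for T in (0.5, 1.0)]
sigma_star = calibrate(quotes, S0, r0, sigma0=0.2)
```

With a small penalty weight, the data term dominates and the calibrated volatility returns to the value that generated the quotes, even though the penalty is anchored at a different σ0.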

Another direction has been followed by El Karoui and Hounkpatin (see **HOUNKPATIN [2002]) to calibrate risk premia rather than volatilities. The El Karoui–Hounkpatin’s method is based on a variant of the selection of models by minimizing entropies as introduced in AVELLANEDA, FRIEDMAN, HOLMES and SAMPERI [1997]. Let X be the state space of a random vector ξ, which represents market prices of products related to the asset under consideration (e.g., forward contracts, derivatives, …). We observe one sample χ of this random vector. Define the set A of calibration measures as **

How to choose an optimal element of *P*χ? Consider the entropy

Observe that *H*(*Q*) is nonnegative, and that *H*(*Q*) = 0 if and only if *Q* = *P*.

Suppose that the asset price solves

For a vector ξ := (ξ*i*) of the form ξ*i *= ϕ*i*(*XT*), set

Using results of **CSISZAR [1975, theorem 3.1], El Karoui and Hounkpatin have shown that there exists a unique Q* in the set of calibration measures such that **

and the dynamics of (*Xt*) under *Q** is

) is a Brownian motion under *Q**, and λ* solves

The numerical approximation of λ* and *h*(*t, x*) is theoretically possible owing to Monte Carlo methods. It is a challenging and interesting question to design an efficient algorithm.
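As a hedged finite-state sketch of the I-projection underlying this construction (**CSISZAR [1975]**): on a discrete state space, the minimal-entropy measure matching one price constraint is an exponential tilt of the prior, and the multiplier can be found by bisection. All names and numerical values below are illustrative assumptions, not the El Karoui–Hounkpatin algorithm.

```python
import math

def i_projection(prior, xi, target, lo=-1.0, hi=1.0, tol=1e-12):
    # Minimal-entropy calibration on a finite state space: among measures Q
    # with E_Q[xi] = target, the I-projection of the prior P is an
    # exponential tilt  Q*_j  proportional to  p_j * exp(lambda* xi_j)
    # (Csiszar [1975]).  The multiplier lambda* is found by bisection, since
    # lambda -> E_{Q_lambda}[xi] is increasing; xi is centered to avoid
    # overflow, which leaves the tilted measure unchanged.
    c = sum(p * x for p, x in zip(prior, xi))
    def tilted(lam):
        w = [p * math.exp(lam * (x - c)) for p, x in zip(prior, xi)]
        z = sum(w)
        return [v / z for v in w]
    def moment(lam):
        return sum(q * x for q, x in zip(tilted(lam), xi))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if moment(mid) < target:
            lo = mid
        else:
            hi = mid
    return tilted(0.5 * (lo + hi))

# toy calibration: uniform prior on terminal prices, one observed price 95
states = [80.0, 90.0, 100.0, 110.0, 120.0]
q_star = i_projection([0.2] * 5, states, target=95.0)
```

The target must lie in the range of E attainable by tilts with λ in [lo, hi]; in higher dimensions the single bisection is replaced by a root search for the vector λ*, which is exactly where the numerical difficulty mentioned above arises.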

It actually appears that all calibration methods lead to numerical problems so difficult that obtaining good accuracy is questionable. We also emphasize that the family of calibration probability measures is chosen arbitrarily. For these reasons, calibration procedures, like statistical procedures, cannot eliminate model risk.

In this subsection, we follow **BOSSY, GIBSON, LHABITANT, PISTRE and TALAY [2006] where numerical results for the P&L function below are discussed. **

Consider a primary asset with price process *S *and a saving account (or, more generally, a numéraire) with price process *F*. Suppose that the discounted price *SF *is an *F*-martingale and defines a no-arbitrage and complete market.

Consider a trader who needs to hedge a European option on the primary asset with maturity *TO *and payoff function ϕ. At all times 0 ≤ *t *≤ *TO*, the hedging portfolio consists of units of the saving account and of *Ht *units of the primary asset:

Its value expressed in the numéraire *F *is

The self-financing condition reads

As the martingale *SF *is supposed to define a complete market, and thus to satisfy the martingale representation property, the preceding equality provides a characterization of the process *H*. However, this characterization generally is only implicit, even when the Clark–Ocone formula (see **NUALART [2006]) applies, that is, when Ht can be expressed by means of conditional expectations of Malliavin derivatives, and its numerical approximation is quite difficult. Thus, were the market asset prices a general semimartingale, even a trader who knew and could measure the model perfectly would nevertheless use a simpler model allowing him/her to obtain numerical values for the delta easily and in a short computational time. **

Suppose, then, that the trader uses a Markov–Feller process (ρ*t*) and functions *g *and *h *such that *SFt *= *g*(*t*, ρ*t*), *Ft *= *h*(*t*, ρ*t*), and

**(4.1) **

**(4.2) **

for some functions ψ, β, and γ, and for some Brownian motion *BF*. The process ρ may be *SF *itself or the instantaneous rate if *SF *is the solution to

**(4.3) **

The boundary condition*** is **

With the above notation, suppose that the true world is actually governed by (ρ*t*). The self-financed pseudoreplicating portfolio then has value

The model risk P&L function is defined as

**(4.4) **

Suppose that, in the true world, the process (ρ*t*) satisfies, under the filtration *F*:

for some adapted processes β and γ. Set

A simple calculation shows that, at maturity *TO*,

Notice that, if (ρ*t*) is a Markov process, that is, if β*t *= β(*t*, ρ*t*) and γ*t *= γ(*t*, ρ*t*), then the operator involved is the classical infinitesimal generator of (ρ*t*), and the portfolio value equals π(*t*, ρ*t*)/*Ft*, where π(*t, x*) solves a PDE similar to the preceding one.

The discussion in the preceding subsection can obviously be extended to multidimensional market models, with price processes (*SF,i*) and basket options based on the prices *Si *and with maturity *TO*.

, P&L*t*).

Consider the fairly general stochastic differential equation

We suppose that its solution *XT*(*x*) has a density with respect to Lebesgue’s measure. Here, (*Bs*) is an *r*-dimensional Brownian motion, and the functions *A*0, *A*1, …, *Ar *are smooth with bounded derivatives.

The Euler scheme with step *T/n *is defined as

(*x*) in order to get a random variable whose law has a density

Our aim is to approximate the quantile of level δ, ρ(*x*, δ), of the *d*-th component of *XT*(*x*).

When approximating quantities of the type E*f*(*XT*), where *T *is fixed, we have the following result: for functions *f *with polynomial growth at infinity,

where

for some positive real numbers *C*, *q*, and *Q *and some increasing function *K *(see **TALAY and TUBARO [1990] for smooth functions f, BALLY and TALAY [1995] under a uniform hypoellipticity condition on the fields Ai, KOHATSU-HIGA [2001] and GOBET and MUNOS [2005] for only measurable functions under nondegeneracy conditions on the Malliavin covariance matrix of XT(x)). Thus, Romberg extrapolation techniques can be used to get higher convergence rates (see TALAY and TUBARO [1990]). For extensions to barrier options, see GOBET and MENOZZI [2004]. **
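The weak error expansion and the Romberg extrapolation can be illustrated on a toy Ornstein–Uhlenbeck example (an assumption chosen for testing purposes because E*X**T* is known in closed form); the sketch below is generic and not specific to the references above.

```python
import math, random

def weak_estimate(f, x0, drift, diff, T, n, n_paths, seed=0):
    # Monte Carlo estimate of E f(X_T) using the Euler scheme with step T/n
    rng = random.Random(seed)
    h = T / n
    total = 0.0
    for _ in range(n_paths):
        x = x0
        for _ in range(n):
            x += drift(x) * h + diff(x) * math.sqrt(h) * rng.gauss(0.0, 1.0)
        total += f(x)
    return total / n_paths

# Ornstein-Uhlenbeck test case: dX = -X dt + dB, so E[X_T] = x0 * exp(-T)
drift, diff = (lambda x: -x), (lambda x: 1.0)
e_n  = weak_estimate(lambda x: x, 1.0, drift, diff, T=1.0, n=10, n_paths=10000)
e_2n = weak_estimate(lambda x: x, 1.0, drift, diff, T=1.0, n=20, n_paths=10000, seed=1)
romberg = 2.0 * e_2n - e_n   # Romberg extrapolation cancels the O(1/n) bias
```

For the mean of this linear SDE the Euler bias at step *T/n* is exactly *x*(1 − *T/n*)*n* − *x*e−*T*, so the extrapolated estimate removes most of the discretization error, leaving mainly the Monte Carlo statistical error.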

For the quantile approximation problem, the estimates are slightly different. We summarize results in **TALAY and ZHENG [2002]. Suppose first that the stochastic differential equation (SDE) for ( Xt) has time homogeneous coefficients: **

For multiindices

Also set

and

Suppose:
**(UH) ***CL *:= inf*x*∈ℝ*d* *VL*(*x*) > 0 for some integer *L*;
**(C) **the coefficients *Aij*, *i *= 0, …, *r, j *= 1, …, *d*, are smooth with bounded derivatives (the *Aid*’s themselves may be unbounded).

Under (UH) and (C), the law of *XT*(*x*) has a smooth density *pT*(*x, x′*), so that the *d*-th marginal distribution of *XT*(*x*) has a density *pdT*(*x, y*), which is strictly positive at every point *y *in the interior of its support (cf. **NUALART [2006]). **

For 0 < δ < 1, set

and

The discretization error on the quantile ρ(*x*, δ) is described by the following theorem.

**Theorem 4.1: ***Under conditions (UH) and (C), we have *

*where *

In practice, the quantile is estimated by sampling *N *independent copies of the Euler scheme (for variance reduction techniques, see **KOHATSU-HIGA and PETTERSON [2002]). Taking the corresponding Monte Carlo error into account, roughly speaking, the global error on the quantile is of order **
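A minimal sketch of the resulting empirical quantile estimator, again on a toy Ornstein–Uhlenbeck example where the exact median is known (the test case is an assumption, not taken from the cited papers):

```python
import math, random

def euler_quantile(delta, x0, drift, diff, T, n, n_paths, seed=0):
    # empirical delta-quantile of the Euler approximation of X_T:
    # simulate n_paths independent Euler paths, sort the terminal values,
    # and read off the order statistic of level delta
    rng = random.Random(seed)
    h = T / n
    xs = []
    for _ in range(n_paths):
        x = x0
        for _ in range(n):
            x += drift(x) * h + diff(x) * math.sqrt(h) * rng.gauss(0.0, 1.0)
        xs.append(x)
    xs.sort()
    return xs[int(delta * n_paths)]

# Ornstein-Uhlenbeck: X_T is Gaussian, so its median equals its mean x0 e^{-T}
q_med = euler_quantile(0.5, 1.0, lambda x: -x, lambda x: 1.0,
                       T=1.0, n=20, n_paths=10000)
```

The total error indeed combines the Euler bias on the law of *XT* with the sampling error of the order statistic, consistent with the error decomposition above.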

(*x*(*x*).

Similar estimates, of order 1/*n*, are available even when the law of (*Xt*, P&L*t*) may not have a density, since all its components are driven by the Brownian processes driving the ρ*j*’s. Therefore, we now do not suppose that the Malliavin covariance matrix of *XT*(*x*) is nondegenerate. Let (*Xs*(*x*), 0 ≤ *s *≤ *T *- *t*) be a smooth version of the flow solution to

We denote by *M*(*t, s, x′*) the corresponding Malliavin covariance matrix. We now suppose:
**(C’) **the coefficients *Aij*, *i *= 0, …, *r, j *= 1, …, *d*, are smooth on [0, *T*] × ℝ*d* with bounded derivatives (the *Aid*’s themselves may be unbounded);
**(M) **for all *p *≥ 1, there exists a nondecreasing function *K*, a positive real number *r*, and a positive Borel measurable function ψ such that

for all *t *in [0, *T*) and *s *in (0, *T *- *t*]. In addition, ψ satisfies: for all λ ≥ 1, there exists a function ψλ such that

and

Under condition (M), the *d*-th marginal distribution of *XT*(*x*) has a density *pdT*(*x, y*) that is strictly positive at every point *y *in the interior of its support, and we have the following error estimate.

**Theorem 4.2: ***Under conditions (M) and (C’), we have *

*where *

In practice, one needs to check that condition (M) is satisfied. We here give two examples.

**Theorem 4.3: ***Suppose that **for some t in *[0, *T*] *and x in *ℝ*d*. *Then, the d-th marginal law of Xt *(*x*) *has a smooth density, and condition (M) is satisfied*.

Our second example concerns a model risk problem. The trader wants to hedge a European option Φ(*B*(*TO*, *T*)) on a bond price *B*(*TO*, *T*), where *TO *is the option maturity and *T *> *TO *is the bond maturity. To hedge, the trader uses bonds with maturities *TO *and *T*. Suppose that the bond market is an HJM model. When the HJM model is governed by a deterministic function σ, the delta of the option can be expressed in terms of the solution πσ to the PDE

Then, for suitable functions *u*1(*s*), *u*2(*s*), and φ(*s*), the forward value of the trader’s P&L satisfies an SDE of the type

where (*Yt*) satisfies

If

then condition (M) is satisfied, and one can get an explicit lower bound estimate for the marginal density.

Consider the market model

Here, {π*i*} denotes a set of prescribed strategies. Consider *u*(·) := (*b*(·), σ(·)) as the market’s control process.

**CVITANIĆ and KARATZAS [1999] have studied the dynamic measure of risk **

where *A*(*x*) denotes the class of admissible portfolio strategies issued from the initial wealth *x*, and E*v* denotes the expectation under the model indexed by *v*. All the models *v* have the same risk-neutral equivalent martingale measure, which implies that the trader (or the regulator) is concerned by model risk on stock appreciation rates. For numerical methods related to this approach, see **GAO, LIM and NG [2004]. **

An axiomatic approach to model risk is developed by **CONT [2006], who proposes to measure model uncertainty risk by means of a coherent risk measure compatible with market prices of derivatives or of a convex risk measure. The author studies several examples, among them the case where the real noise is a linear combination of Poisson and Brownian processes, whereas the trader uses a Brownian model only. **

We now present a somewhat different approach, based on a PDE, aimed at computing the minimal amount of money and the dynamic strategies that allow the financial institution to (approximately) contain the worst possible damage due to model misspecifications for volatilities, stock appreciation rates, and yield curves. Within this approach, we consider that the trader acts as a minimizer of the risk, whereas the market systematically acts as a maximizer of the risk. Thus, the model risk control problem can be set up as a two-player zero-sum stochastic differential game problem. Given a suitable function *F*, the cost function is

and the value function is

The next theorem shows that this model risk value function solves a Hamilton–Jacobi–Bellman–Isaacs equation.

**Theorem 5.1: ***Under an appropriate locally Lipschitz condition on F, the value function V(t, x, p) is the unique viscosity solution in the space *

*to the Hamilton–Jacobi–Bellman–Isaacs equation *

*where *

For a proof, see **TALAY and ZHENG [2002]. The numerical resolution of the PDE allows one to compute approximate reserve amounts of money to control model risk. Numerical investigations, not carried out so far, are necessary to evaluate how large these provisions are. **
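The full Isaacs PDE above requires a genuine min-max solver. As a hedged one-dimensional relative, the uncertain-volatility (Black–Scholes–Barenblatt) worst-case pricing equation already exhibits the game flavor: the adverse player picks the volatility pointwise according to the sign of the second derivative. The explicit finite-difference scheme below is a sketch under assumed parameters, not the numerical method suggested by the chapter.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, sigma):
    # zero-rate Black-Scholes call, used only as a reference value
    d1 = (math.log(S / K) + 0.5 * sigma**2 * T) / (sigma * math.sqrt(T))
    return S * norm_cdf(d1) - K * norm_cdf(d1 - sigma * math.sqrt(T))

def bsb_call(K, T, sig_lo, sig_hi, s_max=300.0, m=150, nt=5000):
    # Explicit finite differences, backward in time, for the worst-case
    # pricing PDE: the adverse volatility is sig_hi wherever the discrete
    # second derivative is positive, and sig_lo elsewhere.
    ds, dt = s_max / m, T / nt
    v = [max(i * ds - K, 0.0) for i in range(m + 1)]   # terminal payoff
    for _ in range(nt):
        new = v[:]
        for i in range(1, m):
            gamma = (v[i + 1] - 2.0 * v[i] + v[i - 1]) / ds ** 2
            sig = sig_hi if gamma > 0.0 else sig_lo    # market maximizes
            new[i] = v[i] + 0.5 * dt * sig ** 2 * (i * ds) ** 2 * gamma
        new[m] = s_max - K                             # far-field boundary
        v = new
    return v

v = bsb_call(K=100.0, T=1.0, sig_lo=0.1, sig_hi=0.3)
price_atm = v[50]   # grid node at S = 100 (ds = 2)
```

For a convex payoff the worst-case price coincides with the Black–Scholes price at the upper volatility, which gives a convenient sanity check on the scheme.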

Practitioners use various rules to rebalance their portfolios. These rules usually come from fundamental economic principles, mathematical approaches derived from mathematical models, or technical analysis. Technical analysis, which provides decision rules based on past price behavior, avoids model specification and thus model risk (for a survey, see **ACHELIS [2001]). PASTUKHOV [2004] has studied mathematical properties of volatility indicators used in technical analysis. BLANCHET-SCALLIET, DIOP, GIBSON, TALAY and TANRE [2007] proposed a framework allowing one to compare the performances obtained by strategies derived from erroneously calibrated mathematical models with the performances obtained by technical analysis techniques. **

Consider an asset whose instantaneous expected rate of return changes at an unknown random time, and a trader who aims to maximize his/her utility of wealth by selling and buying the asset. The benchmark performance results from a strategy that is optimal when the model is perfectly specified and calibrated. To this benchmark we can compare the performances resulting from optimal rules but erroneous parameters, and the performances resulting from technical analysis indicators.

The real market is described by

Here, the Brownian motion (*Bt*) and the change time τ are independent, and τ follows an exponential law with parameter λ. One has

where

Suppose

We start by describing one of the technical analysis rules applied in the context of changes in instantaneous rates of return. Denote by π*t *∈ {0, 1} the proportion of the agent’s wealth invested in the risky asset at time *t*, and consider the moving average indicator of the prices. Therefore,

Given a finite set of decision times *tn*, at each *tn *the agent invests all his/her wealth in the risky asset if *Stn *exceeds the moving average. Otherwise, he/she invests all the wealth in the riskless asset. Consequently,

and the wealth at time *tn*+1 is

from which, for *T = tM*,
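The regime-switching price model and the moving average rule above can be sketched as follows; all parameter values and the rebalancing grid are illustrative assumptions.

```python
import math, random

def simulate_path(mu1, mu2, sigma, lam, T, n, rng):
    # price whose instantaneous rate of return switches from mu1 to mu2
    # at an exponential time tau with parameter lam
    h = T / n
    tau = rng.expovariate(lam)
    s, path = 1.0, [1.0]
    for k in range(n):
        mu = mu2 if k * h >= tau else mu1
        s *= math.exp((mu - 0.5 * sigma**2) * h
                      + sigma * math.sqrt(h) * rng.gauss(0.0, 1.0))
        path.append(s)
    return path

def ma_wealth(path, window, r, h):
    # all-in / all-out moving average rule: hold the stock whenever the
    # current price exceeds its moving average, otherwise hold the bond
    w = 1.0
    for k in range(window, len(path) - 1):
        ma = sum(path[k - window:k]) / window
        if path[k] > ma:
            w *= path[k + 1] / path[k]       # wealth follows the stock
        else:
            w *= math.exp(r * h)             # wealth earns the riskless rate
    return w

rng = random.Random(0)
path = simulate_path(mu1=-0.1, mu2=0.3, sigma=0.2, lam=1.0, T=2.0, n=500, rng=rng)
wealth = ma_wealth(path, window=20, r=0.01, h=2.0 / 500)
```

Averaging such wealths over many simulated paths is how the Monte Carlo comparisons discussed at the end of this section are carried out.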

The logarithmic utility of *WT *involves the joint law of an exponential functional of Brownian motion and *Bt*; its explicit expression, according to **YOR [2001], is interesting by itself: let σ > 0 and v be real numbers, and let V be the geometric Brownian motion **

Then,

where

The performance of the technical analysis strategy is compared to the benchmark performance: the optimal wealth of a trader who perfectly knows the parameters μ1, μ2, λ, and σ. We impose constraints: as a technical analyst is only allowed to invest all his/her wealth in the stock or the bond, the proportions of the benchmark trader’s wealth invested in the stock are constrained to lie within the interval [0, 1]. In addition, the trader’s strategy is constrained to be adapted with respect to the filtration

generated by (*St*), which, because of τ, is different from the filtration generated by (*Bt*).

Let π*t *be the proportion of the trader’s wealth invested in the stock at time *t; Wx*,π denotes the corresponding wealth process. Let *A*(*x*) denote the set of admissible strategies, that is,

progressively measurable process such that

The value function is

As in **KARATZAS and SHREVE [1998], we introduce an auxiliary unconstrained market defined as follows. Let D be the subset of the {FtS}-progressively measurable processes v : [0, T] × Ω → ℝ such that **

The bond price process *S*⁰(*v*) and the stock price *S*(*v*) satisfy

*FtS *Brownian motion defined as

here, *Ft *is the conditional a posteriori probability (given the observation of *S*) that τ has occurred within [0, *t*]:

For each auxiliary unconstrained market driven by a process *v*, the value function is

where

Let the exponential likelihood ratio process (*Lt*)*t*≥0 be defined by

**KARATZAS and SHREVE [1998] have proven the following result. **

*If there exists V such that *

*then there exists an optimal portfolio π* for which the optimal wealth (for the constrained admissible strategies) is *

*An optimal portfolio allocation strategy is *

*where **is the exponential process *

*and *ϕ *is a *-*adapted process, which satisfies *

*Here, v is the Lagrange multiplier, which makes the expectation of the left-hand side equal to x for all x. In addition, Ft satisfies *

The optimal strategies for the constrained problem are the projections on [0, 1] of the optimal strategies for the unconstrained problem. In addition, explicit computations can again be carried out using Yor’s equality **in the case of the logarithmic utility. **

For general utilities, the optimal strategy cannot be made explicit. It is thus worth considering the case of a trader who chooses to reinvest the portfolio only once, namely at the time when the change time τ is optimally detected from the price history. We suppose that the reinvestment rule is the same as the technical analyst’s: at the detected change time from μ1 to μ2, all the portfolio is reinvested in the risky asset. The stopping rule Θ*K*, which minimizes the expected miss E|Θ - τ| over all stopping rules Θ with E(Θ) < ∞, is as follows:

where *p** is the unique solution in (1/2, 1) of the equation

with β = 2λσ²/(μ2 - μ1)² (see **SHIRYAEV [2004] and references therein). Up to a numerical approximation of p*, this rule can easily be applied. **
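A hedged simulation sketch of this detection idea: the posterior probability that τ has occurred can be tracked by a discretized filter driven by the observed returns, and the change is declared the first time the filter crosses a threshold. The filter discretization below and the generic threshold `p_star` stand in for the exact optimal rule and the solution *p** of the transcendental equation; all parameter values are assumptions.

```python
import math, random

def detect_change(mu1, mu2, sigma, lam, p_star, h, n, tau, rng):
    # Simulate log-returns whose drift switches mu1 -> mu2 at time tau, and
    # run a crude Euler discretization of the posterior-probability filter
    # F_k ~ P(tau <= k h | prices).  The change is declared the first time
    # F crosses p_star.
    F, theta = 0.0, None
    for k in range(n):
        mu = mu2 if k * h >= tau else mu1
        r = (mu - 0.5 * sigma**2) * h + sigma * math.sqrt(h) * rng.gauss(0.0, 1.0)
        mu_hat = mu1 + (mu2 - mu1) * F                 # filtered drift estimate
        innov = (r - (mu_hat - 0.5 * sigma**2) * h) / sigma
        F += lam * (1.0 - F) * h + ((mu2 - mu1) / sigma) * F * (1.0 - F) * innov
        F = min(max(F, 0.0), 1.0)                      # keep F a probability
        if theta is None and F >= p_star:
            theta = k * h                              # declared change time
    return theta, F

rng = random.Random(42)
theta, F_end = detect_change(mu1=0.0, mu2=0.5, sigma=0.2, lam=0.5,
                             p_star=0.8, h=0.01, n=1000, tau=1.0, rng=rng)
```

On a path where the drift change actually occurs well before the horizon, the posterior probability drifts toward one after τ, so the threshold rule eventually fires.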

In practice, even if we were able to estimate μ1 and σ with good accuracy, the value of μ2 cannot be determined a priori, and the number of observations of τ may be too small to estimate λ well. Therefore, traders believe that the stock price is

. The above decision rules are then governed by

Actually, the value of a misspecified optimal allocation strategy is

and the corresponding wealth is

Similarly, the erroneous stopping rule is

* is the unique solution in (1/2, 1) of

. The value of the corresponding portfolio is

In view of the technical analysis technique and the misspecified strategies, it is natural to compare them to the benchmark optimal strategy and to study the following question: Is it better to invest according to a mathematical strategy based on a misspecified model or according to a strategy based on technical analysis rules? It appears that, even in the logarithmic utility case, the explicit formulae for the different wealths are too complex to allow analytical comparisons. However, Monte Carlo simulations on case studies show that the technical analyst may outperform misspecified optimal allocation strategies even for relatively small misspecifications, for example, when the parameter λ is underestimated. Simulations also show that a single misspecified parameter is not sufficient to allow the technical analyst to outperform the traders who use erroneous stopping rules. One can also observe that, when the ratio μ2/μ1 decreases, the performances of well-specified and misspecified strategies based upon stopping rules decrease.

Achdou, Y., Pironneau, O. Computational Methods for Option Pricing, Frontiers in Applied Mathematics. Philadelphia, PA: SIAM, 2005.

Achelis, S. Technical Analysis from A to Z. McGraw Hill, 2001.

Aït-Sahalia, Y., Jacod, J. Testing for jumps in a discretely observed process. *Ann. Stat*. 2008. forthcoming.

Aït-Sahalia, Y., Kimmel, R. Maximum likelihood estimation of stochastic volatility models. *J. Financ. Econ*. 2007;83:413–452.

Avellaneda, M., eds. Quantitative Analysis in Financial Markets, Collected Papers of the New York University Mathematical Finance Seminar, Vol. II. River Edge, NJ: World Scientific Publishing Co., Inc., 2001.

Avellaneda, M., Friedman, C., Holmes, R., Samperi, D. Calibrating volatility surfaces via relative entropy minimization. *Appl. Math. Financ*. 1997;4(1):37–64.

Azencott, R. Densité des diffusions en temps petit: développements asymptotiques, Seminar on probability XVIII, Lecture Notes in Math. Berlin, Germany: Springer; 1984;vol. 1059. 402–498.

Bally, V., Talay, D. The law of the Euler scheme for stochastic differential equations (I): convergence rate of the distribution function. *Probab. Theory Rel*. 1995;104:43–60.

Bally, V., Talay, D. The law of the Euler scheme for stochastic differential equations (II): convergence rate of the density. *Monte Carlo Methods Appl*. 1996;2:93–128.

Barrieu, P., El Karoui, N. Inf-convolution of risk measures and optimal risk transfer. *Financ. Stoch*. 2005;9(2):269–298.

Berthelot, C., Bossy, M., Talay, D. Numerical analysis and misspecifications in finance: from model risk to localization error estimates for nonlinear PDEs. In: Akahori J., Ogawa S., Watanabe S., eds. *Proceedings of 2003 Ritsumeikan Symposium on Stochastic Processes and its Applications to Mathematical Finance*. Singapore: World Scientific Publishing Co.; 2004:1–25.

Blanchet-Scalliet, C., Diop, A., Gibson, R., Talay, D., Tanre, E. Technical analysis compared to mathematical models based methods under parameters mis-specification. *J. Bank. Financ*. 2007;31(5):1351–1373.

Bossy, M., Gibson, R., Lhabitant, F-S., Pistre, N., Talay, D. Model misspecification analysis for bond options and Markovian hedging strategies. *Rev. Derivatives Res*. 2006;9(2):109–135.

Cheridito, P., Delbaen, F., Kupper, M. Coherent and convex monetary risk measures for unbounded càdlàg processes. *Financ. Stoch*. 2005;9(3):369–387.

Cont, R. Model uncertainty and its impact on the pricing of derivative instruments. *Math. Financ*. 2006;16(3):519–547.

Costantini, C., Gobet, E., El Karoui, N. Boundary sensitivities for diffusion processes in time dependent domains. *Appl. Math. Optim*. 2006;54(2):159–187.

Csiszar, I. *I*-divergence geometry of probability distributions and minimization problems. *Ann. Probab*. 1975;3:146–158.

Cvitanić, J., Karatzas, I. On dynamic measures of risk. *Financ. Stoch*. 1999;3(4):451–482.

Föllmer, H., Schied, A. Convex measures of risk and trading constraints. *Financ. Stoch*. 2002;6(4):429–447.

Gao, Y., Lim, K.G., Ng, K.H. An approximation pricing algorithm in an incomplete market: a differential geometric approach. *Financ. Stoch*. 2004;8(4):501–523.

Gobet, E., Menozzi, S. Exact approximation rate of killed hypoelliptic diffusions using the discrete Euler scheme. *Stoch. Proc. Appl*. 2004;112(2):201–223.

Gobet, E., Munos, R. Sensitivity analysis using Itô-Malliavin calculus and martingales, and application to stochastic optimal control. *SIAM J. Control Optim*. 2005;43(5):1676–1713.

Hounkpatin, O. Volatilité du Taux de Swap et Calibrage d'un Processus de Diffusion, thèse de l'université Paris 6, 2002.

Jacod, J. Non-parametric kernel estimation of the coefficient of a diffusion. *Scand. J. Stat*. 2000;27(1):83–96.

Jacod, J., Lejay, A., Talay, D. Estimation of the Brownian dimension of a continuous Itô process. *Bernoulli*. 2008;14(2):469–498.

Karatzas, I., Shreve, S.E. Methods of Mathematical Finance, Applications of Mathematics. New York, NY: Springer-Verlag; 1998;vol. 39.

Kohatsu-Higa, A. Weak approximations: a Malliavin calculus approach. *Math. Compt*. 2001;70(233):135–172.

Kohatsu-Higa, A., Pettersson, R. Variance reduction methods for simulation of densities on Wiener space. *SIAM J. Numer. Anal*. 2002;40(2):431–450.

Kutoyants, Y., Parameter Estimation for Stochastic Processes. Prakasa Rao, B.L.S., eds. Research and Exposition in Mathematics, vol. 6. Berlin, Germany: Heldermann Verlag, 1984.

Nualart, D. The Malliavin Calculus and Related Topics, Probability and its Applications (New York), second ed. Berlin, Germany: Springer-Verlag, 2006.

Pastukhov, S.V. On some probabilistic-statistical methods in technical analysis. *Teor. Veroyatn. Primen*. 2004;49(2):297–316. translation in *Theor. Probab. Appl*. 2005;49(2):245–260.

Prakasa Rao, B.L.S. Semimartingales and Their Statistical Inference. Boca Raton, FL: Chapman and Hall, 1999.

Prakasa Rao, B.L.S. Statistical Inference for Diffusion Type Processes. London, UK: Arnold, 1999.

Shiryaev, A.N. A remark on the quickest detection problems. *Stat. Decis*. 2004;22:79–82.

Talay, D., Tubaro, L. Expansion of the global error for numerical schemes solving stochastic differential equations. *Stoch. Anal. Appl*. 1990;8(4):94–120.

Talay, D., Zheng, Z. Worst case model risk management. *Financ. Stoch*. 2002;6(4):517–537.

Talay, D., Zheng, Z. Approximation of quantiles of components of diffusion processes. *Stoch. Proc. Appl*. 2004;109:23–46.

Yor, M. Exponential Functionals of Brownian Motion and Related Processes, Springer Finance. Berlin, Germany: Springer, 2001.

***Observe that **

**Alexander Schied, schied@orie.cornell.edu, School of ORIE, Cornell University, 232 Rhodes Hall, Ithaca, NY 14853, USA **

**Hans Föllmer, foellmer@math.hu-berlin.de, Institut für Mathematik, Humboldt-Universität, Unter den Linden 6, 10099 Berlin, Germany **

**Stefan Weber, sweber@orie.cornell.edu, School of ORIE, Cornell University, 279 Rhodes Hall, Ithaca, NY 14853, USA **

Financial markets offer a variety of financial positions. The net result of such a position at the end of the trading period is uncertain, and it may thus be viewed as a real-valued function *X *on the set of possible scenarios. The problem of portfolio choice consists in choosing, among all the available positions, a position that is affordable, given the investor’s wealth *w*, and which is optimal with respect to the investor’s preferences.

In its classical form, the problem of portfolio choice involves preferences of von Neumann-Morgenstern type, and a position *X *is affordable if its price does not exceed the initial capital *w*. More precisely, preferences are described by a utility functional *EQ*[*U*(*X*)], where *U *is a concave utility function and *Q *is a probability measure on the set of scenarios, which models the investor’s expectations. The price of a position *X *is of the form *E**[*X*], where *P** is a probability measure equivalent to *Q*. In this classical case, the optimal solution can be computed explicitly in terms of *U, Q*, and *P**. Recent research on the problem of portfolio choice has taken a much wider scope. On the one hand, the increasing role of derivatives and of dynamic hedging strategies has led to a more flexible notion of affordability. On the other hand, there is, nowadays, a much higher awareness of model uncertainty, and this has led to a robust formulation of preferences beyond the von Neumann–Morgenstern paradigm of expected utility.

In
