Nov 18, 2011

It is not too hard to simulate (to model) random variables. In Excel, for example, we can use =NORMSINV(RAND()) to create standard normal random variables. The RAND() function draws from a uniform distribution on the interval (0,1). NORMSINV() translates that random number into the z-value that corresponds to the given cumulative probability. For example, =NORMSINV(5%) returns -1.645 because 5% of the area under the normal curve lies to the left of -1.645 standard deviations.

But no realistic asset or portfolio contains only one risk factor. To model several risk factors, we could simply generate multiple random variables. Put more technically, the realistic modeling scenario is a multivariate distribution function that models multiple random variables. The problem with this approach, if we just stop there, is that correlations are not included. What we really want is to simulate random variables in such a way that we capture, or reflect, the correlations between them. In short, we want random but correlated variables.

The typical way to incorporate the correlation structure is a Cholesky decomposition (or factorization). For FRM candidates, Jorion briefly touches on the Cholesky factorization in the 4th Edition FRM Handbook (pages 99 to 100); but if you are not familiar with matrix math, this may not be a sufficient introduction.

In the EditGrid spreadsheet below, I performed a Cholesky decomposition for a simple three-asset case. It can be viewed separately or opened in a new sheet if you would like to edit it yourself. Please note: the decomposition below is not the endgame. It is a step along the way. It produces a matrix that can be used to generate returns that are random but correlated.

The sheet has four small sections; each step is numbered in green. The lower triangle (L) is the result of the Cholesky decomposition. It is the thing we can use to simulate random variables, itself "informed" by our covariance matrix.
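The same inverse-CDF trick can be sketched in Python's standard library, where statistics.NormalDist plays the role of NORMSINV and random() plays the role of RAND():

```python
from random import random
from statistics import NormalDist

# Excel's =NORMSINV(p) corresponds to the inverse CDF of the standard normal.
z = NormalDist().inv_cdf(0.05)
print(round(z, 3))  # -1.645, matching =NORMSINV(5%)

# Excel's =NORMSINV(RAND()): push a uniform (0,1) draw through the
# inverse CDF to obtain one standard normal random variable.
draw = NormalDist().inv_cdf(random())
```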

The covariance matrix. This contains the implied correlation structure; in fact, a covariance matrix can itself be decomposed into a correlation matrix and a volatility vector. The covariance matrix (R) will be decomposed into a lower-triangle matrix (L) and an upper-triangle matrix (U). Note they are mirrors (transposes) of each other: both have identical diagonals, and their zero and nonzero elements are merely "flipped".

Given that R = LU, we can solve for all of the matrix elements: a, b, c (the diagonal) and x, y, z. Note that this holds by definition: that's what a Cholesky decomposition is, the solution that produces two triangular matrices whose product equals the original (covariance) matrix. (Note: if you play with the inputs, it is possible to produce an error. The matrix must be positive definite, so not all matrices can be decomposed this way.)

4. Given the solution for the matrix elements, I calculated the product of the triangular matrices to ensure that the product does equal the original covariance matrix (i.e., does LU = R?). Note that in Excel a single array formula can be used with =MMULT(); in EditGrid, it is just a set of MMULT() formulas.
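The whole pipeline, decompose, verify LU = R, then simulate correlated returns, can be sketched with NumPy. The three-asset covariance matrix here is illustrative, not the spreadsheet's actual numbers:

```python
import numpy as np

# Hypothetical three-asset covariance matrix (illustrative values only).
R = np.array([[0.040, 0.010, 0.006],
              [0.010, 0.090, 0.012],
              [0.006, 0.012, 0.160]])

# Cholesky decomposition: R = L @ L.T with L lower triangular.
L = np.linalg.cholesky(R)

# Check, as in the spreadsheet's step 4: does the product reproduce R?
assert np.allclose(L @ L.T, R)

# Simulate correlated returns: draw independent standard normals z,
# then L @ z has covariance L @ I @ L.T = R.
rng = np.random.default_rng(0)
z = rng.standard_normal((3, 100_000))
returns = L @ z
# np.cov(returns) will be approximately R for a large sample.
```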

Cholesky Matrix


If we think of matrices as multi-dimensional generalizations of numbers, we may draw useful analogies between numbers and matrices. Not least of these is an analogy between positive numbers and positive definite matrices. Just as we can take square roots of positive numbers, so can we take "square roots" of positive definite matrices.

Positive Definite Matrices

A symmetric matrix x is said to be:

- positive definite if bxb' > 0 for all row vectors b ≠ 0;
- positive semidefinite if bxb' ≥ 0 for all row vectors b;
- negative definite if bxb' < 0 for all row vectors b ≠ 0;
- negative semidefinite if bxb' ≤ 0 for all row vectors b;
- indefinite if none of the above hold.

These definitions may seem abstruse, but they lead to an intuitively appealing result. A symmetric matrix x is:

- positive definite if all its eigenvalues are real and positive;
- positive semidefinite if all its eigenvalues are real and nonnegative;
- negative definite if all its eigenvalues are real and negative;
- negative semidefinite if all its eigenvalues are real and nonpositive;
- indefinite if none of the above hold.

It is useful to think of positive definite matrices as analogous to positive numbers and positive semidefinite matrices as analogous to nonnegative numbers. The essential difference between semidefinite matrices and their definite analogues is that the former can be singular whereas the latter cannot. This follows because a matrix is singular if and only if it has a 0 eigenvalue.
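The eigenvalue characterization translates directly into code. A minimal sketch with NumPy (the classify name and the tolerance are my own choices, not from the text):

```python
import numpy as np

def classify(x, tol=1e-12):
    """Classify a symmetric matrix by the signs of its eigenvalues."""
    w = np.linalg.eigvalsh(x)  # eigenvalues of a symmetric matrix are real
    if np.all(w > tol):
        return "positive definite"
    if np.all(w >= -tol):
        return "positive semidefinite"
    if np.all(w < -tol):
        return "negative definite"
    if np.all(w <= tol):
        return "negative semidefinite"
    return "indefinite"

print(classify(np.array([[2.0, 1.0], [1.0, 2.0]])))   # eigenvalues 1, 3
print(classify(np.array([[1.0, 1.0], [1.0, 1.0]])))   # eigenvalues 0, 2
print(classify(np.array([[1.0, 0.0], [0.0, -1.0]])))  # eigenvalues 1, -1
```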

Matrix "Square Roots"

Nonnegative numbers have real square roots. Negative numbers do not. An analogous result holds for matrices. Any positive semidefinite matrix h can be factored in the form h = kk' for some real square matrix k, which we may think of as a matrix square root of h. The matrix k is not unique, so multiple factorizations of a given matrix h are possible. This is analogous to the fact that square roots of positive numbers are not unique either. If h is nonsingular (positive definite), k will be nonsingular. If h is singular, k will be singular.
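The non-uniqueness of k is easy to demonstrate: the Cholesky factor and the symmetric square root from the eigendecomposition are two different matrices k, and both satisfy kk' = h. A sketch with an illustrative 2x2 matrix:

```python
import numpy as np

h = np.array([[4.0, 2.0],
              [2.0, 3.0]])  # illustrative positive definite matrix

# One "square root": the lower-triangular Cholesky factor.
k1 = np.linalg.cholesky(h)

# Another: the symmetric square root built from the eigendecomposition.
w, v = np.linalg.eigh(h)
k2 = v @ np.diag(np.sqrt(w)) @ v.T

# Both satisfy k k' = h, so the matrix square root is not unique.
assert np.allclose(k1 @ k1.T, h)
assert np.allclose(k2 @ k2.T, h)
```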

Cholesky Factorization

A particularly easy factorization to perform is the Cholesky factorization. Any positive semidefinite matrix h has a factorization of the form h = gg', where g is a lower triangular matrix. Solving for g is straightforward. Suppose we wish to factor the positive definite matrix

[1 ]

[2 ]

We solve for the elements of g one at a time, by inspection at first, and then using the elements already obtained. Proceeding in this manner, we obtain the matrix g in 6 steps:

[3 ]

The above example illustrates the Cholesky algorithm, which generalizes to higher-dimensional matrices. The algorithm entails two types of calculations:

1. Calculating diagonal elements g(i,i) (steps 1, 4, and 6) entails taking a square root.
2. Calculating off-diagonal elements g(i,j), i > j, (steps 2, 3, and 5) entails dividing some number by the last-calculated diagonal element.

For a positive definite matrix h, all diagonal elements of g will be nonzero. Solving for each entails taking the square root of a nonnegative number. We may take either the positive or negative root. Standard practice is to take only positive roots. Defined in this manner, the Cholesky matrix of a positive definite matrix is unique.

The same algorithm applies to singular positive semidefinite matrices h, but the result is not generally called a Cholesky matrix; this is just an issue of terminology. When the algorithm is applied to a singular h, at least one diagonal element of g equals 0. If only the last diagonal element equals 0, we can obtain g as we did in our example. If some other diagonal element equals 0, some off-diagonal elements will be indeterminate. We can set such an indeterminate value equal to any value within an interval [-a, a], for some a ≥ 0. Consider the matrix
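Those two calculation types can be sketched in a few lines of Python. This is a standard column-by-column formulation of the algorithm for the strictly positive definite case (the function name and error handling are my own):

```python
import math

def cholesky(h):
    """Lower-triangular g with g g' = h, for positive definite h."""
    n = len(h)
    g = [[0.0] * n for _ in range(n)]
    for j in range(n):
        # Diagonal element: square root of what remains of h[j][j]
        # after subtracting contributions of earlier columns.
        s = h[j][j] - sum(g[j][k] ** 2 for k in range(j))
        if s <= 0:
            raise ValueError("matrix is not positive definite")
        g[j][j] = math.sqrt(s)
        # Off-diagonal elements: divide by the just-calculated diagonal.
        for i in range(j + 1, n):
            g[i][j] = (h[i][j] - sum(g[i][k] * g[j][k] for k in range(j))) / g[j][j]
    return g

h = [[4.0, 2.0], [2.0, 3.0]]  # illustrative positive definite matrix
g = cholesky(h)
# g = [[2.0, 0.0], [1.0, sqrt(2)]]; multiplying g by g' recovers h.
```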

[4 ]

[5 ]

In the fifth step, we multiply the second row of g by the third column of g' to obtain

[6 ]

We already know the second diagonal element of g equals 0, so we have

[7 ]

[8 ]

which provides us with no means of determining the off-diagonal element g(3,2). It is indeterminate, so we set it equal to a variable x and proceed with the algorithm. We obtain

[9 ]

For the last diagonal element g(3,3) to be real, we can set x equal to any value in the interval [-3, 3]. The interval of acceptable values for indeterminate components will vary, but it will always include 0. For this reason, it is standard practice to set all indeterminate values equal to 0. With this selection, we obtain

[10 ]

We can leave g in this form, or we can delete the second column, which contains only 0s. The resulting 3 × 2 matrix provides a valid factorization of h, since

[11 ]

If a matrix h is not positive semidefinite, our Cholesky algorithm will, at some point, attempt to take a square root of a negative number and fail. Accordingly, the Cholesky algorithm is a means of testing if a matrix is positive semidefinite.
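That observation turns the factorization itself into a test. A sketch with NumPy (note that np.linalg.cholesky also fails on singular positive semidefinite matrices, so it tests the strict, definite property):

```python
import numpy as np

def is_positive_definite(h):
    """Test positive definiteness by attempting a Cholesky factorization."""
    try:
        np.linalg.cholesky(h)
        return True
    except np.linalg.LinAlgError:
        return False

print(is_positive_definite(np.array([[2.0, 1.0], [1.0, 2.0]])))  # True
print(is_positive_definite(np.array([[1.0, 2.0], [2.0, 1.0]])))  # False (indefinite)
```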

Computational Issues

In exact arithmetic, the Cholesky algorithm will run to completion with all diagonal elements > 0 if and only if the matrix h is positive definite. It will run to completion with all diagonal elements ≥ 0 and at least one diagonal element = 0 if and only if the matrix h is singular positive semidefinite. Things are more complicated if arithmetic is performed with rounding, as is done on a computer. Off-diagonal elements are obtained by dividing by diagonal elements. If a diagonal element is close to 0, any roundoff error may be magnified in such a division. For example, if a diagonal element should be .00000001, but roundoff error causes it to be calculated as .00000002, division by this number will yield an off-diagonal element that is half of what it should be. An algorithm is said to be unstable if roundoff error can be magnified in this way or if it can cause the algorithm to fail. The Cholesky algorithm is unstable for singular positive semidefinite matrices h. It is also unstable for positive definite matrices h that have one or more eigenvalues close to 0.
