
CS870 Spring, 2012

Applied Optimization
for Finance and Machine Learning

Yuying Li

May 1, 2012


Course Information

Tuesday and Thursday, 1:30-3:00pm, DC 3313

Prerequisites: It is assumed that you have the following
background: calculus and linear algebra, an introductory
course in numerical analysis, an introductory course in
statistics, and the ability to program in Matlab. No finance or
machine learning background is assumed.

Course Work: The final grade will be determined by two
assignments (40%) and a project report (60%).


Reference Book (available online)

Course website:

http://www.cs.uwaterloo.ca/~yuying/Courses/CS870_12/870.html

Discussion of optimization theory will follow Convex
Optimization by Boyd and Vandenberghe, which is
available at http://www.stanford.edu/~boyd/cvxbook/.

Research papers on financial and machine learning
applications will also be used.


Main Course Objectives

Gain understanding of basic optimization theory (and
computational methods)

Applications
Finance
Data mining


Optimization theory
Discussion will include:

Convex sets and convex functions

Linear programming

Quadratic programming

Convex optimization

Duality and Kuhn-Tucker conditions

Basic ideas of computational methods

Nonconvex optimization


Optimization in Finance and Data Mining

Critical problems in finance and data mining (and many other
applications):

Model estimation/calibration

Consistency: model needs to be consistent with observed
data

Generalization: model needs to perform out-of-sample


These goals can often be achieved by

minimizing fitting error

regularization to achieve simplicity/smoothness
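Both ingredients can be seen in ridge-regularized least squares. A minimal one-dimensional sketch in Python (the data and lambda values below are made up for illustration):

```python
# Minimal sketch: ridge-regularized least squares in one dimension.
# The first term measures fitting error (consistency with the data);
# the lam * w^2 term is the regularizer that shrinks w for simplicity.

def ridge_1d(xs, ys, lam):
    """Closed-form minimizer of sum_i (y_i - w*x_i)^2 + lam * w^2."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.1]           # roughly y = 2x with noise

w_fit = ridge_1d(xs, ys, lam=0.0)   # pure fitting: w is near 2
w_reg = ridge_1d(xs, ys, lam=30.0)  # heavy regularization shrinks w
print(w_fit, w_reg)
```

Larger lam trades some fitting accuracy for a simpler (smaller-norm) model, which often improves out-of-sample performance.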

Given a model, decisions can be made by choosing strategies to
optimally achieve desired goals


Application Examples
Examples include

Finance: Markowitz portfolio optimization, CVaR (and
other risk measures) optimization, maximizing the Omega
performance measure, option model calibration

Machine learning: support vector machine classification
and regression, data fitting, optimal feature selection

Machine learning in finance


Need For Quantitative Risk Management

Some Prior Financial Disasters:

1987 market crash
1994 Orange County bankruptcy due to a $1.6 billion loss
on investments
In 1995, a loss of 700 million forced Barings, the oldest
bank in the UK, to cease trading
In 1998, the downfall of Long-Term Capital Management
(LTCM)


Extremes Matter

We need to quantitatively measure risk.

Specifically, we need to address unexpected, abnormal, or
extreme outcomes rather than expected, normal, or average
outcomes.

"Improving the characterization of the distribution of extreme
values is of paramount importance." (Alan Greenspan, Joint
Central Bank Research Conference, 1995)


In his 2007 book, Nassim Taleb used the term "Black Swan" to
denote unpredicted events with huge impact.

The term comes from the 17th-century European assumption
that "all swans are white": a black swan was a symbol for
something that was impossible or could not exist.

In the 18th century, the discovery of black swans in Western
Australia transformed the term to imply that a perceived
impossibility may actually come to pass.


Quantitative Risk Management:
Portfolio Optimization

Objective: minimize risk while maximizing profit

    min_x       risk(x)
    subject to  return(x) >= r_min

where r_min is a minimum acceptable expected return.

A multicriteria optimization
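A minimal numerical sketch of this trade-off for two assets, with made-up expected returns and covariances; a crude grid search over the portfolio weight stands in for a proper QP solver:

```python
# Minimal sketch of the risk/return trade-off for two assets.
# mu and Sigma are illustrative made-up values, not real data.

mu = [0.05, 0.12]                       # expected returns
Sigma = [[0.04, 0.01], [0.01, 0.09]]    # covariance matrix

def risk(x):
    """Portfolio variance x^T Sigma x."""
    return sum(x[i] * Sigma[i][j] * x[j] for i in range(2) for j in range(2))

def ret(x):
    """Portfolio expected return mu^T x."""
    return mu[0] * x[0] + mu[1] * x[1]

def min_risk(r_min, steps=10000):
    """Grid search for min_x risk(x) s.t. ret(x) >= r_min,
    x1 + x2 = 1, x >= 0 (long-only, fully invested)."""
    best = None
    for k in range(steps + 1):
        x = [k / steps, 1 - k / steps]
        if ret(x) >= r_min and (best is None or risk(x) < risk(best)):
            best = x
    return best

x_lo = min_risk(0.06)   # mild target: mostly the low-risk asset
x_hi = min_risk(0.11)   # high target forces weight into the risky asset
print(x_lo, risk(x_lo), x_hi, risk(x_hi))
```

Raising the return target shrinks the feasible set and can only increase the minimum achievable risk, which is exactly the multicriteria tension above.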


Typical risk measures:

Variance: average quadratic deviation from the expected
outcome

Value-at-Risk (VaR)

Conditional-VaR (CVaR)
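These measures can be estimated from a sample of losses. A minimal historical-simulation sketch on toy data (the sign convention, larger loss = worse, and the simple quantile index are assumptions of this sketch):

```python
# Minimal sketch: empirical VaR and CVaR from a loss sample
# (historical simulation). Convention: larger loss = worse outcome.

def var_cvar(losses, alpha=0.95):
    """Empirical alpha-VaR (a loss quantile) and alpha-CVaR (average
    loss in the alpha-tail). The index rule below is one simple choice."""
    s = sorted(losses)
    k = int(alpha * len(s))   # first index inside the alpha-tail
    tail = s[k:]              # losses at or beyond the VaR level
    var = s[k]
    cvar = sum(tail) / len(tail)
    return var, cvar

losses = [-2, -1, 0, 0, 1, 1, 2, 3, 5, 10]   # toy profit-and-loss data
var80, cvar80 = var_cvar(losses, alpha=0.80)
print(var80, cvar80)
```

CVaR is always at least VaR, since it averages the losses beyond the VaR threshold; here the 80% VaR is 5 while the 80% CVaR is 7.5.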


Maximize a performance measure:

Sharpe ratio

Omega ratio

Sortino ratio
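A minimal sketch computing the three ratios on a toy return sample, following the standard textbook definitions with a zero risk-free/threshold rate and a 1/n variance (assumptions of this sketch):

```python
import math

# Minimal sketch of the three performance ratios on toy returns.

def ratios(returns, rf=0.0):
    n = len(returns)
    mean = sum(returns) / n
    std = math.sqrt(sum((r - mean) ** 2 for r in returns) / n)
    downside = math.sqrt(sum(min(r - rf, 0.0) ** 2 for r in returns) / n)
    gains = sum(max(r - rf, 0.0) for r in returns)
    shortfalls = sum(max(rf - r, 0.0) for r in returns)
    sharpe = (mean - rf) / std          # excess return per unit volatility
    sortino = (mean - rf) / downside    # penalizes only downside deviation
    omega = gains / shortfalls          # gains vs. shortfalls past threshold
    return sharpe, sortino, omega

sh, so, om = ratios([0.02, -0.01, 0.03, 0.01, -0.02])
print(sh, so, om)
```

Unlike the Sharpe ratio, Sortino and Omega do not punish upside variability, which is why they can rank the same return stream differently.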


Data Mining

Large amounts of data are available, waiting for knowledge
discovery:

Business

Health care

Bioinformatics

Financial applications


Machine Learning: SVM Classification

Maximize the Separation Margin and Minimize the
Classification Error with Respect to the Separating Hyperplane

Linear discrimination
separate two sets of points {x1, . . . , xN}, {y1, . . . , yM} by a hyperplane:

    aT xi + b > 0, i = 1, . . . , N,    aT yi + b < 0, i = 1, . . . , M

homogeneous in a, b, hence equivalent to

    aT xi + b >= 1, i = 1, . . . , N,    aT yi + b <= -1, i = 1, . . . , M

a set of linear inequalities in a, b

Robust linear discrimination

(Euclidean) distance between the hyperplanes

    H1 = {z | aT z + b = 1}
    H2 = {z | aT z + b = -1}

is dist(H1, H2) = 2/||a||_2

to separate two sets of points with maximum margin,

    minimize    (1/2)||a||_2
    subject to  aT xi + b >= 1,  i = 1, . . . , N        (1)
                aT yi + b <= -1, i = 1, . . . , M

(after squaring the objective) a QP in a, b
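The margin formula and the normalized inequalities can be checked numerically. A minimal sketch with a hand-picked toy hyperplane and points (all values illustrative; this verifies feasibility only, it does not solve the QP):

```python
import math

# Minimal sketch: check the normalized separation inequalities and
# the 2/||a||_2 margin for a hand-picked candidate hyperplane.

xs = [(2.0, 2.0), (3.0, 3.0)]   # one class
ys = [(0.0, 0.0), (1.0, 0.0)]   # the other class
a, b = (1.0, 1.0), -3.0         # candidate hyperplane a^T z + b = 0

def side(p):
    """Signed value a^T p + b."""
    return a[0] * p[0] + a[1] * p[1] + b

# normalized constraints: a^T x_i + b >= 1 and a^T y_i + b <= -1
feasible = all(side(x) >= 1 for x in xs) and all(side(y) <= -1 for y in ys)

margin = 2.0 / math.hypot(*a)   # dist(H1, H2) = 2 / ||a||_2
print(feasible, margin)         # True, sqrt(2)
```

Minimizing ||a||_2 over all feasible (a, b) then maximizes this margin, which is exactly the QP above.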

