
Real Returns through Artificial Intelligence

Forecasting weekly Nifty returns by Artificial Neural Networks

Introduction
A great deal of money, from big institutional players as well as ordinary people, is part of the stock market. Stock market = risk + uncertainty; both can be reduced through sound forecasting.

Predicting the Market: Fundamental Analysis


- EIC (Economy-Industry-Company) analysis
- Views companies as assets meant for long-term investment
- Assumes investors are 90% logical and 10% psychological

Predicting the Market: Technical Analysis


- Subjective analysis based on historical price and volume data
- Views stocks as commodities meant for trading
- Assumes investors are 10% logical and 90% psychological

Predicting the Market: Traditional Time-Series Prediction


- Attempts to approximate future values of a time series as a linear combination of historical data
- Includes linear regression models
- Widely accepted by economists, but has difficulty capturing non-linear relationships

Predicting the Market: Efficient Market Hypothesis


- Comes in weak, semi-strong and strong forms
- Contradicts all other forms of analysis; holds that random guessing does just as well
- Market crashes driven by overwhelming fear rather than by randomly occurring information argue against the EMH

Predicting the Market: Chaos Theory


- An attempt to show that order exists within apparent randomness, i.e. the market is a combination of a deterministic and a random process
- Assumes the stock market is chaotic, not simply random
- Thus rejects the EMH

Predicting the Market: Other Computer Techniques


- Let the computer think, analyse and learn (Machine Learning); Artificial Neural Networks are the most popular technique
- Able to trace both linear and non-linear patterns
- Not as widely accepted by economists as the time-series approach

The most important functional unit of the human brain is a class of cells called NEURONS.

Artificial Neural Networks


[Figure: hippocampal neurons, showing dendrites, cell body, axon and synapse. Source: heart.cbl.utoronto.ca/~berj/projects.html]

Schematic

- Dendrites: receive information
- Cell body: processes information
- Axon: carries the processed information to other neurons
- Synapse: junction between an axon end and the dendrites of other neurons
AUTHOR: Mr. Angshuman Saha

An Artificial Neuron
[Figure: an artificial neuron. Inputs X1, X2, ..., Xp arrive through connections with weights w1, w2, ..., wp; information flows through the cell body to the output V = f(I).]

I = w1X1 + w2X2 + ... + wpXp

- Receives inputs X1, X2, ..., Xp from other neurons or from the environment
- Inputs are fed in through connections with weights
- Total input I = weighted sum of the inputs from all sources
- A transfer function (activation function) converts the total input into the output V = f(I)
- The output goes to other neurons or to the environment

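The weighted-sum-plus-transfer-function behaviour described above can be sketched in a few lines of Python (an illustrative sketch, not the deck's SPSS implementation; the sigmoid used here is one common choice of transfer function):

```python
import math

def neuron_output(inputs, weights):
    """A single artificial neuron: the total input I is the weighted sum
    of the inputs; the transfer function f converts I into the output V."""
    total_input = sum(w * x for w, x in zip(weights, inputs))  # I = w1*X1 + ... + wp*Xp
    return 1.0 / (1.0 + math.exp(-total_input))                # V = f(I), sigmoid transfer

# Example: two inputs whose weighted contributions cancel, so I = 0
v = neuron_output([1.0, 1.0], [0.5, -0.5])  # sigmoid(0) = 0.5
```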

ANN: Feed-forward Network

A collection of neurons forms a layer. Information flows from the inputs X1, X2, X3, X4, through the hidden layer, to the outputs y1, y2.

Input Layer
- Each neuron gets ONLY one input, directly from outside

Hidden Layer
- Connects the Input and Output layers

Output Layer
- The output of each neuron goes directly to the outside
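A forward pass through such a layered network can be sketched as follows (all weights below are made-up illustrative values; the single output unit uses an identity activation, matching the output layer of the model described later in the deck):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def feed_forward(x, hidden_weights, output_weights):
    """Forward pass: input layer -> one hidden layer -> single output.
    hidden_weights holds one weight vector per hidden neuron;
    output_weights holds one weight per hidden neuron."""
    hidden = [sigmoid(sum(w * xi for w, xi in zip(ws, x)))
              for ws in hidden_weights]                       # hidden layer, sigmoid
    return sum(w * h for w, h in zip(output_weights, hidden)) # output layer, identity

# 4 inputs, 2 hidden neurons, 1 output (weights are arbitrary examples):
x = [0.1, 0.2, 0.3, 0.4]
hw = [[0.5, -0.2, 0.1, 0.3], [-0.4, 0.6, 0.2, -0.1]]
ow = [1.0, -1.0]
y = feed_forward(x, hw, ow)
```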

One particular ANN model

What do we mean by a particular model?

Input: X1, X2, X3; Output: Y; Model: Y = f(X1, X2, X3)

For an ANN, the algebraic form of f(.) is too complicated to write down.

However, it is characterized by:
- the number of input neurons
- the number of hidden layers
- the number of neurons in each hidden layer
- the number of output neurons
- the WEIGHTS of all the connections

Fitting an ANN model = specifying values for all of these parameters


Training the Model: How to train the model?


- Start with a random set of weights.
- Feed the first observation forward through the net: X1 → network → V1; error = (Y1 − V1).
- Adjust the weights so that this error is reduced (the network fits the first observation well).
- Feed the second observation forward; adjust the weights to fit the second observation well.
- Keep repeating until you reach the last observation. This completes one CYCLE through the data.
- Perform many such training cycles until the overall prediction error E is small.

E = Σ (Yi − Vi)²

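The cycle described above can be sketched with a simplified linear neuron trained by the delta rule (an illustration only: the deck's actual network has a sigmoid hidden layer and is trained in SPSS, and the data below are made up):

```python
def train_cycles(data, weights, eta=0.1, cycles=100):
    """One CYCLE = one pass through all observations, adjusting the
    weights after each observation so its error is reduced."""
    for _ in range(cycles):
        for x, y in data:
            v = sum(w * xi for w, xi in zip(weights, x))        # feed forward: Vi
            error = y - v                                       # (Yi - Vi)
            weights = [w + eta * error * xi                     # adjust weights
                       for w, xi in zip(weights, x)]
    return weights

# Learn y = 2*x1 - x2 from four observations:
data = [([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0),
        ([1.0, 1.0], 1.0), ([2.0, 1.0], 3.0)]
w = train_cycles(data, [0.0, 0.0])  # converges toward [2.0, -1.0]
```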

Back Propagation

In the feed-forward pass, an observation produces a prediction; during back propagation, the weights are adjusted to reduce the prediction error.

Vi, the prediction for the ith observation, is a function of the network weight vector W = (W1, W2, ...). Hence E, the total prediction error, is also a function of W:

E(W) = Σ [Yi − Vi(W)]²
Gradient Descent Method: for every individual weight Wi, the update formula looks like

W(t+1) = W(t) − η · (∂E/∂W)|W(t) + α · (W(t) − W(t−1))


η = learning parameter (between 0 and 1): determines how much of the calculated error sensitivity to weight change is used for the weight correction
α = momentum (between 0 and 1): encourages movement of the weights in the same direction on successive steps
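The update rule can be illustrated on a toy one-dimensional error surface, using the learning rate (0.1) and momentum (0.9) that the deck's training-parameters table reports (the quadratic error function here is purely illustrative):

```python
def train_weight(grad, w0, eta=0.1, alpha=0.9, steps=300):
    """Gradient descent with momentum on a single weight:
    w(t+1) = w(t) - eta * dE/dw + alpha * (w(t) - w(t-1))."""
    w_prev, w = w0, w0
    for _ in range(steps):
        w_next = w - eta * grad(w) + alpha * (w - w_prev)
        w_prev, w = w, w_next
    return w

# Minimise E(w) = (w - 3)^2, whose gradient is 2*(w - 3);
# the weight should settle at the minimum w = 3.
w = train_weight(lambda w: 2.0 * (w - 3.0), w0=0.0)
```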

Data Selection & Preparation


S&P CNX Nifty
- 581 weekly closing values of the Nifty from October 1996 to December 2007 (including 11 lagged values)
- Weekly closing values converted into Weekly Percentage Returns
- Data further divided into:

TRAINING DATA: 528 values from 25 October 1996 to 29 December 2006, including 10 lagged values

VALIDATION DATA: 52 values from 05 January 2007 to 28 December 2007
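The conversion from weekly closing values to weekly percentage returns can be sketched as follows (the closing values below are made-up examples, not Nifty data):

```python
def to_weekly_returns(closes):
    """Convert weekly closing values into weekly percentage returns:
    r_t = 100 * (P_t - P_{t-1}) / P_{t-1}."""
    return [100.0 * (closes[i] - closes[i - 1]) / closes[i - 1]
            for i in range(1, len(closes))]

closes = [1000.0, 1020.0, 1010.0, 1035.0]
returns = to_weekly_returns(closes)  # e.g. first return = 2.0 (%)
```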

Performance Criteria
1. Mean Square Error (MSE) = [Σ (Error)²] / n
2. Root Mean Square Error (RMSE) = √MSE
3. Median Absolute Deviation (MAD) = Median[ |Error − Median(Error)| ]
4. Pearson Coefficient of Correlation (CORR)
5. Sign Test (SIGN): measures the proportion of times the signs are correctly predicted
6. Mean Absolute Error (MAE) = [Σ |Error|] / n
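These six criteria can be computed as below (a self-contained sketch; `metrics` is a hypothetical helper name, not part of the deck's SPSS workflow):

```python
import math
from statistics import median, mean

def metrics(actual, predicted):
    """Compute MSE, RMSE, MAE, MAD, CORR and SIGN for paired series."""
    errors = [a - p for a, p in zip(actual, predicted)]
    n = len(errors)
    mse = sum(e * e for e in errors) / n
    rmse = math.sqrt(mse)
    mae = sum(abs(e) for e in errors) / n
    med = median(errors)
    mad = median(abs(e - med) for e in errors)
    # Pearson correlation between actual and predicted values
    ma, mp = mean(actual), mean(predicted)
    cov = sum((a - ma) * (p - mp) for a, p in zip(actual, predicted))
    corr = cov / math.sqrt(sum((a - ma) ** 2 for a in actual)
                           * sum((p - mp) ** 2 for p in predicted))
    # Proportion of observations whose sign is correctly predicted
    sign = sum((a > 0) == (p > 0) for a, p in zip(actual, predicted)) / n
    return {"MSE": mse, "RMSE": rmse, "MAE": mae,
            "MAD": mad, "CORR": corr, "SIGN": sign}

m = metrics([1.0, -2.0, 3.0], [0.5, -1.0, 2.0])
```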

Weekly returns were regressed on 10 lagged values; only the lag-9 value is statistically significant.

Linear Autoregressive Model forecasting


Model 1 (unstandardized and standardized coefficients):

              B       Std. Error   Beta       t        Sig. (p)
(Constant)    .300    .158                    1.894    .059
yt-1          .040    .044         .040       .905     .366
yt-2          .012    .044         .012       .277     .782
yt-3          .019    .044         .019       .431     .667
yt-4         -.018    .044        -.018      -.406     .685
yt-5          .007    .044         .007       .162     .872
yt-6         -.043    .044        -.044      -.991     .322
yt-7         -.049    .044        -.049     -1.110     .268
yt-8          .059    .044         .059      1.344     .180
yt-9          .089    .044         .089      2.017     .044
yt-10         .016    .044         .016       .357     .721

Linear Autoregressive Model forecasting

Regression Equation obtained with Lag 9 as the explanatory variable:

yt = 0.313 + (0.092 * yt-9)
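A one-step-ahead forecast from this fitted equation can be sketched as follows (the return history below is made up for illustration; `lar_forecast` is a hypothetical helper name):

```python
def lar_forecast(returns):
    """One-step-ahead forecast from the fitted lag-9 model:
    y_t = 0.313 + 0.092 * y_{t-9}  (coefficients from the deck)."""
    return 0.313 + 0.092 * returns[-9]

# Ten most recent weekly % returns, oldest first (made-up values):
history = [1.5, -0.8, 2.1, 0.4, -1.2, 0.9, 3.0, -0.5, 1.0, 0.2]
forecast = lar_forecast(history)  # uses the return observed 9 weeks back
```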

Artificial Neural Networks Model forecasting

Software: SPSS 16.0.1 (add-on Neural Networks module)
Network architecture: 10-1000-1 Multi-Layer Perceptron

Input Layer
- Covariates: Lag 1, Lag 2, Lag 3, Lag 4, Lag 5, Lag 6, Lag 7, Lag 8, Lag 9, Lag 10
- Number of Units: 10
- Rescaling Method for Covariates: None

Hidden Layer(s)
- Number of Hidden Layers: 1
- Number of Units in Hidden Layer 1: 1000
- Activation Function: Sigmoid

Output Layer
- Dependent Variable: Weekly % Nifty Return
- Number of Units: 1
- Rescaling Method for Scale Dependents: None
- Activation Function: Identity
- Error Function: Sum of Squares

Artificial Neural Networks Model forecasting


Network Training Parameters
- Type of Training: BATCH TRAINING
- Training Algorithm: GRADIENT DESCENT
- Learning Rate: 0.1
- Momentum: 0.9
- Interval Center: 0
- Interval Offset: 0.5

Empirical Findings

In-sample Forecasting
          ANN      LAR
MSE       9.8     11.72
RMSE      3.13     3.42
MAE       2.4      2.61
MAD       1.92     2.05
CORR      0.43     0.12
SIGN      0.63     0.55

Empirical Findings

Out-of-sample Forecasting
          ANN      LAR
MSE      11.18    11.61
RMSE      3.34     3.41
MAE       2.55     2.68
MAD       1.75     1.77
CORR      0.24     0.15
SIGN      0.69     0.69

Results and Conclusion: ANN vs LAR (the ANN wins!)


- Indian stock market returns appear to be predictable
- Stock returns contain inherent non-linearities, which the ANN captures, allowing it to outperform the LAR
- ANNs are still an evolving and emerging field of research, with huge future potential as a forecasting aid, especially for ordinary investors

Results and Conclusion: A Fitter Neural Network


- More significant explanatory variables, such as levels of foreign indices, forex rates and gold prices, may improve the network, since these strongly influence the Indian stock market
- Changing time frames, e.g. switching to an intraday analysis
- Using ANNs in combination with other models (the linear autoregressive model, technical analysis, etc.) to yield a better model
- Constantly retraining a sound model with fresh data, so that it adapts to changing market dynamics

Real Returns through Artificial Intelligence


Forecasting weekly Nifty returns by Artificial Neural Networks
Prepared by:

Siddhant Batra
BBS 3F-B

Shaheed Sukhdev College of Business Studies, University of Delhi


Exam Roll No : 5097
