Real Returns Through Artificial Intelligence
Introduction
A lot of money, from big institutional players as well as ordinary investors, flows through the stock market. Stock market = risk + uncertainty; sound forecasting reduces that uncertainty.
The Efficient Market Hypothesis (EMH) contradicts all other forms of analysis and holds that random guessing does no worse than forecasting. Its critics reject the EMH, arguing that market crashes are driven by overwhelming fear rather than by randomly occurring information.
The most important functional unit in the human brain is a class of cells called neurons.
An Artificial Neuron
[Figure: schematic of a biological neuron (dendrites, cell body, axon) beside an artificial neuron with inputs X1, X2, …, Xp and connection weights w1, w2, …, wp]
- Receives inputs X1, X2, …, Xp from other neurons or the environment
- Inputs are fed in through connections with weights
- Total input = weighted sum of inputs from all sources
- A transfer function (activation function) converts the total input to an output
- The output goes to other neurons or the environment
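The weighted-sum-plus-activation behaviour described above can be sketched in a few lines. This is an illustrative sketch, not code from the original study; the sigmoid activation is one common choice of transfer function.

```python
import math

def neuron(inputs, weights, bias=0.0):
    """One artificial neuron: weighted sum of inputs, then a
    sigmoid transfer (activation) function."""
    # Total input = weighted sum of the inputs plus a bias term
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid squashes the total input into the interval (0, 1)
    return 1.0 / (1.0 + math.exp(-total))

# Two inputs X1, X2 with weights w1, w2
output = neuron([0.5, -1.0], [0.8, 0.2])  # ≈ 0.5498
```

With zero total input the sigmoid returns exactly 0.5, which is a handy sanity check.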
Input Layer
- Each neuron gets ONLY one input, directly from outside
Hidden Layer
- Connects Input and Output layers
Output Layer
- Output of each neuron directly goes to outside
AUTHOR: Mr. Angshuman Saha
For an ANN:
An ANN is characterized by:
- # input neurons
- # hidden layers
- # neurons in each hidden layer
- # output neurons
- weights for all the connections
Fitting an ANN model = specifying values for all those parameters
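A quick way to see how many parameters "fitting" entails is to count the connections. The sketch below is an assumption-laden illustration (it assumes one bias per hidden and output neuron, which the slide does not state):

```python
def count_parameters(n_inputs, hidden_sizes, n_outputs):
    """Count the weights an ANN fit must specify.
    Assumes one bias per hidden/output neuron (an assumption
    not stated in the slides)."""
    sizes = [n_inputs] + list(hidden_sizes) + [n_outputs]
    # Each layer-to-layer block has (fan_in + 1 bias) * fan_out weights
    return sum((sizes[i] + 1) * sizes[i + 1] for i in range(len(sizes) - 1))

# The 10-1000-1 network used later in this deck
total = count_parameters(10, [1000], 1)  # → 12001
```

Even this modest architecture involves over twelve thousand free parameters, which is why a systematic fitting procedure such as back propagation is needed.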
E = Σi (Yi − Vi)²
Back Propagation
Feed forward
Weight adjustment during back propagation
Vi, the prediction for the ith observation, is a function of the network weight vector W = (W1, W2, …). Hence E, the total prediction error, is also a function of W.
E(W) = Σi [Yi − Vi(W)]²
Gradient descent method: for every individual weight Wi, the update formula is Wi(new) = Wi(old) − η · ∂E/∂Wi, where η is the learning rate.
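The update rule can be demonstrated on the simplest possible case: a single weight w in the one-neuron model v = w·x with squared error E = (y − v)². This is a minimal sketch of gradient descent, not the SPSS back-propagation implementation; the learning rate 0.1 and the toy data are illustrative assumptions.

```python
def gd_step(w, x, y, lr=0.1):
    """One gradient-descent update for the model v = w * x
    with squared error E = (y - w*x)**2.
    dE/dw = -2 * (y - w*x) * x, so w_new = w - lr * dE/dw."""
    grad = -2.0 * (y - w * x) * x
    return w - lr * grad

# Repeated updates drive w toward the error-minimising value
w = 0.0
for _ in range(50):
    w = gd_step(w, x=2.0, y=4.0)  # true weight is 2.0
```

After 50 steps w has converged to 2.0, the value that makes the prediction error zero; back propagation applies the same idea to every weight in the network at once.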
Performance Criteria
1. Mean Square Error (MSE) = Σ(Error)² / n
2. Root Mean Square Error (RMSE) = √MSE
3. Median Absolute Deviation (MAD) = Median[ |Error − Median(Error)| ]
4. Pearson Coefficient of Correlation (CORR)
5. Sign Test (SIGN): measures the proportion of times the sign of the return is correctly predicted
6. Mean Absolute Error (MAE) = Σ|Error| / n
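All six criteria can be computed with the standard library alone. A minimal sketch (the SIGN rule here treats two positive or two non-positive values as an agreement, which is one reasonable reading of the slide):

```python
import statistics

def criteria(actual, predicted):
    """Compute the six performance criteria on paired series."""
    n = len(actual)
    err = [a - p for a, p in zip(actual, predicted)]
    mse = sum(e * e for e in err) / n
    mae = sum(abs(e) for e in err) / n
    med = statistics.median(err)
    mad = statistics.median([abs(e - med) for e in err])
    # Pearson correlation between actual and predicted
    ma, mp = sum(actual) / n, sum(predicted) / n
    cov = sum((a - ma) * (p - mp) for a, p in zip(actual, predicted))
    corr = cov / (sum((a - ma) ** 2 for a in actual) ** 0.5
                  * sum((p - mp) ** 2 for p in predicted) ** 0.5)
    # Proportion of observations whose sign is correctly predicted
    sign = sum((a > 0) == (p > 0) for a, p in zip(actual, predicted)) / n
    return {"MSE": mse, "RMSE": mse ** 0.5, "MAE": mae,
            "MAD": mad, "CORR": corr, "SIGN": sign}
```

For example, `criteria([1.0, -2.0, 3.0], [0.5, -1.0, 2.0])` gives MSE 0.75, MAD 0.5, and SIGN 1.0, since every sign matches.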
Weekly returns were regressed on 10 lagged values; only the lag-9 coefficient is statistically significant.
Model 1 — significance of each term:

Term         Sig. (p)
(Constant)   .059
yt−1         .366
yt−2         .782
yt−3         .667
yt−4         .685
yt−5         .872
yt−6         .322
yt−7         .268
yt−8         .180
yt−9         .044
yt−10        .721

Fitted model: yt = 0.313 + (0.092 × yt−9)
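The fitted linear autoregressive (LAR) equation reduces to a one-line forecast function; this restates the slide's equation only, with an illustrative input value:

```python
def lar_forecast(y_lag9):
    """Weekly-return forecast from the fitted model: only the
    constant and the lag-9 term survived significance testing."""
    return 0.313 + 0.092 * y_lag9

# A 1% return nine weeks ago implies a forecast of 0.405%
forecast = lar_forecast(1.0)  # → 0.405
```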
Software: SPSS 16.0.1 (add-on Neural Networks module)
Network architecture: 10-1000-1 multilayer perceptron

Input layer
- Covariates: Lag 1, Lag 2, …, Lag 10
- Number of units: 10
- Rescaling method for covariates: None

Hidden layer(s)
- Number of hidden layers: 1
- Number of units in hidden layer 1: 1000
- Activation function: Sigmoid

Output layer
- Dependent variable: Weekly % Nifty Return
- Number of units: 1
- Rescaling method for scale dependents: None
- Activation function: Identity
- Error function: Sum of Squares
Empirical Findings
In-sample Forecasting
        MSE     RMSE    MAE     MAD     CORR    SIGN
ANN     9.80    3.13    2.40    1.92    0.43    0.63
LAR     11.72   3.42    2.61    2.05    0.12    0.55
Empirical Findings
Out-of-sample Forecasting
        MSE     RMSE    MAE     MAD     CORR    SIGN
ANN     11.18   3.34    2.55    1.75    0.24    0.69
LAR     11.61   3.41    2.68    1.77    0.15    0.69
Siddhant Batra
BBS 3F-B