Evolving Chart Pattern Sensitive Neural Network Based Forex Trading Agents
Gene I. Sher, CorticalComputer@gmail.com, University of Central Florida
enormous daily volume, there are no sudden interday price changes, and there are no lags in the market, unlike in the stock market. This paper presents the first of its kind, an introduction, discussion, analysis, and application/benchmarking (to the author's knowledge) of a topology and weight evolving neural network (TWEANN) algorithm for the evolution of geometry/pattern sensitive, substrate encoded trading agents that use the actual closing price charts as input. In this paper I will compare the Price Chart Input (PCI) using neural network (NN) based traders using substrate encoding, to the standard, direct encoded NN based trading agents which use Price List Input (PLI), in which the time series of closing prices is encoded as a list of said prices. Finally, all of these NN based trading agents will be evolved using the memetic algorithm based TWEANN system called Deus Ex Neural Network (DXNN) [1,2]. It must be noted that it is not the goal of this paper to compare one TWEANN using PCI to another TWEANN using PCI. The use of the DXNN system for this experiment and this paper was due to the ease with which it was possible to apply it to the given problem. The only goal of this paper is to demonstrate the utility of this new method: the use of candle-stick style charts as direct input to the geometry sensitive NN system evolved using a TWEANN system.
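To make the contrast concrete, here is a minimal sketch (illustrative Python with made-up prices; the function names are mine, not part of DXNN) of what the two input encodings look like:

```python
# Illustrative sketch contrasting the two input encodings: PLI feeds the NN
# the raw closing-price window, while PCI feeds a 2D chart plane rasterized
# from that same window. Prices below are made up.

def pli_input(prices, window):
    """Price List Input: the most recent `window` closing prices."""
    return prices[-window:]

def pci_shape(window, v_res):
    """Price Chart Input dimensions: `window` columns (historical points)
    by `v_res` rows (price buckets), e.g. 100x20 as in Fig-3.4A."""
    return (v_res, window)

prices = [1.4702, 1.4713, 1.4695, 1.4730, 1.4741]
vec = pli_input(prices, 3)      # [1.4695, 1.4730, 1.4741]
shape = pci_shape(100, 20)      # (20, 100): 2000 input pixels per chart
```

The point of the comparison is that the PLI vector discards all 2D geometry, while the PCI plane preserves it for a geometry-aware (substrate encoded) NN.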
The use of TWEANNs in the financial market has thus far been very seldom explored, and to this author's knowledge, these state of the art neuroevolutionary algorithms (DXNN, NEAT [3], HyperNEAT [6], EANT [4], EANT2 [5]) have not yet been thoroughly tested, benchmarked, and applied within this field. In this paper I will not use the evolved NN based agents to predict currency pair prices, but instead evolve autonomously trading NN based agents. Neural networks have shown time and time again [7,8,9,10,11,12,13,14] that, due to their highly robust nature and universal function approximation qualities, they fit well in the application to financial market analysis. In the published literature though [15,17,18,19,20], the most commonly used neural network learning algorithm is backpropagation. This algorithm, being a local optimization algorithm, can and does at times get stuck in local optima. Furthermore, it is usually necessary for the researcher to set up the NN topology beforehand, and since the knowledge of what type of NN topology works best for which dataset and market is very difficult, or even impossible, to deduce, one usually has to randomly create NN topologies and then try them out before settling on some particular system. TWEANN systems are relatively new, and they have not yet been tested thoroughly in financial markets. But it is exactly these types of systems that can evolve not only synaptic weights, but also the NN topologies, and thus perform a global search, evolving the most optimal NN topology and synaptic weights at the same time. The use of such a TWEANN system in evolving NN based traders is exactly what will be explored in this paper. Outline: In Section 2 I will briefly discuss the Foreign Exchange market, the various market hypotheses, and the approaches that
ABSTRACT
Though machine learning has been applied to the foreign exchange market for algorithmic trading for quite some time now, and neural networks (NN) have been shown to yield positive results, in most modern approaches the NN systems are optimized through traditional methods, like the backpropagation algorithm, and their input signals are price lists and lists composed of other technical indicator elements. The aim of this paper is twofold: the presentation and testing of the application of topology and weight evolving artificial neural network (TWEANN) systems to automated currency trading, and to demonstrate the performance when using Forex chart images as input to geometrical regularity aware, indirectly encoded neural network systems, enabling them to use the patterns and trends within when trading. This paper presents the benchmark results of NN based automated currency trading systems evolved using TWEANNs, and compares the performance and generalization capabilities of direct encoded NNs which use the standard sliding-window based price vector inputs, and indirect (substrate) encoded NNs which use charts as input. The TWEANN algorithm I will use in this paper to evolve these currency trading agents is the memetic algorithm based TWEANN system called the Deus Ex Neural Network (DXNN) platform.
General Terms
Algorithms
Keywords
Neural Network, TWEANN, Evolutionary Computation, Neuroevolution, Memetic Algorithm, Artificial Life, Financial Analysis, Forex.

1. INTRODUCTION
Foreign exchange (also known as Forex, or FX) is a global and decentralized financial market for currency trading. It is the largest financial market, with a daily turnover of 4 trillion US dollars. The spot market, specializing in the immediate exchange of currencies, comprises almost 40% of all FX transactions, 1.5 trillion dollars daily. Because the foreign exchange market is open 24 hours a day, closing only for the weekend, and because of the
[Figure: the standard approach — a sliding window of prices fed to a NN system, which outputs a prediction.]
But how do we allow the NN to have access to this geometrical pattern data, and also be able to actually use it? We can not simply convert these charts to bitmaps and feed them to the NN, because the bitmap encoded chart will still be just a long vector, and the NN will not only have to deal with an input of high dimensionality
[Fig-3.3: Substrate encoding. Gray = -1, Dark Gray = 0, Black = 1. Sensor input plane, substrate neurode layers, and actuator output; each axis spans -1 to 1; not all connections are shown.]

As shown in the above figure, a substrate is a hypercube of neurodes, and though in the above figure the dimensionality is 3, it can be anything. Each neurode in one layer is connected in a feedforward fashion to the neurodes in the next layer, plane, cube, or hypercube, depending on the dimensionality of the entire substrate. All dimensions of the substrate have a cartesian coordinate, and the length of each side of the substrate is normalized such that the coordinates for each axis are between -1 and 1. The substrate is impregnated with neurodes, with the number of neurodes per dimension set by the researcher, and each neurode has a coordinate based on its location in the substrate. With this setup, the weight that each neurode has for the presynaptic connection with other neurodes is determined by the neural network to whom the substrate belongs. It is the neural network that calculates the synaptic weights for the connected neurodes based on the coordinates of those neurodes: the pre- and post-synaptic neurode coordinates are used as input to the NN. Because the NN deals with the coordinates, while the actual data input is sent to the substrate, the substrate encoded NN system is aware of the geometric regularities within the input. And it is these geometric regularities that technical analysis tries to find and exploit. With this type of indirectly encoded neural network we can analyze the price charts directly, making use of the geometrical patterns and trends within. Because each neurode in the substrate receives a connection from every neurode or input element in the preceding hyper-layer, the chart that is fed to the substrate must first be reconstructed at a resolution that still retains the important geometrical information, yet is computationally viable as input. For example, if the sliding chart that is fed to the substrate is 1000x1000, which represents 1000 historical points (horizontal axis), with the resolution of the price data being (MaxPlotPrice - MinPlotPrice)/1000 (the vertical axis), then each neurode in the first hyperplane of the substrate will have 1000000 inputs. If the substrate has three dimensions, and we set it up such that the input signals are plane encoded and located at Z = -1, with 10x10 neurons in the first neurode plane located at Z = 0, and 1x1 neurons in the third plane located at Z = 1 (as shown in Fig-3.3), then each of the 100 neurons at Z = 0 receives 1000000 inputs, so each has 1000000 synaptic weights, and for this feedforward substrate to process a single input signal would require 100*1000000 + 1*100 calculations, where the 1*100 calculations are performed by the neuron at Z = 1, which is the output neuron of the substrate. This means that there would be roughly 100000000 calculations per single input, per single processed price chart.
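The cost arithmetic above can be checked with a short back-of-the-envelope script (a sketch assuming the example's 1000x1000 input plane, 10x10 hidden plane, and single output neurode):

```python
# Back-of-the-envelope check of the substrate cost quoted above: a fully
# connected 1000x1000 input plane -> 10x10 hidden plane (Z = 0) -> single
# output neurode (Z = 1).

input_plane  = 1000 * 1000   # chart pixels fed to the substrate
hidden_plane = 10 * 10       # neurodes at Z = 0
output_plane = 1 * 1         # neurode at Z = 1

# one multiply-accumulate per synaptic weight, per processed chart
weights = hidden_plane * input_plane + output_plane * hidden_plane
print(weights)               # 100000100, i.e. roughly 10^8 per chart
```

This is why the chart must be downsampled before it becomes the substrate's input hyperlayer.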
Thus it is important to determine and test what resolution provides enough geometrical detail to allow for predictions to be made, yet does not overwhelm the NN itself and the processing power available to the researcher. Once the length of the historical prices (horizontal axis on the price chart) and the resolution of the prices (vertical axis on the price chart) are agreed upon, the chart can then be generated for the sliding window of currency pair prices, producing the sliding chart. For example, Fig-3.4A shows a single frame of the chart whose horizontal and vertical resolution is 100 and 20 respectively, for the EUR/USD closing prices taken at a 15 minute time-frame (pricing intervals). This means that the chart is able to capture 100 historical prices, from N to N-99, where N is the current price, and N-99 is the price (99*15) min ago. Thus, if for example this chart's highest price was $3.00 and the lowest price was $2.50, and we use a vertical resolution of 20, the minimum discernible price difference (the vertical size of a pixel) is (3-2.50)/20 = $0.025. For comparison, Fig-3.4B shows a 10x10 chart resolution.
Fig-3.4 A. The recreated closing price chart with a 100x20 resolution; B. the recreated closing price chart with a 10x10 resolution for comparison. Both use the candle-stick charting style.
Similar to Fig-3.3, in Fig-3.4 the pixels of the background gray are given a value of -1, the dark gray a value of 0, and the black a value of 1. These candlestick charts, though of low resolution, retain most of the geometrical regularities and high/low pricing info of the original plot, with the fidelity increasing with the recreated chart's resolution. It is this input plane that can be fed to the substrate.
<This paper has been submitted for possible publication>

4. DEUS EX NEURAL NETWORK PLATFORM
The topology and the synaptic weights of neural networks need to be set and optimized for the tasks the NNs are applied to. One of the strongest approaches to the optimization of synaptic weights and topologies is through the application of evolutionary algorithms. The systems that evolve both the topology and the synaptic weights of neural networks are called Topology and Weight Evolving Artificial Neural Networks (TWEANNs). DXNN is a memetic algorithm based TWEANN system, and is the algorithm I will use to evolve the NN based currency trading agents in this paper. In this section we will briefly discuss DXNN's various features, and what makes it different from other TWEANNs.

...an augmented version of the stochastic hill-climbing algorithm.
Until: the fitness has failed to increase K number of times.
Convert the NN system back to its genotype, with the tuned synaptic weights and its fitness score.
3. After all the NNs have been given a fitness score, sort the NN agents in the population based on their fitness score, which is further weighted by NN size, such that smaller sized NNs are given priority.
4. Delete the bottom 50% of the population.
5. For each NN, calculate the total number N of offspring that it is allotted to produce, where N is proportional to the NN's fitness compared to the average fitness of the population, and to the average NN size, where the smaller and more fit NNs are allowed to create more offspring.
6. Create the offspring by first cloning the fit parent, and then applying to the clone T mutation operators, where T is randomly chosen between 1 and sqrt(Parent_TotNeurons), with uniform probability. Larger NNs will thus produce offspring which have a chance of being created through a larger number of applied mutation operators.
7. Compose the new population from the fit parents and their offspring.
Until: a termination condition is reached (max number of evaluations, time, or goal fitness).

What in DXNN is referred to as the "tuning phase" is the local search phase of the algorithm, which as noted is an augmented stochastic search algorithm. The topological mutation phase, by randomly choosing the number of mutation operators to apply to the clones of the fit parents when producing offspring, allows for a high variability of topological mutants to be created, improving the diversity of the population. The DXNN system uses the following list of mutation operators:
1. Add a new neuron.
2. Splice two neurons (choose 2 connected neurons, disconnect them, and then reconnect them through a newly created neuron; this also increases the depth of the NN).
3. Add an output connection to a randomly selected neuron, recurrent or feedforward.
4. Add an input connection to a randomly selected neuron.
5. Add a sensor.
6.
Add an actuator.

Thus through mutation operators 5 and 6, the offspring might incorporate new and still unused sensors and actuators, if those are available in the list of sensors and actuators for its specie/population. Indeed this particular part of DXNN acts as natural feature selection, and is especially useful for complex problems, in robotics, and in alife simulations. In alife in particular, as was shown in [1], the organisms were able to evolve and integrate new sensors over time. This can also be used in robotics, letting evolution decide what sensors and actuators are most useful to the evolving individual. But more importantly, this can be used in evolving algorithmic traders, where sensors can represent the different types of technical indicators. Furthermore, DXNN evolves both direct and indirect (substrate, in this case) encoded NNs. Its substrate encoded NNs further differ in their ability to evolve different types of coordinate based preprocessors, which it is hoped will allow them to deal with a more varied set of geometrical features, and which was another reason for choosing DXNN in this experiment. A further elaboration on this is discussed next.
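The generational loop and mutation scheme described above can be sketched as follows (a toy Python illustration, not the author's implementation; the agent representation, fitness function, and "topological mutation" are stand-ins):

```python
# Toy sketch of DXNN's memetic loop: a stochastic hill-climbing "tuning
# phase" for weights, then size-weighted selection, 50% culling, and
# offspring created with T mutations, T drawn from 1..sqrt(total neurons).
import math, random

random.seed(0)

def evaluate(agent):
    return -sum(w * w for w in agent["weights"])   # toy fitness stand-in

def perturb(agent):
    clone = {"weights": list(agent["weights"]), "neurons": agent["neurons"]}
    i = random.randrange(len(clone["weights"]))
    clone["weights"][i] += random.gauss(0, 0.5)
    return clone

def tune(agent, k_max=20):
    """Tuning phase: keep weight perturbations that improve fitness; stop
    after fitness has failed to improve k_max times in a row."""
    best_f, fails = evaluate(agent), 0
    while fails < k_max:
        trial = perturb(agent)
        f = evaluate(trial)
        if f > best_f:
            agent, best_f, fails = trial, f, 0
        else:
            fails += 1
    return agent, best_f

def epoch(population):
    scored = [tune(a) for a in population]
    # sort by fitness, breaking ties toward smaller NNs (size priority)
    scored.sort(key=lambda af: (af[1], -af[0]["neurons"]), reverse=True)
    parents = [a for a, _ in scored[: max(1, len(scored) // 2)]]  # cull 50%
    offspring = []
    for p in parents:
        child = perturb(p)                      # stand-in for clone
        T = random.randint(1, max(1, int(math.sqrt(p["neurons"]))))
        child["neurons"] += T                   # toy topological mutations
        offspring.append(child)
    return parents + offspring

pop = [{"weights": [random.uniform(-1, 1) for _ in range(4)], "neurons": 4}
       for _ in range(4)]
pop = epoch(pop)
```

The sketch only mirrors the structure of the loop; DXNN's real mutation operators act on network graphs, not on a neuron counter.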
4.3 DIRECT AND INDIRECT ENCODING
The DXNN platform evolves both direct and indirect encoded NN systems. The direct encoded NN systems were discussed in the above sections; the indirect encoded NNs use substrate encoding. Since it is the substrate that accepts inputs from the environment and outputs signals to the outside world, and the NN is used to set the synaptic weights between the neurodes in the substrate, the system not only has a set of sensors and actuators as in the standard NN system, but also a set of "coordinate_preprocessors" and "coordinate_postprocessors", which are integrated into the NN during evolution in a similar manner to how it integrates new sensors and actuators, only using the "add_coord_preprocessor" and "add_coord_postprocessor" mutation operators. In the standard substrate encoded NN system, the NN is given an input that is a vector composed of the coordinates of the neurode for which the synaptic weight must be generated, and the coordinates of the presynaptic neurode which connects to it. In DXNN, there are many different types of coordinate_preprocessors and coordinate_postprocessors available. The coordinate_preprocessors calculate values from the coordinates passed, before feeding the resulting vector signals to the NN. The coordinate_postprocessors post-process the NN's output, adding plasticity and other modifications provided by the particular substrate_actuator incorporated into the system through evolution. Whereas HyperNEAT, which popularized substrate encoding, feeds the CPPN (a NN that uses tanh and other types of activation functions) simply the coordinates of the presynaptic and postsynaptic neurodes, and in some variants the distance between the neurodes, DXNN uses the following list of coordinate_preprocessors. All of these are available to the substrate encoded NNs in this paper, and can be integrated and used by the substrate encoded NN as it evolves:
1. Cartesian Coordinates
2. Cartesian Distance
3. Convert to Polar (if substrate is a plane)
4. Convert to Spherical (if substrate is a cube)
5. Centripetal distance of neurode
6. Distance between coordinates
7. Gaussian processed distance between coordinates
This set of substrate sensors further allows the evolved NN to extract geometrical patterns and information from inputs. Also, DXNN allows the evolved neurons to use the following list of activation functions: [tanh, gaussian, sin, absolute, sgn, linear, log, sqrt], whether those neurons are used by standard direct encoded NNs, or by NNs used to calculate weights for the substrate encoded systems. This, the creator of DXNN hopes, will further improve the generality of the evolved NN systems, and it has indeed shown benefit when evolving standard neurocontrollers for the double-pole balancing benchmark, in which the problems were solved faster by NNs that used sinusoidal activation functions. For this benchmark I created a forex market simulator, where each interfacing NN is given a $300 starting balance. Each agent produces an output, with the output being further converted to -1 if it is less than -0.5, 0 if between -0.5 and 0.5, and 1 if greater than 0.5. When interacting with the forex simulator, -1 means go short, 0 means close position (or do nothing if no position is open), and 1 means go long (if currently shorting, then close the position, and then go long). The Forex simulator simulates the market using 1000 real EURUSD currency pair closing prices, stretching from 2009-11-5-22:15 to 2009-11-20-10:15, with a 15 min time frame (each closing price is 15 minutes from the other). The simulator uses a price spread of $0.00015. This 1000 point dataset is split into a training set and a testing/generalization set. The training set is the first 800 time steps, ranging from 2009-11-5-22:15 to 2009-11-18-8:15, and the testing/generalization data set is the immediately following 200 time steps, from 2009-11-18-8:15 to 2009-11-20-10:15. Finally, when opening a position, it is always done with $100, leveraged by x50 to $5000
(due to the use of a flat spread and buy/sell slots, the results can be scaled). A single evaluation of a NN is counted when the NN based agent has gone through all the 800 data points, or when its balance dips below $100. The fitness of the NN is its net worth at the end of its evaluation. Each evolutionary run lasts for 25000 evaluations, and each experiment is composed of 10 such evolutionary runs. In each experiment the population size was set to 10. Finally, in every experiment the NNs were allowed to use and integrate through evolution the following set of activation functions: [tanh, gaussian, sin, absolute, sgn, linear, log, sqrt]. The remainder of the parameters were set to the values recommended in [1]. In the experiments performed, the NNs used price sliding window vectors (for direct encoded NNs), and price charts (for recurrent substrate encoded NNs), as shown in Fig-5.1. The NNs were also connected to a sensor which fed them the vector signal [Position, Entry, PercentageChange], where Position takes the value of either -1 (currently shorting), 0 (no position), or 1 (currently going long), Entry is the price at which the position was entered (or 0 if no position is held), and PercentageChange is the percentage change in the position since entry. In this paper I present 13 benchmarks/experiments; each experiment is composed of 10 evolutionary runs from which its average/max/min are calculated. The experiments demonstrate and compare the performance of the PCI based NNs and the PLI based NNs. Both these input type experiments were tested with different sensors of comparable dimensionality.
5 PLI experiments: Experiments 1-5 were performed using the PLI NNs. Each experiment differed in the resolution of the sliding window input the NNs used. Each NN started with the sliding window sensor, and the vector [Position, Entry, PercentageChange]. The networks were allowed to evolve recurrent connections. These 5 experiments are: 1. [SlidingWindow5] 2. [SlidingWindow10] 3. [SlidingWindow20] 4. [SlidingWindow50] 5. [SlidingWindow100]
8 PCI experiments: Experiments 6-13 were performed using the PCI NNs. In these experiments each PCI based NN used a 4 dimensional substrate. The input hyperlayer to the substrate was composed of the price chart, the vector [CurrentPosition, EntryPrice, PercentageChange], and the substrate's own output, making the substrate Jordan Recurrent. The substrate architecture of these PCI NN based agents is shown in Fig-5.1. The reason for using Jordan Recurrent substrates is that standard feedforward substrates, which do not have recurrent connections, though achieving high fitness during training, did not generalize almost at all during my preliminary experimentation, with the highest achieved balance during generalization testing phases being $303 (a $3 profit), but
trading the currencies only if the profit gained before the trend changes will be greater than the spread.

Table 1. Benchmark/Experiment Results
[Table 1: for each sensor type — SlidWindow5/10/20/50/100 and ChartPlane5x10, 5x20, 10x10, 10x20, 20x10, 20x20, 50x10, 50x20 — plus the Buy & Hold and Max Possible baselines, the columns give the TrnAvg, TrnBst, TstAvg, TstStd, TstBst, and Wrst scores.]
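The simulator conventions described in this section (output discretized at ±0.5; $100 positions leveraged x50; a flat $0.00015 spread) can be sketched as follows; how the spread and leverage enter the profit calculation here is my own simplification, not the author's exact accounting:

```python
# Sketch of the trading interface: discretize the NN output to -1/0/1, then
# apply it to a (position, entry, balance) state. Profit accounting is a
# simplified assumption, not the paper's simulator.

SPREAD, LOT, LEVERAGE = 0.00015, 100.0, 50

def discretize(y):
    """NN output -> action: -1 short, 0 close/hold, 1 long."""
    return -1 if y < -0.5 else (1 if y > 0.5 else 0)

def step(state, signal, price):
    pos, entry, bal = state
    if pos != 0 and signal != pos:
        # close: leveraged relative price change, minus the spread cost
        change = (price - entry) if pos == 1 else (entry - price)
        bal += LOT * LEVERAGE * ((change - SPREAD) / entry)
        pos, entry = 0, 0.0
    if signal != 0 and pos == 0:
        pos, entry = signal, price          # go long (+1) or short (-1)
    return (pos, entry, bal)

state = (0, 0.0, 300.0)                     # $300 starting balance
state = step(state, discretize(0.9), 1.3000)  # open long
state = step(state, discretize(0.1), 1.3100)  # close, take profit
```

An agent's fitness is then simply the final balance (net worth) after walking the 800 training prices through `step`.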
[Fig-5.1: The Jordan Recurrent substrate architecture of the PCI NN based agents — the price chart and the [Position, Entry, PercentageChange] vector form the input hyperlayers, with the substrate's own output fed back in; substrate axes span -1 to 1.]
To test the generalization abilities of the evolved NN based agents, every 500 evaluations the best NN in the population at that time is applied to the 200 data point generalization test. Performing the generalization tests consistently throughout the evolution of the population not only tests the generalization ability of the best NN in the population, but also builds a plot of the general generalization capabilities of that particular encoding and sensor type. This will allow us to get a better idea of whether generalization drops off as the PCI and PLI NNs are over-trained, or whether it improves, or stays the same throughout the training process.
First, we note that the generalization results for both the PCI based NNs and the PLI based NNs show profit. The profits are also relatively significant, thus showing that the application of topology and weight evolving artificial neural network systems is viable within this field, and warrants significant further exploration in other time series analysis applications. For example, the highest profit reached during generalization, $67 out of the $128 possible when the agent started with $300, making $100 trades at 50x leverage, shows that the agent was able to extract 52% of the available profit. This is substantial, but we must keep in mind that though the agents were used on real world data, they were still only trading in a simulated market. It is only after these agents are allowed to trade in real time and using real money that it would be possible to say with certainty that these generalization abilities carry over, and for how many time-steps before the agents require re-training (in the experiment the agents are trained on 800 time steps, and tested on the immediately following 200 time steps). But looking at the table, the differences between the PLI and PCI based NNs do show some strange anomalies. The PCI based experiment results are particularly surprising. The first thing we notice is that the PCI NN generalization phase's worst performers are significantly worse than those of the PLI based NNs. The PCI based NNs either generalized well during an evolutionary run, or lost significantly. The PLI based NNs mostly kept close to 300 during the generalization test phase when not making profit. Also, on average the best of PCI are lower than those produced by PLI during generalization. The training fitness scores are comparable for both the PCI and PLI NNs. Another observation we can make is that higher price resolution (x20 vs. x10) correlates with the PCI NNs achieving higher profit during generalization testing. And finally, we also see that for both PLI and PCI, the generalization achieved by the 5 and 100 based price windows is highest. Based on Table-1, on the face of it, it would seem as if the original hypothesis about the effectiveness of PCI NNs, and their expected superior generalization, was wrong. But this changes if we now plot the best training fitness vs evaluations, and the best generalization test fitness vs evaluations, as shown in Fig-6.1.
'ig)."5 A2* ( A3* based LTraining and Testing 'itness Ts" %valuationsM
Though difficult to see in the above plot, we can make out that though, yes, the PLI NNs did achieve those generalization fitness scores, they were simply blips during the experiment, occurring a few times and then disappearing, diving back under 300. On the other hand, the PCI NNs produced lower profits on average when generalization was tested, but they produced those profits consistently; they generalized very well. This is easier to see if we analyze the graph of PLI Generalization Fitness vs. Evaluations, shown in Fig-6.2, and the PCI Generalization Fitness vs. Evaluations, shown in Fig-6.3.
Let us now analyze Fig-6.3, the generalization results for just the PCI based NN systems. The story here is very different. Not only are there more consistently higher than 300 generalization fitness scores in this graph, but they also last throughout the entire 25000 evaluations. This means that there are more agents that consistently generalize, reaching profitability, in this graph. This gives hope that the generalization ability of these PCI NN based systems will carry over to real world trading. When going through the raw data, it was usually the case that for every PLI NN based experiment, only about 1-2 in 10 evolutionary runs had a few agents which generalized for a brief while to scores above 320. On the other hand, when going through the PCI NN based experiments, 3-6 out of 10 evolutionary runs had agents generalizing consistently, with scores above 320. The more conservative PCI NNs are much more consistent. Their generalization stays, and if we look at the above figure, we see that those PCI NNs that have generalization fitness over 300 usually retain it throughout the evaluations. Thus both of the original hypotheses are confirmed: 1. Topology and Weight Evolving Artificial Neural Networks are indeed useful within this field, and their percentage profits are higher than those reported in the referenced papers which used backprop algorithms in the optimization of static topology based NNs. And 2. Geometrical pattern sensitive, price chart input based NNs do work; the NNs learned how to trade currencies based on geometrical patterns within the charts, looking at the trends and patterns, and were able to generalize much better. Thus this newly proposed method of evolving geometrical pattern sensitive currency trading agents is viable. But there were a few issues, and when experimenting with PCI NNs, problems did show up. First, the standard substrate hypercube [6] topologies are layer-to-layer feedforward topologies, and fully connected topologies. I did experiment using feedforward substrates, but they could not generalize in this domain, and their produced fitness scores were below 290. This might be due to their ability to memorize, but without any recurrent connections they lack the ability to generalize to previously unseen data; such substrate topology based anomalies will need to be analyzed in future work. Also, the fully connected substrate topologies did not generalize with as high a stability as the Jordan Recurrent (JR) topologies used in this paper. The results of using feedforward substrates where each neurode was recurrently connected to itself did not produce results better than a JR substrate either. I also noted that the low resolution substrates produce at times better results, and do so more often, than their high resolution counterparts.
If we look at SlidingWindow100, produced by plotting the best generalization scores from the 10 evolutionary runs of that experiment, we see that the score of 367 was achieved briefly, between roughly evaluation number 5000 and 10000. This means that there was most likely only a single agent, out of all the agents in the 10 evolutionary runs, that achieved this, and then only briefly so. On the other hand, we also see that the majority of the points are at 300, which implies that most of the time the agents did not generalize. And as expected, during the very beginning, evaluations 0 to about 3000, there is a lot more activity amongst all sliding window resolutions, which produce profits. The most stable generalization, and thus profitability, was shown by SlidingWindow5 and SlidingWindow100, and we know this because in those experiments there were a lot more fitness scores above 300, consistently. From this we can extract the fact that during all the experiments there are only a few agents that generalize well, and they do so only briefly, when it comes to PLI based NNs.
6%'%6%N3%:
[1] Sher, G. (preprint): "DXNN Platform: Shedding the Biological Inefficiencies", arXiv:1011.6022v3 [cs.NE].
[2] Sher, G.: "DXNN: evolving complex organisms in complex environments using a novel TWEANN system", GECCO '11: Proceedings of the 13th annual conference companion on Genetic and evolutionary computation.
[3] Stanley, K.O., Miikkulainen, R.: "Evolving neural networks through augmenting topologies", Evolutionary Computation 10(2) (2002) 99-127.
[4] Kassahun, Y., Sommer, G.: "Efficient reinforcement learning through evolutionary acquisition of neural topologies", in Proceedings of the 13th European Symposium on Artificial Neural Networks (ESANN 2005), Bruges, Belgium (2005) 259-266.
[5] Siebel, N.T., Sommer, G.: "Evolutionary reinforcement learning of artificial neural networks", International Journal of Hybrid Intelligent Systems 4(3): 171-183, October 2007.
[6] Gauci, J., Stanley, K.O.: "Generating large-scale neural networks through discovering geometric regularities", in Proceedings of the 9th annual conference on Genetic and evolutionary computation, pp. 997-1004, ACM, New York, NY (2007).
[7] Versace, M., Bhatt, R., Hinds, O., Shiffer, M.: "Predicting the exchange traded fund DIA with a combination of genetic algorithms and neural networks", Expert Systems with Applications 27(3), pp. 417-425.
[8] Yao, J., Poh, H.L.: "Predicting the KLSE Index Using Neural Networks", IEEE International Conference on Artificial Neural Networks.
[9] Kimoto, T., Asakawa, K., Yoda, M., Takeoka, M. (1990): "Stock market prediction system with modular neural networks", in Proceedings of IJCNN-90.
[10] Hutchinson, J.M., Lo, A.W., Poggio, T.: "A Non-Parametric Approach to Pricing and Hedging Derivative Securities Via Learning Networks", The Journal of Finance, Vol. XLIX, No. 3, July.
[11] Refenes, A.N., Bentz, Y., Bunn, D.W., Burgess, A.N., Zapranis, A.D.: "Financial Time Series Modelling with Discounted Least Squares Backpropagation", Neurocomputing, Vol. 14, pp. 123-138.
[12] Li, Y., Ma, W.: "Applications of Artificial Neural Networks in Financial Economics: A Survey", Hangzhou, Zhejiang, China.
[13] Li, R.-J., Xiong, Z.-B.: "Prediction of stock market with fuzzy neural networks", Proceedings of the Fourth International Conference on Machine Learning and Cybernetics, Guangzhou, 18-21 August 2005.
[14] Mandziuk, J., Jaruszewicz, M.: "Neuro-evolutionary approach to stock market prediction", Proceedings of the International Joint Conference on Neural Networks, Orlando, Florida, USA, August 12-17, 2007, IEEE.
[15] Soni, S.: "Applications of ANNs in Stock Market Prediction: A Survey", IJCET, 11-02-03.
[16] ALife through DXNN: http://www.youtube.com/user/DXNNsystem?blend=1&ob=5#p/a/u/1/HzsDZt8EO70
[17] White, H.: "Economic prediction using neural networks: the case of IBM daily stock returns", Department of Economics, University of California, San Diego.
[18] Senol, D.: "Prediction of stock price direction by artificial neural network approach", 2008.
[19] Yamashita, T., Hirasawa, K., Hu, J.: "Application of Multi-Branch Artificial Neural Networks to Stock Market Prediction", Proceedings of the International Joint Conference on Neural Networks, Montreal, Canada, 2005.
[20] Zhao, Q., Zhao, X., Duan, F.: "Prediction Model of Stock Prices Based on Correlative Analysis and Neural Networks", 2009 Second International Conference on Information and Computing Science, pp. 189-192, IEEE.
[21] Woolley, B.G., Stanley, K.O.: "Evolving a Single Scalable Controller for an Octopus Arm with a Variable Number of Segments", in Proceedings of the 11th International Conference on Parallel Problem Solving From Nature (PPSN 2010), New York, NY: Springer.