
CS 541 Term Project Bruce Graham 12/19/2002

Electronic Copy: http://www-personal.engin.umich.edu/~bftsplyk/CS541/Parrondo

Bruce Graham 1 4/26/07

Overview

The word paradox comes to us from Greek and Latin and means "contrary to expectation or opinion". Many of the systems studied in this course had this particular feature: something about them was "contrary to expectation". The particular paradox of interest had its origin just about forty years ago, when Feynman [1963] discussed a ratchet and pawl whose movement depended on converting collisions of Brownian particles with a paddlewheel into work, apparently in violation of the second law of thermodynamics. The apparent violation was resolved by allowing the apparently irreversible machine to be reversible. This example was qualitatively persuasive, but apparently not rigorous, and several people have tried to duplicate the analysis with varying degrees of success. The last piece of this tale was the challenge to create a discrete version of the Flashing Brownian Ratchet. Professor Juan M.R. Parrondo, a Professor of Physics at the Universidad Complutense de Madrid, met that challenge: he created a pair of losing games which, when played alternately, produced consistently positive gains. This then is Parrondo's paradox, which we will describe and attempt to explain.

The Flashing Brownian Ratchet has a resemblance to the experiment described by Feynman, in which a supposedly irreversible machine is shown to allow motion in either direction. The surprise is that it favors motion in the uphill direction, which would be "contrary to expectation", and thus arises the paradox.

Brownian motion, which is one of the effects of molecular motion, is often used to describe stochastic processes. A stochastic trajectory is characterized by its mean and its variance. Unlike chaotic systems, which have a sensitive dependence on initial conditions, stochastic processes would seem to have no dependence on initial conditions, for two reasons. The first is that starting the system over at the same initial condition cannot reproduce a stochastic trajectory: every run from the identical initial condition produces a different result. The second reason is that stochastic processes with the Markov property, which is a large class of processes, depend only on their present state and not on their history. That is, they have no memory. If S0 is the initial condition and S1 is the first successor, then once the process reaches S1 no future state can be affected by the initial condition.

The Flashing Brownian Ratchet

A ratchet is part of a mechanical device called a ratchet and pawl, which only allows motion in a single direction. An example of this machine can be seen on a tennis court, where a crank turns a toothed wheel, called the ratchet, on a shaft to apply tension to a cable holding up the net. When the torque on the crank is released, the shaft may go in reverse a bit, until the pawl drops into the valley between the teeth and prevents the ratchet from rotating in the reverse direction.

Brownian motion, an effect of molecular motion, is the random motion of a group of particles. This Brownian motion has mean zero and variance σ². Geometric Brownian motion, the underlying basis for the Black-Scholes Partial Differential Equation (a member of the family of parabolic partial differential equations), has a non-zero mean or drift and a variance σ². For the Flashing Ratchet we assume the mean-zero Brownian motion, and that the particles are subject to a potential. This potential has a stationary component and a time-varying component. To create the Flashing Ratchet, the time-varying component of the potential is switched "ON" and "OFF" at a periodic rate.

Figure 1 is a crude drawing of a Java simulation on Franz-Joseph Elmer's website (http://monet.physik.unibas.ch/~elmer/bm/index.html) which exhibits the behavior of the Flashing Brownian Ratchet.

Figure 1 – Flashing Brownian Ratchet. [Labels in the drawing: "High Potential", "Low Potential", "Brownian Particle", Ratchet "ON", Ratchet "OFF".]

When the ratchet is "OFF", the Brownian particle, represented by the ball, tends to roll downhill from the region of "High Potential" to the region of "Low Potential". When the ratchet is switched "ON", depending on the position and velocity of the particle, it may roll into a valley on the left or a valley on the right of its current position. The claim is: "if it rolls to the left enough times, it will march uphill". This is "contrary to expectation", but is supported by the mathematics and by the simulation.

Game Theory

The development of Game Theory is described in the seminal work of von Neumann and Morgenstern [1944], which they fully expected to be useful in designing better economic and social policies. Their work was popularized by Williams [1954] under contract from the RAND Corporation. Some other titles in the series were "Air War and Emotional Stress" and "Soviet Attitudes toward Authority". The game of interest at that time was "GLOBAL THERMONUCLEAR WAR". The essential question was: could you win one? That we are still here probably means that the answer was no.

The elements of Game Theory are easy to explain. Each game has two or more players. These players need not be persons; rather, we count sets of opposing interests. An example is a five-handed game of stud poker where two players have formed a coalition, and thus the game is now a four-player game. Each player has two or more choices, which can be selected in each round of the game. After each player makes his respective choice, a payoff is revealed according to the rules of the game. Play continues with another round if possible and desirable.

The payoff of a two-player game is represented with an ordinary matrix. The two players, called A and B, have an m×n matrix M. Player A is assigned to the rows of M, and player B to the columns. A's choice is an integer row number "i" in the range [1..m], and B's choice is a column number "j" in the range [1..n]. The payoff for these choices on this round is just the element of M at row i and column j, or Mij. By convention, a positive number means that player A "receives" the payoff from player B, and a negative number means that player B "receives" the payoff from player A. This means that A's winnings are equal to B's losses, which is the description of a zero-sum game. A non-zero-sum game can be found in the stud poker game where, by agreement, the winner might have to contribute 10% of his winnings for drinks and refreshments. There might also be a member of law enforcement who requires a gratuity to ignore gambling on his beat.

The way in which players make their choices over multiple rounds of the game is called a strategy. Williams [1954] defines a strategy as "a complete plan that cannot be upset". The sense of this definition is that a player who can find, and stick to, his optimal strategy cannot be at a disadvantage. He may still lose, but his losses will be minimized. A player who cannot find, does not know, or deviates from his optimal strategy gives his opponent an advantage he would not normally enjoy; any deviation from the optimal strategy will result in larger than expected losses. In some cases a player will have only a single optimal choice on each round; when such a strategy exists, it is called a pure strategy. When both players have a pure strategy, the game is said to have a saddle point. Games with saddle points sound trivial and boring, except that our example of Global Thermonuclear War may have been such a game. In other cases a player will alternate his choices using some random device, such as cards, coins, dice, or a random number generator chosen to give the proper distribution. This is called a mixed strategy.

Example – The River Tale

This example is taken from Williams [1954]. Steve is approached by a stranger who suggests they match coins. Steve says that it is too hot for violent exercise. "Well then," says the stranger, "let's just lie here and speak the words 'heads' or 'tails'—and to make it interesting I'll give you $30 when I call 'tails' and you call 'heads', and $10 when it's the other way around. And—just to make it fair—you give me $20 when we match." Warned by the environment (they are on a Mississippi riverboat), Steve suspects he should have the man arrested rather than play with him.

Table 1 – Payoff Matrix for The River Tale (payoffs to Steve; losses in parentheses)

                      Steve
                   H          T
Stranger   H    ($20.00)    $10.00
           T     $30.00    ($20.00)

Even if Steve follows his optimal strategy, his expected loss per game is $1.25, and that optimal strategy is to call 'heads' and 'tails' in a 3:5 ratio. To see how these numbers are obtained for Steve's choices, we subtract an element of column 1 from an element of column 2, and write the absolute value of the result next to the opposite row:

Row 1: |10 − (−20)| = 30
Row 2: |(−20) − 30| = 50

This tells Steve that he must use a mixed strategy, calling 'heads' and 'tails' in the 3:5 ratio. The expected value of the game is just

E[game] = (30 · $30.00 + 50 · (−$20.00)) / (30 + 50) = −$1.25

Similarly, the Stranger's optimal strategy, which he presumably already knew, is to call 'heads' and 'tails' in a 5:3 ratio.
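Williams's procedure can be verified mechanically. The following Python sketch is my own illustration (not from the paper) of the oddments method for a 2×2 zero-sum game with no saddle point; here the matrix rows are Steve's calls and the entries are the payoffs to Steve:

```python
def solve_2x2(M):
    """Williams's oddments method for a 2x2 zero-sum game with no
    saddle point. M holds payoffs to the row player. Returns the row
    player's oddments (row 1 : row 2) and the value of the game."""
    (a, b), (c, d) = M
    odd1 = abs(c - d)   # oddment for row 1 comes from row 2's entries
    odd2 = abs(a - b)   # oddment for row 2 comes from row 1's entries
    # Mixing the rows in this ratio equalizes the payoff against
    # either column; evaluate against column 1.
    value = (odd1 * a + odd2 * c) / (odd1 + odd2)
    return (odd1, odd2), value

# Rows = Steve's calls (H, T); columns = the Stranger's calls (H, T).
steve = [[-20, 30], [10, -20]]
odds, value = solve_2x2(steve)
print(odds, value)   # (30, 50) -1.25
```

The oddments 30:50 reduce to the 3:5 heads-to-tails ratio from the text, and the value −1.25 is Steve's expected loss of $1.25 per round.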

Parrondo's Games

Prior to inventing the paradoxical games, Parrondo published in 1996 a paper which criticized Feynman's analysis of the Brownian ratchet. Considering Feynman's iconic status in the field of Physics, this took some courage. The paradoxical games were the result of a challenge to create a discrete analog to the Brownian Ratchet. The games are described in a paper submitted in 2002 and also on his website.

Parrondo describes two games, A and B. To keep track of the progress of the games, we presume that Player A has some amount of capital with which to play. If Player A has an amount of capital called X(t), and if <X(t)> is the average amount of capital, then a game is winning if <X(t)> is a monotonically increasing function of t. A game is losing if <X(t)> is a monotonically decreasing function of t. A game is fair if <X(t)> is constant.

Game A is a two-player zero-sum game, but neither player chooses a strategy; the strategy for both players is effectively chosen by an unfair coin. Game A is modeled by Player A flipping a slightly unfair coin while Player B does nothing. Player A wins $1.00 from Player B if the coin comes up heads, and loses $1.00 to Player B if it comes up tails. Player B has no role in this game except as a bank for Player A's wins or losses. The coin is unfair: P(heads) is just a bit less than ½, while P(tails) is just a bit more than ½. In several papers, web pages, and simulations, a "bit more" or a "bit less" is taken as 0.005. With these probabilities we can calculate the expected value of Game A as follows:

P(head) = 1/2 − 0.005 = 0.495
P(tail) = 1/2 + 0.005 = 0.505

E[Game A] = (0.495 · $1.00) + (0.505 · −$1.00) = −$0.01

Thus we can see that Game A is a losing game.

Game B is more complicated to describe and to analyze. In Game B there are two coins, both unfair, but in different directions and by different amounts. To keep these coins separate from the coin in Game A, we label the Game A coin "coin #1"; the Game B coins will be labeled "coin #2" and "coin #3". As in Game A, Player A will flip one of the two coins and win $1.00 if it comes up heads, and lose $1.00 if it comes up tails. Player B again does nothing. Which of the two coins does Player A flip? The answer depends on the amount of capital Player A presently holds: if his capital X(t) is a multiple of 3 he uses coin #3, otherwise he uses coin #2. Another way of saying X(t) is a multiple of three is to write:

X(t) mod 3 = 0
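Game A's expected value, and the resulting downward drift of the capital X(t), can be checked in a few lines. This sketch is my own (in Python, rather than the MATLAB used later in the paper):

```python
import random

eps = 0.005
p_head = 0.5 - eps            # coin #1: P(heads) = 0.495

# Expected payoff per round of Game A.
expected = p_head * 1.00 + (1 - p_head) * (-1.00)
print(round(expected, 4))     # -0.01

# Simulate: the capital X(t) drifts down by about 0.01 per round.
rng = random.Random(42)
x = 0
rounds = 100_000
for _ in range(rounds):
    x += 1 if rng.random() < p_head else -1
print(x / rounds)             # close to -0.01
```

The simulated drift per round fluctuates from run to run (its standard error here is about 0.003), but it hovers around the theoretical −$0.01.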

Now the key feature of the whole paradox is the magnitude and direction of the unfairness of the two coins. For coin #2:

P(head) = 3/4 − ε,  P(tail) = 1/4 + ε,  ε << 1, typically ε = 0.005

This is good for Player A. But for coin #3:

P(head) = 1/10 − ε,  P(tail) = 9/10 + ε,  ε << 1, typically ε = 0.005

This is bad for Player A. So clearly, if Player A had a choice in playing Game B, he would like coin #2 and would not care for coin #3; the pure strategy of choosing coin #2 would be obvious. But the choice does not depend on Player A's wishes; it depends on the present state of his capital modulo 3. Game B does not care how Player A got to this level of capital, or even the magnitude of his capital, but only whether it is a multiple of three.

A naïve analysis of Game B might conclude that coin #3 would be used one third of the time and coin #2 two thirds of the time. This would be wrong! If we use coin #3, it is likely we will lose, and X(t) mod 3 goes from 0 to 2 because we just lost $1.00. On the next round we use coin #2, and it is likely we will win, so X(t) mod 3 goes from 2 back to 0. If we ever find ourselves in the state where X(t) mod 3 equals 1, we use coin #2; it is likely that we win, and we go to X(t) mod 3 = 2, where we spend the bulk of our time bouncing back and forth between states 0 and 2.

Game B needs to be analyzed as a Discrete Markov Chain. We first define a state variable Y(t) = X(t) mod 3, which takes values in the set {0,1,2}. For each state there is a probability of being in that state. These three probabilities can be represented with a vector as follows:

    V = [ P(Y(t)=0), P(Y(t)=1), P(Y(t)=2) ]ᵀ

Given the probability of being in a given state, we can use the rules for Game B to compute the probabilities of being in a particular state after each round of Game B. For each state we can be in, we know which coin must be flipped; if we know which coin is flipped, we know the probability of heads and tails; and finally we know the state transition rules:

heads: state = (state + 1) mod 3
tails: state = (state − 1) mod 3

Playing Game B is thus equivalent to multiplying our probability vector by some constant transformation matrix each round. How can we construct this matrix?
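The bouncing between states 0 and 2 can be confirmed by direct simulation before doing any matrix algebra. This Python sketch is my own (the paper's own listing, later, is in MATLAB); it plays Game B and tallies how often each state Y(t) occurs:

```python
import random

def simulate_game_b(rounds, eps=0.005, seed=1):
    """Play Game B, counting visits to each state Y(t) = X(t) mod 3."""
    rng = random.Random(seed)
    x = 0                       # Player A's capital
    visits = [0, 0, 0]
    for _ in range(rounds):
        y = x % 3
        visits[y] += 1
        # Coin #3 in state 0, coin #2 otherwise.
        p_head = 1/10 - eps if y == 0 else 3/4 - eps
        x += 1 if rng.random() < p_head else -1
    return [v / rounds for v in visits]

freq = simulate_game_b(200_000)
print(freq)  # roughly [0.38, 0.15, 0.46]: far from the naive [1/3, 1/3, 1/3]
```

State 0, where the bad coin #3 is used, is occupied well over a third of the time, which is exactly why Game B loses.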

Note in particular that we cannot stay in the present state. Each column in our transition matrix represents the probability of going from a present state to each of the two possible successor states. We denote this matrix with a capital pi:

        [ 0          1/4 + ε    3/4 − ε ]
    Π = [ 1/10 − ε   0          1/4 + ε ]
        [ 9/10 + ε   3/4 − ε    0       ]

The properties of the probability vector are determined by the algebraic properties of the Π matrix. In this example the principal eigenvalue of the matrix is 1, which means the system has a stationary solution. The other eigenvalues are a complex conjugate pair with a magnitude less than one, which means things cannot diverge. As epsilon is varied, the eigenvalues and eigenvectors will change and imply different behaviors; I think this may be related to the transient behavior, but I was unable to confirm this.

Now, given an initial condition for the probability vector, we can iterate as many rounds of Game B as we like and see what happens. When we do this we notice three things: there is an initial transient in the probability vector; after the transients disappear, all three probabilities approach stationary values; and there is no dependence on initial conditions. At some point Π·V = V, and further multiplications by Π have no effect on V.

Game B – Theoretical Analysis

To analyze Game B we need to compute the probability of using coin #3, and then combine the probability of winning with coin #3 and the probability of winning with coin #2. The probability of using coin #3, P0, is the same as the probability that Y(t) = 0. At the stationary solution, with ε = 0.005:

P0 = 0.3846 − 0.2003·ε = 0.3846 − 0.2003·0.005 = 0.3836

To compute the chance of winning at Game B we use the following expression:

Pwin = (1 − P0)·(3/4 − ε) + P0·(1/10 − ε) = 0.5 − 0.87·ε = 0.5 − 0.87·0.005 = 0.4956

Game B is a losing game for all ε > 0.

The following MATLAB program multiplies an initial vector y by the transition matrix M as many times as desired, and keeps track of the evolution of the probability vector in ym by concatenating each new vector onto the previous ones. The array ym will have three rows and a number of columns equal to nsteps+1, and can be easily plotted in MATLAB.

% dmc.m – Explore Discrete Markov Chains
function ym = dmc(y, nsteps)
  global epsilon
  if (isempty(epsilon))
    epsilon = 0.005;
  end
  global M
  if (isempty(M))
    M = [0,           0.25+epsilon, 0.75-epsilon; ...
         0.1-epsilon, 0,            0.25+epsilon; ...
         0.9+epsilon, 0.75-epsilon, 0];
  end
  ym = y;
  for n = 1:nsteps
    y = M*y;
    ym = [ym y];
  end

Running this program confirms that P(Y(t)=0) = 0.3836, in agreement with the theoretical result. It also illustrates that Discrete Markov Chains go to a stationary condition from any initial condition, so they cannot be chaotic! The initial condition used here was [1/3, 1/3, 1/3].
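For readers without MATLAB, the same fixed-point calculation can be reproduced in a few lines of Python. This sketch is my own; it iterates Π on a uniform initial vector and then evaluates the winning probability from the stationary state:

```python
eps = 0.005
# Column j of PI holds the transition probabilities out of state j:
# heads moves j -> j+1 (mod 3), tails moves j -> j-1 (mod 3).
PI = [[0.0,        1/4 + eps, 3/4 - eps],
      [1/10 - eps, 0.0,       1/4 + eps],
      [9/10 + eps, 3/4 - eps, 0.0]]

v = [1/3, 1/3, 1/3]                # any initial condition works
for _ in range(200):               # the transient dies out quickly
    v = [sum(PI[i][j] * v[j] for j in range(3)) for i in range(3)]

p0 = v[0]                          # probability of using coin #3
p_win = (1 - p0) * (3/4 - eps) + p0 * (1/10 - eps)
print(round(p0, 4))                # 0.3836
print(round(p_win, 3))             # 0.496, just under 1/2: a losing game
```

Starting from any other probability vector gives the same stationary values, confirming the no-dependence-on-initial-conditions observation above.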

The Paradox

If we alternate the playing of Game A with the playing of Game B, in much the same fashion that the Brownian Ratchet is flashed ON and OFF, the result is a net gain in capital. Even more surprising than the net gain in capital from two losing games is that we can alternate the games in a wide variety of ways; we can even choose which game to play at random, and the result is the same. An alternation is written as an ordered pair, so [2,2] would be two rounds of Game A followed by two rounds of Game B. The following simulation graph from Parrondo's website demonstrates this:

[3,2] is 3 games A followed by 2 games B
[2,2] is 2 games A followed by 2 games B
[4,4] is 4 games A followed by 4 games B
"random" is playing A and B in random order

The effect of the Π matrix can be seen in the transient at the beginning of playing Game B alone; the transient is pretty much gone after 20 rounds. The paradox occurs when, for a given value of ε and an alternation strategy, the probability of winning Game B is less than ½ while the probability of winning the alternation of Game A and Game B is greater than ½. Parrondo's theoretical results, for ε = 0.005, are summarized as follows:

PwinB  = 0.4956
PwinAB = 0.5079
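The PwinAB figure for the "random" alternation can be derived with the same Markov-chain machinery: when Game A or Game B is chosen by a fair coin flip each round, the effective probability of heads in state y is the average of the two games' head probabilities. This Python sketch is my own construction:

```python
eps = 0.005
p_a = 1/2 - eps                                    # Game A's coin #1
p_b = {0: 1/10 - eps, 1: 3/4 - eps, 2: 3/4 - eps}  # Game B's coins by state

# Effective head probability per state under random alternation.
p = {y: 0.5 * p_a + 0.5 * p_b[y] for y in range(3)}

# Transition matrix: heads sends y -> y+1 (mod 3), tails y -> y-1 (mod 3).
PI = [[0.0] * 3 for _ in range(3)]
for j in range(3):
    PI[(j + 1) % 3][j] = p[j]
    PI[(j - 1) % 3][j] = 1 - p[j]

v = [1/3, 1/3, 1/3]
for _ in range(200):                # iterate to the stationary vector
    v = [sum(PI[i][j] * v[j] for j in range(3)) for i in range(3)]

p_win = sum(v[y] * p[y] for y in range(3))
print(round(p_win, 4))              # 0.5079
```

Mixing in Game A weakens the bad coin's grip on state 0, pushing the overall winning probability above ½: two losing games combine into a winning one.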

Financial Application

Parrondo's paradoxical result led some investigators to wonder if artificial traders could succeed in producing a net gain in capital by alternately playing two losing games in the marketplace. Boman, Johansson, and Lybäck investigated this possibility, but had little success in identifying the games to be played. They chose for their data 10 stocks over 252 trading days, starting March 1, 2000, in a declining Swedish stock market. Each strategy started with the same capital, and one trade per day was allowed, in which the strategy was allowed to decide what to sell and what to reinvest in. Also present in their experiment was the passing of information messages, which the artificial traders could act upon. A summary of the strategies from their paper is shown in the table below.

Strategy                       Description
Buy-and-Hold (BaH)             The buy-and-hold strategy acts as a control strategy that trades no stocks
Random                         This strategy trades stocks randomly
Insider                        The insider gets quality ex ante information about some stocks, on which it may react before the market
Buy low, sell high (BLSH)      This Markovian value investor strategy monitors whether the stock has increased or decreased in value during the latest time interval. If the value has increased, it sells the stock, and if the value dropped, it buys the stock
Buy low, sell random (BLSR)    Like BLSH, except BLSR randomly chooses what stock to sell
Buy random, sell high (BRSH)   Like BLSH, except BRSH randomly chooses what stock to buy
Buy high, sell low (BHSL)      This Markovian trend investor strategy is the opposite of BLSH

Each strategy had an initial capital of 10,000.00 units. The final results are reproduced in the table below.

Strategy    Value
BLSR        6110.40
Random      5338.60
BaH         5383.88
BLSH        5524.71
BHSL        5140.15
BRSH        5202.29

From their conclusions: "For purposes of prediction, our results are almost useless, since we cannot in general design in advance a portfolio of stocks, the prices of which are all receding. In rare circumstances, such as during the period of almost universally receding prices of IT stocks in the autumn of 2000, ex ante portfolios could relatively easily be assembled, and then Parrondo variations would indeed be an interesting alternative to buy-and-hold." And their objective: "We intend to pursue the important question of strategy programming for artificial traders, as we feel that such programming will be of increased importance in the future. By replacing our unrealistic assumptions one by one, we hope to achieve our ultimate goal of reasonably efficient strategies on real-time markets with nonlinear dynamics."

Conclusions

If Boman, Johansson, and Lybäck succeed in their objective, implement their system on a small scale, and nobody notices, they just might enjoy a comfortable retirement. I doubt very much if they would publish that result. As soon as they succeed and their success is noticed, other players will try to duplicate it, to the disadvantage of the artificial traders and their sponsors. I also believe that market makers and floor traders who knew that there were artificial traders in play would devise means to take advantage of their weak points. Imitation, regulators forbidding the use of artificial traders, or some other factor will undoubtedly upset the delicate balance required to make the Parrondo strategies work.

Our coursework covered many types of complex behavior, each with distinct properties. In this project the focus was on stochastic behavior, which is time varying and random. As complex as stochastic processes are, they are definitely not chaotic. Instead of having a sensitive dependence on initial conditions, in many cases they have no dependence on them at all.

References

Feynman, R.P., Leighton, R.B., and Sands, M., The Feynman Lectures on Physics, Volume I, pp. 46-1 to 46-9, Addison-Wesley, 1963.

von Neumann, J., and Morgenstern, O., Theory of Games and Economic Behavior, 1944.

Williams, J.D., The Compleat Strategyst, McGraw Hill, 1954.

Parrondo, J.M.R., and Español, P., Criticism of Feynman's Analysis of the Ratchet as an Engine, American Journal of Physics, 64, 1125 (1996).

Parrondo, J.M.R., and Cisneros, R., Juegos paradójicos y máquinas térmicas brownianas (Paradoxical games and Brownian thermal engines; submitted in Spanish), February 2000.

Harmer, G.P., and Abbott, D., Brownian ratchets and Parrondo's games, Chaos, 11, #3, pp. 705-714, September 2001.

Boman, M., Johansson, S.J., and Lybäck, D., Parrondo Strategies for Artificial Traders, submitted to World Scientific, April 26, 2002.

URLs

http://seneca.fis.ucm.es/parr/
http://monet.physik.unibas.ch/~elmer/bm/index.html
http://www-personal.engin.umich.edu/~bftsplyk/CS541/Parrondo

