Random Walk
● Requirement for a probabilistic approach to physics
● Examples of random walk in physical systems
● Random walk in one dimension
● Physical interpretation and definition of probability
● Probability for large N
Methods of statistical mechanics and random walk

Classical and quantum mechanics predict the future position, momentum, and wave function of a system from its initial conditions.

These methods are successful in predicting the properties of systems with mild coupling, where the interactions between the constituents are limited.
Now look at the collective properties of a collection of N particles:

Temperature T
Pressure P
Heat capacity C_v
Compressibility χ_v
Mean energy per particle ⟨e_i⟩ = E/N

These properties emerge from the positions and momenta of the particles, followed over the number of particles N or over time t at discrete intervals.
Methods of statistical mechanics and random walk

All the constituent particles contained in a volume V in a chamber execute random variations of their variables, such as

{r_1, r_2, ..., r_N ; p_1, p_2, ..., p_N}

Each variation is unbiased, like the outcomes H and T of a fair coin toss.
Whether the second toss gives a head is highly unpredictable. But in a coin toss experiment the nature of the result changes when the tossing is repeated: the fraction of heads settles down with the number of tosses N, and

N → ∞ ,  P(H) = P(T) = 1/2

(Figure: number of heads versus number of tosses N.)
This is how random walks occur in nature: by random variation of the coordinates of particles in a volume V.

Consider the case where there are only two states for the coordinate; the tossing of a coin is an example. Each event in a random walk is completely unpredictable.

H T T T H H H T T    with H = 1, T = 0: a sequence of random events

(Figure: outcomes of tosses 1-9 plotted step by step, showing the final displacement.)
Diffusion is the probability model for random walks repeated many times.

The probability profile of this walk for many walkers (as in ink spreading in water) is shown below for times t_1 < t_2 < t_3 < t_4.

https://physics.stackexchange.com/questions/552811/diffusion-of-ink-in-water
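The spreading of the walker cloud can be checked with a short simulation. This is an illustrative sketch, not part of the slides: the function name and parameters are assumptions. Each of many independent walkers takes unit steps left or right, and the root-mean-square spread of the cloud grows roughly as the square root of the elapsed time, as in the ink picture.

```python
import random

# Illustrative sketch (names and numbers assumed, not from the slides):
# many independent walkers, as in ink spreading in water.
random.seed(0)

def rms_spread(n_walkers, n_steps):
    """Root-mean-square displacement of n_walkers after n_steps unit steps."""
    positions = [sum(random.choice((-1, 1)) for _ in range(n_steps))
                 for _ in range(n_walkers)]
    mean = sum(positions) / n_walkers
    return (sum((x - mean) ** 2 for x in positions) / n_walkers) ** 0.5

for t in (100, 400, 1600):                   # t1 < t2 < t3
    print(t, round(rms_spread(5000, t), 1))  # spread grows roughly as sqrt(t)
```

Quadrupling the time roughly doubles the spread, which is the signature of diffusive behavior.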
A simple case of a random walk is in one dimension.

H T T T H H H T T

Let H = → be the right move and T = ← the left move. A single walk generated from this coin toss experiment takes the walker through → ← ← ← → → → ← ← to its final destination.

A single walk is not repeated again in another set of coin toss experiments.
Let us start with the most ideal case: the walker moves either right or left at random, with probabilities p and q respectively.

Consider a single particle/walker executing a random walk along a line, starting from the origin:

-2  -1  0  1  2

Let each step of the walker be of length l, and let the total number of steps be N. In such a travel the total displacement is

x = m l

in terms of the (fundamental) length l. The integer m satisfies

−N ≤ m ≤ N

since the maximum length a walker could have traveled in the positive or negative direction is N steps.
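The setup above can be sketched in a few lines. This is a minimal illustration (the function name and defaults are assumptions, not prescribed by the slides): one N-step walk is built from coin tosses, with H a right step (+1) and T a left step (−1), and the net displacement is x = m l with −N ≤ m ≤ N.

```python
import random

# A minimal sketch (assumed names): one N-step walk from coin tosses.
# H = right step (+1), T = left step (-1); displacement x = m*l.
random.seed(1)

def single_walk(N, p=0.5, l=1.0):
    """Return (m, x): net step count m and displacement x = m*l."""
    m = sum(1 if random.random() < p else -1 for _ in range(N))
    return m, m * l

m, x = single_walk(1000)
print(m, x)   # m is even here, since m = n1 - n2 has the parity of N
```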
Let the total number of right steps be n_1 and the number of left steps be n_2.

Let all steps be statistically independent (the decision on the next move does not depend on the previous move).

Let the move to the right be taken with probability p; the probability of moving to the left is then q = 1 − p, so q + p = 1.

The probability for a particular sequence of n_1 right steps and n_2 left steps is p^n_1 q^n_2.

The number of distinct sequences in which this probability is achieved is N!/(n_1! n_2!).
When the particle is at the origin (0 steps), (p + q)^0 = 1.

For a two-step random walk the distribution of probability is

(P(→) + P(←))^2 = (p + q)^2 = 1
 = P^2(→) + P(→)P(←) + P(←)P(→) + P^2(←)

The middle terms show the multiplicity: the same probability is redistributed over distinct sequences.

Probability here is interpreted as a frequency: the number of times a head/tail is obtained divided by the total number of trials.
For a larger number of steps the probability redistributes further; for N = 3,

(P(→) + P(←))^3 = (p + q)^3 = p^3 + 3 p^2 q + 3 p q^2 + q^3 = 1

The coefficients are multiplicities: sequences that differ only in order carry the same probability, just as two drawn balls of the same color give the same outcome regardless of which ball is which.
In the random walk problem in one dimension, a walk of N total moves with n_1 right moves and n_2 left moves can be arranged in

N!/(n_1! n_2!)

distinct ways.
This counts the number of ways a particular set of steps can be arranged. The total probability that a random walk of N steps contains n_1 right steps, W_N(n_1), is therefore

W_N(n_1) = [N!/(n_1! n_2!)] p^n_1 q^n_2

This distribution can easily be identified as an expansion term in the binomial series

(p + q)^N = Σ_{n=0}^{N} [N!/(n!(N−n)!)] p^n q^(N−n)

The probability of obtaining a total displacement m is obtained from the probability of making n_1 displacements towards the right:

W_N(n_1) = P_N(m)

It is more meaningful to express the probability in terms of the total displacement, by the change of variables

n_1 + n_2 = N ,  m = n_1 − n_2
n_1 = (N + m)/2 ,  n_2 = (N − m)/2 = N − n_1

As n_1 runs from 0 to N, m runs from −N to N.
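The binomial distribution and the change of variables to m can be checked numerically. This is a sketch (the function names W and P are assumptions): it verifies the normalization Σ W_N(n_1) = (p+q)^N = 1 and the mapping P_N(m) = W_N((N+m)/2).

```python
from math import comb

# Sketch (assumed names) of W_N(n1) = N!/(n1! n2!) p**n1 q**n2 with
# n2 = N - n1, and P_N(m) = W_N((N+m)/2) for the displacement m.
def W(N, n1, p):
    q = 1.0 - p
    return comb(N, n1) * p**n1 * q**(N - n1)

def P(N, m, p):
    if (N + m) % 2 or not -N <= m <= N:
        return 0.0          # m must have the same parity as N
    return W(N, (N + m) // 2, p)

N, p = 20, 0.5
total = sum(W(N, n1, p) for n1 in range(N + 1))
print(total)                      # normalization: (p+q)**N = 1
print(P(N, 0, p), W(N, 10, p))   # m = 0 corresponds to n1 = 10
```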
(Figure: displacement versus number of steps.)
Random variables: definition and a few examples

A random variable X assigns a value u to each outcome of a sample space; the simplest case is X(u) = u.

The average of a function f(u) over the distribution P(u_i) is

⟨f(u)⟩ = Σ_i P(u_i) f(u_i) / Σ_i P(u_i)

Other notable properties of the averages of random variables: the sum of the averages of two functions equals the average of their sum.

Another useful average is the second moment of u about its mean, known as the dispersion:

⟨(Δu)^2⟩ = Σ_{i=1}^{M} P(u_i)(u_i − ⟨u⟩)^2 ≥ 0

This gives the spread of the distribution around the mean; it may be expressed as

⟨(u − ⟨u⟩)^2⟩ = ⟨u^2⟩ − ⟨u⟩^2

Since this quantity is always positive, ⟨u^2⟩ ≥ ⟨u⟩^2. This may be generalized to the higher moments ⟨u^n⟩; for f(u) = u the first moment is the mean itself.
Distributions that differ in their moments

(Figure: the red curve has the same mean as the blue and black curves but a different third moment, so it is asymmetric with respect to the mean; the green curve has the same mean as the blue, black, and red curves but differs in all higher moments.)
Mean values of the random walk problem

The probability distribution is given by

W_N(n_1) = [N!/(n_1! n_2!)] p^n_1 q^n_2 = [N!/(n_1!(N−n_1)!)] p^n_1 q^(N−n_1)

since n_2 = N − n_1.

The normalization condition Σ_{i=1}^{M} P(u_i) = 1 can be verified from the binomial series expansion:

Σ_{n_1=0}^{N} [N!/(n_1!(N−n_1)!)] p^n_1 q^(N−n_1) = (p + q)^N = 1^N = 1

where 1 = p + q.

The mean number of steps to the right is obtained from the relation ⟨f(u)⟩ = Σ_{i=1}^{M} f(u_i) P(u_i):

⟨n_1⟩ = Σ_{n_1=0}^{N} n_1 W(n_1) = Σ_{n_1=0}^{N} n_1 [N!/(n_1!(N−n_1)!)] p^n_1 q^(N−n_1)

Evaluation of this sum is non-trivial, but possible from the relation

n_1 p^n_1 = p (∂/∂p) p^n_1
Substituting this relation we get

⟨n_1⟩ = Σ_{n_1=0}^{N} n_1 W(n_1) = Σ_{n_1=0}^{N} [N!/(n_1!(N−n_1)!)] [p (∂/∂p) p^n_1] q^(N−n_1)
      = p (∂/∂p) (p + q)^N
      = N p (p + q)^(N−1)

Using the equation 1 = p + q,

⟨n_1⟩ = N p

This is physically meaningful, since it shows that the average number of right steps equals the probability to move toward the right times the total number of steps.

Similarly the average number of left steps is ⟨n_2⟩ = N q.

The sum of the average moves to the left and right is ⟨n_1⟩ + ⟨n_2⟩ = N.

The net displacement of the particle is ⟨m⟩ = ⟨n_1⟩ − ⟨n_2⟩ = N(p − q).

If the probabilities of the left and right moves are equal, the net displacement is zero.
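The results ⟨n_1⟩ = Np and ⟨m⟩ = N(p − q) can be checked by direct simulation. This is a Monte Carlo sketch with assumed parameter values, not part of the slides' derivation.

```python
import random

# Monte Carlo check (a sketch, parameters assumed) of <n1> = N*p and
# <m> = N*(p - q) for a biased walk.
random.seed(2)

def sample_means(N, p, trials):
    """Sample averages of n1 and m = 2*n1 - N over many walks."""
    total_n1 = 0
    for _ in range(trials):
        total_n1 += sum(1 for _ in range(N) if random.random() < p)
    mean_n1 = total_n1 / trials
    return mean_n1, 2 * mean_n1 - N

mean_n1, mean_m = sample_means(100, 0.3, 20000)
print(mean_n1, mean_m)   # close to N*p = 30 and N*(p-q) = -40
```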
The second moment follows by the same device:

⟨n_1^2⟩ = Σ_{n_1=0}^{N} n_1^2 [N!/(n_1!(N−n_1)!)] p^n_1 q^(N−n_1)
        = Σ_{n_1=0}^{N} [N!/(n_1!(N−n_1)!)] [p (∂/∂p)]^2 (p^n_1) q^(N−n_1)
        = [p (∂/∂p)]^2 Σ_{n_1=0}^{N} [N!/(n_1!(N−n_1)!)] p^n_1 q^(N−n_1)
        = [p (∂/∂p)]^2 (p + q)^N
        = p (∂/∂p) [p N (p + q)^(N−1)]
        = N p (p + q)^(N−1) + N(N−1) p^2 (p + q)^(N−2)

With p + q = 1 this gives ⟨n_1^2⟩ = N p + N(N−1) p^2, so the dispersion of n_1 is

⟨(Δn_1)^2⟩ = ⟨n_1^2⟩ − ⟨n_1⟩^2 = N p q
Probability distribution for large N

For large N the binomial probability distribution W(n_1) has a pronounced, dominant maximum around n_1 = ñ_1, and its value decreases rapidly away from the maximum. It is useful to find an expression for W(n_1) for large N.

Near the region where the maximum occurs, n_1 is also large, and the change in the distribution between adjacent values is small:

|W(n_1 + 1) − W(n_1)| ≪ W(n_1)
To see why one expands the logarithm rather than the function itself, consider f = (1 + y)^(−N). A direct Taylor series expansion gives

f = 1 − N y + (1/2) N(N+1) y^2 − ...

For large N, N y > 1, so the terms grow; the series does not converge to a value quickly and is therefore difficult to truncate. Instead take the logarithm:

ln f = −N ln(1 + y) = −N (y − y^2/2 + ...)

Here the power of N does not grow, and moreover we get the expression

f = e^(−N(y − y^2/2 + ...))

which is valid for y ≲ 1.
Now expand ln W(n_1) in a Taylor series about the maximum ñ_1, writing η = n_1 − ñ_1:

ln W(n_1) = ln W(ñ_1) + B_1 η + (1/2) B_2 η^2 + (1/6) B_3 η^3 + ...  where  B_k = d^k ln W(ñ_1) / d n_1^k

Since the derivatives are evaluated at the maximum, the function has the properties

B_1 = 0 ,  B_2 < 0

The second condition can be stated explicitly as B_2 = −|B_2|.
With W̃ = W(ñ_1), the probability distribution near the peak may be written as

W(n_1) = W̃ exp( −(1/2)|B_2| η^2 + (1/6) B_3 η^3 )

For sufficiently small η the higher-order terms may be neglected:

W(n_1) = W̃ e^(−(1/2)|B_2| η^2)
To have an explicit look at the derivatives, start from the binomial probability distribution

W_N(n_1) = [N!/(n_1!(N−n_1)!)] p^n_1 q^(N−n_1)

The logarithm of this expression is

ln W_N(n_1) = ln N! − ln n_1! − ln(N−n_1)! + n_1 ln p + (N−n_1) ln q

Using Stirling's result d(ln n!)/dn ≈ ln n, the first derivative is

d ln W_N(n_1)/d n_1 = −ln n_1 + ln(N−n_1) + ln p − ln q = ln[(N−n_1) p / (n_1 q)]

At the maximum this vanishes:

ln[(N−ñ_1) p / (ñ_1 q)] = 0  ⇒  (N−ñ_1) p / (ñ_1 q) = 1  ⇒  (N−ñ_1) p = ñ_1 q

With 1 = p + q this gives ñ_1 = N p, so the maximum coincides with the average of n_1.

On further differentiation,

d^2 ln W_N(n_1)/d n_1^2 = −1/n_1 − 1/(N−n_1)

Evaluating this expression near n_1 = ñ_1 = N p:

B_2 = −1/(N p) − 1/(N − N p) = −1/(N p) − 1/(N q) = −(N p + N q)/(N^2 p q) = −(p + q)/(N p q) = −1/(N p q)

This is negative, as required. On further differentiation it is possible to show that the higher-order terms can indeed be neglected.
It is good to look at the higher-order terms explicitly. From

d^2 ln W_N(n_1)/d n_1^2 = −1/n_1 − 1/(N−n_1)

the next derivative is

d^3 ln W_N(n_1)/d n_1^3 = 1/n_1^2 − 1/(N−n_1)^2

Evaluated at ñ_1 = N p:

= 1/(N^2 p^2) − 1/(N − N p)^2 = 1/(N^2 p^2) − 1/(N^2 q^2) = (q^2 − p^2)/(N^2 p^2 q^2)

which for large N is much smaller in magnitude than |d^2 ln W_N(n_1)/d n_1^2| = 1/(N p q). At large N the higher-order terms are safely ignored.
The value of the constant in the probability distribution can be evaluated by treating n_1 as a quasi-continuous variable; the summation of the probabilities may then be replaced by an integral. For large N the probability distribution has negligible contribution away from the maximum, so with W(n_1) = W̃ e^(−(1/2)|B_2| η^2) and n_1 = ñ_1 + η,

W̃ ∫_{−∞}^{∞} e^(−(1/2)|B_2| η^2) dη = W̃ √(2π/|B_2|) = 1

using the standard integral ∫_{−∞}^{∞} du e^(−a u^2 + b u) = √(π/a) e^(b^2/4a).

Therefore the probability distribution can finally be written as

W(n_1) = √(|B_2|/2π) exp( −(1/2)|B_2| (n_1 − ñ_1)^2 )

and, using B_2 = −1/(N p q) and ñ_1 = N p,

W(n_1) = [1/√(2π N p q)] exp( −(n_1 − N p)^2 / (2 N p q) )

The transformation to a continuous variable uses the relation x = m l, where l is the length of each step.
● Requirement for a probabilistic approach to physics
● Examples of random walk in physical systems
● Random walk in one dimension
● Physical interpretation and definition of probability
● Probability distribution for large N
● Gaussian probability distribution
● Generalization of the results to many variables and unequal random walk
In terms of the displacement m = 2n_1 − N this becomes

P(m) = [1/√(2π N p q)] exp( −(m − N(p − q))^2 / (8 N p q) )

and in terms of the continuous variable x it takes the form

ρ(x) dx = [1/(√(2π) σ)] exp( −(x − μ)^2 / (2σ^2) ) dx

obtained from P(m) through the interval dx/2l, as shown next.
As N → ∞ the number of values accessible to m becomes very large, so the variation of the probability distribution P(m) between adjacent values of m is negligible:

|P(m + 2) − P(m)| ≪ P(m)

(m changes in steps of 2, since m = 2n_1 − N.)

For such distributions the discrete variable m may be replaced by the continuous x, and its variation written as x → x + dx.

The transformation to a continuous variable is achieved by assuming that P(m) is the same over the small region until the next value of m occurs. Successive values of m are spaced by 2, so successive values of x = m l are spaced by 2l, and the interval dx contains dx/2l values of m:

ρ(x) dx = P(m) dx/(2l)

ρ(x) is the probability density and is independent of the magnitude of dx.

Therefore the final probability density is obtained from

P(m) = [1/√(2π N p q)] exp( −(m − N(p − q))^2 / (8 N p q) )

with x = m l:

ρ(x) = P(m)/2l = [1/(√(2π) σ)] exp( −(x − μ)^2 / (2σ^2) )

where μ = (p − q) N l and σ = 2 √(N p q) l.

The final expression is the standard Gaussian probability distribution.
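The quality of the large-N Gaussian approximation can be seen numerically. This sketch (function names assumed) compares the exact binomial W_N(n_1) with its Gaussian form near the peak.

```python
from math import comb, exp, pi, sqrt

# Sketch (assumed names) comparing the exact binomial W_N(n1) with the
# large-N Gaussian exp(-(n1 - N*p)**2 / (2*N*p*q)) / sqrt(2*pi*N*p*q).
def binom(N, n1, p):
    return comb(N, n1) * p**n1 * (1 - p)**(N - n1)

def gauss(N, n1, p):
    q = 1 - p
    return exp(-(n1 - N * p)**2 / (2 * N * p * q)) / sqrt(2 * pi * N * p * q)

N, p = 1000, 0.5
for n1 in (480, 500, 520):
    print(n1, binom(N, n1, p), gauss(N, n1, p))  # nearly equal at large N
```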
We generalize the assumptions used to derive the expression for the Gaussian distribution to many natural processes, and say that every random walk involving a large number of steps results in a Gaussian distribution

ρ(x) = [1/(√(2π) σ)] exp( −(x − μ)^2 / (2σ^2) )

Since the Gaussian distribution is a representation of a probability, the integral over the whole range of the distribution must yield one:

∫_{−∞}^{∞} ρ(x) dx = [1/(√(2π) σ)] ∫_{−∞}^{∞} exp( −(x − μ)^2 / (2σ^2) ) dx = 1

Note the standard results:

∫_{−∞}^{+∞} exp(−a x^2) dx = √(π/a)  and  ∫_{−∞}^{∞} du e^(−a u^2 + b u) = √(π/a) e^(b^2/4a)
The constants of the Gaussian distribution can be identified from the properties of the distribution.

For the mean, let y = x − μ:

⟨x⟩ = [1/(√(2π) σ)] ∫_{−∞}^{+∞} x exp( −(x − μ)^2 / (2σ^2) ) dx
    = [1/(√(2π) σ)] [ ∫_{−∞}^{+∞} y exp( −y^2/(2σ^2) ) dy + μ ∫_{−∞}^{+∞} exp( −y^2/(2σ^2) ) dy ]

The first integral yields 0 (the integrand is odd) and the second yields √(2π) σ, so

⟨x⟩ = μ

Since the Gaussian distribution is symmetric around its peak, the average value is the value at the peak.

Now the dispersion of the distribution is given by

⟨(x − μ)^2⟩ = ∫_{−∞}^{∞} (x − μ)^2 ρ(x) dx
Evaluating this integral:

⟨(x − μ)^2⟩ = [1/(√(2π) σ)] ∫_{−∞}^{+∞} (x − μ)^2 exp( −(x − μ)^2 / (2σ^2) ) dx

Let y = x − μ, dy = dx:

            = [1/(√(2π) σ)] ∫_{−∞}^{+∞} y^2 exp( −y^2/(2σ^2) ) dy

Using ∫_{−∞}^{+∞} exp(−a y^2) dy = √(π/a) with a = 1/(2σ^2), and differentiating under the integral sign,

[1/(√(2π) σ)] (−∂/∂a) ∫_{−∞}^{+∞} e^(−a y^2) dy = [1/(√(2π) σ)] (−∂/∂a) (π^(1/2) a^(−1/2))
 = [1/(√(2π) σ)] (π^(1/2)/2) a^(−3/2) = [1/(√(2π) σ)] (π^(1/2)/2) (1/(2σ^2))^(−3/2)
 = σ^2

Now we have the results for the mean and the root mean square deviation, which are the same as the results obtained from the binomial distribution:

⟨x⟩ = μ = (p − q) N l
⟨(x − ⟨x⟩)^2⟩ = σ^2 = 4 N p q l^2 ,  σ = 2 √(N p q) l
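The normalization, mean, and dispersion of this Gaussian can be confirmed by direct numerical integration. This is a numeric sketch with assumed parameter values, using the random-walk expressions μ = (p − q)Nl and σ = 2√(Npq) l and a simple midpoint rule.

```python
from math import exp, pi, sqrt

# Numeric sketch: check that the Gaussian rho(x) with the random-walk
# values mu = (p-q)*N*l and sigma = 2*sqrt(N*p*q)*l has unit norm,
# mean mu, and dispersion sigma**2 (midpoint rule over +-8 sigma).
N, p, l = 400, 0.6, 1.0
q = 1 - p
mu, sigma = (p - q) * N * l, 2 * sqrt(N * p * q) * l

def rho(x):
    return exp(-(x - mu)**2 / (2 * sigma**2)) / (sqrt(2 * pi) * sigma)

n = 20000
lo, hi = mu - 8 * sigma, mu + 8 * sigma
dx = (hi - lo) / n
xs = [lo + (i + 0.5) * dx for i in range(n)]
norm = sum(rho(x) for x in xs) * dx
mean = sum(x * rho(x) for x in xs) * dx
var = sum((x - mu)**2 * rho(x) for x in xs) * dx
print(norm, mean, var)   # near 1, mu = 80, and sigma**2 = 4*N*p*q*l**2 = 384
```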
General discussion of random walks

Now we may generalize these methods to steps of variable length, with the use of multiple random variables. For two random variables u and v the joint probability is normalized:

Σ_{i=1}^{M} Σ_{j=1}^{N} P(u_i, v_j) = 1

An example is the simultaneous throw of a die (u_i) and a coin toss (v_j); each joint outcome has P(u_i, v_j) = 1/12. Note that throwing the die and flipping the coin are independent events.

The marginal probability of u is

P_u(u_i) = Σ_{j=1}^{M} P(u_i, v_j)

For independent variables the average of a product factorizes:

⟨f(u) g(v)⟩ = Σ_{i=1}^{M} P_u(u_i) f(u_i) · Σ_{j=1}^{N} P_v(v_j) g(v_j)
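The die-and-coin example can be checked in a few lines. This is a sketch; the functions f and g are arbitrary illustrative choices, not from the slides.

```python
# Sketch of the die-and-coin example: 6 x 2 equally likely joint
# outcomes with P(u_i, v_j) = 1/12; independence makes the average
# of a product factorize.  f and g are arbitrary assumed functions.
dice = [1, 2, 3, 4, 5, 6]      # u_i
coin = [0, 1]                  # v_j (T = 0, H = 1)
P_joint = 1 / 12

f = lambda u: u * u
g = lambda v: 3 * v + 1

joint = sum(P_joint * f(u) * g(v) for u in dice for v in coin)
mean_f = sum((1 / 6) * f(u) for u in dice)   # P_u(u_i) = sum_j P = 1/6
mean_g = sum((1 / 2) * g(v) for v in coin)   # P_v(v_j) = 1/2
print(joint, mean_f * mean_g)   # equal: <f(u)g(v)> = <f(u)><g(v)>
```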
For a continuous random variable, ρ(u) du is the probability that the variable lies between u and u + du. The properties of discrete random variables hold for continuous variables as well:

Σ_{i=1}^{M} P(u_i) = 1  ↔  ∫_{a_1}^{a_2} ρ(u) du = 1

The average properties can be computed from this:

⟨f(u)⟩ = Σ_{i=1}^{M} f(u_i) P(u_i)  ↔  ⟨f(u)⟩ = ∫_{a_1}^{a_2} f(u) ρ(u) du
A continuous probability distribution can be converted to a discrete probability as was done for the Gaussian distribution in the 1d random walk,

ρ(x) dx = P(m) dx/(2l)

and for two variables with spacings δu, δv,

ρ(u, v) du dv = P(u, v) (du/δu)(dv/δv)

with normalization and averages

∫_{a_1}^{a_2} ∫_{b_1}^{b_2} ρ(u, v) du dv = 1

⟨F(u, v)⟩ = ∫_{a_1}^{a_2} ∫_{b_1}^{b_2} F(u, v) ρ(u, v) du dv
Mean values of the general random walk

Most random walks of real microscopic particles have steps of different lengths at fixed time intervals. This means the results for random walks of fixed step length must be extended to variable step lengths before they apply to natural systems.

(Figure: step-size distribution w(s_i) versus s_i, for left/right fixed-size steps and for a continuous distribution of step sizes.)
Let the probability distribution w(s_i) be the same for all steps. As for walks in fixed-size steps, the total length of travel in N steps is

x = Σ_{i=1}^{N} s_i

Using ⟨f(u) + g(u)⟩ = ⟨f(u)⟩ + ⟨g(u)⟩, the mean is ⟨x⟩ = N ⟨s⟩.

The dispersion is ⟨(Δx)^2⟩ = ⟨(x − ⟨x⟩)^2⟩, with

x − ⟨x⟩ = Σ_i (s_i − ⟨s_i⟩) ,  i.e.  Δx = Σ_i Δs_i

⟨(Δx)^2⟩ = ⟨ Σ_i Δs_i Σ_j Δs_j ⟩ = ⟨ Σ_i (Δs_i)^2 + Σ_i Σ_{j≠i} Δs_i Δs_j ⟩

Since the steps are independent and each Δs_i has zero mean about its average, the cross terms vanish:

Σ_i Σ_{j≠i} ⟨Δs_i Δs_j⟩ = 0

⟨(Δx)^2⟩ = N ⟨(Δs)^2⟩ ,  with root mean square deviation Δx* = √⟨(Δx)^2⟩
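The relations ⟨x⟩ = N⟨s⟩ and ⟨(Δx)²⟩ = N⟨(Δs)²⟩ can be verified by simulation. This is a sketch in which the step distribution w(s), uniform on [−1, 2], is an assumption chosen purely for illustration.

```python
import random

# Simulation sketch with variable step lengths drawn from a common
# w(s) (uniform on [-1, 2], an assumed choice): check <x> = N*<s>
# and <(dx)**2> = N*<(ds)**2>.
random.seed(3)
N, trials = 50, 20000
# uniform(-1, 2): <s> = 0.5 and <(ds)**2> = (2 - (-1))**2 / 12 = 0.75

xs = [sum(random.uniform(-1, 2) for _ in range(N)) for _ in range(trials)]
mean_x = sum(xs) / trials
var_x = sum((x - mean_x)**2 for x in xs) / trials
print(mean_x, var_x)   # close to N*<s> = 25 and N*<(ds)**2> = 37.5
```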
Complete information about a random walk with differing step lengths is contained in the probability distribution P(x) of the total displacement

x = Σ_{i=1}^{N} s_i

(Figure: as N grows, the relative width of the distribution P(x) reduces.)

One way to express this constraint in a mathematical form, valid also when there are a large number of steps in the random walk, is through a delta function:

δ( x − Σ_{i=1}^{N} s_i ) ,  with  ∫ δ( x − Σ_{i=1}^{N} s_i ) dx = 1
Multiplying the normalized product of step distributions by this delta function and integrating over everything,

1 = ∫_{−∞}^{∞} ... ∫_{−∞}^{∞} w(s_1) w(s_2) w(s_3) ... w(s_N) δ( x − Σ_{i=1}^{N} s_i ) dx ds_1 ds_2 ds_3 ... ds_N

Therefore, removing the dx integration,

P(x) = ∫_{−∞}^{∞} ... ∫_{−∞}^{∞} w(s_1) w(s_2) w(s_3) ... w(s_N) δ( x − Σ_{i=1}^{N} s_i ) ds_1 ds_2 ds_3 ... ds_N

Substituting the integral representation of the delta function,

δ( x − Σ_{i=1}^{N} s_i ) = (1/2π) ∫ dk e^(ik(Σ_{i=1}^{N} s_i − x))

P(x) = (1/2π) ∫_{−∞}^{∞} ... ∫_{−∞}^{∞} w(s_1) w(s_2) w(s_3) ... w(s_N) ∫_{−∞}^{∞} dk e^(ik(Σ_{i=1}^{N} s_i − x)) ds_1 ds_2 ds_3 ... ds_N
The exponential factorizes over the steps, so

P(x) = (1/2π) ∫_{−∞}^{∞} dk e^(−ikx) ∫_{−∞}^{∞} ds_1 w(s_1) e^(iks_1) ∫_{−∞}^{∞} ds_2 w(s_2) e^(iks_2) ... ∫_{−∞}^{∞} ds_N w(s_N) e^(iks_N)

Except for the first, all the integrals are identical; let Q(k) = ∫ ds w(s) e^(iks). Then

P(x) = (1/2π) ∫ dk e^(−ikx) Q^N(k)
Q(k) = ∫ ds w(s) e^(iks) ,  P(x) = (1/2π) ∫_{−∞}^{∞} dk e^(−ikx) Q^N(k)

When k is large, the contributions from neighboring values of s cancel each other, so Q(k) is negligible; the integral is dominated by small k, where Q(k) can be expanded in the moments of s. Carrying this expansion through gives

P(x) = [1/(√(2π) σ)] e^(−(x − μ)^2 / (2σ^2)) ,  μ = N ⟨s⟩ ,  σ^2 = N ⟨(Δs)^2⟩

We have arrived at the Gaussian distribution for the more general steps of the random walk. When these conditions are satisfied, as N → ∞ all natural distributions arising from random walks appear as Gaussian distributions.

This result is known as the central limit theorem, one of the most important results of probability theory.
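The central limit theorem can be illustrated with a deliberately non-Gaussian step distribution. This is a sketch; the skewed three-valued step distribution below is an assumption chosen for illustration. The sum of N such steps should carry close to the Gaussian weight of 0.683 within one σ of μ, with μ = N⟨s⟩ and σ² = N⟨(Δs)²⟩.

```python
import random

# Central-limit sketch: the sum of N i.i.d. steps approaches a Gaussian
# with mu = N*<s> and sigma**2 = N*<(ds)**2>, whatever w(s) is.  The
# skewed three-valued step distribution here is an assumed example.
random.seed(4)
N, trials = 100, 20000
step = lambda: random.choice((-1.0, 0.5, 0.5))
mean_s = (-1.0 + 0.5 + 0.5) / 3                # <s> = 0
var_s = (1.0 + 0.25 + 0.25) / 3                # <(ds)**2> = 0.5

mu, sigma = N * mean_s, (N * var_s) ** 0.5
xs = [sum(step() for _ in range(N)) for _ in range(trials)]
inside = sum(abs(x - mu) < sigma for x in xs) / trials
print(inside)   # close to 0.683, the one-sigma weight of a Gaussian
```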
Properties of a single continuous random variable

For a continuous random variable u with probability density ρ(u), normalized as ∫ ρ(u) du = 1, the expectation value of any function is

⟨f(u)⟩ = ∫ f(u) ρ(u) du

The characteristic function is the average ⟨e^(iku)⟩ = ∫ du ρ(u) e^(iku), the same object as the Q(k) used above. The cumulant generating function is the logarithm of the characteristic function:

ln ⟨e^(iku)⟩ = Σ_{n=1}^{∞} [(ik)^n / n!] ⟨u^n⟩_c

We can get the connection between the cumulants ⟨u^n⟩_c and the moments ⟨u^n⟩ by comparing this relation with the moment expansion

⟨e^(iku)⟩ = Σ_{n=0}^{∞} [(ik)^n / n!] ⟨u^n⟩