
Introduction to probability and random walk

Outline

Requirement for a probabilistic approach to physics
Examples of random walk in physical systems
Random walk in one dimension
Physical interpretation and definition of probability
Probability distribution for large N
Gaussian probability distribution
Generalization of the results to many variables and unequal random walk
Methods of statistical mechanics and random walk

Statistical mechanics applies statistical methods to arrive at useful properties of many-body systems.

Typical examples of many-body systems are: atoms, molecules, macro-molecules; fundamental particles such as photons, phonons, etc.

Single-particle mechanics follows from differential equations such as

Newton's equation of motion: $\vec{F} = m\vec{a}$

Schrödinger wave equation: $i\hbar \dfrac{\partial \psi(x,t)}{\partial t} = -\dfrac{\hbar^2}{2m}\dfrac{\partial^2 \psi(x,t)}{\partial x^2} + V(x)\,\psi(x,t)$

These address the problem of mechanics at the single-particle level: they predict the future position, momentum, and wave function from initial conditions. They are successful in predicting properties of systems where the coupling is mild, that is, where the interactions between particles are limited.

Now look at the collective properties of a collection of particles.

Example: a gas of N particles in a chamber of volume V, whose microscopic state is specified by the positions and momenta
$\{r_1, r_2, \ldots, r_N ;\; p_1, p_2, \ldots, p_N\}$

Useful macroscopic properties are: temperature $T$, pressure $P$, heat capacity $C_v$, compressibility $\chi$.

In equilibrium it is possible to obtain thermodynamic quantities from these fundamental properties. Typically the thermodynamic variables are obtained directly from experiments.
Consider such a system of N particles evolving in a chamber of volume V, with microscopic state $\{r_1, \ldots, r_N ; p_1, \ldots, p_N\}$.

Let the total energy be the sum of the energy of each particle:
$E = \sum_{i=1}^{N} e_i$

Each $e_i$ increases or decreases in time, fluctuating around the mean value
$\langle e_i \rangle = E/N$
Sampling the particle energies $\{e_1, e_2, \ldots, e_N\}$ at discrete times $\{t_1, t_2, \ldots, t_N\}$ gives a discrete sampling of values that fluctuate around $\langle e_i \rangle = E/N$, whether sampled across the N particles or over time at discrete intervals.

Methods of statistical mechanics and random walk

All variables of the constituent particles contained in the volume V of a chamber execute random variations, such as
$\{r_1, r_2, \ldots, r_N ;\; p_1, p_2, \ldots, p_N\}$

How are the basic rules of probability applicable to a random walk?

Consider an unbiased coin toss experiment (outcomes H and T) or a throw of a die.
Whether any given toss yields a head is highly unpredictable. But when the tossing is repeated, the fraction (number of heads)/N settles down: as $N \to \infty$, $P(H) = P(T) = \frac{1}{2}$.
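A minimal numerical sketch of this convergence (assuming Python with NumPy; the trial counts and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fraction of heads in N fair coin tosses, for increasing N.
for N in [10, 100, 10_000, 1_000_000]:
    tosses = rng.integers(0, 2, size=N)   # 1 = head, 0 = tail
    print(f"N = {N:>9}: P(H) ~ {tosses.mean():.4f}")
# The estimated P(H) settles toward 1/2 as N grows.
```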
This is how random walks occur in nature, by random variation of the coordinates of particles $\{r_1, \ldots, r_N ; p_1, \ldots, p_N\}$ in a volume V.

Consider the case where there are only two states for the coordinate; the toss of a coin is an example. Each event in a random walk is completely unpredictable.

An example of a random walk is the outcome of N tosses of a coin, with $H = 1$, $T = 0$:

H T T T H H H T T   (tosses 1 to 9, a sequence of random events)

The final displacement is the net result of these random events.
Diffusion is the probability model for random walks repeated many times.

The probability profile of this walk with many walkers is seen in ink spreading in a bottle of water by diffusion at successive times $t_1 < t_2 < t_3 < t_4$: the black lines are the paths of individual random walkers, and the circles are lines of equal probability, where the concentration is constant; each circle gives a probability to find at least one particle.

https://physics.stackexchange.com/questions/552811/diffusion-of-ink-in-water
A simple case of the random walk is in one dimension. Take the sequence H T T T H H H T T and let $H = \rightarrow$ be a right move and $T = \leftarrow$ a left move. The coin-toss experiment then generates a single walk, and the walker ends at some final destination.

A single walk is not repeated again in another set of coin tosses. As with diffusion in a liquid, a large number of experiments must be repeated and averaged to find the probability of an event. A minimal simulation of one such walk is sketched below.
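A sketch of one such walk generated from simulated coin tosses (Python with NumPy assumed; the seed and N = 9 are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# One realization of a 1D walk: H -> +1 (right move), T -> -1 (left move).
N = 9
steps = rng.choice([+1, -1], size=N)               # the coin-toss sequence
positions = np.concatenate(([0], np.cumsum(steps)))
print("steps :", steps)
print("path  :", positions)
print("final displacement m =", positions[-1])     # in units of the step length
# A different seed gives a different, equally valid single walk.
```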
Probability and random walk in one dimension

The random walk in one dimension is one of the simplest problems that can introduce methods applicable to a large class of problems.

Let us start with the most ideal case: the walker moves either right or left at random, with probabilities p and q respectively. Consider a single particle/walker executing a random walk along a line (positions ..., -2, -1, 0, 1, 2, ...), starting from the origin.

Let each step of the walker be of length $l$ and let the total number of steps be $N$. In such a travel the total displacement is
$x = m\,l$
in terms of the (fundamental) length $l$. The integer m satisfies the condition $-N \le m \le N$, since the maximum length a walker could have traveled in the positive or in the negative direction is N steps.
Let the total number of right steps be $n_1$ and the total number of left steps be $n_2$ in a typical random walk. Then the total number of steps is $n_1 + n_2 = N$, the net displacement of the walker is $m = n_1 - n_2$, and since total left and right moves are related to N, $n_2 = N - n_1$.

Let all steps of the walk be statistically independent (the decision on the next move does not depend on the previous move). Let a move to the right be taken with probability p; then the probability of moving to the left is $q = 1 - p$, with $p + q = 1$.

The probability for a particular sequence of $n_1$ right steps and $n_2$ left steps is $p^{n_1} q^{n_2}$.

The number of distinct ways in which this can be achieved is $\dfrac{N!}{n_1!\, n_2!}$
When the particle is at the origin (0 steps): $(p+q)^0 = 1$.

For a one-step random walk the distribution of probability is
$P(\rightarrow) + P(\leftarrow) = p + q$

For a two-step random walk the distribution of probability is
$(P(\rightarrow) + P(\leftarrow))^2 = (p+q)^2 = 1$
$= P^2(\rightarrow) + P(\rightarrow)P(\leftarrow) + P(\leftarrow)P(\rightarrow) + P^2(\leftarrow)$

The multiplicity redistributes the same total probability among the outcomes.

Probability is calculated after repeating the experiment many times. In a typical unbiased coin-toss experiment one has to repeat many times to get the ideal probability 1/2 for the head and the tail:
$P = \dfrac{\text{number of times tail/head obtained}}{\text{total number of trials}}$
For a larger number of steps the probability redistributes further; for 3 steps
$(P(\rightarrow) + P(\leftarrow))^3 = (p+q)^3 = 1$
$= p^3 + 3p^2 q + 3p q^2 + q^3$

The coefficients are the multiplicities: the total number of ways the steps can be arranged among themselves.

For general N we get the binomial series
$(p+q)^N = \sum_{n=0}^{N} \dfrac{N!}{n!\,(N-n)!}\, p^n q^{N-n}$

Rearrangements of the left steps among themselves (or of the right steps among themselves) produce the same walk and hence the same multiplicity, so they must be removed from $N!$ by division.
Let there be N balls with different colors. The number of ways they can be arranged is $N!$. The number of ways two balls can be chosen is $\binom{N}{2} = \frac{N(N-1)}{2}$. If some balls have the same color, arrangements that merely permute same-colored balls among themselves are indistinguishable, so for $n$ balls of one color the count of distinct arrangements must be divided by $n!$.
In the random walk problem in one dimension there are $n_2$ left moves and $n_1$ right moves, with $N$ moves in total, all of them independent steps. The number of ways in which they can be arranged is $N!$. But the left steps are indistinguishable among themselves, and likewise the right steps, so the contribution from these rearrangements must be divided out:
$\dfrac{N!}{n_1!\, n_2!}$
This counts the ways a particular set of steps can be arranged. The total probability that in a random walk of N steps there are $n_1$ right steps, $W_N(n_1)$, is given by
$W_N(n_1) = \dfrac{N!}{n_1!\, n_2!}\, p^{n_1} q^{n_2}$

This distribution can easily be identified as the expansion term in the binomial series
$(p+q)^N = \sum_{n=0}^{N} \dfrac{N!}{n!\,(N-n)!}\, p^n q^{N-n}$

The probability to obtain total displacement m is obtained from the probability of making $n_1$ steps toward the right:
$W_N(n_1) = P_N(m)$

It is more meaningful to express the probability in terms of the total displacement, by a change of variables using $n_1 + n_2 = N$ and $m = n_1 - n_2$:
$n_1 = \tfrac{1}{2}(N+m), \qquad n_2 = \tfrac{1}{2}(N-m) = N - n_1$

$W_N(n_1) = P_N(m) = \dfrac{N!}{[(N+m)/2]!\,[(N-m)/2]!}\; p^{(N+m)/2}\, q^{(N-m)/2}$

This exact distribution is easy to evaluate numerically, as sketched below.
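A minimal sketch using standard-library combinatorics (the values of N and p are arbitrary):

```python
from math import comb

def W(N, n1, p):
    """Probability of n1 right steps out of N: C(N, n1) p^n1 q^(N - n1)."""
    q = 1.0 - p
    return comb(N, n1) * p**n1 * q**(N - n1)

def P(N, m, p):
    """Probability of net displacement m = n1 - n2 (N + m must be even)."""
    return W(N, (N + m) // 2, p)

N, p = 10, 0.5
print(sum(W(N, n1, p) for n1 in range(N + 1)))   # normalization: 1.0
print(P(N, 0, p), P(N, 2, p), P(N, 10, p))       # m = 0 is the most probable
```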
For an unbiased random walk, $p = q = \frac{1}{2}$, so
$W_N(n_1) = P_N(m) = \dfrac{N!}{[(N+m)/2]!\,[(N-m)/2]!} \left(\dfrac{1}{2}\right)^N$

What does this mean, and what do the various quantities represent? The multiplicity, and hence the probability $W_N(n_1) = P_N(m)$, increases in the middle of the range and is smallest at the extremes $n_1 = 0$ and $n_1 = N$.

(Figure: displacement versus number of steps for sample walks.)

Random variables: definition and a few examples

A random variable u is an outcome of an experiment that produces random values. We can then introduce a random variable $F(u)$ which is a function of u. A particular example is the identity function, $X(u) = u$.

Examples of a sample space:

1) The outcome of the throw of a die: the random variable can have six possibilities.

2) The result of flipping a coin: the random variable has two possibilities.

3) The outcome of a random walk of a walker for a total of N steps: here the random variable can have $2^N$ possibilities.

An example of such a random walk is the change in velocity of a particular molecule, after a time t, in a system of particles at equilibrium.
Let $u = \{u_1, u_2, \ldots, u_M\}$ be the possible outcomes of a random variable, with respective probabilities $\{P(u_1), P(u_2), \ldots, P(u_M)\}$.

For the die experiment, $P(u_1) = P(u_2) = \cdots = P(u_6) = \frac{1}{6}$.

Average value of the faces of a die, from experiment:
$\langle u \rangle = \dfrac{n_1 \times 1 + n_2 \times 2 + n_3 \times 3 + n_4 \times 4 + n_5 \times 5 + n_6 \times 6}{n_1 + n_2 + n_3 + n_4 + n_5 + n_6} = 3.5$

From the assumed possibilities:
$\langle u \rangle = \dfrac{\frac{1}{6}\times 1 + \frac{1}{6}\times 2 + \frac{1}{6}\times 3 + \frac{1}{6}\times 4 + \frac{1}{6}\times 5 + \frac{1}{6}\times 6}{\frac{1}{6} + \frac{1}{6} + \frac{1}{6} + \frac{1}{6} + \frac{1}{6} + \frac{1}{6}} = 3.5$

In compact form in general, with $P(u_i) = n_i / \sum_i n_i$,
$\langle u \rangle = \dfrac{\sum_{i=1}^{M} P(u_i)\, u_i}{\sum_{i=1}^{M} P(u_i)}$

When the probability is normalized, $\sum_{i=1}^{M} P(u_i) = 1$ and $\langle u \rangle = \sum_{i=1}^{M} P(u_i)\, u_i$.

Therefore, for any function of the random variable,
$\langle f(u) \rangle = \dfrac{\sum_i P(u_i)\, f(u_i)}{\sum_i P(u_i)}$

A numerical check of the die average is sketched below.
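A quick sketch assuming NumPy (the number of rolls is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

faces = np.arange(1, 7)
rolls = rng.choice(faces, size=100_000)

# Experimental average: (n1*1 + ... + n6*6) / (n1 + ... + n6).
print("experimental <u> :", rolls.mean())              # ~3.5
# Average from the assumed probabilities P(u_i) = 1/6.
print("theoretical  <u> :", float(np.sum(faces / 6)))  # 3.5 exactly
```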
Other notable properties of the averages of random variables: the sum of the averages of two functions is equal to the average of their sum,
$\langle f(u) + g(u) \rangle = \langle f(u) \rangle + \langle g(u) \rangle$

When a function is multiplied by a constant, the average is the average of the function multiplied by the same constant:
$\langle c\, f(u) \rangle = c\, \langle f(u) \rangle$

In a probability distribution the mean values are very useful; the simplest of them is the mean $\langle u \rangle$. This is the central value around which all values are distributed. The deviation from the central value is
$\Delta u = u - \langle u \rangle, \qquad \langle \Delta u \rangle = 0$

Another useful average is the second moment of u about its mean, known as the dispersion:
$\langle (\Delta u)^2 \rangle = \sum_{i=1}^{M} P(u_i)\, (u_i - \langle u \rangle)^2 \ge 0$

This gives the spread of the distribution around the mean. It may be expressed as
$\langle (u - \langle u \rangle)^2 \rangle = \langle u^2 \rangle - \langle u \rangle^2$

Since this quantity is always positive, $\langle u^2 \rangle \ge \langle u \rangle^2$. This may be generalized to the nth moment $\langle u^n \rangle$.
For $f(u) = u$, $\langle f(u) \rangle$ is the mean of the distribution, and the root of the dispersion,
$\sqrt{\langle (u - \langle u \rangle)^2 \rangle} = \sqrt{\langle u^2 \rangle - \langle u \rangle^2}$
measures the typical variation of the random variable over N trials.
Distributions that differ in moments (for $f(u) = u$):

Blue is a distribution with all values at the mean.
Black is a distribution with the same mean as blue but a different dispersion.
Red is a distribution with the same mean as blue and black, but its third moment is different: it is asymmetrical with respect to the mean.
Green is a distribution with the same mean as blue, black and red, but differing in all higher moments.
Mean values of the random walk problem

The probability distribution is given by
$W_N(n_1) = \dfrac{N!}{n_1!\, n_2!}\, p^{n_1} q^{n_2} = \dfrac{N!}{n_1!\,(N-n_1)!}\, p^{n_1} q^{N-n_1}$, since $n_2 = N - n_1$.

The normalization condition $\sum_i P(u_i) = 1$ can be verified from the binomial series:
$\sum_{n_1=0}^{N} \dfrac{N!}{n_1!\,(N-n_1)!}\, p^{n_1} q^{N-n_1} = (p+q)^N = 1^N = 1$, where $1 = p + q$.

The mean number of steps to the right is obtained from the relation $\langle f(u) \rangle = \sum_i f(u_i)\, P(u_i)$:
$\langle n_1 \rangle = \sum_{n_1=0}^{N} n_1\, W(n_1) = \sum_{n_1=0}^{N} n_1 \dfrac{N!}{n_1!\,(N-n_1)!}\, p^{n_1} q^{N-n_1}$

Evaluation of this sum is non-trivial, but possible using the relation
$n_1\, p^{n_1} = p \dfrac{\partial}{\partial p}\left( p^{n_1} \right)$
Substituting this relation, we get
$\langle n_1 \rangle = \sum_{n_1=0}^{N} n_1 \dfrac{N!}{n_1!\,(N-n_1)!}\, p^{n_1} q^{N-n_1} = \sum_{n_1=0}^{N} \dfrac{N!}{n_1!\,(N-n_1)!} \left( p \dfrac{\partial (p^{n_1})}{\partial p} \right) q^{N-n_1}$

By interchanging the order of summation and differentiation,
$= p \dfrac{\partial}{\partial p} \left[ \sum_{n_1=0}^{N} \dfrac{N!}{n_1!\,(N-n_1)!}\, p^{n_1} q^{N-n_1} \right]$

Using the binomial expansion,
$= p \dfrac{\partial}{\partial p} (p+q)^N = N p\, (p+q)^{N-1}$

Using $1 = p + q$,
$\langle n_1 \rangle = N p$

This is physically meaningful, since it shows the average number of right steps is the probability to move toward the right times the total number of steps.
Similarly, the average number of left steps is $\langle n_2 \rangle = N q$.
The sum of the average moves to the left and right is $\langle n_1 \rangle + \langle n_2 \rangle = N$.
The net displacement of the particle is $\langle m \rangle = \langle n_1 \rangle - \langle n_2 \rangle = N(p - q)$.
If the probabilities of left and right moves are equal, the net displacement is zero.
Dispersion of the random walk

The dispersion of the random walk is given by
$\langle (\Delta n_1)^2 \rangle = \langle (n_1 - \langle n_1 \rangle)^2 \rangle = \langle n_1^2 \rangle - \langle n_1 \rangle^2$

We have $\langle n_1 \rangle = N p$; now we have to evaluate $\langle n_1^2 \rangle$. In terms of the sums,
$\langle n_1^2 \rangle = \sum_{n_1=0}^{N} n_1^2\, W(n_1) = \sum_{n_1=0}^{N} n_1^2 \dfrac{N!}{n_1!\,(N-n_1)!}\, p^{n_1} q^{N-n_1}$

Now using the relation
$n_1^2\, p^{n_1} = n_1\, p \dfrac{\partial}{\partial p} p^{n_1} = \left( p \dfrac{\partial}{\partial p} \right)^2 p^{n_1}$

$\langle n_1^2 \rangle = \sum_{n_1=0}^{N} \dfrac{N!}{n_1!\,(N-n_1)!} \left( p \dfrac{\partial}{\partial p} \right)^2 \left( p^{n_1} q^{N-n_1} \right)$
$= \left( p \dfrac{\partial}{\partial p} \right)^2 \sum_{n_1=0}^{N} \dfrac{N!}{n_1!\,(N-n_1)!}\, p^{n_1} q^{N-n_1} = \left( p \dfrac{\partial}{\partial p} \right)^2 (p+q)^N$

$= p \dfrac{\partial}{\partial p} \left[ p N (p+q)^{N-1} \right] = p \left[ N (p+q)^{N-1} + p N (N-1)(p+q)^{N-2} \right]$

Using $q + p = 1$ and $\langle n_1 \rangle = N p$:
$= p \left[ N + p N (N-1) \right] = N p \left[ 1 + p N - p \right] = N p \left[ q + p N \right] = N p q + (N p)^2 = N p q + \langle n_1 \rangle^2$

Therefore the dispersion is given by
$\langle (\Delta n_1)^2 \rangle = \langle n_1^2 \rangle - \langle n_1 \rangle^2 = N p q$
The root mean square deviation is given by
$\Delta^* n_1 = \langle (\Delta n_1)^2 \rangle^{1/2} = \sqrt{N p q}$

It is a linear measure of the width of the distribution. The relative width of this distribution is
$\dfrac{\Delta^* n_1}{\langle n_1 \rangle} = \dfrac{\sqrt{N p q}}{N p} = \sqrt{\dfrac{q}{N p}} = \dfrac{1}{\sqrt{N}} \quad \text{when } p = q = 1/2$

so the relative width of the distribution reduces as N grows.

The dispersion of the displacement may also be calculated. With $m = n_1 - n_2 = 2 n_1 - N$, the difference between the instantaneous and average net displacement is
$\Delta m = m - \langle m \rangle = (2 n_1 - N) - (2 \langle n_1 \rangle - N) = 2 (n_1 - \langle n_1 \rangle) = 2\, \Delta n_1$

$(\Delta m)^2 = 4\, (\Delta n_1)^2$

Taking averages on both sides,
$\langle (\Delta m)^2 \rangle = 4 \langle (\Delta n_1)^2 \rangle = 4 N p q$

For an unbiased random walk, $p = q = \frac{1}{2}$ and $\langle (\Delta m)^2 \rangle = N$. These moments can be checked by direct simulation, as sketched below.
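A minimal sketch (N, p, and the trial count are assumed values; NumPy samples the binomial directly):

```python
import numpy as np

rng = np.random.default_rng(3)

N, p, trials = 100, 0.5, 200_000
n1 = rng.binomial(N, p, size=trials)   # number of right steps in each walk
m = 2 * n1 - N                         # net displacement in each walk

print("<n1>      :", n1.mean(), " expected", N * p)
print("<(dn1)^2> :", n1.var(),  " expected", N * p * (1 - p))
print("<(dm)^2>  :", m.var(),   " expected", 4 * N * p * (1 - p))
```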

Probability distribution for large N

For large N the binomial probability distribution $W(n_1)$ has a pronounced, dominant maximum around $n_1 = \tilde{n}_1$, and its value decreases away from the maximum. It is useful to find an approximate expression for $W(n_1)$ valid at large N.

Near the region where the maximum occurs, where $n_1$ is also large, the change in the distribution between adjacent values is small:
$|W(n_1 + 1) - W(n_1)| \ll W(n_1)$

We may therefore consider $W(n_1)$ as a continuous function of $n_1$, and it is permissible to take derivatives of this function. At the maximum,
$\dfrac{dW(n_1)}{dn_1} = 0 \quad \text{and also} \quad \dfrac{d \ln W(n_1)}{dn_1} = 0$

The derivatives are evaluated very near the maximum, at points represented by the coordinate
$n_1 = \tilde{n}_1 + \eta$

When the change $\eta$ is sufficiently small it is possible to expand the function in a Taylor series around the maximum. We choose to expand, instead of the original function, the logarithm of this function. To see why, consider an approximate expression, valid for $y \ll 1$, for the function $f = (1+y)^{-N}$. A direct Taylor series expansion gives
$f = 1 - N y + \tfrac{1}{2} N(N+1)\, y^2 - \ldots$

For large N, $N y > 1$ even when $y \ll 1$; the terms of this series grow, so it does not converge to a value and is difficult to truncate.

Now take the logarithm of this function and try an expansion:
$\ln f = -N \ln(1+y) = -N \left( y - \tfrac{1}{2} y^2 + \ldots \right)$

Here the coefficients do not grow with powers of N, and moreover we get the expression
$f = e^{-N \left( y - \frac{1}{2} y^2 + \ldots \right)}$

which is valid for $y \lesssim 1$. A quick numerical comparison of the two expansions is sketched below.
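Illustrative values of N and y, chosen so that $Ny \gg 1$ while $y \ll 1$ (a sketch assuming NumPy):

```python
import numpy as np

N, y = 1000, 0.1
exact  = (1 + y) ** (-N)
taylor = 1 - N * y + 0.5 * N * (N + 1) * y**2   # direct expansion, truncated
logexp = np.exp(-N * (y - 0.5 * y**2))          # expansion of ln f, truncated

print(f"exact               : {exact:.3e}")
print(f"direct Taylor (2nd) : {taylor:.3e}")    # fails badly since N*y >> 1
print(f"log expansion (2nd) : {logexp:.3e}")    # right order of magnitude
```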
Now expanding the probability $\ln W(n_1)$ in a Taylor series,
$\ln W(n_1) = \ln W(\tilde{n}_1) + B_1 \eta + \tfrac{1}{2} B_2 \eta^2 + \tfrac{1}{6} B_3 \eta^3 + \ldots \qquad \text{where } B_k = \left. \dfrac{d^k \ln W}{dn_1^k} \right|_{n_1 = \tilde{n}_1}$

Since the derivatives are evaluated at the maximum, the expansion has the properties
$B_1 = 0, \qquad B_2 < 0$

The second condition can be stated explicitly as $B_2 = -|B_2|$.
Writing the value at the peak as $\widetilde{W} = W(\tilde{n}_1)$, the probability distribution near the peak may be written as
$W(n_1) = \widetilde{W}\, e^{-\frac{1}{2}|B_2|\eta^2 + \frac{1}{6}B_3 \eta^3}$

For sufficiently small $\eta$ the higher-order terms in $\eta$ may be neglected:
$W(n_1) = \widetilde{W}\, e^{-\frac{1}{2}|B_2|\eta^2}$

To look explicitly at the derivatives, start from the binomial probability distribution
$W_N(n_1) = \dfrac{N!}{n_1!\,(N-n_1)!}\, p^{n_1} q^{N-n_1}$

The logarithm of this expression is
$\ln W_N(n_1) = \ln N! - \ln n_1! - \ln (N-n_1)! + n_1 \ln p + (N-n_1) \ln q$

For differentiating we need an approximate expression for the derivative of a factorial:
$\dfrac{d \ln n!}{dn} \simeq \dfrac{\ln (n+1)! - \ln n!}{1} = \ln \dfrac{(n+1)!}{n!} = \ln (n+1) \simeq \ln n \quad \text{for } n \gg 1$
This formula may also be obtained from Stirling's approximation. Using
$\ln W_N(n_1) = \ln N! - \ln n_1! - \ln (N-n_1)! + n_1 \ln p + (N-n_1) \ln q$
and $\frac{d \ln n!}{dn} \simeq \ln n$,
$\dfrac{d \ln W_N(n_1)}{dn_1} = -\ln n_1 + \ln (N - n_1) + \ln p - \ln q$

The first derivative is zero at the maximum $n_1 = \tilde{n}_1$:
$\ln \left[ \dfrac{(N - \tilde{n}_1)\, p}{\tilde{n}_1\, q} \right] = 0 \;\Rightarrow\; \dfrac{(N - \tilde{n}_1)\, p}{\tilde{n}_1\, q} = 1 \;\Rightarrow\; (N - \tilde{n}_1)\, p = \tilde{n}_1\, q$

Using $1 = p + q$, this gives $\tilde{n}_1 = N p$, the average value found earlier.

On further differentiation,
$\dfrac{d^2 \ln W_N(n_1)}{dn_1^2} = -\dfrac{1}{n_1} - \dfrac{1}{N - n_1}$

Evaluating this expression near $n_1 = \tilde{n}_1 = N p$:
$B_2 = -\dfrac{1}{N p} - \dfrac{1}{N - N p} = -\dfrac{1}{N p} - \dfrac{1}{N q} = -\dfrac{N p + N q}{N^2 p q} = -\dfrac{p + q}{N p q} = -\dfrac{1}{N p q}$

which is negative, as required. On further differentiation it is possible to show that the higher-order terms can indeed be neglected.
It is good to look at the higher-order terms. From
$\dfrac{d^2 \ln W_N(n_1)}{dn_1^2} = -\dfrac{1}{n_1} - \dfrac{1}{N - n_1}$
we get
$\dfrac{d^3 \ln W_N(n_1)}{dn_1^3} = \dfrac{1}{n_1^2} - \dfrac{1}{(N - n_1)^2}$

Evaluated at $\tilde{n}_1 = N p$:
$= \dfrac{1}{N^2 p^2} - \dfrac{1}{(N - N p)^2} = \dfrac{1}{N^2 p^2} - \dfrac{1}{N^2 q^2} = \dfrac{q^2 - p^2}{N^2 p^2 q^2} \ll \dfrac{1}{N p q} = \left| \dfrac{d^2 \ln W_N(n_1)}{dn_1^2} \right|$

At large N the higher-order terms are safely ignored.
The value of the constant in the probability distribution can be evaluated by treating the variable as quasi-continuous. The summation of the probabilities may then be replaced by an integral:
$\sum_{n_1=0}^{N} W(n_1) \simeq \int W(n_1)\, dn_1 = \int_{-\infty}^{\infty} W(\tilde{n}_1 + \eta)\, d\eta = 1$

(For large N the probability distribution has negligible contribution away from the maximum, so the limits can be extended to $\pm\infty$.) With $W(n_1) = \widetilde{W} e^{-\frac{1}{2}|B_2|\eta^2}$ and the standard integral $\int_{-\infty}^{\infty} e^{-a u^2 + b u}\, du = \sqrt{\pi/a}\; e^{b^2/4a}$,
$\widetilde{W} \int_{-\infty}^{\infty} e^{-\frac{1}{2}|B_2|\eta^2}\, d\eta = \widetilde{W} \sqrt{\dfrac{2\pi}{|B_2|}} = 1$

Therefore the probability distribution can finally be written as
$W(n_1) = \sqrt{\dfrac{|B_2|}{2\pi}}\, \exp\!\left( -\tfrac{1}{2} |B_2|\, (n_1 - \tilde{n}_1)^2 \right)$

Using the expressions $B_2 = -\dfrac{1}{N p q}$ and $\tilde{n}_1 = N p$:
$W(n_1) = \dfrac{1}{\sqrt{2\pi N p q}} \exp\!\left( -\dfrac{(n_1 - N p)^2}{2 N p q} \right)$

Alternatively, with $\langle (\Delta n_1)^2 \rangle = N p q$, we may write this expression as
$W(n_1) = \dfrac{1}{\sqrt{2\pi \langle (\Delta n_1)^2 \rangle}} \exp\!\left( -\dfrac{(n_1 - \tilde{n}_1)^2}{2 \langle (\Delta n_1)^2 \rangle} \right)$
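A sketch comparing the exact binomial $W_N(n_1)$ with this Gaussian approximation (arbitrary N and p; NumPy assumed):

```python
import numpy as np
from math import comb

N, p = 100, 0.5
q = 1 - p

n1 = np.arange(N + 1)
exact = np.array([comb(N, k) * p**k * q**(N - k) for k in n1])
gauss = np.exp(-(n1 - N * p) ** 2 / (2 * N * p * q)) / np.sqrt(2 * np.pi * N * p * q)

print("max |exact - gauss| :", np.abs(exact - gauss).max())  # small already at N = 100
print("peak exact, gauss   :", exact[N // 2], gauss[N // 2])
```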
The Gaussian probability distribution

The expression for the probability distribution
$W(n_1) = \dfrac{1}{\sqrt{2\pi N p q}} \exp\!\left( -\dfrac{(n_1 - N p)^2}{2 N p q} \right)$
may be rearranged to get the probability distribution $P(m)$ for the net displacement m. The number of right steps $n_1$ needed to obtain a net displacement m is
$n_1 = \tfrac{1}{2}(N + m)$
$n_1 - N p = \tfrac{1}{2}\left( N + m - 2 N p \right) = \tfrac{1}{2}\left[ m - N(p - q) \right]$

$P(m) = W\!\left( \dfrac{N+m}{2} \right) = \dfrac{1}{\sqrt{2\pi N p q}} \exp\!\left( -\dfrac{(m - N(p-q))^2}{8 N p q} \right)$

From the formula $m = 2 n_1 - N$, the variation in m is in units of $\Delta m = \pm 2$, so m takes only values of a single parity. If we regard the variation in m as infinitesimally small we can consider the distribution as continuous; the parity restriction becomes irrelevant when the total number of steps $N \to \infty$.

We transform to a continuous variable using the relation $x = m\,l$, where $l$ is the length of each step.

$P(m) = \dfrac{1}{\sqrt{2\pi N p q}} \exp\!\left( -\dfrac{(m - N(p-q))^2}{8 N p q} \right)
\;\longrightarrow\;
\rho(x) = \dfrac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left( -\dfrac{(x - \mu)^2}{2\sigma^2} \right)$

(Figure: the discrete distribution $P(m)$, defined on points of m spaced $2l$ apart in $x = ml$, going over to the continuous density $\rho(x)$ on an interval dx.)
As $N \to \infty$ the probability distribution $P(m)$ becomes very broad, so the variation of the probability between adjacent values of m is negligible:
$|P(m+2) - P(m)| \ll P(m)$

For such distributions the discrete variable m may be replaced with the continuous variable x, with changes in the variable written as $x \to x + dx$.

The transformation to a continuous variable is achieved by assuming $P(m)$ stays the same over a small region until the next allowed value of m (at $m + 2$) occurs. Since the values of x corresponding to allowed m are spaced $2l$ apart, an interval dx contains $dx/2l$ such values:
$\rho(x)\, dx = P(m)\, \dfrac{dx}{2l}$

Here $\rho(x)$ is the probability density, and it is independent of the magnitude of dx.

Therefore the final probability density may be obtained from
$P(m) = \dfrac{1}{\sqrt{2\pi N p q}} \exp\!\left( -\dfrac{(m - N(p-q))^2}{8 N p q} \right)$, with $x = m l$:

$\rho(x) = \dfrac{P(m)}{2l} = \dfrac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left( -\dfrac{(x - \mu)^2}{2\sigma^2} \right)$
where $\mu = (p - q)\, N l$ and $\sigma = 2\sqrt{N p q}\; l$.

The final expression is the standard Gaussian probability distribution. A direct numerical check of $\mu$ and $\sigma$ is sketched below.
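A minimal sketch simulating many walkers (all parameter values are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

N, p, l, walkers = 100, 0.6, 1.0, 50_000
q = 1 - p
steps = rng.choice([+l, -l], size=(walkers, N), p=[p, q])
x = steps.sum(axis=1)                  # final displacement of each walker

mu    = (p - q) * N * l                # predicted mean
sigma = 2 * np.sqrt(N * p * q) * l     # predicted width
print("mean :", x.mean(), " expected", mu)
print("std  :", x.std(),  " expected", sigma)
```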
We generalize the assumptions used to derive the Gaussian distribution to many natural processes, and say that all random walks involving a large number of steps result in a Gaussian distribution
$\rho(x) = \dfrac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left( -\dfrac{(x - \mu)^2}{2\sigma^2} \right)$

Since the Gaussian distribution represents a probability, the integral over the full range of the distribution must yield one:
$\int_{-\infty}^{\infty} \rho(x)\, dx = \dfrac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{\infty} \exp\!\left( -\dfrac{(x - \mu)^2}{2\sigma^2} \right) dx = 1$

which follows from the standard Gaussian integrals
$\int_{-\infty}^{+\infty} e^{-a x^2}\, dx = \sqrt{\pi/a}, \qquad \int_{-\infty}^{\infty} e^{-a u^2 + b u}\, du = \sqrt{\dfrac{\pi}{a}}\, e^{b^2/4a}$

The constants of the Gaussian distribution can be identified from properties of the distribution. The mean of the distribution is given by
$\langle x \rangle = \int_{-\infty}^{\infty} x\, \rho(x)\, dx = \dfrac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{+\infty} x \exp\!\left( -\dfrac{(x - \mu)^2}{2\sigma^2} \right) dx$

Let $y = x - \mu$:
$= \dfrac{1}{\sqrt{2\pi}\,\sigma} \left[ \int_{-\infty}^{+\infty} y \exp\!\left( -\dfrac{y^2}{2\sigma^2} \right) dy + \mu \int_{-\infty}^{+\infty} \exp\!\left( -\dfrac{y^2}{2\sigma^2} \right) dy \right]$
The first integral vanishes (the integrand is odd) and the second yields $\sqrt{2\pi}\,\sigma$, so
$\langle x \rangle = \mu$

This shows that, since the Gaussian distribution is symmetric around its peak, the average value is the value at the peak.

Now the dispersion of the distribution is given by
$\langle (x - \mu)^2 \rangle = \int_{-\infty}^{\infty} (x - \mu)^2\, \rho(x)\, dx$
∞
2 2
Now the integral yields 〈  x−  〉= ∫ x−  x  dx
∞−∞
1 −x− 2

Let y=x−μ
= ∫
 2   −∞
+∞
2
 x−  exp
2
2 2 dx
+∞
1 −y 2

dy=dx
= ∫
√ 2 π σ −∞
2
y exp
+∞
( )

2
dy ∫
−∞
exp(−ax
a=(1/2 σ
)dx= √ π/a
2
)
−1 ∂
( ) ∫ exp ( y
2
) dy = −1 ∂ (π1/2 /a1 /2 )
=
√ 2 π σ ∂ a −∞
−a
( )
√2 π σ ∂ a
−1 π 1/ 2 −3/ 2 1 π
1/ 2
2 −3 /2
= (−1) a = (1/2 σ )
√2 π σ 2 √2 π σ 2
1 π 1/ 2 2 −3 /2
= (1/2 σ )
√2 π σ 2
3
=
1 
2   2

[ ]
2  2  2 = 2

Now we have the results for mean and root mean square deviation as which is same
from the results obtained from binomial distribution
〈 x 2 〉=4 N p q l2 μ=( p−q) N l
σ=2 √ N p q l 48
〈 x 〉=N  p−ql
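A minimal quadrature sketch with arbitrary $\mu$ and $\sigma$ (NumPy assumed):

```python
import numpy as np

mu, sigma = 2.0, 1.5
x = np.linspace(mu - 10 * sigma, mu + 10 * sigma, 200_001)
rho = np.exp(-(x - mu) ** 2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

print("norm       :", np.trapz(rho, x))                  # ~1
print("mean       :", np.trapz(x * rho, x))              # ~mu
print("dispersion :", np.trapz((x - mu) ** 2 * rho, x))  # ~sigma**2
```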

General discussion of random walks

The random walk we have discussed so far is in one dimension; it needs to be generalized to many dimensions, and from discrete to continuous steps, to apply to real-world problems.

The method of analysis we have used is called combinatorial analysis (an approach based on counting the number of ways different things can be arranged).

We now generalize these methods to steps of variable length, with the use of multiple random variables. Consider two sets of random variables
$u = \{u_1, u_2, \ldots, u_M\}, \qquad v = \{v_1, v_2, \ldots, v_N\}$

and let $P(u_i, v_j)$ be the probability of finding $u_i$ and $v_j$ together. The normalization condition for the probability distribution is
$\sum_{i=1}^{M} \sum_{j=1}^{N} P(u_i, v_j) = 1$
Example: a simultaneous throw of a die and toss of a coin. The sample space is
$(H,1), (H,2), (H,3), (H,4), (H,5), (H,6)$
$(T,1), (T,2), (T,3), (T,4), (T,5), (T,6)$

The joint probability of getting a head together with, say, the value 2 is $\frac{1}{2} \times \frac{1}{6} = \frac{1}{12}$; indeed $P(u_i, v_j) = 1/12$ for every pair. Note that throwing the die and flipping the coin are independent events.

The marginal distribution
$P_u(u_i) = \sum_{j} P(u_i, v_j)$
gives the probability that $u_i$ assumes a value irrespective of the value of $v_j$:
$P_u(u_i) = 6/12 = 1/2$ (summed over the six die faces)
$P_v(v_j) = 2/12 = 1/6$ (summed over the two coin faces)

A small enumeration of this joint sample space is sketched below.
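A sketch in plain Python:

```python
from itertools import product

coin = {"H": 1 / 2, "T": 1 / 2}
die = {face: 1 / 6 for face in range(1, 7)}

# Joint probability of independent events is the product of the two.
joint = {(c, f): coin[c] * die[f] for c, f in product(coin, die)}
print(joint[("H", 2)])                                    # 1/12

# Marginals: sum the joint probability over the other variable.
P_u = {c: sum(joint[(c, f)] for f in die) for c in coin}  # 1/2 each
P_v = {f: sum(joint[(c, f)] for c in coin) for f in die}  # 1/6 each
print(P_u)
print(P_v)
```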


The coin and die example is a special case where both variables are statistically independent of each other:
$P(u_i, v_j) = P_u(u_i)\, P_v(v_j)$

Similar to the probability distribution of a single random variable,
$\langle F(u, v) \rangle = \sum_{i=1}^{M} \sum_{j=1}^{N} P(u_i, v_j)\, F(u_i, v_j)$

When the function depends on only one of the variables, the other probability can be summed out:
$\langle f(u) \rangle = \sum_{i=1}^{M} \sum_{j=1}^{N} P(u_i, v_j)\, f(u_i) = \sum_{i=1}^{M} P_u(u_i)\, f(u_i)$

It can easily be proved by direct expansion of the sum that
$\langle F(u, v) + G(u, v) \rangle = \langle F(u, v) \rangle + \langle G(u, v) \rangle$

If the variables are statistically independent, the average of a product also factorizes:
$\langle f(u)\, g(v) \rangle = \sum_{i=1}^{M} \sum_{j=1}^{N} P(u_i, v_j)\, f(u_i)\, g(v_j) = \sum_{i=1}^{M} \sum_{j=1}^{N} P_u(u_i)\, f(u_i)\, P_v(v_j)\, g(v_j)$
$\langle f(u)\, g(v) \rangle = \left[ \sum_{i=1}^{M} P_u(u_i)\, f(u_i) \right] \left[ \sum_{j=1}^{N} P_v(v_j)\, g(v_j) \right] = \langle f(u) \rangle\, \langle g(v) \rangle$

The average of the product is equal to the product of the averages. These results can be generalized to more than two variables.

Another generalization is to continuous probability distributions. Consider a variable in the continuous range
$a_1 < u < a_2$

Now it is possible to look for the probability of finding the variable between $u$ and $u + du$. This probability is proportional to du,
$P \propto du$, expressed as $\rho(u)\, du$
where $\rho(u)$ is the probability density, independent of du.
The continuous variable can be treated as discrete by subdividing the interval from $u$ to $u + du$ into cells of size $\delta u \ll du$; within a $\delta u$ interval the probability does not vary at all.

The properties of discrete random variables then carry over to continuous variables:
$\sum_{i=1}^{M} P(u_i) = 1 \quad \longrightarrow \quad \int_{a_1}^{a_2} \rho(u)\, du = 1$

The average properties can be computed from this:
$\langle f(u) \rangle = \sum_{i=1}^{M} f(u_i)\, P(u_i) \quad \longrightarrow \quad \langle f(u) \rangle = \int_{a_1}^{a_2} f(u)\, \rho(u)\, du$
The continuous probability distribution in two variables can be related to a discrete one in the same way as the Gaussian distribution was obtained from the 1d random walk (where $\rho(x)\, dx = P(m)\, dx/2l$):
$\rho(u, v)\, du\, dv = P(u, v)\, \dfrac{du\, dv}{\delta u\, \delta v}$

where $\frac{du\, dv}{\delta u\, \delta v}$ counts the infinitesimal cells of magnitude $\delta u\, \delta v$ contained between $u$ and $u + du$ and between $v$ and $v + dv$, and $P(u, v)$ is the probability of one cell.

The normalization condition is then given by
$\int_{a_1}^{a_2} \int_{b_1}^{b_2} \rho(u, v)\, du\, dv = 1$

with the averaging property
$\langle F(u, v) \rangle = \int_{a_1}^{a_2} \int_{b_1}^{b_2} F(u, v)\, \rho(u, v)\, du\, dv$
Mean values of the general random walk

Most random walks of real microscopic particles have steps of different lengths at fixed time intervals. This means we have to extend the fixed-step-length results of the random walk to variable step lengths in order to apply them to natural systems.

Let $s_i$ be the displacement in the i-th step. For example, a gas molecule in a 1d channel will displace according to its velocity, which is a random number depending on the temperature; the velocity distribution is the typical Maxwell-Boltzmann distribution, which we will derive later in this course.

In general, $w(s_i)\, ds_i$ is the probability that the displacement of step i lies in the interval between $s_i$ and $s_i + ds_i$. The steps are independent: $s_i$ is independent of $s_{i+1}$.

(Figure: the discrete left-right distribution of fixed-size steps, compared with a continuous distribution of step sizes $w(s_i)$.)
Let the probability distribution $w(s_i)$ be the same for all steps. As with walks in fixed-size steps, the total displacement in N steps is
$x = \sum_{i=1}^{N} s_i$

Using $\langle f(u) + g(u) \rangle = \langle f(u) \rangle + \langle g(u) \rangle$,
$\langle x \rangle = N \langle s_i \rangle \qquad \text{where} \qquad \langle s_i \rangle = \int ds\; s\, w(s)$
is the mean displacement per step.

The dispersion is $\langle (\Delta x)^2 \rangle = \langle (x - \langle x \rangle)^2 \rangle$, with
$x - \langle x \rangle = \sum_i (s_i - \langle s_i \rangle), \qquad \Delta x = \sum_i \Delta s_i$
$\langle (\Delta x)^2 \rangle = \left\langle \sum_i \Delta s_i \sum_j \Delta s_j \right\rangle = \left\langle \sum_i (\Delta s_i)^2 + \sum_i \sum_{j \ne i} \Delta s_i\, \Delta s_j \right\rangle$

The deviation from the mean, $\Delta s_i$, is distributed symmetrically about zero, and so is the product $\Delta s_i \Delta s_j$ when one event is independent of the other (statistical independence). Hence the cross terms average to zero:
$\left\langle \sum_i \sum_{j \ne i} \Delta s_i\, \Delta s_j \right\rangle = \sum_i \sum_{j \ne i} \langle \Delta s_i \rangle \langle \Delta s_j \rangle = 0$

$\langle (\Delta x)^2 \rangle = \sum_i \langle (\Delta s_i)^2 \rangle = N \langle (\Delta s_i)^2 \rangle$
The dispersion of the displacement per step,
$\langle (\Delta s_i)^2 \rangle = \int w(s_i)\, (\Delta s_i)^2\, ds$
is the square of the width of the step distribution, so
$\langle (\Delta x)^2 \rangle = N \langle (\Delta s)^2 \rangle$

The root mean square deviation is the width of the distribution:
$\Delta^* x = \sqrt{\langle (\Delta x)^2 \rangle}$

When $\langle s_i \rangle \ne 0$, $\langle x \rangle$ increases with N:
$\langle x \rangle = N \langle s_i \rangle$

In addition, the spread of the distribution around the mean also increases with N:
$\langle (\Delta x)^2 \rangle^{1/2} = N^{1/2} \langle (\Delta s)^2 \rangle^{1/2}$

The relative variation with respect to the mean therefore decreases:
$\dfrac{\Delta^* x}{\langle x \rangle} = \dfrac{N^{1/2} \langle (\Delta s)^2 \rangle^{1/2}}{N \langle s_i \rangle} = \dfrac{1}{\sqrt{N}} \dfrac{\Delta^* s}{\langle s_i \rangle}$

so the relative width of the distribution P(x) reduces as N grows. These results can be checked numerically, as sketched below.
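A sketch verifying these relations for a continuous step distribution (here an assumed exponential w(s); any w(s) with finite moments would do):

```python
import numpy as np

rng = np.random.default_rng(5)

N, walkers = 400, 20_000
s = rng.exponential(scale=1.0, size=(walkers, N))  # steps drawn from w(s)
x = s.sum(axis=1)

s_mean, s_var = 1.0, 1.0   # <s> and <(ds)^2> of the unit exponential
print("<x>            :", x.mean(), " expected", N * s_mean)
print("<(dx)^2>       :", x.var(),  " expected", N * s_var)
print("relative width :", x.std() / x.mean(), " ~ 1/sqrt(N) =", N ** -0.5)
```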
The calculation of the probability distribution

Complete information about the random walk with differing step lengths is contained in the probability distribution of
$x = \sum_{i=1}^{N} s_i$

The probability of finding the displacement between $x$ and $x + dx$ after N steps, for a particular sequence of steps, is the product of the probabilities of each step:
between $s_1$ and $s_1 + ds_1$ it is $w(s_1)\, ds_1$;
between $s_2$ and $s_2 + ds_2$ it is $w(s_2)\, ds_2$;
between $s_3$ and $s_3 + ds_3$ it is $w(s_3)\, ds_3$;
...
between $s_N$ and $s_N + ds_N$ it is $w(s_N)\, ds_N$.

$P(x)\, dx = \int' \!\!\ldots\! \int' w(s_1)\, w(s_2)\, w(s_3) \ldots w(s_N)\; ds_1\, ds_2\, ds_3 \ldots ds_N$

subject to the constraint (indicated by the primes) that $x < \sum_{i=1}^{N} s_i < x + dx$.
Because of this constraint, which restricts the combined displacement to the window
$x < \sum_{i=1}^{N} s_i < x + dx$,
the direct evaluation of the integral is difficult.

One way to address this issue is to express the constraint in a mathematical form suited to a large number of steps, where $dx \to 0$; at a large number of steps $x = N \langle s \rangle$ and the relative width of the distribution P(x) becomes sharper.

The constraint can then be expressed as a delta function $\delta\!\left( x - \sum_{i=1}^{N} s_i \right)$, which satisfies
$\int \delta\!\left( x - \sum_{i=1}^{N} s_i \right) dx = 1$
Over the small range of integration the delta function picks out exactly the configurations satisfying the constraint; note that the delta function has the dimension of inverse length.

Substituting this into the integral,
$P(x)\, dx = \int_{-\infty}^{\infty} \!\!\ldots\! \int_{-\infty}^{\infty} w(s_1)\, w(s_2) \ldots w(s_N)\; \delta\!\left( x - \sum_{i=1}^{N} s_i \right) dx\; ds_1\, ds_2 \ldots ds_N$

Therefore, removing dx from both sides,
$P(x) = \int_{-\infty}^{\infty} \!\!\ldots\! \int_{-\infty}^{\infty} w(s_1)\, w(s_2) \ldots w(s_N)\; \delta\!\left( x - \sum_{i=1}^{N} s_i \right) ds_1\, ds_2 \ldots ds_N$

Substituting the integral representation of the delta function,
$\delta\!\left( x - \sum_{i=1}^{N} s_i \right) = \dfrac{1}{2\pi} \int dk\; e^{i k \left( \sum_{i=1}^{N} s_i - x \right)}$

$P(x) = \dfrac{1}{2\pi} \int_{-\infty}^{\infty} \!\!\ldots\! \int_{-\infty}^{\infty} w(s_1)\, w(s_2) \ldots w(s_N) \int_{-\infty}^{\infty} dk\; e^{i k \left( \sum_i s_i - x \right)}\; ds_1\, ds_2 \ldots ds_N$
Grouping each integral with its own variable,
$P(x) = \dfrac{1}{2\pi} \int_{-\infty}^{\infty} dk\; e^{-ikx} \int_{-\infty}^{\infty} ds_1\, w(s_1)\, e^{iks_1} \int_{-\infty}^{\infty} ds_2\, w(s_2)\, e^{iks_2} \ldots \int_{-\infty}^{\infty} ds_N\, w(s_N)\, e^{iks_N}$

Except for the first (the k integral), all the integrals are identical. Let
$Q(k) = \int ds\; w(s)\, e^{iks}$

$P(x) = \dfrac{1}{2\pi} \int dk\; e^{-ikx}\, Q^N(k)$

Finally we have arrived at a simple form. For a very large number of steps this can be simplified further.

Consider the behavior of $Q(k) = \int ds\, w(s)\, e^{iks}$ in $P(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} dk\, e^{-ikx}\, Q^N(k)$. When k is small the integral receives its full contribution from $w(s)$; when k is large the contributions from neighboring s oscillate and cancel each other, and are thus negligible.

To evaluate the integral, $e^{iks}$ is expanded in a Taylor series:
$Q(k) = \int_{-\infty}^{\infty} ds\; w(s) \left[ 1 + iks - \tfrac{1}{2}(ks)^2 + \ldots \right]$

With the values of the integrals substituted, using $\langle s \rangle = \int_{-\infty}^{\infty} ds\, w(s)\, s$ and $\langle s^2 \rangle = \int_{-\infty}^{\infty} ds\, w(s)\, s^2$,
$Q(k) = 1 + ik \langle s \rangle - \tfrac{1}{2} k^2 \langle s^2 \rangle + \ldots$
These moments $\langle s^n \rangle = \int ds\, w(s)\, s^n$ are finite since $w(s) \to 0$ as $|s| \to \infty$.

For further simplification take the logarithm:
$\ln Q^N(k) = N \ln \left[ 1 + ik \langle s \rangle - \tfrac{1}{2} k^2 \langle s^2 \rangle + \ldots \right]$

Using the Taylor series expansion $\ln(1+y) = y - \tfrac{1}{2} y^2 + \ldots$ for $y \ll 1$, with $y = ik \langle s \rangle - \tfrac{1}{2} k^2 \langle s^2 \rangle + \ldots$,
$\ln Q^N(k) = N \left[ i \langle s \rangle k - \tfrac{1}{2} k^2 \langle s^2 \rangle - \tfrac{1}{2} \left( i k \langle s \rangle - \tfrac{1}{2} k^2 \langle s^2 \rangle + \ldots \right)^2 \right]$

Ignoring terms beyond second order in k,
$\ln Q^N(k) = N \left[ i \langle s \rangle k - \tfrac{1}{2} k^2 \langle s^2 \rangle - \tfrac{1}{2} (i \langle s \rangle k)^2 \right] = N \left[ i \langle s \rangle k - \tfrac{1}{2} k^2 \left( \langle s^2 \rangle - \langle s \rangle^2 \right) \right] = N \left[ i \langle s \rangle k - \tfrac{1}{2} k^2 \langle \Delta s^2 \rangle \right]$

where $\langle \Delta s^2 \rangle = \langle s^2 \rangle - \langle s \rangle^2$.
$Q^N(k) = e^{\, i N \langle s \rangle k - \frac{1}{2} N k^2 \langle \Delta s^2 \rangle}$

Substituting in the probability distribution,
$P(x) = \dfrac{1}{2\pi} \int_{-\infty}^{\infty} dk\; e^{-ikx}\, Q^N(k) = \dfrac{1}{2\pi} \int_{-\infty}^{\infty} dk\; e^{\, i (N \langle s \rangle - x) k - \frac{1}{2} N k^2 \langle \Delta s^2 \rangle}$

This is a Gaussian integral in the standard form $\int_{-\infty}^{\infty} du\; e^{-a u^2 + b u} = \sqrt{\pi/a}\; e^{b^2/4a}$, with
$a = \tfrac{1}{2} N \langle \Delta s^2 \rangle, \qquad 4a = 2 N \langle \Delta s^2 \rangle = 2\sigma^2$
$b = i (N \langle s \rangle - x), \qquad b^2 = -(N \langle s \rangle - x)^2 = -(x - N \langle s \rangle)^2$

$P(x) = \dfrac{1}{\sqrt{2\pi \sigma^2}}\; e^{-\frac{(x - \mu)^2}{2\sigma^2}} \qquad \text{with} \qquad \mu = N \langle s \rangle, \quad \sigma^2 = N \langle \Delta s^2 \rangle$
We have arrived at the Gaussian distribution for more general steps of the random walk. The conditions are:

All the steps must be statistically independent.

The step distribution must satisfy $w(s) \to 0$ as $|s| \to \infty$ (so that the moments are finite).

When these conditions are satisfied, all natural distributions arising from a random walk appear as Gaussian distributions in the limit $N \to \infty$. This result is known as the central limit theorem, one of the most important results of probability theory. A minimal numerical illustration follows.
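A sketch with assumed uniform steps, a decidedly non-Gaussian w(s) (parameters arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)

# Sum N uniform steps on [-1, 1]; the sum should develop Gaussian statistics.
N, walkers = 100, 50_000
x = rng.uniform(-1, 1, size=(walkers, N)).sum(axis=1)

mu, var = 0.0, N / 3          # <s> = 0 and <(ds)^2> = 1/3 for uniform [-1, 1]
z = (x - mu) / np.sqrt(var)
# Fractions within one / two predicted standard deviations
# (a Gaussian gives 0.683 and 0.954).
print(np.mean(np.abs(z) < 1), np.mean(np.abs(z) < 2))
```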
Properties of a single continuous random variable

Let x be a continuous random variable whose values are real. Then the cumulative probability is given by
$P(x) = \mathrm{prob}(X \le x)$

The probability of all events, that is, the full range, is $P(\infty) = 1$ (with $P(-\infty) = 0$).

Let the probability density function be
$p(x) = \dfrac{dP(x)}{dx}$

It has the dimension of the inverse of the interval length in x, and is a positive function with
$\int_{-\infty}^{\infty} p(x)\, dx = 1$
Then the expectation value of any function $F(x)$ is
$\langle F(x) \rangle = \int_{-\infty}^{\infty} F(x)\, p(x)\, dx$

Moments of the PDF: the nth moment is
$m_n = \langle x^n \rangle = \int_{-\infty}^{\infty} x^n\, p(x)\, dx$

The characteristic function of $p(x)$ is given by the Fourier transform
$\tilde{p}(k) = \langle e^{ikx} \rangle = \int_{-\infty}^{\infty} p(x)\, e^{ikx}\, dx$

The relationship between $p(x)$ and $\tilde{p}(k)$ is given by the inverse FT
$p(x) = \dfrac{1}{2\pi} \int_{-\infty}^{\infty} \tilde{p}(k)\, e^{-ikx}\, dk$

The moments of the distribution are obtained from the characteristic function, since
$\tilde{p}(k) = \sum_{n=0}^{\infty} \dfrac{(ik)^n}{n!} \langle x^n \rangle \quad \Rightarrow \quad \langle x^n \rangle = (-i)^n \left. \dfrac{d^n \tilde{p}(k)}{dk^n} \right|_{k=0}$

The cumulant generating function is the logarithm of the characteristic function:
$\ln \tilde{p}(k) = \sum_{n=1}^{\infty} \dfrac{(ik)^n}{n!} \langle x^n \rangle_c$

We can get the connection between the cumulants $\langle x^n \rangle_c$ and the moments by comparing this relation with the moment expansion of $\tilde{p}(k)$, by use of the expansion $\ln(1+y) = y - \tfrac{1}{2} y^2 + \ldots$

Explicitly, the first two terms of the series are
$\langle x \rangle_c = \langle x \rangle \qquad \text{(the mean)}$
$\langle x^2 \rangle_c = \langle x^2 \rangle - \langle x \rangle^2 \qquad \text{(the variance)}$

A numerical sketch of these first two cumulants follows.
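A sketch estimating the first two cumulants from samples via the moment relations above (the choice of distribution here is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)

# Samples from an exponential distribution with scale 2:
# its first two cumulants are mean = 2 and variance = 4.
x = rng.exponential(scale=2.0, size=1_000_000)

m1 = x.mean()          # first moment <x>
m2 = (x**2).mean()     # second moment <x^2>
c1 = m1                # first cumulant  <x>_c = <x>
c2 = m2 - m1**2        # second cumulant <x^2>_c = <x^2> - <x>^2
print(c1, c2)          # ~2 and ~4
```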
Reference

F. Reif, Fundamentals of Statistical and Thermal Physics, Chapter 1.

Mehran Kardar, Statistical Physics of Particles, Chapter 2.
