
Department of Applied Mechanics

APL 103

Experimental Methods
Semester I, 2020-21

L12

Murali R Cholemari
Gaussian Distribution
Limiting case (continuous limit) of a Binomial distribution

• What happens to $Q(m; n)$ as $m$ and $n$ are increased to large values?
• $n$ is always positive, but $m$ can be either positive or negative. Hence it is not immediately clear why $\frac{n+m}{2}$ and $\frac{n-m}{2}$ should both be large.
• Assumption: $n$ is increased at a much larger rate than $m$, such that both $\frac{n+m}{2}$ and $\frac{n-m}{2}$ are large.
• This is equivalent to keeping $e$ finite for a finite error $\epsilon$ ($\epsilon = me$).
• Stirling's formula: an approximate result for the factorials of large numbers ($n$ a large number); a quick numerical check follows below:
$$\log n! \approx \tfrac{1}{2}\log(2\pi n) + n(\log n - 1)$$
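As a quick numerical check of Stirling's formula in this form, here is a minimal Python sketch (natural logarithms assumed, as everywhere in these notes):

```python
import math

def log_factorial_exact(n):
    # Exact log n! as a sum of logs (adequate for moderate n)
    return sum(math.log(k) for k in range(1, n + 1))

def log_factorial_stirling(n):
    # Stirling: log n! ~ (1/2) log(2*pi*n) + n*(log n - 1)
    return 0.5 * math.log(2 * math.pi * n) + n * (math.log(n) - 1)

for n in (10, 100, 1000):
    exact = log_factorial_exact(n)
    approx = log_factorial_stirling(n)
    print(n, exact, approx, exact - approx)  # absolute error stays small as n grows
```

The absolute error is already below 0.01 at $n = 10$ and shrinks further as $n$ increases, which is why the approximation can be used freely in the limit considered here.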

Use this result for $Q(m; n)$:
$$\log Q(m; n) = \log n! - \log\left(\frac{n+m}{2}\right)! - \log\left(\frac{n-m}{2}\right)! - n\log 2$$
Use Stirling's formula:
$$\log Q(m; n) \approx \tfrac{1}{2}\log(2\pi n) + n(\log n - 1) - \tfrac{1}{2}\log\big(\pi(n+m)\big) - \tfrac{1}{2}(n+m)\left[\log\frac{n+m}{2} - 1\right] - \tfrac{1}{2}\log\big(\pi(n-m)\big) - \tfrac{1}{2}(n-m)\left[\log\frac{n-m}{2} - 1\right] - n\log 2$$

Simplify. Since
$$\log(2\pi n) - \log\big(\pi(n+m)\big) - \log\big(\pi(n-m)\big) = \log(2\pi n) - \log(\pi n) - \log\left(1+\frac{m}{n}\right) - \log(\pi n) - \log\left(1-\frac{m}{n}\right) = \log\frac{2}{\pi n} - \log\left(1-\frac{m^2}{n^2}\right)$$
and
$$n(\log n - 1) - \tfrac{1}{2}(n+m)\left[\log\frac{n+m}{2} - 1\right] - \tfrac{1}{2}(n-m)\left[\log\frac{n-m}{2} - 1\right] - n\log 2$$
$$= n\log n - n - \tfrac{1}{2}(n+m)\left[\log n + \log\left(1+\frac{m}{n}\right) - \log 2 - 1\right] - \tfrac{1}{2}(n-m)\left[\log n + \log\left(1-\frac{m}{n}\right) - \log 2 - 1\right] - n\log 2$$
$$= -\tfrac{1}{2}\,n\log\left(1-\frac{m^2}{n^2}\right) - \tfrac{1}{2}\,m\log\frac{1+m/n}{1-m/n}$$
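A quick numerical spot check of this simplification, with arbitrarily chosen illustrative values $n = 1000$ and $m = 10$ (a sketch; the identity itself is exact):

```python
import math

n, m = 1000, 10   # arbitrary illustrative values with n >> m
lhs = (n * (math.log(n) - 1)
       - 0.5 * (n + m) * (math.log((n + m) / 2) - 1)
       - 0.5 * (n - m) * (math.log((n - m) / 2) - 1)
       - n * math.log(2))
rhs = (-0.5 * n * math.log(1 - m**2 / n**2)
       - 0.5 * m * math.log((1 + m / n) / (1 - m / n)))
print(lhs, rhs)   # the two values agree to rounding error
```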

Hence
$$\log Q(m; n) \approx \tfrac{1}{2}\log\frac{2}{\pi n} - \tfrac{1}{2}(n+1)\log\left(1-\frac{m^2}{n^2}\right) - \tfrac{1}{2}\,m\log\frac{1+m/n}{1-m/n}$$
If $n$ increases very rapidly compared to $m$, then $m/n$ will be very small compared to 1. Thus, for large $n$,
$$(n+1)\log\left(1-\frac{m^2}{n^2}\right) \cong (n+1)\left(-\frac{m^2}{n^2}\right) \cong -\frac{m^2}{n}$$
and
$$m\log\frac{1+m/n}{1-m/n} \cong m\left[\frac{m}{n} - \left(-\frac{m}{n}\right)\right] = \frac{2m^2}{n}$$
Then,
$$\log Q(m; n) \approx \tfrac{1}{2}\log\frac{2}{\pi n} - \frac{m^2}{2n}$$
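This result can be checked directly against the exact binomial term $Q(m; n) = \binom{n}{(n+m)/2}/2^n$ used at the start of the derivation. A minimal Python sketch (the values of $n$ and $m$ are arbitrary illustrations):

```python
import math

def Q_exact(m, n):
    # Exact binomial term: n + m must be even and |m| <= n
    return math.comb(n, (n + m) // 2) / 2**n

def Q_approx(m, n):
    # Large-n result derived above: Q ~ sqrt(2/(pi*n)) * exp(-m^2/(2*n))
    return math.sqrt(2 / (math.pi * n)) * math.exp(-m**2 / (2 * n))

n = 1000
for m in (0, 10, 20, 40):
    print(m, Q_exact(m, n), Q_approx(m, n))   # columns agree closely for m << n
```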
The error $\epsilon = me$, and this has to remain finite. Then
$$\log Q(m; n) \approx \tfrac{1}{2}\log\frac{2}{\pi n} - \frac{\epsilon^2}{2ne^2}$$
or
$$Q(m; n) = \sqrt{\frac{2}{\pi n}}\; e^{-\epsilon^2/2ne^2} = e\,\sqrt{\frac{2}{\pi n e^2}}\; e^{-\epsilon^2/2ne^2}$$
Define an error distribution function
$$P(\epsilon, ne^2) = \frac{1}{\sqrt{2\pi n e^2}}\; e^{-\epsilon^2/2ne^2}$$
Then
$$Q(m; n) = P(\epsilon, ne^2)\times\epsilon' \quad\text{with}\quad \epsilon' = 2e$$
Apart from $2e$, which is the increment from one possible error value to the next, $P(\epsilon, ne^2)$ is the smooth-curve approximation to the Binomial distribution.
Denote $ne^2$ by $\sigma^2$. As $e$ is to be decreased towards zero, consider it so small that many steps of $2e$ lie in the interval $(\epsilon, \epsilon + d\epsilon)$. Further, $d\epsilon$ itself can be taken small enough that $P(\epsilon, ne^2) = P(\epsilon, \sigma^2) = P(\epsilon, \sigma)$ does not change appreciably over the interval.

The sum of the binomial distribution terms $Q(m; n)$ for which the corresponding errors lie in the interval $\epsilon \le me \le \epsilon + d\epsilon$ will then be
$$\sum_m Q(m; n) = \sum_m P(\epsilon, \sigma)\,\epsilon' = P(\epsilon, \sigma)\,d\epsilon = \frac{1}{\sqrt{2\pi}\,\sigma}\; e^{-\epsilon^2/2\sigma^2}\, d\epsilon$$
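This step can also be verified numerically: sum the exact binomial terms over one small error interval and compare with $P(\epsilon, \sigma)\,d\epsilon$. The sketch below assumes a hypothetical discretization with $n = 10^6$ steps and step size $e$ chosen so that $\sigma^2 = ne^2 = 1$; the interval $(0.5, 0.6)$ is an arbitrary illustration.

```python
import math

def Q(m, n):
    # Binomial term C(n, (n+m)/2) / 2^n via log-gamma, avoiding huge integers
    k = (n + m) // 2
    log_q = (math.lgamma(n + 1) - math.lgamma(k + 1)
             - math.lgamma(n - k + 1) - n * math.log(2))
    return math.exp(log_q)

n = 1_000_000
e = 1 / math.sqrt(n)                 # step size chosen so sigma^2 = n e^2 = 1
sigma = math.sqrt(n) * e

eps, d_eps = 0.5, 0.1                # error interval (eps, eps + d_eps), arbitrary
lhs = sum(Q(m, n) for m in range(0, n + 1, 2)   # m must have the same parity as n
          if eps <= m * e < eps + d_eps)
rhs = math.exp(-eps**2 / (2 * sigma**2)) / (math.sqrt(2 * math.pi) * sigma) * d_eps
print(lhs, rhs)                      # agree to within a few percent
```

The small residual difference comes from $P(\epsilon, \sigma)$ varying slightly across the finite interval $d\epsilon$, exactly the effect the argument above asks to be negligible.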

This is the relative distribution, i.e. the probability of measuring an error in the interval $(\epsilon, \epsilon + d\epsilon)$. Writing it in terms of the true value $X$ and the measured value $x$:
$$P(x; X, \sigma)\,dx = \frac{1}{\sqrt{2\pi}\,\sigma}\; e^{-(x-X)^2/2\sigma^2}\, dx$$
This is the Gaussian or Normal distribution.
• Determined entirely by 𝑋 and 𝜎
• Symmetrical about 𝑋 and peaks at 𝑋
• Width of the peak increases with 𝜎 (but the value at the peak decreases)
• Change variables to $y = (x - X)/\sigma$ so that $dy = dx/\sigma$. This results in the standard form (illustrated numerically below):
$$\phi(y)\,dy = \frac{1}{\sqrt{2\pi}}\; e^{-y^2/2}\, dy$$
$y$ is called the standard normal variate.
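A minimal sketch of this change of variables in Python, assuming illustrative values $X = 10.0$ and $\sigma = 0.2$ (not from the notes): it checks that $P(x; X, \sigma)$ equals $\phi(y)/\sigma$ with $y = (x - X)/\sigma$.

```python
import math

def gauss_pdf(x, X, sigma):
    # Gaussian density P(x; X, sigma) derived above
    return math.exp(-(x - X)**2 / (2 * sigma**2)) / (math.sqrt(2 * math.pi) * sigma)

def phi(y):
    # Standard normal density phi(y)
    return math.exp(-y**2 / 2) / math.sqrt(2 * math.pi)

X, sigma = 10.0, 0.2                 # illustrative (hypothetical) true value and spread
for x in (9.6, 10.0, 10.3):
    y = (x - X) / sigma
    print(x, gauss_pdf(x, X, sigma), phi(y) / sigma)   # the two columns match
```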

The way the distribution is defined ensures that it is normalised:
$$\int_{-\infty}^{\infty} P(x; X, \sigma)\,dx = \int_{-\infty}^{\infty} P(\epsilon, \sigma)\,d\epsilon = \int_{-\infty}^{\infty} \phi(y)\,dy = 1$$
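The normalisation can be confirmed numerically; here is a small sketch using a trapezoidal sum over $(-10, 10)$, beyond which the tails are negligible:

```python
import math

def phi(y):
    # Standard normal density
    return math.exp(-y**2 / 2) / math.sqrt(2 * math.pi)

a, b, N = -10.0, 10.0, 20000         # integration range and number of panels
h = (b - a) / N
total = 0.5 * (phi(a) + phi(b)) + sum(phi(a + i * h) for i in range(1, N))
print(total * h)                     # ~1.0, confirming the normalisation
```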
Properties of the Gaussian Distribution

[Figure: plot of the standard normal density $\phi(y)$]

• Mean, median and mode at X


• Variance and std. deviation


$$\sigma^2(y) = \int_{-\infty}^{\infty} y^2\,\phi(y)\,dy = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} y^2\, e^{-y^2/2}\, dy$$
$$= \frac{1}{\sqrt{2\pi}}\left[\sqrt{\frac{\pi}{2}}\,\mathrm{erf}\!\left(\frac{y}{\sqrt{2}}\right) - y\,e^{-y^2/2}\right]_{-\infty}^{\infty}$$
Since $\mathrm{erf}(\pm\infty) = \pm 1$ and $y\,e^{-y^2/2} \to 0$ as $y \to \pm\infty$,
$$\sigma^2(y) = \frac{1}{\sqrt{2\pi}}\,\sqrt{2\pi} = 1$$
Thus $\sigma(y) = 1$, and hence $\sigma(x) = \sigma$.
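The antiderivative quoted above can be checked numerically; evaluating it at large $\pm y$ should reproduce $\sigma^2(y) = 1$. A sketch, with $y = \pm 10$ standing in for $\pm\infty$:

```python
import math

def antiderivative(y):
    # (1/sqrt(2*pi)) * ( sqrt(pi/2) * erf(y/sqrt(2)) - y * exp(-y**2/2) ),
    # whose derivative is y**2 * phi(y)
    return (math.sqrt(math.pi / 2) * math.erf(y / math.sqrt(2))
            - y * math.exp(-y**2 / 2)) / math.sqrt(2 * math.pi)

print(antiderivative(10.0) - antiderivative(-10.0))   # ~1.0, so sigma(y) = 1
```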
The values of $\phi(y)$ are tabulated.
• As $x$ changes from $X$ to $X \pm \sigma$, the value of the distribution function drops to $e^{-1/2} = 0.607$ of its peak value.
• The probability that the measurement will lie somewhere between these limits (reproduced numerically below) is
$$\int_{-\sigma}^{\sigma} P(\epsilon, \sigma)\,d\epsilon = \int_{-1}^{1} \phi(y)\,dy = 0.683$$
Similarly,
$$\int_{-3\sigma}^{3\sigma} P(\epsilon, \sigma)\,d\epsilon = \int_{-3}^{3} \phi(y)\,dy = 0.997$$
• The probability of obtaining an error of magnitude less than $\epsilon$ is
$$2\int_{0}^{\epsilon} P(\epsilon, \sigma)\,d\epsilon = 2\int_{0}^{\epsilon/\sigma} \phi(y')\,dy'$$
Let
$$\varphi(y) = 2\int_{0}^{y} \phi(y')\,dy' = \sqrt{\frac{2}{\pi}}\int_{0}^{y} e^{-y'^2/2}\,dy', \qquad y = \epsilon/\sigma$$
Values of $\varphi(y)$ are tabulated.
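In terms of the error function, $\varphi(y) = \mathrm{erf}(y/\sqrt{2})$, so the tabulated values, and the 0.683 and 0.997 quoted above, can be reproduced directly. A minimal Python sketch:

```python
import math

def varphi(y):
    # varphi(y) = 2 * integral_0^y phi(y') dy' = erf(y / sqrt(2))
    return math.erf(y / math.sqrt(2))

# A few tabulated-style values; varphi(1) and varphi(3) reproduce the
# 1-sigma and 3-sigma probabilities quoted above.
for y in (0.5, 1.0, 1.5, 2.0, 3.0):
    print(y, round(varphi(y), 4))    # 0.3829, 0.6827, 0.8664, 0.9545, 0.9973
```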

The probability that the error lies between $x_1$ and $x_2$, or equivalently between $y_1$ and $y_2$ with $y_i = (x_i - X)/\sigma$, is
$$\int_{y_1}^{y_2} \phi(y)\,dy = \int_{0}^{y_2} \phi(y)\,dy - \int_{0}^{y_1} \phi(y)\,dy = \tfrac{1}{2}\big[\varphi(y_2) - \varphi(y_1)\big]$$
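A minimal sketch of this rule, again with the illustrative values $X = 10.0$ and $\sigma = 0.2$; the $1\sigma$ and $3\sigma$ intervals reproduce the 0.683 and 0.997 quoted earlier.

```python
import math

def varphi(y):
    # varphi(y) = erf(y / sqrt(2)), as above
    return math.erf(y / math.sqrt(2))

def prob_between(x1, x2, X, sigma):
    # P(x1 <= x <= x2) = (1/2) * (varphi(y2) - varphi(y1)), y_i = (x_i - X)/sigma
    y1, y2 = (x1 - X) / sigma, (x2 - X) / sigma
    return 0.5 * (varphi(y2) - varphi(y1))

X, sigma = 10.0, 0.2                                        # illustrative values
print(prob_between(X - sigma, X + sigma, X, sigma))         # ~0.683
print(prob_between(X - 3 * sigma, X + 3 * sigma, X, sigma)) # ~0.997
```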
