
Appendix E
Statistical Derivations

Atoms, Radiation, and Radiation Protection. James E. Turner
Copyright © 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim
ISBN: 978-3-527-40606-7

Binomial Distribution

Mean

The mean value µ of the binomial distribution is defined by Eq. (11.15):

    \mu \equiv \sum_{n=0}^{N} n P_n = \sum_{n=0}^{N} n \binom{N}{n} p^n q^{N-n} .    (E.1)

To evaluate this sum, we first use the binomial expansion to write, for an arbitrary (continuous) variable x,

    (px + q)^N = \sum_{n=0}^{N} \binom{N}{n} p^n x^n q^{N-n} = \sum_{n=0}^{N} x^n P_n .    (E.2)

Differentiation with respect to x gives

    Np(px + q)^{N-1} = \sum_{n=0}^{N} n x^{n-1} P_n .    (E.3)

Letting x = 1 and remembering that p + q = 1 gives

    Np = \sum_{n=0}^{N} n P_n \equiv \mu .    (E.4)

Standard Deviation

The variance is defined by Eq. (11.17):

    \sigma^2 \equiv \sum_{n=0}^{N} (n - \mu)^2 P_n .    (E.5)

This definition implies that

    \sigma^2 = \sum_{n=0}^{N} \left( n^2 P_n - 2\mu n P_n + \mu^2 P_n \right)    (E.6)

    = \sum_{n=0}^{N} n^2 P_n - 2\mu \sum_{n=0}^{N} n P_n + \mu^2 \sum_{n=0}^{N} P_n .    (E.7)

The first summation gives the expected value of n², the square of the number of disintegrations. From Eq. (E.4) it follows that the second term is –2µ². The sum in the last term is unity [Eq. (11.14)]. Thus, we can write in place of Eq. (E.7)

    \sigma^2 = \sum_{n=0}^{N} n^2 P_n - 2\mu^2 + \mu^2 = \sum_{n=0}^{N} n^2 P_n - \mu^2 .    (E.8)

We have previously evaluated µ [Eq. (E.4)]; it remains to find the sum involving n². To this end, we differentiate both sides of Eq. (E.3) with respect to x:

    N(N-1)p^2 (px + q)^{N-2} = \sum_{n=0}^{N} n(n-1) x^{n-2} P_n .    (E.9)

Letting x = 1 with p + q = 1, as before, implies that

    N(N-1)p^2 = \sum_{n=0}^{N} n(n-1) P_n    (E.10)

    = \sum_{n=0}^{N} n^2 P_n - \sum_{n=0}^{N} n P_n = \sum_{n=0}^{N} n^2 P_n - \mu .    (E.11)

Thus,

    \sum_{n=0}^{N} n^2 P_n = N(N-1)p^2 + \mu .    (E.12)

Substituting this result into Eq. (E.8) and remembering that µ = Np, we find that

    \sigma^2 = N(N-1)p^2 + Np - N^2 p^2 = Np(1 - p) = Npq .    (E.13)

The standard deviation of the binomial distribution is therefore

    \sigma = \sqrt{Npq} .    (E.14)
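The closed forms µ = Np in Eq. (E.4) and σ² = Npq in Eq. (E.13) can be checked numerically by summing the binomial probabilities directly. A minimal sketch, not from the text, using only the Python standard library and illustrative values of N and p:

```python
from math import comb

# Illustrative binomial parameters (not from the text)
N, p = 50, 0.3
q = 1 - p

# P_n = C(N, n) p^n q^(N-n), the binomial distribution of Eq. (11.13)
P = [comb(N, n) * p**n * q**(N - n) for n in range(N + 1)]

mean = sum(n * P[n] for n in range(N + 1))               # Eq. (E.1)
var = sum((n - mean)**2 * P[n] for n in range(N + 1))    # Eq. (E.5)

print(abs(sum(P) - 1) < 1e-12)      # normalization, Eq. (11.14): True
print(abs(mean - N * p) < 1e-9)     # Eq. (E.4), mu = Np: True
print(abs(var - N * p * q) < 1e-9)  # Eq. (E.13), sigma^2 = Npq: True
```

The sums are finite here, so the agreement is exact up to floating-point rounding.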
Poisson Distribution

As stated at the beginning of Section 11.5, we consider the binomial distribution when N ≫ 1, N ≫ n, and p ≪ 1. Under these conditions, the binomial coefficient in Eq. (11.13) is approximated well by Eq. (11.29). Also, the last factor in Eq. (11.13) can be written

    q^{N-n} \cong q^N = (1 - p)^N .    (E.15)

Using the binomial expansion for the last expression, we then have

    q^{N-n} = 1 - Np + \frac{N(N-1)}{2!} p^2 - \cdots    (E.16)

    \cong 1 - Np + \frac{(Np)^2}{2!} - \cdots = e^{-Np} .    (E.17)

Substitution of Eqs. (11.29) and (E.17) into (11.13) gives

    P_n = \frac{N^n}{n!} p^n e^{-Np} = \frac{(Np)^n}{n!} e^{-Np} ,    (E.18)

which is the Poisson distribution, with parameter Np.

Normalization

The distribution (E.18) is normalized to unity when summed over all non-negative integers n:

    \sum_{n=0}^{\infty} P_n = e^{-Np} \sum_{n=0}^{\infty} \frac{(Np)^n}{n!} = e^{-Np} e^{Np} = 1 .    (E.19)

For the binomial distribution, Pn = 0 when n > N. As seen from Eq. (E.18), the terms in the Poisson distribution are never exactly zero.

Mean

The mean value of n can be found from Eq. (E.18). With some manipulation of the summing index n, we write

    \mu \equiv e^{-Np} \sum_{n=0}^{\infty} \frac{n (Np)^n}{n!} = e^{-Np} \sum_{n=1}^{\infty} \frac{n (Np)^n}{n!}    (E.20)

    = e^{-Np} \sum_{n=1}^{\infty} \frac{(Np)^n}{(n-1)!} = e^{-Np} Np \sum_{n=1}^{\infty} \frac{(Np)^{n-1}}{(n-1)!}    (E.21)

    = e^{-Np} Np \sum_{n=0}^{\infty} \frac{(Np)^n}{n!} = e^{-Np} Np\, e^{Np} = Np .    (E.22)

The mean of the Poisson distribution is thus identical to that of the binomial distribution. We write in place of Eq. (E.18) the usual form

    P_n = \frac{\mu^n e^{-\mu}}{n!} .    (E.23)

Standard Deviation

The variance of the Poisson distribution is given by Eq. (E.8), with the Pn defined by Eq. (E.23). As before, it remains to find the expected value of n². Again, manipulating the index of summation n, we write, using Eq. (E.23),

    \sum_{n=0}^{\infty} n^2 P_n \equiv e^{-\mu} \sum_{n=0}^{\infty} \frac{n^2 \mu^n}{n!} = e^{-\mu} \sum_{n=1}^{\infty} \frac{n^2 \mu^n}{n!}    (E.24)

    = e^{-\mu} \mu \sum_{n=1}^{\infty} \frac{n \mu^{n-1}}{(n-1)!} = e^{-\mu} \mu \sum_{n=0}^{\infty} \frac{(n+1)\mu^n}{n!}    (E.25)

    = e^{-\mu} \mu \left( \sum_{n=0}^{\infty} \frac{n \mu^n}{n!} + \sum_{n=0}^{\infty} \frac{\mu^n}{n!} \right) = \mu(\mu + 1) = \mu^2 + \mu .    (E.26)

Substitution of this result into Eq. (E.8) gives for the variance

    \sigma^2 = \mu^2 + \mu - \mu^2 = \mu .    (E.27)

We obtain the important result that the standard deviation of the Poisson distribution is equal to the square root of the mean:

    \sigma = \sqrt{\mu} .    (E.28)
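The results above also lend themselves to a quick numerical check: the Poisson terms of Eq. (E.23) should sum to unity, have mean and variance both equal to µ, and closely match the binomial probabilities when N ≫ 1 and p ≪ 1. A sketch with illustrative parameters, not from the text:

```python
from math import exp, factorial, comb

mu = 4.0
# Truncate the infinite sums of Eqs. (E.19), (E.22), (E.24);
# for mu = 4 the terms beyond n = 100 are negligibly small.
P = [mu**n * exp(-mu) / factorial(n) for n in range(101)]  # Eq. (E.23)

mean = sum(n * p_n for n, p_n in enumerate(P))
var = sum((n - mean)**2 * p_n for n, p_n in enumerate(P))

print(abs(sum(P) - 1) < 1e-12)  # normalization, Eq. (E.19): True
print(abs(mean - mu) < 1e-9)    # Eq. (E.22): True
print(abs(var - mu) < 1e-9)     # Eq. (E.27): True

# Poisson as the N >> 1, p << 1 limit of the binomial, Eq. (E.18)
N, p = 10000, mu / 10000
binom_5 = comb(N, 5) * p**5 * (1 - p)**(N - 5)
print(abs(binom_5 - P[5]) < 1e-3)  # True
```

The last comparison illustrates Eq. (E.18): with Np held fixed at µ, the binomial term approaches the Poisson term as N grows.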
Normal Distribution

We begin with Eq. (E.23) for the Poisson Pn and assume that µ is large. We also assume that the Pn are appreciably different from zero only over a range of values of n about the mean such that |n – µ| ≪ µ. That is, the distribution of the Pn is relatively narrow about µ; and both µ and n are large. We change variables by writing x = n – µ. Equation (E.23) can then be written

    P_x = \frac{\mu^{\mu+x} e^{-\mu}}{(\mu + x)!} = \frac{\mu^\mu \mu^x e^{-\mu}}{\mu!\,(\mu+1)(\mu+2)\cdots(\mu+x)} ,    (E.29)

with |x| ≪ µ. We can approximate the factorial term for large µ by means of the Stirling formula,

    \mu! = \sqrt{2\pi\mu}\, \mu^\mu e^{-\mu} ,    (E.30)

giving

    P_x = \frac{\mu^x}{\sqrt{2\pi\mu}\,(\mu+1)(\mu+2)\cdots(\mu+x)}    (E.31)

    = \frac{1}{\sqrt{2\pi\mu}\,\left(1 + \frac{1}{\mu}\right)\left(1 + \frac{2}{\mu}\right)\cdots\left(1 + \frac{x}{\mu}\right)} .    (E.32)

Since, for small y, e^y ≅ 1 + y, the series of factors in the denominator can be rewritten (µ is large) to give

    P_x = \frac{1}{\sqrt{2\pi\mu}\, e^{1/\mu} e^{2/\mu} \cdots e^{x/\mu}} = \frac{1}{\sqrt{2\pi\mu}}\, e^{-(1+2+\cdots+x)/\mu} .    (E.33)

The sum of the first x positive integers, as they appear in the exponent, is x(1 + x)/2 = (x² + x)/2 ≅ x²/2, where x has been neglected compared with x². Thus, we find that

    P_x = \frac{1}{\sqrt{2\pi\mu}}\, e^{-x^2/2\mu} .    (E.34)

This function, which is symmetric in x, represents an approximation to the Poisson distribution. The normal distribution is obtained when we replace the Poisson standard deviation √µ by an independent parameter σ and let x be a continuous random variable with mean value µ (not necessarily zero). We then write for the probability density in x (–∞ < x < ∞) the normal distribution

    f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(x-\mu)^2/2\sigma^2} ,    (E.35)

with σ > 0. It can be shown that this density function is normalized (i.e., its integral over all x is unity) and that its mean and standard deviation are, respectively, µ and σ. The probability that the value of x lies between x and x + dx is f(x) dx. Whereas the Poisson distribution has the single parameter µ, the normal distribution is characterized by the two independent parameters, µ and σ.

Error Propagation

We determine the standard deviation of a quantity Q(x, y) that depends on two independent, random variables x and y. A sample of N measurements of the variables yields pairs of values, xi and yi, with i = 1, 2, . . . , N. For the sample one can compute the means, x̄ and ȳ; the standard deviations, σx and σy; and the values Qi = Q(xi, yi). We assume that the scatter of the xi and yi about their means is small. We can then write a power-series expansion for the Qi about the point (x̄, ȳ), keeping only the first powers. Thus,

    Q_i = Q(x_i, y_i) \cong Q(\bar{x}, \bar{y}) + \frac{\partial Q}{\partial x}(x_i - \bar{x}) + \frac{\partial Q}{\partial y}(y_i - \bar{y}) ,    (E.36)

where the partial derivatives are evaluated at x = x̄ and y = ȳ. The mean value of the Qi is simply

    \bar{Q} \equiv \frac{1}{N} \sum_{i=1}^{N} Q_i = \frac{1}{N} \sum_{i=1}^{N} Q(\bar{x}, \bar{y}) = \frac{1}{N}\, N Q(\bar{x}, \bar{y}) = Q(\bar{x}, \bar{y}) ,    (E.37)

since the sums of the xi – x̄ and yi – ȳ over all i in Eq. (E.36) are zero, by definition of the mean values. Thus, the mean value of Q is the value of the function Q(x, y) calculated at x = x̄ and y = ȳ.

The variance of the Qi is given by

    \sigma_Q^2 = \frac{1}{N} \sum_{i=1}^{N} \left( Q_i - \bar{Q} \right)^2 .    (E.38)

Applying Eq. (E.36) with \bar{Q} = Q(\bar{x}, \bar{y}), we find that

    \sigma_Q^2 = \frac{1}{N} \sum_{i=1}^{N} \left[ \frac{\partial Q}{\partial x}(x_i - \bar{x}) + \frac{\partial Q}{\partial y}(y_i - \bar{y}) \right]^2    (E.39)

    = \left( \frac{\partial Q}{\partial x} \right)^2 \frac{1}{N} \sum_{i=1}^{N} (x_i - \bar{x})^2 + \left( \frac{\partial Q}{\partial y} \right)^2 \frac{1}{N} \sum_{i=1}^{N} (y_i - \bar{y})^2 + 2 \frac{\partial Q}{\partial x} \frac{\partial Q}{\partial y}\, \frac{1}{N} \sum_{i=1}^{N} (x_i - \bar{x})(y_i - \bar{y}) .    (E.40)

The last term, called the covariance of x and y, vanishes for large N if the values of x and y are uncorrelated. (The factors yi – ȳ and xi – x̄ are then just as likely to be positive as negative, and the covariance also decreases as 1/N.) We are left with the first two terms, involving the variances of the xi and yi:

    \sigma_Q^2 = \left( \frac{\partial Q}{\partial x} \right)^2 \sigma_x^2 + \left( \frac{\partial Q}{\partial y} \right)^2 \sigma_y^2 .    (E.41)

This is one form of the error propagation formula, which is easily generalized to a function Q of any number of independent random variables.
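The error propagation formula can be illustrated with a small Monte Carlo experiment: draw many independent pairs (xi, yi), compute Qi = Q(xi, yi), and compare the sample variance of the Qi, Eq. (E.38), with the prediction of Eq. (E.41). A sketch, not from the text, taking the hypothetical choice Q(x, y) = xy with illustrative means and standard deviations:

```python
import random

random.seed(1)
N = 200_000

# Independent variables with small scatter about their means,
# as assumed in the expansion of Eq. (E.36). Illustrative values.
xbar, sx = 10.0, 0.2
ybar, sy = 5.0, 0.1
xs = [random.gauss(xbar, sx) for _ in range(N)]
ys = [random.gauss(ybar, sy) for _ in range(N)]

# Q(x, y) = x * y (hypothetical), so at (xbar, ybar):
# dQ/dx = ybar and dQ/dy = xbar.
Q = [x * y for x, y in zip(xs, ys)]
Qbar = sum(Q) / N
sample_var = sum((q - Qbar)**2 for q in Q) / N       # Eq. (E.38)

predicted_var = (ybar * sx)**2 + (xbar * sy)**2      # Eq. (E.41)

# Agreement to within a few percent for this large N
print(abs(sample_var - predicted_var) / predicted_var < 0.05)  # True
```

The small residual discrepancy reflects both sampling fluctuation and the term σx²σy² dropped by the first-order expansion, which is negligible here because the scatter is small compared with the means.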
