

3 Counting Statistics and Error Prediction


Radiation measurements:
Any measurement of the radiation emitted from a radioactive material has to take statistical fluctuations into account, because the decay of radioactive material is a random process.

The term Counting Statistics covers:
1- The statistical analysis needed to obtain measurement results from a nuclear counting experiment.
2- A framework for predicting the expected quantities resulting from such measurements.

Effects of statistical fluctuations:
1- Repeated measurements will not be identical and will display some internal variation. This fluctuation has to be quantified and compared with the predictions of statistical models.
2- For a single measurement, we need to determine the statistical uncertainty and estimate the accuracy of that single measurement.

Characterization of data
First we assume that we have a collection of N integer-valued data points $x_1, x_2, \ldots, x_N$. These values represent N successive readings from a counter used in the radiation area.

Sum:

$$\Sigma = \sum_{i=1}^{N} x_i$$

Experimental mean:

$$\overline{x}_e = \frac{\Sigma}{N} = \frac{1}{N}\sum_{i=1}^{N} x_i$$

The data can be represented by a frequency distribution function F(x):

$$F(x) = \frac{\text{number of occurrences of the value } x}{N}$$

The above distribution is normalized:

$$\sum_{x=0}^{\infty} F(x) = 1$$


Table 3.1 Example of data distribution function ( Knoll,2010 page 66)

Fig.3.1 Distribution function for the data given in table 3.1 ( Knoll,2010 page 66)

Table 3.1 shows a dataset with 20 entries; the maximum value is 14 and the minimum value is 3. Fig. 3.1 shows the corresponding data distribution function:
- The horizontal bars represent the 20 entries of the dataset from Table 3.1.
- The experimental mean value $\overline{x}_e = 8.8$ lies near the center of F(x).
- The fluctuation in the data can be understood by looking at the data distribution function.

The data distribution function can be used to calculate the experimental mean:

$$\overline{x}_e = \sum_{x=0}^{\infty} x\, F(x)$$

Sample variance:
The sample variance quantifies the amount of fluctuation in the data. To determine it, the first step is to find the residual of each data point:

$$d_i = x_i - \overline{x}_e \qquad (\text{amount} - \text{experimental mean})$$

The residuals sum to zero by construction:

$$\sum_{i=1}^{N} d_i = 0$$
From Fig. 3.2 it can be concluded that:
- The data have positive and negative residuals that balance, as shown in part (b).
- The squares of the residuals are always positive, as can be clearly seen in part (c).

Fig. 3.2 (a) Plot of the data given in Table 3.1; the corresponding values of the residual d_i are shown in parts (b) and (c). (Knoll 2010, p. 68)

Deviation:
The deviation of a data point can be defined as the amount by which it differs from the true mean value $\overline{x}$:

$$\epsilon_i = x_i - \overline{x}$$

Sample variance:
- It can be defined as a measure of the internal scatter in the given data.
- It can be calculated as the average of the squared deviations:

$$s^2 = \overline{\epsilon^2} = \frac{1}{N}\sum_{i=1}^{N}(x_i - \overline{x})^2$$

This form is difficult to apply, because the true mean $\overline{x}$ is only obtained after a very large number of readings. The easier approach is to use the experimental mean $\overline{x}_e$, so that each deviation becomes a residual. Using the experimental mean:

$$s^2 = \frac{1}{N-1}\sum_{i=1}^{N}(x_i - \overline{x}_e)^2$$

Note: the distinction matters for small N; if N is very large, the mean square residual and the mean square deviation give essentially the same result.

Fig. 3.3 Distribution functions for two sets of data with differing amounts of internal fluctuation. (Knoll 2010, p. 68)

For a narrow distribution: small deviations from the mean, hence a small sample variance.

For a wide distribution: large deviations from the mean, hence a large sample variance.

Note: Even if we increase the dataset from 20 to 40 values, the sample variance will remain roughly the same, since it is an absolute measure of the internal scatter in the data.

Another way to calculate the sample variance uses the data distribution function:

$$s^2 = \sum_{x=0}^{\infty}(x - \overline{x}_e)^2\, F(x) \qquad (a)$$

$$s^2 = \overline{x^2} - (\overline{x}_e)^2 \qquad (b)$$

Since expression (a) is awkward to apply in direct computation, we can use the equivalent expression in equation (b).

We conclude that any dataset can be described by its data distribution function F(x), and that the most important parameters derived from this distribution are the experimental mean and the sample variance.
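As a sketch of these definitions, the short script below computes the experimental mean, the frequency distribution F(x), and the sample variance via both the (a) and (b) forms. The dataset is hypothetical (the 20 values of Table 3.1 are not reproduced here).

```python
# Illustrative sketch only: the data below are hypothetical counter
# readings, not the 20 entries of Knoll's Table 3.1.
from collections import Counter

data = [8, 5, 12, 9, 7, 10, 8, 6, 11, 9]
N = len(data)

mean = sum(data) / N                         # experimental mean

# Frequency distribution F(x) = (occurrences of x) / N
F = {x: n / N for x, n in Counter(data).items()}

# Sample variance via (a): sum of (x - mean)^2 F(x)
s2_a = sum((x - mean) ** 2 * f for x, f in F.items())

# Sample variance via (b): mean of squares minus square of mean
s2_b = sum(x * x * f for x, f in F.items()) - mean ** 2
```

Note that the F(x)-based forms divide by N; for small N they differ slightly from the 1/(N-1) definition built on the experimental mean.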

Statistical Models
Three specific statistical models:
- The Binomial Distribution
- The Poisson Distribution
- The Gaussian (Normal) Distribution

Binomial Distribution
The most general of the statistical models, but rarely used directly in nuclear applications.

The probability of counting exactly x successes can be calculated as follows:

$$p(x) = \frac{n!}{(n-x)!\,x!}\; p^x (1-p)^{n-x}$$

where n and x are integers, n is the number of trials, and p is the probability of success in each trial (p does not change from trial to trial).

Usage example of the Binomial Distribution

Suppose that we have a basket with 5 discs numbered 1 to 5. If the probability of selecting any disc is the same, then the probability that the next disc chosen has the number 2, 3, or 4 on it is:

$$p = \frac{3}{5} = 0.60$$

If we conduct a total of n = 10 trials, the probability that exactly x of them will be successful (for x between 0 and 10) can be read from Table 3.2, which shows the predicted probabilities for n = 10 trials. The plot of the binomial distribution is shown in Fig. 3.4; the most probable number of successes is 7.

Fig. 3.4 A plot of the binomial distribution for p = 2/3 and n = 10 (Knoll 2010, p. 72)

Table 3.2 Values of the binomial distribution for the parameters p = 4/6 = 2/3 and n = 10 (Knoll 2010, p. 71)

Properties of a Binomial Distribution

The distribution is normalized:

$$\sum_{x=0}^{n} p(x) = 1$$

The average (mean) value of the distribution follows from the definition of the binomial distribution:

$$\overline{x} = \sum_{x=0}^{n} x\, p(x) = pn$$

where p is the probability of success and n is the number of trials.

Continue : Binomial Distribution

Predicted variance
- This is the scatter about the mean predicted by the statistical model P(x):

$$\sigma^2 = \sum_{x=0}^{n}(x - \overline{x})^2\, P(x)$$

- The predicted amount of fluctuation:

$$\sigma^2 = np(1-p) = \overline{x}(1-p), \qquad \text{where } \overline{x} = np$$

$$\sigma = \sqrt{\overline{x}(1-p)}$$

Example: suppose n = 10 and p = 0.60. The predicted mean is $\overline{x} = np = 6$, the predicted variance is $\sigma^2 = 6 \times 0.4 = 2.4$, and the predicted standard deviation is $\sigma = \sqrt{2.4} = 1.549$.
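The binomial numbers in this example can be checked directly. The sketch below uses only the standard library; `math.comb` supplies the factor n!/((n-x)! x!).

```python
import math

def binomial_pmf(x, n, p):
    # p(x) = n! / ((n-x)! x!) * p^x * (1-p)^(n-x)
    return math.comb(n, x) * p**x * (1 - p) ** (n - x)

n, p = 10, 0.60
pmf = [binomial_pmf(x, n, p) for x in range(n + 1)]

norm = sum(pmf)                                            # normalization: 1
mean = sum(x * P for x, P in enumerate(pmf))               # pn = 6
var = sum((x - mean) ** 2 * P for x, P in enumerate(pmf))  # np(1-p) = 2.4
sigma = math.sqrt(var)                                     # about 1.549
```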

Poisson Distribution
General :
Similar to the binomial distribution but mathematically simpler. For a small and constant success probability p, the binomial distribution reduces to the Poisson distribution.

Uses :
For nuclear counting experiments in which the number of nuclei is large and the observation time is small compared to the half-life.

Expression:

$$P(x) = \frac{(pn)^x\, e^{-pn}}{x!}$$

We saw that the binomial distribution requires two parameters, n and p. In contrast, the Poisson distribution requires only one parameter, $\overline{x} = pn$.

The Properties of the Poisson Distribution

The distribution is normalized:

$$\sum_{x=0}^{\infty} P(x) = 1$$

The mean value of the distribution is $\overline{x} = pn$.

The predicted variance:

$$\sigma^2 = \sum_{x=0}^{\infty}(x-\overline{x})^2\, P(x) = pn = \overline{x}$$

The predicted standard deviation: $\sigma = \sqrt{\overline{x}}$.

Example: if the probability p << 1, we should switch to the Poisson distribution. For instance, in the case of annual vaccinations, suppose a sample group of 1,000 people, i.e. n = 1000 trials, and the probability of any one person taking the vaccination today is p = 1/365 = 0.00274. In this case we use the Poisson distribution with $\overline{x} = pn = 2.74$.


Fig.3.5 The Poisson distribution for a mean value =2.74 (Knoll 2010, p 75).

The Poisson expression

$$P(x) = \frac{(\overline{x})^x\, e^{-\overline{x}}}{x!}$$

is used with the mean value $\overline{x} = 2.74$, giving:

x    | 0     | 1     | 2     | ...
P(x) | 0.065 | 0.177 | 0.242 | ...
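The tabulated values can be reproduced with a few standard-library lines; this is only a sketch of the Poisson formula above.

```python
import math

def poisson_pmf(x, mean):
    # P(x) = mean^x * exp(-mean) / x!
    return mean**x * math.exp(-mean) / math.factorial(x)

m = 2.74
table = [poisson_pmf(x, m) for x in range(3)]   # P(0), P(1), P(2)
# table rounds to [0.065, 0.177, 0.242], matching the values above
```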

Gaussian or Normal Distribution

General :
This type of distribution is used when the average number of successes is large ($\overline{x} > 25$), i.e. when the mean value is large.

Expression:

$$P(x) = \frac{1}{\sqrt{2\pi\overline{x}}}\, \exp\!\left(-\frac{(x-\overline{x})^2}{2\overline{x}}\right)$$

where x is an integer.

The Properties of the Gaussian or Normal Distribution



The distribution is normalized:

$$\sum_{x=0}^{\infty} P(x) = 1$$

Mean value: $\overline{x} = pn$. Predicted variance: $\sigma^2 = \overline{x}$. Only one parameter is needed, $\overline{x} = np$.

Example: in the case of annual vaccinations, suppose a sample group of 10,000 people, i.e. n = 10,000 trials, and the probability of any one person taking the vaccination today is p = 1/365 = 0.00274. Since $\overline{x} = np = 27.4$ is a large number, we change to the Gaussian distribution.

Note: the Gaussian and Poisson distributions are both good for estimating: 1- statistical properties; 2- uncertainty.

Continue : The example for Gaussian Distribution

To calculate the predicted probability:

$$P(x) = \frac{1}{\sqrt{2\pi \times 27.4}}\, \exp\!\left(-\frac{(x-27.4)^2}{2 \times 27.4}\right)$$

The predicted standard deviation: $\sigma = \sqrt{\overline{x}} = \sqrt{27.4} = 5.23$.

From Fig. 3.6(a) we can see that the distribution is symmetric about the mean value, so P(x) depends only on the absolute value of the deviation of any value x from the mean, $\epsilon = |x - \overline{x}|$.

Because the mean value is large, P(x) varies slowly from point to point.

From Fig. 3.6(b) it can be seen that the distribution is replotted against the deviation, shifting the origin from the mean value to zero deviation; in this form the Gaussian is treated as continuous.

Fig. 3.6 (a) Discrete Gaussian distribution for mean value $\overline{x}$ = 27.4. (b) Plot of the corresponding continuous form of the Gaussian. (Knoll 2010, p. 77)
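To illustrate how well the discrete Gaussian of Fig. 3.6 tracks the underlying Poisson distribution at this mean value, here is a small comparison sketch (standard library only; the evaluation points are illustrative).

```python
import math

m = 27.4
sigma = math.sqrt(m)    # 5.23, as quoted above

def gaussian(x, m):
    # P(x) = exp(-(x - m)^2 / (2m)) / sqrt(2*pi*m)
    return math.exp(-((x - m) ** 2) / (2 * m)) / math.sqrt(2 * math.pi * m)

def poisson(x, m):
    return m**x * math.exp(-m) / math.factorial(x)

# Sampled at integers, the Gaussian still sums to ~1 ...
norm = sum(gaussian(x, m) for x in range(200))
# ... and near the mean it closely matches the exact Poisson value
diff = abs(gaussian(27, m) - poisson(27, m))
```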

Continue : The example for Gaussian Distribution

The distribution is an explicit function of the deviation and is a continuous function (rather than a discrete one). From Fig. 3.7 we can see that the area under the curve between two limits represents the probability of observing a deviation within those limits:

$$G(\epsilon)\,d\epsilon = \frac{1}{\sqrt{2\pi\sigma^2}}\, \exp\!\left(-\frac{\epsilon^2}{2\sigma^2}\right) d\epsilon$$

where $G(\epsilon)\,d\epsilon$ is the differential probability of observing a deviation in $d\epsilon$ about $\epsilon$.

Fig. 3.7 Comparison of the discrete and continuous forms of the Gaussian distribution.

Continue : The example for Gaussian Distribution

The standard deviation of a Gaussian distribution is $\sigma = \sqrt{\overline{x}}$. A new variable t is introduced as the ratio

$$t = \frac{\epsilon}{\sigma} = \frac{x - \overline{x}}{\sigma}$$

so that the Gaussian distribution can be rewritten with respect to this ratio:

$$G(t)\,dt = G(\epsilon)\,d\epsilon, \qquad G(t) = \sigma\, G(\epsilon) = \frac{1}{\sqrt{2\pi}}\, e^{-t^2/2}$$

This is a universal form of the Gaussian distribution, independent of $\sigma$.

Applications of the Statistical Models

Main applications of statistical models in nuclear measurements:
1- Use statistical analysis to check that the internal fluctuation observed in a set of measurements is consistent with statistical predictions.
2- Estimate the uncertainty to associate with a single measurement, i.e. predict the effect of statistical fluctuation when the measurement cannot be repeated.

Application A: Check the Counting System to See whether Observed Fluctuations are consistent with expected statistical fluctuation.
Counting laboratories are subjected to quality-control standards on an ongoing basis, typically once a month. In this process, a record of 20-50 readings from the detector is taken using the standard procedure, and any abnormal amount of fluctuation can be spotted, which helps to uncover defects in the counting system.

From Fig. 3.9 we can note:
- On the experimental side (left), we have N independent measurements of the same physical quantity.
- $\overline{x}_e$ is the value at the center of the distribution.
- $s^2$ (the sample variance) gives us the amount of fluctuation.
- On the model side (right), setting the model mean $\overline{x}$ equal to $\overline{x}_e$ fully specifies the statistical model P(x), which represents a Poisson or Gaussian distribution.
- The data distribution function F(x) should then be well approximated by P(x): when both are drawn, they should agree in both shape and amplitude.

Fig. 3.9 An illustration of Application A of counting statistics: the inspection of a set of data for consistency with a statistical model. (Knoll 2010, p. 80)

To compare the two functions we first use one parameter, the mean value. Since this parameter is made equal for the two functions by construction, we need to look for another parameter. This is done by using the variance: we take the value of the predicted variance and compare it with the sample variance. The two should be about the same if the internal fluctuation is statistically normal.

Fig. 3.10 A direct comparison of the experimental data from Table 3.1 with the predictions of a statistical model (a Poisson distribution with mean value equal to 8.8). (Knoll 2010, p. 81)

For the data given in Table 3.1, Fig. 3.10 shows the data distribution function. The experimental mean is 8.8, which is not very large, so we use the Poisson rather than the Gaussian distribution (the Poisson applies directly to discrete x values). The sample variance is $s^2 = 7.36$, while the predicted variance is $\sigma^2 = \overline{x} = 8.8$ (because the model is Poisson). From these values we conclude that the data show somewhat less fluctuation than predicted. Since we only have limited data and these values are close to each other, we apply a sharper test, the chi-squared test. The chi-squared statistic is closely related to the sample variance:

$$\chi^2 = \frac{1}{\overline{x}_e}\sum_{i=1}^{N}(x_i - \overline{x}_e)^2 = \frac{(N-1)\,s^2}{\overline{x}_e}$$

The first form uses the mean value directly; the second uses the sample variance. If the fluctuation is closely modeled by the Poisson distribution, then $s^2 \approx \sigma^2 = \overline{x}_e$, and therefore $\chi^2 \approx N-1$.

Table 3.3 Portion of a chi-squared distribution table (Knoll 2010, p. 82).

In the table above, p represents the probability that a random sample from a true Poisson distribution would have a larger value of $\chi^2$ than the specific value in Table 3.3. A very low probability (p < 0.02) indicates abnormally large fluctuations, while a very high probability (p > 0.98) indicates abnormally small fluctuations.

Fig. 3.11 A plot of the chi-squared distribution. For each curve, p gives the probability that a random sample of N numbers from a true Poisson distribution would have a larger value of $\chi^2$ than that of the ordinate. For data for which the experimental mean is used to calculate $\chi^2$, the number of degrees of freedom is $\nu = N-1$. (Knoll 2010, p. 82)
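A minimal sketch of this consistency check: the readings below are hypothetical stand-ins for a run of quality-control counts, and $\chi^2 = \sum (x_i - \overline{x}_e)^2 / \overline{x}_e$ is compared against the value of about N - 1 expected for Poisson-like fluctuation.

```python
# Hypothetical repeated counts (not the Table 3.1 data)
data = [12, 6, 9, 11, 7, 10, 8, 13, 5, 9]
N = len(data)

mean = sum(data) / N
s2 = sum((x - mean) ** 2 for x in data) / (N - 1)   # sample variance

chi2 = (N - 1) * s2 / mean        # equivalently sum((x - mean)^2) / mean
dof = N - 1                       # degrees of freedom, nu = N - 1
```

The resulting chi2 would then be referred to a chi-squared table (such as Table 3.3) at $\nu = N-1$ degrees of freedom to read off the probability p.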

Application B: Estimation of the precision of a single measurement.

In this case we need to attach a degree of uncertainty to a single reading or measurement, i.e. to estimate the sample variance that would be expected if the measurement were repeated many times. Since we have only one measurement, we estimate the sample variance by analogy with an appropriate statistical model.

From Fig. 3.13: the left-hand side represents the experimental data while the right-hand side represents the statistical model. Since we have a single measurement x, we assume that $\overline{x} = x$. The expected sample variance is then

$$s^2 \approx \sigma^2 = \overline{x} \approx x$$

provided the model is Poisson or Gaussian (with only one measurement, this identification is all we can do). Hence

$$\sigma = \sqrt{x}$$

If x = 100 then $\sigma = 10$; since the mean is large, the normal distribution applies.

Fig. 3.13 A graphical display of error bars associated with experimental data. (Knoll 2010, p. 85)

Table 3.4 Example of error intervals for a single measurement x = 100

From the table above, to state the uncertainty of the single measurement we quote $x \pm \sigma = 100 \pm 10$, an interval that contains the true mean with probability 68%. To reach 99% we use $x \pm 2.58\sigma = 100 \pm 25.8$.


The fractional standard deviation of a simple counting measurement is $\sigma/x = 1/\sqrt{x}$. If we record 100 counts, the fractional standard deviation is 10%; we can reduce it to 1% by raising the counts to 10,000. All of the above holds for the general case; for nuclear counting, $\sigma = \sqrt{x}$ applies only to a directly measured number of counts. We cannot associate the standard deviation with the square root of any quantity that is not a directly measured number of counts. In particular, the association does not apply to:
1- Counting rates.
2- Sums or differences of counts.
3- Averages of independent counts.
4- Any derived quantity.
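These single-measurement rules can be sketched numerically; the block below just restates the arithmetic for x = 100 counts.

```python
import math

x = 100                       # a single recorded count
sigma = math.sqrt(x)          # sigma = sqrt(x) = 10 for Poisson/Gaussian counts
frac = sigma / x              # fractional standard deviation: 1/sqrt(x) = 10%

# Raising the number of counts to 10,000 reduces it to 1%
frac_10000 = 1 / math.sqrt(10_000)

low_68, high_68 = x - sigma, x + sigma                 # ~68% interval
low_99, high_99 = x - 2.58 * sigma, x + 2.58 * sigma   # ~99% interval
```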

Error Propagation
Assume the measurements follow a Gaussian distribution of the expected raw counts, for which only one parameter is needed. Now suppose the counts were recorded with a background-free detector system that registers only half of the quanta emitted by the source, i.e. the detector efficiency is 50%. To obtain the correct number of quanta leaving the source, each count must be multiplied by 2; repeating this for every measurement produces a new distribution. The new distribution still has a Gaussian shape, but its standard deviation can no longer be obtained as the square root of its mean value, so two parameters are now required: the mean value and the standard deviation. The continuous Gaussian distribution is:

$$G(x)\,dx = \frac{1}{\sigma\sqrt{2\pi}}\, \exp\!\left(-\frac{(x-\overline{x})^2}{2\sigma^2}\right) dx$$

which gives the probability of observing a value between x and x + dx. In terms of $t = (x - \overline{x})/\sigma$:

$$G(t)\,dt = \frac{1}{\sqrt{2\pi}}\, e^{-t^2/2}\, dt$$


Error propagation:
Suppose we have directly measured counts x, y, z, ... with uncertainties $\sigma_x, \sigma_y, \sigma_z$, and a derived quantity u = u(x, y, z, ...). Then

$$\sigma_u^2 = \left(\frac{\partial u}{\partial x}\right)^2 \sigma_x^2 + \left(\frac{\partial u}{\partial y}\right)^2 \sigma_y^2 + \left(\frac{\partial u}{\partial z}\right)^2 \sigma_z^2 + \cdots$$

This equation is known as the error propagation formula; it holds when x, y, z, ... are independent, and it can be used throughout nuclear measurements.
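A generic sketch of the error propagation formula, with the partial derivatives estimated numerically by central differences rather than analytically; `propagate` and the test function are illustrative names, not anything from the text.

```python
import math

def propagate(u, values, sigmas, h=1e-6):
    """sigma_u = sqrt( sum_i (du/dx_i)^2 * sigma_i^2 ) for independent x_i."""
    var = 0.0
    for i, s in enumerate(sigmas):
        hi = list(values); hi[i] += h
        lo = list(values); lo[i] -= h
        dudx = (u(*hi) - u(*lo)) / (2 * h)   # central-difference derivative
        var += dudx**2 * s**2
    return math.sqrt(var)

# Check against the exact rule for a difference: sigma_u^2 = sigma_x^2 + sigma_y^2
sigma_u = propagate(lambda x, y: x - y, [1000.0, 500.0],
                    [math.sqrt(1000), math.sqrt(500)])
```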
The use of this equation will be made clear by applying it in some simple cases.

Case #1: Sums or differences of counts.

$$u = x + y \quad \text{or} \quad u = x - y$$

By using the error propagation formula:

$$\sigma_u^2 = (1)^2 \sigma_x^2 + (\pm 1)^2 \sigma_y^2 = \sigma_x^2 + \sigma_y^2$$

$$\sigma_u = \sqrt{\sigma_x^2 + \sigma_y^2}$$

This case applies when we have a count from a radioactive source and need to correct it by subtracting the background count: net counts = total counts - background counts, i.e. u = x - y.

Example:
If the total counts are x = 1000 and the background counts are y = 500, the net count is u = 500.

$$\sigma_x = \sqrt{1000} = 31.623, \qquad \sigma_y = \sqrt{500} = 22.361$$

$$\sigma_u = \sqrt{\sigma_x^2 + \sigma_y^2} = \sqrt{1000 + 500} = 38.730$$

=> net counts = 500 $\pm$ 38.73
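The background-subtraction example works out as in the sketch below (standard library only).

```python
import math

total, background = 1000, 500
net = total - background                      # u = x - y = 500

sigma_total = math.sqrt(total)                # ~31.62
sigma_background = math.sqrt(background)      # ~22.36
sigma_net = math.sqrt(sigma_total**2 + sigma_background**2)  # sqrt(1500) ~ 38.73
```

Note that although the net count is only 500, its standard deviation (38.7) is larger than that of a bare 500-count measurement (22.4): subtracting a background always worsens the relative precision.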

Case #2: Multiplication or division by a constant.

$$u = A x \qquad \Rightarrow \qquad \sigma_u = A\,\sigma_x$$

(the constant A carries no uncertainty of its own). Similarly, for v = x/B:

$$\sigma_v = \frac{\sigma_x}{B}$$

In both cases the fractional errors are unchanged:

$$\frac{\sigma_u}{u} = \frac{\sigma_v}{v} = \frac{\sigma_x}{x}$$

From this we conclude that multiplying or dividing a value by a constant does not change its relative error.

Example: if x counts are recorded over a time t, the counting rate is r = x/t, where t is measured with negligible uncertainty and treated as a constant. For x = 1120 counts and t = 5 s:

$$r = \frac{1120}{5} = 224\ \mathrm{s^{-1}}, \qquad \sigma_r = \frac{\sqrt{1120}}{5} = 6.7\ \mathrm{s^{-1}}$$

Counting rate = 224 $\pm$ 6.7 counts/second.
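The counting-rate arithmetic can be sketched as follows, assuming x = 1120 counts (the value consistent with the quoted rate of 224 per second) and a counting time with negligible uncertainty.

```python
import math

x = 1120       # recorded counts (assumed value matching the quoted rate)
t = 5.0        # counting time in seconds, treated as an exact constant

r = x / t                      # counting rate: 224 per second
sigma_r = math.sqrt(x) / t     # dividing by a constant scales the error too
```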

Case #3: Multiplication or division of counts.

$$u = xy \quad \text{or} \quad u = \frac{x}{y}$$

For u = xy, the error propagation formula gives

$$\sigma_u^2 = y^2 \sigma_x^2 + x^2 \sigma_y^2$$

and dividing through by $u^2 = x^2 y^2$:

$$\left(\frac{\sigma_u}{u}\right)^2 = \left(\frac{\sigma_x}{x}\right)^2 + \left(\frac{\sigma_y}{y}\right)^2$$

For u = x/y, the partial derivatives are $\partial u/\partial x = 1/y$ and $\partial u/\partial y = -x/y^2$, so

$$\sigma_u^2 = \frac{\sigma_x^2}{y^2} + \frac{x^2}{y^4}\,\sigma_y^2$$

and dividing by $u^2 = x^2/y^2$ gives the same result:

$$\left(\frac{\sigma_u}{u}\right)^2 = \left(\frac{\sigma_x}{x}\right)^2 + \left(\frac{\sigma_y}{y}\right)^2$$

In both cases, the fractional errors in x and y combine to give the fractional error in u.

Example:
To calculate the ratio of two source activities from independent counts (neglecting the background):
- Counts from source 1: $N_1$ = 16265
- Counts from source 2: $N_2$ = 8192
- Activity ratio:

$$R = \frac{N_1}{N_2} = \frac{16265}{8192} = 1.985$$

$$\left(\frac{\sigma_R}{R}\right)^2 = \left(\frac{\sigma_{N_1}}{N_1}\right)^2 + \left(\frac{\sigma_{N_2}}{N_2}\right)^2 = \frac{1}{N_1} + \frac{1}{N_2} = 1.835 \times 10^{-4}$$

$$\frac{\sigma_R}{R} = 0.0135 \qquad \Rightarrow \qquad \sigma_R = 0.0135 \times R = 0.027$$

$$R = 1.985 \pm 0.027$$
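Since $\sigma_N = \sqrt{N}$ for each directly measured count, the fractional errors reduce to $1/\sqrt{N_1}$ and $1/\sqrt{N_2}$, and the activity-ratio example can be checked as below.

```python
import math

N1, N2 = 16265, 8192
R = N1 / N2                          # ~1.985

# (sigma_R / R)^2 = (sqrt(N1)/N1)^2 + (sqrt(N2)/N2)^2 = 1/N1 + 1/N2
frac = math.sqrt(1 / N1 + 1 / N2)    # ~0.0135
sigma_R = frac * R                   # ~0.027
```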

Case # 4 Mean value of multiple independent counts:

If N repeated counts of equal duration are recorded from the same source, represented as $x_1, x_2, \ldots, x_N$, define

$$\Sigma = x_1 + x_2 + \cdots + x_N$$

To find the expected error in $\Sigma$ we use the error propagation formula:

$$\sigma_\Sigma^2 = \left(\frac{\partial \Sigma}{\partial x_1}\right)^2 \sigma_{x_1}^2 + \left(\frac{\partial \Sigma}{\partial x_2}\right)^2 \sigma_{x_2}^2 + \cdots$$

where each partial derivative equals 1 for independent counts, so

$$\sigma_\Sigma^2 = \sigma_{x_1}^2 + \sigma_{x_2}^2 + \cdots + \sigma_{x_N}^2$$

But each $x_i$ is a directly measured count, so $\sigma_{x_i}^2 = x_i$ and

$$\sigma_\Sigma^2 = x_1 + x_2 + \cdots + x_N = \Sigma, \qquad \sigma_\Sigma = \sqrt{\Sigma}$$

These values would be the same if we were dealing with a single count of magnitude $\Sigma$. The mean is $\overline{x} = \Sigma/N$, and its expected standard deviation is

$$\sigma_{\overline{x}} = \frac{\sigma_\Sigma}{N} = \frac{\sqrt{\Sigma}}{N} = \frac{\sqrt{N\overline{x}}}{N} = \sqrt{\frac{\overline{x}}{N}} = \frac{\sigma_x}{\sqrt{N}}$$

since any typical single count would not differ much from the mean, so $\sigma_x \approx \sqrt{\overline{x}}$. Based on N independent counts, the expected error is therefore smaller by a factor of $\sqrt{N}$ compared with any single measurement. Thus, to improve the statistical precision of a measurement by a factor of 2, we must invest 4 times the initial counting time.
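A sketch of Case #4 with hypothetical counts: the ratio of the single-count error to the error of the mean comes out as exactly $\sqrt{N}$.

```python
import math

counts = [98, 105, 101, 94, 102]   # hypothetical repeated counts, equal times
N = len(counts)
total = sum(counts)
mean = total / N

sigma_single = math.sqrt(mean)     # expected error of any one count
sigma_mean = math.sqrt(total) / N  # error of the mean: sigma_total / N
ratio = sigma_single / sigma_mean  # algebraically equal to sqrt(N)
```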

Case # 5 Combination of independent measurements with unequal errors:

If each measurement $x_i$ has a weighting factor $a_i$, we can calculate the best value from the linear combination given below: