
Notes for Theoretical Health Physics

Tae Young Kong


1. Stochastic processes
a. Independent events
Reference: James E. Turner, Darryl J. Downing, and James S. Bogard, Statistical
Methods in Radiation Physics, pp.29-45
In some cases, given additional information causes no change in the
probability of the event occurring. Then, in symbols, Pr(A|B) = Pr(A), and so the
occurrence of B has no effect on the probability of the occurrence of A. We
then say that event A is independent of event B.
Two events A and B are independent if and only if
Pr(A|B)=Pr(A)
and
Pr(B|A)=Pr(B).
Independence Theorem
Two events, A and B, in a sample space are independent if and only if
Pr(A ∩ B) = Pr(A)Pr(B).
Example
Two photons of a given energy are normally incident on a metal foil. The
probability that a given photon will have an interaction in the foil is 0.2.
Otherwise, it passes through without interacting. What are the probabilities
that neither photon, only one photon, or both photons will interact in the foil?
Solution
The number of photons that interact in the foil is a random variable X, which
can take on the possible values 0, 1, or 2. There are four simple events for the
sample space for the two photons: (n, n), (n, y), (y, n), (y, y). Here "y" means
yes, there is an interaction, and "n" means no, there is not, the pair of
symbols in parentheses denoting the respective fates of the two photons. The
probability of interaction for each photon is given as 0.2, and so the
probability of its having no interaction is 0.8. We are asked to find the
probabilities for the three possible values of X. For the probability that neither
photon interacts, we write
Pr(X=0) = Pr[(n,n)].
We can regard n for photon 1 and n for photon 2 as independent events, each
having a probability of 0.8. The probability that neither photon interacts is,
therefore,
Pr(X=0) = 0.8 × 0.8 = 0.64.
The probability that exactly one photon interacts is
Pr(X=1) = Pr[(y,n) ∪ (n,y)] = Pr[(y,n)] + Pr[(n,y)].

Since the probability of "yes" for a photon is 0.2 and that for "no" is 0.8, we
find

Pr(X=1) = 0.2 × 0.8 + 0.8 × 0.2 = 0.32.


The probability that both photons interact is
Pr(X=2) = Pr[(y,y)] = 0.2 × 0.2 = 0.04.
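The two-photon calculation generalizes to any number of photons via the binomial distribution. A minimal sketch in Python, using the interaction probability p = 0.2 from the example:

```python
from math import comb

p = 0.2   # interaction probability per photon (given in the example)
n = 2     # number of incident photons

# Pr(X = k): exactly k of the n independent photons interact in the foil.
probs = {k: comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)}
```

For n = 2 this reproduces the three probabilities above: 0.64, 0.32, and 0.04.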
b. Poisson statistics
Reference: James E. Turner. Atoms, Radiation, and Radiation Protection,
pp.311-315
When N ≫ 1, n ≪ N, and p ≪ 1, the binomial distribution is given to a very good
approximation by the Poisson distribution:

P_n = μⁿ e^(−μ) / n!,

where μ = Np is the mean of the Poisson distribution. The standard deviation
of the Poisson distribution is the square root of the mean:

σ = √μ.

Generally, the Poisson distribution describes the number of successes for any
random process whose probability is small (p ≪ 1) and constant.
Example
More realistically, consider a ⁴²K source with an activity of 37 Bq (= 1 nCi). The
source is placed in a counter, having an efficiency of 100%, and the numbers
of counts in one-second intervals are registered.
(a) What is the mean disintegration rate?
(b) Calculate the standard deviation of the disintegration rate.
(c) What is the probability that exactly 40 counts will be observed in any
second?
Solution
(a) The mean disintegration rate is the given activity, μ = 37 s⁻¹.
(b) The standard deviation is

σ = √37 = 6.08 s⁻¹.
(c) The probability of exactly 40 disintegrations occurring in a given second is

P₄₀ = μ⁴⁰ e^(−μ)/40! = 37⁴⁰ e^(−37)/40! = 0.0559.
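The Poisson arithmetic in parts (b) and (c) can be checked numerically; a short sketch with the values from the example:

```python
from math import exp, factorial, sqrt

mu = 37.0                                  # mean count rate, s^-1 (the given activity)
sigma = sqrt(mu)                           # standard deviation, ~6.08 s^-1
p_40 = mu**40 * exp(-mu) / factorial(40)   # Pr(exactly 40 counts), ~0.0559
```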

c. Theoretical resolution of energy deposition


Reference: James E. Turner. Atoms, Radiation, and Radiation Protection,
pp.337-338

The resolution of the total-energy peak, 8%, refers to the relative width of the
peak at one-half its maximum value. Called the full width at half maximum
(FWHM), one has FWHM = 0.08 × 662 = 53 keV. For a normal curve with
standard deviation σ, it can be shown that FWHM = 2.35σ. The resolution of a
spectrometer depends on several factors. These include noise in the detector
and associated electronic systems as well as fluctuations in the physical
processes that convert radiation energy into a measured signal. Applying
Poisson statistics, we can express the resolution in terms of the average
number μ of photoelectrons (with standard deviation σ = √μ):

R = FWHM/μ = 2.35σ/μ = 2.35/√μ,

with FWHM now referring to the number, rather than energy, distribution. With
R = 0.08 for the scintillator, it follows that the average number of
photoelectrons collected per pulse is μ = 863.
For different types of detectors, the physical limitation on resolution imposed
by the inherent statistical spread in the number of entities collected can be
compared in terms of the average energy needed to produce a single entity.
For the NaI detector just given, since an event is registered with the
expenditure of 662 keV, this average energy is W = (662,000 eV)/(863
photoelectrons) = 767 eV per photoelectron. By comparison, for a gas
proportional counter W ≈ 30 eV per ion pair. The average number of electrons
produced by the absorption of a ¹³⁷Cs photon in a gas is 662,000/30 = 22,100.
The resolution of the total-energy peak (other sources of fluctuations being
negligible) with a gas counter is R = 2.35/(22,100)^(1/2) = 0.016. For germanium,
W ≈ 3 eV per ion pair, and the resolution for ¹³⁷Cs photons is R = 2.35/
(221,000)^(1/2) = 0.0050. Resolution improves as the square root of the average
number of entities collected. Note that the resolution depends on the energy
of the photons being detected through the average value μ.

Example
For the scintillator analyzed in the example given after Fig. 10.30, it was found
that the average energy needed to produce a photoelectron was 155 eV.
(a) What is the resolution for the total-energy peak for 450-keV photons?
(b) What is the width of the total-energy peak (FWHM) in keV?
(c) What is the resolution for 1.2-MeV photons?
Solution
(a) The average number of photoelectrons produced by absorption of a 450-keV
photon is μ = 450,000/155 = 2900. The resolution is therefore R = 2.35/
(2900)^(1/2) = 0.0436.
(b) For 450-keV photons, it follows that FWHM = 0.0436 × 450 = 19.6 keV.
(c) The resolution decreases as the square root of the photon energy. Thus,
the resolution for 1.2-MeV photons is 0.0436 × (0.450/1.2)^(1/2) = 0.0267.
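The relations used in this example, μ = E/W and R = 2.35/√μ, can be collected into a small helper; a sketch with the W-value from the example:

```python
from math import sqrt

W_eV = 155.0  # average energy per photoelectron (given in the example)

def resolution(E_eV):
    # R = 2.35 / sqrt(mu), where mu = E / W is the mean photoelectron count.
    mu = E_eV / W_eV
    return 2.35 / sqrt(mu)

R_450 = resolution(450e3)     # ~0.0436
fwhm_keV = R_450 * 450.0      # ~19.6 keV
R_1200 = resolution(1.2e6)    # ~0.0267
```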
d. Deviation from Poisson statistics: Fano factor
Reference: James E. Turner. Atoms, Radiation, and Radiation Protection,
pp.338-339
The Fano factor has been introduced as a measure of the departure of
fluctuations from pure Poisson statistics. It is defined as the ratio of the
observed variance and the variance predicted by the latter:

F = (observed variance)/(Poisson variance).
Reported values of Fano factors for gas proportional counters are in the range
from about 0.1 to 0.2 and, for semiconductors, from 0.06 to 0.15. For
scintillation detectors, F is near unity, indicating a Poisson-limited resolution.
2. Nuclear physics basics
a. Field descriptions
Reference: Radiation Protection Competency 1.1 p.22-25
Point source
The intensity of the radiation field decreases as the distance from the source
increases. Therefore, increasing the distance will reduce the amount of
exposure received. In many cases, especially when working with point
sources, increasing the distance from the source is more effective than
decreasing the time spent in the radiation field.
Theoretically, a point source is an imaginary point in space from which all the
radiation is assumed to be emanating. While this kind of source is not real (all
real sources have dimensions), any geometrically small source of radiation
behaves as a point source when one is beyond three times the largest
dimension of the source. Radiation from a point source is emitted equally in all

directions. Thus, the photons spread out to cover a greater area as the
distance from the point source increases. The effect is analogous to the way
light spreads out as we move away from a single source of light such as a light
bulb.
The radiation intensity for a point source decreases according to the Inverse
Square Law, which states that as the distance from a point source increases or
decreases, the dose rate decreases or increases by the square of the ratio of
the distances from the source. The inverse square law becomes inaccurate
close to the source (i.e., within three times the largest dimension of the
source).
As previously mentioned, the exposure rate is inversely proportional to the
square of the distance from the source. The mathematical equation is:

I₁ d₁² = I₂ d₂²

where I₁ and I₂ are the intensities at distances d₁ and d₂ from the source.
This equation is assuming the attenuation of the radiation in the intervening


space is negligible and the dimensions of the source and the detector are
small compared with the distance between them.
The inverse square law holds true only for point sources; however, it gives a
good approximation when the source dimensions are smaller than the
distance from the source to the exposure point. Due to distance constraints,
exposures at certain distances from some sources, such as for a pipe or tank,
cannot be treated as a point source. In these situations, these sources must
be treated as line sources or large surface sources.

φ = (S_p / 4πr²) e^(−μx)

for calculation of flux from the point source.
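The point-source flux expression above can be sketched as follows; the function name and the sample source strength are illustrative, not from the source text:

```python
from math import pi, exp

def point_flux(S_p, r, mu=0.0):
    # Uncollided flux at distance r (cm) from a point source emitting S_p
    # photons/s, through a medium with attenuation coefficient mu (cm^-1).
    return S_p / (4.0 * pi * r**2) * exp(-mu * r)

# Inverse square law check: doubling the distance quarters the flux.
flux_1m = point_flux(3.7e10, 100.0)   # illustrative 1-Ci source at 1 m
flux_2m = point_flux(3.7e10, 200.0)   # same source at 2 m
```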
Line Sources
An example of a line source would be a pipe carrying contaminated cooling
water or liquid waste, a control rod, a series of point sources which are close
together, or a needle injecting a radioisotope into tissue. With line sources, an
assumption must be made that the distribution of radioactivity is uniform
throughout the source. When no attenuator is present, the relationship

between the line source emission rate and the flux at the
receptor (P) depends on the location of the receptor with
respect to the line source. However, this relationship is
more complex mathematically than in the case of the
point source, and the use of calculus is required. The
following figure and formula applies to line sources.

φ = (S_l / 4πh) θ e^(−μx)

where S_l is the emission rate per unit length of the line, h is the
perpendicular distance from the line to the receptor, and θ is the angle (in
radians) subtended by the line at the receptor.

Plane Sources
An example of a plane source would be a spill of liquid containing radioactivity
on the floor. Again, when estimating the amount of radioactivity emanating
from an area source, an assumption must be made that the distribution of
radioactivity is uniform throughout the source. For an area source with an
attenuator present, the calculations become very complicated. For illustrative
purposes, an example of a circular area source without an attenuator present is given.

φ = (S_a / 4) ln(1 + a²/h²)

where S_a is the emission rate per unit area, a is the radius of the circular
source, and h is the distance above its center.
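The unattenuated line- and disc-source expressions above can be sketched together; the symbols follow the formulas as reconstructed here, and the helper names are illustrative:

```python
from math import pi, log

def line_flux(S_l, h, theta):
    # Unattenuated flux from a line source: S_l = emission rate per unit
    # length, h = perpendicular distance to the receptor, theta = angle
    # (radians) subtended by the line at the receptor.
    return S_l * theta / (4.0 * pi * h)

def disc_flux(S_a, a, h):
    # Unattenuated flux on the axis of a circular (disc) source:
    # S_a = emission rate per unit area, a = disc radius, h = axial distance.
    return (S_a / 4.0) * log(1.0 + (a / h) ** 2)
```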
b. Interaction of radiation with matter and interaction rates
i. Production of annihilation radiation, Bremsstrahlung, and
Auger electrons
Reference: James E. Turner. Atoms, Radiation, and Radiation Protection,
pp.76, 58-60, 45
Annihilation radiation
Annihilation photons are present with all positron emitters. A positron
slows down in matter and then annihilates with an atomic electron,
giving rise to two photons, each having energy mc² = 0.511 MeV and
traveling in opposite directions.
Bremsstrahlung
When a beta particle or an electron passes close to a nucleus, the
strong attractive Coulomb force causes the beta particle to deviate
sharply from its original path, thereby losing energy by radiating X-ray
photons. Heavy nuclei are much more efficient than light nuclei in
producing the radiation because the deflections are stronger. A single
electron can emit an X-ray photon having any energy up to its own
kinetic energy. As a result, a monoenergetic beam of electrons produces
a continuous spectrum of X rays with photon energies up to the value of
the beam energy. These continuous X rays are called bremsstrahlung, or
braking radiation.
Auger electrons
Following ejection of the photoelectron, the inner-shell vacancy in the
atom is immediately filled by an electron from an upper level resulting
in a release of energy. Although most of the time this energy is
released in the form of an emitted photon, the energy can also be
transferred to another electron, which is ejected from the atom. This
second ejected electron is called an Auger electron.
c. Radioactive decay
i. Half-life, mean life, decay constant, activity
Reference: Herman Cember. Health Physics. 4th edition, p.98
Half-life
The time required for any given radionuclide to decrease to one-half of
its original quantity is a measure of the speed with which it undergoes
radioactive transformation. This period of time is called the half-life and
is characteristic of the particular radionuclide. Each radionuclide has its
own unique rate of transformation, and no operation, either chemical
or physical, is known that will change the transformation rate; the
decay rate of a radionuclide is an unalterable property of that nuclide.
EXAMPLE 4.1
Cobalt-60, a gamma-emitting isotope of cobalt whose half-life is 5.3
years, is used as a radiation source for radiographing pipe welds.
Because of the decrease in radioactivity with increasing time, the
exposure time for a radiograph will be increased annually. Calculate the
correction factor to be applied to the exposure time in order to account
for the decrease in the strength of the source.
Solution
The decay equation can be written as

A = A₀/2ⁿ, or A₀/A = 2ⁿ.

By taking the logarithm of each side of the equation, we have

log(A₀/A) = n log 2

where n, the number of ⁶⁰Co half-lives in 1 year, is 1/5.3 = 0.189.

log(A₀/A) = 0.189 × 0.301 = 0.0569

A₀/A = inverse log 0.0569 = 1.14
The ratio of the initial quantity of cobalt to the quantity remaining after
1 year is 1.14. The exposure time after 1 year, therefore, must be
increased by 14%. It should be noted that this ratio is independent of
the actual amount of activity at the beginning and end of the year.
After the second year, the ratio of the cobalt at the beginning of the
second year to that at the end will be 1.14. The same correction factor,
1.14, therefore, is applied every year to the exposure time for the
previous year.
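The annual correction factor follows directly from the half-life; a sketch of Example 4.1's arithmetic:

```python
T_half = 5.3  # Co-60 half-life in years (from Example 4.1)

# n half-lives elapse per year, so the ratio A0/A after one year is 2^n.
n = 1.0 / T_half
factor = 2.0 ** n   # annual exposure-time correction factor, ~1.14
```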
Reference: James E. Turner. Atoms, Radiation, and Radiation Protection,
pp.83-87
Mean-life
The average, or mean, life of a radionuclide is defined as the average
of all of the individual lifetimes that the atoms in a sample of the
radionuclide experience. It is equal to the mean value of t under the
exponential curve. (The average length of time that an element
remains in the set)

τ = 1/λ = T/0.693 = 1.44T

where T is the half-life.

Decay constant
The constant of proportionality between the size of a population of
radioactive atoms and the rate at which the population decreases
because of radioactive decay

λ = 0.693/T

Activity
The rate of decay, or transformation, of a radionuclide is described by
its activity, that is, by the number of atoms that decay per unit time.

The unit of activity is the becquerel (Bq), defined as one disintegration
per second: 1 Bq = 1 s⁻¹. The traditional unit of activity is the curie (Ci),
which was originally the activity ascribed to 1 g of ²²⁶Ra. The curie is
defined as 1 Ci = 3.7 × 10¹⁰ Bq, exactly.

ii. Simple decay


Reference: James E. Turner. Atoms, Radiation, and Radiation Protection,
pp.83-84
The activity of a pure radionuclide decreases exponentially with time. If
N represents the number of atoms of a radionuclide in a sample at any
given time, then the change dN in the number during a short time dt is
proportional to N and to dt: dN = −λN dt. The negative sign is needed
because N decreases as the time t increases. The quantity λ is called the
decay constant. The decay rate, or activity, is A = λN, and

dN/dt = −λN.

Integration of both sides gives

ln N = −λt + c

where c is an arbitrary constant of integration, fixed by the initial


conditions. If we specify that N₀ atoms of the radionuclide are present
at time t = 0, then c = ln N₀. Thus

ln N = −λt + ln N₀

ln(N/N₀) = −λt

or

N/N₀ = e^(−λt).
Since the activity of a sample and the number of atoms present are
proportional, the activity follows the same rate of decrease:

A/A₀ = e^(−λt).
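The exponential-decay result can be sketched as a short helper (the function name is illustrative):

```python
from math import exp, log

def activity(A0, t, T_half):
    # A(t) = A0 * exp(-lambda * t), with lambda = ln 2 / T_half.
    lam = log(2.0) / T_half
    return A0 * exp(-lam * t)
```

After one half-life the activity is exactly half the initial value, which serves as a quick check.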
iii. Composite decay
Reference: James E. Turner. Atoms, Radiation, and Radiation Protection,
p.91
General case: a chain N₁ → N₂ → N₃.

dN₁/dt = −λ₁N₁

N₁ = N₁₀ e^(−λ₁t)

dN₂/dt = λ₁N₁ − λ₂N₂ = λ₁N₁₀ e^(−λ₁t) − λ₂N₂

Multiply both sides by e^(λ₂t):

e^(λ₂t) dN₂ + e^(λ₂t) λ₂N₂ dt = λ₁N₁₀ e^((λ₂−λ₁)t) dt

d[N₂ e^(λ₂t)] = λ₁N₁₀ e^((λ₂−λ₁)t) dt

Integrate both sides (with N₂ = 0 at t = 0):

N₂ e^(λ₂t) = [λ₁N₁₀/(λ₂−λ₁)] [e^((λ₂−λ₁)t) − 1]

N₂ = [λ₁N₁₀/(λ₂−λ₁)] [e^(−λ₁t) − e^(−λ₂t)]
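The final expression for N₂(t) can be sketched directly (assuming λ₁ ≠ λ₂ and N₂(0) = 0, as in the derivation; the function name is illustrative):

```python
from math import exp

def daughter_atoms(N10, lam1, lam2, t):
    # N2(t) for the chain N1 -> N2 -> N3, with N2(0) = 0 and lam1 != lam2.
    return lam1 * N10 / (lam2 - lam1) * (exp(-lam1 * t) - exp(-lam2 * t))
```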

iv. Serial decay


Reference: James E. Turner. Atoms, Radiation, and Radiation Protection,
pp.89-93
Activity of a sample in which one radionuclide produces one or more
radioactive offspring in a chain
Secular Equilibrium (T₁ ≫ T₂)
First, we calculate the total activity present at any time when a long-lived
parent (1) decays into a relatively short-lived daughter (2), which,
in turn, decays into a stable nuclide. The half-lives of the two
radionuclides are such that T₁ ≫ T₂, and we consider intervals of time
that are short compared with T₁, so that the activity A₁ of the parent
can be treated as constant. The rate of change, dN₂/dt, in the number
of daughter atoms N₂ per unit time is equal to the rate at which they
are produced, A₁, minus their rate of decay, λ₂N₂:

dN₂/dt = A₁ − λ₂N₂

where A₁ can be regarded as constant. Introducing the variable u = A₁
− λ₂N₂, we have du = −λ₂ dN₂ and

du/u = −λ₂ dt.

Integration gives

ln(A₁ − λ₂N₂) = −λ₂t + c

where c is an arbitrary constant. If N₂₀ represents the number of atoms
of nuclide (2) present at t = 0, then we have c = ln(A₁ − λ₂N₂₀). Thus

ln[(A₁ − λ₂N₂)/(A₁ − λ₂N₂₀)] = −λ₂t

or

A₁ − λ₂N₂ = (A₁ − λ₂N₂₀) e^(−λ₂t).

Since λ₂N₂ = A₂, the activity of nuclide (2), and λ₂N₂₀ = A₂₀ is its initial
activity,

A₂ = A₁(1 − e^(−λ₂t)) + A₂₀ e^(−λ₂t).
In many practical instances one starts with a pure sample of nuclide (1)
at t = 0, so that A₂₀ = 0, which we now assume. The activity A₂ then
builds up as shown in Fig. 4.4. After about seven daughter half-lives
(t ≳ 7T₂), e^(−λ₂t) ≪ 1 and the equation reduces to the condition A₁ = A₂,
at which time the daughter activity is equal to that of the parent. This
condition is called secular equilibrium. The total activity is 2A₁. In terms
of the numbers of atoms, N₁ and N₂, of the parent and daughter, secular
equilibrium can be also expressed by writing

λ₁N₁ = λ₂N₂.
Transient Equilibrium (T₁ ≳ T₂)
When N₂₀ = 0 and the half-life of the parent is greater than that of the
daughter, but not greatly so,

λ₂N₂ = [λ₂λ₁N₁₀/(λ₂−λ₁)] (e^(−λ₁t) − e^(−λ₂t)).

With the continued passage of time, e^(−λ₂t) eventually becomes negligible
with respect to e^(−λ₁t), since λ₂ > λ₁. In addition, since A₁ = λ₁N₁ =
λ₁N₁₀ e^(−λ₁t) is the activity of the parent as a function of time, this
relation says that

A₂ = λ₂A₁/(λ₂−λ₁).

The time at which the daughter activity is largest is

t = [1/(λ₂−λ₁)] ln(λ₂/λ₁)   (for maximum A₂).

The total activity is largest at the earlier time

t = [1/(λ₂−λ₁)] ln[λ₂²/(2λ₁λ₂ − λ₁²)]   (for maximum A₁ + A₂).
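The two maximum-time formulas can be sketched as helpers (function names are illustrative; both assume λ₂ > λ₁ and a pure parent at t = 0):

```python
from math import log

def t_max_daughter(lam1, lam2):
    # Time at which the daughter activity A2 peaks (lam2 > lam1, N20 = 0).
    return log(lam2 / lam1) / (lam2 - lam1)

def t_max_total(lam1, lam2):
    # Earlier time at which the total activity A1 + A2 peaks.
    return log(lam2**2 / (2.0 * lam1 * lam2 - lam1**2)) / (lam2 - lam1)
```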

v. Activation /decay relations


Reference: James E. Turner. Atoms, Radiation, and Radiation Protection,
p.228-229
Reference: Class 604, Lecture #21

A₂ = λ₁N₁ (1 − e^(−λ₂ t_irr)) e^(−λ₂ t_delay)

where t_irr is the irradiation time and t_delay is the time elapsed after
the end of irradiation.
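The activation/decay relation above can be sketched as follows, treating the production rate (λ₁N₁ in the notation of the formula) as constant during irradiation; the function name is illustrative:

```python
from math import exp

def activation_activity(production_rate, lam, t_irr, t_delay):
    # Daughter activity after irradiating for t_irr and then waiting t_delay,
    # with a constant production rate (lambda_1 * N_1 in the text's notation)
    # and daughter decay constant lam (lambda_2).
    return production_rate * (1.0 - exp(-lam * t_irr)) * exp(-lam * t_delay)
```

A long irradiation approaches saturation, where the activity equals the production rate.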

d. Nuclear decay schemes


Reference: James E. Turner. Atoms, Radiation, and Radiation Protection, pp.6279
Refer to Appendix D and deduce the decay scheme of ²⁶₁₃Al.

²⁶₁₃Al: Δ = −12.211 MeV; β⁺ 81.8%, EC 18.2%; half-life 7.16 × 10⁵ y
β⁺: 1.174 MeV max (avg 0.544); γ: 0.511 (164%, γ±), 1.130 (2.5%), 1.809
(100%); Mg X rays

This nuclide decays by β⁺ emission (81.8%) and EC (18.2%) into ²⁶₁₂Mg. The
energy release for EC with a transition to the daughter ground state is, from
the mass-excess values,

Q_EC = −12.211 + 16.214 = 4.003 MeV.

Here we have neglected the small binding energy of the atomic electron. The
corresponding value for β⁺ decay to the ground state is Q_β⁺ = 4.003 − 1.022 =
2.981 MeV.
A 1.809-MeV gamma photon is emitted with 100% frequency, and so we can
assume that both EC and β⁺ decay modes proceed via an excited daughter
state of this energy. Adding this to the maximum β⁺ energy, we have 1.809 +
1.174 = 2.983 MeV ≅ Q_β⁺. Its 81.8% frequency accounts for the annihilation
photons listed with 164% frequency in Appendix D. The other 18.2% of the
decays via EC also go through the level at 1.809 MeV. An additional photon of
energy 1.130 MeV and frequency 2.5% is listed in Appendix D. This can arise if
a fraction of the EC transformations go to a level with energy 1.809 + 1.130 =
2.939 MeV above ground. The complete decay scheme is shown below.
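The Q-value arithmetic can be verified directly from the quoted mass excesses (a sketch; the Δ values are those cited from Appendix D in the example):

```python
# Mass excesses (MeV) for A = 26, as quoted from Appendix D in the text.
delta_Al26 = -12.211
delta_Mg26 = -16.214

Q_EC = delta_Al26 - delta_Mg26     # 4.003 MeV: EC to the daughter ground state
Q_beta_plus = Q_EC - 1.022         # 2.981 MeV: beta+ requires 2 m_e c^2 less
```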

e. Shielding and radiation attenuation


Reference: Radiation Protection Competency 1.1 p.17
Alpha particles are relatively massive, slow moving particles that interact by
ionization and excitation. Therefore, alpha radiation is not very penetrating.
Alpha radiation is not an external hazard and can be shielded by:
A few inches of air.
A sheet of paper.
A dead layer of skin.
Beta particles are relatively light, fast-moving particles that interact by
ionization and excitation. Beta radiation is moderately penetrating, depending
on the energy or velocity of the beta particle, and can be an external hazard if
it can penetrate the dead layer of skin. Beta radiation should be shielded by
low atomic number (low Z) materials to prevent the production of
bremsstrahlung radiation. These materials include:
Plastic.
Wood.
Aluminum.
Neutron shielding involves slowing down fast neutrons and absorbing thermal
neutrons. For example, control rods in nuclear reactors can be fabricated from
boron, which is a good material to absorb thermal neutrons. Neutron shielding
is highly dependent on the energy of the neutron. The goal in neutron
shielding is to generate a charged particle via an interaction. The best
interaction for shielding neutrons would be an elastic collision with a light
nucleus such as a hydrogen atom. A hydrogen nucleus consists of a single
proton and allows a significant transfer of energy to a proton because the
masses of the proton and neutron are almost the same. The neutron collides
with the proton, transferring energy and recoils the proton away from its
electron cloud. The liberated protons range is then very short, causing
ionizations and excitations along the recoiled protons path. Neutrons can be
shielded by materials with a high hydrogen content such as:
Water.
Concrete.
Plastic.
Fuel Oil.
Paraffin.
Photon shielding is also highly dependent on the energy of the photon and the
atomic number Z of the shielding material. As in neutron shielding, the goal is
to produce a charged particle via an interaction, preferably the photoelectric
effect, in which all of the photon energy is transferred to the electron. The
photoelectron's range in matter is very short, causing ionizations and
excitations in the shielding material. The energy of the photon is then
transferred to the shield by photoelectrons. Since photons interact with
electrons, photons can be shielded by any material which provides an
adequate number of electrons. This can be done by using high atomic
number (high Z) materials, such as lead or uranium. If space is not limited,
water or concrete may be a practical shielding material.
Reference: James E. Turner. Atoms, Radiation, and Radiation Protection,
pp.190-191
What thickness of concrete and of lead is needed to reduce the number of
500-keV photons in a narrow beam to one-fourth the incident number?
Compare the thicknesses in cm and in g cm⁻². Repeat for 1.5-MeV photons.
Solution

We use Eq. (8.43) with N(x)/N₀ = 0.25. The mass attenuation coefficients μ/ρ
obtained from Figs. 8.8 and 8.9 are shown in Table 8.2. At 500 keV, the linear
attenuation coefficient for concrete is μ = (0.089 cm² g⁻¹)(2.35 g cm⁻³) =
0.209 cm⁻¹; that for lead is μ = (0.15)(11.4) = 1.71 cm⁻¹. Using Eq. (8.43), we
have for concrete
0.25 = e^(−0.209x),
giving x = 6.63 cm. For lead,
0.25 = e^(−1.71x),
giving x = 0.811 cm. The concrete shield is thicker by a factor of 6.63/0.811 =
8.18. In g cm⁻², the concrete thickness is 6.63 cm × 2.35 g cm⁻³ = 15.6 g cm⁻²,
while that for lead is 0.811 × 11.4 = 9.25 g cm⁻². The concrete shield is more
massive in thickness by a factor of 15.6/9.25 = 1.69. Lead is a more efficient
attenuator than concrete for 500-keV photons on the basis of mass.
Photoelectric absorption is important at this energy, and the higher atomic
number of lead is effective. The calculation can be repeated in exactly the
same way for 1.5-MeV photons. Instead, we do it a little differently by using
the mass attenuation coefficient directly, writing the exponent in Eq. (8.43) as
μx = (μ/ρ)ρx. For 1.5-MeV photons incident on concrete,
0.25 = e^(−0.052ρx),
giving ρx = 26.7 g cm⁻² and x = 11.4 cm. For lead,
0.25 = e^(−0.051ρx),
and so ρx = 27.2 g cm⁻² and x = 2.39 cm. At this energy the Compton effect is
the principal interaction that attenuates the beam, and therefore all materials
(except hydrogen) give comparable attenuation per g cm⁻². Lead is almost
universally used when low-energy photon shielding is required.
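The narrow-beam thickness calculation can be written generically; a sketch using the 500-keV coefficients from the worked example:

```python
from math import log

def shield_thickness_cm(fraction, mu_over_rho, rho):
    # Depth x at which N(x)/N0 = fraction for a narrow beam:
    # fraction = exp(-mu * x), with mu = (mu/rho) * rho.
    return -log(fraction) / (mu_over_rho * rho)

# 500-keV photons, mass attenuation coefficients from the example:
x_concrete = shield_thickness_cm(0.25, 0.089, 2.35)   # ~6.63 cm
x_lead = shield_thickness_cm(0.25, 0.15, 11.4)        # ~0.81 cm
```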
3. Ionizing radiation
a. Types and sources
Reference: Frank H. Attix. Introduction to Radiological Physics and Radiation
Dosimetry, pp.2-3
The important types of ionizing radiations to be considered are:
i. γ-rays: Electromagnetic radiation emitted from a nucleus or in
annihilation reactions between matter and antimatter.
ii. X-rays: Electromagnetic radiation emitted by charged particles (usually
electrons) in changing atomic energy levels (called characteristic or
fluorescence x-rays) or in slowing down in a Coulomb force field
(continuous or bremsstrahlung x-rays). Note that an x-ray and a γ-ray
photon of a given quantum energy have identical properties, differing
only in mode of origin. Gamma rays originate in the nucleus. X-rays
originate in the electron fields surrounding the nucleus or are
machine-produced.

iii. Fast electrons: If positive in charge, they are called positrons. If they
are emitted from a nucleus they are usually referred to as β-rays
(positive or negative). If they result from a charged-particle collision
they are referred to as δ-rays.
iv. Heavy charged particles: Usually obtained from acceleration by a
Coulomb force field in a Van de Graaff, cyclotron, or heavy-particle
linear accelerator. Alpha particles are also emitted by some radioactive
nuclei. Types include:
Proton - the hydrogen nucleus.
Deuteron - the deuterium nucleus, consisting of a proton and
neutron bound together by nuclear force.
Triton - a proton and two neutrons similarly bound.
Alpha particle - the helium nucleus, i.e., two protons and two
neutrons.
Other heavy charged particles consisting of the nuclei of
heavier atoms, either fully stripped of electrons or in any case
having a different number of electrons than necessary to
produce a neutral atom.
Pions - negative π-mesons produced by interaction of fast
electrons or protons with target nuclei.
v. Neutrons: Neutral particles obtained from nuclear reactions [e.g., (p, n)
or fission], since they cannot themselves be accelerated
electrostatically.
The ICRU (International Commission on Radiation Units and Measurements,
1971) has recommended certain terminology in referring to ionizing radiations
which emphasizes the gross differences between the interactions of charged
and uncharged radiations with matter:
i. Directly Ionizing Radiation. Fast charged particles, which deliver their
energy to matter directly, through many small Coulomb-force
interactions along the particle's track.
ii. Indirectly Ionizing Radiation. X- or γ-ray photons or neutrons (i.e.,
uncharged particles), which first transfer their energy to charged
particles in the matter through which they pass in a relatively few large
interactions. The resulting fast charged particles then in turn deliver
the energy to the matter as above.

b. Characteristics
Stated above Types and sources
c. Field quantities

Reference: Frank H. Attix. Introduction to Radiological Physics and Radiation


Dosimetry, pp.8-10
i. FLUENCE
Referring to Fig. 1.1, let Nₑ be the expectation value of the number of
rays striking a finite sphere surrounding point P during a time interval
extending from an arbitrary starting time t₀ to a later time t. If the
sphere is reduced to an infinitesimal at P with a great-circle area of da,
we may define a quantity called the fluence, Φ, as the quotient of the
differential of Nₑ by da:

Φ = dNₑ/da

which is usually expressed in units of m⁻² or cm⁻².
ii. FLUX DENSITY (OR FLUENCE RATE)
Φ may be defined as above for all values of t throughout the interval
from t = t₀ (for which Φ = 0) to t = t_max (for which Φ = Φ_max). Then
at any time t within the interval we may define the flux density, or
fluence rate, at P as

φ = dΦ/dt = (d/dt)(dNₑ/da)

where dΦ is the increment of fluence during the infinitesimal time
interval dt at time t, and the usual units of flux density are m⁻² s⁻¹ or
cm⁻² s⁻¹.
iii. ENERGY FLUENCE
The simplest field-descriptive quantity which takes into account the
energies of the individual rays is the energy fluence Ψ, for which the
energies of all the rays are summed. Let R be the expectation value of
the total energy (exclusive of rest-mass energy) carried by all the Nₑ
rays striking a finite sphere surrounding point P during a time interval
extending from an arbitrary starting time t₀ to a later time t. If the
sphere is reduced to an infinitesimal at P with a great-circle area of da,
we may define a quantity called the energy fluence, Ψ, as the quotient
of the differential of R by da:

Ψ = dR/da

which is usually expressed in units of J m⁻² or erg cm⁻².
For the special case where only a single energy E of rays is present, the
above equations are related by

R = E Nₑ   and   Ψ = E Φ.
iv.

ENERGY FLUX DENSITY (OR ENERGY FLUENCE RATE)
Ψ may be defined by the above equation for all values of t throughout
the interval from t = t₀ (for which Ψ = 0) to t = t_max (for which Ψ =
Ψ_max). Then at any time t within the interval we may define the
energy flux density, or energy fluence rate, at P as:

ψ = dΨ/dt = (d/dt)(dR/da)

where dΨ is the increment of energy fluence during the infinitesimal
time interval dt at time t, and the usual units of energy flux density are
J m⁻² s⁻¹ or erg cm⁻² s⁻¹.
1 MeV = 1.602 × 10⁻⁶ erg = 1.602 × 10⁻¹³ J
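For a monoenergetic field the defining quotients reduce to simple ratios; a sketch with illustrative numbers (the ray count, energy, and sphere area are assumed, not from the source):

```python
# Monoenergetic field at point P: quantities follow from N_e, E, and da.
N_e = 1.0e6      # expectation value of rays striking the sphere (assumed)
E_MeV = 1.0      # energy carried by each ray (assumed)
da_cm2 = 4.0     # great-circle area of the sphere, cm^2 (assumed)

fluence = N_e / da_cm2                          # Phi = N_e/da, cm^-2
energy_fluence = E_MeV * fluence                # Psi = E * Phi, MeV cm^-2
energy_fluence_J = energy_fluence * 1.602e-13   # J cm^-2, via 1 MeV = 1.602e-13 J
```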
d. Interaction with matter
i. Ionization, excitation, W-value
Reference: Radiation Protection Competency 1.1 p.5
Reference: Frank H. Attix. Introduction to Radiological Physics and
Radiation Dosimetry, p.339
Charged particle radiations, such as alpha particles or electrons, will
continuously interact with the electrons present in any medium
through which they pass because of their electric charge. These
particles must undergo an interaction resulting in a full or partial
transfer of energy of the incident radiation to the electron or nuclei of
the constituent atom. If the energy transferred to the electron is
greater than the energy holding the electron to the atom, the electron
will leave the atom and create ionization. Ionization is the process of
turning an electrically neutral atom into an ion pair consisting of a
negatively charged electron unbound to an atom, and an atom missing
one electron creating a net positive charge. If insufficient energy is
transferred to the electron to leave the atom, the electron is said to be
excited. Excitation does not create ionization or ion pairs, but does
impart some energy to the atom. The W-value is the mean energy (in eV)
spent by a charged particle of initial energy T₀ in producing each ion
pair:

W = T₀/N

where N is the expectation value of the number of ion pairs produced
by such a particle stopping in the medium (usually a gas) to which W
refers. The value for electrons, W = 34.6 eV ip⁻¹.
ii. Range, CSDA range, density thickness, mean-free path
Reference: Radiation Protection Competency 1.1 p.18
Charged particles have a definite range in matter. The range of a
charged particle in an absorber is the average depth of penetration of
the charged particle into the absorber before it loses all of its kinetic
energy and stops. The energy of the particle, which is a function of the
mass of the particle and its velocity, and the electrical charge of the
particle affect the range of the charged particle in a material. The
atomic density (number of atoms per cubic centimeter) and the atomic
number (Z) of the shielding material also affect range.
Reference: James E. Turner. Atoms, Radiation, and Radiation Protection,
p. 131,
CSDA is the continuous-slowing-down approximation. It ignores
fluctuations of energy loss in collisions and assumes that a charged
particle loses energy continuously along its path at the linear rate
given by the instantaneous stopping power.
Reference: Radiation Protection Competency 1.1 p.18
The factor that affects the range of a charged particle in any material
is a unit called density-thickness. Density-thickness can be
calculated by multiplying the density of a material in grams per cubic
centimeter (g/cm³) by the distance the particle traveled in that
material in centimeters. The product is density-thickness in units of
grams per square centimeter (g/cm²). Density-thickness can be
considered a cross-sectional target for a charged particle as it travels
through the material. The concept of density-thickness is important to
discussions of beta radiation attenuation by human tissue, detector
shielding/windows, and dosimetry filters. Although materials may have
different densities and thicknesses, if their density-thickness values are
the same, they will attenuate beta radiation in a similar manner. For
example, a piece of Mylar used as a detector window with a
density-thickness of 7 mg/cm² will attenuate beta radiation similarly
to the outer layer of dead skin of the human body, which has a
density-thickness of 7 mg/cm².
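The density-thickness arithmetic can be sketched as follows (the Mylar density of 1.39 g/cm³ and the 50-micrometer window thickness are assumed illustrative values, not from the text):

```python
# Density-thickness = density (g/cm^3) x path length (cm), reported in mg/cm^2.
def density_thickness_mg_cm2(density_g_cm3: float, thickness_cm: float) -> float:
    return density_g_cm3 * thickness_cm * 1000.0  # g/cm^2 -> mg/cm^2

# An assumed ~50-micrometer Mylar window at an assumed ~1.39 g/cm^3:
window = density_thickness_mg_cm2(1.39, 50e-4)
print(round(window, 2))  # close to the 7 mg/cm^2 dead-skin value quoted above
```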
Reference: James E. Turner. Atoms, Radiation, and Radiation Protection,
p. 116
The mean free path is the mean distance of travel of a charged
particle between collisions. It is the reciprocal of μ, the
macroscopic cross section, which is the probability per unit distance
of travel that an electronic collision takes place: mean free path = 1/μ.
iii. Stopping power, linear energy transfer
Reference: James E. Turner. Atoms, Radiation, and Radiation Protection,
pp. 115, 162
Reference:
http://www.med.harvard.edu/jpnm/physics/nmltd/radprin/sect7/7.2/7_2.
3.html
Stopping power is the average linear rate of energy loss of a heavy
charged particle in a medium, designated −dE/dx, in MeV cm⁻¹. The
linear energy transfer (LET) is the rate of energy transfer per unit
distance along a charged-particle track, expressed as the quotient
dE_L/dx, where E_L is the energy locally deposited. The LET is similar
to the stopping power except that it does not include the effects of
radiative energy loss (i.e., bremsstrahlung) or delta rays. The
difference between LET and stopping power is that LET describes local
energy deposition only, while stopping power is concerned with the total
energy lost by the particle. The stopping power and LET are nearly equal
for heavy charged particles; for betas, LET includes neither delta rays
nor bremsstrahlung. The LET is related to biological damage: the severity
and permanence of biological changes are directly related to the local
rate of energy deposition along the particle track. The higher the LET,
the higher the quality factor Q used in determining dose equivalent.
iv. Compton effect, photoelectric effect, pair production
Reference: Radiation Protection Competency 1.1 p.18
Photoelectric Effect
In the photoelectric effect the photon imparts all of its energy to an
orbital electron of some atom. The photon, since it consists only of
energy in the first place, simply vanishes. The energy is imparted to
the orbital electron in the form of kinetic energy of motion,
overcoming the attractive force of the nucleus for the electron (the
binding energy) and usually causing the electron to fly from its orbit
with considerable velocity. Thus, an ion pair results. The probability
of the photoelectric effect is at its maximum when the energy of the
photon is equal to the binding energy of the electron.
The tighter an electron is bound to the nucleus, the higher the
probability of photoelectric effect, so most photoelectrons are inner
shell electrons. The photoelectric effect is seen primarily as an effect of
low energy photons with energies near the electron binding energies of
materials and high Z materials whose inner-shell electrons have high
binding energies.
Compton Scattering
In Compton scattering there is a partial energy loss for the incoming
photon. The photon interacts with an orbital electron of some atom and
only part of the photon energy is transferred to the electron. After
the collision, the photon is deflected in a different direction at a
reduced energy. The
recoil electron, now referred to as a Compton electron, produces
secondary ionization in the same manner as does the photoelectron,
and the "scattered" photon continues on until it loses more energy in
another photon interaction. The probability of a Compton interaction
increases for loosely bound electrons and, therefore, increases
proportionally to the Z of the material. Compton scattering is primarily
seen as an effect of medium energy photons and its probability
decreases with increasing energy.

Photon energy after Compton scattering:

E' = E / [1 + (E / m0c²)(1 − cos θ)]

Incoming photon energy:

E = E' / [1 − (E' / m0c²)(1 − cos θ)]

Relation between the angle of the photon scatter θ and the recoil
electron's angle φ:

cot φ = (1 + E / m0c²) tan(θ/2)
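The scattered-photon formula can be evaluated numerically; a minimal sketch (energies in keV, with m0c² = 511 keV):

```python
import math

M0C2_KEV = 511.0  # electron rest energy in keV

def compton_scattered_energy_kev(e_kev: float, theta_rad: float) -> float:
    """E' = E / [1 + (E/m0c^2)(1 - cos(theta))]."""
    return e_kev / (1.0 + (e_kev / M0C2_KEV) * (1.0 - math.cos(theta_rad)))

# A 662-keV photon backscattered through 180 degrees:
print(round(compton_scattered_energy_kev(662.0, math.pi), 1))  # ~184 keV
```

For θ = 0 the photon is undeflected and E' = E, as the formula requires.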
Pair Production
Pair production occurs when the photon is converted to mass. In pair
production a photon simply disappears in the vicinity of a nucleus and
in its place appears a pair of particles: one negatively and one
positively charged (the electron and the positron, respectively). Pair
production is impossible unless the photon possesses greater than
1.022 MeV of energy to make up the rest mass
of the particles. Any excess energy in the photon above the 1.022 MeV
required to create the two electron masses is simply shared between
the two electrons as kinetic energy of motion, and they fly out of the
atom with great velocity. The probability increases for high Z materials
and high energies. The pair production electron travels through matter,
causing ionizations and excitations, until it loses all of its kinetic energy
and is joined with an atom. The positive electron (known as a positron)
also produces ionizations and excitations until it comes to rest. While at
rest, the positron attracts a free electron, which then results in
annihilation of the pair, converting both into electromagnetic energy.
Thus, two photons of 511 keV each arise at the site of the annihilation
(accounting for the rest mass of the particles). The ultimate fate of the
annihilation photons is either photoelectric absorption or Compton
scattering followed by photoelectric absorption.
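The energy bookkeeping above can be sketched in a few lines (the function and constant names are illustrative):

```python
# Pair-production bookkeeping in MeV.
PAIR_THRESHOLD_MEV = 1.022       # rest mass of the electron-positron pair
ANNIHILATION_PHOTON_KEV = 511.0  # each of the two annihilation photons

def pair_kinetic_energy_mev(photon_mev: float) -> float:
    """Kinetic energy shared by the electron and positron."""
    if photon_mev < PAIR_THRESHOLD_MEV:
        raise ValueError("photon below pair-production threshold")
    return photon_mev - PAIR_THRESHOLD_MEV

print(pair_kinetic_energy_mev(6.0))  # 4.978 MeV shared between the pair
```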

v. Attenuation coefficients
Reference: Radiation Protection Competency 1.1 p.20
When shielding against x-rays and gamma rays, it is important to
realize that photons are removed from the incoming beam on the basis
of the probability of an interaction (photoelectric, Compton, or pair
production). This process is called attenuation and can be described
using the "linear attenuation coefficient," μ, which is the probability of
an interaction per unit path length through a material. The linear
attenuation coefficient varies with photon energy and type of material.
Mathematically, the attenuation of a narrow beam of monoenergetic
photons is given by:

I(x) = I0 e^(−μx)

where:
I(x) = Radiation intensity exiting a material of thickness x
I0 = Radiation intensity entering a material
e = Base of natural logarithms (2.71828...)
μ = Linear attenuation coefficient
x = Thickness of material.
This equation shows that the intensity is reduced exponentially with
thickness. I(x) never actually equals zero because x-rays and gamma
rays interact based on probability and there is a finite (albeit small)
probability that a gamma could penetrate through a thick shield
without interacting. Shielding for x-rays and gamma rays then
becomes an ALARA issue and not an issue of shielding to zero
intensities.
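The exponential attenuation law is easy to verify numerically; a minimal sketch:

```python
import math

def transmitted_fraction(mu_per_cm: float, x_cm: float) -> float:
    """Uncollided fraction I(x)/I0 = exp(-mu * x) for a narrow beam."""
    return math.exp(-mu_per_cm * x_cm)

# Even ten mean free paths leave a small but nonzero uncollided fraction:
print(transmitted_fraction(1.0, 10.0))  # ~4.5e-5, never exactly zero
```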

The formula above is used to calculate the radiation intensity from a
narrow beam behind a shield of thickness x, or to calculate the
thickness of absorber necessary to reduce radiation intensity to a
desired level. Tables and graphs are available which give values of μ
determined experimentally for different radiation energies and many
absorbing materials. The larger the value of μ, the greater the
reduction in intensity for a given thickness of material. The fact that
lead has a high μ for x- and gamma radiation is partially responsible for
its wide use as a shielding material.
vi. Rayleigh scattering (Coherent scattering)
Reference: Frank H. Attix. Introduction to Radiological Physics and
Radiation Dosimetry, p.153
Rayleigh scattering is called coherent because the photon is
scattered by the combined action of the whole atom. The event is
elastic in the sense that the photon loses essentially none of its
energy; the atom moves just enough to conserve momentum. The
photon is usually redirected through only a small angle. Therefore the
effect on a photon beam can only be detected in narrow-beam
geometry. Rayleigh scattering contributes nothing to kerma or dose,
since no energy is given to any charged particle, nor is any ionization
or excitation produced.
Reference: James E. Turner. Atoms, Radiation, and Radiation Protection,
pp. 186-187
Photon penetration in matter is governed statistically by the probability
per unit distance traveled that a photon interacts by one physical
process or another. This probability, denoted by μ, is called the linear
attenuation coefficient (or macroscopic cross section) and has the
dimensions of inverse length (e.g., cm⁻¹). The coefficient μ depends on
photon energy and on the material being traversed. The mass
attenuation coefficient μ/ρ is obtained by dividing μ by the density ρ of
the material. It is usually expressed in cm² g⁻¹, and represents the
probability of an interaction per g cm⁻² of material traversed.
Monoenergetic photons are attenuated exponentially in a uniform
target. The number of photons that reach a depth x without having an
interaction is given by N(x) = N0 e^(−μx), where μ is the linear
attenuation coefficient. It follows that e^(−μx) is just the probability
(i.e., N/N0) that a normally incident photon will traverse a slab of
thickness x without interacting. The factor e^(−μx) thus generally
describes the fraction of uncollided photons that go through a shield.
At low photon energies the binding of the atomic electrons is important
and the photoelectric effect is the dominant interaction. High-Z
materials provide greater attenuation and absorption, which decrease
rapidly with increasing photon energy. The coefficients for Pb and U
rise abruptly when the photon energy is sufficient to eject a
photoelectron from the K shell of the atom. When the photon energy is
several hundred keV or greater, the binding of the atomic electrons
becomes relatively unimportant and the dominant interaction is
Compton scattering. Since the elements (except hydrogen) contain
about the same number of electrons per unit mass, there is not a large
difference between the values of the mass attenuation coefficients for
the different materials.

There are sharp increases in the attenuation coefficient for the
photoelectric effect when the photon energy just exceeds the binding
energy of an electron shell (e.g., the K shell) of the atom.
Example
What thickness of concrete and of lead are needed to reduce the
number of 500-keV photons in a narrow beam to one-fourth the
incident number? Compare the thicknesses in cm and in g cm⁻².
Solution
We use Eq. (8.43) with N(x)/N0 = 0.25. The mass attenuation
coefficients μ/ρ obtained from Figs. 8.8 and 8.9 are shown in Table 8.2.
At 500 keV, the linear attenuation coefficient for concrete is
μ = (0.089 cm² g⁻¹)(2.35 g cm⁻³) = 0.209 cm⁻¹; that for lead is
μ = (0.15)(11.4) = 1.71 cm⁻¹. Using Eq. (8.43), we have for concrete
0.25 = e^(−0.209x),
giving x = 6.63 cm. For lead,
0.25 = e^(−1.71x),
giving x = 0.811 cm. The concrete shield is thicker by a factor of
6.63/0.811 = 8.18. In g cm⁻², the concrete thickness is 6.63 cm × 2.35
g cm⁻³ = 15.6 g cm⁻², while that for lead is 0.811 × 11.4 = 9.25 g cm⁻².
The concrete shield is more massive by a factor of
15.6/9.25 = 1.69. Lead is a more efficient attenuator than concrete for
500-keV photons on the basis of mass. Photoelectric absorption is
important at this energy, and the higher atomic number of lead is
effective.
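The same numbers can be reproduced programmatically (a sketch using the μ/ρ values quoted in the example):

```python
import math

def shield_thickness_cm(mu_rho_cm2_g: float, density_g_cm3: float,
                        fraction: float) -> float:
    """Thickness x solving fraction = exp(-mu * x), with mu = (mu/rho) * rho."""
    mu = mu_rho_cm2_g * density_g_cm3  # linear attenuation coefficient, 1/cm
    return -math.log(fraction) / mu

x_concrete = shield_thickness_cm(0.089, 2.35, 0.25)  # 500-keV photons
x_lead = shield_thickness_cm(0.15, 11.4, 0.25)
print(round(x_concrete, 2), round(x_lead, 3))  # ~6.63 cm and ~0.811 cm
```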
The linear attenuation coefficient μ is also equal to the product of the
atomic density N (atoms per unit volume) and the total atomic cross
section σ for all processes:

μ = N σ = (ρ N0 / A) σ,

where ρ is the density, N0 is Avogadro's number, and A is the gram
atomic weight.
Example
What is the atomic cross section of lead for 500-keV photons?
Solution
From Fig. 8.8, the mass attenuation coefficient is μ/ρ = 0.16 cm² g⁻¹.
The gram atomic weight of lead is 207 g. We find from Eq. (8.50) that

σ = (μ/ρ)(A/N0) = (0.16 cm² g⁻¹ × 207 g) / (6.02 × 10²³)
  = 5.5 × 10⁻²³ cm² = 55 barn.
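In code (a sketch; 6.02 × 10²³ is the Avogadro value used in the text):

```python
AVOGADRO = 6.02e23  # atoms per mole
BARN_CM2 = 1e-24    # 1 barn in cm^2

def atomic_cross_section_barn(mu_rho_cm2_g: float, atomic_weight_g: float) -> float:
    """sigma = (mu/rho) * A / N0, converted to barns."""
    return mu_rho_cm2_g * atomic_weight_g / AVOGADRO / BARN_CM2

print(round(atomic_cross_section_barn(0.16, 207.0)))  # ~55 barn for Pb at 500 keV
```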

vii. Photonuclear interactions


Reference: James E. Turner. Atoms, Radiation, and Radiation Protection,
pp. 187-192
A photon can be absorbed by an atomic nucleus and knock out a
nucleon (a proton or a neutron). This process is called
photodisintegration. An example is gamma-ray capture by a ²⁰⁶Pb
nucleus with emission of a neutron: ²⁰⁶Pb(γ, n)²⁰⁵Pb. The photon must
have enough energy to overcome the binding energy of the ejected
nucleon, which is generally several MeV. Photodisintegration can occur
only at photon energies above a threshold value. The kinetic energy of
the ejected nucleon is equal to the photon energy minus the nucleon's
binding energy.
The probability for photonuclear reactions is orders of magnitude
smaller than the combined probabilities for the photoelectric effect,
Compton effect, and pair production. However, unlike these processes,
photonuclear reactions can produce neutrons, which often pose special
radiation-protection problems. In addition, residual nuclei following
photonuclear reactions are often radioactive. For these reasons,
photonuclear reactions can be important around high-energy electron
accelerators that produce energetic photons.
The thresholds for (γ, p) reactions are often higher than those for (γ, n)
reactions because of the repulsive Coulomb barrier that a proton must
overcome to escape from the nucleus. Although the probability for
either reaction is about the same in the lightest elements, the (γ, n)
reaction is many times more probable than (γ, p) in heavy elements.
Example
Compute the threshold energy for the (γ, n) photodisintegration of
²⁰⁶Pb. What is the energy of a neutron produced by absorption of a
10-MeV photon?
Solution
The mass differences, Δ, from Appendix D, are −23.79 MeV for ²⁰⁶Pb,
−23.77 MeV for ²⁰⁵Pb, and 8.07 MeV for the neutron. The mass
difference after the reaction is −23.77 + 8.07 = −15.70 MeV. The
threshold energy needed to remove the neutron from ²⁰⁶Pb is therefore
−15.70 − (−23.79) = 8.09 MeV. Absorption of a 10-MeV photon
produces a neutron and a recoil ²⁰⁵Pb nucleus with a total kinetic
energy of 10 − 8.09 = 1.91 MeV. The absorbed photon contributes
negligible momentum. In analogy with Eq. (3.18) for alpha decay
(E = MQ/(m + M)), the energy of the neutron is
(1.91 × 205)/206 = 1.90 MeV.
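The threshold arithmetic from the example can be sketched as (function names are illustrative):

```python
NEUTRON_DELTA_MEV = 8.07  # neutron mass difference, MeV

def gn_threshold_mev(delta_parent_mev: float, delta_daughter_mev: float) -> float:
    """(gamma, n) threshold from mass differences (Delta values, MeV)."""
    return (delta_daughter_mev + NEUTRON_DELTA_MEV) - delta_parent_mev

def neutron_energy_mev(photon_mev: float, threshold_mev: float,
                       a_daughter: int, a_parent: int) -> float:
    """Neutron's share of the kinetic energy, in analogy with E = MQ/(m + M)."""
    return (photon_mev - threshold_mev) * a_daughter / a_parent

thr = gn_threshold_mev(-23.79, -23.77)  # 206Pb -> 205Pb + n
print(round(thr, 2), round(neutron_energy_mev(10.0, thr, 205, 206), 2))
```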
e. Quantities describing interactions
i. Kerma
Reference: James E. Turner. Atoms, Radiation, and Radiation Protection,
p. 387

Kerma is defined as the total initial kinetic energy of all charged
particles liberated by uncharged radiation (indirectly ionizing
radiation: photons and neutrons) per unit mass of material. This
quantity, which has the dimensions of absorbed dose, is called the
kerma (Kinetic Energy Released per unit MAss). By definition, kerma
includes energy that may subsequently appear as bremsstrahlung and
it also includes Auger-electron energies. The kerma decreases steadily
with increasing depth because of the attenuation of the primary
radiation. Specifically, kerma and absorbed dose at a point in an
irradiated target are equal when charged-particle equilibrium exists
there and bremsstrahlung losses are negligible.

K = dE_tr / dm
*Charged-particle equilibrium (CPE) exists for the volume V if each
charged particle of a given type and energy leaving V is replaced by an
identical particle of the same energy entering V, in terms of
expectation values.
ii. Absorbed dose
Reference: James E. Turner. Atoms, Radiation, and Radiation Protection,
pp. 362-363
The primary physical quantity used in dosimetry is the absorbed dose.
It is defined as the energy absorbed per unit mass from any kind of
ionizing radiation in any target. The unit of absorbed dose, J kg⁻¹, is
called the gray (Gy). The older unit, the rad, is defined as 100 erg g⁻¹
(1 Gy = 100 rad).
Photons produce secondary electrons in air, for which the average
energy needed to make an ion pair is W = 34 eV ip⁻¹ = 34 J C⁻¹.

1 R = (2.58 × 10⁻⁴ C/kg) × (34 J/C) = 8.76 × 10⁻³ J/kg

Thus, an exposure of 1 R gives a dose in air of 8.76 × 10⁻³ Gy
(= 0.876 rad).
Absorbed dose rate in water:

Ḋ_water = 8.7 × 10⁻³ (Gy/R) × [(μ_ab/ρ)_water / (μ_ab/ρ)_air] × Ẋ,

where Ẋ is the exposure rate in R/h.

iii. Exposure
Reference: James E. Turner. Atoms, Radiation, and Radiation Protection,
p. 362
Exposure is defined for gamma and X rays in terms of the amount of
ionization they produce in air. The unit of exposure is called the
roentgen (R).

The roentgen is defined as the amount of X or gamma radiation that
liberates 2.58 × 10⁻⁴ C of charge from 1 kg of air at standard
temperature (273 K) and pressure (760 mmHg) (1 R = 2.58 × 10⁻⁴ C/kg).
The charge involved in the definition of the roentgen includes both the
ions produced directly by the incident photons as well as ions produced
by all secondary electrons.
Example
Show that 1 esu cm⁻³ in air at STP is equivalent to 1 R of exposure.
Solution
Since the density of air at STP is 0.001293 g cm⁻³ and
1 esu = 3.34 × 10⁻¹⁰ C, we have

1 esu/cm³ = (3.34 × 10⁻¹⁰ C) / (0.001293 g × 10⁻³ kg/g) = 2.58 × 10⁻⁴ C/kg
Exposure rate:

Ẋ = Γ A / d²  [R/h],

where Γ is in R·m²/(h·Ci), A is the activity in Ci, and d is the distance
in m.

Exposure rate constant:

Γ ≈ 0.5 Σᵢ fᵢ Eᵢ  [R·m²/(h·Ci)],

where fᵢ is the fraction of transformations emitting a photon of energy
Eᵢ (in MeV).

Example
A 1 mg sample of Cs-137 is left in a fume hood. It is unshielded. The
DOE limit for a radiation area is 5 rem/h at 1 m. Using calculation,
verify whether this does or does not generate a radiation area.
Solution
Cs-137 emits only a 0.662 MeV gamma ray, in 85% of
transformations.
Average energy per disintegration released as gamma radiation:
0.85 × 0.662 = 0.5627 MeV
Estimated specific gamma-ray constant for Cs-137:
Γ ≈ 0.5 × 0.5627 ≈ 0.28 R·m²/(h·Ci)
Activity of the 1 mg sample of Cs-137:
A = λN = (0.693 / T½) × (W N_A / M)
  = [0.693 / (30.17 × 365 × 24 × 3600 s)]
    × [(10⁻³ g × 6.02 × 10²³ atoms/mole) / (137 g/mole)]
    × [1 Ci / (3.7 × 10¹⁰ s⁻¹)]
  = 0.0865 Ci
Exposure rate at distance d = 1 m from a point source of activity
0.0865 Ci:
Ẋ = Γ A / d² = 0.28 R·m²/(h·Ci) × 0.0865 Ci / (1 m)² = 0.0242 R/h
Mass energy-absorption coefficients at 662 keV:
(μ_ab/ρ)_air = 0.0293 cm²/g, (μ_ab/ρ)_water = 0.0323 cm²/g
Dose rate:
Ḋ = 8.7 × 10⁻³ (Gy/R) × (0.0323/0.0293) × 0.0242 R/h
  = 2.32 × 10⁻⁴ Gy/h = 0.0232 rad/h
Dose equivalent rate:
Ḣ = Ḋ W_R = 0.0232 rad/h × 1 = 0.0232 rem/h < 5 rem/h
Thus, this source does not generate a radiation area.
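The full chain of this example can be reproduced in a few lines (a sketch; small differences from the hand calculation arise only from rounding):

```python
import math

CI_TO_BQ = 3.7e10     # disintegrations per second per curie
AVOGADRO = 6.02e23    # atoms per mole

def activity_ci(mass_g: float, half_life_s: float, molar_mass_g: float) -> float:
    """A = lambda * N, converted to curies."""
    lam = math.log(2) / half_life_s           # decay constant, 1/s
    n_atoms = mass_g * AVOGADRO / molar_mass_g
    return lam * n_atoms / CI_TO_BQ

t_half_s = 30.17 * 365 * 24 * 3600            # Cs-137 half-life, seconds
a_ci = activity_ci(1e-3, t_half_s, 137.0)     # 1 mg sample
gamma_const = 0.5 * 0.85 * 0.662              # ~0.28 R m^2 / (h Ci)
exposure_r_h = gamma_const * a_ci / 1.0**2    # exposure rate at 1 m, R/h
# Gy/h -> rad/h (x100); W_R = 1 for photons, so rad/h = rem/h here:
dose_rem_h = 8.7e-3 * (0.0323 / 0.0293) * exposure_r_h * 100
print(round(a_ci, 4), round(exposure_r_h, 4), round(dose_rem_h, 4))
```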
4. Radiation measurement and counting
a. Theory
Reference: Herman Cember. Health Physics. 4th edition, pp.427-428
Radiation interacts with detector materials through the photoelectric
effect, Compton scattering, or pair production. These interactions
liberate electrons from the detector material. These electrons slow
down by causing excitation (scintillation detectors) and ionization
(gas-filled detectors, semiconductor detectors) as they strike bound
electrons in the detector material. The excitation and ionization
produce a measurable effect characteristic of the detector material,
and because the magnitude of the detector response is proportional to
that effect, the radiation can be measured. For example, the detector
material of a scintillation detector emits visible light; the light strikes
the photocathode of the photomultiplier (PM) tube, creating electrons.
Some of the physical and chemical radiation effects that apply to radiation
detection and measurement for health physics purposes are listed in Table
below.

b. Gas-filled detectors
Reference: National Nuclear Security Administration. Qualification Standard
Reference Guide Radiation Protection, pp.48-52
Reference: James E. Turner, Darryl J. Downing, and James S. Bogard, Statistical
Methods in Radiation Physics, pp.241-301
Reference: Herman Cember. Health Physics. 4th edition, p.432
Each type of radiation has a specific probability of interaction with the
detector media. This probability varies with the energy of the incident
radiation and the characteristics of the detector gas. The probability of
interaction is expressed in terms of specific ionization with units of ion pairs
per centimeter. A radiation with a high specific ionization, such as alpha, will
produce more ion pairs in each centimeter that it travels than will a radiation
with a low specific ionization such as gamma.
Generally, the probability of interaction between the incident particle radiation
and the detector gas (and therefore the production of ions) decreases with
increasing radiation energy. For photons at high energies, however, the
overall probability of interaction increases again because of the
increasing contribution of pair production. As the energy of the
particle radiation decreases, the
probability of interaction increases, not only in the gas, but also in the

materials of construction. Low energy radiations may be attenuated by the


walls of the detector and not reach the gas volume. As the number of
radiation events striking a detector increases, the overall probability of an
interaction occurring with the formation of an ion pair increases. In addition,
the number of ion pairs created increases and therefore detector response
increases.
The probability of an interaction occurring between the incident radiation and
a gas atom increases as the number of atoms present increases. A larger
detector volume offers more targets for the incident radiation, resulting in a
larger number of ion pairs. Since each radiation has a specific ionization in
terms of ion pairs per centimeter, increasing the detector size also increases
the length of the path that the radiation traverses through the detector. The
longer the path, the larger the number of ion pairs.

[Figure: Monoenergetic beam of particles stopping in a parallel-plate
ionization chamber with variable potential difference V applied across
plates P1 and P2]
The amount of energy expended in the creation of an ion pair is a function of
the type of radiation, the energy of the radiation, and the characteristics of
the absorber (in this case, the gas). This energy is referred to as the ionization
potential, or W-Value, and is expressed in units of electron volts (eV) per ion
pair. Typical gases have W-Values of 25-50 eV, with an average of about 34 eV
per ion pair.
In the section on detector size, it was shown the probability of interaction
increases with detector size. In many cases, there is a practical limit to
detector size. Instead of increasing detector size to increase the number of
target atoms, increasing the pressure of the gas will accomplish the same
goal. Gas under pressure has a higher density (more atoms per cm³) than a
gas not under pressure, and therefore offers more targets, a higher probability

of interaction, and greater ion pair production. For example, increasing the
pressure of a typical gas to 100 psig increases the density by about 7 times.
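The "about 7 times" figure follows from the ideal-gas law, where density scales with absolute pressure at fixed temperature (the 14.7 psi atmospheric pressure is an assumed standard value):

```python
ATMOSPHERIC_PSI = 14.7  # assumed sea-level atmospheric pressure

def density_ratio_at_gauge(gauge_psi: float) -> float:
    """Ideal-gas density relative to 1 atm: (gauge + atmospheric) / atmospheric."""
    return (gauge_psi + ATMOSPHERIC_PSI) / ATMOSPHERIC_PSI

print(round(density_ratio_at_gauge(100.0), 1))  # ~7.8, i.e. "about 7 times"
```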
Once the ion pair is created, it must be collected in order to produce an output
pulse or current flow from the detector. If left undisturbed, the ion pairs will
recombine, and not be collected. If a voltage potential is applied across the
electrodes, a field is created in the detectors, and the ion pairs will be
accelerated towards the electrodes. The stronger the field, the stronger the
acceleration. As the velocity of the electron increases, the electron may cause
one or more ionizations on its own. This process is known as secondary
ionization. The secondary ion pairs are accelerated towards the electrode and
collected, resulting in a stronger pulse than would have been created by the
ions from primary ionization.

The output of a gas-filled detector when 100% of the primary ion pairs are
collected is called the saturation current.
If the applied voltage potential is varied from 0 to a high value, and the pulse
size recorded, a response curve will be observed. For the purposes of
discussion, this curve is broken into six regions.
The Three Regions Useful for Radiation Detection and Measurement
The ion chamber region, the proportional region, and the Geiger-Müller region
are useful for detector designs used in radiological control. Other regions are
not useful. In the recombination region, the applied voltage is insufficient to
collect all of the ion pairs before some of them recombine. In the limited
proportional region, neither the output current nor the number of output
pulses are proportional to the radiation level. Calibration is impossible. In the
continuous discharge region, the voltage is sufficient to cause arcing and
breakdown of the detector gas.
The Sequence of Events that Occur Following an Initial Ionizing Event in an
Ionization Chamber, a Proportional Counter, and a Geiger-Müller Detector
Ionization Chamber
As the voltage to the detector is increased, a point is reached at which
essentially all of the ions are collected before they can recombine. No
secondary ionization or gas amplification occurs. At this point, the output
current of the detector will be at a maximum for a given radiation intensity
and will be proportional to that incident radiation intensity. Also, the output
current will be relatively independent of small fluctuations in the power supply.
Proportional Counter
As the voltage on the detector is increased beyond the ion chamber region,
the ions created by primary ionization are accelerated by the electric field
towards the electrode. Unlike the ion chamber region, however, the primary
ions gain enough energy in the acceleration to produce secondary ionization
pairs. These newly formed secondary ions are also accelerated, causing
additional ionizations. The large number of events, known as an avalanche,
creates a single, large electrical pulse.
Geiger-Müller Detector
Continuing to increase the high voltage beyond the proportional region will
eventually cause the avalanche to extend along the entire length of the
anode. An avalanche across the entire length of the anode is called a

Townsend avalanche. When this happens, the end of the proportional region is
reached and the Geiger region begins. At this point, the size of all
pulses, regardless of the nature of the primary ionizing particle, is the
same. When operated in the Geiger region, therefore, a counter cannot
distinguish among the several types of radiations. However, the very
large output pulses (>0.25 V) that result from the high gas
amplification in a Geiger-Müller (GM) counter permit either the
complete elimination of a pulse amplifier or the use of an amplifier that
does not have to meet the exacting requirements of high pulse
amplification. Since all the pulses in a GM counter are about the same
height, the pulse height is independent of energy deposition in the gas.
c. Scintillation detectors
Reference: National Nuclear Security Administration. Qualification Standard
Reference Guide Radiation Protection, p.52
Reference: Herman Cember. Health Physics. 4th edition, pp.436-438
Scintillation detectors measure radiation by analyzing the effects of the
excitation of the detector material by the incident radiation. Scintillation is the
process by which a material emits light when excited. In a scintillation
detector, this emitted light is collected and measured to provide an indication
of the amount of incident radiation. Numerous materials scintillate:
liquids, solids, and gases. A common example is a television picture
tube. A material that scintillates is commonly called a phosphor or a
fluor. The scintillations are commonly detected by a photomultiplier tube.
A scintillation detector is a transducer that changes the kinetic energy of an
ionizing particle into a flash of light. Scintillation counters are widely used to
count gamma rays and low-energy beta particles.

d. Semiconductor detectors
Reference: National Nuclear Security Administration. Qualification Standard
Reference Guide Radiation Protection, pp.52-56
Note: Solid-state detectors are more commonly referred to as semiconductor
detectors (for example, germanium, a common semiconductor used in
radiation detection).
If a strong electric field is applied to the crystal, the electron in the conduction
band moves in accordance with the applied field. Similarly, in the group of
filled bands, an electron from a lower energy band moves up to fill the hole
(vacancy) in the valence band. The hole it leaves behind is filled by an
electron from yet a lower energy band. This process continues, so the net
effect is that the hole appears to move down through the energy bands in the
filled group. Thus, the electron moves in one direction in the unfilled group of
bands, while the hole moves in the opposite direction in the filled group of
bands. This can be likened to a line of cars awaiting a toll booth, the toll booth
being the forbidden band. As a car leaves the filled valence band for the
unfilled conduction band, a hole is formed. The next car in line fills this hole,
and creates a hole, and so on. Consequently, the hole appears to move back
through the line of cars.

Any impurities in the crystalline structure can affect the conducting ability of
the crystalline solid. There are always some impurities in a semiconductor, no
matter how pure it is. However, in the fabrication of semiconductors,
impurities are intentionally added under controlled conditions. If the impurity
added has an excess of outer electrons, it is known as a donor impurity,
because the extra electron can easily be raised, or donated, to the
conduction band. In effect the presence of this donor impurity decreases the
gap between the group of filled bands and the group of unfilled bands. Since
conduction occurs by the movement of a negative charge, the substance is
known as an n-type material. Similarly, if the impurity does not contain
enough outer electrons, a vacancy or hole exists. This hole can easily accept
electrons from other energy levels in the group of filled bands, and is called an
acceptor substance. Although electrons move to fill holes, as described above,
the appearance is that the holes move in the opposite direction. Since this
impurity gives the appearance of positive holes moving, it is known as a p-type material.
Since any crystalline material has some impurities in it, a given semiconductor
will be an n-type or a p-type depending on which concentration of impurity is
higher. If the number of n-type impurities is exactly equal to the number of p-type impurities, the crystalline material is referred to as an intrinsic
semiconductor.
A semiconductor that has been doped with the proper amount of the correct
type of impurity to make the energy gap between the two groups of bands
just right makes a good radiation detector. A charged particle loses energy by
creating electron-hole pairs.
If the semiconductor is connected to an external electrical field, the collection
of electron-hole pairs can lead to an induced charge in the external circuit
much as the collection of electron-positive atom pairs (ion pairs) is used to
measure radiation in an ion chamber. Therefore, the semi-conductor detector
relies on the collection of electron-hole pairs to produce a usable electrical
signal.
One disadvantage of the semiconductor detector is that the impurities, in
addition to controlling the size of the energy gap also act as traps. As
electrons (or holes) move through the crystalline material, they are attracted
to the impurity areas or centers because these impurity centers usually have
a net charge. The carrier (electron or hole) may be trapped for a while at the
impurity center and then released. As it begins to move again, it may be
trapped at another impurity center and then released again. If the electron or
hole is delayed long enough during transit through the crystal, it may not add
to the electrical output.
Thus, although the carrier is not actually lost, the net effect on readout is that
it is lost. Another disadvantage of the semiconductor detector is that the
presence of impurities in the crystal is hard to control to keep the energy gap
where it is desired. A newer technique, the junction counter, has been
developed to overcome these disadvantages.
In a semiconductor junction counter, an n-type substance is united with a
p-type substance. When the two are diffused together to make a diffused
junction, a depletion layer is created between the two materials. (This
depletion layer is formed by the diffusion of electrons from the n-type
material into the p-type material and the diffusion of holes from the p-type
material into the n-type material.)
This results in a narrow region which is depleted of carriers and which behaves
like an insulator bounded by conducting electrodes. That is, a net charge on
each side of the depletion region impedes the further transfer of charge. This
charge is positive in the n-region and negative in the p-region. This barrier can
be broken if we apply an external voltage to the system and apply it with the
proper bias. A forward bias is applied when we connect the positive
electrode to the p-region. In this case, the barrier breaks down and electrons
flow across the junction. However, if we apply a reverse bias (negative
electrode connected to the p-region), the barrier height is increased and the
depleted region is extended.
A further advancement in junction counters is the p-i-n type. This counter has
an intrinsic region between the n and p surface layers. (An intrinsic
semiconductor was discussed earlier and is effectively a pure semiconductor.)
The presence of an intrinsic region effectively creates a thicker depletion area.
A germanium-lithium Ge(Li) detector is an example of this type of detector.

Lithium (an n-type material) is diffused into p-type germanium. The n-p
junction that results is put under reverse bias, and the temperature of the
material is raised. Under these conditions, the lithium ions drift through the
germanium, balancing n and p material and forming an intrinsic region.
The heat and bias are removed and the crystal cooled quickly to liquid
nitrogen temperatures. This intrinsic region serves as the region in which
interactions can take place. The intrinsic region can be thought of as a built-in
depletion region.
Due to the large size of the depletion region and the reduced mobility of the
electrons and holes at the depressed temperature, a high bias voltage is
necessary to cause conduction. The voltage is chosen high enough to collect
ion pairs, but low enough to prevent noise.
Due to the increased stopping power of germanium over air at -321 °F, the
energy required to create an ion pair is only 2.96 eV, compared to 33.7 eV for
air. This means that, in theory, a germanium detector will respond to any
radiation that will create ion pairs. In actuality, however, the response to
radiations other than gamma is limited by the materials surrounding the
detector, material necessary to maintain temperature. Another consideration
limiting response is the geometry of the crystal. The most efficient response
occurs when the interaction takes place in the center of the intrinsic region;
this can only occur for gamma rays.
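The quoted w-values can be compared directly in terms of carrier yield. The sketch below assumes a 1 MeV energy deposit purely for illustration; the w-values are the ones given above.

```python
# Comparing carrier yield in germanium vs. air, using the w-values quoted
# above (2.96 eV per electron-hole pair in cold Ge; 33.7 eV per ion pair in
# air). The 1 MeV energy deposit is an assumed example value.

W_GE_EV = 2.96   # eV per electron-hole pair in Ge at liquid-nitrogen temperature
W_AIR_EV = 33.7  # eV per ion pair in air

def carriers(energy_ev: float, w_ev: float) -> float:
    """Mean number of carrier pairs produced by a given energy deposit."""
    return energy_ev / w_ev

deposit_ev = 1.0e6  # 1 MeV deposited in the detector (illustrative)
print(f"Ge : {carriers(deposit_ev, W_GE_EV):.3g} electron-hole pairs")
print(f"air: {carriers(deposit_ev, W_AIR_EV):.3g} ion pairs")
```

The roughly elevenfold larger carrier yield per unit energy in germanium is one reason semiconductor detectors give better energy resolution than gas-filled chambers.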
Radiation interacts with atoms in the intrinsic region to produce electron-hole
pairs. The presence of ion pairs in the depletion region causes current flow.
This is similar to a transistor, in that instead of inducing charges in the center
section (the base in a transistor) by a battery or another source, the charge is
induced by the creation of ion pairs. Since it is not necessary for the ion
produced to reach the p and n region to be collected, as in a gas filled
chamber, the response is faster.
Since the number of ion pairs produced is a function of the incident energy,
and the resulting current is a function of the number of ion pairs, the Ge(Li)
response is in terms of energy.
e. Special detectors
Reference: National Nuclear Security Administration. Qualification Standard
Reference Guide Radiation Protection, pp.32-56
Thermoluminescent Dosimeter

Thermoluminescence (TL) is the ability of some materials to convert the
energy from radiation to a radiation of a different wavelength, normally in the
visible light range. There are two categories of thermoluminescence:
fluorescence and phosphorescence.
Fluorescence
This is emission of light during or immediately after irradiation of the
phosphor. This is not a particularly useful reaction for thermoluminescent
dosimetry (TLD) use.
Phosphorescence
This is the emission of light after the irradiation period. The delay time can be
from a few seconds to weeks or months. This is the principle of operation used
for TLD. The property of thermoluminescence of some materials is the main
method used for personnel dosimeters at DOE facilities.
TLDs use phosphorescence as their means of detection of radiation. Electrons
in some solids can exist in two energy states, called the valence band and the
conduction band. The difference between the two bands is called the band
gap. Electrons in the conduction band or in the band gap have more energy
than the valence band electrons. Normally in a solid, no electrons exist in
energy states contained in the band gap. This is a forbidden region.
In some materials, defects in the material exist or impurities are added that
can trap electrons in the band gap and hold them there. These trapped
electrons represent stored energy for the time that the electrons are held, as
shown below in figure 8. This energy is given up if the electron returns to the
valence band.

In most materials, this energy is given up as heat in the surrounding material;
however, in some materials a portion of the energy is emitted as light photons.
This property is called luminescence. Heating of the TL material causes the
trapped electrons to return to the valence band. When this happens, energy is
emitted in the form of visible light. The light output is detected and measured
by a photomultiplier tube and a dose equivalent is then calculated. A typical
basic TLD reader contains the following components:
Heater - raises the phosphor temperature
Photomultiplier tube - measures the light output
Meter/Recorder - display and record data
A glow curve can be obtained from the heating process. The light output from
TL material is not easily interpreted. Multiple peaks result as the material is
heated and electrons trapped in shallow traps are released. This results in a
peak as these traps are emptied. The light output drops off as these traps are
depleted. As heating continues, the electrons in deeper traps are released.
This results in additional peaks. Usually the highest peak is used to calculate
the dose equivalent. The area under the curve represents the radiation energy
deposited on the TLD.
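Since the area under the glow curve represents the deposited radiation energy, a reader effectively integrates the light output and applies a calibration. The sketch below illustrates this with a trapezoidal integration; the temperature points, PMT readings, and calibration factor are all hypothetical values chosen only to show the calculation.

```python
# Sketch: estimating dose from a TLD glow curve by integrating the light
# output over the heating cycle. All numerical values are hypothetical.

def glow_curve_area(temps, light):
    """Trapezoidal area under the light-output vs. temperature curve."""
    area = 0.0
    for i in range(1, len(temps)):
        area += 0.5 * (light[i] + light[i - 1]) * (temps[i] - temps[i - 1])
    return area

temps = [100, 150, 200, 250, 300]   # heater temperature, deg C (assumed)
light = [0.0, 2.0, 8.0, 3.0, 0.5]   # PMT signal, arbitrary units (assumed)
CAL = 0.01                          # mSv per unit area (hypothetical calibration)

dose = CAL * glow_curve_area(temps, light)
print(f"estimated dose: {dose:.2f} mSv")
```

In practice the reader may instead use the height of the main glow peak, as the text notes; the integral form is shown here because it maps directly onto the statement that the area represents the deposited energy.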
Albedo Dosimeter
Reference: Radiation Protection Competency 1.3. p.RP 1.3-16
TLDs used to detect neutrons incorporate two isotopes of lithium, Li-6 and
Li-7, both of which are equally sensitive to gamma radiation. However, Li-6 has a
large cross section for the thermal neutron (n, α) reaction. Production of the
alpha particle initiates the thermoluminescence process that ultimately results
in a measure of the dose due to thermal neutrons; whereas, Li-7 is relatively
insensitive to thermal neutrons. The Li-6 phosphor will read both neutron and
gamma radiation interactions; whereas, the Li-7 phosphor will read only
gamma interactions. Neutron dose is determined by subtracting the Li-7
reading (r) from the Li-6 reading (n+r) and applying a conversion factor to the
difference.
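The Li-6/Li-7 subtraction described above is simple arithmetic; a minimal sketch, with hypothetical readings and an assumed conversion factor:

```python
# Neutron dose from a two-phosphor albedo TLD, as described above:
# the Li-6 phosphor reads neutron + gamma (n + r), Li-7 reads gamma only (r).
# The conversion factor is a hypothetical calibration value.

def neutron_dose(li6_reading: float, li7_reading: float,
                 conversion: float = 1.0) -> float:
    """Subtract the gamma-only reading and apply a calibration factor."""
    if li6_reading < li7_reading:
        raise ValueError("Li-6 (n+r) reading should not be below Li-7 (r)")
    return (li6_reading - li7_reading) * conversion

# Hypothetical readings in mSv-equivalent units:
print(neutron_dose(li6_reading=5.2, li7_reading=1.7, conversion=0.8))
```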

The term albedo refers to reflection. Some of the thermal neutrons detected
by the Li-6 are originally fast neutrons that interact with hydrogen in the body,
are thermalized, reflected or scattered off the body and detected. This makes
the albedo dosimeter position sensitive; therefore, it must be properly
oriented. Because the neutrons can be moderated to thermal energies, they
are reflected from the body through the back of the badge into the albedo
dosimeter. Therefore, it is important to wear the dosimeter extremely close to
the body (on the flesh) to obtain accurate measurements. The front of the
badge is shielded with cadmium to reject external thermal neutrons.
Pocket Dosimeter
Pocket dosimeters are compact, easy-to-carry devices that indicate an
individual's accumulated radiation dose at any time, thus eliminating the
delay of film badge/TLD processing. However, because of the possibility of
faulty readings due to rough treatment, the dosimeter reading does not
constitute a permanent legal record of dose received. A pocket dosimeter can
be self-reading or not. In the self-reading type, a small compound microscope
is used to observe the response. The type which is not self-reading, called the
pocket chamber, is similar in construction to the self-reading type, but another
instrument called the charger reader must be used to read it. The self-reading
type is normally preferred since it can be read anywhere and at any time.
A self-reading pocket dosimeter consists of a small air-filled chamber in which
a quartz-fiber electrometer is suspended, together with a small microscope and
a graduated scale across which the shadow of the quartz fiber moves to
indicate the accumulated dose.
The design and operation of a self-reading pocket dosimeter utilizes the
principle of discharging a pair of oppositely charged surfaces when the air
between them is exposed to ionizing radiation. The electric charge required to
attract the ionized gas particles is impressed on the electrometer and the
chamber wall by means of a suitable charging unit. Ionizing radiation
penetrating the chamber forms positively and negatively charged gas
particles. These charged particles are attracted to the oppositely charged
surface; i.e., the negative particles are attracted to the electrometer and the
positive particles are attracted to the chamber wall. The migration of the
negative particles to the electrometer permits the fiber to move closer to the
frame, which in turn causes the shadow of the fiber to move across the
calibrated scale.
Film Badge

Reference: James E. Turner. Atoms, Radiation, and Radiation Protection, pp.


275-279.
Film emulsions contain small crystals of a silver halide (e.g., AgBr), suspended
in a gelatine layer spread over a plastic or glass surface, wrapped in light-tight
packaging. Under the action of ionizing radiation, some secondary electrons
released in the emulsion become trapped in the crystalline lattice, reducing
silver ions to atomic silver. Continued trapping leads to the formation of
microscopic aggregates of silver atoms, which comprise the latent image.
When developed, the latent images are converted into metallic silver, which
appears to the eye as darkening of the film. The degree of darkening, called
the optical density, increases with the amount of radiation absorbed. An
optical densitometer can be used to measure light transmission through the
developed film.
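Optical density is conventionally defined as OD = log10(I0/I), where I0 is the light incident on the developed film and I the light transmitted; a short sketch:

```python
# Optical density as measured by a densitometer: OD = log10(I0 / I).
# Higher darkening -> less transmitted light -> higher OD.

import math

def optical_density(i_incident: float, i_transmitted: float) -> float:
    return math.log10(i_incident / i_transmitted)

# If only 1% of the light passes through the developed film, OD = 2:
print(optical_density(100.0, 1.0))  # 2.0
```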
Doses from gamma and beta radiation can be inferred by comparing
densitometer readings from exposed film badges with readings from a
calibrated set of films given different, known doses under the same
conditions. The darkening response of film to neutrons, on the other hand, is
too weak to be used in this way for neutron personnel monitoring.
Film calibration and the use of densitometer readings to obtain dose would
appear, in principle, to be straightforward. In practice, however, the procedure
is complicated by a number of factors. First, the density produced in film from
a given dose of radiation depends on the emulsion type and the particular lot
of the manufacturer. Second, film is affected by environmental conditions,
such as exposure to moisture, and by general aging. Elevated temperatures
contribute to base fog in an emulsion before development. Third, significant
variations in density are introduced by the steps inherent in the film-development process itself. A serious problem of a different nature for dose
determination is presented by the strong response of film to low-energy
photons.
Film badges are also used for personnel monitoring of beta radiation, for
which there is usually negligible energy dependence of the response. For
mixed beta - gamma radiation exposures, the separate contribution of the
beta particles is assessed by comparing (1) the optical density behind a
suitable filter that absorbs them and (2) the density through a neighboring
open window. The latter consists only of the structural material enclosing
the film. Since beta particles have short ranges, a badge that has been
exposed to them alone will be darkened behind the open window, but not
behind the absorbing filter. Such a finding would also result from exposure to

low-energy photons. To distinguish these from beta particles, one can employ
two additional filters, one of high and the other of low atomic number, such as
silver and aluminum. They should have the same density thickness, so as to
be equivalent beta-particle absorbers. The high-Z filter will strongly absorb
low energy photons, which are attenuated less by the low-Z material. The
presence of low-energy photons will contribute to a difference in darkening
behind the two.
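The multi-filter reasoning above can be expressed as a small decision routine. The density values and the tolerance below are hypothetical, and the function is only a sketch of the logic, not a real badge-reading algorithm.

```python
# Sketch of the filter logic described above for a film badge: betas darken
# the open window but not the absorbing filter; low-energy photons do the
# same but are also attenuated much more by a high-Z filter (e.g. silver)
# than by a low-Z filter (e.g. aluminum) of equal density thickness.
# Threshold and readings are hypothetical.

def classify(od_open, od_beta_filter, od_high_z, od_low_z, tol=0.05):
    """Crude classification from relative darkening behind the filters."""
    if od_open > od_beta_filter + tol:
        # Darkening behind the open window but not the absorbing filter:
        # either beta particles or low-energy photons.
        if od_low_z > od_high_z + tol:
            # High-Z filter strongly absorbs low-energy photons.
            return "low-energy photons present"
        return "beta particles"
    return "penetrating photons (or no exposure)"

print(classify(od_open=1.2, od_beta_filter=0.3, od_high_z=0.35, od_low_z=0.9))
```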
Readings from badges worn by personnel were analyzed to provide a number
of dose quantities, as mandated by regulations, basically in the following
ways. The optical density behind the Cd-Au-Cd filter served as a measure of
deep dose to tissues inside the body. The thickness of the plastic filter plus
the paper and film wrapper (300 mg cm^-2) corresponded to the 3-mm depth
specified for the lens of the eye. Assessment of skin dose (specified at a
depth of 7 mg cm^-2) was based on the Cd-Au-Cd reading and the difference
between the densities behind the window and the plastic filter.
The badge described by Turner also contained three glass rods, surrounded by
different shields of lead, copper, and plastic. Comparing their relative
responses gave an indication of the effective energy of the photons. The
response of the glass rods would be potentially important for accidental
exposures to high-level radiation.
Multi-element film dosimeters for personnel monitoring became largely
replaced by thermoluminescent dosimeters (next section) during the 1980s.
5. Dosimetry
a. Fundamentals and concepts
Reference: Frank H. Attix. Introduction to Radiological Physics and Radiation
Dosimetry, p.264
Strictly, radiation dosimetry (or simply dosimetry) deals with the
measurement of the absorbed dose or dose rate resulting from the interaction
of ionizing radiation with matter. More broadly it refers to the determination
(i.e., by measurement or calculation) of these quantities, as well as any of the
other radiologically relevant quantities such as exposure, kerma, fluence, dose
equivalent, energy imparted, and so on. One often measures one quantity
(usually the absorbed dose) and derives another from it through calculations
based on the previously defined relationships.
b. Cavity theory
Reference: Audun Sanderud. Cavity theory - dosimetry of small volume, FYS-KJM 4710
Problem: the dose to water (or another substance) is wanted, but the dose is
measured with a detector (dosimeter) which has a different composition
(atomic number, density). How do we transform the dose to the detector into
the dose to water?
The dose determination is based on both measurements and calculations; it
depends on knowledge of the radiation interactions.
Cavity theory treats the dose to a small volume, or a volume of low density,
and is useful for charged particles.
Consider a field of charged particles in a medium x, with a cavity k
positioned inside.
When the fluence is unchanged over the cavity, the dose to the cavity is
D_k = Φ (S/ρ)_k.
When the cavity is absent, the dose at the same point in x is
D_x = Φ (S/ρ)_x.
The dose relation therefore becomes
D_x / D_k = (S/ρ)_x / (S/ρ)_k.
i. Bragg-Gray theory
Reference: James E. Turner. Atoms, Radiation, and Radiation Protection,
pp. 368-370

The B-G relation states that the ratio of the dose in the cavity to the dose
in the medium where the dose is to be determined is given by the ratio of
their stopping powers.
Assumptions (B-G conditions):
The deposited dose is due only to charged particles crossing the cavity
The charged-particle fluence does not change over the cavity
The Bragg-Gray principle provides a means of relating ionization
measurements in a gas to the absorbed dose in some convenient material from
which a dosimeter can be fabricated. Consider a gas in a walled enclosure
irradiated by photons, as illustrated in Fig. 12.3. The photons lose energy in
the gas by producing secondary electrons there, and the ratio of the energy
deposited and the mass of the gas is the absorbed dose in the gas. This
energy is proportional to the amount of ionization in the gas when electronic
equilibrium exists between the wall and the gas. When the wall and gas are of
different atomic composition, the absorbed dose in the wall can be obtained
from the ionization in the gas. In this case, the cavity size and gas pressure
must be small, so that secondary charged particles lose only a small fraction
of their energy in the gas. The absorbed dose then scales as the ratio Sw/Sg
of the mass stopping powers of the wall and gas:

Dw / Dg = Sw / Sg
or
Dw = Ng W Sw / (m Sg)
Example
A chamber satisfying the Bragg-Gray conditions contains 0.15 g of gas
with a W value of 33 eV ip^-1. The ratio of the mass stopping powers of
the wall and the gas is 1.03. What is the current when the absorbed
dose rate in the wall is 10 mGy h^-1?
Solution
We apply Eq. (12.11) to the dose rate, with Sw/Sg = 1.03. From the given
conditions, Dw = 10 mGy h^-1 = (0.010 J kg^-1)/(3600 s) = 2.78 × 10^-6
J kg^-1 s^-1. The rate of ion-pair production in the gas is, from Eq. (12.11),
Ng = Dw m Sg / (W Sw)
   = (2.78 × 10^-6 J/kg s × 0.15 × 10^-3 kg) / (33 eV/ip × 1.6 × 10^-19 J/eV × 1.03)
   = 7.67 × 10^7 ip/s.
Since the electronic charge is 1.60 × 10^-19 C, the current is
I = Ng e = 7.67 × 10^7 ip/s × 1.60 × 10^-19 C/ip = 1.23 × 10^-11 A.
Alternatively, expressing W = 33 J C^-1 and remembering that the conversion
factor from eV to J is numerically equal to the magnitude of the electronic
charge e, we can write, in SI units,
I = Ng e = Dw m e Sg / (W Sw)
  = (2.78 × 10^-6 J/kg s × 0.15 × 10^-3 kg) / (33 J/C × 1.03)
  = 1.23 × 10^-11 A.
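The same example can be checked numerically. This sketch works entirely in SI units, using W = 33 J C^-1 so that the electronic charge cancels out of the expression for the current:

```python
# Bragg-Gray chamber current from the worked example above:
# I = Dw * m / (W * (Sw/Sg)), with W expressed in J/C.

DOSE_RATE_W = 0.010 / 3600     # 10 mGy/h expressed in Gy/s (J kg^-1 s^-1)
MASS_GAS = 0.15e-3             # kg of gas in the chamber
W_PER_CHARGE = 33.0            # W value in J/C (numerically eV per ion pair)
SW_OVER_SG = 1.03              # wall-to-gas mass stopping power ratio

current = DOSE_RATE_W * MASS_GAS / (W_PER_CHARGE * SW_OVER_SG)
print(f"I = {current:.3g} A")  # ~1.23e-11 A, matching the example
```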

ii. Spencer cavity theory
Reference: Frank H. Attix. Introduction to Radiological Physics and
Radiation Dosimetry, pp.242-248
Goals: to account for δ-rays and the cavity-size effect.
A delta ray (δ-ray) is a secondary electron with enough energy to escape
a significant distance away from the primary radiation beam and produce
further ionization.
Starts with the two B-G conditions (narrow gas region and dose produced
only by crossing particles) and two additional assumptions (existence of
CPE and absence of bremsstrahlung generation).


Introduces a mean energy Δ, needed to cross the cavity.
Based on their energy T, electrons in the spectrum are divided into fast
(T ≥ Δ) and slow (T < Δ) groups.
Taking into account the adjustment for the electron spectrum, the dose to
the wall involves R(T, T0), the ratio of the differential electron fluence,
including δ-rays, to that of the primary electrons alone.
From this one obtains the ratio of the doses in the cavity and the wall.

The Spencer cavity theory gives somewhat better agreement with
experimental observations for small cavities than does simple B-G theory,
by taking account of δ-ray production and relating the dose integral to the
cavity size.
However, it still relies on the B-G conditions, and therefore fails to the
extent that they are violated.
In particular, in the case of cavities that are large (i.e., comparable to
the range of the secondary charged particles generated by indirectly
ionizing radiation), neither B-G condition is satisfied.

iii. Burlin cavity theory


Reference: Frank H. Attix. Introduction to Radiological Physics and
Radiation Dosimetry, pp.248-255

What happens when the cavity increases in volume or density, and the
photons are also absorbed?
Burlin derived a theory in which both electron and photon absorption in
the cavity are accounted for:
D_c / D_w = d (S_c/S_w) + (1 - d) (μ_en/ρ)_c / (μ_en/ρ)_w
d = 1: no photon absorption (small cavity); the B-G theory is recovered.
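Burlin's general relation is easy to evaluate once the weighting parameter d and the two material ratios are known. In the sketch below the ratio values are assumed, not taken from tables; d = 1 recovers the Bragg-Gray (small-cavity) limit and d = 0 the large-cavity (photon-absorption) limit.

```python
# Burlin cavity relation, f = D_cavity / D_wall = d*s + (1-d)*mu, where
# s is the mean mass-stopping-power ratio (cavity/wall) and mu the mean
# mass-energy-absorption-coefficient ratio. Ratio values are hypothetical.

def burlin_f(d: float, s_ratio: float, mu_ratio: float) -> float:
    """Dose ratio cavity/wall; d runs from 1 (small cavity) to 0 (large)."""
    if not 0.0 <= d <= 1.0:
        raise ValueError("d must lie in [0, 1]")
    return d * s_ratio + (1.0 - d) * mu_ratio

s_ratio, mu_ratio = 0.98, 1.05   # assumed cavity/wall ratios
print(burlin_f(1.0, s_ratio, mu_ratio))   # Bragg-Gray limit: s_ratio
print(burlin_f(0.0, s_ratio, mu_ratio))   # large-cavity limit: mu_ratio
```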

Summary
Bragg-Gray theory works best for small cavities and media of similar
atomic numbers.
Spencer theory adds delta rays and the cavity-size effect.
Burlin theory works well for a wide range of cavity sizes and materials,
though no electron scattering is included.
Cavity theories create a basis for dosimetry.

iv. Fano theorem


Reference: Frank H. Attix. Introduction to Radiological Physics and
Radiation Dosimetry, pp.255-257

In practice the requirement for a small cavity is circumvented by matching
the atomic numbers of the wall and cavity materials.
Theorem statement: In an infinite medium of given atomic composition
exposed to a uniform field of indirectly ionizing radiation, the field of
secondary radiation is also uniform and independent of the density of the
medium, as well as of density variations from point to point.
The proof employs the radiation transport equations.

v. Other cavity theories


Reference: Frank H. Attix. Introduction to Radiological Physics and
Radiation Dosimetry, pp.257-259

Kearsley theory - a modification of Burlin's theory that accounts for
electron scattering; it predicts the dose distribution across the cavity.
Luo Zheng-Ming (1980) has developed a cavity theory based on the
application of the electron transport equation in the cavity and the
surrounding medium. It is very detailed and provides good agreement with
experiment.
The effort to develop new and more complicated cavity theories may be
diminishing due to strong competition from Monte Carlo methods.

vi. Interfaces
Reference: Frank H. Attix. Introduction to Radiological Physics and
Radiation Dosimetry, pp.257-259
Dose near interfaces between dissimilar media under gamma irradiation:
A minimum dose is observed just beyond the interface when the photons
go from a higher-Z to a lower-Z medium.
A maximum is observed just beyond the interface when the photons go
from a lower-Z to a higher-Z medium.
The tissue-bone interface is an example.

c. Radiation equilibrium
Reference: Frank H. Attix. Introduction to Radiological Physics and Radiation
Dosimetry, pp.61-65
Introduction
The concepts of radiation equilibrium (RE) and charged-particle equilibrium
(CPE) are useful in radiological physics as a means of relating certain basic
quantities. That is, CPE allows the equating of the absorbed dose D to the
collision kerma K, while radiation equilibrium makes D equal to the net rest
mass converted to energy per unit mass at the point of interest.

Consider an extended volume V containing a distributed radioactive source
with a smaller internal volume v about a point of interest, P.
Radioactivity is emitted isotropically on average.
V is required to be large enough that the maximum distance of penetration
d of any emitted ray and its progeny (i.e., scattered and secondary rays)
is less than the minimum separation s of the boundaries of V and v.
Radiation equilibrium (RE) exists for the volume v if the following four
conditions exist throughout V (in the non-stochastic limit):
a. The atomic composition of the medium is homogeneous
b. The density of the medium is homogeneous
c. The radioactive source is uniformly distributed
d. There are no electric or magnetic fields present to perturb the
charged-particle paths, except the fields associated with the

randomly oriented individual atoms


Consider a plane T that is tangent to the volume v at a point P, and the
rays crossing the plane per unit area. In the non-stochastic limit there
will be perfect reciprocity of rays of each type and energy crossing both
ways, due to the uniform distribution of the radioactive source within the
sphere S. This will be true for all possible orientations of tangent planes
around the volume v; therefore, in the non-stochastic limit, for each type
and energy of ray entering v, another identical ray leaves. This condition
is called radiation equilibrium (RE) with respect to v.
As a consequence of radiation equilibrium the energy carried into v and
that carried out of v are balanced for both indirectly and directly ionizing
radiation:
(R_in)_u = (R_out)_u and (R_in)_c = (R_out)_c.
The energy imparted can then be simplified to ε = ΣQ.
Therefore, under RE conditions the expectation value of the energy
imparted to the matter in the volume v is equal to that emitted by the
radioactive material in v.


In the non-stochastic consideration the volume v can be reduced to an
infinitesimal dv; then RE exists at the point P.
Since D = dε/dm, under the condition of radiation equilibrium at a point in
a medium, the absorbed dose is equal to the expectation value of the
energy released by the radioactive material per unit mass at that point,
ignoring neutrinos: D = d(ΣQ)/dm.


The concept of RE has practical importance in the fields of nuclear
medicine and radiobiology, where distributed radioactive sources may be
introduced into the human body or other biological systems for diagnostic,
therapeutic, or analytical purposes. The resulting absorbed dose at any
given point depends on the size of the object relative to the radiation
range and on the location of the point within the object.
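The RE energy balance can be illustrated with trivial bookkeeping. The numerical values below are arbitrary and serve only to show that the in/out terms cancel, leaving the energy released by the source:

```python
# Energy imparted to a volume v (non-stochastic limit):
#   eps = (R_in)_u - (R_out)_u + (R_in)_c - (R_out)_c + sum(Q)
# where u = uncharged (indirectly ionizing), c = charged radiation and
# sum(Q) is the net rest-mass energy converted in v. Under radiation
# equilibrium both in/out pairs cancel, so eps = sum(Q).

def energy_imparted(r_in_u, r_out_u, r_in_c, r_out_c, sum_q):
    return (r_in_u - r_out_u) + (r_in_c - r_out_c) + sum_q

# Radiation equilibrium: radiant energy in equals radiant energy out.
eps = energy_imparted(r_in_u=4.0, r_out_u=4.0,
                      r_in_c=2.5, r_out_c=2.5, sum_q=7.0)
print(eps)  # 7.0 -- equal to the energy released by the source in v
```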
d. Charged particle equilibrium
Reference: Frank H. Attix. Introduction to Radiological Physics and Radiation
Dosimetry, p.65
Charged-particle equilibrium (CPE) exists for the volume v if each charged
particle of a given type and energy leaving v is replaced by an identical
particle of the same energy entering (in terms of expectation values).
If CPE exists, (R_in)_c = (R_out)_c.
The RE condition is sufficient for CPE to exist.
In many practical cases the RE condition is not satisfied, but it can be
adequately approximated if the CPE condition exists. Consider two general
situations:
distributed radioactive sources
indirectly ionizing radiation from external sources
i. Distributed sources
Reference: Frank H. Attix. Introduction to Radiological Physics and
Radiation Dosimetry, pp.65-67
Case 1
For the trivial case of a distributed source emitting only charged
particles, in a system where radiative losses are negligible:
The dimension s (the minimum separation of v from the boundary of V)
is taken to be greater than the maximum range d of the particles.
If all four of the conditions a-d are satisfied, both RE and CPE exist
(they are identical for this case).
Case 2
Consider now the case where both charged particles and relatively more
penetrating indirectly ionizing radiation are emitted.
Let the distance d be the maximum range of the charged particles only,
and the distance s > d.
Conditions a through d are satisfied.
Only CPE exists in this case.
RE is not attained, since the indirectly ionizing rays escaping from the
volume v are not replaced.
Since the indirectly ionizing rays are so penetrating that they do not
interact significantly in v, the expectation value of the energy imparted
is equal to the kinetic energy given to charged particles by the
radioactive source in v, less any radiative losses by those particles
while in v.
The average absorbed dose in v is thus D = ε/m under the CPE condition.
Now assume that the size of the volume V occupied by the source is
expanded so that the distance s increases to being greater than the
effective range of the indirectly ionizing rays and their secondaries.
This transition will cause the (R_in)_u term to increase until it equals
(R_out)_u in value.
RE will be restored, and the energy imparted is transformed into that for
RE: ε = ΣQ.

Case 3
A distributed source emitting penetrating indirectly ionizing radiation.
Achieving CPE will also require that RE is attained; the equations given
above for RE then apply.

ii. Indirectly ionizing radiation


Reference: Frank H. Attix. Introduction to Radiological Physics and
Radiation Dosimetry, pp.67-70

A volume V contains a smaller volume v.
The boundaries of v and V are required to be separated by at least the
maximum distance of penetration of any secondary charged particle
present. If the following conditions are satisfied throughout V, CPE will
exist for the volume v:
a. The atomic composition of the medium is homogeneous
b. The density of the medium is homogeneous
c. There exists a uniform field of indirectly ionizing
radiation (rays are only negligibly attenuated passing
through the medium)

d. No inhomogeneous electric or magnetic fields are


present
These conditions are similar to those of RE, except for:
Condition c: a uniform field of radiation replaces the uniform radioactive
source.
The separation of the boundaries of v and V is required to be at least the
maximum distance of penetration of any secondary charged particle,
rather than that of the most penetrating radiation (indirectly ionizing).
The last condition, d, has been shown to be a sufficient substitute for the
requirement of a complete absence of electric or magnetic fields.

Provided that the volume v is small enough to allow radiative-loss photons
to escape, and reducing v to the infinitesimal volume dv, containing mass
dm, about a point of interest P, we can write, under CPE:
D = K_c.
The derivation of this equation proves that under CPE conditions at a
point in a medium, the absorbed dose is equal to the collision kerma there.
That is true irrespective of radiative losses.
This relationship equates the measurable quantity D with the calculable
quantity K_c.
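The CPE result D = K_c is what makes the dose calculable from a photon fluence, since for monoenergetic photons K_c = Φ E (μ_en/ρ). In the sketch below the mass energy-absorption coefficient (~0.0309 cm^2/g for 1 MeV photons in water) is an approximate tabulated value and should be treated as an assumption of the example:

```python
# Under CPE the absorbed dose equals the collision kerma:
#   D = K_c = Phi * E * (mu_en / rho)
# for a monoenergetic photon fluence Phi. The mu_en/rho value used here
# (~0.0309 cm^2/g, 1 MeV photons in water) is an approximate tabulated
# value, used only for illustration.

MEV_TO_J = 1.602e-13  # joules per MeV

def dose_from_fluence(phi_per_m2: float, e_mev: float,
                      mu_en_rho_cm2_g: float) -> float:
    """Absorbed dose (Gy) under CPE from photon fluence (m^-2)."""
    mu_en_rho_m2_kg = mu_en_rho_cm2_g * 0.1   # cm^2/g -> m^2/kg
    return phi_per_m2 * e_mev * MEV_TO_J * mu_en_rho_m2_kg

d = dose_from_fluence(phi_per_m2=1.0e12, e_mev=1.0, mu_en_rho_cm2_g=0.0309)
print(f"D = {d:.3g} Gy")
```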

iii. Failure of CPE


Reference: Frank H. Attix. Introduction to Radiological Physics and
Radiation Dosimetry, pp. 72-75
There are four basic causes for CPE failure in an indirectly ionizing field:
a. Inhomogeneity of atomic composition within volume V
b. Inhomogeneity of density in V
c. Non-uniformity of the field of indirectly ionizing radiation in V
d. Presence of a non-homogeneous electric or magnetic field in V

Other causes
a. Proximity to the source
If the volume V is too close to the source of the indirectly ionizing
radiation, then the energy fluence will be significantly non-uniform
within V, being larger on the side nearest to the source.
Thus more particles (e3) will be produced at points like P3 than
particles (e1) at P1, so more particles will enter v than leave it.
CPE consequently fails for v.

b. Proximity to a Boundary of Inhomogeneity in the Medium
If the volume V in Fig. 4.3 (above figure) is divided by a boundary
between dissimilar media, loss of CPE may result at v, since the number
of charged particles then arriving at v will generally be different than
would be the case for a homogeneous medium. This difference may be
due to a change in charged-particle production, or a change in the range
or geometry for scattering of those particles, or a combination of these
effects.
c. High Energy Radiation
Since the measurement of exposure from x rays and gamma rays relies
on the existence of CPE, exposure measurements have been
conventionally assumed to be infeasible for photon energies > 3 MeV.
However, if some known relationship between D_air and (K_c)_air can
be attained under achievable conditions, and substituted for the simple
equality that exists for CPE, exposure can still be measured, at least in
principle. Such a relationship does exist for a situation known as TCPE.
e. Transient charged particle equilibrium (TCPE)
Reference: Frank H. Attix. Introduction to Radiological Physics and Radiation
Dosimetry, pp.75-77
TCPE is said to exist at all points within a region in which D is
proportional to Kc, the constant of proportionality being greater than
unity.
Case 1
In Fig. 4.7a the kerma at the surface is shown as K0, attenuating
exponentially with depth as indicated by the K-curve. We assume in this
case that radiative losses by the secondary charged particles are nil
(Kr = 0), which would be strictly true only for incident neutrons. However,
in carbon, water, air, and other low-Z media Kr = K - Kc remains less than
1% of K for photons up to 3 MeV. D therefore becomes proportional to Kc,
and we say that TCPE exists.
Case 2
Figure 4.7b shows the corresponding situation where Kr is significant and
the radiative-loss photons are allowed to escape from the phantom.
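The buildup behavior of Case 1 can be illustrated numerically. The sketch below is a simplified, assumed model (exponential attenuation of kerma with an invented coefficient MU, and dose lagging the collision kerma by an assumed mean charged-particle transport distance X_BAR); beyond the buildup region the ratio D/Kc settles to a constant slightly greater than one, which is the defining feature of TCPE.

```python
import math

MU = 0.05    # assumed photon attenuation coefficient (1/cm)
X_BAR = 0.5  # assumed mean charged-particle transport distance (cm)

def collision_kerma(x, k0=1.0):
    """Collision kerma at depth x, attenuating exponentially (Kr = 0, so Kc = K)."""
    return k0 * math.exp(-MU * x)

def dose(x, k0=1.0):
    """Absorbed dose in this toy model: the energy deposited at depth x was
    set in motion upstream at roughly depth x - X_BAR, so beyond the buildup
    region D(x) = Kc(x - X_BAR); inside it, dose rises roughly linearly."""
    if x < X_BAR:
        return k0 * (x / X_BAR)            # buildup region
    return collision_kerma(x - X_BAR, k0)  # TCPE region

# Beyond the buildup depth the ratio D/Kc is constant and greater than 1:
for depth in (1.0, 2.0, 5.0):
    print(f"depth {depth} cm: D/Kc = {dose(depth) / collision_kerma(depth):.4f}")
```

In this model the constant of proportionality is exp(MU * X_BAR), so a weakly attenuated beam with a short electron range gives D only slightly above Kc, as the text describes.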

f. Active and passive dosimeters
Reference: What Is a Dosimeter?,
http://www.ehow.com/about_5100987_dosimeter.html
Dosimeters can be active or passive in design. Active dosimeters measure
exposure in real time. These instruments have an analog or digital readout of
the immediate reading, the cumulative reading, or both. Some active
dosimeters will store a history of dosage in a recoverable memory. An example
is the electronic personal dosimeter (EPD). Passive dosimeters do not provide
immediate feedback to the user. Additional analysis or calculation is
required to determine the dose. Examples are the film badge and the
thermoluminescent dosimeter (TLD).

g. Mixed field measurements
Reference: IRPA-10. Mixed Field Dosimetry
For individual monitoring in beta-gamma fields, the detectors for the
measurement of penetrating (gamma) and non-penetrating (beta) radiations
use two different absorbers, corresponding to depths of 0.07 mm for beta
and 10 mm for gamma (Hp(0.07) and Hp(10) for TLD).
Reference: The perfect dosimeter for mixed radiation fields,
http://www.helmholtz-muenchen.de/en/awst/services-products/whole-bodydosimetry/albedo/index.html
For individual monitoring in neutron-gamma fields, albedo dosimeters are
ideally suited for the official surveillance of the personal dose in mixed
radiation fields. A detector card equipped with four TL detectors is located
in a plastic cassette. Two of the detectors are positioned behind an "albedo
neutron window". The two-component albedo dosimeter contains two TL-detector
pairs, of which the first measures the neutrons scattered back from the body
(albedo neutrons), while the second measures the thermal neutrons incident
on the body (field neutrons). The indication of the neutron dose equivalent
is generated from the display of the albedo detector pairs and from the
corresponding calibration factor for the particular scope of application.
h. Calibration
Reference: Metrology in short, 3rd edition, EURAMET. 2008
A basic tool in ensuring the traceability of a measurement is the calibration of
a measuring instrument, measuring system or reference material. Calibration
determines the performance characteristics of an instrument, system or
reference material. It is usually achieved by means of a direct comparison
against measurement standards or certified reference materials. A calibration
certificate is issued and, in most cases, a sticker is provided for the
instrument.
Four main reasons for having an instrument calibrated:
1. To establish and demonstrate traceability.
2. To ensure readings from the instrument are consistent with other
measurements.
3. To determine the accuracy of the instrument readings.
4. To establish the reliability of the instrument i.e. that it can be trusted.
Traceability
This ensures that a measurement result or the value of a standard is related
to references at the higher levels, ending at the primary standard
Reference: National Nuclear Security Administration. Qualification Standard
Reference Guide Radiation Protection, pp.64-65
Calibration Source Selection and Traceability
Radiation doses and energies in the work areas should be well characterized.
Calibration of instruments should be conducted where possible under
conditions and with radiation energies similar to those encountered at the
work stations. Knowledge of the work area radiation spectra and instrument
energy response should permit the application of correction factors when it is
not possible to calibrate with a source that has the same energy spectrum. All
calibration sources should be traceable to recognized national standards.
Neutron energy spectral information is considered particularly important
because neutron instruments and dosimetry are highly energy-dependent.
Source Check and Calibration Frequency
ANSI N323 (ANSI, 1997b), Radiation Protection Instrumentation and
Calibration, provides requirements on the calibration of portable
instruments. The reproducibility of the instrument readings should be known
prior to making calibration adjustments. This is particularly important if
the instrument has failed to pass a periodic performance test (i.e., the
instrument response varies by more than 20% from a set of reference readings
using a check source) or if the instrument has been repaired. The effect of
energy dependence, temperature, humidity, ambient pressure, and
source-to-detector geometry should be known when performing the primary
calibration. Primary calibration should be performed at least annually.
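The 20% performance-test criterion above can be sketched as a small helper function. This is only an illustration of the passage, not text from any standard; the function name, the use of the mean of the reference set, and the default tolerance are my assumptions.

```python
def passes_source_check(reading, reference_readings, tolerance=0.20):
    """Return True if a check-source reading is within the given fractional
    tolerance (default 20%) of the mean of a set of reference readings."""
    mean_ref = sum(reference_readings) / len(reference_readings)
    return abs(reading - mean_ref) <= tolerance * mean_ref

# Example: reference set averages 100; a reading of 115 passes, 125 fails.
print(passes_source_check(115, [98, 100, 102]))  # True
print(passes_source_check(125, [98, 100, 102]))  # False
```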
Instrument Energy Dependence
The calibration of alpha-detection instruments normally should be performed
with 239Pu, 241Am, or 230Th sources. Several sources of different activities
should be used to calibrate different ranges.
Whenever possible, beta detectors should be calibrated to the beta energies
of interest in the workplace. A natural or depleted uranium slab source can
be used for calibration of beta detectors when beta radiations in the
workplace have energies similar to those of uranium. International
Organization for Standardization beta sources should be used for all other
purposes: the energy dependence of beta detectors can be tested using the
calibration sources listed in ISO Publication 1980 (1984); these include
90Sr, 90Y, 204Tl, and 147Pm.
The calibration of photon monitoring instruments over the energy range from
a few keV to 300 keV is best accomplished with an x-ray machine and
appropriate filters that provide known x-ray spectra from a few kiloelectron
volts to approximately 300 keV. Radionuclide sources should be used for
higher energies. Most ion chambers used to measure photon radiations have a
relatively flat energy response above 80 to 100 keV; 137Cs or 60Co are
typically used to calibrate these instruments. These sources also may be
used to calibrate Geiger-Müller (GM) type detectors used for dose rate
measurements. It should be noted that some GM detectors (e.g., those with
no energy compensation) can show a large energy dependence, especially
below approximately 200 keV. GM detectors should not be used if not energy
compensated.
The multisphere or Bonner sphere spectrometer (Bramblett et al., 1960) is the
neutron spectrometer system most often used by health physicists for neutron
energy spectrum measurements, perhaps because it is simple to operate.
Multisphere spectrometers are typically used for measuring neutron energy
spectra over a wide energy range from thermal energies to over 20 MeV
although detailed energy spectra are not obtained. With the use of an
appropriate spectrum unfolding code, the multisphere system will determine

the average neutron energy, dose equivalent rate, total flux, kerma, and
graphical plots of differential flux versus energy and dose equivalent
distribution versus energy.
i. Quantities in radiation protection
i. Quality factor (radiation weighting factor)
Reference: National Nuclear Security Administration. Qualification
Standard Reference Guide Radiation Protection, p.27
The probability of stochastic radiation effects depends not only on the
absorbed dose, but also on the type and energy of the radiation causing the
dose. This is considered by weighting the absorbed dose with a factor
related to the radiation quality. In the past this factor was known as the
quality factor.
Quality factor (Q), now known as the radiation weighting factor (WR), means
the modifying factor used to calculate the equivalent dose from the average
tissue or organ absorbed dose; the absorbed dose (expressed in rad or gray)
is multiplied by the appropriate radiation weighting factor.
Q depends on radiation energy and is based on LET (linear energy transfer).
WR depends on radiation type and energy and is based on RBE (relative
biological effectiveness).
LET: the rate at which energy is transferred from ionizing radiation to
soft tissue, expressed in terms of kiloelectron volts per micrometer
(keV/μm) of track length in soft tissue. The LET of diagnostic x-rays is
about 3 keV/μm, whereas the LET of 5 MeV alpha particles is about 100
keV/μm.
RBE: the ratio of biological effectiveness of one type of ionizing
radiation relative to another, given the same amount of absorbed energy.
The RBE is an empirical value that varies depending on the particles,
energies involved, and which biological effects are deemed relevant. It is
a set of experimental measurements.
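The Q-LET dependence can be made concrete with the quality-factor function Q(L) from ICRP Publication 60. Note this numerical relationship is taken from ICRP 60, not from the reference cited above, and is shown here only to illustrate how Q is tied to LET.

```python
import math

def quality_factor(let_kev_per_um):
    """ICRP 60 quality factor Q as a function of unrestricted LET in water,
    L, in keV/um: Q = 1 for L < 10; Q = 0.32*L - 2.2 for 10 <= L <= 100;
    Q = 300/sqrt(L) for L > 100."""
    L = let_kev_per_um
    if L < 10:
        return 1.0
    if L <= 100:
        return 0.32 * L - 2.2
    return 300.0 / math.sqrt(L)

print(quality_factor(3))    # low-LET diagnostic x rays (~3 keV/um) -> 1.0
print(quality_factor(100))  # high-LET alpha particles -> 29.8
```

Note how Q falls again above 100 keV/μm, consistent with the overkill effect discussed later in the RBE-versus-LET section.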
ii. Tissue weighting factors
Reference: National Nuclear Security Administration. Qualification
Standard Reference Guide Radiation Protection, p.27

Tissue weighting factor, (WT) means the fraction of the overall health
risk, resulting from uniform, whole body irradiation, attributable to
specific tissue (T). The equivalent dose to tissue, (HT), is multiplied by
the appropriate tissue weighting factor to obtain the effective dose (E)
contribution from that tissue.
iii. Dose equivalent (equivalent dose)
Reference: National Nuclear Security Administration. Qualification
Standard Reference Guide Radiation Protection, p.26
Dose equivalent, now known as equivalent dose (HT), means the product of
the average absorbed dose (DT) in rad (or gray) in a tissue or organ (T)
and a radiation (R) weighting factor (wR).
The equivalent dose replaces the dose equivalent for a tissue or organ.
The two are conceptually different. Whereas dose equivalent in an
organ is defined as a point function in terms of the absorbed dose
weighted by a quality factor everywhere, equivalent dose in the organ
is given simply by the average absorbed dose weighted by the factor
wR.
iv. Effective dose equivalent (effective dose)
Reference: National Nuclear Security Administration. Qualification
Standard Reference Guide Radiation Protection, p.26
Effective dose equivalent, now known as effective dose, means the summation
of the products of the equivalent dose received by specified tissues or
organs of the body (HT) and the appropriate tissue weighting factor (WT);
that is, E = Σ WT HT.
Reference: Herman Cember. Health Physics. 4th edition, pp.348-349
On the principle that the risk of a stochastic effect should be equal
whether the whole body is uniformly irradiated or whether the
radiation dose is non-uniformly distributed, the ICRP introduced the
concept of effective dose in the 1977 review of its radiation safety
recommendations (ICRP 26).
For the purpose of setting radiation safety standards, we assume that the
probability of a detrimental effect in any tissue is proportional to the
dose equivalent to that tissue. However, because of the differences in
sensitivity among the various tissues, the value of the proportionality
factor differs among the tissues. The relative sensitivity to detrimental
effects, expressed as tissue weighting factors wT of the several organs and
tissues that contribute to the overall risk, is shown in Table 8-3. If the
radiation dose is uniform throughout the body, then the total risk factor
has a relative weight of 1. For nonuniform radiation, such as partial-body
exposure to an external radiation field, or from internal exposure where
the radionuclide concentrates to different degrees in the various organs,
the weighting factors listed in Table 8-3 are used to calculate an EDE. The
EDE, HE, is given by
HE = Σ wT HT,
where wT is the weighting factor for tissue T and HT is the dose equivalent
to tissue T. Table 8-3 shows the weighting factors recommended in ICRP 26
and ICRP 60. The U.S. NRC used the ICRP 26 values for the tissue-weighting
factors in Title 10 of the Code of Federal Regulations, Part 20, which
usually is cited as 10 CFR 20, that were approved in 1991 and became
effective in 1994.
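The weighted sum HE = Σ wT HT can be computed directly. The sketch below uses the ICRP 26 tissue weighting factors (the set adopted in 10 CFR 20, as noted above); the organ dose values in the example are invented for illustration.

```python
# Tissue weighting factors from ICRP Publication 26 (as used in 10 CFR 20).
W_T = {
    "gonads": 0.25,
    "breast": 0.15,
    "red bone marrow": 0.12,
    "lung": 0.12,
    "thyroid": 0.03,
    "bone surfaces": 0.03,
    "remainder": 0.30,
}

def effective_dose_equivalent(organ_doses_sv):
    """HE = sum over tissues of wT * HT, for organ dose equivalents in Sv."""
    return sum(W_T[organ] * h for organ, h in organ_doses_sv.items())

# Hypothetical nonuniform exposure: only lung and thyroid are irradiated.
doses = {"lung": 0.010, "thyroid": 0.050}  # Sv (made-up values)
print(f"HE = {effective_dose_equivalent(doses):.5f} Sv")
```

For uniform whole-body irradiation the factors sum to 1, so HE reduces to the common dose equivalent, as the text states.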

6. Radiobiology and biological effects


a. Relative biological effectiveness (RBE)
Reference: Herman Cember. Health Physics. 4th edition, p.330
The ratio of the amount of energy from 200-keV X-rays required to produce a
given biological effect to the amount of energy from any other radiation to
produce the same effect is called the relative biological effectiveness (RBE) of
that radiation.
Generally, dose-response curves depend on the type of radiation used and on
the biological endpoint studied. As a rule, radiation of high LET is more
effective biologically than radiation of low LET. Different radiations can
be contrasted in terms of their relative biological effectiveness (RBE)
compared with x rays.
b. Cell type and radiation sensitivity
Reference: National Nuclear Security Administration. Qualification Standard
Reference Guide Radiation Protection, pp.43-44
As early as 1906 an attempt was made to correlate the differences in
sensitivity of various cells with differences in cellular physiology. These
differences in sensitivity are stated in the Law of Bergonie and Tribondeau:
The radiosensitivity of a tissue is directly proportional to its reproductive
capacity and inversely proportional to its degree of differentiation. In other
words, cells most active in reproducing themselves and cells not fully mature
will be most harmed by radiation. This law is considered a rule of thumb,
with some cells and tissues showing exceptions. Since the time that the law
of Bergonie and Tribondeau was formulated, it has generally been accepted
that cells tend to be radiosensitive if they
have a high division rate (i.e., cell cycle time, or time between
divisions)
have a high metabolic rate
are of a non-specialized type (i.e., a cell that is capable of
specialization into an adult cell type, such as a fertilized ovum)
are well nourished
Used generally, tissues that are young and rapidly growing are most likely
radiosensitive. The law can be used to classify the following tissues as
radiosensitive:
Germinal (reproductive) cells of the ovary and testis, i.e., spermatogonia
Hematopoietic (blood-forming) tissues, i.e., red bone marrow, spleen,
lymph nodes, thymus
Epithelium of the skin
Epithelium of the gastrointestinal tract (interstitial crypt cells)
The law can be used to classify the following tissues as radio-resistant:
Bone
Liver
Kidney
Cartilage
Muscle
Nervous tissue
Cells are least sensitive when in the S phase, then the G1 phase, then the G2
phase, and most sensitive in the M phase of the cell cycle.
Cell cycle
Gap 0 (G0), quiescent/senescent: A resting phase where the cell has left
the cycle and has stopped dividing.
Gap 1 (G1), interphase: Cells increase in size in Gap 1. The G1 checkpoint
control mechanism ensures that everything is ready for DNA synthesis.
Synthesis (S), interphase: DNA replication occurs during this phase.
Gap 2 (G2), interphase: During the gap between DNA synthesis and mitosis,
the cell will continue to grow. The G2 checkpoint control mechanism ensures
that everything is ready to enter the M (mitosis) phase and divide.
Mitosis (M), cell division: Cell growth stops at this stage and cellular
energy is focused on the orderly division into two daughter cells. A
checkpoint in the middle of mitosis (metaphase checkpoint) ensures that the
cell is ready to complete cell division.

c. Molecular processes
Reference: Herman Cember. Health Physics. 4th edition, pp.285
Radiation is seen to produce biological effects by two mechanisms, namely,
directly by dissociating molecules following their excitation and ionization and
indirectly by the production of free radicals and hydrogen peroxide in the
water of the body fluids.

i. Direct action
Reference: Herman Cember. Health Physics. 4th edition, pp.283-284
The gross biological effects resulting from overexposure to radiation
are the sequelae of a long and complex series of events that are
initiated by ionization or excitation of relatively few molecules in the
organism. Effects of radiation for which a zero-threshold dose is
postulated are thought to be the result of a direct insult to a molecule
by ionization and excitation and the consequent dissociation of the
molecule. Point mutations, in which there is a change in a single gene
locus, are an example of such an effect. The dissociation, due to
ionization or excitation, of an atom on the DNA molecule prevents the
information originally contained in the gene from being transmitted to
the next generation. Such point mutations may occur in the germinal
cells, in which case the point mutation is passed on to the next
individual; or they may occur in somatic cells, which results in a point
mutation in the daughter cell. Since these point mutations are
thereafter transmitted to succeeding generations of cells (except for
the highly improbable instance where one mutated gene may suffer
another mutation), it is clear that for those biological effects of
radiation that depend on point mutations, the radiation dose is
cumulative; every little dose may result in a change in the gene
burden, which is then continuously transmitted. When dealing
quantitatively with such phenomena, however, we must consider the
probability of observing a genetic change among the offspring of an
irradiated individual. For radiation doses down to about 250 mGy (25
rads), the magnitude of the effect, as measured by the frequency of gene
mutations, is proportional to the dose. However, no reliable
experimental data are available for genetic changes in the range 0-250
mGy.
ii. Indirect action
Reference: Herman Cember. Health Physics. 4th edition, pp.284-285
Direct effects of radiation, ionization, and excitation are nonspecific
and may occur anywhere in the body. When the directly affected atom
is in a protein molecule or in a molecule of nucleic acid, then certain
specific effects due to the damaged molecule may ensue. However,

most of the body is water, and most of the direct action of radiation is
therefore on water. The result of this energy absorption by water is the
production, in water, of highly reactive free radicals that are chemically
toxic (a free radical is a fragment of a compound or an element that
contains an unpaired electron) and which may exert their toxicity on
other molecules. When pure water is irradiated, we have
H2O → H2O+ + e-;
(7.1)
the positive ion dissociates immediately according to the equation
H2O+ → H+ + OH,
(7.2)
while the electron is picked up by a neutral water molecule:
H2O + e- → H2O-,
(7.3)
which dissociates immediately:
H2O- → H + OH-.
(7.4)
The ions H+ and OH- are of no consequence, since all body fluids
already contain significant concentrations of both these ions. The free
radicals H and OH may combine with like radicals, or they may react
with other molecules in solution. Their most probable fate is
determined chiefly by the LET of the radiation. In the case of high-LET
radiation, such as that which results from passage of an alpha
particle or other particle of high specific ionization, the free OH radicals
are formed close enough together to enable them to combine with
each other before they can recombine with free H radicals, which leads
to the production of hydrogen peroxide,
OH + OH → H2O2,
(7.5)
while the free H radicals combine to form gaseous hydrogen. Whereas
the products of the primary reactions of Eqs. (7.1) through (7.4) have
very short lifetimes, on the order of a microsecond, the hydrogen
peroxide, being a relatively stable compound, persists long enough to
diffuse to points quite remote from its point of origin. The hydrogen
peroxide, which is a very powerful oxidizing agent, can thus affect
molecules or cells that did not suffer radiation damage directly. If the
irradiated water contains dissolved oxygen, the free hydrogen radical
may combine with oxygen to form the hydroperoxyl radical as follows:
H + O2 → HO2.
(7.6)
The hydroperoxyl radical is not as reactive as the free OH radical and
therefore has a longer lifetime. This greater stability allows the
hydroperoxyl radical to combine with a free hydrogen radical to form
hydrogen peroxide, thereby further enhancing the toxicity of the
radiation.
iii. Oxygen effect
If the irradiated water contains dissolved oxygen, the free hydrogen
radical may combine with oxygen to form the hydroperoxyl radical as
follows: H + O2 → HO2. This hydroperoxyl radical combines with a free
hydrogen radical to form hydrogen peroxide (H + HO2 → H2O2), thereby
further enhancing the toxicity of the radiation.
The effect is used in medical physics to increase the effect of radiation
therapy in oncology treatments. Additional oxygen abundance creates
additional free radicals and increases the damage to the target tissue.

d. DNA damage
Reference: Radiobiology for the Radiologist, 5th edition, Eric J. Hall, pp.17-18

If cells are irradiated with ionizing radiation, breaks of a single strand
or of both strands of the DNA occur.
Single-strand breaks are of little biologic consequence as far as cell
killing is concerned, because they are repaired readily using the opposite
strand as a template. If the repair is incorrect (misrepair), it may result
in a mutation. (Cases B and C in the figure below)
If the breaks in the two strands are opposite one another, or separated by
only a few base pairs, this may lead to a double-strand break; that is, the
piece of chromatin snaps into two pieces. A double-strand break is believed
to be the most important lesion produced in chromosomes by radiation. The
interaction of two double-strand breaks may result in cell killing,
mutation, or carcinogenesis. (Case D in the figure below)
e. Repair and misrepair
Reference: Radiobiology for the Radiologist, 5th edition, Eric J. Hall, pp.17-31

Mammalian cells have developed a number of specialized pathways to sense
and repair DNA damage. Depending on the type of damage (base damage, SSB,
DSB, sugar damage, crosslinks), different mechanisms are invoked. The stage
of the cell cycle also affects these pathways. DSBs (the most lethal
lesions) are repaired by homologous recombination repair (HRR) or
nonhomologous end joining (NHEJ) mechanisms, depending on the phase of the
cell cycle. HRR provides more reliable repair, but errors are possible in
both mechanisms.

7. Models of radiation damage
a. Single-hit models
Reference: James E. Turner. Atoms, Radiation, and Radiation Protection,
pp.433-434
Exponential behavior can be accounted for by a single-target, single-hit
model of cell survival. We consider a sample of S0 identical cells and
postulate that each cell has a single target of cross section σ. We
postulate further that whenever radiation produces an event, or hit, in a
cellular target, then that cell is inactivated and does not survive. When
the sample of cells is exposed uniformly to radiation with fluence φ, then
the total number of hits in cellular targets is σφS0. Dividing by the
number of cells S0 gives the average number of hits per target in the
cellular population, σφ. The distribution of the number of hits per target
in the population is Poisson. The probability of there being exactly k hits
in the target of a given cell is therefore Pk = (σφ)^k e^(-σφ)/k!. The
probability that a given cell survives the irradiation is given by the
probability that its target has no hits: P0 = e^(-σφ). Thus, the
single-target, single-hit model predicts exponential cell survival. Since
P0 = S/S0, we can extend Eq. (13.14) by writing
S/S0 = e^(-D/D0) = e^(-σφ).
(13.16)
In terms of the model, the inactivation cross section σ gives the slope of
the survival curve on the semilog plot in Fig. 13.13.
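As a check on the Poisson argument, the sketch below simulates single-target, single-hit survival by drawing each cell's hit count from a Poisson distribution with mean σφ and counting the cells with zero hits. The function names and parameter values are invented for the illustration.

```python
import math
import random

def poisson_sample(mean, rng):
    """Draw one Poisson variate using Knuth's multiplication method."""
    limit = math.exp(-mean)
    k, product = 0, rng.random()
    while product > limit:
        k += 1
        product *= rng.random()
    return k

def simulate_survival(mean_hits, n_cells=200_000, seed=1):
    """Fraction of cells whose single target receives zero hits."""
    rng = random.Random(seed)
    survivors = sum(1 for _ in range(n_cells)
                    if poisson_sample(mean_hits, rng) == 0)
    return survivors / n_cells

mean_hits = 1.0  # sigma * phi = D/D0 = 1
print(simulate_survival(mean_hits))  # close to e^-1 = 0.368
print(math.exp(-mean_hits))
```

The simulated surviving fraction agrees with e^(-σφ) to within statistical noise, which is exactly the exponential survival the model predicts.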
b. Multi-hit models
Reference: James E. Turner. Atoms, Radiation, and Radiation Protection, p.435
Single-target, multi-hit models have been proposed, in which more than one
hit in a single cellular target is needed for killing.
c. Multi-target models (Multi-target, single hit model)
Reference: James E. Turner. Atoms, Radiation, and Radiation Protection,
pp.434-435

Multi-target, single-hit model: assume the cell has n targets, all of
which must be hit for the cell not to survive.
The probability that a given target is not hit is e^(-D/D0) (with σφ = D/D0).
The probability that a given target is hit is 1 - e^(-D/D0).
The probability that all n targets within a cell are hit is (1 - e^(-D/D0))^n.
The probability of survival of a cell containing n targets is therefore
S/S0 = 1 - (1 - e^(-D/D0))^n.
(13.17)
When n = 1, this equation reduces to the single-target, single-hit result.
For n > 1 the survival curve has the shape shown in Fig. 13.14. There is a
shoulder that begins with zero slope at zero dose, reflecting the fact that
more than one target must be hit in a cell to inactivate it. As the dose
increases, cells accumulate additional struck targets, and so the slope
steadily increases. At sufficiently high doses, surviving cells are
unlikely to have more than one remaining unhit target. Their response then
takes on the characteristics of single-target, single-hit survival, and
additional dose produces an exponential decrease with slope 1/D0 on the
semilog plot. When D is large, e^(-D/D0) is small, and one can use the
binomial expansion to write, in place of Eq. (13.17),
S/S0 ≈ 1 - (1 - n e^(-D/D0)) = n e^(-D/D0).
(13.18)
The straight line represented by this equation on a semilog plot intercepts
the ordinate (D = 0) at the value S/S0 = n, which is called the
extrapolation number. As shown in Fig. 13.14, the number of cellular
targets n is thus obtained by extrapolating the linear portion of the
survival curve back to zero dose.
Many experiments with mammalian cells yield survival curves with
shoulders. However, literal interpretation of such data in terms of the

elements of a multitarget, single-hit model is not necessarily warranted.


Cells in a population are not usually identical. Some might be in different
stages of the cell cycle, with different sensitivity to radiation. Repair of
initial radiation damage can also lead to the existence of a shoulder on a
survival curve.
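The shoulder and the extrapolation number can be verified numerically from Eq. (13.17). In the sketch below, the dose values, D0, and n are arbitrary choices for illustration, not values from the text.

```python
import math

def survival_fraction(dose, d0, n):
    """Multi-target, single-hit survival: S/S0 = 1 - (1 - e^(-D/D0))^n."""
    return 1.0 - (1.0 - math.exp(-dose / d0)) ** n

D0, N = 1.5, 4  # illustrative parameters

# At high dose, S/S0 approaches n * e^(-D/D0): dividing out the exponential
# leaves the extrapolation number n.
for dose in (5.0, 10.0, 20.0):
    ratio = survival_fraction(dose, D0, N) / math.exp(-dose / D0)
    print(f"D = {dose:5.1f}: (S/S0) / e^(-D/D0) = {ratio:.4f}")
```

The printed ratios approach 4 as the dose grows, which is how the extrapolation number is read off the linear portion of the survival curve in Fig. 13.14.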
d. Survival curves
Reference: James E. Turner. Atoms, Radiation, and Radiation Protection,
pp.432-433

A cell survival curve describes the relationship between the radiation dose
and the proportion of cells that survive. Survival can have different
meanings; e.g., if a cell is not capable of dividing, it did not survive
(mitotic death). Cell inactivation is conveniently represented by plotting
the natural logarithm of the surviving fraction of irradiated cells as a
function of the dose they receive. A linear semilog survival curve, such as
that shown in Fig. 13.13, implies exponential survival of the form
S/S0 = e^(-D/D0).
(13.14)
Here S is the number of surviving cells at dose D, S0 is the original
number of cells irradiated, and D0 is the negative reciprocal of the slope
of the curve in Fig. 13.13. The surviving fraction when D = D0 is, from Eq.
(13.14),
S/S0 = e^(-D0/D0) = e^(-1) = 0.37.
(13.15)
For this reason, D0 is also called the D-37 dose.

Reference: Radiobiology for the Radiologist, 5th edition, Eric J. Hall,
pp.46-47
Problem 1
A tumor consists of 10^9 clonogenic cells. The effective dose-response
curve, given in daily dose fractions of 2 Gy, has no shoulder and a D0 of 3
Gy. What total dose is required to give a 90% chance of tumor cure?
Answer
D10 = ln10 x D0 = 2.3 x D0 = 2.3 x 3 = 6.9 Gy per decade of cell killing
The total dose for 10 decades of cell killing is: 10 x 6.9 = 69 Gy
(For calculation purposes, it is often useful to use the D10, the dose
required to kill 90% of the population. D10 = 2.3 x D0, in which 2.3 is the
natural logarithm of 10.)
Problem 2
Suppose that, in the previous example, the clonogenic cells underwent
three cell doublings during treatment. About what total dose would be
required to achieve the same probability of tumor control?
Answer
Three cell doublings would increase the cell number by a factor of
2 x 2 x 2 = 8.
Consequently, about one extra decade of cell killing would be required,
corresponding to an additional dose of 6.9 Gy. Total dose is 69 + 6.9 =
75.9 Gy.
Problem 3
During the course of radiotherapy, a tumor containing 10^9 cells receives
40 Gy. If the D0 is 2.2 Gy, how many tumor cells will be left?
Answer
If the D0 is 2.2 Gy, the D10 is given by
D10 = 2.3 x D0 = 2.3 x 2.2 = 5 Gy
Because the total dose is 40 Gy, the number of decades of cell killing is
40/5 = 8.
Number of cells remaining = 10^9 x 10^-8 = 10
Problem 4
If 10^7 cells were irradiated according to single-hit kinetics so that the
average number of hits per cell is one, how many cells would survive?
Answer
A dose that gives an average of one hit per cell is the D0; that is, the
dose that on the exponential region of the survival curve reduces the
number of survivors to 37%. The number of surviving cells therefore is
10^7 x 0.37 = 3.7 x 10^6
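The worked problems above all follow one pattern (decades of cell killing at D10 = ln(10) x D0), which can be checked with a few lines of Python; the function names are my own.

```python
import math

def d10(d0):
    """Dose per decade of cell killing: D10 = ln(10) * D0 (about 2.3 * D0)."""
    return math.log(10) * d0

def cells_remaining(initial_cells, total_dose, d0):
    """Surviving clonogens after exponential (no-shoulder) cell killing."""
    return initial_cells * 10 ** (-total_dose / d10(d0))

# Problem 1: D0 = 3 Gy, 10 decades of killing needed for a 90% cure.
print(10 * d10(3))                    # about 69 Gy
# Problem 3: 10^9 cells, 40 Gy, D0 = 2.2 Gy.
print(cells_remaining(1e9, 40, 2.2))  # about 12.7; the text rounds D10 to 5 Gy, giving 10
```

Using the exact ln(10) rather than 2.3 shifts the Problem 3 answer slightly, a reminder that these are order-of-magnitude estimates.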
e. Influence of radiation quality
Reference: James E. Turner. Atoms, Radiation, and Radiation Protection, p.436
The dependence of relative biological effectiveness on radiation quality is
often discussed in terms of the LET of the radiation, or the LET of the
secondary charged particles produced in the case of photons and neutrons.
As a general rule, RBE increases with increasing LET, as illustrated in
Fig. 13.15, up to a point. Figure 13.16 schematically represents the RBE
for cell killing as a function of the LET of charged particles. Starting at
low LET, the efficiency of killing increases with LET, evidently because of
the increasing density of ionizations, excitations, and radicals produced
in critical targets of the cell along the particle tracks. As the LET is
increased further, an optimum range around 100 to 130 keV/μm is reached for
the most efficient pattern of energy deposition by a particle for killing a
cell. A still further increase in LET results in the deposition of more
energy than needed for killing, and the RBE decreases. Energy is wasted in
this regime of "overkill" at very high LET.
f. Stochastic effects (Below)
g. Deterministic effects
Reference: James E. Turner. Atoms, Radiation, and Radiation Protection, p. 410
The biological effects of radiation can be divided into two general categories,
stochastic and deterministic, or nonstochastic. As the name implies, stochastic
effects are those that occur in a statistical manner. Cancer is one example. If a
large population is exposed to a significant amount of a carcinogen, such as
radiation, then an elevated incidence of cancer can be expected. Although we
might be able to predict the magnitude of the increased incidence, we cannot
say which particular individuals in the population will contract the disease and
which will not. Also, since there is a certain natural incidence of cancer
without specific exposure to radiation, we will not be completely certain
whether a given case was induced or would have occurred without the
exposure. In addition, although the expected incidence of cancer increases
with dose, the severity of the disease in a stricken individual is not a function
of dose. In contrast, deterministic effects are those that show a clear causal
relationship between dose and effect in a given individual. Usually there is a
threshold below which no effect is observed, and the severity increases with

dose. Skin reddening is an example of a deterministic effect of radiation;
cataract formation is another.

Reference: Herman Cember. Health Physics. 4th edition, pp.280-282


Observed radiation effects (or effects of other noxious agents) may be broadly
classified into two categories: stochastic effects, which occur randomly and
whose probability of occurrence, rather than severity, depends on the size of
the dose (stochastic effects, such as cancer, are also seen among persons with
no known exposure to the agent associated with that effect), and
nonstochastic, or deterministic, effects. Most biological effects
fall into the category of deterministic effects. Deterministic effects are
characterized by the three qualities stated by the Swiss physician and
scientist Paracelsus about 500 years ago when he wrote that "the size of the
dose determines the poison":
1) A certain minimum dose must be exceeded before the particular effect
is observed.
2) The magnitude of the effect increases with the size of the dose.
3) There is a clear, unambiguous causal relationship between exposure to
the noxious agent and the observed effect.
For example, a person must exceed a certain amount of alcoholic intake
before he or she shows signs of drinking. After that, the effect of the alcohol
depends on how much the person drank. Finally, if this individual exhibits
drunken behavior, there is no doubt that the behavior is the result of drinking.
For such nonstochastic effects, when the magnitude of the effect or the
proportion of individuals who respond at a given dose is plotted as a function
of dose in order to obtain a quantitative relationship between dose and effect,
the dose-response curve A, shown in Figure 7-1, is obtained. Because of the
minimum dose that must be exceeded before an individual shows the effect,
nonstochastic effects are called threshold effects.

h. Relative and absolute risk models


Reference: 5.0 HOW ARE RADIATION RISKS EVALUATED? Available at
http://www.radiation-scott.org/radsource/5-0.htm
Cancer is the major risk associated with exposure of humans to low radiation
doses. Two types of models are usually used for evaluating the risk of
radiation-induced cancer in humans: (1) absolute-risk models and (2)
relative-risk models.
Absolute-risk Models
With absolute-risk models, the excess risk due to exposure to radiation does
not depend on the normal risk that would arise when there is no radiation
exposure. Absolute risks are evaluated on a scale from 0 to 1. A risk of 1
corresponds to 100% of the exposed individuals being affected.
Absolute risks are usually based on the assumption of a linear risk vs. dose
relationship that passes through zero excess risk at the origin. This represents
what has become known as the linear, no-threshold (LNT) model.
As an example of how absolute risk is applied, if the normal risk over the
lifetime is 0.001 for a specific type of cancer, and radiation adds an additional
risk of 0.02, then the absolute risk of cancer over the lifetime is 0.001 + 0.02,
or 0.021.
Relative-risk Models
With relative-risk models, the relative risk is a multiple of the normal
risk. Unlike absolute risks, relative risk values range from 1 to very large
numbers. A value of 1 for the relative risk means that there is no excess risk.
The relative risk considers how the normal risk changes with age. For
example, if the normal risk of developing a given type of cancer between age
50 and age 51 years is 0.001, and radiation exposure leads to a relative risk of
2, then the normal risk is multiplied by the relative risk: 2 x 0.001 = 0.002.
Thus, instead of having a normal risk of 0.001 for cancer in the age interval 50
to 51 years, the risk is increased to 0.002 because of the radiation exposure.
Similar calculations are carried out for other age intervals depending on the
age of the person at the time of exposure and the latent period for the cancer
type of interest.
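The two worked examples above can be sketched in a few lines of code. The numbers are the illustrative values from the text; this is a sketch of the arithmetic, not a dosimetric calculation:

```python
# Sketch of the absolute-risk and relative-risk examples in the text.

def total_absolute_risk(normal_risk, excess_risk):
    """Absolute-risk model: radiation adds a fixed excess to the normal risk."""
    return normal_risk + excess_risk

def risk_with_relative_model(normal_risk, relative_risk):
    """Relative-risk model: radiation multiplies the normal (age-specific) risk."""
    return normal_risk * relative_risk

# Lifetime example from the text: normal risk 0.001, radiation adds 0.02.
print(round(total_absolute_risk(0.001, 0.02), 3))   # 0.021

# Age-interval example: normal risk 0.001 for ages 50-51, relative risk 2.
print(risk_with_relative_model(0.001, 2))           # 0.002
```

Note that a relative risk of 1 means no excess risk, which is why relative-risk values start at 1 rather than 0.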
i. Weaknesses and uncertainties


Reference: Herman Cember. Health Physics. 4th edition, p.98
There is a good deal of uncertainty in the estimation of risk coefficients for low
level irradiation. One source of these uncertainties is the statistical
uncertainty in the collection of data and the estimation of modeling
parameters.

Statistical uncertainty is quantified by the mathematical
statement of the confidence interval and accordingly is easily dealt with. The
second major source of uncertainty is based on our limited knowledge of
carcinogenic mechanisms and therefore of which of the postulated
dose-response models most accurately represents the actual dose-response
situation. In regard to this second source of uncertainty and effects at very
low doses, the BEIR V committee said the following:
"Derivation of risk estimates for low doses and low dose rates through the use
of any type of risk model involves assumptions that remain to be validated...
Epidemiological data cannot rigorously exclude the existence of a threshold in
the millisievert (hundreds of millirems) dose range. Thus, the possibility that
there may be no risks from exposures comparable to the external natural
background radiation cannot be ruled out... The lower limit of the range of
uncertainty in the risk estimates extends to zero."
8. Radioactivity transport and pathways

a. Routes of entry into the body


Reference: Herman Cember. Health Physics. 4th edition, p.584
Radioactive substances, like other noxious agents, may gain entry into the
body through three pathways:
1. Inhalation - by breathing radioactive gases and aerosols.
2. Ingestion - by drinking contaminated water, eating contaminated food,
or tactilely transferring radioactivity to the mouth.
3. Absorption -through the intact skin or through wounds.
b. Routes of elimination from the body
Reference: IARC MONOGRAPHS VOLUME 78, p.43
Radionuclides may be eliminated from the body principally by exhalation and
excretion in urine, faeces, sweat, saliva and potentially in milk. Exhalation is a
major pathway for the undeposited fraction of inhaled aerosols, 3H2O vapor
and gases such as those containing 14C, and 220Rn and 222Rn produced in
the radioactive decay of internally deposited thorium and radium. In the
course of
urinary excretion, certain radionuclides deliver a dose to the kidney and
bladder. Radionuclides in the faeces result either from ingested radionuclides
that have not been absorbed during gastrointestinal transit or from
radionuclides absorbed and subsequently excreted back into the
gastrointestinal tract - most often via biliary excretion.


c. Biological half-times
Reference: Biological half-life, Available at
http://en.wikipedia.org/wiki/Biological_half-life
The biological half-life is the time it takes for the body to eliminate half of
the amount of a radionuclide present through biological processes alone,
independent of radioactive decay. Typically, this elimination occurs through
the function of the kidneys and liver, together with the other excretion routes
that remove the radionuclide from the body.
Effective half-life
The period during which the quantity of a radionuclide in a biological system is
reduced by half by interaction of radioactive decay and excretion due to
biological processes.
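The combination of the two removal processes gives the effective half-life TE = TR x TB / (TR + TB). A minimal sketch (the nuclide values below are illustrative assumptions, not from the text):

```python
# Effective half-life: combines radioactive decay (TR) and biological
# elimination (TB). Any consistent time unit works.

def effective_half_life(t_radiological, t_biological):
    """TE = TR*TB / (TR + TB)."""
    return (t_radiological * t_biological) / (t_radiological + t_biological)

# Illustrative example: I-131 has TR ~ 8 d; with an assumed thyroid
# biological half-life of ~80 d, the shorter (physical) half-life dominates.
print(effective_half_life(8.0, 80.0))  # ~7.27 days
```

As the example shows, TE is always shorter than the shorter of the two half-lives, which is why short-lived nuclides dominate their own effective clearance.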

1/TE = 1/TR + 1/TB
or
TE = (TR x TB) / (TR + TB)
where TE is the effective half-life, TR the radioactive (physical) half-life, and
TB the biological half-life.

d. Systemic and metabolic models

Reference: ICRP internal dosimetry models, available at


http://pbadupws.nrc.gov/docs/ML1215/ML12159A433.pdf
Modeling
Mathematical descriptions of the processes involved in the physical
movement of radionuclides in the body following intake, and of the deposition
of energy that constitutes exposure
Metabolic model
- Describes deposition and movement of radioactive material through the body
- Depends on the intake mode, element, chemical form and physical form, and
particle size (for inhalation)
- Built from compartments (tissues, including fluids, and organs), transfer
routes, transfer rates, and excretion routes
Systemic Biokinetic Model
- Describes what happens to radionuclides upon uptake, when they enter the
so-called transfer compartment, i.e., the blood stream and extracellular fluid
such as lymph
- Which organs they deposit in, and what fraction of the uptake deposits in
each
- How long they are retained in each
- These models are element-specific, not radionuclide-specific, so retention
must be modified by the physical half-life of the radionuclide
Dosimetry Model
- Calculates the absorbed dose in each organ per decay of the radionuclide
- The organ containing the radionuclide is the source organ
- The organ for which the dose is calculated is the target organ
- The source organ is always its own target organ
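As a toy illustration of the biokinetic idea - not an actual ICRP compartment model - a single compartment cleared by first-order biological transfer and physical decay can be sketched as follows (all numbers illustrative):

```python
import math

# One-compartment sketch: activity removed both by physical decay and by
# first-order biological clearance. This mirrors the point above that
# element-specific retention must be modified by the physical half-life.

def activity_remaining(a0_bq, t_days, t_phys_days, t_biol_days):
    """Activity (Bq) left in the compartment after t_days."""
    lam_phys = math.log(2) / t_phys_days   # physical decay constant (1/d)
    lam_biol = math.log(2) / t_biol_days   # biological clearance constant (1/d)
    return a0_bq * math.exp(-(lam_phys + lam_biol) * t_days)

# With TR = TB = 10 d the effective half-life is 5 d, so after 5 days
# about half of the initial activity remains.
print(activity_remaining(1000.0, 5.0, 10.0, 10.0))  # ~500 Bq
```

Real ICRP models chain many such compartments with different transfer rates; the exponential form above is the building block.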

e. Bioaccumulation factors
Reference: Bioaccumulation, available at
http://toxics.usgs.gov/definitions/bioaccumulation.html
Bioaccumulation
General term describing a process by which chemicals are taken up by an
organism either directly from exposure to a contaminated medium or by
consumption of food containing the chemical - U.S. Environmental Protection
Agency, 2010
Bioaccumulation Factor (BAF)
The ratio of the concentration of a contaminant in an organism to the
concentration in the ambient environment at steady state, where the organism
can take in the contaminant through ingestion with its food as well as through
direct contact - U.S. Environmental Protection Agency, 2010
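The BAF definition reduces to a simple steady-state ratio; a sketch with hypothetical concentrations (units chosen for illustration only):

```python
# Bioaccumulation factor: concentration in the organism divided by the
# concentration in the ambient medium, at steady state.

def bioaccumulation_factor(conc_in_organism, conc_in_environment):
    """BAF = C_organism / C_environment."""
    return conc_in_organism / conc_in_environment

# Hypothetical example: 50 Bq/kg in fish tissue vs 0.5 Bq/L in water
# gives a BAF of 100 L/kg.
print(bioaccumulation_factor(50.0, 0.5))  # 100.0
```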
9. Other topics
a. X-ray machines and accelerators
X-ray machines
Reference: X-ray generator, Available at
http://en.wikipedia.org/wiki/X-ray_generator
An X-ray generator is a device used to generate X-rays. It is commonly used
by radiographers to acquire an x-ray image of the inside of an object (as in
medicine or non-destructive testing) but they are also used in sterilization or
fluorescence.
The heart of an X-ray generator is the X-ray tube. Like any vacuum tube, the
X-ray tube contains a cathode, which directs a stream of electrons into a
vacuum, and an anode, which collects the electrons and is made of copper to
dissipate the heat generated by the collisions. When the electrons collide with
the target, about 1% of the resulting energy is emitted as X-rays, with the
remaining 99% released as heat. Because the electrons reach relativistic
speeds and deposit so much energy, the target is usually made of tungsten,
although other materials can be used, particularly in XRF applications. A
cooling system is
necessary to cool the anode; many X-ray generators use water or oil
recirculating systems.
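The ~1%/99% energy split above implies a large anode heat load, which is why cooling matters. A rough sketch, with illustrative tube settings that are assumptions rather than values from the text:

```python
# Rough power budget for an X-ray tube using the ~1% X-ray / ~99% heat
# split quoted above. Tube voltage and current are illustrative only.

def tube_power_budget(kilovolts, milliamps, xray_fraction=0.01):
    """Return (x-ray power, heat power) in watts for a given tube setting."""
    total_w = kilovolts * 1e3 * milliamps * 1e-3   # P = V * I
    return total_w * xray_fraction, total_w * (1.0 - xray_fraction)

# A 100 kV, 10 mA tube dissipates 1000 W, almost all of it as anode heat.
xrays_w, heat_w = tube_power_budget(100.0, 10.0)
print(xrays_w, heat_w)  # 10.0 990.0
```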
Accelerator
Reference: Particle accelerator, Available at
http://en.wikipedia.org/wiki/Particle_accelerator
A particle accelerator is a device that uses electromagnetic fields to propel
charged particles to high speeds and to contain them in well-defined beams.
Large accelerators are best known for their use in particle physics as colliders
(e.g. the Large Hadron Collider (LHC) at CERN, RHIC, and Tevatron). Other
kinds of particle accelerators are used in a large variety of applications,
including particle therapy for oncological purposes, and as synchrotron light

sources for the study of condensed matter physics. There are currently more
than 30,000 accelerators in operation around the world.
There are two basic classes of accelerators: electrostatic and oscillating field
accelerators. Electrostatic accelerators use static electric fields to accelerate
particles. A small-scale example of this class is the cathode ray tube in an
ordinary old television set. Other examples are the Cockcroft-Walton
generator and the Van de Graaff generator. The achievable kinetic energy for
particles in these devices is limited by electrical breakdown. Oscillating field
accelerators, on the other hand, use radio frequency electromagnetic fields to
accelerate particles, and circumvent the breakdown problem. This class, which
was first developed in the 1920s, is the basis for all modern accelerator
concepts and large-scale facilities.
b. Food irradiation
Reference: Food irradiation, Available at
http://en.wikipedia.org/wiki/Food_irradiation
Food irradiation is a process in which food is exposed to high doses of
radiation in the form of gamma rays, X-rays or electron beams. This treatment
is used to preserve food, reduce the risk of foodborne illness, prevent the
spread of invasive pests, delay or eliminate sprouting or ripening, increase
juice yield, and improve re-hydration. It is permitted by over 50 countries, with
500,000 metric tons of foodstuffs annually processed worldwide.
c. Neutron radiography
Reference: Neutron imaging, Available at
http://en.wikipedia.org/wiki/Neutron_imaging
Neutron imaging is the process of making an image with neutrons. The
resulting image is based on the neutron attenuation properties of the imaged
object. The resulting images have much in common with industrial X-ray
images, but since the image is based on neutron attenuating properties
instead of X-ray attenuation properties, some things easily visible with
neutron imaging may be very challenging or impossible to see with X-ray
imaging techniques (and vice versa).
X-rays are attenuated based on a material's density: denser materials stop
more X-rays. With neutrons, a material's likelihood of attenuating a neutron is
not related to its density. Some light materials such as boron will absorb
neutrons, while hydrogen will generally scatter neutrons, and many commonly
used metals allow most neutrons to pass through them. This can make neutron

imaging better suited in many instances than X-ray imaging; for example,
looking at O-ring position and integrity inside of metal components, such as
the segment joints of a Solid Rocket Booster.
d. Radiological terrorism
Reference: Radiological terrorism, Available at http://www.nctsnet.org/traumatypes/terrorism/radiological
Radiological terrorism is the intentional use of radiological materials to cause
physical and psychological damage to a civilian population. The terrorist seeks
to attack the basic sense of security and well-being of the general public
through inflicting physical injury, loss of life, and destruction of property. A
radiological attack may be overt, with the terrorist announcing the release, or
it may be covert, where the attack becomes clear only after people become ill
following exposure.
Radiological terrorism involves the dispersion of radiological material to
contaminate people. This can be accomplished by using an RDD, a
radiological dispersion device ("dirty bomb"), which refers to (1) placing
radiological material with a conventional bomb that explodes and disperses
the radioactive materials over a limited area (determined by the weather,
nature of the material, and so forth); or (2) placing radioactive materials in a
place where people come into close contact with the materials. Radiological
terrorism is not the same as nuclear terrorism, in that a nuclear detonation or
explosion involves a large geographical area and a different kind of radiation.
There are numerous possible radioactive sources for a dirty bomb, as
radioactive material is used in medical centers, laboratories, and industrial
plants. Radiological agents dispersed into fine particles can affect the body by
two primary processes. The first is "internal contamination," which involves
either inhalation (the breathing in) of contaminated material or ingestion of
contaminated food or water. The second is "external contamination," which
refers to radiation absorbed by the skin.
e. Radioactive waste management
Reference: Herman Cember. Health Physics. 4th edition, pp.595-596, 630-631
Proper collection and management of radioactive waste is an integral part of
contamination control and internal (as well as external) radiation protection. In
one sense, we cannot dispose of radioactive waste. All other types
(nonradioactive) of hazardous wastes can be treated, either chemically,
physically, or biologically, in order to reduce their toxicity. In the case of
radioactive wastes, on the other hand, nothing can be done to decrease their
radioactivity, and hence their inherent toxic properties. The only means of
ultimate disposal is through time - to allow the radioactivity to decay. However,
the wastes can be treated and stored in a manner that essentially eliminates
their potential threat to the biosphere. Solid and liquid wastes are treated to
minimize their volume, and liquid wastes are converted into solids by such
means as vitrification, or by incorporating the liquid either into concrete or
asphalt, or into an insoluble plastic. The treated solid waste is packaged in
containers according to the class of the waste and buried either in shallow
engineered trenches in seismologically and hydrologically stable soil that are
then covered with soil, or in deep seismologically and hydrologically stable
geologic formations.
Radioactive wastes, which include materials of widely differing types and
activities, can originate from any industrial, medical, scientific, university,
decommissioning, or agricultural activity in which radioisotopes are used or
produced. For regulatory purposes, waste is considered to be radioactive if it
contains radionuclides at concentrations or activities greater than those
specified by a regulatory authority. For example, the U.S. Nuclear Regulatory
Commission (U.S. NRC) regulations state that 3H and 14C in animal tissues
and in liquid scintillation media in concentrations not greater than 0.05 μCi
(1850 Bq) per gram may be disposed of as if it were not radioactive. It must
be emphasized that this definition of radioactive waste is for regulatory
purposes only. Waste materials whose activity, quantity, or concentration does
not exceed this regulatory lower limit are radioactive from a physical point of
view. However, because of their low levels of activity, they are not considered
to be hazardous.
Although nothing short of allowing natural decay can be done to reduce the
radioactivity, and hence the inherent toxic properties, of radioactive wastes,
they can be treated to render them essentially nonhazardous. One of the main
objectives of waste treatment is to prevent the entrance of radioactive
nuclides into the biosphere, where even small amounts might be accumulated
by certain plants or animals to potentially toxic concentrations. One treatment
method that greatly reduces the potential hazard is immobilization in a highly
stable matrix, such as vitrification of high-level liquid waste by incorporating it

into glass, or by adsorbing the radionuclides onto clay and then firing the clay
at a high temperature, thereby locking the radionuclides into the clay. Both
these treatment methods prevent the radionuclides from entering into the
biosphere. Other stable matrices include concrete, asphalt, and plastics.
Treatment methods include volume reduction of liquid wastes by evaporation
and physical compaction of solid wastes, and then packaging and burial in a
designated burial site. Low-level liquid wastes may be diluted to
concentrations within regulatory requirements, and then released to the
environment (although this treatment modality may not be societally
acceptable). The exact manner of treatment and disposition is determined by
public opinion and by technical and engineering considerations.
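Because decay is the only ultimate disposal route, decay-in-storage times follow directly from the half-life: t = T1/2 x log2(A0/Alimit). A sketch with illustrative, non-regulatory numbers:

```python
import math

# Decay-in-storage sketch: time for a source to decay from an initial
# activity down to a chosen limit. Nuclide and limit are illustrative,
# not regulatory values.

def storage_time(half_life, activity, limit):
    """Time (same units as half_life) to decay from activity to limit."""
    return half_life * math.log2(activity / limit)

# A Co-60 source (half-life ~5.27 y) at 100 GBq, decayed to 0.1 GBq,
# needs roughly ten half-lives (2^10 ~ 1000).
print(storage_time(5.27, 100.0, 0.1))  # ~52.5 years
```

The example makes the text's point concrete: a factor-of-1000 activity reduction always costs about ten half-lives, whatever the nuclide.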
f. Space radiation
Reference: Cosmic ray, Available at
http://en.wikipedia.org/wiki/Galactic_cosmic_rays
Cosmic rays are immensely high-energy radiation, mainly originating outside
the Solar System. They may produce showers of secondary particles that
penetrate and impact the Earth's atmosphere and sometimes even reach the
surface. Composed primarily of high-energy protons and atomic nuclei, they
are of mysterious origin. Data from the Fermi space telescope (2013) have
been interpreted as evidence that a significant fraction of primary cosmic rays
originate from the supernovae of massive stars. However, this is not thought
to be their only source. Active galactic nuclei probably also produce cosmic
rays.

g. Aerosol physics - physical elements and determinants of exposure


Radioactive particles are defined as a localized aggregation of radioactive
atoms that give rise to an inhomogeneous distribution of radionuclides
significantly different from that of the matrix background. In water, particles
are defined as entities having diameters larger than 0.45 μm, i.e., that will
settle due to gravity. Radionuclide species within the molecular mass range
0.001 μm - 0.45 μm are referred to as radioactive colloids or pseudo-colloids.
Using the grain size categories for sand, silt and clays, particles larger than 2
mm should be referred to as fragments. In air, radioactive particles ranging
from submicron sizes in aerosols to fragments are classified according to their
aerodynamic diameters, where particles less than 10 μm are considered
respirable.

i. Deposition of particles as a function of size: diffusion, impaction
1. environmental deposition
Reference: Commission of the European Communities. Aerosol
measurements and nuclear accidents: A reconsideration, p. 8,
1987.
Environmental deposition depends on many parameters in
addition to the properties of the aerosol. Consequently,
measurements of air concentration alone will never suffice.
Direct measurements of deposition in rain and in soil or
vegetation samples can be made simply and economically for
many radionuclides and will form an essential part of an
environmental monitoring program. Deposition from Chernobyl
not only illustrated the great importance of scavenging by rain,
but also the extreme spatial variability that occurred in many
countries where convective storms caused the bulk of the
deposition. Variations of an order of magnitude were observed
between sites a few kilometers apart. Another accident in
different weather conditions might yield a different kind of
spatial distribution, and a deposition monitoring program would
need to respond by adjusting the distribution of sampling points
for soil or vegetation.
2. internal deposition
Reference: Herman Cember. Health Physics. 4th edition, pp.362-363

The combination of these three effects - inertial impaction,
gravitational settling, and Brownian motion - leads to a
maximum likelihood of deposition in the deep respiratory tract
for particles in the 1-2 μm size range, and a minimum
deposition for particles between 0.1 and 0.5 μm, as shown in
Figure 8-5.
The depth of penetration of airborne particles into the
respiratory tract depends on the size of the airborne particles.
Large dust particles, in excess of about 5 μm, are likely to be
filtered out by the nasal hair or to impact on the
nasopharyngeal surface. The effect of gravitational settling


becomes less pronounced as the particle size decreases. For
practical purposes, therefore, small particles may be regarded
as remaining suspended in the atmosphere, and all but very
large particles may be considered to be carried by moving air. In
the respiratory tree, because of the relatively small cross
sectional areas of many of the air passages, the inspired air
may attain relatively high velocities. Large particles that escape
the hair-filter in the nose therefore have high kinetic energies as
they pass through the air passages. As a consequence of the
momentum of such a heavy particle, it cannot follow the
inspired air around sharp curves, and strikes the walls of the
upper respiratory tract. As the particle size decreases below 5
μm, this inertial impact decreases, and an increasing number of
particles are carried down into the lung. The air in the alveoli is
relatively still - since only a small fraction of the air there is

exchanged with incoming air during a respiratory excursion.


Particles that are carried into the deep respiratory tract,
therefore, have the opportunity to settle out under the force of
gravity.

Gravitational settling, however, decreases with decreasing
particle size and reaches a minimum when the particle size is
about 0.5 μm. As the particle size decreases below about 0.1
μm, the effect of Brownian motion becomes significant. As the
particles move randomly about, they may strike the alveolar
wall and get trapped on its moist surface.
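The gravitational-settling behavior described above can be quantified with Stokes' law, v = rho*g*d^2/(18*mu). A minimal sketch; air viscosity and unit particle density are assumed illustrative values, and the sub-micron slip correction is omitted:

```python
# Stokes terminal settling velocity for a small sphere in air.
# Valid well below ~50 um diameter; slip correction omitted.

def settling_velocity(diameter_m, density_kg_m3=1000.0,
                      air_viscosity=1.8e-5, g=9.81):
    """Terminal settling velocity (m/s): v = rho * g * d^2 / (18 * mu)."""
    return density_kg_m3 * g * diameter_m ** 2 / (18.0 * air_viscosity)

# A 1-um unit-density particle settles at only ~3e-5 m/s (~0.03 mm/s),
# which is why such particles effectively remain suspended in still air.
print(settling_velocity(1e-6))
```

The d^2 dependence explains the text: halving the diameter cuts the settling rate fourfold, so settling rapidly loses out to Brownian diffusion below ~0.5 μm.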
ii. How radioactivity associates with particles
1. dispersion as particles of bulk radioactive materials
Reference: YAO Rentai. Atmospheric Dispersion of Radioactive
Material in Radiological Risk Assessment and Emergency
Response. Progress in NUCLEAR SCIENCE and TECHNOLOGY,
Vol. 1, p.7-13 (2011)
The three marked types of dispersion models, which may depict
the development of dispersion modeling technique for the
application in radiological risk assessment and emergency
response, are Gaussian plume models in the 1960s and 1970s,
Lagrangian-puff models and particle random walk models in the
1980s - 1990s, and developing CFD (Computational Fluid
Dynamics) models in the 2000s. Current available atmospheric
dispersion models range from the relatively simple to the highly
complex. In order to determine how dispersion models can be
applied most effectively, it is important to identify the needs in
radiological risk assessment and emergency response.
2. deposition of atomic or molecular radioactive species on
pre-existing particles
3. formation of particles about atomic or molecular
radioactive species
Reference: Constantin Papastefanou, Radioactive Aerosols, pp.
8-9, 2008.
The aerosol particles are formed either by coagulation and
condensation processes or by gas-to-particle conversion.
Analytically:

Coagulation and condensation


Aerosol particles tend to coalesce when they collide with each
other. Since at normal humidities most particles are sheathed
with moisture, the sticking probability is close to unity. Collisions
between two particles lead to the formation of a new particle of
larger size. This process, called coagulation, causes the size
distribution to change in favour of large particles. Coagulation
must be distinguished from condensation, which describes the
deposition of vapour-phase material onto particulate matter. In
the absence of pre-existing particles, condensation leads to the
formation of new Aitken nuclei, provided that the vapour
pressure of the condensing substance is sufficiently high. The
last process is termed homogeneous nucleation or gas-to-particle
conversion.
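The coagulation process described above is often idealized as monodisperse coagulation, dN/dt = -K*N^2, which integrates to N(t) = N0 / (1 + K*N0*t). A sketch with an assumed coagulation coefficient (an illustrative order of magnitude, not a measured value):

```python
# Simple monodisperse coagulation: dN/dt = -K * N^2, so
# N(t) = N0 / (1 + K * N0 * t). The concentration halves after 1/(K*N0).

def number_concentration(n0_per_cm3, k_cm3_per_s, t_s):
    """Number concentration (per cm^3) after time t_s for simple coagulation."""
    return n0_per_cm3 / (1.0 + k_cm3_per_s * n0_per_cm3 * t_s)

# Assuming N0 = 1e6 /cm3 and K ~ 5e-10 cm3/s for submicron aerosol,
# the concentration halves in 1/(K*N0) = 2000 s (~33 min).
print(number_concentration(1e6, 5e-10, 2000.0))  # ~5e5 per cm3
```

Note the self-limiting behavior: as N falls, collisions become rarer, so coagulation slows itself down, shifting the size distribution toward fewer, larger particles as described above.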
Gas-to-particle conversion
Atmospheric gas-phase reactions may lead to the formation of
condensible products, which subsequently associate with the
atmospheric aerosol. Condensation may either cause the
formation of new particles in the Aitken size range
(homogeneous nucleation) or deposit material onto pre-existing
particles (heterogeneous condensation). The gas-to-particle
conversion usually starts with air free from particles. The
development of the particle size range goes through three
successive stages, dominated by nucleation, coagulation and
heterogeneous condensation. In the atmosphere, all three
processes take place concurrently. The generation of new
particles requires conditions that allow the growth of molecular
clusters by condensation in the phase of competition from
heterogeneous condensation. Molecular clusters are formed due
to weakly attractive forces between molecules, the van der
Waals forces. Except under conditions of low temperature, it is
difficult to observe clusters containing more than a few
molecules.
iii. Environmental influences on carriers of radioactivity
1. humidity as a cause of size change condensation or
evaporation
2. background aerosols as scavengers of airborne
radioactivity for either beneficial purposes or otherwise

Reference: Herman Cember. Health Physics. 4th edition, pp.605-607


Of particular interest in evaluating the safety of discharge into
the air is the relationship between the rate of discharge and
the ground-level concentrations - both in the breathing zone
and on the ground (as fallout) - of the discharged radioactivity.
The ground-level distribution of the discharged radioactivity
depends on a number of factors, including atmospheric
stability, wind velocity, type of terrain, the nature of the
boundary layer of air (the air layer immediately over the ground
for a distance of several hundred feet), and the height of the
chimney.
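For the chimney-discharge problem above, the classic Gaussian plume result for the ground-level centerline concentration is C = Q / (pi * u * sigma_y * sigma_z) * exp(-H^2 / (2 * sigma_z^2)). A hedged sketch: all input values are illustrative, and sigma_y, sigma_z would normally be read from stability-class curves (e.g. Pasquill-Gifford) at the downwind distance of interest:

```python
import math

# Gaussian plume, ground-level centerline concentration with ground
# reflection, for a continuous release from a stack of effective height H.

def ground_level_concentration(q_bq_per_s, u_m_per_s, sigma_y, sigma_z, h_m):
    """Ground-level centerline air concentration (Bq/m^3)."""
    return (q_bq_per_s / (math.pi * u_m_per_s * sigma_y * sigma_z)
            * math.exp(-h_m ** 2 / (2.0 * sigma_z ** 2)))

# Illustrative case: 1e6 Bq/s release, 5 m/s wind, sigma_y = 100 m,
# sigma_z = 50 m, effective stack height H = 50 m.
c = ground_level_concentration(1e6, 5.0, 100.0, 50.0, 50.0)
print(c)  # ~7.7 Bq/m^3
```

The exponential term captures the stack-height benefit noted in the text: raising H relative to sigma_z sharply reduces the ground-level concentration.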
