

The Geiger counter, or Geiger-Müller counter, is an instrument used for measuring ionizing radiation. It detects radiation such as alpha particles, beta particles and gamma rays using the ionization produced in a Geiger-Müller tube, which gives its name to the instrument. The original detection principle was discovered in 1908, but it was not until the development of the Geiger-Müller tube in 1928 that the Geiger-Müller counter became a popular instrument for use in applications such as radiation dosimetry, radiological protection, experimental physics and the nuclear industry. A typical Geiger counter consists of a Geiger-Müller tube, a visual readout, and an audio readout. The Geiger-Müller tube or detector is the heart of a Geiger counter. It is a type of ionization chamber that counts particles of radiation. That count is read by the user through a visual readout in the form of a traditional analog meter, or an electronic LCD (liquid crystal display) readout. These meters are available in different units, including mR/hr (milliroentgens per hour) and µSv/hr (microsieverts per hour).
The Geiger counter consists of two main elements: the Geiger-Müller tube, which detects the radiation, and the processing and display electronics. The Geiger-Müller tube is filled with an inert gas such as helium, neon, or argon at low
pressure, which briefly conducts electrical charge when a particle or photon of
incident radiation makes the gas conductive by ionization. The ionization
current is greatly amplified within the tube by the Townsend avalanche effect
to produce an easily measured detection pulse. This makes the G-M counter
relatively cheap to manufacture, as the subsequent electronic processing is
greatly simplified.
In summary then, here's what happens when a Geiger counter detects some radiation:
1. Radiation is moving about randomly outside the detector tube.
2. Some of the radiation enters the window at the end of the tube.
3. When radiation collides with gas molecules in the tube, it causes ionization: some of the gas molecules are turned into positive ions and electrons.
4. The positive ions are attracted to the outside wall of the tube.
5. The electrons are attracted to a metal wire running down the inside of the tube, which is maintained at a high positive voltage.
6. Many electrons travel down the wire, making a burst of current in a circuit connected to it.
7. The electrons make a meter needle deflect and, if a loudspeaker is connected, you can hear a loud click every time particles are detected. The number of clicks you hear gives a rough indication of how much radiation is present (the meter gives a much more accurate idea).
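The counting chain in these steps can be sketched in a few lines: accumulate pulses over an interval, convert to counts per minute, and apply a tube-specific calibration factor to estimate a dose rate. The calibration factor below is a hypothetical illustrative value; real meters are calibrated for a specific tube model.

```python
# Sketch: turning raw Geiger tube pulses into a count rate and a rough
# dose-rate reading, as a survey meter's electronics would.
# The calibration factor is a made-up illustrative value; a real meter's
# factor depends on the specific tube and is set by the manufacturer.

def count_rate_cpm(pulse_count, interval_seconds):
    """Counts accumulated over an interval -> counts per minute."""
    return pulse_count * 60.0 / interval_seconds

def dose_rate_usv_per_hr(cpm, usv_per_hr_per_cpm=0.0057):
    """Apply a (hypothetical) tube calibration factor to get uSv/hr."""
    return cpm * usv_per_hr_per_cpm

cpm = count_rate_cpm(pulse_count=25, interval_seconds=60)
print(cpm)                         # 25.0 counts per minute
print(dose_rate_usv_per_hr(cpm))   # rough dose-rate estimate
```

This mirrors why the click rate is only a rough indication: the meter applies the same per-count weighting regardless of particle energy.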

In 1908 Hans Geiger, under the supervision of Ernest Rutherford at the Victoria University of Manchester, developed an experimental technique for detecting alpha particles that would later be used in the Geiger-Müller tube. This early counter was only capable of detecting alpha particles and was part of a larger experimental apparatus. The fundamental ionization mechanism used was discovered by John Sealy Townsend through his work between 1897 and 1901, and is known as the Townsend discharge, which is the ionization of molecules by ion impact.
It was not until 1928 that Geiger and Walther Müller (a PhD student of Geiger) developed the sealed Geiger-Müller tube, which built on the basic ionization principles previously used experimentally. This was relatively
small and rugged, and could detect more types of ionizing radiation. Now a practical radiation instrument could
be produced relatively cheaply, and so the Geiger-Müller counter was born. As the tube output required little
electronic processing, a distinct advantage in the thermionic valve era due to minimal valve count and low
power consumption, the instrument achieved great popularity as a portable radiation detector.
Modern versions of the Geiger counter use the halogen tube invented in 1947 by Sidney H. Liebson. It superseded the earlier Geiger tube because of its much longer life and lower operating voltage, typically 400-600 volts.
The first historical uses of the Geiger principle were for the detection of alpha and beta particles, and the instrument is still used for this purpose today. For alpha particles and low energy beta particles the "end-window" type of G-M tube has to be used, as these particles have a limited range even in free air and are easily stopped by a solid material. Therefore the tube requires a window which is thin enough to allow as many as possible of these particles through to the fill gas. The window is usually made of mica with a density of about 1.5-2.0 mg/cm².
Geiger counters are widely used to detect gamma radiation, and for this the windowless tube is used.
However, efficiency is generally low due to the poor interaction of gamma rays compared with alpha and beta
particles. For instance, a chrome steel G-M tube is only about 1% efficient over a wide range of energies. For
high energy gamma it largely relies on interaction of the photon radiation with the tube wall material, usually 1-2 mm of chrome steel on a "thick-walled" tube, to produce electrons within the wall which can enter and ionize
the fill gas. This is necessary as the low pressure gas in the tube has little interaction with high energy gamma
photons. However, for low energy photons there is greater gas interaction and the direct gas ionization effect
increases. With decreasing energy the wall effect gives way to a combination of wall effect and direct
ionization, until direct gas ionization dominates. Due to the variance in response to different photon energies,
thick walled steel tubes employ what is known as "energy compensation" which attempts to compensate for
these variations over a large energy range. Low energy photon radiation such as low energy X rays or gamma
rays interacts better with the fill gas. Consequently a typical design for low energy photon detection for these is
a long tube with a thin wall or with an end window.
A variation of the Geiger tube is used to measure neutrons, where the gas used is boron trifluoride or helium-3 and a plastic moderator is used to slow the neutrons. Neutron capture in the gas creates charged particles inside the detector, and thus neutrons can be counted.
The term "Geiger counter" is commonly used to mean a hand-held survey type meter; however, the Geiger principle is in wide use in installed "area gamma" alarms for personnel protection, and in process
measurement and interlock applications. A Geiger tube is still the sensing device, but the processing electronics
will have a higher degree of sophistication and reliability than that used in a hand held survey meter.


A scintillation counter is an instrument for detecting and measuring ionizing radiation. It consists of a scintillator which generates photons of light in response to incident radiation, a sensitive photomultiplier tube which converts the light to an electrical signal, and the necessary electronics to
process the photomultiplier tube output. The scintillation
counter not only can detect the presence of a particle,
gamma ray, or x-ray, but can measure the energy, or the
energy loss. Scintillation counters are the tools of the
professional nuclear scientist. Scintillation counters exist
for gamma, x-ray, beta, and alpha radiation (a specific
unit for each). When used with a multi-channel spectrum
analyzer, the counter can identify isotopes by their energies.

When a charged particle
strikes the scintillator, its atoms
are excited and photons are
emitted. These photons are directed at the photocathode, which emits
electrons by the photoelectric
effect. These electrons are
electrostatically accelerated and
focused by an electrical
potential so that they strike the
first dynode of the tube. The
impact of a single electron on
the dynode releases a number of
secondary electrons which are
in turn accelerated to strike the
second dynode. Each subsequent dynode impact releases further electrons, and so there is a current amplifying
effect at each dynode stage. Each stage is at a higher potential than the previous to provide the accelerating
field. The resultant output signal at the anode is in the form of a measurable pulse for each photon detected at
the photocathode, and is passed to the processing electronics. The pulse carries information about the energy of
the original incident radiation on the scintillator. Thus both the intensity and the energy of the radiation can be measured.
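As a rough sketch of the multiplication process just described: if each dynode impact releases several secondary electrons, the overall gain is that factor raised to the power of the number of stages. The figures below (5 electrons per impact, 10 dynodes) are illustrative assumptions, not values from the text.

```python
# Sketch of the photomultiplier's stage-by-stage gain: each dynode
# multiplies the electron count by a secondary-emission factor, so the
# overall gain grows geometrically with the number of stages.
# 5 electrons per impact and 10 dynodes are assumed illustrative values.

def pmt_gain(secondary_emission=5, stages=10):
    """Overall multiplication = (electrons per impact) ** (number of dynodes)."""
    return secondary_emission ** stages

print(pmt_gain())  # 9765625 -- one photoelectron becomes ~1e7 electrons
```

The geometric growth is why a single photon at the photocathode ends up as an easily measurable current pulse at the anode.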
The scintillator must be in complete darkness so that visible light photons do not swamp the individual
photon events caused by incident ionising radiation. To achieve this, a thin opaque foil, such as aluminized
mylar, is often used, though it must have a low enough mass to prevent undue attenuation of the incident
radiation being measured.
The scintillator consists of a transparent crystal, usually a phosphor, plastic (usually
containing anthracene) or organic liquid (see liquid scintillation counting) that fluoresces when struck
by ionizing radiation. Cesium iodide (CsI) in crystalline form is used as the scintillator for the detection of
protons and alpha particles. Sodium iodide (NaI) containing a small amount of thallium is used as a scintillator
for the detection of gamma rays, and zinc sulphide is widely used as a detector of alpha particles.

The modern electronic scintillation counter was invented in 1944 by Sir Samuel Curran whilst he was
working on the Manhattan Project at the University of California at Berkeley, and it is based on the work of
earlier researchers reaching back to Antoine Henri Becquerel, who is generally credited with
discovering radioactivity, whilst working on the phosphorescence of certain uranium salts (in 1896).
The spinthariscope was an early method of detecting the scintillation events by eye.
The quantum efficiency of a gamma-ray detector (per unit volume) depends upon
the density of electrons in the detector, and certain scintillating materials, such as sodium
iodide and bismuth germanate, achieve high electron densities as a result of the high atomic numbers of
some of the elements of which they are composed. However, detectors based on semiconductors,
notably hyperpure germanium, have better intrinsic energy resolution than scintillators, and are preferred
where feasible for gamma-ray spectrometry.
In the case of neutron detectors, high efficiency is gained through the use of scintillating
materials rich in hydrogen that scatter neutrons efficiently. Liquid scintillation counters are an efficient
and practical means of quantifying beta radiation.
Scintillation counters are used to measure radiation in a variety of applications.

Hand held radiation survey meters

Personnel and environmental monitoring for Radioactive contamination
Medical imaging
National and homeland security
Border security
Nuclear plant safety
Radon levels in water
Oil Well logging

Several products have been introduced in the market utilizing scintillation counters for detection of
potentially dangerous gamma-emitting materials during transport. These include scintillation counters designed
for freight terminals, border security, ports, weigh bridge applications, scrap metal yards and contamination
monitoring of nuclear waste. There are variants of scintillation counters mounted on pick-up trucks and
helicopters for rapid response in case of a security situation due to dirty bombs or radioactive waste. Hand-held
units are also commonly used.

Radiocarbon dating is a method of determining the age
of an object by using the properties of radiocarbon, a
radioactive isotope of carbon. The method was invented
by Willard Libby in the late 1940s and soon became a standard
tool for archaeologists. It depends on the fact that radiocarbon,
often abbreviated as 14C, is constantly being created in the
atmosphere by the interaction of cosmic rays with atmospheric
nitrogen. The resulting radiocarbon combines with atmospheric
oxygen to form radioactive carbon dioxide. This is then
incorporated into plants by photosynthesis, and animals
acquire 14C by eating the plants. When the animal or plant dies,
it stops exchanging carbon with its environment, and from that point the amount of 14C it contains begins to
reduce as the 14C undergoes radioactive decay. Measuring the amount of 14C in a sample from a dead plant or
animal, such as a piece of old wood or a fragment of bone, provides information that can be used to calculate when
the animal or plant died. The oldest dates that can be reliably measured by radiocarbon dating are around 50,000
years ago, though special preparation methods occasionally permit dating of older samples. Measurement of
radiocarbon was originally done by beta-counting devices, so called because they counted the amount of beta
radiation emitted by decaying 14C atoms in a sample. More recently, accelerator mass spectrometry has become
the method of choice; it can be used with much smaller samples (as small as individual plant seeds), and gives
results much more quickly.
In the early 1930s Willard Libby was a chemistry student at the University of California, Berkeley, receiving his
Ph.D. in 1933. He remained there as an instructor until the end of the decade. In 1939 the Radiation Laboratory
at Berkeley began experiments to determine if any of the elements common in organic matter had isotopes with
half-lives long enough to be of value in biomedical research. It was soon discovered that 14C's half-life was far
longer than had been previously thought, and in 1940 this was followed by proof that the interaction of slow
neutrons with 14N was the main pathway by which 14C was created. It had previously been thought 14C would be
more likely to be created by deuterons interacting with 13C. At about this time Libby read a paper by W. E.
Danforth and S. A. Korff, published in 1939, which predicted the creation of 14C in the atmosphere by neutrons
from cosmic rays which had been slowed down by collisions with molecules of atmospheric gas. It was this
paper that first gave Libby the idea that radiocarbon dating might be possible.
In 1945, Libby moved to the University of Chicago. He published a paper in 1946 in which he proposed
that the carbon in living matter might include 14C as well as non-radioactive carbon. Libby and several
collaborators proceeded to experiment with methane collected from sewage works in Baltimore, and
after isotopically enriching their samples they were able to demonstrate that they contained radioactive 14C. By
contrast, methane created from petroleum had no radiocarbon activity. The results were summarized in a paper
in Science in 1947, and the authors commented that their results implied it would be possible to date materials
containing carbon of organic origin.
Libby and James Arnold proceeded to experiment with samples of wood of known age. For example,
two wood samples taken from the tombs of two Egyptian kings, Zoser and Sneferu, independently dated to
2625 BC plus or minus 75 years, were dated by radiocarbon measurement to an average of 2800 BC plus or
minus 250 years. These results were published in Science in 1949. In 1960, Libby was awarded the Nobel Prize
in Chemistry for this work.
A radiocarbon measurement is termed a conventional radiocarbon age (CRA). The CRA conventions include (a) usage of the Libby half-life, (b) usage of Oxalic Acid I or II or any appropriate secondary standard as the modern radiocarbon standard, (c) correction for sample isotopic fractionation to a normalized or base value of -25.0 per mille relative to the ratio of carbon 12/carbon 13 in the carbonate standard VPDB (a Cretaceous belemnite formation at Peedee in South Carolina), (d) zero BP (Before Present) is defined as AD 1950, and (e) the assumption that global radiocarbon levels are constant. Standard errors are also reported in a radiocarbon dating result, hence the ± values. These values have been derived through statistical means.

Libby's first detector was a Geiger counter of his own design. He converted the carbon in his
sample to lamp black (soot) and coated the inner surface of a cylinder with it. This cylinder was inserted
into the counter in such a way that the counting wire was inside the sample cylinder, in order that there
should be no material between the sample and the wire. Any interposing material would have interfered
with the detection of radioactivity; the beta particles emitted by decaying 14C are so weak that half are
stopped by a 0.01 mm thickness of aluminium.
Libby's method was soon superseded by gas proportional counters, which were less affected by
bomb carbon (the additional 14C created by nuclear weapons testing). These counters record bursts of
ionization caused by the beta particles emitted by the decaying 14C atoms; the bursts are proportional to
the energy of the particle, so other sources of ionization, such as background radiation, can be identified
and ignored. The counters are surrounded by lead or steel shielding, to eliminate background radiation
and to reduce the incidence of cosmic rays. In addition, anticoincidence detectors are used; these record
events outside the counter, and any event recorded simultaneously both inside and outside the counter is
regarded as an extraneous event and ignored.
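The anticoincidence scheme just described can be sketched as a simple filter: an event in the inner counter is kept only if the guard detector did not fire at (effectively) the same time. The event times and the coincidence window below are hypothetical illustrations.

```python
# Sketch of the anticoincidence logic: an event seen by the inner counter
# is accepted only if no guard-detector event occurred within a
# (hypothetical) coincidence window around it; simultaneous events are
# treated as extraneous (e.g. cosmic rays) and ignored.

def accept_events(inner_times, guard_times, window=1e-6):
    """Return inner-counter events with no simultaneous guard event."""
    accepted = []
    for t in inner_times:
        coincident = any(abs(t - g) <= window for g in guard_times)
        if not coincident:          # extraneous events are rejected
            accepted.append(t)
    return accepted

inner = [0.10, 0.25, 0.40]   # seconds; candidate 14C decay events
guard = [0.25]               # a cosmic ray seen by the outer detector
print(accept_events(inner, guard))  # [0.1, 0.4]
```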
The other common technology used for measuring 14C activity is liquid scintillation counting,
which was invented in 1950, but which had to wait until the early 1960s, when efficient methods of
benzene synthesis were developed, to become competitive with gas counting; after 1970 liquid counters
became the more common technology choice for newly constructed dating laboratories. The counters
work by detecting flashes of light caused by the beta particles emitted by 14C as they interact with a
fluorescing agent added to the benzene. Like gas counters, liquid scintillation counters require shielding
and anticoincidence counters.
For both the gas proportional counter and liquid scintillation counter, what is measured is the
number of beta particles detected in a given time period. Since the mass of the sample is known, this can
be converted to a standard measure of activity in units of either counts per minute per gram of carbon
(cpm/g C), or becquerels per kg (Bq/kg C, in SI units). Each measuring device is also used to measure
the activity of a blank sample, a sample prepared from carbon old enough to have no activity. This
provides a value for the background radiation, which must be subtracted from the measured activity of
the sample which is being dated to get the activity due solely to that sample's 14C. In addition, a sample
with a standard activity is measured, in order to provide a baseline for comparison.
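The blank-and-standard bookkeeping described here can be sketched as follows: subtract the blank (background) count rate from both the sample and the standard, then express the sample relative to the standard. The count rates are invented for illustration.

```python
# Sketch of background subtraction and normalization against a standard,
# as described in the text. All count rates below are illustrative.

def net_activity(measured_cpm, blank_cpm):
    """Background-corrected count rate (counts per minute)."""
    return measured_cpm - blank_cpm

def ratio_to_standard(sample_cpm, standard_cpm, blank_cpm):
    """Sample activity as a fraction of the standard's activity,
    both corrected for the blank's background contribution."""
    return net_activity(sample_cpm, blank_cpm) / net_activity(standard_cpm, blank_cpm)

r = ratio_to_standard(sample_cpm=8.0, standard_cpm=14.0, blank_cpm=2.0)
print(r)  # 0.5 -- the sample shows half the standard's net activity
```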
In nature, carbon exists as two stable, nonradioactive isotopes: carbon-12 (12C), and carbon-13 (13C), and
a radioactive isotope, carbon-14 (14C), also known as "radiocarbon". The half-life of 14C (the time it takes for
half of a given amount of 14C to decay) is about 5,730 years, so its concentration in the atmosphere might be
expected to reduce over thousands of years. However, 14C is constantly being produced in the
lower stratosphere and upper troposphere by cosmic rays, which generate neutrons that in turn create 14C when
they strike nitrogen-14 (14N) atoms. The 14C creation process is described by the following nuclear reaction:

    n + 14N → 14C + p

where n represents a neutron and p represents a proton.

Once produced, the 14C quickly combines with the oxygen in the atmosphere to form carbon dioxide
(CO2). Carbon dioxide produced in this way diffuses in the atmosphere, is dissolved in the ocean, and is taken
up by plants via photosynthesis. Animals eat the plants, and ultimately the radiocarbon is distributed throughout
the biosphere. The ratio of 14C to 12C is approximately 1.5 parts of 14C to 10^12 parts of 12C. In addition, about 1%
of the carbon atoms are of the stable isotope 13C.
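The scale of these numbers can be checked with a few lines of arithmetic; beyond the ratio above and the 5,730-year half-life, the only inputs are the Avogadro constant and the conversion of years to seconds.

```python
import math

# Rough arithmetic behind the ratio quoted above: with ~1.5 atoms of 14C
# per 10**12 atoms of 12C, one gram of modern carbon contains on the
# order of 10**10 atoms of 14C, decaying at a fraction of a becquerel.

AVOGADRO = 6.022e23
RATIO_14C = 1.5e-12                      # 14C/12C ratio from the text
HALF_LIFE_S = 5730 * 365.25 * 24 * 3600  # half-life converted to seconds

atoms_14c_per_gram = (AVOGADRO / 12.0) * RATIO_14C
decay_constant = math.log(2) / HALF_LIFE_S
activity_bq_per_gram = atoms_14c_per_gram * decay_constant

print(f"{atoms_14c_per_gram:.2e}")    # ~7.5e10 atoms of 14C per gram
print(f"{activity_bq_per_gram:.3f}")  # ~0.29 Bq per gram of carbon
```

The tiny activity per gram is why early beta-counting setups needed heavy shielding and long counting times.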
The equation for the radioactive decay of 14C is:

    14C → 14N + e- + ν̄e

By emitting a beta particle (an electron) and an electron antineutrino, one of the neutrons in the 14C nucleus changes to a proton, and the 14C nucleus changes back into the stable (non-radioactive) isotope nitrogen-14 (14N).


Once an organism dies the carbon is no longer replaced. Because the radiocarbon is radioactive, it will
slowly decay away. Obviously there will usually be a loss of stable carbon too but the proportion of radiocarbon
to stable carbon will reduce according to the exponential decay law:
R = A exp(-T/8033)
where R is 14C/12C ratio in the sample, A is the original 14C/12C ratio of the living organism and T is the
amount of time that has passed since the death of the organism.
By measuring the ratio, R, in a sample we can then calculate the age of the sample:
T = -8033 ln(R/A)
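The two formulas above translate directly into code; the 8,033-year figure is the Libby mean-life, that is, the Libby half-life of 5,568 years divided by ln 2.

```python
import math

# The forward decay of the 14C/12C ratio and the inverse age calculation,
# using the document's 8033-year mean-life (Libby half-life / ln 2).

MEAN_LIFE_YEARS = 8033.0

def ratio_after(initial_ratio, years):
    """R = A * exp(-T / 8033)"""
    return initial_ratio * math.exp(-years / MEAN_LIFE_YEARS)

def age_from_ratio(measured_ratio, initial_ratio):
    """T = -8033 * ln(R / A)"""
    return -MEAN_LIFE_YEARS * math.log(measured_ratio / initial_ratio)

# A sample retaining half its original radiocarbon is one Libby
# half-life old: 8033 * ln 2 ~ 5568 years.
print(round(age_from_ratio(0.5, 1.0)))  # 5568
```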
The radiocarbon age of a certain sample of unknown age can be determined by measuring its carbon 14
content and comparing the result to the carbon 14 activity in modern and background samples.
The principal modern standard used by radiocarbon dating labs was the Oxalic Acid I obtained from the
National Institute of Standards and Technology in Maryland. This oxalic acid came from sugar beets in 1955.
Around 95% of the radiocarbon activity of Oxalic Acid I is equal to the measured radiocarbon activity of the absolute radiocarbon standard, a wood from 1890 unaffected by fossil fuel effects.
When the stocks of Oxalic Acid were almost fully consumed, another standard was made from a crop of
1977 French beet molasses. The new standard, Oxalic Acid II, was proven to have only a slight difference with
Oxalic Acid I in terms of radiocarbon content. Over the years, other secondary radiocarbon standards have been developed.
Radiocarbon activity of materials in the background is also determined to remove its contribution from
results obtained during a sample analysis. Background radiocarbon activity is measured, and the values obtained
are deducted from the sample's radiocarbon dating results. Background samples analyzed are usually geological
in origin of infinite age such as coal, lignite, and limestone.
The variation in the 14C/12C ratio in different parts of the carbon exchange reservoir means that a
straightforward calculation of the age of a sample based on the amount of 14C it contains will often give an
incorrect result. There are several other possible sources of error that need to be considered. The errors are of
four general types:

- variations in the 14C/12C ratio in the atmosphere, both geographically and over time: the radiocarbon concentration of the atmosphere has not always been constant; in fact it has varied significantly in the past;
- isotopic fractionation: to determine the degree of fractionation that takes place in a given plant, the amounts of both 12C and 13C isotopes are measured, and the resulting 13C/12C ratio is then compared to a standard ratio known as PDB. The 13C/12C ratio is used instead of 14C/12C because the former is much easier to measure, and the latter can be easily derived: the depletion of 13C relative to 12C is proportional to the difference in the atomic masses of the two isotopes, so the depletion for 14C is twice the depletion of 13C;
- variations in the 14C/12C ratio in different parts of the reservoir: these occur, for example, when some of the carbon reaches the sample by way of the oceans; because the radiocarbon composition of the oceans differs from that of the atmosphere, this can lead to erroneous dates; stable isotope measurements can be used to see if this effect is present, since the stable isotope concentration of the oceans is also different;
- contamination: material from the soil or conservation work may become incorporated into the sample, resulting in an admixture of carbon with a different radiocarbon content; the purpose of chemical pre-treatment is to remove all such material.
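Using the rule stated above, that the 14C depletion is roughly twice the 13C depletion, a first-order fractionation correction can be sketched as below. The linear form and the example δ13C values are illustrative assumptions, not a laboratory's actual procedure.

```python
# Sketch of a first-order fractionation correction, built only on the
# text's rule that 14C depletion is about twice the 13C depletion, with
# samples normalized to the base value of -25 per mille. Illustrative
# approximation, not a lab's exact formula.

def fractionation_corrected_ratio(measured_14c_ratio, delta13c_per_mille):
    """Normalize a measured 14C/12C ratio to delta-13C = -25 per mille."""
    # 14C enrichment/depletion relative to the -25 base, parts per thousand:
    depletion = 2.0 * (delta13c_per_mille - (-25.0)) / 1000.0
    return measured_14c_ratio / (1.0 + depletion)

# A sample at exactly the base value needs no correction:
print(fractionation_corrected_ratio(1.0, -25.0))
# A sample less depleted than the base (delta-13C = -10) is scaled down:
print(fractionation_corrected_ratio(1.0, -10.0))
```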


The radiocarbon formed in the upper atmosphere is mostly in the form of carbon dioxide. This is taken up by plants through photosynthesis. Because the carbon present in a plant comes from the atmosphere in this way, the ratio of radiocarbon to stable carbon in the plant is virtually the same as that in the atmosphere. Plant eating
animals (herbivores and omnivores) get their carbon by eating
plants. All animals in the food chain, including carnivores, get
their carbon indirectly from plant material, even if it is by
eating animals which themselves eat plants. The net effect of this
is that all living organisms have the same radiocarbon to stable
carbon ratio as the atmosphere.



During its life, a plant or animal is exchanging carbon with its surroundings, so the
carbon it contains will have the same proportion of 14C as the biosphere and the carbon exchange reservoir.
Once it dies, it ceases to acquire 14C, but the 14C within its biological material at that time will continue to decay,
and so the ratio of 14C to 12C in its remains will gradually reduce. Because 14C decays at a known rate, the
proportion of radiocarbon can be used to determine how long it has been since a given sample stopped
exchanging carbonthe older the sample, the less 14C will be left.
The equation governing the decay of a radioactive isotope is:

    N = N0 e^(-λt)

where N0 is the number of atoms of the isotope in the original sample (at time t = 0, when the organism from which the sample was taken died), and N is the number of atoms left after time t. λ is a constant that depends on the particular isotope; for a given isotope it is equal to the reciprocal of the mean-life, i.e. the average or expected time a given atom will survive before undergoing radioactive decay. The mean-life, denoted by τ, of 14C is 8,267 years, so the equation above can be rewritten as:

    N = N0 e^(-t/8267)
The sample is assumed to have originally had the same 14C/12C ratio as the ratio in the biosphere, and
since the size of the sample is known, the total number of atoms in the sample can be calculated, yielding N0,
the number of 14C atoms in the original sample. Measurement of N, the number of 14C atoms currently in the
sample, allows the calculation of t, the age of the sample, using the equation above.
The half-life of a radioactive isotope (the time it takes for half of the sample to decay, usually denoted
by t1/2) is a more familiar concept than the mean-life, so although the equations above are expressed in terms of
the mean-life, it is more usual to quote the value of 14C's half-life than its mean-life. The currently accepted
value for the half-life of 14C is 5,730 years. This means that after 5,730 years, only half of the initial 14C will
have remained; a quarter will have remained after 11,460 years; an eighth after 17,190 years; and so on.
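The relationship between the half-life and the mean-life, and the halving pattern just described, can be checked with a few lines of arithmetic:

```python
import math

# Arithmetic check: the mean-life is the half-life divided by ln 2, and
# each successive half-life leaves half of the remaining 14C.

HALF_LIFE = 5730.0
mean_life = HALF_LIFE / math.log(2)
print(round(mean_life))  # 8267 -- matches the mean-life quoted earlier

def fraction_remaining(years, mean_life=mean_life):
    """N / N0 = exp(-t / mean-life)"""
    return math.exp(-years / mean_life)

print(round(fraction_remaining(5730), 4))   # 0.5
print(round(fraction_remaining(11460), 4))  # 0.25
print(round(fraction_remaining(17190), 4))  # 0.125
```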
The above calculations make several assumptions, such as that the level of 14C in the biosphere has
remained constant over time. In fact, the level of 14C in the biosphere has varied significantly and as a result the
values provided by the equation above have to be corrected by using data from other sources in the form of a
calibration curve, which is described in more detail below. For over a decade after Libby's initial work, the
accepted value of the half-life for 14C was 5,568 years; this was improved in the early 1960s to 5,730 years,
which meant that many calculated dates in published papers were now incorrect (the error is about 3%).
However, it is possible to incorporate a correction for the half-life value into the calibration curve, and so it has
become standard practice to quote measured radiocarbon dates in "radiocarbon years", meaning that the dates
are calculated using Libby's half-life value and have not been calibrated. This approach has the advantage of maintaining consistency with the early papers, and also avoids the risk of a double correction for the Libby half-life value.

In 1919, Ernest Rutherford used a radioactive source of alpha particles to bombard nitrogen-14 nuclei:

    14N + 4He → 17O + 1H

The result not only provided evidence for the existence of protons, but also showed that one element could be transmuted into another. Transmutation is the change of one element to another by bombarding the nucleus of the element with nuclear particles or nuclei.

Since then, bombardment reactions have led to the discovery of the neutron (in 1932 by James
Chadwick), the production of new radioactive isotopes, and most currently, the study of "subnuclear" particles.
The British physicist James Chadwick suggested in 1932 that the radiation from beryllium consists of
neutral particles, each with a mass approximately that of a proton. Chadwick's suggestion led to the discovery
of the neutron.

Nuclear bombardment reactions are often referred to by an abbreviated notation. For example, the reaction

    14N + 4He → 17O + 1H

is abbreviated

    14N(α,p)17O
Carrying out bombardment reactions on large nuclei requires very high speeds for the incoming particles. This required the development of "accelerator" technology. A particle accelerator is a device used to accelerate electrons, protons, alpha particles and other ions to very high speeds. Generally, the kinetic energy of these particles is measured in electron-volts. An electron-volt (eV) is the quantity of energy gained by an electron accelerated through a potential difference of one volt. The equivalent, in joules, is 1.602 x 10^-19 J:

    1 eV = 1.602 x 10^-19 J
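The conversion is straightforward to express in code; the 5 MeV example energy is an arbitrary illustration of a typical accelerated-particle scale.

```python
# The eV-to-joule conversion stated above, plus an example at the MeV
# scale typical of accelerated particles. The 5 MeV figure is arbitrary.

EV_TO_JOULES = 1.602e-19

def ev_to_joules(ev):
    """Convert an energy in electron-volts to joules."""
    return ev * EV_TO_JOULES

print(ev_to_joules(1.0))   # 1.602e-19 J, as in the equation above
print(ev_to_joules(5e6))   # a 5 MeV particle carries ~8e-13 J
```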
Typically, accelerators provide millions
of electron-volts to these charged "bullets".
Bombardment of heavy nuclei requires
accelerated particles for successful transmutation
reactions. The figure on the right shows a
diagram of a cyclotron, a type of particle
accelerator consisting of two hollow,
semicircular metal electrodes in which charged
particles are accelerated in stages.


Biological effects begin with the ionization of atoms. The mechanism by which radiation
causes damage to human tissue, or any other
material, is by ionization of atoms in the
material. Ionizing radiation absorbed by human
tissue has enough energy to remove electrons
from the atoms that make up molecules of the
tissue. When the electron that was shared by the
two atoms to form a molecular bond is dislodged
by ionizing radiation, the bond is broken and thus,
the molecule falls apart. This is a basic model for
understanding radiation damage. When ionizing
radiation interacts with cells, it may or may not
strike a critical part of the cell. We consider the
chromosomes to be the most critical part of the
cell since they contain the genetic information and
instructions required for the cell to perform its
function and to make copies of itself for
reproduction purposes.
Even though all subsequent biological effects can be traced back to the interaction of radiation with
atoms, there are two mechanisms by which radiation ultimately affects cells. These two mechanisms are
commonly called direct and indirect effects.
If radiation interacts with the atoms of the DNA molecule, or
some other cellular component critical to the survival of the cell, it is
referred to as a direct effect. Such an interaction may affect the ability
of the cell to reproduce and, thus, survive. If enough atoms are affected
such that the chromosomes do not replicate properly, or if there is
significant alteration in the information carried by the DNA molecule,
then the cell may be destroyed by direct interference with its life-sustaining system.
If a cell is exposed to radiation, the
probability of the radiation interacting with the
DNA molecule is very small since these critical
components make up such a small part of the
cell. However, each cell, just as is the case for
the human body, is mostly water. Therefore,
there is a much higher probability of radiation
interacting with the water that makes up most of
the cell's volume. When radiation interacts with
water, it may break the bonds that hold the
water molecule together, producing fragments
such as hydrogen (H) and hydroxyls (OH).
These fragments may recombine or may interact
with other fragments or ions to form compounds, such as water, which would not harm the cell.
However, they could combine to form toxic substances, such as hydrogen peroxide (H2O2), which can
contribute to the destruction of the cell.
Cells are undamaged by the dose
Ionization may form chemically active substances which in some cases alter the
structure of the cells. These alterations may be the same as those changes that occur
naturally in the cell and may have no negative effect.
Cells are damaged, repair the damage and operate normally
Some ionizing events produce substances not normally found in the cell. These can
lead to a breakdown of the cell structure and its components. Cells can repair the
damage if it is limited. Even damage to the chromosomes is usually repaired. Many
thousands of chromosome aberrations (changes) occur constantly in our bodies. We
have effective mechanisms to repair these changes.
Cells are damaged, repair the damage and operate abnormally
If a damaged cell needs to perform a function before it has had time to repair itself, it
will either be unable to perform the function or will perform it incorrectly
or incompletely. The result may be cells that cannot perform their normal functions or
that now are damaging to other cells. These altered cells may be unable to reproduce
themselves or may reproduce at an uncontrolled rate. Such cells can be the underlying
causes of cancers.
Cells die as a result of the damage
If a cell is extensively damaged by radiation, or damaged in such a way that
reproduction is affected, the cell may die. Radiation damage to cells may depend on
how sensitive the cells are to radiation.
Not all cells are equally sensitive to radiation
damage. In general, cells which divide rapidly and/or
are relatively non-specialized tend to show effects at
lower doses of radiation than those which are less
rapidly dividing and more specialized. Examples of
the more sensitive cells are those which produce
blood. This system (called the hemopoietic system) is
the most sensitive biological indicator of radiation
exposure. The relative sensitivity of different human
tissues to radiation can be seen by examining the
progression of the Acute Radiation Syndrome. Reproductive and
gastrointestinal cells do not regenerate as quickly and are less sensitive. The nerve and muscle cells are the
slowest to regenerate and are the least sensitive cells.
Acute Effects
A single accidental exposure to a high dose of radiation during a short period of time is referred to as an
acute exposure, and may produce biological effects within a short period after exposure. These effects include:
- Skin damage
- Nausea and vomiting
- Malaise and fatigue
- Increased temperature
- Blood changes
- Bone marrow damage
- Damage to cells lining the small intestine
- Damage to blood vessels in the brain
Delayed Effects
The most common delayed effects are various forms of cancer (leukaemia, bone cancer, thyroid cancer, lung
cancer) and genetic defects (malformations in children born to parents exposed to radiation). In any radiological
situation involving the induction of cancer, there is a certain time period between the exposure to radiation and
the onset of disease. This is known as the "latency period" and is an interval in which no symptoms of disease
are present. The minimum latency period for leukaemia produced by radiation is 2 years and can be up to 10
years or more for other types of cancer.
Cells, like the human body, have a tremendous
ability to repair damage. As a result, not all radiation
effects are irreversible. In many instances, the cells are
able to completely repair any damage and function
normally. If the damage is severe enough, the affected
cell dies. In some instances, the cell is damaged but is
still able to reproduce. The daughter cells, however, may
be lacking in some critical life-sustaining component,
and they die. The other possible result of radiation
exposure is that the cell is affected in such a way that it
does not die but is simply mutated. The mutated cell
reproduces and thus perpetuates the mutation. This could
be the beginning of a malignant tumor.
The connection between the effects of exposure to radiation and dose (i.e., the dose-response relationship) is
classified into two categories: non-stochastic and stochastic.
The non-stochastic effects, also referred to as deterministic or tissue and organ effects, are specific to each
exposed individual. They are characterized by:
- A certain minimum dose must be exceeded before the particular effect is observed. Because of this
minimum dose, the non-stochastic effects are also called Threshold Effects. The threshold may differ
from individual to individual.
- The magnitude of the effect increases with the size of the dose received by the individual.
- There is a clear relationship between exposure to radiation and the observed effect on the individual.
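The two defining features of a non-stochastic effect, a threshold below which nothing is observed and a severity that grows with dose, can be sketched in a few lines of code. The threshold and slope values here are hypothetical placeholders chosen for illustration, not radiological data:

```python
def deterministic_severity(dose_mSv, threshold_mSv=500.0, slope=0.002):
    """Toy model of a non-stochastic (deterministic) effect.

    Below the threshold dose no effect is observed; above it,
    the severity of the effect grows with dose. The threshold
    and slope are illustrative values only.
    """
    if dose_mSv <= threshold_mSv:
        return 0.0  # below the threshold: no observable effect
    # above the threshold: severity increases with dose
    return (dose_mSv - threshold_mSv) * slope

# No effect below the threshold; increasing severity above it:
print(deterministic_severity(100))   # below threshold
print(deterministic_severity(1000))  # above threshold
print(deterministic_severity(2000))  # higher dose, greater severity
```

Note the contrast with stochastic effects below: here the dose changes the *severity* of the effect, not the probability that it occurs.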
Stochastic effects are those that occur by chance. They are more difficult to identify since the same type of
effects may appear among individuals not working with radioactive materials. The main stochastic effects are
cancer and genetic defects. According to current knowledge of molecular biology, a cancer is initiated by
damaging chromosomes in a somatic cell. Genetic defects are caused by damage to chromosomes in a germ cell
(sperm or ovum). There is no known threshold for stochastic effects; a single photon or electron can
produce the effect. For these reasons, stochastic effects are said to follow a Linear, Zero-Threshold Dose-Response relationship.
Stochastic effects can also be caused by many other factors, not only by radiation. Since everybody is
exposed to natural radiation, and to other factors, stochastic effects can arise in all of us regardless of the type of
work (working with radiation or not). Whether or not an individual develops the effect is simply a question of chance.
There is a statistical correlation between the number of cancer cases that develop within a population and
the dose received by that population at relatively high levels of radiation. Attempts have been made to
extrapolate the data from these dose levels down to low doses (close to the levels received from
background radiation), but there is no conclusive scientific evidence to confirm the results of these extrapolations.
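The linear, zero-threshold extrapolation described above can be sketched as follows. The risk coefficient used here is a hypothetical placeholder, not a measured value:

```python
def lnt_excess_risk(dose_mSv, risk_per_mSv=5e-5):
    """Linear No-Threshold (LNT) sketch: excess risk is assumed
    to be directly proportional to dose, all the way down to zero.
    risk_per_mSv is an illustrative placeholder coefficient.
    """
    return dose_mSv * risk_per_mSv

# Under the LNT assumption, halving the dose halves the
# estimated excess risk, and zero dose means zero excess risk:
print(lnt_excess_risk(100))  # estimated excess risk at 100 mSv
print(lnt_excess_risk(50))   # half the dose, half the estimate
print(lnt_excess_risk(0))    # no dose, no excess risk
```

The key modeling choice is the absence of any threshold term: unlike the deterministic model, the line passes through the origin, which is precisely the assumption that lacks direct evidence at doses near background levels.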

Since there is no evidence of a lower threshold for the appearance of Stochastic Effects, the prudent course
of action is to ensure that all radiation exposures follow a principle known as ALARA (As Low As Reasonably
Achievable). We will be referring to the application of this principle at U of T in subsequent modules.