Health and Radiation Physics, Lecture Notes R20070927H
Health and Radiation Physics

Heiko Timmers

A lecture course for 2nd year students of physics University of New South Wales Australian Defence Force Academy

July 2003


Contents

1 Bones and body mechanics
  1.1 Mechanical representation of the human skeleton
  1.2 Standing, bending, lifting
  1.3 Walking and running
  1.4 Bone: A good material choice?

2 The eye and vision
  2.1 Cornea, iris, lens and retina
  2.2 Colour perception
  2.3 Exploring vision with simple experiments

3 Hearing
  3.1 Hearing sensitivity
  3.2 Structure of the ear
  3.3 Outer ear and middle ear
  3.4 Inner ear

4 Alpha-decay

5 Beta-decay
  5.1 The story of a carbon atom
  5.2 Carbon-14 dating

6 Gamma-decay
  6.1 Nuclear medicine: history and modern practice
  6.2 Gamma-rays in nuclear medicine
  6.3 The equivalence of energy and mass
  6.4 Scintillation detectors
  6.5 Summary

7 Appendix
  7.1 First in-class test

References


Chapter 1

Bones and body mechanics



Figure 1.1: The three lever classes and schematic examples of each in the body. W is a weight, F is the reaction force at the fulcrum point and M is the muscle force.

The adult human body has 206 bones, forming the skeleton, which gives the body static rigidity and, in conjunction with joints, ligaments and muscles, allows dynamic motion. Insights into fundamental body actions, such as walking, running, or lifting loads, which are generally so familiar that we rarely analyze them, can be gained by applying the principles of mechanics. This can explain, for example, why each person has a different advantageous step frequency, which surfaces require short strides, or why certain lifting techniques pose a health risk. Bone is a complex, living composite material. It appears better suited for its purpose than any synthetic material, so it is interesting to explore how our bone material provides the functionality required to ensure survival.


1.1 Mechanical representation of the human skeleton

The complexity of the human skeleton can be reduced by distinguishing three classes of lever action, see Fig. 1.1. The classes may be defined as follows:

• Class 1: The fulcrum F is between the load W and the point M where the muscle tendons are attached.

• Class 2: W is between M and F.

• Class 3: M is between the fulcrum F and the load W.

For an equilibrium situation, for example holding a weight in your hand or bending over, each of the three forces (W, F, and M) can be calculated, if one of the other forces and the geometrical dimensions are known. This is possible, since for a static equilibrium, both the vector sum of all external forces F_ex, and the



Figure 1.2: The forearm. (a) The muscle and bone system. (b) Forces and dimensions: R is the reaction force of the humerus on the ulna joint. (c) The weight of the arm H is included at its centre of gravity.

vector sum of all external torques τ_ex (about any point) are zero, i.e.

    Σ F_ex = 0    (1.1)
    Σ τ_ex = 0    (1.2)

Exercise 1 Which lever class represents the forearm, when holding a weight? For the forearm shown in Fig. 1.2 calculate the muscle force M required to hold a weight W = 1 kg in the hand. What reaction force R acts on the ulna bone at the joint with the humerus bone (fulcrum) for this weight? For a first estimate ignore the mass of the arm. Then include the mass of the arm (1.5 kg) in your calculation.

Exercise 2 An exercise device is used to strengthen the leg muscles, see Fig. 7.1. Calculate the force M exerted by the muscle in the upper leg, when moving the foot forward to lift the weight. What is the reaction force exerted onto the joint?

Experience tells us that it is more difficult to hold a weight when the forearm is at an angle α below or above the horizontal, and that a weight can be sustained the longest when forearm and body are at a right angle. This suggests that the muscle


Figure 1.3: Mechanical representation of an exercise to strengthen the leg muscles [McCormick & Elliot 2001].

force M is angle dependent. Indeed, for the torque τ at the joint of humerus and ulna it follows, using the same weight and dimensions as in Fig. 1.2, that

    τ = (30 cm · 10 N + 14 cm · 15 N) cos α    (1.3)

The torque thus decreases with increasing angle α; however, since

    M = τ / (4 cm · cos α)    (1.4)

the muscle force required remains constant with angle α. The apparent difficulty in holding a weight off the horizontal must therefore have a different reason. Indeed, it is found that this is a consequence of muscle physiology, which gives most strength at the muscle resting length, in between the two extremes of a fully stretched muscle and that of a contracted muscle.
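The angle independence expressed by Eq. 1.4 can be checked numerically. The sketch below uses the dimensions and forces appearing in Eqs. 1.3 and 1.4 (weight of 10 N at 30 cm, arm weight of 15 N at 14 cm, muscle attached 4 cm from the joint); the cos α factors cancel, so M comes out the same at every angle:

```python
import math

# Dimensions and forces from Eqs. 1.3 and 1.4 (Fig. 1.2).
d_W, W = 0.30, 10.0   # weight held in the hand: lever arm (m) and force (N)
d_H, H = 0.14, 15.0   # weight of the arm: lever arm (m) and force (N)
d_M = 0.04            # lever arm of the muscle attachment (m)

for alpha_deg in (0, 20, 40, 60):
    alpha = math.radians(alpha_deg)
    tau = (d_W * W + d_H * H) * math.cos(alpha)   # Eq. 1.3, in N m
    M = tau / (d_M * math.cos(alpha))             # Eq. 1.4
    print(f"alpha = {alpha_deg:2d} deg: tau = {tau:.2f} N m, M = {M:.1f} N")
```

At every angle the muscle force is 127.5 N, while the torque itself falls off as cos α.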


1.2 Standing, bending, lifting

The mechanics of the spine may be investigated by treating the spine approximately as a rigid beam. When a person stands erect, the weight of the upper body W is directly over the legs and little force is exerted by the back and leg muscles. This changes, and considerable muscle work is then required, for an overweight or pregnant condition, since the person needs to tilt slightly backwards. It can be estimated that a disk between vertebrae in the spinal column is likely to be damaged or even to rupture when it experiences pressures of the order of 10^7 Pa, which is about 100 atm.



Figure 1.4: Bending at an angle of 60◦ to the vertical with and without load L; W is the upper body weight, M is the force of the back muscles, exerted on the spine at a distance d from its base [McCormick & Elliot 2001].

Exercise 3 A person has a body mass of 75 kg and an upper body length of d, see Fig. 1.4. (a) Estimate the force M required from the back muscles, when the person bends to an angle of 60° to the vertical. (b) What pressure does the lumbosacral disk at the base of the spine experience? (c) How do these values change, when the person lifts a weight of 20 kg? (d) For what bending angles with respect to the vertical is the pressure on the spinal disks the largest?

The exercise shows that lifting heavy objects by bending over can increase the pressure on the disks between the vertebrae of the spine to values which come close to their mechanical strength of about 10^7 Pa. This is illustrated in more detail in Fig. 1.5.
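One way to set up the torque balance for a calculation of this kind is sketched below. The geometry is the standard rigid-beam spine model; the muscle attachment point (two-thirds up the spine), the muscle angle (12° to the spine), the upper-body weight fraction (65%) and the disk area (10 cm²) are all assumed model parameters, not values given in the text:

```python
import math

g = 9.8
body_mass = 75.0                   # kg, as in Exercise 3
W_upper = 0.65 * body_mass * g     # upper body weight, N (assumed 65% fraction)
theta = math.radians(60)           # bending angle to the vertical
phi = math.radians(12)             # assumed angle of the back muscles to the spine
disk_area = 10e-4                  # assumed lumbosacral disk area, m^2

# Torques about the base of the spine (the spine length L cancels):
# M * sin(phi) * (2/3) * L = W_upper * (L/2) * sin(theta)
M = W_upper * 0.5 * math.sin(theta) / (math.sin(phi) * 2 / 3)

# Compressive force on the disk: muscle pull along the spine plus the
# component of the upper-body weight along the spine.
F_disk = M * math.cos(phi) + W_upper * math.cos(theta)
print(f"muscle force M ~ {M:.0f} N")
print(f"disk pressure ~ {F_disk / disk_area:.1e} Pa")
```

With these assumptions the disk pressure comes out around 2 · 10^6 Pa, an order of magnitude below the rupture pressure, and it rises further once a load is lifted.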


1.3 Walking and running

For a human standing still, the body weight is balanced by the reaction force G of the ground, which points in the direction opposite to the force associated with the weight, W = −G. When walking or running, an additional force comes into play,


Figure 1.5: (top) The average pressure on disks of the spine for different lifting situations. (bottom) The pressure as a function of time for the two extreme cases.

material pair                  µst       µsl
steel on steel                 0.15      0.10 - 0.05
steel on ice                   0.027     0.014
leather on metal               0.6       0.4
oak on oak                     0.58      0.48
blocked tire on dry street     -         0.8
blocked tire on wet street     -         0.5
blocked tire on ice            -         0.05


Table 1.1: Coefficients µst and µsl for static and sliding friction, respectively, for some material pairs.

the frictional force associated with the ground F, which is directed horizontally, either in the direction of motion or opposite to it. The magnitude of the friction force is found to be a constant fraction of the normal force N

    |F| = µst |N|    (1.5)

where µst is the coefficient of static friction, which is low for slippery surfaces and high for 'firm' ground. For a horizontal surface the normal force is equal to the weight (N = W), so that

    |F| = µst |W|    (1.6)

The coefficient of sliding friction µsl is defined equivalently. Table 1.1 gives some characteristic values. It is interesting that the coefficient for sliding friction is always smaller than that for static friction. This explains, for example, why it is usually difficult to avoid a fall on a slope, once your boots have lost grip.

Exercise 4 Using a wooden board, sheet metal, a ruler and a weight, measure the static and sliding coefficients of friction for your shoe or boot on wood and on the metal. Give a rough estimate of the experimental uncertainty.

The reaction force G and the friction force F combine to give a net force, which either stops or propels our strides, see Figure 1.6. The net force points more upward, the smaller the friction force is. Thus, on slippery surfaces it pays off to make short strides, so that the momentum of the foot is directed downward rather than forward.

Figure 1.7 illustrates that walking legs are like two pendulums, which swing back and forth, and that most of the energy is used to move the mass associated with the legs forward and backward, rather than for lifting the feet off the ground. Since the least energy is required when a pendulum swings at its eigen-frequency, it is advantageous to walk steadily and move the legs at the eigen-frequency of our motoric system. Speed can then be controlled by varying the length of the strides. When running, the number of strides per second has to be increased and more energy is needed. In addition, we lean forward and launch the body into brief leaps.
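The pendulum picture suggests a natural step tempo. A minimal sketch follows, treating the swinging leg as a uniform rod pivoting at the hip (a physical pendulum) with a leg length of 0.9 m; both the rod model and the length are illustrative assumptions, not values from the text:

```python
import math

# Swinging leg modelled as a uniform rod of length L pivoting at the hip.
g = 9.8
L = 0.9   # m, assumed adult leg length

# Physical pendulum: omega^2 = m*g*d / I, with I = m*L^2/3 and d = L/2,
# so omega = sqrt(3*g / (2*L)), independent of the leg's mass.
omega = math.sqrt(3 * g / (2 * L))
f = omega / (2 * math.pi)   # eigen-frequency of the free swing
print(f"natural swing frequency ~ {f:.2f} Hz (period ~ {1 / f:.1f} s)")
```

This gives a free-swing period of about 1.6 s, i.e. a comfortable, energy-efficient walking cadence close to this tempo; speed is then set by stride length, as noted above.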


Figure 1.6: Forces acting on the foot when walking.

Figure 1.7: Action of the leg muscles in walking [McCormick & Elliot 2001].



Figure 1.8: Action of the leg muscles in running [McCormick & Elliot 2001].

Still, the centre-of-mass of our body barely moves in the vertical, and thus little energy is consumed to account for the slight changes in potential energy associated with such movements, see Fig. 1.8.


1.4 Bone: A good material choice?

The theory of evolution would suggest that our natural bone material is optimized for its purpose. The question arises whether this can be supported with physics arguments. The following properties may be considered important for bones:

• Strength, to sustain large forces

• Elasticity, to avoid damage and thus immobility, which to a prehistoric human being was equivalent to death

• Low weight, to reduce the energy, and thus the food, required to carry the bones around

• Self-repair, to counter wear, achieve longevity, and regain mobility following damage

It is instructive to compare the properties of bone with those of other common materials. This shows that bones are by far not the strongest materials available. The compressive breaking stress of trabecular bone is 2.2 N/mm², which is two orders of magnitude lower than that of granite (145 N/mm²) and steel (552 N/mm²). However, as demonstrated earlier, our bones are strong enough to sustain pressures of tens of atmospheres. Elasticity may be characterized using Young's modulus

    Y = (F/A) · (L/∆L)    (1.7)

where F is the force pulling a cylinder, A the cross-section area of the cylinder, and L is its length. An elastic material extends a fair distance ∆L for a given force F,


whereas an inelastic material does not. Thus a low modulus corresponds to good elasticity. Trabecular bone has a Young's modulus of 0.76 N/mm², which is much lower than that of other strong biological materials such as oak (110 N/mm²), but not as low as the modulus of rubber (0.01 N/mm²).

It is not immediately clear why elasticity should be so important for bones. This may be illustrated using a car crashing into, say, a tree at about 50 km/h. In this case, and without air bag and seat belt, the driver's head impacts on the dashboard with a velocity of about 15 m/s and 5 mm of skin are the only protection for the skull bone. The force to be sustained by the skull bone can be estimated from the deceleration time

    ∆t = ∆x / v = (5 · 10^-3 m) / (15 m/s) ≈ 0.3 · 10^-3 s    (1.8)

The force F to be sustained in this case is therefore of the order of

    F = ∆p / ∆t = (4 kg · 15 m/s) / (0.3 · 10^-3 s) = 200,000 N    (1.9)

where it has been assumed that the mass of the head is about 4 kg. The impacting force is equivalent to a weight of 20 tons! The elasticity of bone reduces this force somewhat by increasing the deceleration time ∆t. However, it can be estimated with Eqs. 1.8 and 1.9 that the skull bone would have to give by almost one metre to reduce the impacting force to below the equivalent of 100 kg. Since this is not possible without lethal damage, the need for other crunch zones, as provided by the body of the car or an air bag, is emphasized.

With regard to 'low weight' compact bone actually performs poorly. The density of compact bone (1.9 g/cm³) is larger than that of water, compared to only 0.1 g/cm³ for balsa wood, another biological material which might have been used in its place but would lack sufficient strength. The large density of compact bone very much explains the prevalence of structured, trabecular bone in the body.
‘Trabecular’ (Latin for ‘comprised of beams’) refers to the fact that most bone has a ‘cathedral-like’ structure, shown in Figure 1.9, which, like a cathedral, gives great strength with a minimum of material and weight. It is interesting to note, in an age where materials science aims for so-called ‘nano-structured materials’, that bone is a good example of a nano-structured material: it consists of particles of bone mineral (Ca10(PO4)6(OH)2) with dimensions of the order of about 10 nm, which are extremely hard and are embedded in elastic polymer tubes of collagen, which are typically about 200 nm in size. The tubes combine to form trabecular cells, with diameters of about half a millimetre, which give bone its ‘spongy’ appearance. During our lifetime the structure of bone changes to adapt to changing requirements (Wolff's law). The elasticity required for babies and toddlers is reflected by large cells, whereas the denser bone material of adults gives more strength at the expense of elasticity. This is shown in Fig. 1.10.
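The head-impact estimate of Eqs. 1.8 and 1.9 is easy to reproduce numerically (keeping full precision gives 180 kN rather than the rounded 200 kN), and the same relations yield the crumple distance needed to tame the force:

```python
# Reproducing the head-impact estimate of Eqs. 1.8 and 1.9.
v = 15.0       # impact speed of the head, m/s
dx = 5e-3      # deceleration distance (skin thickness), m
m_head = 4.0   # assumed mass of the head, kg

dt = dx / v            # Eq. 1.8: deceleration time
F = m_head * v / dt    # Eq. 1.9: F = dp/dt
print(f"dt ~ {dt * 1e3:.2f} ms, F ~ {F:.0f} N")

# How far would the skull have to give to keep F below ~1000 N
# (the weight-equivalent of about 100 kg)?
F_target = 1000.0
dx_needed = m_head * v**2 / F_target   # combine dt = dx/v and F = m*v/dt
print(f"required crumple distance ~ {dx_needed:.2f} m")
```

The required crumple distance comes out as 0.9 m, the "almost one metre" quoted above, which is why the crunch zones of the car body or an air bag are indispensable.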



Figure 1.9: This micrograph shows the ‘spongy’, cathedral-like structure of trabecular bone.

Figure 1.10: Bone cross-sections at different stages of a human’s life. Baby bones (a, d) have relatively large cells giving elasticity, compared to adult bones (b,c,e), which are structured more densely to increase strength.


In summary, it may be stated that there certainly exist many materials which exceed bone with regard to either strength, elasticity, or weight. However, none of those reconciles these three properties better than trabecular bone, which in addition allows for continuous restructuring to adapt to the changing human body and offers the option of biological repair following damage.

Chapter 2

The eye and vision



Figure 2.1: Cross-section of the human eye [McCormick & Elliot 2001].

The capabilities of the human eye are remarkable, when compared to even advanced high-tech cameras:

• rapid automatic focussing

• wide angle of view, while simultaneously detailed vision at large distances

• the brain transforms images from both eyes into a three-dimensional perception

• operation over a large range of light intensities: from dark night to bright day (7 orders of magnitude!)

• some self-repair of local damage


2.1 Cornea, iris, lens and retina

Figure 2.1 shows a cross-section through the human eye. Several properties of the eye can be understood by modelling it using the concepts of geometrical optics. In this approach the combination of cornea and lens may be viewed as a converging lens, while the iris determines how much light is focussed onto the retina, which is equivalent to an image screen. It is interesting that the cornea refracts the incident light much more strongly than the eye lens, which nevertheless is the active component of the lens system and fine-tunes the focussing. The strong refractive power of the cornea results from its relatively large refractive index of nco = 1.37. Table 2.1 compares this value with those for other media.

medium      refractive index n
cornea      1.37
eye lens    1.41
vacuum      1
air         1.003
water       1.33
glass       1.5 - 1.9
diamond     2.42

Table 2.1: Refractive indices of some media.

Figure 2.2: Reflection and refraction of a light-ray at a plane glass surface [Halliday et al. 1993].

A light-ray incident on the eye passes from air with a refractive index nair = 1.003 into the watery medium of the cornea. As for any such transition between two transparent media, both reflection and refraction are observed at the interface. This is demonstrated for air and glass in Fig. 2.2. The phenomenon is described within geometrical optics by the law of reflection

    θ1′ = θ1    (2.1)

and Snell's law of refraction

    n2 sin θ2 = n1 sin θ1    (2.2)


Figure 2.3: Definition of focal length [McCormick & Elliot 2001]. (a) converging. (b) diverging.

and can be explained by a change in wave velocity, which for every medium is given by

    v = c/n    (2.3)

where c is the speed of light. When n2 > n1, it follows from Eq. 2.2 that θ2 < θ1 and the ray is refracted toward the surface normal. The relatively large change of wave velocity at the cornea surface is thus responsible for the strong refractive power of the cornea.

Exercise 5 (a) Why is it easier under water to obtain a well focussed view with the help of a face mask as compared to not wearing a mask? (b) Calculate the change in direction for a light-ray incident on the cornea at an angle of 25°.

Refraction can be exploited for focussing or de-focussing by curving the interface between two media of different refractive indices and making a lens. Lenses are either converging, e.g. convex lenses, or diverging, e.g. concave lenses. The focal length f of a lens, which defines the lens power p, is equal to the distance between the optical centre of the lens and the focal point for rays of light which are incident on the lens and parallel to its optical axis. This is illustrated in Fig. 2.3. The lens power is then given by

    p = 1/f    [p in Dioptres, f in m]    (2.4)

In reasonable approximation the combination of cornea and eye lens can be described as a thin converging lens with a power of 60 D - 70 D, depending on the lens muscles being relaxed (unaccommodated) or tightened to focus on a near object (accommodated). For a given power p the thin lens formula relates an object to the respective image as projected by the lens with

    1/f = 1/s + 1/s′    (2.5)



where s and s′ are the distances of the object and the image from the optical centre of the lens, respectively, as measured along its optical axis.

Exercise 6 The cornea has a power of about 50 Dioptres, while that of the eye lens is about 10 Dioptres, when unaccommodated. Calculate the combined focal length of cornea and unaccommodated eye lens. Is the result consistent with the physical dimensions of the human eye?

A short-sighted person can see near objects clearly but distant objects appear blurred, because either the curvature of the cornea or the lens is too large or the distance between eye lens and retina is too large. The farthest point still in focus is referred to as the 'far point'. Equivalently, a far-sighted person sees near objects blurred, with the nearest point in focus being called the 'near point'. Far-sightedness is usually caused by a flattening of the eye lens.

Exercise 7 (a) Suppose that a person is short-sighted with a far point at 0.2 m. The power of accommodation of that person is 4 Dioptres. Calculate the power of the spectacle lenses that are required to see distant objects in focus. What is the shape of those lenses? (b) Suppose that a person is far-sighted with the near point at 1.0 m. For correct vision the near point should be at 0.25 m from the eye. Calculate the power of the spectacle lenses that are required to see an object at a distance of 0.25 m in focus. What is the shape of those lenses? (Approximate the distance between the optical centre of the eye lens and the retina with 0.02 m.)

Another common eye defect is astigmatism, which can be detected with the eye test shown in Fig. 2.4 (top). Astigmatism occurs when the focal length of the eye is different in, say, the horizontal plane than in the vertical plane. This is caused by a distortion of the cornea. A person suffering from astigmatism perceives the lines at some angle α and those at α + 180° blurred and greyish, while the lines at the other angles are perceived black and in focus.
The defect can be corrected with a cylindrical spectacle lens, which de-focusses in one plane only. The lens has to be positioned so that the de-focussing coincides with the plane defined by α and α + 180°.

Exercise 8 A person has astigmatism and wears glasses to correct this eye defect. The person removes the glasses and holds them at some distance looking through one of the lenses, while at the same time rotating the glasses. What happens to the image seen through that lens?
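The use of Eqs. 2.2 and 2.4 can be sketched numerically. The refraction part follows Exercise 5(b); the lens part uses the 50 D and 10 D powers quoted in Exercise 6 together with the standard approximation that the powers of thin lenses in contact add:

```python
import math

# Snell's law (Eq. 2.2): a ray entering the cornea at 25 degrees.
n_air, n_cornea = 1.003, 1.37
theta1 = math.radians(25)
theta2 = math.asin(n_air * math.sin(theta1) / n_cornea)
print(f"refracted angle: {math.degrees(theta2):.1f} deg")

# Thin lenses in contact: powers add (standard approximation), so
# cornea (~50 D) plus unaccommodated eye lens (~10 D) give ~60 D.
p_total = 50 + 10            # Dioptres
f_total = 1 / p_total        # Eq. 2.4: focal length in metres
print(f"combined focal length: {f_total * 100:.1f} cm")
```

The refracted ray emerges at about 18°, a change in direction of about 7°, and the combined focal length of roughly 1.7 cm is indeed consistent with the roughly 2 cm depth of the eyeball.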


2.2 Colour perception

Our colour perception is complex, but can be characterized employing three different qualities:


Figure 2.4: Eye test to detect astigmatism (top). Cylindrical lens correcting astigmatism between the vertical and horizontal planes (bottom) [McCormick & Elliot 2001].



Figure 2.5: The colour circle showing all hues. It incorporates all spectral colours and the colour triangle [McCormick & Elliot 2001]. The extraspectral hues result from combining spectral hues of short and long wavelengths. The limits of this are indicated by the solid and dashed lines, respectively. The colour triangle identifies the three primary colours blue, green and red.

• Hue is what we colloquially call colour. It comprises all the light colours associated with the electromagnetic spectrum, which can be identified uniquely through their wavelength λ. In addition, it includes the extraspectral hues of magenta, which result from mixing violet and red. The various hues are compiled in the colour circle in Fig. 2.5.

• Brightness is a qualitative measure of the light intensity. It is related to the power reaching the retina per unit area.

• Saturation is the purity of the colour. The hues shown in Fig. 2.5 are saturated. When they are mixed with a neutral colour, such as white, black or grey, they become less saturated.

The eye lens is opaque to wavelengths below 380 nm and thus limits our vision at short wavelengths. The long-wavelength limit is set by the sensitivity of the retina, which extends to 760 nm. It is interesting to note that over this (narrow) range of wavelengths atmospheric absorption is minimal, so that the eye takes full advantage of the naturally accessible part of the electromagnetic spectrum.

Colour perception is made possible by the existence of three different types of light-sensitive cells (cones) on the retina, which respond differently to the three primary hues of blue, green and red. In addition to the cones, a second type of receptor cell exists on the retina (rods), which is only sensitive to light intensity, but not to colour. The colour sensitivity of the cones is due to three different


photosensitive pigments. Each one absorbs light over a range of wavelengths, with peaks at 445 nm, 535 nm, and 575 nm, respectively. The absorption of light produces a chemical change in the pigment molecule, resulting in a nerve pulse, which is transmitted to the brain for processing. When all three types of cones receive light of similar intensity and colour corresponding to their characteristic hue (blue, green, red), the perception is that of white light. The three combinations of two of the primary colours produce the secondary colours:

• green + blue = cyan

• blue + red = magenta

• red + green = yellow
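These additive mixing rules can be made concrete with a toy sketch that represents each hue by an (R, G, B) intensity triple; this is a simplification, since real cone responses overlap in wavelength:

```python
# Toy additive colour mixing: each primary is one channel at full intensity.
primaries = {"red": (1, 0, 0), "green": (0, 1, 0), "blue": (0, 0, 1)}

def mix(a, b):
    # Additive mixing: channel intensities combine (clipped at full intensity).
    return tuple(min(1, x + y) for x, y in zip(a, b))

names = {(0, 1, 1): "cyan", (1, 0, 1): "magenta", (1, 1, 0): "yellow",
         (1, 1, 1): "white"}

print(names[mix(primaries["green"], primaries["blue"])])   # cyan
print(names[mix(primaries["blue"], primaries["red"])])     # magenta
print(names[mix(primaries["red"], primaries["green"])])    # yellow
# All three primaries together give the perception of white light:
print(names[mix(mix(primaries["red"], primaries["green"]), primaries["blue"])])
```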


2.3 Exploring vision with simple experiments

It is instructive to explore the eye with some simple experiments:

Exercise 9 Yellow spot and blood vessels. In front of an intense light source look at a pin hole in a piece of paper at a distance of about 10 cm. What can you see?

Exercise 10 Blind spot. Try locating the blind spot in your field of view. Estimate this angle.

Exercise 11 Upward and downward rays on strong light sources. When looking at a strong light source, maybe a street light at night, it often appears that light-rays extend downwards and/or upwards from it, but not to either side. Why?

Exercise 12 Curtains and Fechner's law. During the day, why do net curtains obscure the view? Why does this change at night?

Chapter 3

Hearing




Figure 3.1: The sensitivity of the human ear as a function of sound frequency. The threshold sensitivity (solid curve) is indicated. Dashed curves show the average threshold level, the average sound level at which discomfort was felt, and that at which pain was felt, for a group of test persons, respectively. The right ordinate axis gives the sound intensity in units of Watt per metre-squared on a logarithmic scale, while the left ordinate axis uses the specific units of decibel.

Sound is a longitudinal pressure wave, which is received by our ears and transformed into electrical signals, which can be processed by the brain. The ear thus acts as a ‘transducer’.


3.1 Hearing sensitivity

Our sense of hearing is astonishing. This is evident from Fig. 3.1, which demonstrates that the ear is sensitive over a frequency range of 20 − 20,000 Hz. This corresponds to a change by a factor of 1000 or, in musical terminology, is equivalent to about 10 octaves. For comparison, it may be noted that the frequency range visible to our eyes spans only 1 octave, ranging from 4 − 9 · 10^14 Hz, which is equivalent to a change of little more than a factor of 2. The large range of our hearing appears even more remarkable when remembering that musicians can tune their instruments to better than 1 Hz. Figure 3.1 also shows that the sensitivity of the ear covers a range of 12 orders of magnitude. Sound intensity is power per area and is therefore measured in units of Watt per metre-squared [W/m²]. Honouring Alexander Graham Bell, the quantity 'intensity level' has been introduced with the units 'decibel' and 'bel' (1 dB = 0.1 B). The intensity level is defined through

    intensity level in dB = 10 · log10(I/I0)    (3.1)

where I0 = 10^-16 W/cm², which is taken as the sensitivity threshold (compare with Fig. 3.1). Some values for sound intensity and the corresponding intensity level have been compiled in Table 3.1.

sound          power        intensity level (dB)
threshold      ≤ 1 nW       0
conversation   7 µW         50
pain           > 1000 W     130

Table 3.1: Typical values for sound intensity I and the corresponding intensity level in dB.

Figure 3.2: The relation of intensity level and loudness as a function of frequency.

Exercise 13 At the nearest houses the traffic on a motorway produces noise with an intensity of 10^-6 W/m². An expansion of the motorway with additional lanes is expected to double the traffic. Calculate the intensity level in dB before and after the motorway expansion.

In the context of measuring and legislating environmental noise and its impact on humans, the quoting of intensity levels is not satisfactory, since the perceived "loudness" of a noise is frequency dependent and a subjective observation. Loudness perception decreases for low and high frequencies. In an attempt to quantify "loudness" the unit "phon" has been introduced. At a frequency of 1000 Hz, 1 phon = 1 dB. The loudness of a sound at a different frequency is determined by comparing it with a sound at 1000 Hz until the loudness perception is the same. In practice, frequency-dependent weights are used to simplify the measurement of loudness. The change of loudness in units of "phon" with frequency is illustrated in Figure 3.2.

Its astonishing sensitivity aside, it may also be noted that hearing is a 360°, night-and-day sense and thus has been very useful in the evolution of the human species. The question arises how the ear achieves such extraordinary performance.
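The defining relation Eq. 3.1 can be applied directly to a situation like that of Exercise 13. Note that doubling any intensity always adds 10 · log10(2) ≈ 3 dB, whatever the starting level — a sketch:

```python
import math

# Intensity level from Eq. 3.1. Note I0 = 10^-16 W/cm^2 = 10^-12 W/m^2.
I0 = 1e-12   # reference intensity, W/m^2

def level_dB(I):
    return 10 * math.log10(I / I0)

I = 1e-6     # W/m^2, the motorway noise intensity of Exercise 13
print(f"before: {level_dB(I):.1f} dB")
print(f"after doubling: {level_dB(2 * I):.1f} dB")   # +10*log10(2) ~ +3 dB
```

The logarithmic scale is what makes the ear's 12 orders of magnitude manageable: each factor of 10 in intensity is just another 10 dB.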


Figure 3.3: Cut-through view of the human ear.


3.2 Structure of the ear

Figure 3.3 shows a cut-through view of the human ear. The important parts of the ear are the auricle (pinna), the outer ear, the middle ear and the inner ear. Besides this, the brain plays a very important part in hearing by processing the 'digital' information received from the inner ear. The human auricle does not have any role in hearing and may be considered an evolutionary relict¹. However, cupping your hands behind the ears is equivalent to a 6-8 dB directional gain in sensitivity.

The outer ear comprises mainly a 2.5 cm long tube, the ear canal. Its main purpose is to transport the sound to the middle ear, which is thus located well inside the body and protected. The length of the ear canal is optimized for the best transport of sound waves with a frequency of 3300 Hz. This is the frequency where human hearing is most sensitive.

The middle ear comprises the eardrum and the three ossicles, referred to as hammer, anvil, and stirrup. The middle ear converts the sound waves into mechanical vibration with an almost 'static' gain of a factor of ×20. By relaxing muscles in the middle ear, this gain factor can be reduced to protect middle and inner ear from brief excessive noise. The mechanical vibrations are then passed on to the inner ear, which houses the cochlea spiral, the complex hearing organ, which in many regards is still a mystery. In the cochlea the 'analogue' information contained in the mechanical vibrations of the ossicles is 'dynamically' amplified with large gain and 'digitized' using tiny hair bundles. The 'digital' information is then converted into electrical pulses, which are transported to the brain for processing by the auditory nerves. The hearing process thus involves the following steps:

¹ In some animals the auricle actually still aids the hearing sense.



• (faint) sound

• longitudinal pressure wave

• mechanical motion and amplification

• 'dynamic' amplification

• digitization

• electric pulses

• brain processing and perception

The steps shown in italics are still poorly understood.


3.3 Outer ear and middle ear

The ear canal may be likened to a cylindrical pipe of 2.5 cm length, which is open at one end and closed at the other by the 0.1 mm thin (paper-thin) eardrum. The fact that the ear canal transports sound best in the most sensitive frequency region of the ear, and thus to some extent suppresses (background) sounds at other frequencies, can be explained with wave physics. For clarity the longitudinal sound wave may be represented by a transversal wave, as shown in Fig. 3.4. For a singly-closed tube of length L it holds that

    ν = c / (4L)    (3.2)

where ν is the sound frequency and c = 330 m/s is the velocity of sound in air. It therefore follows, using the dimensions of the ear canal, that the resonance frequency νr of the ear canal, for which no destructive interference occurs, is given by

    νr = (330 m/s) / (4 · 2.5 · 10^-2 m) = 3300 Hz    (3.3)

The ear canal is thus tuned to sounds like the cracking of a twig or high-pitched speech, such as that of an infant, both of which must have had a particular importance to the survival and evolution of the human species.

Figure 3.5 shows a cross-sectional view of the outer and middle ear illustrating the arrangement of hammer, anvil and stirrup, which form a lever system. A schematic of this lever system is displayed in Fig. 3.6. The physical motion of the eardrum is extremely small. At the hearing threshold the eardrum moves only about 1 Å. This motion is passed on by the ossicles to another membrane, the oval window. The lever action of the ossicles 'passively' amplifies the mechanical motion. The amplification factor of about ×15 can be derived using the fact that the forces on both membranes are balanced and estimating the respective areas of eardrum and oval window, as shown in Fig. 3.6. The lever arrangement itself


Figure 3.4: Illustration of the relation between tube length and resonance wavelength λ for an open and a singly-closed tube, respectively. For clarity the longitudinal sound wave has been represented as a transverse wave.



Figure 3.5: Cross-sectional view of outer and middle ear.

Figure 3.6: Transformation of air pressure changes into mechanical motion in the middle ear. This gives a mechanical amplification of a factor of 15.


Figure 3.7: Illustration of the cochlea. (a) The spiral duct of the cochlea is divided by the flexible basilar membrane and is filled with a fluid. The hair cells, which sit on the membrane, connect directly to the auditory nerve. (b) When sound enters the cochlea (here shown uncoiled) it agitates the fluid, causing a ripple to travel along the basilar membrane. This movement (which is grossly exaggerated here) is detected by sensory hair cells, supported on the membrane.

adds another factor of ×1.3 to the gain, so that the total amplification of the middle ear amounts to a factor of η ≈ 20. Muscle action can briefly reduce η to protect the inner ear from excessive noise. Another important part of the middle ear is the Eustachian tube, which equilibrates the pressure on either side of the eardrum. This is required so that the vibration of the eardrum is not hindered by pressure differences. During a cold, and with sudden changes in external pressure (for example on board an airplane), we experience the effect of different pressures on either side of the eardrum: our hearing becomes less sensitive and distorted, and physical pain may be experienced due to the permanent strain on eardrum and oval window.
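The two passive stages just described can be checked with a short calculation. The sketch below uses the canal length and the lever ratio from the text; the membrane areas are rough illustrative estimates of the kind read off Fig. 3.6, not measured anatomical data:

```python
# Passive amplification in the outer and middle ear (rough estimates).

C_AIR = 330.0  # speed of sound in air, m/s

def closed_tube_resonance(length_m):
    """Fundamental resonance of a tube open at one end and closed at the
    other: nu = c / (4 * length), Eq. (3.2)."""
    return C_AIR / (4.0 * length_m)

def middle_ear_gain(area_eardrum_mm2=50.0, area_oval_window_mm2=3.2,
                    lever_ratio=1.3):
    """Pressure gain: equal forces on eardrum and oval window mean the
    pressure scales with the area ratio, boosted by the ossicular lever.
    The two default areas are illustrative assumptions."""
    return (area_eardrum_mm2 / area_oval_window_mm2) * lever_ratio

print(f"ear canal resonance: {closed_tube_resonance(2.5e-2):.0f} Hz")  # 3300 Hz
print(f"total middle-ear gain: {middle_ear_gain():.0f}")               # ~20
```

With these numbers the area ratio alone gives the factor of about 15 quoted in the text, and the lever ratio brings the total to η ≈ 20.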


Inner ear

On the other side of the oval window is the cochlea. Many aspects of the cochlea are still being researched; in general, however, its functioning is understood. A schematic illustration of the cochlea is shown in Fig. 3.7. It is a long, conical tube (i.e. the tube diameter decreases along its length) which is coiled up towards the apex. The tube contains a fluid and is separated into three chambers, referred to as



Figure 3.8: A cross-sectional view of the cochlea tube.

tympanic chamber, cochlear duct, and vestibular chamber, as shown in Fig. 3.8. Between the tympanic chamber and the cochlear duct is the basilar membrane. The stiffness of the basilar membrane is largest near the oval window, and it becomes more elastic towards the other end of the cochlear tube. This change in elasticity is quite dramatic, covering two orders of magnitude. The mechanical motion of the oval window is transferred to the cochlear fluid. The fluid motion then causes ‘ripples’ of the basilar membrane, as illustrated in Fig. 3.7b. These ‘ripples’ pass along the membrane and cease at a certain distance from the oval window as a consequence of a complex interplay of fluid drag and the elasticity of the basilar membrane. Importantly, this distance is correlated with the frequency of the original sound. Thus the received sound frequency is converted to a stimulus along the basilar membrane with a well-defined length. This is equivalent to a ‘digitization’ of the received frequency spectrum. In principle this ‘digital’ information can then be passed on to the brain for processing. ‘Ripples’ associated with high sound frequencies cease close to the oval window, whereas those corresponding to low sound frequencies travel a long distance along the basilar membrane. Several questions arise:
• How is the mechanical signal, or rather the distance travelled by this signal along the basilar membrane, converted into an electrical nerve pulse?
• How can the extreme sensitivity of human hearing be achieved with such an arrangement?
• How well can neighbouring frequencies be distinguished?
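The frequency-to-place ‘digitization’ described above can be made concrete with a small sketch. It uses the Greenwood place-frequency function, a standard empirical fit from the literature; the function and its parameters are an outside assumption, not given in these notes:

```python
import math

# Greenwood place-frequency map of the basilar membrane:
# f(x) = A * (10**(a*x) - k), with x the fractional position measured
# from the apex (x = 0) to the base at the oval window (x = 1).
# A, a, k are standard human-fit values (assumed, not from these notes).
A, a, k = 165.4, 2.1, 0.88

def place_from_frequency(f_hz, bm_length_mm=35.0):
    """Distance from the apex (in mm) at which a ripple of frequency
    f_hz peaks; high frequencies peak far from the apex, i.e. close
    to the oval window, as described in the text."""
    x = math.log10(f_hz / A + k) / a  # fractional position from the apex
    return x * bm_length_mm

for f in (200, 1000, 3300, 10000):
    print(f"{f:>6} Hz peaks {place_from_frequency(f):5.1f} mm from the apex")
```

The logarithmic form of the map means equal distances along the membrane correspond to roughly equal frequency ratios, which is one reason pitch perception is approximately logarithmic.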


Figure 3.9: The organ of Corti.

It is clear from the outset that the outstanding performance of the human ear cannot be achieved with a passive system, such as the lever arrangement of the ossicles in the middle ear. The interface between the basilar membrane and the auditory nerve which connects to the brain is the organ of Corti. An illustration is shown in Fig. 3.9. The organ of Corti consists of hair-bundle cells which are located above the basilar membrane and therefore move along with the passing ripples. Importantly, the hairs are not stationary but are permanently in spontaneous oscillatory motion. The frequencies of this spontaneous motion are random, but lie very near a certain eigenfrequency which is characteristic of the cell location. This allows for ‘dynamic’ amplification. The external stimulus of the ripple, when of the correct frequency, aligns the spontaneous oscillatory motion of all hairs at the characteristic eigenfrequency. Since the frequency of the hair motion is already very near this eigenfrequency, little energy is required to achieve this. Thus a small stimulus achieves a large gain. This may explain the extreme sensitivity of our hearing. The frequency spectrum (the Fourier transform) of the hair oscillations sharpens drastically once the hairs go from the permanent near-frequency motion to tuned motion stimulated by the ripple. This is similar to the non-linear behaviour of a so-called Hopf resonator. The non-linear response of a hair bundle is illustrated in Fig. 3.10 for a hair bundle from a frog ear. The displacement of the hairs by their motion opens ion channels, which for example allows K+ ions to move in, see Fig. 3.11. This creates a potential difference and results in an electric nerve pulse through the auditory nerve to the brain.
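The characteristic cube-root compression of a critical Hopf resonator, which underlies the large gain for weak stimuli seen in Fig. 3.10, can be illustrated with a toy calculation (a sketch of the scaling law only, not a model of real hair-bundle dynamics):

```python
# Toy illustration of the compressive Hopf response: exactly at the
# bifurcation the oscillation amplitude grows only as the cube root of
# the driving force, so the gain (response / force) is largest for the
# weakest stimuli.

def hopf_response(force, scale=1.0):
    """Amplitude ~ scale * F**(1/3) at the bifurcation point.
    'scale' is an arbitrary illustrative prefactor."""
    return scale * force ** (1.0 / 3.0)

for f in (1e-3, 1e-2, 1e-1, 1.0):
    r = hopf_response(f)
    print(f"force {f:7.3f} -> response {r:6.3f}, gain {r / f:8.1f}")
```

A thousand-fold reduction of the stimulus force reduces the response only ten-fold, so the gain rises a hundred-fold: precisely the behaviour needed to hear faint sounds without being overwhelmed by loud ones.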



Figure 3.10: (a) When a frog hair bundle is shaken using a micro-needle, its response is characteristic of a noisy Hopf oscillator (top). The applied force (shown in addition for all other spectra), which is related to the amplitude of the displacement of the needle, progressively increases down the figure. When the bundle is shaken gently, the Hopf oscillator’s gain (its response divided by the input stimulus) is large. (b) The Fourier transform of the bundle displacement has a peak at the stimulus frequency. The height of this peak grows as the cube root of the applied force. Thus a small stimulus force is amplified with larger gain than a stronger one.


Figure 3.11: (a) Microscopic image of a hair bundle cell. The bundle in this hair cell from a turtle is a pyramidal structure composed of stereocilia, which are connected by tip links. (b) The coordinated motion of the hair bundle opens ion channels (the gates are indicated by two red dots) and allows for the passage of K+ ions.

Chapter 4

Alpha-decay













Exercise 14 Depleted uranium (∼ 99.8 % 238U, ∼ 0.199 % 235U, 0.001 % 234U) is a by-product of the enrichment process for reactor fuel and therefore relatively cheap. It has the same chemical and materials properties as natural uranium (∼ 99.275 % 238U, ∼ 0.720 % 235U, 0.005 % 234U). Uranium has 1.7 times the density of lead and it is pyrophoric, i.e. fine uranium particles ignite in air, so that it is well suited for application in high-impact ammunition. It is also often used as ballast in aircraft. The three long-lived uranium isotopes decay predominantly by alpha-decay.
(a) Decide whether an area contaminated with depleted uranium is more radioactive, i.e. whether the overall activity is larger, than when it is contaminated with the same amount of natural uranium. Justify your answer.
(b) Can you establish a trend relating the activity −dN/dt and the mass number A of the isotopic series of uranium nuclides which decay by α-decay? Is this a general trend? Support your evidence for uranium with two other examples, i.e. two other chemical elements.
(c) Inhaling fine uranium dust can cause toxic reactions. But how does the radiological impact of uranium dust, i.e. the emission of energetic alpha particles inside respiratory organs, compare with that of radium dust?
[The Table of Nuclides can be found at “http://www2.bnl.gov/ton/”. A Periodic Table is available at “http://pearl1.lanl.gov/periodic/default.htm”.]
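As a hint for part (a), the comparison can be set up numerically from A = λN with λ = ln 2 / T½. The sketch below uses the standard half-lives of the three long-lived isotopes, which are assumed here from the Table of Nuclides rather than stated in the exercise:

```python
import math

NA = 6.022e23      # Avogadro's number, atoms per mole
YEAR = 3.156e7     # seconds per year

# Half-lives in years (standard values, assumed from the Table of Nuclides)
HALF_LIFE_A = {"U-234": 2.455e5, "U-235": 7.04e8, "U-238": 4.468e9}
MASS_NUMBER = {"U-234": 234, "U-235": 235, "U-238": 238}

def specific_activity(composition):
    """Activity in Bq per gram of a uranium mixture given as
    {isotope: mass fraction}, using A = lambda * N for each isotope."""
    total = 0.0
    for iso, frac in composition.items():
        lam = math.log(2) / (HALF_LIFE_A[iso] * YEAR)  # decay constant, 1/s
        n = frac * NA / MASS_NUMBER[iso]               # atoms per gram
        total += lam * n
    return total

depleted = {"U-238": 0.998,   "U-235": 0.00199, "U-234": 0.00001}
natural  = {"U-238": 0.99275, "U-235": 0.00720, "U-234": 0.00005}
print(f"depleted uranium: {specific_activity(depleted):.0f} Bq/g")
print(f"natural  uranium: {specific_activity(natural):.0f} Bq/g")
```

Natural uranium turns out to be the more active of the two, largely because of its greater share of the short-lived 234U, despite that isotope's tiny mass fraction.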


Chapter 5

Beta-decay



The story of a carbon atom

Radioactive β-decay of 14 C nuclei forms the basis of the powerful technique of C-14 dating. In the story reprinted below [Levi 1975], Primo Levi beautifully narrates how carbon atoms are constantly incorporated into plants as long as the plant is alive, thus replenishing an equilibrium ratio of the stable 12 C and the radioactive 14 C carbon isotopes. The death of a plant then stops this process and starts the C-14 clock. Born in Turin in 1919, Primo Levi graduated in chemistry shortly before the Fascist race laws prohibited Jews like himself from taking university degrees. In 1943 he joined a partisan group in northern Italy, was arrested and deported to Auschwitz. His expertise as a chemist saved him from the gas chambers, however. He was set to work in a factory, and liberated in 1945. His memoir “The Periodic Table” takes its title from the table of elements, arranged according to their atomic mass, which was originally devised by Dmitri Mendeleyev in 1869. Levi links each episode of his life to a certain element. But in the book’s final section, printed below, he sets himself to imagine the life of a carbon atom. This was, he says, his first ‘literary dream’, and came to him in Auschwitz. “Our character lies for hundreds of millions of years, bound to three atoms of oxygen and one of calcium, in the form of limestone: it already has a very long cosmic history behind it, but we shall ignore it. For it time does not exist, or exists only in the form of sluggish variations in temperature, daily or seasonal, if, for the good fortune of this tale, its position is not too far from the earth’s surface. Its existence, whose monotony cannot be thought of without horror, is a pitiless alternation of hots and colds, that is, of oscillations (always of equal frequency) a trifle more restricted and a trifle more ample: an imprisonment, for this potentially living personage, worthy of the Catholic Hell. 
To it, until this moment, the present tense is suited, which is that of description, rather than the past tense, which is that of narration - it is congealed in an eternal present, barely scratched by the moderate quivers of thermal agitation. But, precisely for the good fortune of the narrator, whose story could otherwise have come to an end, the limestone rock ledge of which the atom forms a part lies on the surface. It lies within reach of man and his pickax (all honor to the pickax and its modern equivalents; they are still the most important intermediaries in the millennial dialogue between the elements and man): at any moment - which I, the narrator, decide out of pure caprice to be the year 1840 - a blow of the pickax detached it and sent it on its way to the lime kiln, plunging it into the world of things that change. It was roasted until it separated from the calcium, which remained so to speak with its feet on the ground and went to meet a less brilliant



destiny, which we shall not narrate. Still firmly clinging to two of its three former oxygen companions, it issued from the chimney and took the path of the air. Its story, which once was immobile, now turned tumultuous. It was caught by the wind, flung down on the earth, lifted ten kilometers high. It was breathed in by a falcon, descending into its precipitous lungs, but did not penetrate its rich blood and was expelled. It dissolved three times in the water of the sea, once in the water of a cascading torrent, and again was expelled. It traveled with the wind, for eight years: now high, now low, on the sea and among the clouds, over forests, deserts, and limitless expanses of ice; then it stumbled into capture and the organic adventure. Carbon, in fact, is a singular element: it is the only element that can bind itself in long stable chains without a great expense of energy, and for life on earth (the only one we know so far) precisely long chains are required. Therefore carbon is the key element of living substance: but its promotion, its entry into the living world, is not easy and must follow an obligatory, intricate path, which has been clarified (and not yet definitively) only in recent years. If the elaboration of carbon were not a common daily occurrence, on the scale of billions of tons a week, wherever the green of a leaf appears, it would by full right deserve to be called a miracle. The atom we are speaking of, accompanied by its two satellites, which maintained it in a gaseous state, was therefore borne by the wind along a row of vines in the year 1848. It had the good fortune to brush against a leaf, penetrate it, and be nailed there by a ray of the sun. 
If my language here becomes imprecise and allusive, it is not only because of my ignorance: this decisive event, this instantaneous work a tre - of the carbon dioxide, the light, and the vegetal greenery - has not yet been described in definitive terms, and perhaps it will not be for a long time to come, so different is it from the other organic chemistry which is the cumbersome, slow, and ponderous work of man: and yet this refined, minute, and quick-witted chemistry was invented two or three billion years ago by our silent sisters, the plants, which do not experiment and do not discuss, and whose temperature is identical to that of the environment in which they live. If to comprehend is the same as forming an image, we will never form an image of a happening whose scale is a millionth of a millimeter, whose rhythm is a millionth of a second and whose protagonists are in their essence invisible. Every verbal description must be inadequate, and one will be as good as the next, so let us settle for the following description. Our atom of carbon enters the leaf, colliding with other innumerable (but here useless) molecules of nitrogen and oxygen. It adheres to a large and complicated molecule that activates it, and simultaneously receives the decisive message from the sky, in the flashing form of a packet of solar light: in an instant, like an insect caught by a spider, it is separated from its oxygen, combined with hydrogen and (one thinks) phosphorus, and finally inserted in a chain, whether long or short does


not matter, but it is the chain of life. All this happens swiftly, in silence, at the temperature and pressure of the atmosphere, and gratis: dear colleagues, when we learn to do likewise we will be sicut Deus [like God], and we will have also solved the problem of hunger in the world. But there is more and worse, to our shame and that of our art. Carbon dioxide, that is, the aerial form of the carbon of which we have up till now spoken: this gas which constitutes the raw material of life, the permanent store upon which all that grows draws, and the ultimate destiny of all flesh, is not one of the principal components of air but rather a ridiculous remnant, an ’impurity’, thirty times less abundant than argon, which nobody even notices. The air contains 0.03 percent; if Italy was air, the only Italians fit to build life would be, for example, the fifteen thousand inhabitants of Milazzo in the province of Messina. This, on the human scale, is ironic acrobatics, a juggler’s trick, an incomprehensible display of omnipotence-arrogance, since from this ever renewed impurity of the air we come, we animals and we plants, and we the human species, with our four billion discordant opinions, our milleniums of history, our wars and shames, nobility and pride. In any event, our very presence on the planet becomes laughable in geometric terms: if all of humanity, about 250 million tons, were distributed in a layer of homogeneous thickness on all the emergent lands, the stature of man would not be visible to the naked eye; the thickness one would obtain would be around sixteen thousandths of a millimeter. Now our atom is inserted: it is part of a structure, in an architectural sense; it has become related and tied to five companions so identical with it that only the fiction of the story permits me to distinguish them. 
It is a beautiful ring-shaped structure, an almost regular hexagon, which however is subjected to complicated exchanges and balances with the water in which it is dissolved; because by now it is dissolved in water, indeed in the sap of the vine, and this, to remain dissolved, is both the obligation and the privilege of all substances that are destined (I was about to say ’wish’) to change. And if then anyone really wanted to find out why a ring, and why a hexagon, and why soluble in water, well, he need not worry; these are among the not many questions to which our doctrine can reply with a persuasive discourse, accessible to everyone, but out of place here. It has entered to form part of a molecule of glucose, just to speak plainly: a fate that is neither fish, flesh, nor fowl, which is intermediary, which prepares it for its first contact with the animal world but does not authorize it to take on a higher responsibility: that of becoming part of a proteic edifice. Hence it travels, at the slow pace of vegetal juices, from the leaf through the pedicel and by the shoot to the trunk, and from here descends to the almost ripe bunch of grapes. What then follows is the province of the winemakers: we are only interested in pinpointing the fact that it escaped (to our advantage, since we would not know how to put it in words) the alcoholic fermentation, and reached the wine without changing its



nature. It is the destiny of wine to be drunk, and it is the destiny of glucose to be oxidized. But it was not oxidized immediately: its drinker kept it in his liver for more than a week, well curled up and tranquil, as a reserve aliment for a sudden effort; an effort that he was forced to make the following Sunday, pursuing a bolting horse. Farewell to the hexagonal structure: in the space of a few instants the skein was unwound and became glucose again, and this was dragged by the bloodstream all the way to a minute muscle fiber in the thigh, and here brutally split into two molecules of lactic acid, the grim harbinger of fatigue: only later, some minutes after, the panting of the lungs was able to supply the oxygen necessary to quietly oxidize the latter. So a new molecule of carbon dioxide returned to the atmosphere, and a parcel of the energy that the sun had handed to the vine-shoot passed from the state of chemical energy to that of mechanical energy, and thereafter settled down in the slothful condition of heat, warming up imperceptibly the air moved by the running and the blood of the runner. ‘Such is life’, although rarely is it described in this manner: an inserting itself, a drawing off to its advantage, a parasitizing of the downward course of energy, from its noble solar form to the degraded one of low temperature heat. In this downward course, which leads to equilibrium and thus death, life draws a bend and nests in it. Our atom is again carbon dioxide, for which we apologize: this too is an obligatory passage; one can imagine and invent others, but on earth that’s the way it is. Once again the wind, which this time travels far; sails over the Apennines and the Adriatic, Greece, the Aegean, and Cyprus: we are over Lebanon, and the dance is repeated. 
The atom we are concerned with is now trapped in a structure that promises to last for a long time: it is the venerable trunk of a cedar, one of the last; it is passed again through the stages we have already described, and the glucose of which it is a part belongs, like the bead of a rosary, to a long chain of cellulose. This is no longer the hallucinatory and geological fixity of rock, this is no longer millions of years, but we can easily speak of centuries because the cedar is a tree of great longevity. It is our whim to abandon it for a year or five hundred years: let us say that after twenty years (we are in 1868) a wood worm has taken an interest in it. It has dug its tunnel between the trunk and the bark, with the obstinate and blind voracity of its race; as it drills it grows, and its tunnel grows with it. There it has swallowed and provided a setting for the subject of this story; then it has formed a pupa, and in the spring it has come out in the shape of an ugly gray moth which is now drying in the sun, confused and dazzled by the splendor of the day. Our atom is in one of the insect’s thousand eyes, contributing to the summary and crude vision with which it orients itself in space. The insect is fecundated, lays its eggs, and dies: the small cadaver lies in the undergrowth of the woods, it is emptied of its fluids, but the chitin carapace resists for a long time, almost indestructible. The snow and sun return above it without injuring it: it is buried


by the dead leaves and the loam, it has become a slough, a ’thing’, but the death of atoms, unlike ours, is never irrevocable. Here are at work the omnipresent, untiring, and invisible gravediggers of the undergrowth, the microorganisms of the humus. The carapace, with its eyes by now blind, has slowly disintegrated and the ex-drinker, ex-cedar, ex-wood worm has once again taken wing. We will let it fly three times around the world, until 1960, and in justification of so long an interval in respect to the human measure we will point out that it is, however, much shorter than the average: which, we understand, is two hundred years. Every two hundred years, every atom of carbon that is not congealed in materials by now stable (such as, precisely, limestone, or coal, or diamond, or certain plastics) enters and reenters the cycle of life, through the narrow door of photosynthesis. Do other doors exist? Yes, some syntheses created by man; they are a title of nobility for man-the-maker, but until now their quantitative importance is negligible. They are doors still much narrower than that of the vegetable greenery; knowingly or not, man has not tried until now to compete with nature on this terrain, that is, he has not striven to draw from the carbon dioxide in the air the carbon that is necessary to nourish him, clothe him, warm him, and for the hundred other more sophisticated needs of modern life. He has not done it because he has not needed to: he has found, and is still finding (but for how many more decades?) gigantic reserves of carbon already organicized or at least reduced. Besides the vegetable and animal worlds, these reserves are constituted by deposits of coal and petroleum: but these too are the inheritance of photosynthetic activity carried out in distant epochs, so that one can well affirm that photosynthesis is not only the sole path by which carbon becomes living matter, but also the sole path by which the sun’s energy becomes chemically usable. 
It is possible to demonstrate that this completely arbitrary story is nevertheless true. I could tell innumerable other stories, and they would all be true: all literally true, in the nature of the transitions, in their order and data. The number of atoms is so great that one could always be found whose story coincides with any capriciously invented story. I could recount an endless number of stories about carbon atoms that become colors or perfumes in flowers; of others which, from tiny algae to small crustaceans to fish, gradually return as carbon dioxide to the waters of the sea, in a perpetual, frightening round-dance of life and death, in which every devourer is immediately devoured, of others which instead attain a decorous semi-eternity in the yellowed pages of some archival document, or the canvas of a famous painter; or those to which fell the privilege of forming part of a grain of pollen and left their fossil imprint in the rocks for our curiosity; of others still that descended to become part of the mysterious shape-messengers of the human seed, and participated in the subtle process of division, duplication, and fusion from which each of us is born. Instead, I will tell just one more story, the most secret, and I will tell it with the humility and restraint of him who knows from the



Figure 5.1: Oetzi, the well preserved iceman and oldest known mummy, who was dated with the radiocarbon technique, using accelerator mass spectrometry, to have died sometime between 3360 and 3100 BC.

start that his theme is desperate, his means feeble, and the trade of clothing facts in words is bound by its very nature to fail. It is again among us, in a glass of milk. It is inserted in a very complex, long chain, yet such that almost all of its links are acceptable to the human body. It is swallowed; and since every living structure harbors a savage distrust toward every contribution of any material of living origin, the chain is meticulously broken apart and the fragments, one by one, are accepted or rejected. One, the one that concerns us, crosses the intestinal threshold and enters the bloodstream: it migrates, knocks at the door of a nerve cell, enters, and supplants the carbon which was part of it. This cell belongs to a brain, and it is my brain, the brain of the me who is writing; and the cell in question, and within it the atom in question, is in charge of my writing, in a gigantic minuscule game which nobody has yet described. It is that which at this instant, issuing out of a labyrinthine tangle of yeses and nos, makes my hand run along a certain path on the paper, mark it with these volutes that are signs: a double snap, up and down, between two levels of energy, guides this hand of mine to impress on the paper this dot, here, this one.”


Carbon-14 dating

The death of the iceman “Oetzi”, whose mummy is shown in Fig. 5.1, has been dated to have occurred between 3360 and 3100 BC using radiocarbon dating based on the isotope 14C. Figure 5.2 is a picture of the mountain range in the Alps where the mummy was found, right on the border between Austria and Italy, sparking a dispute about the right to exhibit the remains. The isotopic composition of atmospheric carbon is 99 % stable 12C, 1 % stable 13C, and only 1.2 × 10⁻¹⁰ % radioactive 14C. The latter decays with a half-life of 5730 a; however, it is constantly reproduced in the atmosphere, and then incorporated into living matter via photosynthesis, due to the impact of cosmic rays on the earth’s


Figure 5.2: The ice covered mountain ridge between Italy and Austria where Oetzi was found.

atmosphere, see Fig. 5.3. The balance of decay and reproduction accounts for the stable concentration quoted. The death of an organism, however, terminates the incorporation of new carbon atoms, and the amount of C-14 thus gradually decreases through radioactive decay. The C-14 decay is a prominent example of beta-decay. The daughter product is a 14N nucleus, which results from the emission of a fast electron. The decay is thus referred to as a β⁻-decay, in contrast to a β⁺-decay, where a positively charged positron is emitted. The positron is otherwise similar to an electron; however, being an anti-particle, it annihilates with the nearest electron, resulting in the emission of two 511 keV, ‘back-to-back’ γ-rays. Curiously, the energy spectrum of β-decay is a continuous distribution of electron (or positron) energies, as shown in Fig. 5.4. This is in stark contrast to energy spectra for α-decay. A typical example is shown in Fig. 5.5: four discrete lines can be distinguished, which reflect the states of the daughter nucleus (also shown in the figure). The question arises why the energy spectrum of the emitted particle is fundamentally different for two seemingly similar processes. Wolfgang Pauli proposed that a continuous energy spectrum can only be in agreement with the conservation laws for energy and momentum if the momentum and energy are shared not among two particles, as in α-decay (namely the α-particle and the recoiling daughter nucleus), but among three particles. He proposed that the third particle would be neutral and without rest mass, and named it appropriately the ‘neutrino’. Indeed, a neutrino is observed in β⁺-decay, while an anti-neutrino is emitted in β⁻-decay. Like electrons, neutrinos are leptons. Combining a particle with an anti-particle results in a lepton number of 0, so that lepton-number conservation is also fulfilled in both β⁺- and β⁻-decay. 
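The C-14 clock itself reduces to the exponential decay law, t = −(T½ / ln 2) · ln(N/N₀). A minimal sketch (the 52.6 % surviving fraction in the example is invented for illustration, not a measured value):

```python
import math

T_HALF = 5730.0  # years, half-life of 14C

def radiocarbon_age(fraction_remaining):
    """Age in years of a sample from the surviving 14C fraction N/N0:
    t = -(T_half / ln 2) * ln(N/N0)."""
    return -T_HALF / math.log(2) * math.log(fraction_remaining)

# A sample retaining about 52.6 % of its original 14C (an invented
# measurement) would be roughly as old as Oetzi, i.e. ~5300 years:
print(f"age: {radiocarbon_age(0.526):.0f} a")
```

Note that the measurable 14C/12C ratio starts at only about 1.2 × 10⁻¹², which is why accelerator mass spectrometry, rather than simple decay counting, is used for small samples such as the iceman's tissue.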
The complete decay equation for 14C is thus:

¹⁴₆C → ¹⁴₇N + e⁻ + ν̄ + ΔE    (5.1)

An important example of β⁺-decay is the decay of the radioisotope ¹⁸₉F, which is



Figure 5.3: The isotopic composition of carbon and the processes which lead to a stable 14C concentration in the atmosphere and, through photosynthesis, in all living plants. (1) GeV protons and other highly energetic particles from cosmic rays produce (2) particle showers in the atmosphere. (3) Neutrons from these showers undergo neutron/proton exchange reactions with 14N nuclei, thus forming radioactive 14C. (4) As carbon dioxide the 14C is (5) incorporated into plants. (6) Following the death of the plant, photosynthesis and the incorporation of 14C stop, and the concentration of this isotope gradually decreases through radioactive β⁻-decay.


Figure 5.4: Typical continuous energy spectrum for electrons from β-decay.

Figure 5.5: Energy spectrum for α-particles from the α-decay of





Figure 5.6: Schematic of a mass-spectrometer as envisaged by Wilhelm Wien and often referred to as a ‘Wien filter’. The balance of forces can be used to express the mass-to-charge ratio as a function of measurable quantities, as shown below.

often used in nuclear medicine. In this case the decay equation is:
¹⁸₉F → ¹⁸₈O + e⁺ + ν + ΔE

In accelerator mass spectrometry and other mass-spectrometric techniques, isotopes with different masses are separated from each other using combinations of electric and magnetic fields. This is illustrated in Fig. 5.6.

Exercise 15 In positron emission tomography (PET), bio-molecules such as glucose, labelled with β⁺-emitters, are introduced into the body. The intake of the molecules by different organs or tissues varies, so that physiological phenomena can be imaged and studied. The positron from the β⁺-decay of the radionuclide annihilates with a nearby electron, and two 511 keV γ-photons are emitted in opposite directions, so that the location of the labelled molecules can be detected.
(a) Write down the decay equations and the half-lives for the following radioisotopes used for PET imaging: 15O, 13N, 11C, and 18F.
(b) Why do these radionuclides have to be produced on-site, using for example a cyclotron facility in the hospital?
(c) Show that using a mass spectrometer the radioactive 15O can be separated from the stable 16O. What magnetic field is required to select 1 MeV 15O3+ ions with this mass spectrometer, when the electric field E is perpendicular to the magnetic field B and has a field strength of E = 10³ kV/m?
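The force balance in the Wien filter of Fig. 5.6, qE = qvB, gives B = E/v directly: only ions with v = E/B pass undeflected. A minimal sketch, using a non-relativistic velocity from the kinetic energy; the demonstration numbers (100 keV protons) are illustrative and deliberately not those of Exercise 15:

```python
import math

E_CHARGE = 1.602e-19  # elementary charge, C
AMU = 1.6605e-27      # atomic mass unit, kg

def wien_b_field(e_field_v_per_m, kinetic_energy_ev, mass_amu):
    """Magnetic field that balances the electric force in a Wien filter:
    qE = qvB  =>  B = E/v, with v = sqrt(2 E_kin / m) (non-relativistic).
    Note the charge state cancels, but it fixes E_kin after acceleration."""
    v = math.sqrt(2.0 * kinetic_energy_ev * E_CHARGE / (mass_amu * AMU))
    return e_field_v_per_m / v

# Illustrative example: select 100 keV protons with E = 100 kV/m.
print(f"B = {wien_b_field(1e5, 1e5, 1.0):.3f} T")  # about 0.023 T
```

Because v depends on the ion mass at fixed kinetic energy, two isotopes such as 15O and 16O require different B at the same E, which is what makes the filter a mass separator.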

Chapter 6

Gamma-decay



Nuclear medicine: history and modern practice

Gamma-photons are extensively used in nuclear medicine for diagnosis and therapy. A typical radionuclide for such applications is 99 Tc, which is well suited since it has a meta-stable, excited state which gamma-decays with a half-life of 6 hours. Nuclear medicine using gamma-rays and other ionising radiation has a long history. An account given by Perkins [Perkins 1995] is reproduced here: “During the first half of the twentieth century radioactivity was used for a wide variety of purposes. The main radionuclide was radium-226, which was available in small quantities, being formed continuously by the decay of uranium-238 in pitchblende ore. Radium-226 has a half-life of 1600 years and decays by the emission of alpha particles and gamma radiation (4.8 MeV alpha particles, 0.186-0.601 MeV gamma rays). This was used as the activating agent in luminous watches, clock faces and dials produced up to the early 1960s. In some cases the radiation doses resulting from such items could be significant; for example, the skin dose directly under the face of a pocket watch could be as high as 1.65 Gy. Radium was also used for instrument panels in aeroplanes, scientific instruments and survey meters, many of which were in widespread use by the military. Nowadays many of these items such as clocks and watches are highly collectable and are commonly found by antique dealers at auctions and markets. Buffing and polishing of such items by dealers, for example to polish brass ornamental instruments, could spread radioactive contamination to surrounding surfaces and objects. Today radioactivity is still used for a number of industrial and domestic purposes. Radionuclides are used for several industrial applications such as industrial radiography and nondestructive testing. Gamma-ray sources are used for the sterilization of medical instruments, thickness gauging, automated production control lines, and tracer applications. 
The main household items familiar to most people are smoke detectors, which mostly contain americium-241 (241Am). The early medical use of radioactivity was mainly for treatment. The first recorded medical use of a radioactive substance took place around 1901, when Danlos and Block placed radium in contact with a tuberculous skin lesion. The long half-life of radium-226 (1600 years) provided an effectively constant rate of gamma radiation which would penetrate deep into the body. This source of gamma rays was more reliable than those which could be produced with the early x-ray tubes. Radium also had the advantage that it could be introduced into the body in tubes or needles so as to cause intense local irradiation of tumours with little damage to the skin, this often being the limiting factor with the therapeutic application of x-rays. The inability to provide uniform irradiation of tumours was a particular problem, as was the necessity to perform a second operation to remove the implant at the end of the period of treatment. Later treatments were carried out using the gas radon-222, which is a daughter product of radium-



226. The gas was filled into small metal seeds or tubes which were then inserted into the body. The short physical half-life of radon (3.85 days) meant that the tubes could be left in place permanently after the treatment period. The early successes of radiation therapy led to an almost religious belief in the therapeutic properties of radiation. Radioactivity became synonymous with good health and it became fashionable to visit spa resorts in order to take the waters containing natural radioactivity. An early American radiologist was quoted as saying that radioactive water was a gift of God, curing practically all nervous disorders. Claims were made that radioactivity was a cure for virtually all known ailments. The product description of one preparation, ‘Radithor, or perpetual sunshine’, stated that radiation was ‘a physical means of re-energizing weak and inactive cells with millions of rays’. It was a common belief of the time that deep radium therapy utilized the destructive properties of rays, whereas mild radium therapy was based on the stimulating properties of small doses. A large number of devices became commercially available to produce home brews of radium water. Such cures were claimed to be valuable in the treatment of anaemia, arteriosclerosis, arthritis, catarrh, diabetes, goitre, high blood pressure, the menopause, menstrual disorders, nephritis, neuritis, nervous conditions, obesity, prostatitis, rheumatism, senility, sexual conditions and skin disorders. In some cases bogus products were sold which claimed to contain radioactivity but in fact contained common chemicals or samples of earth. One such preparation, which contained no radioactivity, was named Hearium and was intended to cure ear problems. In some cases the makers and distributors of these fake remedies were prosecuted, fined and imprisoned because their remedies did not contain radioactivity. 
A number of devices used for radium cures still turn up from time to time, in some cases causing concern when the nature of the contents is realized. For example, a number of radium water siphons have been picked up by antique dealers. Other unusual products include radioactive corsets for the treatment of back ache; a still more unusual example of a curative device was the Q-ray electro-compress. Such a compress was bought by a school teacher at a jumble sale in Mansfield, Nottinghamshire, UK, in 1986. This dry compress contained natural uranium ore sewn into an electric blanket and was claimed to combine the natural properties of radioactivity with heat. At the request of the local police this particular blanket was retrieved by the Medical Physics Department at Queen’s Medical Centre, Nottingham, for disposal, and was found boxed complete with the manufacturer’s literature, including medical testimonies and photographs of the compress in use at St Thomas’s Hospital, London. The early beliefs in the therapeutic properties of radiation lacked any real scientific basis. However, as knowledge increased, a number of different radionuclides and radioactive compounds became available and the concept of targeting radiation to sites of disease in the body gradually became a reality. It is also now apparent that the utilization of natural and artificial radionuclides has made an


unparalleled contribution to the understanding of human physiology and pathology. In 1923 Hevesy introduced the use of radionuclides as biological tracers by studying the absorption of radioactive lead in plants. He later used phosphorus-32 (32P) phosphate as a tracer to study the metabolism of phosphorus in the rat. Modern diagnostic nuclear medicine techniques are based on this concept. Unlike most other imaging modalities, which provide information concerning human anatomy or structure, radionuclide counting and imaging techniques provide information on tissue and organ function. Imaging techniques using a gamma camera are capable of mapping out the biodistribution of an administered radiolabelled compound, providing information concerning organ and tissue function. The early development of clinical nuclear medicine in the 1940s was largely based on the use of radioisotopes of iodine. Iodine-130 was first used for the investigation of patients with thyroid disorders. Artificially produced iodine-131 subsequently became the main radionuclide for the investigation and treatment of patients with thyroid disease. Iodine, with an atomic number of 53, is the heaviest element required for human metabolism. The body is unable to differentiate between the radioactive and non-radioactive isotopes and they are therefore metabolized in an identical manner. Substitution of a radioisotope of iodine for a naturally occurring atom provides a means of monitoring iodine distribution in the body and subsequently iodine metabolism in the thyroid gland. Iodine is unique in this respect: there are very few biochemicals or drugs which contain an element with a gamma-emitting isotope suitable for isotopic substitution and gamma camera imaging. It is therefore usually necessary to incorporate the radionuclide chemically without altering the biological properties of the material, ensuring also that the compound remains stable after administration. 
In nuclear medical diagnosis we are principally concerned with gamma rays, both for external imaging and for detection in radiation sample counters, although some beta emitters are also administered for diagnostic purposes. Some in vitro diagnostic procedures utilizing beta emitters are performed; in particular, there has recently been an increase in the use of phosphorus-32 for autoradiographic studies such as genetic mapping. In radiotherapy procedures the cell-killing properties of beta emitters are used, for example iodine-131 for the treatment of thyrotoxicosis, phosphorus-32 for the treatment of polycythaemia and strontium-89 for bone pain palliation. Alpha particles are seldom used in medicine and are restricted to a few research applications. In each clinical case the administered activity is prescribed by the radiotherapist or oncologist and calculated by the medical physicist or therapy radiographer. Patients who receive therapeutic amounts of activity are admitted to specialized hospital suites and given special instructions for the duration of the period of active treatment.”



Figure 6.1: Single Photon Emission Tomography (left) and x-ray Computer Tomography (right) cross-sectional images of the brain of a patient diagnosed with Ischaemic Stroke. The reduction in blood flow in the left part of the brain is clearly visible with Single Photon Emission Tomography, whereas x-ray Computer Tomography cannot give an unambiguous diagnosis.


Gamma-rays in nuclear medicine

The strength of Single Photon Emission Tomography (SPET) may be appreciated when it is compared to the well-known Computer Tomography (CT). Figure 6.1 shows as an example the tomography images obtained with 99mTc SPET and x-ray CT for a patient diagnosed with ‘ischaemic stroke’, a condition in which a blood clot blocks the blood circulation in parts of the brain. SPET produces much more definite evidence of this condition than CT can. While the tomographic image processing is similar, the two techniques differ in several ways.

1. In the case of SPET the source of the γ-photons is internal. This is achieved by injecting a bio-molecule such as glucose, which has been labelled with a radioisotope. CT employs an external source of x-rays.

2. While the energy of the γ-photon in SPET is well defined (for 99mTc it is 143 keV), CT uses a broad energy spectrum of x-rays.
3. The labelled bio-molecule is “selective” and accumulates in certain tissues or organs, so that even physiological processes can be studied in real time. In contrast, CT produces a “shadow” image of the body, relying on differences in x-ray absorption throughout the body. Since x-ray absorption generally changes little when organs malfunction, CT is not very sensitive to such conditions, while it is, for example, ideal for the identification of bone fractures because of the large difference in x-ray absorption between bone and tissue.

4. X-ray CT is less complex than SPET, since the on-site production of very active, short-lived radioisotopes and the associated radiochemistry are not required.

A schematic illustration of the tomography procedure, as it is applied in the case of SPET, is shown in Fig. 6.2. Figure 6.3 shows a picture of a typical SPET facility. In a hospital the 99mTc is obtained from a technetium generator (see Fig. 6.4) containing 99Mo, which decays via β−-decay (t1/2 = 66 h) to 99Tc, with 81% of the decays populating an excited, meta-stable state of this nuclide at an excitation energy of 143 keV. The 143 keV γ-rays employed in SPET imaging correspond to transitions from this excited state, 99mTc, to the long-lived ground state of 99Tc. Since neither the mass number nor the atomic number changes, this nuclear transformation can only proceed via γ-decay, which releases energy as a single photon and, to conserve momentum and energy, as the kinetic recoil energy of the technetium nuclide.
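The growth and decay of 99mTc in such a generator follows the two-member Bateman equation. Below is a minimal sketch assuming only the values quoted above (66 h and 6 h half-lives, 81% branching); the function name and the 72-hour scan are illustrative choices, not part of the text:

```python
import math

T_MO, T_TC = 66.0, 6.0   # half-lives in hours, as quoted in the text
BRANCH = 0.81            # fraction of 99Mo decays feeding the isomer 99mTc

def tc99m_activity(a_mo0, t):
    """99mTc activity at time t (hours), for an initial 99Mo activity
    a_mo0 and no 99mTc present at t = 0 (two-member Bateman equation)."""
    lam1 = math.log(2) / T_MO   # 99Mo decay constant
    lam2 = math.log(2) / T_TC   # 99mTc decay constant
    return (a_mo0 * BRANCH * lam2 / (lam2 - lam1)
            * (math.exp(-lam1 * t) - math.exp(-lam2 * t)))

# After elution the daughter activity builds up again and peaks
# roughly a day later, which is why generators are eluted daily.
peak_t = max(range(72), key=lambda t: tc99m_activity(1.0, t))
```

With these half-lives the maximum lies near t ≈ 23 h, consistent with the daily elution schedule of clinical generators.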


The equivalence of energy and mass

It is not immediately obvious where the γ-ray photons emitted by 99mTc source their energy Eγ = 143 keV. Figure 6.5 shows the energy difference between the mother nuclide 99Mo and its daughter product 99Tc, including all intermediate excited states of 99Tc, which are also populated by the β−-decay of 99Mo. It is apparent that 99Tc is energetically favoured compared to 99Mo. However, since energy conservation has to be fulfilled, this energy difference has to be accounted for. It is a fundamental outcome of Albert Einstein’s Special Theory of Relativity that the energy difference between 99Mo and 99Tc corresponds to a slight mass difference between these nuclides, and that energy is conserved because of the principle of the equivalence of energy E and mass m, famously expressed as

E = mc2    (6.1)

which has general validity. Equation 6.1 can be used to express masses in terms of energy per light-velocity-squared, since 1 atomic mass unit (1 amu or 1 u) equals 931.5 MeV/c2. In these units the masses of the unbound proton, neutron, and electron are given as

mp = 938.3 MeV/c2
mn = 939.6 MeV/c2
me = 0.511 MeV/c2    (6.2)
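The conversions in Eq. 6.2 are easy to verify numerically. A minimal sketch; the particle masses in atomic mass units are standard rounded values, not taken from the text:

```python
U = 931.5  # 1 atomic mass unit expressed in MeV/c^2

def mass_to_energy_mev(mass_u):
    """Rest energy E = m c^2, in MeV, of a mass given in atomic mass units."""
    return mass_u * U

# Proton, neutron and electron masses in u (rounded standard values)
masses_u = {"p": 1.007276, "n": 1.008665, "e": 0.000549}
rest_energies = {k: mass_to_energy_mev(v) for k, v in masses_u.items()}
# These reproduce the 938.3, 939.6 and 0.511 MeV/c^2 quoted in Eq. (6.2).
```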



Figure 6.2: Principle of tomography as applied in SPET.


Figure 6.3: A SPET facility.

Exercise 16 (a) Calculate the mass difference of the nuclei 99Mo and 99Tc in units of MeV/c2. (b) How do the masses of 99Mo and 99Tc compare with the combined mass of 42 protons and 57 neutrons, and 43 protons and 56 neutrons, respectively? Express the difference in units of MeV/c2. [According to the chart of nuclides, “http://www2.bnl.gov/ton/”, the relevant atomic masses are m(99Mo) = 98.90772 amu and m(99Tc) = 98.90626 amu, respectively.]

Part (a) of the exercise shows that the 99Tc nucleus is lighter than the 99Mo nucleus by ∆m = 1.87 MeV/c2. According to Einstein’s principle of the equivalence of energy and mass, this mass difference accounts for the rest mass of the particles emitted during the β−-decay of 99Mo, an electron and an anti-neutrino, and for the kinetic energies of these particles plus the total energy of all γ-photons emitted subsequently until the ground state of 99Tc has been attained. Since the rest mass of the anti-neutrino is negligible, the only rest mass removed is that of the β−-electron. The remainder is carried off in the form of energy ∆E, which can thus be calculated as

∆E = (1.87 MeV − 0.511 MeV) ≈ 1.36 MeV    (6.3)

The decay equation can be written, augmented by this energy difference ∆E, as

99Mo → 99Tc + e− + ν̄ + ∆E

The energy difference ∆E = Q, which can be positive or negative, is commonly referred to as the Q-value of a nuclear decay or a nuclear reaction. The exercise illustrates that the electron mass and the kinetic energy carried by the e−, the ν̄, and the recoiling nuclide have their origin in the mass difference between the two nuclei.
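The arithmetic of Exercise 16(a) and Eq. 6.3 can be reproduced in a few lines, using the atomic masses quoted in the exercise (the variable names are my own):

```python
U = 931.5    # MeV/c^2 per atomic mass unit, as used in the text
ME = 0.511   # electron rest mass in MeV/c^2

m_mo, m_tc = 98.90772, 98.90626   # atomic masses in u, from the exercise

# Atomic-mass difference in MeV/c^2
dm_atomic = (m_mo - m_tc) * U          # about 1.36 MeV/c^2
# The nuclear masses differ additionally by one electron mass,
# since the Tc atom carries one more electron than the Mo atom.
dm_nuclear = dm_atomic + ME            # about 1.87 MeV/c^2, part (a)
# Energy shared between the beta electron's kinetic energy, the
# anti-neutrino and the subsequent gamma-rays, Eq. (6.3):
dE = dm_nuclear - ME                   # about 1.36 MeV
print(f"dm = {dm_nuclear:.2f} MeV/c^2, Q = {dE:.2f} MeV")
```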



Figure 6.4: Cut-away view of a 99mTc generator box.


Figure 6.5: The energy differences between 99Mo and 99Tc, including all intermediate excited states of 99Tc (level diagram).

Part (b) of the exercise shows that, for both nuclides, the sum of all nucleon masses, i.e. the sum of all proton and neutron masses, is larger than the actual nuclear mass. This difference is referred to as the mass defect ∆m. The ‘defective’, or missing, mass accounts for the nuclear binding energy B of the nucleus. Using the rounded masses of Eq. 6.2, one obtains BMo ≈ 854.7 MeV for 99Mo and BTc ≈ 855.3 MeV for 99Tc. The technetium nucleus is thus bound more strongly than the molybdenum nucleus, by ∆B ≈ 0.6 MeV. Together with the neutron-proton mass difference mn − mp ≈ 1.3 MeV/c2 (in the decay a neutron is converted into the lighter proton), this accounts for the mass difference of 1.87 MeV/c2 calculated in part (a) of the exercise: ∆m = (mn − mp) + ∆B/c2. The fact that the binding energy of the technetium nuclide is larger is consistent with the observation that 99Mo decays to 99Tc, thus attaining more stability. Like other physical systems, nuclei ‘aim’ for minima of the potential energy. Division by the nucleon number 99 shows that the binding energy per nucleon B/A is for both examples of the order of 8 MeV per nucleon. This value holds approximately for all isotopes, with the exception of nuclei lighter than 16O. A graph illustrating B/A as a function of mass number A is shown in Fig. 6.6.

Exercise 17 Apply the principle of the equivalence of energy and mass.

1. Calculate the Q-values for the following decay processes and explain why some of them are not observed: (a) α-decay of 235U (b) p-decay of 235U.

2. Calculate the Q-value for the β+-decay of 18F and determine the mass defects of 18F and 18O.



Figure 6.6: An illustration of the change of B/A with mass number A. B/A peaks near A ≈ 60 (the maximum is at 62Ni, closely followed by 58Fe and 56Fe). Superimposed on the smooth trend is a fine structure due to nuclear shell effects.

3. The α-decay of 226Ra to 222Rn involves four discrete α-lines. The most energetic α-particle has an energy approaching 4.785 MeV. Four discrete γ-lines with energies of 0.601 MeV, 0.415 MeV, 0.186 MeV, and 0.262 MeV, respectively, are observed in coincidence with the α-decays or soon after. Draw a level scheme.
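The mass-defect bookkeeping used in part (b) of Exercise 16 can likewise be checked numerically. A sketch using the rounded constants of Eq. 6.2; the helper function is my own:

```python
U = 931.5                          # MeV/c^2 per atomic mass unit
MP, MN, ME = 938.3, 939.6, 0.511   # rounded masses from Eq. (6.2), MeV/c^2

def binding_energy(z, n, atomic_mass_u):
    """Nuclear binding energy in MeV from the mass defect:
    B = (Z*mp + N*mn - M_nucleus) c^2, with the nuclear mass obtained
    from the atomic mass by removing Z electron masses."""
    m_nucleus = atomic_mass_u * U - z * ME
    return z * MP + n * MN - m_nucleus

b_mo = binding_energy(42, 57, 98.90772)   # 99Mo
b_tc = binding_energy(43, 56, 98.90626)   # 99Tc
# Both come out near 8.6 MeV per nucleon, with 99Tc bound slightly
# more strongly than 99Mo (by roughly 0.6 MeV).
```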


Scintillation detectors

The observation of γ-rays in medical and other applications requires sophisticated detection systems. An important example is the scintillation detector, which is based on materials, such as sodium iodide, that scintillate: when they absorb γ-radiation, they emit optical photons. The number of photons emitted is proportional to the energy absorbed. Thus, when the photons are counted, the energy of the original γ-ray can be determined. This can be achieved using the photocathode of a photomultiplier by taking advantage of the photoelectric effect. Above a certain energy, each emitted photon releases an electron from the photocathode. The number of electrons released from the photocathode is therefore proportional to the number of emitted photons, and hence also to the energy of the original γ-ray. This number of electrons is too small for their combined charge to be measured directly. It is therefore multiplied using a sequence of dynodes


which are positively charged with increasing bias. Impact ionization releases secondary electrons, which in turn multiply at the next dynode, until eventually a measurable signal is obtained. While the detection efficiency of scintillation detectors is excellent, their energy resolution is much poorer than that of semiconductor detectors. This is due to the statistical nature of the scintillation process. However, in medical applications the energy of the relevant γ-rays is usually well known, so that energy resolution is not important, whereas image quality improves with the number of γ-photons detected. Scintillation detectors are thus the system of choice for techniques such as SPET or PET.

In order to achieve position sensitivity and obtain two-dimensional images, a large sodium iodide crystal is backed by an array of photomultiplier tubes. For each γ-ray detected, the sum of the responses from all photomultiplier tubes is proportional to the energy of the γ-photon, while the tube with the maximum response indicates the location of its impact. For a specific energy, observation over time thus produces a two-dimensional image of the γ-ray activity inside a patient, and the distribution of a particular radioisotope, such as 99mTc, can be mapped. Such a system is known as a gamma camera, or Anger camera, after its developer Hal Anger. In order to ensure that gamma-rays are incident perpendicular to the crystal surface, and thus to ensure a reliable correlation between the origin of the γ-photon and the point of observation, the crystal is covered with a thick collimator (typically lead, because of its large absorption coefficient), which has up to 35,000 parallel holes. This suppresses ‘stray γ-photons’ which leave the patient at angles much smaller or larger than 90°.

A magnification effect can be achieved by using a special collimator in which the channels associated with the holes are not parallel but converge towards the patient.
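The position logic of the gamma camera can be illustrated with a toy calculation. Note that practical Anger cameras refine the "hottest tube" idea by taking an intensity-weighted centroid of all tube responses ("Anger logic"); the function and the sample numbers below are illustrative, not taken from the text:

```python
def anger_position(responses):
    """Estimate the gamma interaction point from photomultiplier responses.

    `responses` maps tube (x, y) positions to measured pulse heights.
    The energy signal is the sum of all responses; the position estimate
    is their intensity-weighted centroid (Anger logic).
    """
    total = sum(responses.values())
    x = sum(px * r for (px, py), r in responses.items()) / total
    y = sum(py * r for (px, py), r in responses.items()) / total
    return total, (x, y)

# A gamma-ray striking between two tubes, closer to the tube at x = 1:
energy, pos = anger_position({(0, 0): 30.0, (1, 0): 60.0, (2, 0): 10.0})
```

An event is accepted for the image only if `energy` falls in a window around the known photopeak (143 keV for 99mTc), which rejects scattered photons.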



Summary

It may be summarized that nuclear transmutations such as α- and β-decay are driven by increases in nuclear binding energy. The binding energy of a nucleus corresponds to its mass defect, which is a consequence of the equivalence of energy and mass, first suggested by Einstein and well established experimentally. Nuclear transmutations do not necessarily lead directly to the ground state of a nuclide, but can populate excited states with varying life-times. Such states decay to the ground state via γ-decay and the emission of a γ-photon. An important example of this process is the decay of 99mTc, which is used extensively in nuclear medicine, in particular for Single Photon Emission Tomography.

Chapter 7





First in-class test

In 20 minutes address the following questions. Calculators are permitted; no written documents or books are allowed.

1. [4 marks] An exercise device is used to strengthen the leg muscles, see Fig. 7.1. Calculate the force M exerted by the muscle in the upper leg when the foot moves forward to lift the weight. What is the reaction force exerted onto the joint?

2. [4 marks] In about half a page comment on the material properties of our bones. In what ways does bone out-perform other possible biological or synthetic material choices for the human skeleton?

3. [4 marks] (a) Sketch the human eye, label your drawing and briefly explain the role of the important parts. (b) Calculate the change in direction for a light-ray incident on the cornea at an angle of 25° with respect to the surface normal (refractive indices: nair ≈ 1, ncornea ≈ 1.37).

4. [3 marks] At the nearest houses the traffic on a motorway produces noise with an intensity of 10−6 W/m2. An expansion of the motorway with additional lanes is expected to double the traffic. Calculate the intensity level in dB before and after the motorway expansion. (The intensity level L is defined through L = 10 dB · log10(I/I0), where I0 = 10−16 W/cm2 = 10−12 W/m2.)

Figure 7.1: Mechanical representation of an exercise to strengthen the leg muscles [McCormick & Elliot 2001].



Exercise 1


Exercise 2



Exercise 3




Exercise 4


Exercise 7



Exercise 9 Yellow spot and blood vessels When looking through a pinhole in a piece of paper, held at a distance of about 10 cm in front of an intense light source, the blood vessels of the retina become visible as an intricate black network. In the centre of the image a dark spot can be seen: the yellow spot. The yellow spot is an area of the retina, about 1 mm in diameter, right on the optical axis, where the eye is most sensitive. Exercise 10 Blind spot At the point where the optic nerve passes through the retina, the retina is not sensitive. The blind spot can be identified by fixating on a certain object with one eye closed. For the left eye the blind spot is about 15° to the left of the direction of view, and for the right eye it is about 15° to the right, respectively. Objects located at those angles disappear from view. Exercise 11 Upward and downward rays on strong light sources When looking at a strong light source (for example a street light at night) it often appears that light-rays extend downwards and/or upwards from it, but not to either side. The reason for this phenomenon is the diffraction of the incident light in the meniscus of the tear liquid just underneath the upper eyelid (perception of upward rays) and just above the lower eyelid (perception of downward rays). Exercise 12 Why do curtains obscure the view one way, but not the other? Eye and brain can distinguish objects when their brightness ratio is above about 5% (Fechner’s law). When a curtain is in bright sunshine during the day, all objects behind it reflect light with a much lower absolute intensity, so that the relative differences are less than 5% on the scale defined by the bright curtain. At night, with light sources inside the room, the situation is reversed. Exercise 13


Exercise 14






Exercise 15




Exercise 16




Exercise 17


[Halliday et al. 1993] David Halliday, Robert Resnick, Jearl Walker, Fundamentals of Physics, John Wiley & Sons, USA, ISBN 0-471-57578-X (1993)

[Levi 1975] Primo Levi, excerpt from The Periodic Table, in The Faber Book of Science, ed. John Carey, Faber & Faber Ltd, London, ISBN 0-571-16352-1 (1995), pp. 338-344

[McCormick & Elliot 2001] Andrew McCormick and Alexander Elliot, Health Physics, Cambridge University Press, UK, ISBN 0-521-78726-2 (2001)

[Perkins 1995] A.C. Perkins, Nuclear Medicine: Science and Safety (1995)

