Biophysics course
Medical physics
HANDBOOK
for students of the 1st year
Almaty 2021
UDC: 577.3(075.8)
BBC: 28.071 я73
А-12
Authors:
Venera Abdrassilova, Master of Natural Science, Lecturer at the Department of
Normal Physiology with course of biophysics;
Gulzhakhan Baydullaeva, associate professor at the Department of Normal
Physiology with course of biophysics;
Gulzhan Ilyassova, Master of Natural Science, assistant at the Department of
Normal Physiology with course of biophysics.
Reviewers:
Abishev M., Professor, Al Farabi KazNU, Head of TNP department.
Ryspekova Sh. O., Candidate of Medical Sciences, associate professor and Head
of Department of Normal Physiology with course of biophysics.
This educational-methodological complex is intended for 1st-year students of the Medical University in the specialties «General Medicine» and «Pediatrics». The handbook was created to help students: it contains theoretical material for the topics provided by the educational program of the subject, together with descriptions and procedures for the practical and laboratory work. For self-examination of the acquired knowledge, there are test tasks (MCQs) and situational tasks for independent solution.
© Authors, 2021
CONTENT
1 Medical physics subject and methods of mathematical processing of experimental data and calculation of the error
2 Sound. Physical properties of sound. Acoustics. Medical application of sound
3 Structure and functions of biological membranes and methods for studying membrane structure
4 Transport of substances across membrane. Passive and active transport
5 Electrical excitability of tissues. Biopotentials. Resting and action potentials of cells
6 Physical fundamentals of electrocardiography
7 Physical fundamentals of electroencephalography
8 Multistage amplifiers in medicine
9 The principles of converting biological (non-electrical) signals into electrical ones. Thermoregulation. Calibration of temperature sensors
10 The effect of electromagnetic fields and currents on the body. Galvanization and electrophoresis. The physical basis of rheography
11 Physical issues of hemodynamics based on hydrodynamics. Bioreology. Determination of the viscosity of liquids using a viscometer
12 Biophysics of vision. Special techniques of microscopy and polarization of biological objects
13 Eye refraction and refractometric research methods in medicine. Refractive indices of the eye and fluids. Introscopy
14 Registration of superweak bioluminescence and brightness enhancement of the X-ray image. Photoelectric converters, photoelectronic amplifiers, electron-optical converters
15 Biological effects and mechanisms of action of radiation. Radioactivity. X-ray and dosimetry
16 Abbreviations
17 Appendix A. Table of Student's coefficients for calculating errors
18 Appendix B. Tables of physical constants, SI units and prefixes
19 Appendix C. Trigonometrical ratios table
20 Appendix D. Bradis table of sines and cosines
21 Appendix E. Tables of density of aqueous solutions of glycerin and speed of sound in different media
PREFACE
The handbook on the subject "Medical Physics" is intended for 1st-year
students of the specialties "General Medicine" and "Pediatrics" to help them study
the course program, and can also be used by students for independent study of the
discipline.
The handbook contains theoretical material and different types of practical tasks (tests, situational tasks, working texts) for mastering the program; algorithms for conducting the laboratory work, with explanations, are also given.
The purpose of studying the subject is the formation of basic knowledge in the field of medical physics and biophysics and the acquisition of practical skills for working with medical devices; the study of the foundations of applied physics as they apply to solving medical problems and to the physical principles of the operation of medical equipment; and the study of the physical laws underlying the functioning of human organs and tissues, as well as the influence of various physical factors on the human body.
Currently, many biophysical methods are widely used to elucidate the mechanisms by which factors of the external and internal environment, including those of a physical, chemical or technogenic nature and the action of toxic agents, affect the body.
Our subject is a prerequisite for the subject "Normal Physiology", therefore,
the study program includes the physical basics of the transport of substances
through the biological membrane, the mechanism of resting and action potential,
electrocardiography, electroencephalography and hemodynamics.
Using the knowledge gained, the graduate will be able to apply medical physics (the devices, equipment and physical factors of influence on a person used in medicine) in providing high-quality, patient-centered treatment, and to interpret the results of functional research methods (ECG, EEG, viscometry, rheography, introscopy, UHF, electrophoresis) using modern equipment for diagnosing diseases.
INTRODUCTION
Despite the complexity and interconnection of various processes in the
human body, it is often possible to distinguish among them processes close to
physical ones. For example, such a complex physiological process as blood
circulation is fundamentally physical, since it is associated with the flow of fluid
(hydrodynamics), the propagation of elastic vibrations through the vessels
(oscillations and waves), the mechanical work of the heart (mechanics), the
generation of biopotentials (electricity), and so on. Breathing is associated with the
movement of gas (aerodynamics), heat transfer (thermodynamics), evaporation
(phase transformations), etc.
In the body, in addition to physical macroprocesses, as in inanimate nature,
there are molecular processes that ultimately determine the behavior of biological
systems. Understanding the physics of such microprocesses is necessary for a
correct assessment of the state of the body, the nature of certain diseases, the action
of drugs, etc. In all these issues, physics is so connected with biology that it forms
an independent science - biophysics, which studies physical and physicochemical
processes in living organisms, as well as the ultrastructure of biological systems at
all levels of organization - from submolecular and molecular to the cell and the
whole organism.
Many diagnostic and research methods are based on the use of physical
principles and ideas. Most modern medical devices are structurally physical
devices. The mechanical quantity, blood pressure, is a metric used to evaluate a
number of diseases. Listening to sounds from within the body provides information
about normal or abnormal organ behavior. A medical thermometer based on the
thermal expansion of mercury is a very common diagnostic device. Over the past
decade, in connection with the development of electronic devices, a diagnostic
method based on the recording of biopotentials arising in a living organism has
become widespread. The best-known of these methods, electrocardiography, is the recording of biopotentials that reflect cardiac activity. The role of the microscope
for biomedical research is well known. Modern medical devices based on fiber
optics allow examining the internal cavities of the body. Spectral analysis is used
in forensic medicine, hygiene, pharmacology and biology; the achievements of atomic and nuclear physics underlie such well-known diagnostic methods as X-ray diagnostics and the tagged-atom (tracer) method.
In the general complex of various methods of treatment used in medicine,
physical factors also find their place. Electric and electromagnetic influences are
widely used in physiotherapy. For therapeutic purposes, visible and invisible light
(ultraviolet and infrared radiation), X-ray and gamma radiation are used.
Medical bandages, instruments, electrodes, prostheses, etc. work under the
influence of the environment, including in the immediate environment of
biological media. To assess the possibility of operating such products in real
conditions, it is necessary to have information about the physical properties of the
materials from which they are made. For example, for the manufacture of
prostheses (teeth, vessels, valves, etc.), knowledge of mechanical strength,
resistance to repeated loads, elasticity, thermal conductivity, electrical
conductivity, and other properties is essential. In some cases, it is important to
know the physical properties of biological systems to assess their viability or
ability to withstand certain external influences.
Changes in the physical properties of biological objects can be used to diagnose diseases. A living organism functions normally only by interacting with
the environment. It reacts sharply to changes in such physical characteristics of the
environment as temperature, humidity, air pressure, etc. The effect of the external
environment on the body is taken into account not only as an external factor, it can
be used for treatment: climatotherapy and barotherapy. These examples
demonstrate that the physician must be able to assess the physical properties and
characteristics of the environment. The applications of physics to medicine listed above constitute medical physics: a complex of branches of applied physics and biophysics in which physical laws, phenomena, processes and characteristics are considered in relation to solving medical problems.
Modern medicine is based on the widespread use of a variety of equipment,
which is mostly physical in design, therefore, in the course of medical and
biological physics, the structure and principle of operation of the main medical
equipment are considered.
I
MEDICAL PHYSICS SUBJECT AND METHODS OF MATHEMATICAL
PROCESSING OF EXPERIMENTAL DATA AND CALCULATION OF
THE ERROR
Medical physics is the application of physics principles to medical practice.
It is most commonly used to refer to physical applications involving the use of
ionizing and non-ionizing radiation in medicine for diagnostic and therapeutic
purposes. More broadly, medical physics can refer to the physics of the various forms of energy used in medical procedures, such as electromagnetic waves (as in electrocardiography and laser surgery) and ultrasound.
Medical physics is the use of physics in medical diagnosis and treatment.
The major subfields of medical physics are
- diagnostic radiological physics,
- therapeutic radiological physics,
- medical nuclear physics, and
- medical health physics.
Biophysics is an interdisciplinary science that applies the principles of
physics and the methods of mathematical analysis and computer modeling to
understand how the mechanisms of biological systems work.
Biophysics uses mathematical and physical laws, as well as the latest
developments in computer technology as tools for studying phenomena on a wide
variety of scales, from the global human population to individual atoms in a
biomolecule. Appropriate modeling methodologies range in scale from the angstrom level to the macroscopic, depending on the field of study (from atomistic to evolutionary effects).
For studying large systems, the most common and reliable mathematical strategy is to develop systems of differential equations. At the molecular level, molecular dynamics is frequently used to describe biomolecules as systems of moving Newtonian particles whose interactions are specified by a force field, together with a wide range of approaches for addressing the problem of solvent effects. In some cases pure quantum-mechanical approaches, which describe molecules using wave functions and electron densities, can and should be used, but the computing costs in time and resources can be prohibitive, and hybrid classical-quantum methods are usually more appropriate.
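As a toy illustration of the differential-equation strategy mentioned above, the following Python sketch numerically integrates a simple first-order rate equation dC/dt = −kC by the forward Euler method; the rate constant and initial value are hypothetical, chosen only for demonstration.

    # Forward Euler integration of dC/dt = -k*C over 10 s (hypothetical k, C0)
    k = 0.5      # rate constant, 1/s
    C = 1.0      # initial value C0, arbitrary units
    dt = 0.01    # time step, s

    for _ in range(1000):   # 1000 steps of 0.01 s = 10 s
        C += -k * C * dt

    print(f"C(10 s) ≈ {C:.4f}")   # analytic value exp(-5) ≈ 0.0067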
A process used to establish, disprove, or validate a hypothesis is known as
an experiment. Experiments reveal cause-and-effect relationships by illustrating
what happens when a particular component is changed. Experiments can have a
wide range of goals and scales, but they all rely on a repeatable technique and
logical analysis of the data.
There are always errors in any measurement; no physical quantity can be
measured with absolute certainty. This means that if we measure something and
then repeat the measurement, we will almost certainly get a different result the
second time. So, how can we determine the "true" value of a physical quantity? We
can't, to put it succinctly. However, by taking greater care in our measurements and
using ever more refined experimental methods, we can reduce errors and gain
greater confidence that our measurements are getting closer to the true value.
The study of errors in physical measurements is known as "error analysis"
and a thorough account would take far more time and space than we have in this
course. However, by taking the effort to master some fundamental error analysis
principles, we can:
1) know how to estimate experimental error,
2) know the types and sources of experimental errors,
3) report values of measurements and their uncertainties clearly and correctly, and
4) improve our measurement skills and design experimental procedures and approaches that minimize experimental errors.
Figure 1. Accuracy and precision.
Rounding values
Rounding numbers during intermediate calculations should be avoided, because the approximations introduced by rounding accumulate. Throughout the calculations, carry one or two extra significant figures in all variables. Intermediate results may be reported as rounded numbers, but only the non-rounded values should be used in subsequent processing. Quote a rounded figure for your final answer; the final quoted errors should contain no more than two significant figures.
Systematic errors are often difficult to detect, but once discovered, they can only be removed by fine-tuning the measuring method or approach.
Erroneous calibration of measuring devices, poorly maintained instruments, or
faulty reading of instruments by the user are all common sources of systematic
errors. The term "parallax error" refers to a type of systematic error that occurs
when a user reads an instrument at an angle, resulting in a reading that is regularly
high or consistently low.
Random Errors
Random errors are errors that affect the precision of a measurement. Random errors are "two-sided" errors because, in the absence of other types of errors, repeated measurements produce values that fluctuate above and below the true or accepted value. Measurements prone to random errors differ from one another due to random, unpredictable variations in the measuring procedure. The precision of measurements subject to random errors can be improved by repeating them. Random errors can be readily analyzed using statistical analysis.
Problems estimating a number that lies between the graduations (the lines)
on an instrument and the inability to interpret an instrument because the readout
fluctuates during the measurement are common sources of random errors.
A voltmeter, for example, reads 1.493 volts; the precision of the voltage value is then 1/2 of 10⁻³ volts, or 5·10⁻⁴ volt.
Percent Error
The accuracy of a measurement is characterized by the difference between a measured or experimental value E and a true or accepted value A, expressed as a fraction of the accepted value (also known as the fractional difference). The percent error is computed from the following equation:

% Error = ( |E − A| / A ) · 100%   (1.1)
Percent Difference
The difference between two measured or experimental values E₁ and E₂, expressed as a fraction of the average of the two values, measures the precision of the two measurements. To compute the percent difference, use the following formula:

% Difference = ( |E₁ − E₂| / ((E₁ + E₂)/2) ) · 100%   (1.2)

The mean x̄ of a set of N measurements is

x̄ = (1/N) · Σ xᵢ   (1.3)

where xᵢ denotes the i-th measured value of x and the sum runs over i = 1 … N. Simply divide the sum of the measured values by the number of measured values to get the mean.
The standard deviation of the measured values is denoted by σₓ and is calculated using the following formula:

σₓ = √( Σ (xᵢ − x̄)² / (N(N − 1)) )   (1.4)
t(α, n) is Student's coefficient, which depends on the confidence factor (α) and the number of measurements (n). δ is the confidence interval, calculated by the following formula:

δ = σ · t(α, n)   (1.7)

5. Write the result of the experiment in the following form:

x = (x̄ ± δ) unit of measurement;  ε%   (1.8)
6. Make a conclusion.
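As an illustration of the steps above, the processing can be carried out numerically. The following Python sketch uses made-up sample readings and a Student's coefficient of the kind tabulated in Appendix A; all numbers are hypothetical.

    import math

    # Hypothetical repeated measurements of some quantity (arbitrary units)
    readings = [12.3, 12.5, 12.1, 12.4, 12.6]
    N = len(readings)

    mean = sum(readings) / N   # formula (1.3)

    # Standard deviation by formula (1.4)
    sigma = math.sqrt(sum((x - mean) ** 2 for x in readings) / (N * (N - 1)))

    t = 2.78             # Student's coefficient t(α = 0.95, n = 5) from the table
    delta = sigma * t    # confidence interval, formula (1.7)
    epsilon = delta / mean * 100   # relative error, %

    # Result written in the form (1.8)
    print(f"x = ({mean:.2f} ± {delta:.2f}) units; ε = {epsilon:.1f} %")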
Note
After each calculation task, laboratory work or experiment, it is necessary to write a conclusion that includes information about the result obtained.
The conclusion should not restate the procedure of the work or the theoretical background; instead, explain the result you obtained, compare it with theoretical data (if available), and justify the correctness of your result using formulas or regularities.
II
SOUND. PHYSICAL PROPERTIES OF SOUND. ACOUSTICS.
MEDICAL APPLICATION OF SOUND
Our senses of hearing and sight provide us with the majority of information
about our physical surroundings. We get information about objects in both
circumstances without coming into physical contact with them. In the first
scenario, we receive information through sound, while in the second example, we
receive information through light. Sound and light are both waves, despite the fact
that they are completely different phenomena. A wave is a disturbance that
transports energy from one location to another without transferring mass. Our
sensory processes are stimulated by the energy conveyed by the waves.
Sound is a mechanical wave produced by vibrating bodies. When an object,
such as a tuning fork or the human vocal cords, is brought into vibrational motion,
the surrounding air molecules are disrupted and pushed to follow the vibrating
body's motion. The vibrational disturbance propagates away from the source as the
vibrating molecules transfer their motion to neighbouring molecules. When air
vibrations hit the eardrum, the eardrum vibrates, causing nerve impulses to be
produced, which are then interpreted by the brain.
To some extent, all matter transmits sound, but sound propagation requires a material medium between the source and the receiver. The well-known experiment of the bell in the jar demonstrates this. While air is present, the ringing bell is heard; as the air in the jar is pumped out, the sound of the bell fades, and the bell eventually becomes inaudible. Alternate compressions and rarefactions of the
medium, which are initially created by the vibrating sound source, constitute the
spreading disturbance in the sound-conducting medium. These compressions and
rarefactions are merely departures from the average density of the medium. In a
gas, the variations in density are equivalent to pressure changes. Two important
characteristics of sound are intensity, which is determined by the magnitude of
compression and rarefaction in the propagating medium, and frequency, which is
determined by how often the compressions and rarefactions take place. Frequency is measured in cycles per second, in the unit hertz (Hz), named after the scientist Heinrich Hertz; 1 Hz equals one cycle per second.
Object vibrational motion can be quite complex, resulting in a complex sound
pattern. Still, analyzing the qualities of sound in terms of basic sinusoidal
vibrations, such as those produced by a vibrating tuning fork, is useful (see Fig.
2.1).
Figure 2.1. Sinusoidal sound wave produced by a vibrating tuning fork.
A pure tone is the type of basic sound pattern shown in Fig. 2.1. The pressure
differences caused by compressions and rarefactions are sinusoidal when a pure
tone travels through air.
We would witness pressure variations in space that are also sinusoidal if we took a
"snapshot" of the sound at a given point in time. (Special procedures are required
to obtain such images.) The wavelength is the distance between the closest equal
spots on the sound wave.
The speed of a sound wave, v, is determined by the medium through which it travels. The speed of sound in air at 20 °C is around 3.3×10⁴ cm/s (330 m/s), while in water it is approximately 1.4×10⁵ cm/s (1400 m/s). The relationship between frequency, wavelength, and propagation speed is described in general by the following equation:

v = λf   (2.1)
This relationship between frequency, wavelength, and speed is true for all types of wave motion.
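For example, Eq. (2.1) can be used to compare the wavelength of the same tone in air and in water; the short Python sketch below uses the approximate speeds quoted above.

    # Wavelength of a 1000 Hz tone from Eq. (2.1): v = λf, so λ = v/f
    f = 1000.0        # frequency, Hz
    v_air = 330.0     # speed of sound in air at 20 °C, m/s (3.3×10⁴ cm/s)
    v_water = 1400.0  # speed of sound in water, m/s (1.4×10⁵ cm/s)

    print(f"wavelength in air:   {v_air / f:.2f} m")    # about 0.33 m
    print(f"wavelength in water: {v_water / f:.2f} m")  # about 1.40 m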
The propagating sound's pressure changes are superimposed on the ambient
air pressure. As a result, the total pressure in a sinusoidal sound wave's course is of
the value
P = Pₐ + P₀ sin(2π·f·t)   (2.2)

where Pₐ is the ambient air pressure (which is 1.01×10⁵ Pa = 1.01×10⁶ dyn/cm² at sea level at 0 °C), P₀ is the greatest pressure change caused by the sound wave, and f is the sound frequency.
The intensity I of a sinusoidal sound wave is defined as the amount of energy
transmitted per unit time across each unit area perpendicular to the direction of
sound propagation:
I = P₀² / (2ρv)   (2.3)

where ρ is the density of the medium and v is the speed of sound in it. When sound is incident perpendicular to the boundary between two media, the fraction of the sound energy transmitted into the second medium is

T = 4ρ₁v₁ρ₂v₂ / (ρ₁v₁ + ρ₂v₂)²   (2.4)

where the subscripted quantities are the velocity and density in the different media. The solution to Eq. 2.4 shows that when sound traveling in air is incident perpendicular to a water surface, only about 0.1 percent of the sound energy enters the water, while 99.9 percent is reflected. When the angle of incidence is oblique, the fraction of sound energy entering the water is even lower. As a result, a water surface acts as an effective acoustic barrier.
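The 0.1 percent figure can be checked with a short calculation; the sketch below assumes Eq. (2.4) with commonly quoted textbook values for the density and speed of sound in air and water.

    # Fraction of sound energy transmitted from air into water at normal incidence
    rho_air, v_air = 1.2, 330.0          # density (kg/m³) and sound speed (m/s) in air
    rho_water, v_water = 1000.0, 1400.0  # the same for water

    Z1 = rho_air * v_air       # acoustic impedance of air
    Z2 = rho_water * v_water   # acoustic impedance of water

    T = 4 * Z1 * Z2 / (Z1 + Z2) ** 2   # Eq. (2.4)
    print(f"transmitted fraction: {T:.4f}")   # about 0.0011, i.e. roughly 0.1 %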
Interference. When two (or more) waves travel in the same medium at the same
time, the total disturbance is the vectorial sum of the individual disturbances
caused by each wave at each point. Interference is the term for this phenomenon.
For example, if two waves are in phase, they add, increasing the wave disturbance
at each point in space. This is known as constructive interference (Fig. 2.3a).
When two waves are 180 degrees out of phase, the wave disturbance in the
propagating medium is reduced. This is known as destructive interference (Fig.
2.3b). The wave disturbance is completely cancelled if the magnitudes of two out-
of-phase waves are the same (Fig. 2.3c).
Figure 2.3. (a) Interference of waves oscillating in phase; (b, c) interference of waves oscillating in antiphase. R is the resultant of the propagating wave A and the reflected wave B.
Two waves of the same frequency and magnitude traveling in opposite directions
cause a special type of interference. The resulting wave pattern is called a standing
wave because it is stationary in space. Standing sound waves are created in hollow
pipes like the flute. Standing waves in a given structure can be shown to exist only
at specific frequencies known as resonant frequencies.
Diffraction. As waves travel across a medium, they have a tendency to expand
out. As a result, when a wave hits an impediment, it spreads out into the area
behind it. This phenomenon is called diffraction. The degree of diffraction depends on the wavelength: the longer the wavelength, the greater the wave's spreading. Only when the size of the obstruction is smaller than the wavelength
does significant diffraction into the region behind it occur. The performer can be
heard by a person sitting behind a pillar in an auditorium because long wavelength
sound waves spread behind the pillar. However, because the wavelength of light is much shorter, the view of the performance is blocked. It can be demonstrated that the diameter of a focused spot cannot be less than λ/2. These wave characteristics have
significant implications for the hearing process.
Frequency and Pitch. The human ear can detect sound at frequencies ranging
from 20 to 20,000 hertz. However, the ear's response is not uniform within this
frequency range. The ear is most sensitive to frequencies between 200 and 4000
Hz, and its response decreases toward the ends of the audible range. Individuals' frequency
responses differ greatly. Some people cannot hear sounds above 8000 hertz, while
others can hear sounds above 20,000 hertz. Furthermore, most people's hearing
deteriorates with age. The pitch sensation is related to the frequency of the sound.
The pitch rises as the frequency rises. Pitch and frequency, on the other hand, do
not have a simple mathematical relationship.
Intensity and Loudness
The ear can detect a wide range of intensities. The lowest intensity that the human ear can detect at 3000 Hz is approximately 10⁻¹⁶ W/cm². The loudest sound that can be tolerated has an intensity of about 10⁻⁴ W/cm². The threshold of hearing
and the threshold of pain are the two extremes of the intensity range. Sound
intensities above the pain threshold can permanently damage the eardrum and
ossicles.
The ear does not respond linearly to sound intensity; that is, a sound with a million times greater intensity than another does not produce a million times greater sensation of loudness. The ear's response to intensity is logarithmic rather than linear. Because
of the nonlinear response of the ear and the wide range of intensities involved in
the hearing process, sound intensity is best expressed on a logarithmic scale. The
sound intensity is measured relative to a reference level of 10⁻¹⁶ W/cm² on this scale (which is approximately the lowest audible sound intensity). The logarithmic
intensity is expressed in decibel (dB) units and is defined as

Logarithmic intensity = 10·log( (sound intensity in W/cm²) / (10⁻¹⁶ W/cm²) )   (2.5)

Thus, for example, the logarithmic intensity of a sound wave with an intensity of 10⁻¹² W/cm² is 10·log(10⁻¹²/10⁻¹⁶) = 40 dB.
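The same computation can be done programmatically; a minimal Python sketch applying Eq. (2.5):

    import math

    I0 = 1e-16   # reference intensity, W/cm² (approximate threshold of hearing)
    I = 1e-12    # sound intensity of interest, W/cm²

    level = 10 * math.log10(I / I0)   # Eq. (2.5)
    print(f"Logarithmic intensity: {level:.0f} dB")   # prints 40 dB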
Table 2.1
Source of sound          Sound level (dB)   Sound intensity (W/cm²)
Threshold of pain        120                10⁻⁴
Riveter                  90                 10⁻⁷
Busy street traffic      70                 10⁻⁹
Ordinary conversation    60                 10⁻¹⁰
Quiet automobile         50                 10⁻¹¹
Quiet radio at home      40                 10⁻¹²
Average whisper          20                 10⁻¹⁴
Rustle of leaves         10                 10⁻¹⁵
Threshold of hearing     0                  10⁻¹⁶
Figure 2.4. Equal-loudness contours
For a source moving relative to a stationary observer, the observed frequency f′ is

f′ = f · v / (v ∓ vₛ)

where f denotes the frequency in the absence of motion, v denotes the speed of sound, and vₛ denotes the speed of the source. When the source is approaching the observer, use a minus sign in the denominator, and a plus sign when it is receding.
It is possible to measure motions within a body using the Doppler effect.
The ultrasonic flow meter, which generates ultrasonic waves that are scattered by
blood cells flowing through blood vessels, is one tool for obtaining such
measurements. The Doppler effect changes the frequency of the scattered sound.
The blood flow velocity is calculated by comparing the incident frequency to the
frequency of the scattered ultrasound.
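As a rough illustration of the principle (a real flow meter uses the full scattering geometry, which this sketch does not model), the frequency shift for a source approaching a stationary observer can be estimated with the Doppler formula above; the numerical values below are hypothetical or typical textbook figures.

    # Doppler shift for an approaching source: f' = f·v/(v − vs)
    f = 2.0e6    # emitted ultrasound frequency, Hz (hypothetical)
    v = 1540.0   # speed of sound in soft tissue, m/s (typical textbook value)
    vs = 0.5     # speed of the approaching scatterer, e.g. blood, m/s (hypothetical)

    f_observed = f * v / (v - vs)
    print(f"Observed frequency: {f_observed:.1f} Hz")
    print(f"Doppler shift: {f_observed - f:.1f} Hz")   # about 650 Hz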
The mechanical energy in an ultrasonic wave is converted to heat within the tissue. With enough ultrasonic energy, it is possible to heat specific parts of a patient's body more efficiently and evenly than with conventional heat lamps. This type of treatment, called diathermy, is used to relieve pain and promote the healing of injuries. It is possible to destroy tissue with extremely high-intensity ultrasound; ultrasound is now commonly used to break up kidney stones and gallstones (lithotripsy).
Audiometry is the branch of audiology that measures hearing acuity: the thresholds at which tones of different frequencies are heard, sensitivity to variations in sound intensity and pitch, and tonal purity. Audiometric exams use an audiometer to determine a subject's hearing levels, but they can also examine the capacity to discriminate between different sound intensities, recognize pitch, or separate speech from background noise. The results of audiometric testing are used to diagnose hearing loss or ear diseases, and are frequently recorded on an audiogram.
Auscultation is the process of listening to the body's internal sounds with a stethoscope. It is used to examine the circulatory system (heart sounds), the respiratory system (breath sounds), and the alimentary canal.
Percussion is a technique for assessing the structures underlying a body surface and is used in clinical examinations to check the state of the thorax and abdomen. It is performed by using a wrist action to tap the middle finger of one hand on the middle finger of the other. The pleximeter (the non-striking finger) is placed firmly on the body over the tissue. There are two forms of percussion: direct and indirect. Percussion sounds are classified as resonant, hyper-resonant, stony dull, or dull. A dull sound indicates the presence of a solid mass beneath the surface. A more resonant sound indicates hollow, air-containing structures. Besides producing different audible sounds, they also produce different sensations in the pleximeter finger.
2. To evoke sound sensations, the wave must have some minimal intensity, called
A) pain sensation
B) noise
C) hormonal spectrum
D) threshold of noise
E) threshold of hearing
5. The quantity that is directly proportional to the logarithm of the ratio of a sound's intensity to the intensity corresponding to the threshold of hearing is called
A) volume level
B) threshold level
C) pain threshold
D) sound wave intensity
E) noise level
6. The distance between two adjacent nodes and antinodes of the standing wave is
equal to
A) length of traveling waves
B) half of the length of traveling waves
C) doubled wavelength of traveling waves
D) half the distance between nodes
E) doubled distance between nodes
7. Wave that occurs as a result of the interaction (interference) of the incident and
reflected waves
A) standing wave
B) longitudinal wave
C) transverse wave
D) perpendicular wave
E) reflected waves
Work procedure
1. With the permission of the teacher or laboratory assistant, plug the sound generator into the electrical mains.
2. Put the "Network" toggle switch on the generator panel to the "On" position.
3. After 2-3 minutes, turn the knob and set the frequency indicator (Hz) to the value indicated by the teacher.
4. Place the piston against the open end of the pipe and, by turning the knob of the output voltage regulator of the sound generator, set the sound level so that the signal is audible.
5. Slowly and evenly pushing the piston away from the membrane, determine the coordinates of the nodes of the standing wave. The coordinates must be measured for 3-4 nodes.
7. From the obtained values of the length of the standing wave, find the speed of sound propagation by the formula.
8. Measure the wavelength and calculate the phase velocities of sound propagation by setting another frequency specified by the teacher on the sound generator, and then repeat the operations specified in paragraphs 4-7.
9. Using the formulas below, calculate the relative and absolute errors of the sound phase velocity measurement.
№ | ν (Hz) | ℓ (cm) | λ = 2ℓ (m) | vᵢ = λν (m/s) | v̄ (m/s) | Δvᵢ | Δvᵢ² | S | δ | ε
1 | 1000 |  |  |  |  |  |  |  |  |
2 | 1500 |  |  |  |  |  |  |  |  |
3 | 2000 |  |  |  |  |  |  |  |  |
S = √( Σ Δvᵢ² / (n(n − 1)) ), where the sum runs over i = 1 … n.
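A sketch of the processing in steps 5-9 in Python, with hypothetical node coordinates; it relies on the fact that the distance between adjacent nodes is half a wavelength, so λ = 2ℓ and v = λν.

    import math

    nu = 1000.0                          # generator frequency, Hz
    nodes = [0.168, 0.339, 0.508, 0.676] # node coordinates, m (hypothetical readings)

    # Distance between adjacent nodes is half a wavelength, so v = λν = 2ℓν
    lengths = [b - a for a, b in zip(nodes, nodes[1:])]
    v_values = [2 * l * nu for l in lengths]
    n = len(v_values)

    v_mean = sum(v_values) / n
    S = math.sqrt(sum((v - v_mean) ** 2 for v in v_values) / (n * (n - 1)))

    print(f"mean v = {v_mean:.1f} m/s, S = {S:.2f} m/s")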
III
STRUCTURE AND FUNCTIONS OF BIOLOGICAL MEMBRANES AND
METHODS FOR STUDYING MEMBRANE STRUCTURE
The cell membrane, also known as the plasma membrane, is a biological
membrane that separates the interior of all cells from the outside environment. The cell membrane controls the passage of substances into and out of cells by being
selectively permeable to ions and organic molecules. It essentially shields the cell
from outside influences. It is made up of a lipid bilayer with proteins inserted in it.
Cell membranes play a role in a variety of cellular activities such as cell adhesion,
ion conductivity, and cell signaling, as well as serving as the attachment surface for
a number of extracellular structures such as the cell wall, glycocalyx, and
intracellular cytoskeleton. Cell membranes can be reconstituted artificially.
The plasma membrane, like all other cellular membranes, is made up of both
lipids and proteins. The phospholipid bilayer, which forms a stable barrier between
two aqueous compartments, is the membrane's fundamental structure. These
compartments are the inside and outside of the cell in the case of the plasma
membrane. Proteins embedded within the phospholipid bilayer perform plasma
membrane functions such as selective molecule transport and cell-cell recognition.
The plasma membranes of mammalian red blood cells (erythrocytes) have proven to be a particularly helpful model for studying membrane structure. Because mammalian red blood cells lack nuclei and internal membranes, their plasma membranes can easily be isolated for biochemical examination. The initial evidence that biological
membranes are made up of lipid bilayers came from studies of the red blood cell
plasma membrane. In 1925, two Dutch scientists (E. Gorter and R. Grendel)
isolated membrane lipids from a known number of red blood cells with a known
plasma membrane surface area. The surface area occupied by a monolayer of the
extracted lipid spread out at an air-water interface was then calculated. The lipid
monolayer's surface area was found to be double that of the erythrocyte plasma
membranes, indicating that the membranes were made up of lipid bilayers rather
than monolayers.
High-magnification electron micrographs clearly show the bilayer structure
of the erythrocyte plasma membrane. The plasma membrane appears as two dense
lines separated by an intervening space, a morphology known as a "railroad track"
appearance.
The binding of electron-dense heavy
metals used as stains in transmission
electron microscopy to the polar head
groups of the phospholipids results in
this image, which appears as dark lines.
The lightly stained interior portion of
the membrane, which contains the
hydrophobic fatty acid chains, separates
these dense lines.
Animal cell plasma membranes also include glycolipids and cholesterol in addition
to phospholipids. The glycolipids are only found in the plasma membrane's outer
leaflet, with their carbohydrate parts exposed on the cell surface. They are modest
membrane constituents, accounting for just around 2% of the lipids in most plasma
membranes. Cholesterol, on the other hand, is a key membrane constituent of
mammalian cells, with molar levels about equal to phospholipids.
Membrane function is dependent on two general characteristics of phospholipid
bilayers. The basic function of membranes as barriers between two aqueous
compartments is first and foremost due to the structure of phospholipids. The
membrane is impermeable to water-soluble molecules, such as ions and most
biological molecules, since the inside of the phospholipid bilayer is occupied by
hydrophobic fatty acid chains. Second, naturally occurring phospholipid bilayers
are viscous fluids rather than solids. Most natural phospholipids include one or
more double bonds in their fatty acids, which cause bends in the hydrocarbon
chains and make packing them together problematic. As a result of the lengthy
hydrocarbon chains of the fatty acids moving freely within the membrane, the
membrane is soft and flexible. Furthermore, both phospholipids and proteins are
free to migrate laterally within the membrane, which is an important characteristic
for many membrane functions.
Cholesterol plays a unique role in membrane structure due to its rigid ring system. Cholesterol does not form a membrane on its own; instead, it inserts into a
bilayer of phospholipids with its polar hydroxyl group adjacent to the head groups
of the phospholipids. Cholesterol has different impacts on membrane fluidity
depending on the temperature. Cholesterol obstructs the mobility of the
phospholipid fatty acid chains at high temperatures, making the outer region of the
membrane less fluid and lowering its permeability to tiny molecules. Cholesterol,
on the other hand, has the opposite impact at low temperatures: Cholesterol
prevents membranes from freezing and preserves membrane fluidity by interfering
with fatty acid chain interactions. Although cholesterol is not found in bacteria, it
is a necessary component of the plasma membranes of mammalian cells. Plant
cells lack cholesterol as well, but they do have comparable chemicals (sterols) that
serve the same purpose.
According to recent research, not all lipids in the plasma membrane diffuse easily.
Instead, cholesterol and sphingolipids appear to be concentrated in specific
membrane domains (sphingomyelin and glycolipids). Sphingolipid and cholesterol
clusters are considered to create "rafts" that move laterally through the plasma
membrane and may bind to certain membrane proteins. Although the activities of
lipid rafts are unknown, they may play a role in processes such as cell signaling
and endocytosis, which involves the uptake of extracellular substances.
Proteins are responsible for carrying out specialized membrane tasks, while lipids
constitute the fundamental structural constituents of membranes. Most plasma
membranes are roughly 50 percent lipid and 50 percent protein by weight, with
glycolipid and glycoprotein carbohydrate components accounting for 5 to 10% of
the membrane composition. This ratio equates to around one protein molecule for
every 50 to 100 lipid molecules, due to the fact that proteins are significantly larger
than lipids. The fluid mosaic model of membrane construction, introduced by
Jonathan Singer and Garth Nicolson in 1972, is now widely acknowledged as the
primary paradigm for the architecture of all biological membranes. In this model,
membranes are viewed as two-dimensional fluids in which proteins are inserted
into lipid bilayers.
The phospholipids have an amphiphilic nature. The hydrophilic end usually has a
negatively charged phosphate group, while the hydrophobic end has two "tails"
that are long fatty acid residues.
Phospholipids in aqueous solutions are driven by hydrophobic interactions, which
cause the fatty acid tails to aggregate in order to minimize interactions with water
molecules. The end result is frequently a phospholipid bilayer: a membrane made
up of two layers of oppositely oriented phospholipid molecules, with their heads
exposed to the liquid on both sides and their tails directed into the membrane.
Figure 3.3. Scheme of mixing proteins of human and mouse cells after 40 minutes of incubation.
Membrane lipids and proteins have high mobility, which allows them to diffuse
due to thermal motion. If their molecules move within the same membrane layer,
the process is known as lateral diffusion; if their molecules move from one layer to
another, the process is known as a “flip-flop” transition.
The frequency of molecular jumps due to lateral diffusion is

ν = 2√3 · D / A   (3.1)

where D is the coefficient of lateral diffusion and A is the area occupied by one molecule on the surface of the membrane.
The residence time of a molecule in one position is the reciprocal of the jump frequency:

τ = 1/ν = A / (2√3 · D)   (3.2)

In this case, the root-mean-square displacement of a molecule during time t is

S = √(2Dt)   (3.3)
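To get a feel for the magnitudes in formulas (3.1)-(3.3), here is a small Python estimate; the values of D and A below are assumed literature-typical figures for a membrane lipid, not values given in this handbook.

    import math

    D = 6e-12    # lateral diffusion coefficient of a lipid, m²/s (assumed typical value)
    A = 0.7e-18  # area occupied by one lipid molecule, m² (assumed)
    t = 1.0      # observation time, s

    nu = 2 * math.sqrt(3) * D / A   # jump frequency, formula (3.1)
    tau = 1 / nu                    # residence time in one position, formula (3.2)
    S = math.sqrt(2 * D * t)        # rms displacement during time t, formula (3.3)

    print(f"jumps per second: {nu:.2e}")             # about 3·10⁷ 1/s
    print(f"residence time:   {tau:.2e} s")
    print(f"displacement in 1 s: {S * 1e6:.2f} µm")  # a few micrometres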
Lipid vesicles, also known as liposomes, are roughly spherical pockets enclosed by a lipid
bilayer. These structures are used in laboratories to study the effects of chemicals
on cells by delivering these chemicals directly to the cell and gaining a better
understanding of cell membrane permeability. Lipid vesicles and liposomes are
created by suspending a lipid in an aqueous solution and then sonicating the
mixture to create a vesicle. Researchers can better understand membrane
permeability by monitoring the rate of efflux from the inside of the vesicle to the
ambient solution. Vesicles can contain molecules and ions by creating the vesicle
with the desired molecule or ion present in the solution. Proteins can also be
embedded in the membrane by solubilizing the appropriate proteins in detergents
and then connecting them to the phospholipids that constitute the liposome. These
provide researchers with a tool to examine various membrane protein functions.
One property of a lipid bilayer is the relative mobility (fluidity) of the individual
lipid molecules and how this mobility changes with temperature (See fig.3.4). The
phase behavior of the bilayer is the name given to this response. A lipid bilayer can
exist in either a liquid or a solid phase at a given temperature. The solid phase is
sometimes known as the "gel" phase. Every lipid has a certain temperature at
which it transitions from the gel to the liquid phase (melt). Lipid molecules are
restricted to the two-dimensional plane of the membrane in both phases, although
in liquid phase bilayers, molecules can freely diffuse within this plane. As a result,
in a liquid bilayer, a particular lipid will rapidly switch places with its neighbor
millions of times per second and will migrate over great distances via a random
walk mechanism.
When the lipid membrane reaches the transition temperature of 35 degrees Celsius,
bilayer permeability increases, whereas below that temperature, lipid membranes
exist only in solid phase, with no drug release expected.
31
Figure 3.4. Lipid layer phase behavior.
C = ε ε₀ S / d   (3.4)

where ε is the dielectric constant of the membrane, ε₀ is the electric constant, S is the area of the plates, and d is the distance between them.
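Using formula (3.4) per unit area (C/S = εε₀/d), the specific capacitance of a membrane can be estimated; in the sketch below, ε = 2 is the value quoted for membrane lipids, while the thickness is an assumed value within the 4-13 nm range given below.

    EPS0 = 8.85e-12   # electric constant, F/m
    eps = 2.0         # dielectric constant of membrane lipids
    d = 4e-9          # membrane thickness, m (assumed, within the 4-13 nm range)

    c_specific = eps * EPS0 / d   # C/S from formula (3.4)
    print(f"C/S ≈ {c_specific * 1e3:.1f} mF/m²")   # about 4.4 mF/m²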
The physical properties of the membranes
• Density. The density of the lipid bilayer is 800 kg/m³, which is lower than that of water.
• Dimensions. According to electron microscopy data, membrane thickness (L) ranges from 4 to 13 nm, and various cell membranes are characterized by different thicknesses.
• Viscosity. The lipid layer of the membrane has a viscosity η = 30-100 mPa·s (corresponding to the viscosity of vegetable oil).
• Membranes have a high electrical resistivity (about 10⁷ Ohm·m) and a high specific capacitance (approximately 0.5·10⁻² F/m²). The dielectric constant of membrane lipids is 2.
Membranes contain a large number of different proteins. Their number is so great that the surface tension of the membrane is closer to the surface tension at the protein-water interface (≈10⁻⁴ N/m) than at the lipid-water interface (≈10⁻² N/m). The concentration of membrane proteins depends on the type of cell.
Test tasks to consolidate the studied material on the topic:
1. Lateral diffusion is a transition of
A) ions through bilayer membrane
B) molecules from one lipid layer to another
C) molecules across biological membrane
D) molecules in the membrane within one layer
E) protein molecules from one lipid layer to another
2. The viscosity of the lipid layer of the membrane corresponds to the viscosity of
A) water
B) vegetable oil
C) human blood
D) plasma
E) air
4. Polar lipid heads
A) have a charge, are hydrophilic, and are directed to the outside
B) directed to the inner side of the membrane, have no charge
C) tend not to contact with water molecules
D) hydrophobic, directed to the inner side of the membrane
E) hydrophilic, tend not to contact with water molecules
9. The function of the biological membrane, which ensures the delivery of nutrients,
the removal of the final products of metabolism, the creation of ionic gradients
A) transport
B) barrier
C) matrix
D) mechanical
E) energy
10. The physical quantity whose change leads to the transition of the membrane from the liquid-crystalline state to the gel state and back
A) temperature
B) pressure
C) weight
D) volume
E) square
3. How will the electrical (specific) capacitance of the membrane change at its transition from the liquid-crystalline state to the gel state, if it is known that in the liquid-crystalline state the thickness of the hydrophobic layer is 3.9 nm and in the gel state it is 4.7 nm? The permittivity of the lipids is 2.
4. Calculate the permittivity of membrane lipids if the thickness of the membrane is d = 10 nm and the specific capacitance is C = 1.7 mF/m².
IV
TRANSPORT OF SUBSTANCES ACROSS MEMBRANE.
PASSIVE AND ACTIVE TRANSPORT.
When a drop of colored solution is dropped into a still liquid, the color
spreads out gradually over the liquid's volume. Color molecules spread from the
high-concentration region (of the initially injected drop) to lower-concentration
areas. Diffusion is the term for this process.
Diffusion is the primary means of delivering oxygen and nutrients to cells,
as well as removing waste products from them. Diffusive motion is relatively slow
on a large scale (it may take hours for the colored solution in our example to spread
over a few centimeters), but on the small size of tissue cells, diffusive motion is
rapid enough to support cell life. Diffusion is the direct result of molecules' random
thermal mobility. Although a thorough explanation of diffusion is beyond the scope of this handbook, several aspects of diffusive motion can be derived from basic kinetic theory.
The net flux from one region to another is determined by the difference in the concentration of diffusing particles between the two regions. The flux increases as the thermal velocity v increases, and it increases as the distance Δx between the two regions decreases:

J = (D / Δx)(C₁ − C₂)   (4.1)

where D is called the diffusion coefficient. In our simple model, the diffusion coefficient is

D = Lv / 2   (4.2)
We've just talked about free diffusion through a fluid so far, but the cells that make
up biological systems are enclosed by membranes that prevent free diffusion. To
keep life functions running, oxygen, nutrients, and waste materials must travel
36
through these membranes. The biological membrane can be thought of as porous in
the simplest model, with the size and density of the pores determining diffusion
through the membrane. If the diffusing molecule is smaller than the pores, the only effect of the membrane is to diminish the effective diffusion area and hence to reduce the diffusion rate. If the diffusing molecule is larger than the pores, the flow of molecules through the membrane may be blocked entirely.
The net flux J of molecules flowing through a membrane is expressed in terms of the permeability P of the membrane: J = P(C₁ − C₂). We introduce the permeability coefficient as P = DK/l, where l is the membrane thickness and K is the partition coefficient of the diffusing molecule between the membrane and the surrounding solution.
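A minimal sketch of membrane diffusion under these definitions, computing J = P(C₁ − C₂) with P = DK/l; every numerical value here is hypothetical.

    D = 1e-9    # diffusion coefficient of the solute in the membrane, m²/s (hypothetical)
    K = 0.1     # partition coefficient membrane/solution (hypothetical)
    l = 8e-9    # membrane thickness, m (hypothetical)

    C1, C2 = 10.0, 2.0   # concentrations on the two sides, mol/m³ (hypothetical)

    P = D * K / l        # permeability coefficient
    J = P * (C1 - C2)    # net flux across the membrane
    print(f"P = {P:.3e} m/s, J = {J:.3e} mol/(m²·s)")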
Diffusion (simple)
Facilitated diffusion
The movement of molecules across the cell membrane via special transport
proteins embedded within the cellular membrane is known as facilitated diffusion,
also known as carrier-mediated diffusion. Many large molecules, such as glucose,
are insoluble in lipids and are too large to pass through membrane pores.
As a result, it will bind to its specific carrier proteins, and the resulting complex
will be bonded to a receptor site and moved through the cellular membrane.
However, keep in mind that facilitated diffusion is a passive process, and the
solutes continue to move down the concentration gradient.
Filtration
Filtration is the movement of water and solute molecules across the cell membrane
caused by the cardiovascular system's hydrostatic pressure. Only certain solutes
can pass through the membrane pores, depending on their size.
Figure 4.4. Filtration.
The membrane pores of the Bowman's capsule in the kidneys, for example, are
very small, and only albumins, the smallest of the proteins, have any chance of
passing through. The membrane pores of liver cells, on the other hand, are
extremely large, allowing a wide range of solutes to pass through and be
metabolized.
Figure 4.5. Hypertonic, hypotonic and isotonic solutions.
When a cell is immersed in a hypertonic solution, water escapes and the cell
shrinks. There is no net water movement in an isotonic environment, so the cell
size does not change. When a cell is placed in a hypotonic environment, water
enters the cell, causing it to swell.
There are two forms of active transport: primary active transport and secondary active transport.
The proteins involved in primary active transport are pumps that generally use chemical energy in the form of ATP. Primary active transport, also known as
direct active transport, transports molecules across a membrane directly using
metabolic energy.
Transmembrane ATPases are the most common enzymes that undertake this type
of transport. The sodium-potassium pump is a major ATPase found in all animals
and is responsible for maintaining the cell potential. Redox energy and photon (light) energy are two further sources of energy for primary active transport.
There are several active ion-transport systems (ion pumps) in the plasma membrane:
1) the sodium-potassium pump;
2) the calcium pump;
3) the hydrogen pump.
Active transport requires metabolic energy, whereas passive transport does not and moves substances in the same direction as their concentration gradient.
In an antiporter, one substrate is transported across the membrane in one direction while another is cotransported in the opposite direction.
Two substrates are transported across the membrane in the same direction by a symporter. Secondary active transport is associated with antiport and symport processes: one of the two substances is transported against its concentration gradient, utilizing the energy derived from the transport of the other substance (mostly sodium, calcium or hydrogen ions) down its concentration gradient.
Specific transmembrane carrier proteins are necessary if substrate molecules are
migrating from areas of lower concentration to areas of higher concentration (in
the opposite direction of, or against, the concentration gradient). The receptors on
these proteins bind to certain chemicals (such as glucose) and transport them
across the cell membrane. This process is termed 'active' transport since it requires the expenditure of energy. The sodium-potassium pump, which transports sodium out of the cell and potassium into the cell, is an example of active transport. Active transport also occurs extensively in the interior lining of the small intestine.
Mineral salts must be absorbed by plants from the soil or other sources, but these
salts exist in very dilute solution. These cells can take up salts from this dilute
solution by actively transporting them in the opposite direction of the concentration
gradient.
Energy is used to transport molecules across a membrane in secondary active
transport, also known as coupled transport or co-transport; however, unlike
primary active transport, there is no direct coupling of ATP; instead, it relies on the
electrochemical potential difference created by pumping ions in and out of the cell. Allowing one ion or molecule to flow down its electrochemical gradient, from a region where it is more concentrated to one where it is less concentrated, increases entropy and can be used as a source of energy for metabolism (e.g., in ATP synthase).
Examples: metal ions, such as sodium, potassium, calcium and magnesium require
ion pumps or ion channels to cross membranes and distribute through the body.
Endocytosis is a type of active transport in which a cell engulfs molecules (such as
proteins) in an energy-consuming process. Because most chemical substances
important to cells are large polar molecules that cannot pass through the
hydrophobic plasma or cell membrane passively, endocytosis and its counterpart,
exocytosis, are used by all cells.
Pinocytosis (cell drinking) and phagocytosis (cell eating) are examples of
endocytosis.
Because most chemical substances important to cells are large polar molecules that cannot pass through the hydrophobic portion of the cell membrane passively, exocytosis and its counterpart, endocytosis, are used by all cells.
Membrane-bound secretory vesicles are transported to the cell membrane and their
contents (water-soluble substances like proteins) are secreted into the extracellular
environment during exocytosis. Because the vesicle transiently merges with the
outer cell membrane, this secretion is feasible. Exocytosis is also a method by
which cells can introduce membrane proteins, lipids, and other components into
the cell membrane (such as ion channels and cell surface receptors). Vesicles
containing these membrane components fully merge with the outer cell membrane
and become a part of it.
K₁ and K₂ are the dissociation constants of the ion's complex with the carrier protein on either side of the membrane. This equation allows us to calculate the maximum concentration difference that the ion pump can create.
2. By what type of transport does the substance labeled (b) pass through the membrane?
3. By what kind of transport does oxygen penetrate into the cell through the membrane?
A) through the pore in the lipid bilayer
B) through lipid bilayer
C) by exocytosis
D) using primary - active transport
E) through the protein pore
4. How does the permeability coefficient change with increasing diffusion
coefficient?
A) increases inversely
B) increases exponentially
C) decreases exponentially
D) increases in direct proportion
E) halved
V
ELECTRICAL EXCITABILITY OF TISSUES. BIOPOTENTIALS. RESTING AND ACTION POTENTIALS OF CELLS
An action potential (AP) occurs when the membrane potential of a specific cell
location rapidly rises and falls, causing adjacent locations to depolarize as well.
Action potentials are generated in a variety of animal cells known as excitable
cells, which include neurons, muscle cells, endocrine cells, glomus cells, and some
plant cells.
The presence of voltage-gated ion channels in a cell's membrane causes action
potentials. A voltage-gated ion channel is a protein cluster embedded in the
membrane that has three distinct properties:
1. It is capable of assuming more than one conformation.
2. At least one of the conformations creates a channel through the membrane that is
permeable to specific types of ions.
3. The transition between conformations is influenced by the membrane potential.
As a result, a voltage-gated ion channel is open for some membrane potential values and closed for others. In most situations, however, the relationship between membrane potential and channel state is probabilistic and involves a time delay. Ion channels switch between conformations at unpredictable times, with the rate of transitions and the probability of each type of transition per unit time determined by the membrane potential.
Figure 5.2. Phases of the action potential of a neuron. The resting membrane potential is −70 mV. Within about 1 ms, the membrane potential rises to −55 mV (the threshold potential); after the threshold is passed, an action potential peaking at about +40 mV occurs within about 2 ms. The dip to −90 mV corresponds to hyperpolarization, after which the cell returns to rest, with the resting potential of −70 mV restored by about t = 5 ms.
Sodium ion channels open as the membrane potential rises, allowing sodium
ions to enter the cell. The opening of potassium ion channels, which allow
potassium ions to exit the cell, occurs next. The inflow of sodium ions raises the
concentration of positively charged cations in the cell, resulting in depolarization,
in which the cell's potential is higher than its resting potential. At the peak of the
action potential, sodium channels close, while potassium continues to leave the
cell. Potassium ion outflow lowers the membrane potential and hyperpolarizes the
cell. For small increases in voltage, the potassium current exceeds the sodium current, and the voltage returns to its normal resting value of approximately −70 mV. When the voltage rises above a critical threshold, typically about 15 mV higher than the resting value, the sodium current takes over. This causes a runaway situation in
which the sodium current's positive feedback activates even more sodium
channels. As a result of this, the cell fires, resulting in an action potential. A firing
rate, also known as a neural firing rate, is the frequency with which a neuron
evokes action potentials.
Nernst's equation governs the potential that is created. Nernst's equation
expresses the relationship between an ion's concentration outside and inside a cell
and its cellular potential (V). Nernst's equation is as follows:
V = (RT / zF) · ln( [X]ECF / [X]ICF )   (5.1)

where R is the gas constant, T is the absolute temperature, z is the valence of the ion, F is the Faraday constant, and [X]ECF and [X]ICF are the extracellular and intracellular concentrations of the ion X.
Table 1. Example concentrations of the major ion species in frog skeletal muscle

Species   Intracellular (mmol/L)   Extracellular (mmol/L)
Na+       12                       145
K+        155                      4
Cl−       4                        120
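Applying Eq. (5.1) to the concentrations in Table 1 gives the equilibrium (Nernst) potential for each ion; the Python sketch below assumes a temperature of 20 °C, which is reasonable for frog muscle.

    import math

    R = 8.314     # gas constant, J/(mol·K)
    F = 96485.0   # Faraday constant, C/mol
    T = 293.0     # temperature, K (20 °C, assumed)

    # (valence z, extracellular, intracellular) concentrations in mmol/L, from Table 1
    ions = {"Na+": (1, 145, 12), "K+": (1, 4, 155), "Cl-": (-1, 120, 4)}

    for name, (z, ecf, icf) in ions.items():
        V = R * T / (z * F) * math.log(ecf / icf)   # Eq. (5.1)
        print(f"{name}: {V * 1000:+.0f} mV")   # Na+ ≈ +63, K+ ≈ −92, Cl− ≈ −86 mV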
At rest, the membrane permeability for K⁺ ions is considerably greater than for Na⁺, and greater than for Cl⁻: P(K) >> P(Na), P(K) > P(Cl). For example, for the squid axon the ratio of the permeability coefficients is
P(K) : P(Na) : P(Cl) = 1 : 0.04 : 0.45.
In the excited state:
P(K) : P(Na) : P(Cl) = 1 : 20 : 0.45,
i.e., compared with the non-excited state, upon excitation the permeability for sodium increases 500-fold.
Figure 5.3. Examples of various action potentials.
A) in nerve cells
B) in cardiac muscle fibers
C) in skeletal muscle fibers
D) in axons
E) in myelinated fibers
2. A certain monovalent ion has a Nernst potential of −60 mV. Assuming that this measurement is taken within the body (i.e., at normal body temperature), calculate the ratio of this ion's concentrations on the two sides of the membrane.
3. Which ion is most responsible for the depolarization of the nerve cell? And for
repolarization?
4. Sodium ion channels open faster than potassium ion channels. What is the significance of this? Can you imagine how the action potential would be different if potassium channels opened significantly faster than sodium channels?
VI
PHYSICAL FUNDAMENTALS OF ELECTROCARDIOGRAPHY.
The electric fields linked with cell activities reach all the way to the animal's surface. As a result, we can detect electric potentials over the skin's surface that reflect the collective cell activity associated with various physiological functions. Clinical techniques based on this effect have been created to obtain information about the activity of the heart (electrocardiography) and the brain (electroencephalography) from the skin surface. It is feasible to gather information on the functioning of specific organs by measuring potential changes between appropriate places on the body's surface. The surface potentials are usually very small and, therefore, must be amplified before they can be displayed for examination.
Because the electrical impulses generated in this manner are frequently too
weak to operate the final equipment that displays the signals for our observation,
an amplifier is used to boost the signal's power and amplitude. The display device
is subsequently driven by the amplified signal. In some form or another, electrical
technology is used in most diagnostic equipment in medicine.
The Electrocardiograph.
A device that records the surface potentials associated with the electrical activity of the heart is called an electrocardiograph (ECG). The electrical activity of the heart muscle is recorded using electrodes placed on the surface of the patient's skin. The electrodes are usually attached to special places on the arms and legs and to the chest region above the heart. The potential difference between two electrodes is measured simultaneously (Fig. 6.1).
Each electrical stimulus takes the form of a wave, and so patterns emerge made up of a number of connected waves. A standard ECG is printed at 50 or 25 mm per second; in this way it is possible to calculate the duration of individual waves. Figure 6.2 shows the potentials of the ECG intervals under normal conditions. The waveforms are identified by the letters P, Q, R, S, and T. The shape of the ECG waves depends on the location of the electrodes, and a competent person can assess the work of the heart from the shapes and deviations of these waves. The amplitude of the waves is measured in millivolts (mV) and can be between 0.1 mV and 5 mV; the R wave is the largest (up to 5 mV). In diagnostics it is important to know the amplitudes of the waves and the durations of the intervals.
The pacemaker, which is a specialized group of muscle cells towards the top of the
right atrium, starts the heart's rhythmic contractions. The action potential
propagates between the two atria immediately after the pacemaker fires. The
electrical activity that causes the atria to contract is connected with the P wave.
The action potential linked with the contraction of the ventricles causes the QRS
wave. Currents that bring about the recovery of the ventricle for the following
cycle cause the T wave.
When a cardiac myocyte depolarizes, the inside of one portion of the cell may be
electrically neutral, while the other section (which has not yet depolarized) has a
negative net electric charge. This denotes the presence of a net dipole moment
vector in the cell.
A system consisting of two particles with charges equal in value but opposite in sign, separated by a distance l, is called an electric dipole. The main characteristic of a dipole is the dipole moment, denoted by the letter P; this vector points from the negative to the positive pole. The electric dipole, as shown, consists of two equal and opposite charges, +q and −q, separated by a distance l; the dipole moment is defined as P = q·l.
We define the vector dipole moment P as a vector whose magnitude is equal to the dipole moment and that points from the negative charge to the positive one.
• The heart acts as a large electric dipole. The dipole's orientation and strength change with each heartbeat.
• A boundary separates the negative charges of depolarized cells from the positive
charges of cells that have not yet depolarized in the heart at any given time.
• The heart's dipole electric field and potential extend throughout the body.
An electric dipole generates an electric field around itself, and the potential of a
point in that field can be calculated using the formula:
\varphi = \frac{1}{4\pi\varepsilon\varepsilon_0} \cdot \frac{P\cos\theta}{r^2}    (6.1)
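As a minimal numeric sketch of formula (6.1), the fragment below evaluates the dipole-field potential at a point. All input values (dipole moment, distance, angle, relative permittivity) are invented for illustration; they are not physiological data.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity (electric constant), F/m

def dipole_potential(p, r, theta, eps_r=1.0):
    """Formula (6.1): phi = (1 / (4*pi*eps_r*eps0)) * P*cos(theta) / r**2."""
    return p * math.cos(theta) / (4 * math.pi * eps_r * EPS0 * r ** 2)

# hypothetical values: P = 1e-12 C*m, r = 0.3 m, theta = 60 degrees
phi = dipole_potential(p=1e-12, r=0.3, theta=math.radians(60))
print(f"phi = {phi:.3f} V")  # about 0.05 V for these numbers
```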
Cardiomyocytes, or cardiac muscle cells, are the muscle cells that make up the heart muscle. Within the heart there are two types of cells: cardiomyocytes and cardiac pacemaker cells. Cardiomyocytes make up the atria (the chambers where blood enters the heart) and the ventricles (the chambers where blood is collected and pumped out of the heart). These cells must be able to shorten and lengthen their fibers, and the fibers must be stretchable. These functions are crucial to maintaining the heart's proper shape as it beats.
The impulses that cause the heart to beat are carried by cardiac pacemaker cells.
They are found all across the heart and are responsible for a variety of functions.
For starters, they're in charge of being able to create and send electrical impulses
on their own. They must also be able to receive and respond to brain electrical
impulses. Finally, they must be capable of transmitting electrical impulses from
one cell to another.
Cellular bridges link all of these cells together. Intercalated discs are porous
junctions that generate connections between cells. They allow sodium, potassium,
and calcium to move freely across cells. This makes depolarization and
repolarization in the myocardium much easier. The cardiac muscle can work as a
single coordinated unit thanks to these connections and bridges.
Cardiomyocytes are about 100 μm long and 10-25 μm in diameter.
Phase 0: Depolarization
Phase 2: Plateau
Phase 3: Repolarization
Phase 4: The resting potential is -90 mV. Gap junctions allow Na+ and Ca2+ ions
from neighboring cells to leak into the cell, raising the membrane potential to -70
mV.
Because the wave of depolarization does not spread evenly and instantly, and
because the mass of the heart's walls is not equal, the least and most depolarized
mass of myocardium changes over time. As the wave of depolarisation travels
across the chambers of the heart, it resembles a rotating battery with a positive and
negative terminal spinning in three dimensions.
Figure 6.5. Action potential of cardiomyocytes
The cardiac dipole's movements generate a current that flows towards or away
from electrodes attached to the skin as it rotates around the heart. The direction in
which the dipole is pointed with respect to a pair of electrodes determines whether
the dipole creates an upward or downward deflection (or none at all). ECG
examines 12 leads:
1) Bipolar: 3 standard limb leads – I, II, III (the potential difference between two limbs is measured);
2) Unipolar: 6 chest leads – V1, V2, V3, V4, V5, V6 (the potential at the place where the electrode is attached to the chest is measured);
3) Unipolar: 3 augmented limb leads – aVR, aVL, aVF (the potential of one limb electrode is measured relative to the other two).
Figure 6.7. Chest leads
A full 12-lead ECG depicts the movement of the cardiac dipole from a variety of
cleverly arranged points of view.
• The dipole exists because there is a charge difference between different areas of
the myocardium.
• An ECG reports the orientation and magnitude of the cardiac dipole from
multiple perspectives.
• There is no dipole and the ECG is flat when the heart is completely depolarised
or repolarised (isoelectric).
Einthoven’s triangle.
Einthoven's triangle, formed by the two shoulders and the pubis, is an imaginary configuration of three limb leads in a triangle used in electrocardiography. With the heart at the center, the shape forms an inverted equilateral triangle. It is named after Willem Einthoven, the theorist who proposed its existence.
These measuring sites were employed as contacts for Einthoven's string
galvanometer, the first practical ECG machine, by submerging the hands and feet
in pails of salt water.
Standard limb lead configurations are used in bipolar recordings, as shown in the
diagram. Lead I, which has the positive electrode on the left arm and the negative
electrode on the right, monitors the potential difference between the two arms by
convention. An electrode on the right leg serves as a reference electrode for
recording purposes in this and the other two limb leads. The positive electrode is
on the left leg and the negative electrode is on the right arm in the lead II
configuration. The positive electrode on Lead III is on the left leg, whereas the
negative electrode is on the left arm. The heart is in the center of the equilateral
triangle formed by these three bipolar limb leads, which is known as Einthoven's
triangle after Willem Einthoven, who invented the ECG in the early 1900s. It
makes no difference in the recording whether the limb leads are attached to the end
of the limb (wrists and ankles) or the origin of the limb (shoulder or upper thigh),
because the limb can simply be considered as a long wire conductor originating
from a location on the trunk of the body.
Figure 6.10. Position of the cardiac axis
ECGs are normally printed on a grid. The horizontal axis represents time and
the vertical axis represents voltage. The standard values on this grid are shown in
the adjacent image:
A small box is 1 mm×1 mm and represents 0.1 mV×0.04 seconds.
A large box is 5 mm×5 mm and represents 0.5 mV×0.20 seconds.
The "large" box is represented by a heavier line weight than the small boxes.
Using a real electrocardiogram that will be given by the teacher, do the following work:
1. Find the standard leads I, II, III.
2. Select one heart beat.
3. Measure the duration of the intervals indicated in the table.
4. Using the formula t = l/υ, where l is the distance between the corresponding waves and υ is the tape speed, calculate the duration of the intervals in the three leads.
5. Compare the received data with the norm.
𝞄=25 mm/sec
Table 1.

Symbol of interval | lI (mm) | lII (mm) | lIII (mm) | tI (s) | tII (s) | tIII (s) | Standard duration (s)
PQ  |  |  |  |  |  |  | 0.12-0.20
ST  |  |  |  |  |  |  | 0.10-0.18
QRS |  |  |  |  |  |  | 0.06-0.10
RR  |  |  |  |  |  |  | 0.8-0.9
QT  |  |  |  |  |  |  | 0.36-0.4
P   |  |  |  |  |  |  | 0.2-0.3
Q   |  |  |  |  |  |  | 0.05-1
R   |  |  |  |  |  |  | 0.8-1.5
S   |  |  |  |  |  |  | 0.05-1
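A small sketch of step 4: at a tape speed of υ = 25 mm/s, a measured distance l in millimeters converts to a duration t = l/υ in seconds. The distances below are made-up example measurements, not data from a real electrocardiogram.

```python
TAPE_SPEED = 25.0  # tape speed, mm/s

# hypothetical distances between the corresponding waves, in mm (one lead)
measured_mm = {"PQ": 4.0, "QRS": 2.0, "QT": 9.5, "RR": 21.0}

for interval, l_mm in measured_mm.items():
    t = l_mm / TAPE_SPEED  # duration t = l / v, in seconds
    print(f"{interval}: {l_mm} mm -> {t:.2f} s")
# e.g. PQ: 4.0 mm -> 0.16 s, which falls within the 0.12-0.20 s norm
```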
VII
PHYSICAL FUNDAMENTALS OF ELECTROENCEPHALOGRAPHY.
Synapses connect neurons in neural networks so that they can communicate with
one another.
The amount and location of the synaptic inputs that pyramidal neurons receive determine their ability to assimilate information. A single pyramidal cell receives approximately 30,000 excitatory and 1,700 inhibitory inputs. Inhibitory (IPSP) inputs terminate on dendritic shafts, the soma, and even the axon, whereas excitatory (EPSP) inputs terminate primarily on dendritic spines. The neurotransmitter glutamate can activate pyramidal neurons, while inhibitory neurotransmitters can inhibit them.
Two kinds of electrical activity are distinguished: the pulsed discharge (action potential), with a duration of about 1 ms, and slower (gradual) oscillations of the membrane potential, the inhibitory and excitatory postsynaptic potentials (PSPs).
The change in membrane potential causes the appearance of two dipoles in pyramidal cells, differing in their cytological localization. One of them, the somatic dipole, is created between the soma and the dendritic trunk when the membrane potential of the cell body changes, as does the current in the dipole and the external environment. During a pulsed discharge or the generation of an excitatory PSP in the body of the neuron, the dipole moment vector is directed from the soma along the dendritic shaft, whereas an inhibitory PSP forms a somatic dipole with the dipole moment in the opposite direction. The other dipole, the dendritic one, arises as a result of the generation of an excitatory PSP at the branching of the apical dendrites in the first (plexiform) layer of the cortex; in this dipole, current flows between the dendritic trunk and the dendritic branching. The dipole moment vector D of the dendritic dipole is directed toward the soma along the dendritic shaft.
Figure 7.1. A cortical pyramidal neuron is depicted with an extracellular current dipole
connecting spatially separated excitatory (open bullet) and inhibitory synapses (filled bullet).
Neural in- and outputs are indicated by the jagged arrows. Dendritic current ID causes dendritic
field potential (DFP).
When studying the external electric field of the brain and interpreting the recorded EEG signal, the constant (DC) component is generally not taken into account.
As can be seen in Figure 7.2, the EEG background activity of the brain is a very complex dependence of the potential difference on time and looks like a collection of random fluctuations of the potential difference. To characterize such chaotic oscillations ("noise"), parameters known from probability theory are used: the mean value and the standard deviation from the mean.

To find σ, an EEG portion is isolated and split into small regular intervals, and at the end of each interval (t₁, …, t_j, …, t_m) a voltage U (U₁, …, U_j, …, U_m) is defined. The standard deviation is calculated by the usual formula:

\sigma = \sqrt{\frac{\sum_{j=1}^{m} (U_j - \bar{U})^2}{m - 1}}    (7.1)

Figure 7.2. Fluctuations of the potential difference.
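A minimal sketch of formula (7.1): given voltages U_j read off at the ends of regular intervals, the fragment computes the mean and the standard deviation. The sample values are invented for illustration.

```python
import math

# hypothetical EEG voltage samples U_j at regular intervals, in microvolts
U = [12.0, -8.0, 25.0, -15.0, 5.0, -20.0, 18.0, -3.0]

m = len(U)
U_mean = sum(U) / m
# formula (7.1): sigma = sqrt( sum_{j=1}^{m} (U_j - U_mean)^2 / (m - 1) )
sigma = math.sqrt(sum((u - U_mean) ** 2 for u in U) / (m - 1))
print(f"mean = {U_mean:.2f} uV, sigma = {sigma:.2f} uV")
```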
The typical alpha rhythm is the most well-known and studied rhythm in the human brain. Alpha is frequently more prominent in the posterior and occipital areas, with an amplitude of around 50 µV (peak-to-peak). Alpha activity is induced by closing the eyes and relaxing, and it is inhibited by opening the eyes or by alerting of any kind (thinking, calculating). The majority of people are extremely sensitive to the phenomenon of "eye closing": their wave pattern shifts from beta to alpha waves when they close their eyes.
EEG recording techniques.
Encephalographic measurements employ a recording system consisting of:
- electrodes with conductive media;
- amplifiers with filters;
- A/D converter;
- recording device.
Electrodes read the signal from the head surface, the amplifiers bring the microvolt signals into a range where they can be accurately digitized, the converter changes the signals from analog to digital form, and the personal computer (or other suitable device) saves and displays the data obtained. A set of the equipment is shown in Figure 7.1.
Ag-AgCl disks, 1 to 3 mm in diameter, with long flexible leads that may be put
into an amplifier, are the most commonly used scalp electrodes. Even very modest
variations in potential can be accurately recorded using AgCl electrodes. Long
recordings are made with needle electrodes implanted invasively under the scalp.
With cap systems, the skin is abraded with a blunt needle at the electrode site, which can cause irritation, pain, and infection. There is a risk of pain and bleeding, especially when a person's EEG is monitored frequently and a cap is applied at the same electrode spots.
When using silver-silver chloride electrodes, cover the gap between the
electrode and the skin with conductive paste to aid in adhesion. There is a small
hole in the cap systems for injecting conductive jelly. To ensure that contact
impedance at the electrode-skin interface is reduced, conductive paste and
conductive jelly are used as media. The International Federation in
Electroencephalography and Clinical Neurophysiology established the 10-20
electrode placement system in 1958 as a standard for electrode placement. The
physical location and designations of electrodes on the scalp were standardized
using this approach. To ensure adequate coverage of all parts of the brain, the head
is split into proportional distances from major skull features (nasion, preauricular
points, inion). The label "10-20" indicates that the distances between adjacent electrode locations are 10% or 20% of the total front-to-back or right-to-left distance of the skull.
Figure 7.5. The 10-20 electrode placement system.
2. Show in the figure the directions of the vectors of somatic and dendritic
dipole moments.
3. Fill in the cells:

Electrical activity of neurons:
____________________ , occurring in ____________________ ;
____________________ , occurring in ____________________ .
Numerical problems:
1. The thickness of the cerebral cortex is h = 1.7 µm, the specific resistance of the cortex is ρ = 2 Ohm·m, the coefficient of average density of pyramidal neurons in the cortex is n = 5·10¹³, and the mean standard deviation of the dipole moment of the neurons is N = 7·10⁻¹⁵ A·m. Calculate the standard deviation of the EEG produced by the pyramidal neurons.
VIII
MULTISTAGE AMPLIFIERS IN MEDICINE
For many applications, the performance obtainable from a single-
stage amplifier is often insufficient, hence several stages may be combined
forming a multistage amplifier. These stages are connected in cascade, i.e. the output of the first stage is connected to the input of the second stage, whose output becomes the input of the third stage, and so on.
A cascade amplifier is any two-port network constructed from a series of
amplifiers, where each amplifier sends its output to the input of the next amplifier
in a daisy chain.
The non-ideal coupling between stages due to loading complicates estimating the gain of cascaded stages. Consider two common-emitter stages in cascade: the total gain is not simply the product of the individual (separate) stage gains, since the second stage's input resistance forms a voltage divider with the first stage's output resistance.
A multistage amplifier's overall gain is the product of the gains of the individual stages (ignoring potential loading effects):
A = A_1 \cdot A_2 \cdot \ldots \cdot A_n
If the gain of each amplifier stage is expressed in decibels (dB), the total gain is the sum of the individual gains:
N = N_1 + N_2 + \ldots + N_n
For true amplification, the amplifier should have the desired voltage gain and
current gain, as well as input and output impedances that match the source and
load, respectively. Because of the limitations of transistor/FET parameters, these
primary amplifier requirements are frequently not reached with single stage
amplifiers. In such cases, many amplifier stages are cascaded so that the input and
output stages offer some amplification while the remaining middle stages supply
the majority of the amplification.
When the amplification of a single stage amplifier is insufficient, or when the input or
output impedance is not of the correct magnitude, two or more amplifier stages are
connected in cascade for a specific application.
Two Stage Cascaded Amplifier
Vi1 is the input of the first stage and Vo2 is the output of the second stage, so Vo2/Vi1 is the overall voltage gain of the two-stage amplifier:
A = \frac{V_{o2}}{V_{i1}} = A_{v1} \cdot A_{v2}    (8.3)
Voltage gain :
The resultant voltage gain of the multistage amplifier is the product of voltage
gains of the various stages.
Gain in Decibels
In many cases, comparing two powers on a logarithmic scale is preferable to
comparing them on a linear scale. The decibel is the unit of measurement for this
logarithmic scale (abbreviated dB). The number of decibels N by which a power P2 exceeds a power P1 is defined as
N = 10 \log_{10} \frac{P_2}{P_1}    (8.5)
Expressed in terms of voltages, since power is proportional to the square of the voltage,
N = 10 \log_{10} \frac{V_o^2}{V_i^2} = 20 \log_{10} \frac{V_o}{V_i}    (8.7)
If the gain of each stage is known in dB, the gain of a multistage amplifier can be easily calculated: the overall voltage gain in dB of a multistage amplifier is the sum of the individual stage voltage gains,
N_{dB} = N_1 + N_2 + \ldots + N_n
The logarithmic scale is preferred over the linear scale for representing
voltage and power gains for the following reasons:
- In multistage amplifiers, it allows for the addition of individual gains from
each stage to calculate overall gain;
- It enables us to represent both very small and very large quantities of linear
scale with very small figures.
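A short sketch of the two equivalent ways of combining stage gains described above: multiplying the linear gains, or summing the gains expressed in decibels. The stage gains are arbitrary example values; loading effects are ignored.

```python
import math

# hypothetical voltage gains of three cascaded stages
stage_gains = [10.0, 25.0, 4.0]

# overall gain as a product of the linear stage gains
total_linear = math.prod(stage_gains)

# the same result in dB: per-stage N = 20*log10(Av), then summed
total_db = sum(20 * math.log10(a) for a in stage_gains)

print(f"linear gain: {total_linear:.0f}, total gain: {total_db:.1f} dB")
# 10 * 25 * 4 = 1000, and 20*log10(1000) = 60 dB, matching the dB sum
```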
Medical Electrodes
Medical devices use electrodes to convert the energy of ionic currents in the body into electrical current. These currents can be amplified and have been used to diagnose a variety of disorders. A medical electrode consists of a lead, the metal itself, and a conducting electrode paste. Internal ionic currents are quantified using medical electrodes, which allows the diagnosis of a variety of ophthalmic, neurological, cardiac, and muscular problems. The device operates by establishing an electrical connection between the monitoring apparatus and the patient.
IX
THE PRINCIPLES OF CONVERTING BIOLOGICAL SIGNALS
INTO ELECTRICAL SIGNALS. THERMOREGULATION.
CALIBRATION OF TEMPERATURE SENSORS.
Biological sensors, also known as biosensors, are molecular recognition systems that use biologically active materials. This sort of sensor usually employs an enzyme to catalyze a biological process or examines the type and content of large-molecule organic compounds using a mixture of chemicals. Enzyme sensors, microbe sensors, immunity sensors, tissue sensors, DNA sensors, and other sensors were developed in the second half of the twentieth century.
Classified by detection type, there are displacement sensors, flow sensors, temperature sensors, speed sensors, pressure sensors, etc. Pressure sensors include metal strain-foil pressure sensors, semiconductor pressure sensors, capacitive pressure sensors, and other sensors that can detect pressure. Temperature sensors include thermal resistance sensors, thermocouple sensors, PN-junction temperature sensors, and other sensors that can detect temperature.
Electronic equipment is used to monitor non-electrical factors such as temperature,
heart sound, and blood pressure from the human body. The devices that transform
biological parameters to electrical impulses are known as transducers.
Transduction is the process of conversion. Transducers are devices that transfer
one kind of energy into another.
Transducers are of two types.
1. Active transducers are devices that transfer one form of energy into another
without the use of external power. A good example is a photovoltaic cell. It is a
device that transforms light energy to electrical energy.
Another example is the magnetic induction type transducer, in which an EMF is induced in a conductor moving in a magnetic field:
\varepsilon = -B \cdot l \cdot v    (9.1)
where B is the magnetic induction, l is the length of the conductor, and v is the velocity of the moving conductor. The negative sign shows that the induced EMF and induced current oppose the change producing them (Lenz's law).
It's also true that there's an inverse magnetic effect. When current flows via an
electrical conductor in a magnetic field, the conductor is subjected to mechanical
force F
𝑭=𝑩·𝑰·𝒍 (9.2)
Applications of magnetic induction type transducers
Passive transducers convert one kind of energy into another with the help of an external power source. They are based on the regulation of a DC voltage or an AC carrier signal. Examples: strain gauge, load cell.
Types of Passive Transducers
1. Resistive -R
2. Inductive -L
3. Capacitive -C
Resistive Transducers
Resistive type passive transducers include strain gauges, photoresistor, photodiode,
phototransistor, and thermistor. They all work on the same principle, which
stipulates that the measured parameter causes a slight change in the transducer's
resistance. A Wheatstone bridge is used to assess resistance change.
Capacitive Transducers
There are two conducting surfaces on a capacitor. A dielectric medium functions as
a barrier between two surfaces, separating them. Changes in the area of conducting
plates, the thickness of the dielectric medium, and the distance between the plates
are all measured using capacitive transducers.
Inductive Transducers
The inductive transducer operates on the basis of the change in reluctance and the
number of turns in the coil. A Linear Variable Differential Transformer (LVDT) is
a type of inductive transducer that functions as a pressure sensor.
The input value of a parametric sensor is transformed into a change in any
electrical parameter of the sensor (R, L, or C).
Sensor requirements:
- unambiguous dependence between input and output values;
- stability of characteristics over time;
- high sensitivity;
- small size and mass;
- operation in various environments;
- various mounting options.
Figure 9.2. Equivalent electrical circuit for measuring the biopotential from the skin surface.
Electrodes for the bioelectric signal measurements are divided into the following
groups:
1. For short-term use in the offices of functional diagnostics, for example, for one-
time electrocardiogram recording;
2. For long-term use, for example, with constant monitoring of seriously ill
patients in conditions of intensive care therapy;
3. For use on mobile subjects, for example, in sports or space medicine;
4. For emergency use, for example in ambulance conditions.
The value characterizing the dependence of the change in resistivity on heating for a given substance is called the temperature coefficient of resistance. It is a number indicating by what fraction of its value at 0 °C the resistivity changes when the substance is heated by 1 °C:
\alpha = \frac{\rho_t - \rho_0}{\rho_0 t}, \qquad [\alpha] = {}^{\circ}\mathrm{C}^{-1}    (9.5)
The “alpha” (α) constant is known as the temperature coefficient of resistance
and symbolizes the resistance change factor per degree of temperature change. Just
as all materials have a certain specific resistance (at 20°C), they also change
resistance according to temperature by certain amounts. This coefficient is positive
for pure metals, indicating that resistance increases as temperature rises. This
coefficient is negative for the elements carbon, silicon, and germanium, indicating
that resistance reduces as temperature rises. The temperature coefficient of
resistance for some metal alloys is very close to zero, implying that resistance
changes very little with temperature changes. The dependence of semiconductor resistivity on temperature is
\rho = \rho_0 \, e^{\frac{E_g}{2kT}}    (9.6)
where E_g is the band-gap energy, ρ₀ is a proportionality coefficient having the dimension of resistivity, and k is the Boltzmann constant.
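As a hedged sketch of how these two dependences are used, the fragment below evaluates R = R₀(1 + αΔt) for a metal and the ratio of semiconductor resistivities from formula (9.6). The material constants are typical textbook values chosen for illustration, not measured data.

```python
import math

K_EV = 8.617e-5  # Boltzmann constant, eV/K

def metal_resistance(r0, alpha, dt):
    """Metal conductor: R = R0 * (1 + alpha * delta_t)."""
    return r0 * (1 + alpha * dt)

def semiconductor_resistivity(rho0, e_gap_ev, temp_k):
    """Formula (9.6): rho = rho0 * exp(E_g / (2*k*T))."""
    return rho0 * math.exp(e_gap_ev / (2 * K_EV * temp_k))

# copper-like wire: R0 = 100 Ohm at 0 C, alpha ~ 4.3e-3 1/C, heated by 50 C
print(f"metal: {metal_resistance(100.0, 4.3e-3, 50.0):.1f} Ohm")

# silicon-like sample: how much the resistivity falls from 300 K to 350 K
ratio = (semiconductor_resistivity(1.0, 1.1, 300.0)
         / semiconductor_resistivity(1.0, 1.1, 350.0))
print(f"semiconductor rho(300 K) / rho(350 K) = {ratio:.0f}")
```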
The orderly movement of electric charges is referred to as electric current. The current is called a conduction current when there is an orderly movement of charges in a conductor. The quantitative characteristic of the current is a number equal to the ratio of the charge Δq transported through the conductor's cross section during a time interval Δt to this interval: i = Δq/Δt. This value is called the current strength. If the current and its direction do not change with time, it is called direct current; for DC, I = q/t.
A thermistor must be calibrated before it can be used. Calibration is based on determining the relationship between the thermistor's resistance and temperature, using a Wheatstone bridge circuit. In the schematic (Fig. 9.2), one bridge arm is the thermistor, another is a resistance box, and the remaining two arms are constant resistances R1 and R3. The bridge diagonal includes a galvanometer. When measuring, the resistance of the box is adjusted so that no current flows through the galvanometer. This state of the measuring circuit is referred to as balance, or equilibrium.
Thermocouple graduation
A thermocouple is an electrical device made up of two dissimilar electrical
conductors that come together to form an electrical junction. As a result of the
thermoelectric effect, a thermocouple generates a temperature-dependent voltage,
which can be interpreted to measure temperature. Thermocouples are a type of
temperature sensor that is widely used. Thermocouples are used to measure temperatures ranging from −270 to +1500 °C.
The thermoelectric effect is the use of a thermocouple to convert temperature
variations to electric voltage and vice versa. When the temperature on both sides of
a thermoelectric device differs, a voltage is generated. Heat is transmitted from one
side to the other when a voltage is applied to it, resulting in a temperature
difference. An imposed temperature gradient causes charge carriers in a material to
diffuse from the hot side to the cold side at the atomic level.
This phenomenon can be utilized to create power, measure temperature, and alter
item temperatures. The applied voltage affects the direction of heating and cooling,
hence thermoelectric devices can be employed as temperature controllers.
E = K (t_2 - t_1)    (9.11)
As a result, the graph of the EMF thermocouples from the temperature of the
heated junction will be represented by a straight line in a limited temperature
range. Most thermocouples deviate from proportionality in a wide range of
temperature measurements.
According to the results of the thermocouple calibration, it is possible to calculate the sensitivity (constant) K of the thermocouple, which depends on the nature of the substances making up the thermocouple and corresponds to the thermo-EMF produced when the temperature of the heated junction changes by 1 °C:
K = \frac{n_2 - n_1}{t_2 - t_1} \, C \, (R_t + R_g + R_{add})    (9.12)
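A minimal sketch of evaluating formula (9.12) from two calibration points. Every number here (galvanometer constant, resistances, readings) is invented purely to show the arithmetic.

```python
def thermocouple_sensitivity(n1, n2, t1, t2, c_galv, r_t, r_g, r_add):
    """Formula (9.12): K = (n2 - n1)/(t2 - t1) * C * (R_t + R_g + R_add)."""
    return (n2 - n1) / (t2 - t1) * c_galv * (r_t + r_g + r_add)

# hypothetical calibration data: galvanometer readings at two temperatures
K = thermocouple_sensitivity(n1=5, n2=25, t1=20.0, t2=80.0,
                             c_galv=1e-8,                     # A per division
                             r_t=10.0, r_g=50.0, r_add=40.0)  # Ohm
print(f"K = {K:.2e} V per degree")  # about 3.3e-07 V/degree here
```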
2. Mark the initial temperature of water t 0 in the vessel that contains thermal
resistance and heater, balance the bridge.
3. Write the value of thermal resistance Rt to the table 1.
4. While heating the water to boiling, measure the resistance of the thermistor every 10 °C.
5. Write the obtained data into Table 1.
Table 1.
t (°C):
R (Ohm):
8. Make a conclusion.
Task 2:
1. Immerse the ends of the thermocouple into the vessels with water and measure the temperature in both vessels.
2. While heating the water to boiling, take a galvanometer reading every 10 °C and write the number of divisions deflected by the galvanometer needle into Table 2.
Table 2.
t (°C):
n:
I:
3. Determine the sensitivity of the thermocouple by the formula:
K = \frac{n_2 - n_1}{t_2 - t_1} \, C \, (R_T + R_g + R_{add})
5. Make a conclusion.
Regulation of Body Temperature
The majority of the heat produced by the body is created deep within the
body, away from the surface. This heat must first be delivered to the skin in
order to be removed. There must be a temperature differential between the two
places for heat to flow from one to the other. As a result, the skin's temperature
must be lower than the internal body temperature. In a warm environment the temperature of the human skin is about 35 °C; in a cold environment the temperature of some parts of the skin may drop to 27 °C.
The circulatory system transports the majority of the heat from the inside of
the body. Conduction transports heat from an inner cell to the bloodstream.
Because the distances between the capillaries and the heat-producing cells are
tiny, heat transfer by conduction is particularly fast in this scenario. The
circulatory system transports the warm blood close to the skin's surface.
Conduction then transfers the heat to the exterior surface. The circulatory
system not only transports heat from the inside of the body, but it also regulates
the thickness of the body's insulation. When the body's heat escapes too
quickly, the capillaries near the surface constrict, and blood flow to the surface
is dramatically reduced. This mechanism creates a heat-insulating barrier around the inner body core, because tissue without blood conducts heat poorly.
As previously stated, the temperature of the skin must be lower than the internal
body temperature for heat to flow out of the body. As a result, heat must be
evacuated from the skin at a fast enough rate to sustain this condition. Because air
has a limited heat conductivity, the quantity of heat released via conduction is
small if the air around the skin is confined—for example, by clothing. Convection,
radiation, and evaporation are the main methods of cooling the skin's surface.
However, if the skin is in contact with a good thermal conductor, such as a metal,
conduction can remove a significant quantity of heat.
Numerical problems for independent solution:
1. Determine the resistance of an aluminum wire with a length of 20 m and a cross-sectional area of 2 mm² at 70 °C, given that the resistivity of aluminum at 20 °C is 2.7·10⁻⁸ Ohm·m and the temperature coefficient of resistance is 4.6·10⁻³ °C⁻¹.
X
THE EFFECT OF ELECTROMAGNETIC FIELDS AND CURRENTS ON
THE BODY.
GALVANIZATION AND ELECTROPHORESIS.
PHYSICAL BASIS OF RHEOGRAPHY.
Differences in voltage cause electric fields to form; the higher the voltage,
the stronger the resulting field. When electric current runs, magnetic fields are
formed; the greater the current, the stronger the magnetic field. Even if there is no
current flowing, an electric field will exist. If current flows, the magnetic field's
strength varies with power consumption, while the electric field's strength remains
constant.
The frequency or wavelength of an electromagnetic field (EMF) is one of the
most important features that defines it. Fields of various frequencies interact with
the human body in various ways. Electromagnetic waves can be thought of as a
series of incredibly regular waves traveling at the speed of light. The term
frequency basically refers to the number of oscillations or cycles per second, but
the term wavelength refers to the distance between each wave. As a result,
wavelength and frequency are inextricably linked: the shorter the wavelength, the
higher the frequency.
Another essential feature of electromagnetic fields is wavelength and
frequency: Quanta are particles that carry electromagnetic waves. Higher
frequency (shorter wavelength) waves carry more energy in their quanta than lower
frequency (longer wavelength) waves. Some electromagnetic waves carry so much
energy per quantum that they are capable of breaking molecular bonds. Gamma
rays emitted by radioactive materials, cosmic rays, and X-rays all have this feature
and are referred to as 'ionizing radiation' in the electromagnetic spectrum. Non-ionizing radiation refers to fields whose quanta carry insufficient energy to disrupt molecular bonds.
made electromagnetic fields that are found at the relatively long wavelength and
low frequency end of the electromagnetic spectrum, and their quanta are unable to
break chemical bonds.
Electromagnetic fields at low frequencies.
Electric fields exist whenever a positive or negative electrical charge is present.
They exert forces on other charges within the field. The strength of the electric
field is measured in volts per meter (V/m). When an electrical wire is charged, it
generates an electric field. Even when there is no current flowing, this field exists.
The greater the electric field at a certain distance from the wire, the higher the
voltage. Electric fields are strongest when they are close to a charge or a charged
conductor, and they weaken rapidly as they get further away. Metal and other
conductors efficiently insulate them. Other elements, such as construction
materials and plants, offer some protection. As a result, walls, buildings, and trees
limit the electric fields from power lines outside the house. When power lines are
buried in the ground, the electric fields at the surface are hardly detectable.
Magnetic fields arise from the motion of electric charges. The strength of the
magnetic field is measured in amperes per meter (A/m); more commonly in
electromagnetic field research, scientists specify a related quantity, the flux density
(in microtesla, µT) instead. Unlike electric fields, magnetic fields are only created
when a device is turned on and current flows. The magnetic field becomes stronger
as the current increases. Magnetic fields, like electric fields, are strongest close to
their source and rapidly diminish as they travel further away. Magnetic fields are
not obstructed by conventional materials like building walls.
Electric fields | Magnetic fields

1. Electric fields arise from voltage. | 1. Magnetic fields arise from current flows.
2. Their strength is measured in volts per meter (V/m). | 2. Their strength is measured in amperes per meter (A/m); commonly, EMF investigators use a related measure, the flux density, in microtesla (µT) or millitesla (mT), instead.
3. An electric field can be present even when a device is switched off. | 3. Magnetic fields exist as soon as a device is switched on and current flows.
4. Field strength decreases with distance from the source. | 4. Field strength decreases with distance from the source.
5. Most building materials shield electric fields to some extent. | 5. Magnetic fields are not attenuated by most materials.
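Because the table quotes field strength in A/m but flux density in µT, a one-line conversion may help. In air the two are related by B = μ₀H; the field value below is an arbitrary example.

```python
MU0 = 4 * 3.141592653589793e-7  # vacuum permeability, T*m/A

def flux_density_uT(h_a_per_m):
    """Flux density in air, B = mu0 * H, returned in microtesla."""
    return MU0 * h_a_per_m * 1e6

print(f"{flux_density_uT(80.0):.1f} uT")  # H = 80 A/m is about 100.5 uT
```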
The surface of conductive materials is influenced by electric fields acting on them; such fields make current flow through the body and down to the ground. Within the human body, low-frequency magnetic fields cause circulating currents, whose strength is determined by the intensity of the external magnetic field. If large enough, these currents could stimulate nerves and muscles or affect other biological processes. Both electric and magnetic fields induce voltages and currents in the body, although even just beneath a high-voltage transmission line the induced currents are negligible compared with the thresholds for shock and other electrical effects.
Microwave thermotherapy is a method of treatment in which bodily tissue is
heated by microwave irradiation in order to harm and kill cancer cells or make
cancer cells more sensitive to the effects of radiation and certain anticancer
medications.
Diathermy is the use of high-frequency electromagnetic currents or electrically
produced heat in physical therapy and surgical treatments. Jacques Arsene
d'Arsonval produced the first observations on the effects of high-frequency
electromagnetic currents on the human body. In 1907, German physician Karl
Franz Nagelschmidt created the term diathermy, which is derived from the Greek
terms dia and therma and literally means "heating through."
Diathermy is a technique that is extensively used in medicine to relax muscles and
generate deep tissue warmth for therapeutic purposes. It's utilized in physical
therapy to target pathologic lesions in the body's deeper tissues with mild heat.
Diathermy is produced by three techniques: ultrasound (ultrasonic diathermy),
short-wave radio frequencies in the range 1–100 MHz (shortwave diathermy) or
microwaves typically in the 915 MHz or 2.45 GHz bands (microwave diathermy),
the methods differing mainly in their penetration capability. It exerts physical
effects and elicits a spectrum of physiological responses.
Hyperthermia treatment involves using the same procedures to raise tissue
temperatures in order to kill neoplasms (cancer and tumors), warts, and
contaminated tissues. In surgery, diathermy is used to cauterize blood vessels and prevent excessive bleeding. The approach is very useful in neurosurgery and ocular surgery.
UHF therapy
Ultra-high-frequency therapy (UHF therapy) is one of the methods of physical therapy that helps to treat and restore health after injuries and maladies. It is an apparatus-based approach that uses ultra-high-frequency electromagnetic fields to deliver heat to tissues and organs, causing a number of physicochemical processes and thereby producing a therapeutic effect.
The intensity of the produced and applied electromagnetic field influences the
body's physiological responses to UHF therapy. A low-intensity field, for example,
has an anti-inflammatory impact by increasing blood and lymph flow in tissues; a
higher-intensity field stimulates metabolic processes and promotes cell
nourishment and vital activity, but a high-intensity field, on the other hand,
increases inflammation. This is why the course of UHF therapy should be planned
individually, taking into account the severity of the disease and the stage of the
pathological process.
The effectiveness of UHF therapy has been demonstrated in practice, and it is now used in practically every sector of medicine. In addition to its anti-inflammatory properties, electromagnetic therapy is beneficial for the regeneration of diseased or traumatized tissues. UHF therapy enhances tissue metabolism and lowers vascular permeability and vascular and muscular spasm, alleviating pain and restoring patient performance by creating a protective barrier around the inflammatory center.
The procedure does not require any special preparation. The patient lies down during the therapy, while a physician selects and applies electrodes corresponding to the UHF area, as well as the dose of electromagnetic flux. Any metal objects must be removed from the treated area: dentures, earrings, chains, and piercings. The procedure lasts from 5 to 16 minutes, and a treatment course consists of 10-15 sessions; sometimes it is necessary to increase the number of sessions during the course. Sessions can be scheduled every day or every other day.
Body tissues are largely diamagnetic, like water. However, the body also contains
paramagnetic substances, molecules and ions.
Biocurrents arising in the body are a source of weak magnetic fields. In some
cases, the induction of such fields can be measured. So, for example, on the basis
of recording the time dependence of the magnetic field induction of the heart
(biocurrents of the heart), a diagnostic method was created –
magnetocardiography.
Investigation of the distribution of the electric field in space and the effect of
the UHF field on electrolytes and dielectrics.
UHF therapy is an effect on the body with therapeutic, prophylactic and
rehabilitative purposes with a continuous or pulsed ultra-high frequency electric
field (from 30 to 300 MHz). During UHF therapy, the electric field is supplied to
the body tissues using capacitor plates connected to the UHF oscillator.
Consider the physical processes that arise in conductors and dielectrics under the
influence of the UHF electric field. If a conductor is placed in an electric field, then
charged particles (charges) move in it in accordance with their polarity and field
direction, that is, an electric current arises, which is called the conduction current.
When a dielectric material is placed in an electric field, electric charges do not
flow through the material as they do in an electrical conductor but only slightly
shift from their average equilibrium positions causing dielectric polarization (see
fig.10.1). Positive charges are displaced in the direction of the field as a result of
dielectric polarization, while negative charges shift in the opposite direction (for example, if the field points along the positive x-axis, the negative charges will shift in the negative x direction). This produces an internal electric field, which reduces the
overall field within the dielectric. When a dielectric is made up of weakly bonded
molecules, they become polarized and reorient so that their symmetry axes align
with the field.
Heat release in tissues is determined by both the conduction current and the
displacement current in the frequency range up to 300 MHz, and at a frequency of
about 1 MHz, the conduction current takes the lead, while at a frequency of more
than 20 MHz, the displacement current takes the lead (for muscle tissue). Both of
these currents cause biological tissues to heat up.
Even in the most sensitive neuromuscular fibers, currents created in biological
tissues at frequencies above 100 kHz are incapable of causing the formation of an
action potential.
Dielectric properties are concerned with the storage and dissipation of electric and
magnetic energy in materials. Dielectrics play an important role in explaining a
variety of phenomena in cell biophysics.
Figure 10.1. A dielectric material in an electric field.
Heat is generated by an order of magnitude less in tissues and media with significant electrical conductivity and water content (blood, lymph, muscle tissue) (Fig. 10.2).
Figure 10.2. Distribution of absorbed electromagnetic energy in body tissues during UHF therapy (layer sequence S-M-B-M-S): S - skin, M - muscle tissue, B - bone tissue.
UHF generator.
There are various devices for UHF therapy. The apparatus is a device that
has electrodes and an alternating electric field appears between these electrodes
(figure 10.3). The absorption of the energy of the UHF electric field by biological
tissues is relatively low, due to which it has a pronounced penetrating ability and
penetrates through the part of the body located between the electrodes.
Figure 10.4. Equivalent electrical circuit of a chain of capacitive electrodes and a part of the
patient's body during UHF therapy
Because the area surrounding the electrodes, where there is the greatest
concentration of field lines of force, is located outside the patient's body, the
presence of gaps can considerably prevent unwanted heating of surface tissues.
Because there is no need to ensure contact between the electrode and the body, the
technique for administering UHF therapy is substantially simplified by the
presence of these gaps.
The heating of biological tissues in a UHF electric field is proportional to the square of the field strength, E². The intensity of an inhomogeneous field, which occurs in real-world situations, varies from point to point and is defined by the concentration of the field lines of force.
The field between the electrodes is most uniform in the center in the absence of the
patient's body, but the electric field lines bend towards the periphery due to the
edge effect (Fig. 10.5). The smaller the ratio of the distance between the electrodes
to their diameter, the wider the uniform field zone. The field lines do not go
uniformly anyplace when the patient is positioned between the electrodes because
of the inhomogeneous construction; instead, they bend in the center zone so that
the maximum field strength is under the electrodes.
Figure 10.5. The electric field lines of the electric field, formed by two plates:
a - when the distance between the plates is less than their diameter;
b - when the distance between the plates is greater than their diameter.
When performing UHF treatment techniques, two electrodes are used in a longitudinal or transverse layout. The strength and absorbed energy of the UHF electric field formed in the treatment area by capacitive electrodes vary depending on the distance between the tissues and the electrode, as well as on their spatial arrangement (Fig. 10.6).
certain portion of the body by adjusting the electrode size, gap size, and inclination
of the electrode with regard to the body surface. If the electrodes are the same,
then the impact is more intense from the side of the electrode located with a
smaller gap (Fig. 10.6, a). The same takes place when using one smaller electrode
(Fig. 10.6, b).
The field is focused towards the edge of the electrode that is closer to the body
when the electrode is installed obliquely to the body's surface, resulting in selective
heating (Fig. 10.6, c).
When heating the folds of the body, such as between the cheek and the nose, this
approach is used.
When exposed to uneven surfaces of the body, the concentration of the field and
overheating occurs on its protruding parts. In this case, either the gap is increased
(Fig. 10.6, d), or flexible electrodes are used.
Metal objects in an UHF electric field do not heat up, however, near them,
especially in the presence of sharp edges and protrusions, there is a concentration
of field lines of force (Fig. 10.6, e), and, as a result, local overheating and burns
may occur. For this reason, the seat or bed for a patient during UHF therapy
procedures should not have metal parts, but rings, pins, needles and others metal
objects in the patient's possession should be removed if they are located close to
the affected area.
Figure 10.6. Distribution of electric field lines of the electric field during UHF therapy:
the degree of darkening of the object characterizes the intensity of heating.
During the procedure, the part of the patient's body on which the treatment is
carried out is rigidly fixed (Fig. 10.7, a, b). For the procedure, electrodes of the
appropriate size are selected, fixed in holders and installed in the position intended
for treatment.
Heating organs and tissues results in long-term, deep tissue hyperemia in the
affected area. Capillaries, in particular, expand rapidly, increasing in diameter by
3–10 times.
Strengthened regional blood and lymph outflow in the affected tissues, changes in the permeability of the microvasculature and tissue barriers, and an increase in the number of leukocytes and in their phagocytic activity result in dehydration and resorption of the inflammatory focus, as well as a reduction in pain caused by edema. Activation of connective-tissue stromal elements and of cells of the mononuclear phagocyte system (histiocytes and macrophages), an increase in the dispersion of blood plasma proteins, an increase in the concentration of Ca2+ ions, and activation of metabolism in the area of the lesion stimulate proliferative and regenerative processes in the connective tissue surrounding the inflammatory focus. This allows UHF therapy to be used at various stages of the inflammatory process.
Alternating current effect on biological tissues.
Voltage, current strength, frequency (and values referred to as frequency –
cycle frequency and period), and phase are all characteristics of alternating current.
In alternating current, the voltage changes according to a harmonic (sinusoidal) function. Voltage, current strength, impulse form, and frequency are the characteristics of pulsed currents. The typical consequence of impulses with a one-sided voltage change is irritation. The effects of alternating current on living tissues vary depending on the frequency of the current. At low frequencies, alternating current, like pulsed current, induces irritation of excitable tissues. At high frequencies, when the displacement of charged particles in tissues is small, a calorific effect takes place, i.e. heat is released in the tissues as a result of the current flowing. In the case of alternating currents, the current-conducting qualities of tissues depend on the frequency of the current, since tissues have apparent capacitive properties. For alternating current, the impedance (total resistance) is a property of the tissue.
Active and reactive resistance are the two types of resistance found in biological objects. Impedance is the total resistance of an object. Impedance (symbol Z) is a measure of a circuit's overall opposition to current, i.e. how much it obstructs the flow of charge. It is similar to resistance, but it also takes capacitance and inductance into account. Impedance is measured in ohms (Ω). Because the effects of capacitance and inductance vary with the frequency of the current traveling through the circuit, impedance fluctuates with frequency and is thus more complicated than resistance; resistance has the same effect regardless of frequency.
Z = \sqrt{R^2 + (X_L - X_C)^2}    (10.4)
where R is the active resistance, X_L = \omega L is the inductive reactance, and X_C = \frac{1}{\omega C} is the capacitive reactance.
An organism's tissues conduct not only direct current but also alternating current. There are no systems in an organism analogous to coils of inductance, so its inductance is close to zero. Biological membranes, and thus all organisms, have capacitive qualities, so the impedance of an organism's tissues is characterized solely in terms of ohmic and capacitive resistance:
Z = \sqrt{R^2 + X_C^2}    (10.5)
An equivalent electrical circuit of this type is shown in Fig. 10.8. This circuit is called the electrical equivalent of a biological tissue.
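A small sketch of formula (10.5): with only ohmic and capacitive components, tissue impedance falls as the frequency rises. The R and C values below are illustrative circuit parameters, not measured tissue data.

```python
import math

def tissue_impedance(r_ohm, c_farad, freq_hz):
    """Formula (10.5): Z = sqrt(R^2 + Xc^2), with Xc = 1 / (omega * C)."""
    xc = 1.0 / (2 * math.pi * freq_hz * c_farad)
    return math.sqrt(r_ohm ** 2 + xc ** 2)

# illustrative values: R = 1 kOhm, C = 0.1 uF
for f in (50.0, 1e3, 1e6):
    print(f"f = {f:9.0f} Hz -> Z = {tissue_impedance(1e3, 1e-7, f):10.0f} Ohm")
# the impedance decreases with frequency, as expected for a capacitive model
```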
Galvanization is a medical treatment that uses a steady electric current of low
voltage (up to 80 V) and low intensity (up to 50 mA). In practice, galvanization now uses current obtained through rectification and smoothing of the alternating mains current.
Galvanic current passing through the skin encounters strong resistance from the epidermis. The majority of the current's energy is spent overcoming this resistance, and the most significant galvanization reactions develop exactly where the most energy is absorbed. First and foremost, these are hyperemia and burning sensations under the electrodes, which are induced by a change in the usual correlation of tissue ions, the pH of the medium, and heat production caused by the current. In addition, the release of biochemically active substances, the activation of enzymes, and metabolic processes generate a reflex increase in blood flow to the exposed area. When the current intensity and exposure time are increased, the burning and tingling sensations become unbearable, and chemical burns can emerge after a lengthy period of use.
Direct current action is reduced from the surface to deeper tissues due to current
distribution along tissues with strong electroconductivity and rapid density
reduction deep in tissues. Galvanisation improves blood circulation and
lymphokinesia, promotes tissue resorption ability, accelerates metabolic and
trophic processes, boosts gland secretory capabilities, and has analgesic properties.
Galvanic current, when administered to the hepatic area in particular, boosts the
action of coagulants and extends their time in the body by 2-4 hours. After
galvanization, the potentiation effect lasts for 4-6 hours. Anticoagulant action
against galvanization, which is first applied in tiny dosages, on the other hand,
diminishes and develops later (after 30-120 min.). Adrenaline, acetylcholine,
thrombin, and histamine all have a hypercoagulant impact when galvanized.
Galvanic current, delivered by orbital and occipital methods, prolongs and
potentiates the effect of psychotropic medicines (haloperidol, seduxen, amizylum,
sodium oxybutyrate, and others), as well as lowering the quantity and frequency of
adverse reactions.
Because galvanic current boosts their effect and favorably influences immunologic
processes and overall reactivity, antisensitizer and immunosuppressant doses can
be reduced during galvanization.
Indications for galvanization: diseases and lesions of different sections of
peripheral nervous system of infectious, toxic and traumatic origin (radiculitis,
plexitis, neuritis, neuralgia of different locality), consequences of diseases and
lesions of the brain, spinal cord, and meninges, neuroses, vegetative and vascular
disorders, chronic joint inflammations (arthritis) of traumatic, rheumatic and
metabolic origin, etc.
The POTOK-01M apparatus for galvanization and electrophoresis (Fig. 10.10) is intended to apply a constant electric current to the human body for therapeutic and prophylactic purposes, as well as for iontophoresis, in clinic and hospital conditions. It is a plastic case with a switch and handles for regulating the supplied current, and it comes bundled with two electrodes. Additionally, the apparatus can be equipped with various kinds of soft pillows for physiotherapy. Power consumption is no more than 20 VA. The maximum current in the patient circuit at a load of 500 ohms is 50 ± 5 mA.
Setup: UHF generator and dipole antenna.
1. The data obtained during the experiment should be written into the table:

l (cm) | 1 | 2 | 3 | 4 | 5 | 6
I (A)  |   |   |   |   |   |
To study the thermal effect of the UHF electric field on electrolytes and dielectrics,
vessels with the studied liquids are installed between the electrodes. The
temperature change is measured by thermometers mounted in vessels.
Salt solution is used as an electrolyte, and castor oil is used as a dielectric.
1. The data obtained during the experiment should be written into the table:

Heating time t (min) | Temperature of electrolyte, T_el | Temperature of dielectric, T_dielectric
0  |  |
3  |  |
6  |  |
9  |  |
12 |  |
2. Draw the graph of the dependence of the temperature of the studied liquids on the time of exposure to the UHF electric field. For comparison, draw both graphs on the same coordinate axes:
T_el = f(t_min), T_dielectric = f(t_min)
3. Make a conclusion.
2. Determine the active resistance of the coil of the electromagnetic relay in the
X-ray machine, if the inductance of the coil is 150 H, and the current in the
industrial frequency network at a voltage of 120 V is 2.5 mA.
3. Determine the amount of charge that passes through a section of the human body in 2 minutes if the current density during galvanization is 0.1 A/cm² and the size of the electrodes is 4×6 cm². How much heat will be released in this case if the resistance of the body area is 2·10³ ohms?
XI
PHYSICAL ISSUES OF HEMODYNAMICS BASED ON
HYDRODYNAMICS. BIOREOLOGY. DETERMINATION OF THE
VISCOSITY OF LIQUIDS USING A VISCOMETER.
The most vital biological fluid in the human body is blood. Under physiological conditions, blood is a concentrated suspension of cells called corpuscles, or formed elements, split into red blood cells, white blood cells, and platelets, suspended in a water-based fluid called plasma. Blood is a fluid connective tissue that performs several critical activities in the body, including carrying oxygen, nutrients, and hormones to cells and tissues throughout the body, eliminating waste, and regulating body pH and temperature.
Blood Pressure.
Electrical pulses given simultaneously to the left and right halves of the heart cause
the heart chambers to contract. The atria contract first, driving blood into the
ventricles, followed by the ventricles contracting, forcing blood out of the heart.
Blood enters the arteries in spurts or pulses as a result of the heart's pumping
motion. The systolic pressure is the highest pressure pushing the blood at the apex
of the pulse. The diastolic pressure is the lowest blood pressure between pulses.
The systolic pressure of a young healthy person is around 120 torr (mm Hg), and
the diastolic pressure is around 80 torr. Therefore the average pressure of the
pulsating blood at heart level is 100 torr.
The initial energy generated by the heart's pumping motion is wasted by two
loss mechanisms as blood passes through the circulatory system: losses associated
with the expansion and contraction of artery walls, and viscous friction associated
with blood movement. The early pressure oscillations are smoothed out as the
blood moves away from the heart, and the average pressure reduces as a result of
these energy losses. The blood flow is smooth by the time it reaches the capillaries,
and the blood pressure is only approximately 30 torr. The pressure in the veins
continues to decline until it is near to zero shortly before it returns to the heart. The
contraction of muscles that squeeze the blood toward the heart aids the
transportation of blood through the veins in this final stage of the flow.
Unidirectional valves in the veins provide one-way flow.
The cardiovascular system contains a number of flow-control mechanisms
that can adjust for the substantial fluctuations in arterial pressure that occur when
the body is moved. Even so, the system may take a few seconds to adjust. As a
result, when a person jumps up from a prone posture, he or she may feel dizzy for a
brief while. This is due to a brief decrease in blood flow to the brain caused by an
abrupt drop in blood pressure in the brain arteries.
The same hydrostatic variables operate in veins as well, and their impact
may be greater than in arteries. In the veins, blood pressure is lower than in the
arteries. When a person remains still, the blood pressure is barely enough to drive
blood back to the heart from the feet. Blood collects in the veins of the legs when a
person sits or stands without moving their muscles. This raises the pressure in the
capillaries, which might cause transient leg edema.
Viscosity of fluids.
Viscosity is the physical property that characterizes the flow resistance of
simple fluids. Newton’s law of viscosity defines the relationship between the shear
stress and shear rate of a fluid subjected to a mechanical stress.
τ = μ·(du/dy);   τ = shear stress = F/A   (11.1)
where μ is the dynamic viscosity and du/dy is the rate of shear deformation.
The viscosity or coefficient of viscosity is a constant for a given temperature and
pressure and is defined as the ratio of shear stress to shear rate. Newtonian fluids
follow Newton's viscosity law. The shear rate has no effect on the viscosity.
Because non-Newtonian fluids do not obey Newton's law, their viscosity (the ratio of shear stress to shear rate) is variable and depends on the shear rate. Blood is not a Newtonian fluid: its viscosity is substantially influenced by the proportion of red cells in the total volume (the hematocrit).
Newton's law of viscosity defines dynamic viscosity as the coefficient of viscosity.
Kinematic viscosity is the dynamic viscosity divided by the density.
Kinematic viscosity ν = μ/ρ, where ρ is the density of the fluid.
The viscosity of all fluids varies. The presence of cells, proteins, and
macromolecules in biological fluids has a substantial impact on viscosity. At low
flow rates, cells and macromolecules can clump together. As the shear rate
increases, these aggregations disintegrate. Some proteins and carbohydrates have
extended forms as well. These molecules align themselves with the flow direction.
Furthermore, as shear stress increases, cells can alter form, becoming more oval
and orienting their long axis in the direction of flow.
If viscosity is taken into account, it can be shown that the rate of laminar flow Q
through a cylindrical tube of radius R and length L is given by Poiseuille’s law,
which is
Q = πR⁴(P₁ − P₂) / (8ηL)   (11.2)
where (P1 − P2) is the difference between the fluid pressures at the two ends of the cylinder and η is the coefficient of viscosity, measured in units of dyn·sec/cm², which is called a poise. In general, viscosity is a function of temperature and increases as the fluid becomes colder.
The distinction between frictionless and viscous fluid flow is fundamental.
Without any external push, a frictionless fluid will flow continuously. This is
demonstrated by Bernoulli's equation, which states that if the fluid's height and
velocity remain constant, no pressure decrease occurs throughout the flow route.
However, Poiseuille's equation for viscous flow implies that viscous fluid flow is always accompanied by a pressure drop. By rearranging Eq. 11.2, we can express the pressure drop as
ΔP = 8ηLQ / (πR⁴)   (11.3)
The expression 𝛥P is the pressure drop that accompanies the flow rate Q along a
length L of the pipe. The force necessary to overcome the frictional forces that tend
to slow the flow in the pipe segment is the product of the pressure drop and the
pipe's area. The pressure drop necessary to overcome frictional losses decreases with the fourth power of the pipe radius for a given flow rate. Even though all fluids are subject to friction, frictional losses and the resulting pressure drop are modest and can be ignored if the flow area is large. Bernoulli's equation can then be employed with little error in these situations.
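As a rough numerical illustration of Eq. 11.3, here is a minimal sketch in Python; the vessel dimensions and the viscosity of 0.04 poise are assumed, illustrative values, not data from this chapter:

import math

def poiseuille_pressure_drop(Q, eta, L, R):
    # Pressure drop accompanying laminar flow, Eq. 11.3: dP = 8*eta*L*Q / (pi*R^4)
    return 8.0 * eta * L * Q / (math.pi * R**4)

Q = 5000.0 / 60.0  # a flow rate of 5 L/min expressed in cm^3/s
dP = poiseuille_pressure_drop(Q, eta=0.04, L=10.0, R=0.5)  # CGS units
print(dP, "dyn/cm^2 =", dP / 1333.2, "torr")  # ~1.4e3 dyn/cm^2, about 1 torr

Halving the radius in this sketch multiplies the required pressure drop sixteenfold, which is the fourth-power dependence just described.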
Bernoulli’s Equation.
If frictional losses are neglected, the flow of an incompressible fluid is governed
by Bernoulli’s equation, which gives the relationship between velocity, pressure,
and elevation in a line of flow. Bernoulli’s equation states that at any point in the
channel of a flowing fluid the following relationship holds:
P + ρgh + ½ρv² = constant   (11.4)
Here P is the pressure in the fluid, h is the height, ρ is the density, and v is the
velocity at any point in the flow channel. The potential energy per unit volume of
the fluid owing to pressure is the first term in the equation. (It's worth noting that
the unit for pressure is dyn/cm2, which stands for energy per unit volume.) The
gravitational potential energy per unit volume is the second term, while the kinetic
energy per unit volume is the third. The law of energy conservation leads to
Bernoulli's equation. Because the three terms in the equation reflect the entire
energy in the fluid, their sum must remain constant in the absence of friction,
regardless of how the flow is changed.
We will illustrate the use of Bernoulli’s equation with a simple example.
Consider a fluid flowing through a pipe consisting of two segments with cross
sectional areas A1 and A2, respectively (see Fig. 11.3). The volume of fluid flowing
per second past any point in the pipe is given by the product of the fluid velocity
and the area of the pipe, A·v. If the fluid is incompressible, in a unit time as much
fluid must flow out of the pipe as flows into it. Therefore, the rates of flow in
segments 1 and 2 are equal; that is
A₁v₁ = A₂v₂   or   v₂ = (A₁/A₂)·v₁   (11.5)
Figure 11.3. Flow of fluid through a pipe with two segments of different areas.
In our case A1 is larger than A2 so we conclude that the velocity of the fluid
in segment 2 is greater than in segment 1.
Bernoulli’s equation states that the sum of the terms in Eq. 11.4 at any point in the flow is equal to the same constant. Therefore the relationship between the parameters P, ρ, h, and v at points 1 and 2 is
P₁ + ρgh₁ + ½ρv₁² = P₂ + ρgh₂ + ½ρv₂²   (11.6)
where the subscripts designate the parameters at the two points in the flow.
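A minimal sketch of Eqs. 11.5 and 11.6 for a horizontal pipe (h1 = h2); all the numbers are assumed, illustrative values in CGS units:

rho = 1.05            # g/cm^3, density taken as blood-like
A1, A2 = 4.0, 2.0     # cm^2, cross sections of segments 1 and 2
v1 = 20.0             # cm/s, velocity in segment 1
P1 = 100.0 * 1333.2   # 100 torr converted to dyn/cm^2

v2 = v1 * A1 / A2                      # continuity, Eq. 11.5
P2 = P1 + 0.5 * rho * (v1**2 - v2**2)  # Bernoulli with h1 = h2, Eq. 11.6
print(v2, "cm/s,", P2 / 1333.2, "torr")  # velocity doubles, pressure falls slightly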
Turbulent Flow.
If the velocity of a fluid is increased past a critical point, the smooth laminar flow
shown in Fig. 11.2 is disrupted. The flow becomes turbulent with eddies and whirls
disrupting the laminar flow (see Fig. 11.4). In a cylindrical pipe the critical flow
velocity v_c, above which the flow is turbulent, is given by
v_c = Rη / (ρD)   (11.7)
Here D is the diameter of the cylinder, ρ is the density of the fluid, and η is the
viscosity. The Reynolds number, denoted by the letter R, ranges from 2000 to 3000 for most fluids. In turbulent flow, frictional forces are larger
than in laminar flow. As a result, forcing a fluid through a conduit becomes more
difficult as the flow becomes turbulent.
At rest, a person's blood flow is around 5 liters per minute. This means that the blood's average velocity through the aorta is 26.5 cm/sec. The blood in the aorta, however, does not flow continuously; it moves in pulses, and during the flow period the blood velocity is roughly three times the overall average value.
Because the total cross-sectional area of the arteries grows as they branch, the flow velocity drops; as a result, the kinetic energy in the smaller arteries is even lower. When the total flow rate is 5 liters per minute, the blood velocity in the capillaries is just 0.33 millimeters per second.
As the rate of blood flow increases, the kinetic energy of the blood becomes
increasingly significant. For example, if the blood flow rate increases to 25 liters
per minute during physical exercise, the kinetic energy of the blood is 83,300
erg/cm3, which is comparable to a pressure of 62.5 torr. When compared to resting
blood pressure, this energy is no longer insignificant. The increased velocity of
blood flow during physical exercise is not an issue in healthy arteries. Blood
pressure rises to compensate for the pressure loss during vigorous exertion.
Equation 11.7 demonstrates that when a fluid's velocity surpasses a certain critical value, the flow becomes turbulent. Blood flow is laminar throughout most of the circulatory system; only in the aorta does the flow become turbulent on occasion. The critical velocity for the onset of turbulence in the 2-cm-diameter aorta, assuming a Reynolds number of 2000, is, according to Eq. 11.7,
v_c = Rη / (ρD) = (2000 × 0.04) / (1.05 × 2) = 38 cm/sec
The flow velocity in the aorta is less than this while the body is at rest. However, if
physical activity levels rise, the aorta's flow rate may exceed the critical rate and
become turbulent. Unless the channels are unusually constricted, the flow in the
rest of the body remains laminar. Laminar flow is calm, but turbulent flow
produces noises as a result of vibrations in the surrounding tissues, which signal
circulatory system problems. A stethoscope can detect these noises, which are
known as bruits, and can aid in the diagnosis of circulatory diseases.
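The two aortic figures quoted above can be checked with a few lines of Python (a sketch; the density, viscosity and critical Reynolds number are the values used in the text):

import math

Q = 5000.0 / 60.0      # 5 L/min in cm^3/s
D = 2.0                # aortic diameter, cm
rho, eta = 1.05, 0.04  # g/cm^3 and poise
Re_c = 2000            # critical Reynolds number assumed in the text

v_avg = Q / (math.pi * (D / 2.0)**2)  # average velocity, ~26.5 cm/s
v_c = Re_c * eta / (rho * D)          # critical velocity, Eq. 11.7, ~38 cm/s
print(v_avg, v_c, "laminar at rest:", v_avg < v_c)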
The rate of blood flow via the right ventricle, which pumps blood to the
lungs, is the same as the rate of blood flow through the left ventricle. However, the
blood pressure here is only one-sixth that of the aorta. As a result, the right
ventricle's power production is 0.25 W at rest and 4.5 W during vigorous physical
exercise. Depending on the intensity of the physical activity, the heart's overall
peak power output ranges between 1.9 to 14.6 W. While in fact the systolic blood
pressure rises with increased blood flow, in these calculations we have assumed
that it remains at 120 torr (the torr is a unit of pressure based on an absolute scale, defined as exactly 1/760 of a standard atmosphere, 101 325 Pa; thus 1 torr is exactly 101325/760 ≈ 133.32 Pa).
Measurement of Blood Pressure.
The arterial blood pressure is a key measure of a person's overall health. Blood
pressure readings that are unusually high or low suggest that there are problems in
the body that require medical treatment. High blood pressure, which can be
produced by constrictions in the circulatory system, indicates that the heart is
working harder than usual and that the extra load may be putting it at risk.
Inserting a vertical glass tube into an artery and watching the height to which the
blood rises is the most straightforward way to measure blood pressure. Reverend
Stephen Hales inserted a long vertical glass tube into an artery of a horse in 1733 to
measure blood pressure for the first time. Although complex variations of this
technique are still utilized in rare circumstances, it is clear that this method is
insufficient for normal clinical exams. The cut-off method is now the most often
used for routine blood pressure measurements. Although not as precise as direct
measurements, this procedure is straightforward and, in most instances, adequate.
A cuff containing an inflating balloon is tightened around the upper arm in this
procedure. A bulb inflates the balloon, and a pressure gauge measures the pressure
within the balloon. Because the initial pressure in the balloon is higher than the
systolic pressure, blood flow through the artery is stopped. By progressively
releasing some of the air, the observer allows the pressure in the balloon to decline.
She listens with a stethoscope over the artery downstream from the cuff as the
pressure lowers. Until the pressure in the balloon drops to the systolic pressure, no
sound is heard. Blood begins to flow through the artery just below this point;
however, because the artery is still somewhat restricted, the flow is turbulent and
accompanied by a distinctive sound. The systolic blood pressure is the pressure
measured at the start of sound. The artery grows to its usual size as the pressure in
the balloon lowers, the flow becomes laminar, and the noise goes away. The
diastolic pressure is the pressure at which the sound begins to diminish.
The variation of blood pressure along the body must be taken into account while
taking clinical measures. The cuff is put on the arm, approximately at heart level,
for the cut-off blood pressure measurement.
Whole blood is a circulating tissue made up of fluid plasma (55 percent by
volume) and formed elements (Red Blood Cells, White Blood Cells, and Platelets,
about 45 percent by volume), but it can be modeled as a suspension with
interacting deformable particles of various shapes and dimensions dispersed in a
complex solution called plasma. The exact content and dimensions of cells differ
slightly between people and depending on their health. Plasma, like other biofluids,
is a solution containing a variety of solutes (9.77 g/100ml) such as proteins,
carbohydrates, electrolytes, organic acids, lipids, and so on. The formed elements are primarily erythrocytes (about 95%), leukocytes, and platelets, with sizes ranging from 1 to 20 μm.
Laboratory work
The purpose of the laboratory work: determination of the viscosity of a liquid using an Ostwald viscometer; study of the dependence of viscosity on concentration; plotting the concentration dependence of viscosity and finding unknown concentrations from the graph.
№        | H2O | C= % | C= % | C= % | C= X%
1        |     |      |      |      |
2        |     |      |      |      |
3        |     |      |      |      |
4        |     |      |      |      |
5        |     |      |      |      |
t (mean) |     |      |      |      |
Here the densities of the different liquids are needed; take the numerical values from the table in Appendix E.
3. Plot the dependence η = f(C%) and find the unknown concentration of the liquid.
4. Conclusions.
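For the processing itself, the Ostwald method compares the flow time of the solution with that of water; the following is a minimal sketch, assuming the standard comparison relation η = η₀·ρ·t/(ρ₀·t₀) (not stated explicitly in the visible text) and placeholder readings, not measured data:

eta0, rho0, t0 = 0.010, 0.998, 42.0  # water: viscosity (poise), density (g/cm^3), mean flow time (s)

def viscosity(rho, times):
    t = sum(times) / len(times)            # mean of the repeated flow times
    return eta0 * (rho * t) / (rho0 * t0)  # standard Ostwald comparison relation

print(viscosity(rho=1.05, times=[55.1, 54.8, 55.3]))  # poise, for one solution

The values obtained for the known concentrations give the η = f(C%) graph from which the unknown concentration is then read.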
2. When the aorta's blood flow rate is 5 liters per minute, the blood velocity
in the capillaries is 0.33 mm/sec. Calculate the number of capillaries in the
circulatory system if the average diameter of a capillary is 0.008 mm.
XII
BIOPHYSICS OF VISION. SPECIAL TECHNIQUES OF MICROSCOPY
AND POLARIZATION OF BIOLOGICAL OBJECTS.
Vision is our most important source of information about the external world.
It has been estimated that about 70% of a person’s sensory input is obtained
through the eye. The three components of vision are the stimulus, which is light;
the optical components of the eye, which image the light; and the nervous system,
which processes and interprets the visual images.
Nature of Light
Experiments conducted in the nineteenth century proved beyond a shadow of a
doubt that light possesses all of the features of wave motion. However, it was
demonstrated at the turn of the century that wave conceptions alone do not fully
explain the properties of light. Light and other electromagnetic radiation can
appear to be made up of little packets of energy (quanta) in some instances.
Photons are the units of energy that make up these packets. Each photon has a
fixed amount of energy E for a certain frequency f of radiation:
E= hf (12.1)
where h is Planck’s constant, equal to 6.62607015×10−34 J⋅s in SI units. Both of
these qualities of light must be considered in our examination of vision. All
phenomena related with the propagation of light through bulk matter are explained
by the wave characteristics, and the quantum nature of light must be invoked to
understand the effect of light on photoreceptors in the retina.
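As a quick numerical illustration of Eq. 12.1 (a sketch; the 500 nm wavelength is an assumed example of visible light):

h = 6.62607015e-34  # J*s, Planck's constant
c = 3.0e8           # m/s, speed of light (rounded)
f = c / 500e-9      # frequency of 500 nm light
E = h * f           # photon energy, Eq. 12.1
print(E, "J =", E / 1.602e-19, "eV")  # ~4.0e-19 J, about 2.5 eV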
Structure of the Eye
Figure 12.1 shows a depiction of the human eye. The eye is approximately spherical, with a diameter of 2.4 cm. The construction of all vertebrate eyes is similar,
although the size varies. The cornea, a transparent part of the eyeball's outer
coating, allows light to enter the eye. The photosensitive retina, which covers the
rear surface of the eye, receives an inverted image of the light focussed by the lens
system of the eye. The light here causes nerve impulses, which send data to the
brain.
The curved surface of the cornea and the crystalline lens inside the eye work
together to focus light into a picture at the retina. The cornea's focusing power is
fixed. The crystalline lens' focus, on the other hand, may be adjusted, allowing the
eye to see objects from a variety of distances.
Figure 12.1. An illustration of the human eye.
The iris, which is located in front of the lens, regulates the size of the pupil, or
entrance aperture into the eye. The diameter of the aperture varies from 2 to 8 mm
depending on the intensity of the light. The eye's cavity is filled with two types of
fluid, both of which have a refractive index that is similar to that of water. The
aqueous humor is a watery fluid that fills the space between the lens and the
cornea in the front of the eye. The viscous vitreous humor fills the gap between
the lens and the retina.
The ciliary muscle, which can modify the thickness and curvature of the lens, controls the eye's focusing. Accommodation is the term for this focusing process. The crystalline lens is rather flat when the ciliary muscle is relaxed, and the eye's
focusing strength is at its lowest. A parallel beam of light is focussed at the retina
in these conditions. The relaxed eye is focused to perceive far objects because light
from distant objects is approximately parallel. In this context, “distant” refers to
distances of 6 meters or more.
The ability to focus on closer objects necessitates a higher level of focusing
capability. As light from adjacent objects enters the eye, it is divergent, so it must
be concentrated more intensely to form a picture at the retina. The crystalline lens's
focusing capability, on the other hand, has a limit. A normal young adult's eye can focus on objects as close as 15 cm when the ciliary muscle is fully contracted.
Closer objects have a blurry appearance. The near point of the eye is the shortest
distance at which acute focus can be achieved.
The crystalline lens's focusing range declines with aging. A 10-year-old child's
near point is approximately 7 cm, but by the age of 40, the near point has shifted to
approximately 22 cm. Following that, the decline is quick. The near point shifts to
roughly 100 cm at the age of 60. Presbyopia is the term for a decline in the eye's
ability to accommodate as one gets older.
Defects in Vision
Myopia (nearsightedness), hyperopia (farsightedness), and astigmatism are
three common vision disorders linked to the eye's focusing system. The first two of
these flaws are best explained by looking at how the eye perceives parallel light.
The typical eye focuses parallel light onto the retina when it is relaxed (Fig. 12.2).
The lens system of a myopic eye concentrates parallel light in front of the retina
(Fig. 12.3a). An elongated eyeball or an excessive curvature of the cornea are the most common causes of this misfocusing. The situation is reversed with hyperopia (see Fig. 12.4a): parallel light is focused behind the retina. Although the hyperopic eye can see objects at infinity, its near point is farther away than normal; in this respect, hyperopia is similar to presbyopia. The two flaws can be summarized as follows: the myopic eye converges light too much, while the hyperopic eye converges light insufficiently. A nonspherical cornea causes astigmatism, which is a vision
problem. Because an oval-shaped cornea is more severely bent in one plane than
the other, it cannot create distinct images of two perpendicular lines at the same
time. The view is distorted because one of the lines is always out of focus.
Figure 12.3. Ray path for myopia (a) and its correction (b)
Figure 12.4. Ray path for hyperopia (a) and its correction (b)
Lenses placed in front of the eye can remedy all three of these flaws. To
compensate for the extra refraction in the eye, myopia requires a divergent lens. A
converging lens, which increases the eye's focusing power, is used to correct
hyperopia. A cylindrical lens compensates for astigmatism's uneven corneal
curvature by focusing light along one axis but not the other.
Here p is infinity, as this is the effective distance for sources of parallel light. The desired location q for the virtual image is −200 cm. The focal length of the diverging lens is, therefore,
1/f = 1/∞ + 1/(−200 cm)   or   f = −200 cm = −2 m,
which corresponds to a power of 1/f = −0.5 diopters.
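The same correction can be written as a two-line helper (a sketch of the calculation above; the function name is ours):

def corrective_power(far_point_m):
    # Myopia: image objects at infinity to a virtual image at the far point,
    # so 1/f = 1/infinity + 1/(-far point) = -1/far point (diopters for meters)
    return -1.0 / far_point_m

print(corrective_power(2.0))  # far point at 200 cm gives -0.5 diopters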
Extension of Vision
The eye's field of vision is limited. Because the images on the retina are too
small, details on distant objects cannot be seen. At a distance of 500 meters, the retinal image of a 20-meter-high tree is just 0.6 mm high. The leaves on this tree are impossible to resolve with the naked eye. The eye's accommodation power limits the ability to observe small objects: the average eye's resolution is restricted to roughly 100 μm, since it cannot focus light from objects closer than about 20 cm.
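The 0.6 mm figure follows from similar triangles; a sketch, assuming an effective eye focal distance of about 1.5 cm (an assumption consistent with the number quoted above):

def retinal_image_m(object_height_m, distance_m, eye_focal_m=0.015):
    # image size : focal distance = object size : object distance
    return object_height_m * eye_focal_m / distance_m

print(retinal_image_m(20.0, 500.0) * 1000, "mm")  # ~0.6 mm for a 20 m tree at 500 m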
Two types of optical tools have been developed to broaden the range of
vision in the last 300 years: the telescope and the microscope. The telescope is
intended for the study of faraway things. The microscope is used to examine small
objects that are difficult to see with the naked eye. The magnifying properties of
lenses are used in both of these gadgets. The fiberscope, a third more contemporary
vision aid, uses total internal reflection to allow visualization of objects that are
ordinarily obscured from view.
Microscope
The object is magnified by a single lens in a basic microscope (Fig. 12.5). A
two-lens system compound microscope, as shown in Fig. 12.6, can produce better
results. A compound microscope, like a telescope, has an objective lens and an
eyepiece, but the microscope's objective has a short focal length. The objective forms a real image I1 of the object, and the eyepiece then produces the final magnified image I2, which is seen by the eye. In the life sciences, the microscope is a crucial tool. Its creation in the
1600s marked the start of the cellular level study of life. The early microscopes
produced images that were greatly distorted, but after years of improvement the instrument was brought nearly to its theoretical optimum. The diffraction
properties of light, which limit resolution to about half the wavelength of light,
define the resolution of the best current microscopes. In other words, we can see
objects as small as half the wavelength of the illuminating light with a competent
modern microscope.
An objective lens, ocular lens, lens tube, stage, and reflector are the major
components of a conventional biological microscope. Through the objective lens,
an object put on the stage is enlarged. Through the ocular lens, an enlarged picture
of the target can be seen when it is focussed.
The size of the objects that can be resolved determines the depth of penetration into the microworld and the scope of its study. Large magnifications with high resolution are feasible with modern optical microscopes.
Optical microscopy, on the other hand, has reached the limits of its capabilities due
to phenomena generated by light's wave nature (diffraction, interference). The
inability to obtain an image of an item smaller than the wavelength of
electromagnetic radiation is a basic restriction. Under typical accommodation
conditions, a microscope's optical system consists of a system of short-focus lenses: an objective and an eyepiece, which produces a magnified, virtual, and inverted image seen by the eye. The magnification of the microscope is its most
important feature, as indicated by the formula:
M = L·S / (f_objective · f_eyepiece)   (12.3)
where L is the length of the tube and S = 25 cm is the distance of best vision; equivalently, the total magnification is the product of the objective and eyepiece magnifications:
M = M_objective × M_eyepiece   (12.4)
Figure 12.7. Principle that enables magnified observation with a biological microscope.
The limit of resolution of the microscope is
Z = λ / (2NA)   (12.5)
where λ is the wavelength of the illuminating light and NA is the numerical aperture of the objective.
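As a numerical illustration of Eq. 12.5 (a sketch; green light and an oil-immersion objective with NA = 1.25 are assumed values):

lam = 550e-9          # m, wavelength of green light
NA = 1.25             # numerical aperture of the objective
Z = lam / (2 * NA)    # resolution limit, Eq. 12.5
print(Z * 1e9, "nm")  # ~220 nm, about half the wavelength of the light used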
Stereo Microscopes
Stereo microscopes are used to examine a wide
range of samples that can be held in your palm. A
stereo microscope produces a three-dimensional
image, or "stereo" image, with magnification
ranging from 10 to 40 times. Manufacturing,
quality control, coin collection, science, high
school dissection projects, and botany all employ
stereo microscopes. A stereo microscope can be
used to view a sample that does not allow light to
flow through it because it has both transmitted and
reflected illumination.
The following are samples often viewed under a
stereo microscope: coins, flowers, insects, plastic
or metal parts, printed circuit boards, fabric
weaves, frog anatomy, and wires.
Compound microscopes
A biological microscope is another name for a compound microscope. Compound
microscopes are used for histology and pathology in laboratories, schools,
wastewater treatment plants, and veterinary offices. Samples that will be seen
under a compound microscope must be flattened on a microscope slide using a
cover slip. Students frequently examine prepared slides under the microscope to
save time by obviating the need for slide preparation.
Blood cells, cheek cells, parasites, bacteria, algae, tissue, and thin portions of
organs are just a few of the items that may be viewed using a compound
microscope. Compound microscopes are used to examine samples that are invisible
to the naked eye. A compound microscope's magnification is often 40x, 100x,
400x, and occasionally 1000x. Microscopes with magnifications more than 1000x
should be avoided since they provide empty magnification with low resolution.
Inverted Microscopes
Biological inverted microscopes and
metallurgical inverted microscopes are two types
of inverted microscopes.
Biological inverted microscopes typically have magnifications of 40x, 100x, 200x, and 400x. Living samples in a petri dish are
viewed with these biological inverted
microscopes. The objective lenses are housed
beneath the stage in an inverted microscope,
allowing the operator to position the petri dish on
a flat stage.
In-vitro fertilization, live cell imaging,
developmental biology, cell biology,
neuroscience, and microbiology all use inverted
microscopes. In research, inverted microscopes
are frequently used to examine and study tissues
and cells, particularly living cells.
Polarizing Microscopes
Polarizing microscopes study chemicals, rocks, and minerals using polarized light,
as well as transmitted and reflected illumination. On a regular basis, geologists,
chemists, and the pharmaceutical industry use polarizing microscopes.
A polarizer and an analyzer are included in every polarizing microscope. The polarizer transmits only light waves vibrating in a single plane; the analyzer determines the amount and direction of the light that will illuminate the sample. Because of this property, the microscope is ideal for viewing birefringent materials.
In contrast to natural (unpolarized) light, light waves that vibrate in a single plane are known as polarized waves. Plane polarized light is made up of waves that all have the same direction of vibration: it vibrates in only one plane, as shown in fig. 12.8. Polarization is the process of converting non-polarized light into polarized light.
Malus’ law states that the intensity of plane-polarized light that passes through an
analyzer varies as the square of the cosine of the angle between the plane of the
polarizer and the transmission axes of the analyzer.
I = I₀·cos²θ   (12.7)
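A minimal sketch of Eq. 12.7 applied to a stack of polaroids (the angles below are arbitrary examples; for natural light the first polaroid passes half the intensity):

import math

def through_stack(I0, angles_deg):
    I = I0 / 2.0                           # natural light -> plane polarized
    for a in angles_deg:                   # angle between successive transmission axes
        I *= math.cos(math.radians(a))**2  # Malus' law, Eq. 12.7
    return I

print(through_stack(1.0, [45.0, 45.0]))  # two further polaroids at 45 degrees each: I0/8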
Electron microscopy
This is a type of microscopy in which electron beams are utilized to make an
image of the object. They have a far higher magnification and, as a result, a lot
higher resolution than light microscopes, allowing us to observe smaller specimens
in greater detail. The resolution can be increased because the wavelength of
electrons shortens as they travel faster, hence there is a direct relationship between
reducing wavelength and increasing resolution. Transmission and scanning
electron microscopes are the two types of electron microscopes used. TEM is a
technique that includes passing a high-voltage beam through a thin layer of
specimen and collecting data on the structure. SEM, on the other hand, creates
pictures by detecting secondary electrons generated from the surface as a result of
initial electron beam excitation. The electron beam is scanned over the surface in a raster pattern, and the image is formed by mapping the detected signals to the beam position.
While TEM and SEM provide improved image quality and a wider range of images, they also have drawbacks. Because an electron microscope is sensitive to magnetic fields and requires a constant supply of cooling water running through the lenses, it is unfortunately exceedingly expensive to produce and maintain.
An electron microscope takes the shape of a vertically mounted tall vacuum column. It consists of the following elements:
1. Electron gun — a device that creates electrons by heating a tungsten filament.
2. Electromagnetic lenses — the condenser lens focuses the electron beam on the specimen, and a second condenser lens forms the electrons into a thin, tight beam. The beam exiting the specimen travels down the second set of magnetic coils, known as the objective lens, which has a high power and produces the intermediate magnified image; the final, further magnified image is produced by the third set of magnetic coils, known as the projector (ocular) lenses.
3. Specimen holder.
4. Image viewing and recording system — the final image is projected on a fluorescent screen; below the fluorescent screen is a camera for recording the image.
Applications
• Electron microscopes are used to study the ultrastructure of microbes, cells, large molecules, biopsy samples, metals, and crystals, among other biological and inorganic objects.
• Electron microscopes are commonly used in industry for quality control
and failure analysis, and modern electron microscopes make electron
micrographs with the use of specialized digital cameras and frame grabbers.
Features
• Magnification is extremely high.
• Exceedingly high resolution
• The material is rarely deformed during preparation
• A greater depth of field can be investigated
• There are numerous applications
Limitations
- An interference microscope uses the phenomenon of interference. Each beam entering the microscope is split in two: one of the resulting rays is directed through the observed particle, and the second passes beside it (along the additional optical branch of the microscope). In the ocular part of the microscope, the two beams are recombined and interfere with each other. This produces colored images that provide very valuable information in the study of living objects.
Converging Lenses
A simple converging lens is shown in Fig. 12.9. This type of lens is called a convex lens. Parallel rays of light passing through a convex lens converge at a
point called the principal focus of the lens. The distance of this point from the lens
is called the focal length f. Conversely, light from a point source at the focal point
emerges from the lens as a parallel beam. The focal length of the lens is
determined by the index of refraction of the lens material and the curvature of the
lens surfaces.
Figure 12.9. The convex lens illuminated (a) by parallel light, (b) by point source at the focus.
The focal length is given by the lens-maker's equation
1/f = (n − 1)(1/R₁ − 1/R₂)   (12.8)
where n is the index of refraction of the lens material and R1 and R2 are the radii of curvature of the first and second surfaces, respectively (Fig. 12.10). In Fig. 12.10, R2 is a negative number.
Figure 12.10. Radius of curvature defined for a lens.
Focal length is a measure of the converging power of the lens. The shorter the
focal length, the more powerful the lens. The focusing power of a lens is often
expressed in diopters defined as
focusing power (diopters) = 1 / f (meters)   (12.9)
If two thin lenses with focal lengths f1 and f2, respectively, are placed close
together, the focal length fT of the combination is
1/f_T = 1/f₁ + 1/f₂   (12.10)
Light from a point source located beyond the focal length of the lens is
converged to a point image on the other side of the lens (Fig. 12.11a). This type of image is called a real image because it can be seen on a screen placed at the
point of convergence. If the distance between the source of light and the lens is less
than the focal length, the rays do not converge. They appear to emanate from a
point on the source side of the lens. This apparent point of convergence is called a
virtual image (Fig. 12.11b).
Figure 12.11. Image formation by a convex lens: (a) real image, (b) virtual image.
For a thin lens, the relationship between the source and the image distances
from the lens is given by
1/p + 1/q = 1/f   (12.11)
Here p and q, respectively, are the source and the image distances from the lens.
By convention, q in this equation is taken as positive if the image is formed on the
side of the lens opposite to the source and negative if the image is formed on the
source side.
Light rays from a source very far from the lens are nearly parallel; therefore,
by definition we would expect them to be focused at the principal focal point of the
lens. This is confirmed by Eq. 12.11, which shows that as p becomes very large
(approaches infinity), q is equal to f.
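Equation 12.11 solved for the image distance gives q = pf/(p − f); a short sketch (the focal length and source positions are illustrative):

def image_distance(p, f):
    # Thin-lens equation, Eq. 12.11, rearranged for q
    return p * f / (p - f)

f = 0.015                         # m, lens focal length
for p in (0.10, 1.0, 1e6):        # source moved farther and farther from the lens
    print(p, image_distance(p, f))  # q approaches f as p approaches infinity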
Diverging Lenses
An example of a diverging lens is the
concave lens shown in Fig. 12.12. Parallel light
diverges after passing through a concave lens. The
apparent source of origin for the diverging rays is
the focal point of the concave lens.
All the equations we have presented for the converging lens apply in this case also, provided the sign conventions are obeyed. From Eq. 12.8, it follows that the focal length for a diverging lens is always negative, and the lens produces only virtual images.
Figure 12.12. A diverging lens.
2. The optical power of a thin glass lens in air is D = 7 diopters. What is the
optical power of this lens immersed in water?
4. Compute the change in the position of the image formed by a lens with a
focal length of 1.5 cm as the light source is moved from its position at 6 m from
the lens to infinity.
5. Calculate the size of the retinal image of a 10-cm leaf from a distance of
500 m.
6. A third polaroid is placed between two crossed polaroids so that its main plane makes an angle of 30° with the main plane of the first polaroid. How will the intensity of a beam of natural light passing through such a device change? Absorption is neglected.
XIII
EYE REFRACTION AND REFRACTOMETRIC RESEARCH METHODS IN MEDICINE. REFRACTIVE INDICES OF THE EYE AND FLUIDS. INTROSCOPY.
Geometric Optics
The wave properties of light can be used to derive completely the behavior of optical components such as mirrors and lenses. However, because the wave front must be tracked along every point on the optical component, such comprehensive calculations are usually quite difficult. If the optical components are substantially larger than the wavelength of light, the problem can be simplified. The simplification consists of ignoring some of light's wave qualities and treating light as a ray traveling perpendicular to the wave front (Fig. 13.1). The ray of light travels in a straight path in a homogeneous medium, changing direction only at the interface between two media. Geometric optics is the name for this simplified method.
The index of refraction n of a medium is defined as
n = c/v   (13.1)
where c is the speed of light in vacuum and v is the speed of light in the material.
When light enters from one medium into another, its direction of propagation is
changed (see Fig. 13.2). This phenomenon is called refraction. The relationship
between the angle of incidence (θ1) and the angle of refraction (θ2) is given by
sin θ₁ / sin θ₂ = n₂ / n₁   (13.2)
The relationship in Eq. 13.2 is called Snell’s law. As shown in Fig. 13.2, some of
the light is also reflected. The angle of reflection is always equal to the angle of
incidence.
In Fig. 13.3, the angle of incidence θ1 for the entering light is shown to be greater
than the angle of refraction θ2. This implies that n2 is greater than n1 as would be
the case for light entering from air into glass. If, on the other hand, the light
originates in the medium of higher refractive index, as shown in Fig. 13.2, then the
angle of incidence θ1 is smaller than the angle of refraction θ2. At a specific value
of angle θ1 called the critical angle (designated by the symbol θc), the light
emerges tangent to the surface, that is, θ2 = 90°. At this point, sin θ2 = 1 and,
therefore, sin θ1= sin θc= n2/n1. Beyond this angle, that is for θ1 > θc, light
originating in the medium of higher refractive index does not emerge from the
medium. At the interface, all the light is reflected back into the medium. This
phenomenon is called total internal reflection. For glass, the refractive index n1 is typically 1.5, and the critical angle at the glass-air interface is given by sin θc = 1/1.5, or θc = 42°.
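A small helper for critical angles (a sketch of the relation sin θc = n2/n1 above; the diamond value is an assumed extra example):

import math

def critical_angle_deg(n1, n2):
    # Total internal reflection requires n1 > n2; sin(theta_c) = n2/n1
    return math.degrees(math.asin(n2 / n1))

print(critical_angle_deg(1.5, 1.0))   # glass-air: ~42 degrees
print(critical_angle_deg(2.42, 1.0))  # diamond-air: ~24 degrees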
Glass and other transparent materials can be fashioned into lenses to change the
direction of light in a precise way. There are two types of lenses: converging lenses
and diverging lenses. The beams of light are brought together by a converging lens,
which changes the direction of light. A diverging lens, on the other hand, has the
opposite effect, spreading light rays apart.
We can calculate the size and shape of images created by optical components using
geometric optics, but we can't forecast the unavoidable blurring of images caused
by light's wave nature.
Refraction is the phenomenon that allows the eye, as well as cameras and other
lens systems, to produce images. Because the transition from air to cornea is the
highest change in index of refraction that light undergoes, the majority of that
refraction occurs at the initial surface. The cornea accounts for around 80% of
refraction, while the inner crystalline lens accounts for 20%. While the inner lens
accounts for a smaller amount of the refraction, it is the sole source of the eye's
capacity to accommodate the focus for near viewing. The inner lens can modify the
total focal length of the eye by 7-8 percent in a typical eye. Common eye defects
are often called "refractive errors" and they can usually be corrected by relatively
simple compensating lenses.
Fiber Optics.
Fiber-optic devices are increasingly used in a variety of medical settings. Their
operation is based on a basic idea. If light moving through a substance with a high index of
refraction meets the border of a material with a lower refractive index at an angle greater than the critical angle θc, it is completely reflected back into the material. As seen in Fig.
13.4, light can be limited to travel within a glass cylinder in this manner. Since the dawn
of optics, this occurrence has been well documented. Before the phenomena could be
extensively used, however, important improvements in materials technology were
required.
Figure 13.4. Light confined to travel inside a glass cylinder by total reflection.
Optical fiber technology, which was developed in the 1960s and 1970s,
allowed for the production of low-loss, thin, highly flexible glass fibers capable of
carrying light over vast distances. The diameter of a typical optical fiber is roughly
10 µm, and it is composed of high purity silica glass. A cladding is applied to the
fiber to maximize light trapping. Light may be carried along complex twisting
routes for several kilometers without considerable loss using such fibers.
Fiberscopes, often known as endoscopes, are the most basic of fiber-optic
medical instruments. Internal organs such as the stomach, heart, and bowels are
visualized and examined with them. A fiberscope is a flexible apparatus made up
of two bundles of optical fibers. Each bundle has roughly 10,000 fibers and is
about a millimeter in diameter. The bundles are thicker in some applications, up to
1.5 cm in diameter. The length of the bundles varies depending on their function,
ranging from 0.3 to 1.2 meters.
The two bundles are threaded toward the organ to be studied through orifices, veins, or arteries and delivered into the body as a unit. A high-intensity source of light,
such as a xenon arc lamp, is focussed into a single bundle that carries the light to
the organ to be investigated.
Each of the fibers in the other
bundle collects light reflected from a
small region of the organ and carries
it back to the observer. Here the light
is focused into an image which can
be viewed by eye or displayed on a
cathode ray or some other type of
electronic screen. In the usual
arrangement, the illuminating bundle
surrounds the light-collecting
bundle. Most endoscopes now utilize
attached miniature video cameras to
Figure 13.5. Endoscope principle.
form images of the internal organs
for display on TV monitors.
The fraction of dissolved solids in a solution is usually linearly (or nearly linearly)
related to the refractive index. The concentration of a solute can be calculated with
high accuracy by comparing the value of a solution's refractive index to that of a
standard curve. A "Brix" scale, which is calibrated to give the percentage of
sucrose dissolved in water, is found in several refractometers.
Refractometry
Refractometry is a technique for determining a substance's refractive index. The
index of refraction or refractive index (n) of a substance is defined as the ratio of
the speed of light in a vacuum to the speed of light in another substance.
An Abbe refractometer is used to make the measurement. The working principle of the Abbe refractometer is based on the critical angle (Fig. 13.6). The sample is sandwiched between two prisms, an illuminating prism and a measuring prism. Light enters the sample through the illuminating prism, is refracted at the critical angle at the bottom surface of the measuring prism, and then the position of the border between the bright and dark areas is measured using the telescope. The image is inverted by the telescope, so the dark area appears at the bottom, even though it should be in the upper half of the field of vision. Calculating the refractive index of the sample is simple when the angle and the refractive index of the measuring prism are known. The illuminating prism's surface is matte, allowing light to enter the sample at all angles, including those practically parallel to the surface.
Protocol for measuring the concentration of a solution using the
refractometer.
Materials:
Abbe refractometer
Distilled water (refractive index for water: n0 = 1.333)
saline solutions: C1%, C2%, C3%, C4% and x%
Cotton, filter paper
Procedure:
A. Measuring the refractive indices:
1. open the prism block and keep the lower prism horizontally
2. add two-three drops of distilled water on the surface of the lower prism so that a
thin film of solution is formed.
Attention! Please add the solution carefully so as not to scratch the surface of the prism.
3. close the prism block.
4. turn on the light source and adjust the mirror so that good illumination of the visual field is obtained.
5. rotate the prism block (using the corresponding knob) until a dark region (lower) and a bright region (upper) are obtained in the visual field.
6. adjust the dispersion using the compensator knob (no rainbow/ colored line
should be visible between two regions in the visual field).
7. bring the clear delimitation line localized between the bright and dark regions to
the center of the visual field (at the intersection of the rectangular wires).
8. read the refractive index with three decimals.
9. repeat the steps 1 to 8 for all saline solutions: C1%, C2%, C3%, C4% and x%
10. perform three measurements for each saline solution and write the results in a
table:
№     | C1= | C2= | C3= | C4= | Cx=?
1     |     |     |     |     |
2     |     |     |     |     |
3     |     |     |     |     |
n̄     |     |     |     |     |
StDev |     |     |     |     |
StErM |     |     |     |     |
Observation:
Carefully clean the surfaces of the prisms with distilled water and cotton between the measurements.
11. reorganize the working place.
B. Calculating the unknown concentration:
1. calculate the average, standard deviation, standard error mean for each analyzed
solution.
2. draw a graph n = f(c) – refractive index function of concentration.
Attention!
Start the refractive index scale with the value of refractive index of water (not from
zero)
Select a larger unit for the refractive index scale, so that you will have enough
space to represent values with three decimals.
3. draw the regression line for the experimental points
4. identify the concentration for x% saline solution by using graph.
5. state one conclusion according to your results.
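Part B can be carried out in a few lines of Python (a minimal sketch; all readings below are placeholders, not measured data, and the least-squares line is fitted manually):

import statistics

readings = {2.0: [1.337, 1.338, 1.337],  # C% -> three refractive-index readings
            4.0: [1.341, 1.340, 1.341],
            6.0: [1.344, 1.345, 1.344],
            8.0: [1.348, 1.348, 1.349]}

C = list(readings)
n_mean = [statistics.mean(v) for v in readings.values()]
for c, v in readings.items():  # step B.1: average, StDev, StErM per solution
    print(c, statistics.mean(v), statistics.stdev(v), statistics.stdev(v) / len(v)**0.5)

# Steps B.3-B.4: least-squares line n = a*C + b, then invert it for the unknown
Cm, nm = statistics.mean(C), statistics.mean(n_mean)
a = sum((x - Cm) * (y - nm) for x, y in zip(C, n_mean)) / sum((x - Cm)**2 for x in C)
b = nm - a * Cm
n_x = 1.343                      # mean reading of the x% solution (placeholder)
print("C_x =", (n_x - b) / a, "%")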
2. A ray of light passes from kerosene into the air. The limiting angle of total internal reflection for this case is 42°23′. What is the speed of propagation of light in kerosene?
3. The refractive index of the glass is 1.52. Find the limiting angles of total
internal reflection for the interfaces: a) glass-air, b) water-glass
XIV
REGISTRATION OF SUPERWEAK BIOLUMINESCENCE AND
BRIGHTNESS ENHANCEMENT OF THE X-RAY IMAGE.
PHOTOELECTRIC CONVERTERS, PHOTOELECTRONIC
AMPLIFIERS, ELECTRON-OPTICAL CONVERTERS.
X-rays are now widely employed in diagnostic and interventional medical
imaging all around the world. X-rays are frequently used in industry as well, for
example, in the field of non-destructive testing to check for very small flaws in
metal parts. A range of applications in medical imaging have been developed that
go well beyond traditional radiographic imaging. Fluoroscopy, for example,
provides for real-time X-ray sequences, which are frequently required in minimally
invasive procedures. Furthermore, a point of quick progress is the recognition that
X-rays can be damaging as well. Ionization can occur as a result of high energy
emitted to the body during an X-ray collection. Radiation damages
deoxyribonucleic acid (DNA) in this area. In the vast majority of situations, the
DNA will be repaired by the cell. However, the repair process can sometimes fail,
resulting in uncontrolled cell division, which can lead to cancer. The majority of
people are now aware of the dangers of X-rays, and the patient dose conveyed
during X-ray scans has been greatly lowered in recent decades. An X-ray image of
a patient's thorax, or chest, is shown in Fig. 14.1.
Using a thin metal plate placed between the patient and the X-ray source, very low
energies of an X-ray spectrum are often eliminated prior to an interaction with the
patient in medical imaging. The reason is that the patient would absorb
practically all of the low-energy photons, resulting in a higher patient dose without
a significant gain in image quality. The metallic plate is also known as an X-ray
filter, which is not to be confused with image processing mathematical filters.
Although X-rays have the power to enter matter, the number of penetrating X-ray
photons varies depending on the material. Because of their ability to penetrate
human tissue, they can be used to obtain information on internal organs.
Figure 14.1. An example of an X-ray obtained from a patient's chest.
Higher or lower energy X-ray spectra are produced by varying tube voltages
between the cathode and the anode. When X-rays flow through materials in the
energy range utilized for medical imaging, there are three types of relevant
interactions that can occur:
interaction with atomic electrons,
interaction with nucleons,
interaction with electric fields associated with atomic electrons and atomic
nuclei.
Consequently, the X-ray photons either experience a complete absorption, elastic
scattering or inelastic scattering.
The interaction employed in medical imaging is a drop in radiation intensity, which
is nothing more than a decrease in the number of photons arriving at the detector.
Attenuation is the term used to describe this process. A change in photon count,
photon direction, or photon energy are all examples of physical factors that
contribute to attenuation. All of these effects have one thing in common: they're all
based on interactions between single photons and the material they're passing
through, and the attenuation they cause is very energy-dependent.
According to the Beer–Lambert law, when a monochromatic X-ray beam penetrates a homogeneous object with absorption coefficient μ, the measured intensity I is linked to the intersection length x of the ray with the object:
I = I₀·e^(−μx)   (14.1)
Here, I0 is the X-ray intensity at the source.
This empirical relationship, also known as the Beer–Lambert–Bouguer law,
connects light absorption to the qualities of the substance through which the light is
passing.
I = I₀·e^(−kCl)   (14.2)
This law asserts that the transmission (or transmissivity) of light through a substance, T, is determined by the product of the substance's absorption coefficient, the concentration C, and the distance the light travels through the material (i.e., the path length) l (Fig. 14.2).
The Beer–Lambert law can also be formulated as follows: the optical density of the sample is directly proportional to the concentration of the substance in the sample and to the length of the light path:
D = lg(I₀/I) = εCl   (14.3)
where ε is called the molar coefficient of absorption.
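As a numerical illustration of Eq. 14.1 (a sketch; the attenuation coefficient is an assumed, soft-tissue-like value):

import math

def transmitted_fraction(mu, x):
    # Beer-Lambert attenuation, Eq. 14.1: I/I0 = exp(-mu*x)
    return math.exp(-mu * x)

print(transmitted_fraction(0.2, 10.0))  # mu = 0.2 cm^-1 over 10 cm: ~13.5% transmitted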
Photoelectric Effect
Einstein first described the photoelectric effect in the context of establishing the
quantized character of light. It occurs when the incident X-ray photon energy is
greater than the binding energy of an electron in the target material atom. To
liberate an electron from an inner-shell, the incident X-ray photon gives up all of
its energy. Photoelectron is the name for the expelled electron. After that, the
incident photon vanishes. The photo-electric process frequently leaves a vacancy in
the inner shell of the atom, which was previously occupied by the ejected electron.
As a result, an outer shell electron fills the "hole" created in the inner shell. Characteristic radiation is emitted because the outer shell electron was in a higher energy state. As a result of the photoelectric effect, a positive ion, a photoelectron, and a
characteristic radiation photon are produced. The binding energy of the K-shell
electrons in tissue-like materials is relatively low. As a result, the photoelectron
absorbs nearly all of the energy of the X-ray photon.
When electromagnetic radiation, such as light, strikes a
material, it causes electrons to be emitted.
Photoelectrons are electrons that are emitted in this way.
Photomultipliers
These are, in effect, photocells with numerous stages of amplification. The
photocathode's generated electron is propelled by a voltage drop towards an
electrode (the first dynode), which produces a secondary electron shower upon
collision (see fig. 14.4). The process is repeated through up to 9 dynodes (or stages) until the anode is reached. The original photocathode current I_c has thereby been amplified to I_c·Eⁿ, where E is the secondary electron emission coefficient and n is the number of dynodes. In most cases, the photocathode's quantum efficiency does not surpass 40%.
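The amplification I_c·Eⁿ grows very quickly with the number of dynodes; a one-line check (E = 4 and n = 9 are assumed, typical-order values):

E, n = 4, 9
Ic = 1e-12              # A, illustrative photocathode current
print(E**n, Ic * E**n)  # gain ~2.6e5, output current ~2.6e-7 A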
Image Intensifiers
X-ray image intensifiers (XRIIs) are vacuum tubes that transform X-rays into visible light, i.e. a picture. Figure 14.5 depicts the schematic principle of the process. The incoming X-ray photons are first converted to light photons using an input phosphor. Using the photoelectric effect within a photocathode, the generated light is further converted to electrons. Using an electron optic system, these electrons are then accelerated and focused on the output phosphor. There the electrons are transformed back to visible light, which can then be captured by film material or television camera tubes.
Figure 14.5. The X-ray image intensifier is depicted in a schematic figure. First, X-rays
are turned to light, which is then converted to electrons. Electrons are accelerated towards a
fluorescent screen, which converts them to light, resulting in an image.
1. focus of the x-ray tube;
2. x-ray beam;
3. translucent object;
4. electron-optical converter;
5. photo cathode;
6. electrode for sharpening the image;
7. electron beam;
8. anode;
9. screen;
10. a beam of visible light;
11. a rotating prism;
12. button for changing the path of
light rays;
13. camera;
14. a prism that rotates the image 180°;
15. viewing magnifier;
16. observation position
Figure 14.6. X-ray image amplifier circuit
X-rays passing through the patient fall on the photocathode of the image converter.
Under the influence of X-rays, electrons come out of the photocathode, which are
accelerated and focused by an electric field. When electrons reach a high speed,
they fall onto a fluorescent screen, cause it to glow with visible light, and an image
of a translucent object appears on it. This image is projected using a rotating prism
onto the image receiver (periscope device of a television or movie camera). The
image of the object appearing on the 20-25 mm diameter fluorescent screen is then magnified to its true size with the help of an eyepiece.
Radiography
The practice of obtaining two-dimensional projection images by exposing an
anatomy of interest to X-rays and measuring the attenuation they experience as
they travel through the object is known as radiography. It's a widespread type of X-
ray imaging that's employed in clinics all over the world.
The assessment of fractures and alterations in the skeletal system is the principal
application area. The high attenuation coefficient of bones in comparison to
surrounding tissue provides good contrast and allows for fracture identification and
classification. Radiography can also be used to detect changes in the consistency or
density of a bone, such as in the case of osteoporosis or bone cancer. Two X-ray
images of an arm with ulna and radius bone fractures are presented on the left in Fig. 14.7. The figure also includes a color image of the arm following intervention, as well as two more X-ray photos of the treated arm, which indicate that the bones have been internally fixed with metal plates.
Figure 14.7. X-ray image of a broken arm before and after surgery.
Fluoroscopy
Contrast agent is frequently injected into the blood circulation to improve image
quality and contrast. When compared to typical soft tissue, a contrast agent is a
liquid that has a higher attenuation coefficient. Iodine and barium are common
contrast media, with the former being utilized for intravascular and the latter for
gastrointestinal studies.
Spectroscopy
Atoms and molecules have their own absorption and emission spectra, which
are unique to each species. They can be used to detect atoms and molecules in a
variety of substances. Spectroscopic techniques were first utilized in atom and
molecule research, but they were quickly embraced in a wide range of fields,
including the life sciences.
Spectroscopy is used in biochemistry to identify the results of complex
chemical reactions. Spectroscopy is commonly used in medicine to determine the
concentration of specific atoms and molecules in the body. A spectroscopic
analysis of urine, for example, can be used to assess the body's mercury level. The
level of blood sugar is determined by first causing a chemical reaction in the blood
sample, which produces a colored product. Absorption spectroscopy is used to
determine the concentration of this colored product, which is proportional to the
blood sugar level.
The fundamental principles of spectroscopy are straightforward. In emission spectroscopy, the sample under investigation is excited by an electric current or a flame. The emitted light is then examined and identified.
material is put in the path of a white light beam in absorption spectroscopy. The
missing wavelengths that identify the components in the substance are revealed by
examining the transmitted light. Both the absorption and emission spectra can be
used to determine the concentration of various components in a substance. The
intensity of emitted light in the spectrum is proportional to the number of atoms or
molecules in the given species in the case of emission. The amount of absorption
can be related to the concentration in absorption spectroscopy. A spectrometer is a
device that analyzes spectra. The intensity of light is measured as a function of
wavelength with this apparatus.
In its most basic form, a spectrometer consists of a focusing device, a prism,
and a light detector. The focusing device forms a parallel beam of light, which
falls on the prism. The prism, which can be rotated, divides the beam into its
individual wavelengths. The fanned-out spectrum could be photographed and
examined at this point. Typically, however, only a small portion of the spectrum
is registered at a time: a narrow exit slit intercepts only a fraction of the
spectrum, and the entire spectrum is swept progressively past the slit as the prism
is turned. The position of the prism is calibrated against the wavelength
impinging on the slit. A photodetector detects the light passing through the slit
and generates an electrical output proportional to the light intensity. A chart
recorder can then display the signal's strength as a function of wavelength.
Spectrometers used in everyday clinical practice are automated and can be
operated by anyone with little or no medical training. Identification and
interpretation of spectra produced by less well-known compounds, on the other
hand, necessitates extensive training and experience. Such spectra provide
information on the molecular structure in addition to identifying the molecule.
Colorimetry is a technique for determining the wavelength and intensity of
electromagnetic radiation in the visible range. It is widely used for identifying
light-absorbing compounds and determining their quantities. Two fundamental
laws are used: Lambert's law, first developed by the French scientist Pierre
Bouguer, which relates the amount of light absorbed to the distance the light
travels through an absorbing medium; and Beer's law, which relates the amount
of light absorbed to the concentration of the absorbing substance. The two laws
can be expressed together by the equation:
log (I₀/I) = kcd (14.8)
where I₀ is the initial intensity, I is the light intensity after passing through the
sample, c is the concentration of the test substance, d is the thickness of the
absorbing layer, and k is the molar absorption coefficient, a constant that depends
on the type of absorbing substance and on the wavelength. The transmittance T is
commonly stated as a percentage of the ratio I/I₀, i.e. the proportion of incident
light that passes through unabsorbed. The optical density, log (I₀/I), is
proportional to the concentration of the absorbing substance.
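As a simple illustration of Eq. (14.8), the short Python sketch below inverts the
relation to obtain the concentration from a measured transmittance; the numerical
values of T, k, and d are hypothetical and are not taken from the laboratory work.

    import math

    def concentration_from_transmittance(T, k, d):
        # Invert Beer-Lambert's law, log(I0/I) = k*c*d, for the concentration c.
        # T is the transmittance I/I0 (a value between 0 and 1),
        # k is the absorption coefficient, d is the absorbing layer thickness.
        optical_density = -math.log10(T)   # log(I0/I) = -log(T)
        return optical_density / (k * d)

    # Hypothetical example: T = 50 %, k = 0.2 L/(mol*cm), d = 1 cm
    print(concentration_from_transmittance(0.5, 0.2, 1.0))  # about 1.5 mol/L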
Working of Colorimeter
1) Step 1: The colorimeter must be calibrated before beginning the experiment.
This is done with standard solutions of known concentration of the solute to be
determined. Fill the cuvettes with the standard solutions and place them in the
colorimeter's cuvette holder. Select the proper wavelength filter for the blank
solution and set the zero value for it by adjusting the knobs.
2) Step 2: A light ray of the particular wavelength required for the assay is
directed toward the test solution. The light passes through a succession of lenses
and filters: the lenses guide the colored light, and the filter splits the beam into
its wavelengths, allowing only the required wavelength to pass through and reach
the cuvette with the standard test solution.
3) Step 3: When the light beam reaches the cuvette, it is partly transmitted, partly
reflected, and partly absorbed by the solution. When the transmitted ray hits the
photodetector, the device measures the intensity of the transmitted light, converts
it into electrical impulses, and sends them to the galvanometer.
4) Step 4: The galvanometer measures electrical signals, which are shown in digital
form.
5) Step 5: The substance concentration in the test solution is determined from the
readings. The relation between absorbance A, concentration c (expressed in
mol/m³), and path length l (expressed in m) is given by the Beer-Lambert law.
Figure 14.8. Principle of photocolorimeter.
Table 2.
Concentration:                  С1    С2    С3    С4    С5    X
Transmission coefficient T (%):
2. Based on the results, construct the graph T = f(C).
3. Determine the unknown concentration from the graph.
4. Make a conclusion.
3. One third of the incident light flux passes through a layer of aqueous solution
with a thickness of 4.2 cm. Determine the concentration of this solution if the
specific absorption index is 0.01 cm⁻¹·g⁻¹·mol. Assume that 10% of the
light flux is reflected from the surface of the solution.
XV
RADIOACTIVITY. X-RAY AND DOSIMETRY.
BIOLOGICAL EFFECTS AND MECHANISMS OF ACTION OF
RADIATION.
Despite the fact that all atoms of a given element have the same number of
protons in their nucleus, the number of neutrons might differ. Isotopes are atoms
with the same number of protons but differing numbers of neutrons. The nuclei of
oxygen atoms, for example, all have eight protons, but the number of neutrons in
the nucleus can be eight, nine, or ten. These are the isotopes of oxygen. They are
designated as ¹⁶₈O, ¹⁷₈O, and ¹⁸₈O. This is a sort of nuclear symbology in which the
number of protons in the nucleus is the subscript to the chemical symbol of the
element, and the superscript is the sum of the number of protons and neutrons. The
stability of the nucleus is frequently determined by the quantity of neutrons.
Most naturally occurring atoms have stable nuclei. When left alone, they do
not alter. Many unstable nuclei, on the other hand, undergo changes that result in
the emission of intense radiation.
The emitted particles from these radioactive nuclei are divided into three
categories: (1) alpha (α) particles, which are high-speed helium nuclei consisting
of two protons and two neutrons; (2) beta (β) particles, which are extremely
high-speed electrons; and (3) gamma (γ) rays, which are high-energy photons.
A given element's radioactive nucleus does not produce all three radiations
at the same time. Alpha particles are emitted by some nuclei, whereas beta
particles are emitted by others, and gamma rays may accompany either process.
The transition of the nucleus from one element to another is linked to
radioactivity. When radium produces an alpha particle, for example, the nucleus is
converted into the element radon. Most physics textbooks go through the specifics
of the procedure.
A radioactive nucleus' decay or transmutation is a random process. Some
nuclei decay more quickly than others. When dealing with a large number of
radioactive nuclei, however, the rules of probability can be used to properly
forecast the aggregate decay rate. The half-life, which is the time it takes for half of
the initial nuclei to undergo transformation, characterizes this decay rate.
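The definition can be illustrated with a few lines of Python; the remaining fraction
after an elapsed time t is (1/2)^(t/T), where T is the half-life (iodine-131,
mentioned later in this chapter, is used here as the example):

    def remaining_fraction(t, half_life):
        # Fraction of the initial radioactive nuclei that have not yet decayed
        # after an elapsed time t (in the same units as half_life).
        return 0.5 ** (t / half_life)

    # Iodine-131, half-life about 8 days: after 16 days (two half-lives)
    print(remaining_fraction(16, 8))  # 0.25, i.e. one quarter remains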
The half-life of radioactive materials varies significantly. Some nuclei decay
extremely quickly, with half-lives as short as a few microseconds. Others have
half-lives of thousands of years. Only the very long-lived radioactive elements
occur naturally in the Earth's crust; the uranium isotope ²³⁸₉₂U, with a half-life
of 4.51 × 10⁹ years, is one of them.
Figure 15.1. Arrangement for detecting diffraction of X-rays by a crystal.
Molecules that can be formed into a regular periodic crystalline array are the
most successful for diffraction research. Many biological compounds can be
crystallized if the right circumstances are met. The diffraction pattern, however, is
not a unique, unambiguous representation of the molecules in the crystal. The
pattern is a representation of the arranged molecules' collective effect on the X-
rays that travel through the crystal. The structure of each individual molecule must
be deduced from the diffraction pattern's indirect evidence.
When the crystal has a simple structure, such as sodium chloride, the X-ray
diffraction pattern is equally simple and straightforward. Diffraction patterns
produced by complicated crystals, such as those made from organic molecules, are
extremely intricate. Even in this scenario, however, some information about the
structure of the molecules that make up the crystal can be obtained. Diffraction
patterns must be created from thousands of different angles to resolve the
three-dimensional characteristics of the molecules. With the help of a computer, the
patterns are then examined. These types of investigations were crucial in
determining the structure of penicillin, vitamin B12, DNA, and a variety of other
biologically significant compounds.
X-rays are produced by extracting energy from electrons and turning it into
photons of sufficient energy. The x-ray tube is where this energy transfer takes
place. Adjusting the electrical quantities (kV, mA) and the exposure time, s,
applied to the tube alters the quantity (exposure) and quality (spectrum) of the
x-radiation produced.
An X-ray tube is an energy converter: it takes electrical energy and turns it
into two other forms, x-rays and heat. The heat is an unwanted by-product. X-ray
tubes are designed and built to produce as many x-rays as possible while
dissipating heat as quickly as feasible.
An X-ray tube is a relatively simple electrical device that normally has two
main components: a cathode and an anode. The electrons lose energy when the
electrical current passes through the tube from cathode to anode, resulting in the
creation of X-radiation. Below is a cross-sectional picture of a standard X-ray tube.
Figure 15.2. X-ray tube.
The anode is the component that generates the x-radiation. It is a relatively large
piece of metal connected to the positive side of the electrical circuit. The anode
has two purposes: (1) to convert electronic energy into x-rays, and (2) to remove
the heat generated in the process. The anode material is chosen to enhance these
functions. Most anodes are shaped as beveled disks and attached to the shaft of an
electric motor that rotates them at relatively high speeds during x-ray production;
rotation of the anode is used to dissipate the heat.
The cathode's main job is to emit electrons into the electrical circuit and
concentrate them into a narrow beam aimed at the anode. A small coil of wire
(the filament) is recessed within a cup-shaped section of a typical cathode. In
most cases, electrons in electrical circuits cannot leave the conductor material
and travel into free space; however, if given enough energy, they can. Thermal
energy (heat) is used to eject electrons from the cathode in a process known as
thermionic emission. The filament of the cathode is heated by passing a current
through it, in the same way that a light-bulb filament is heated. This heating
current is distinct from the current passing through the x-ray tube that generates
the x-rays. During tube operation the cathode is heated to a glowing temperature,
and the heat energy expels some of the electrons from it.
When electrons are emitted from the cathode, they are influenced by an electrical
force that pulls them toward the anode. This force accelerates them, increasing
their velocity and kinetic energy. As the electrons travel from the cathode to the
anode, their kinetic energy increases while their electrical potential energy falls,
being converted into kinetic energy. By the time an electron reaches the anode's
surface, its potential energy has been given up entirely and all of its energy is
kinetic. The electron is then moving at a rather high velocity determined by its
actual energy content: a 100-keV electron arrives at the anode surface at a speed
of more than half the speed of light. When electrons collide with the anode's
surface, they are slowed dramatically and lose their kinetic energy, which is
transformed into either x-radiation or heat.
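The figure of "more than half the speed of light" can be checked from the
relativistic relation KE = (γ − 1)·mₑc², with the electron rest energy mₑc² = 511 keV;
a minimal Python sketch:

    import math

    ELECTRON_REST_ENERGY_KEV = 511.0  # rest energy of the electron, m_e * c^2

    def electron_speed_fraction(kinetic_energy_kev):
        # Return v/c for an electron with the given kinetic energy,
        # using the relativistic relation KE = (gamma - 1) * m_e * c^2.
        gamma = 1.0 + kinetic_energy_kev / ELECTRON_REST_ENERGY_KEV
        return math.sqrt(1.0 - 1.0 / gamma**2)

    print(electron_speed_fraction(100.0))  # about 0.548, more than half of c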
As seen in Fig. 15.4, electrons interact with specific atoms of the anode material.
Radiation is produced by two types of interactions: interactions with the electron
shells create characteristic x-ray photons, while interactions with the atomic
nucleus produce bremsstrahlung x-ray photons.
The bremsstrahlung process is the one that produces the most photons.
Bremsstrahlung is a German word meaning "braking radiation", which describes
the process well. Electrons that penetrate the anode material and pass close to a
nucleus are deflected and slowed down by the attractive force from the nucleus;
the energy the electron loses in this interaction is emitted as an x-ray photon. Not
all electrons produce photons of the same energy.
When an electron interacts within the target, it is slowed down and creates an
x-ray photon. Electrons passing closest to the nucleus are subjected to the
greatest force, lose the most energy, and produce the highest-energy photons.
Electrons passing through the outer zones interact less strongly and produce
lower-energy photons. Although the zones are almost the same width, they have
different areas; the area of a zone increases with its distance from the nucleus.
Because the total area of a zone determines the number of electrons that pass
through it, the outer zones capture more electrons and produce more photons.
Characteristic radiation, also mentioned above, is produced by a collision
between a high-speed electron and an orbital electron in the atom. The interaction
can take place only if the incoming electron has a kinetic energy larger than the
binding energy of the electron within the atom. When this condition is met and a
collision occurs, the electron is dislodged from the atom. The removal of an
orbital electron creates a vacancy, which is then filled by an electron from a
higher energy level. The filling electron gives off energy in the form of an x-ray
photon as it drops down to fill the vacancy.
In the example presented, the incoming electron dislodges a tungsten K-shell
electron with a binding energy of 69.5 keV. An electron from the L shell, with a
binding energy of 10.2 keV, fills the vacancy. The energy of the resulting
characteristic x-ray photon is therefore the difference between these two levels,
59.3 keV.
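The arithmetic of this example is simply the difference of the two binding
energies; in Python:

    K_SHELL_BINDING_KEV = 69.5   # tungsten K-shell binding energy
    L_SHELL_BINDING_KEV = 10.2   # tungsten L-shell binding energy

    # Energy of the characteristic photon emitted when an L-shell electron
    # fills the K-shell vacancy:
    print(K_SHELL_BINDING_KEV - L_SHELL_BINDING_KEV)  # 59.3 keV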
In practice, a particular anode material produces a variety of distinct x-ray
energies, because the bombarding electrons can dislodge electrons from different
energy levels (K, L, etc.), and the vacancies can be filled from different levels.
The electronic energy levels of tungsten, along with some of the transitions that
give rise to characteristic photons, are depicted below. Photons are also produced
when L-shell vacancies are filled, but their energies are too low for diagnostic
imaging. Each characteristic energy is designated by the shell in which the
vacancy occurred, with a subscript indicating the filling electron's origin: a
subscript alpha (α) denotes filling by an L-shell electron, and beta (β) indicates
filling from either the M or N shell.
The spectrum of tungsten's significant characteristic radiation is depicted below.
Bremsstrahlung produces a continuous spectrum of photon energies over a
specified range, whereas characteristic radiation produces a line spectrum with
several discrete energies. Because the probability of filling a K-shell vacancy
differs from shell to shell, the number of photons generated at each characteristic
energy also differs.
Figure 15.4. Distribution of energy levels of tungsten electrons and characteristic X-ray spectrum
corresponding to these levels
Only a small portion of the energy delivered to the anode by the electrons is
converted to x-radiation; the majority is absorbed and transformed into heat. The
efficiency of x-ray production is the total x-ray energy expressed as a fraction of
the total electrical energy delivered to the anode. The two factors that affect
production efficiency are the voltage applied to the tube (kV) and the atomic
number of the anode material (Z). There is a rough relationship:
Efficiency = kV × Z × 10⁻⁶
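For instance, for a tungsten anode (Z = 74; the anode material is assumed here,
since the text does not fix it) operated at 100 kV, this rough relationship predicts
an efficiency well below one percent, which is why most of the energy ends up as
heat:

    def xray_production_efficiency(kv, z):
        # Rough estimate: fraction of the electrical energy delivered to the
        # anode that is converted to x-rays (Efficiency = kV * Z * 1e-6).
        return kv * z * 1e-6

    print(xray_production_efficiency(100, 74))  # 0.0074, i.e. about 0.74 %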
The x-ray efficacy of an x-ray tube is defined as the exposure, in milliroentgens,
delivered to a point at the center of the useful x-ray beam, 1 m from the focal
spot, per 1 mAs of electron charge passing through the tube.
The efficacy value is a measure of a tube's capacity to convert electronic energy
into x-ray exposure. Knowing the efficacy value for a particular tube allows
patient and image-receptor exposures to be calculated using methods described in
later chapters. The efficacy of a tube is determined by a variety of factors,
including kV, voltage waveform, anode material, filtration, tube age, and anode
surface degradation, much as x-ray production efficiency is.
Figure 15.5. (a) Schematic diagram of the operation of the tomograph; (b) rotation of the source-
detector system makes it possible to obtain a layer-by-layer image of the patient's organ.
The scanning beam is shown schematically in Fig. 15.5b at two angles, each with
two lateral positions. While the detected signal at each position integrates
information about the whole path, two paths that intersect share information
about their single point of intersection. Four such points are indicated in the
diagram, at the intersections of the beams AB, A'B', CD, and C'D'. The numerous
measurements acquired by translation and rotation contain information on the
X-ray transmission properties of every point in the plane of the object being
studied.
These signals are stored, and a point-by-point image of the narrow slice scanned
within the body is created using a relatively complicated computer analysis.
Typically, the slices imaged in this manner are around 2 mm thick. In more
recent versions of the equipment, the object is scanned by a fan-shaped beam
rather than a single pencil beam, and the signal is recorded by an array of many
detectors. Data collection is thereby sped up, and an image is produced in a
matter of seconds.
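The idea behind this computer analysis can be illustrated, in greatly simplified
form, by unfiltered back-projection: each recorded profile is smeared back across
the image plane along its original direction, and the contributions from all angles
add up at the true location of the object. The Python sketch below is a toy model
only (a single bright pixel as the "organ", with numpy and scipy assumed
available), not the reconstruction algorithm of any particular scanner:

    import numpy as np
    from scipy.ndimage import rotate

    # Toy phantom: one bright pixel in an otherwise empty 64 x 64 slice
    phantom = np.zeros((64, 64))
    phantom[40, 25] = 1.0

    angles = np.arange(0, 180, 5)  # scan directions in degrees

    # Forward projection: at each angle, sum the rotated slice along columns
    sinogram = [rotate(phantom, a, reshape=False).sum(axis=0) for a in angles]

    # Unfiltered back-projection: smear each profile back and accumulate
    reconstruction = np.zeros_like(phantom)
    for a, profile in zip(angles, sinogram):
        smear = np.tile(profile, (64, 1))          # constant along each ray
        reconstruction += rotate(smear, -a, reshape=False)

    # The maximum of the reconstruction lands near the original pixel
    print(np.unravel_index(reconstruction.argmax(), reconstruction.shape))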
Radiation dosimetry is the measurement, calculation, and evaluation of the
ionizing radiation dose absorbed by an object, usually the human body, in the
fields of health physics and radiation protection. This applies both to internal
exposure, resulting from ingested or inhaled radioactive compounds, and to
external exposure from irradiation by radiation sources.
External dosimetry is based on measurements using a dosimeter or extrapolated
from data obtained by other radiological protection instruments, whereas internal
dosimetry is based on a variety of monitoring, bio-assay, or radiation imaging
techniques.
Absorbed dose (D) is a dosage quantity that represents the amount of energy
deposited in matter per unit mass by ionizing radiation:
D = dE/dm
The absorbed dose is used both in radiation protection and to quantify dose
uptake in living tissue (assessment of harmful effects). Dose rate refers to the
change in dose per unit of time.
The gray (Gy) is the SI unit of absorbed dose; it is defined as one joule of
energy absorbed per kilogram of mass.
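In Python, the definition amounts to a single division (the numbers below are
made up for illustration):

    def absorbed_dose_gray(energy_joules, mass_kg):
        # Absorbed dose D = dE/dm; the gray is one joule per kilogram.
        return energy_joules / mass_kg

    # e.g. 0.07 J deposited in a 70 kg body corresponds to 1 mGy
    print(absorbed_dose_gray(0.07, 70.0))  # 0.001 Gy = 1 mGy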
The energy of particles is practically absorbed at the point of encounter in the case
of directly ionizing radiation. In the case of indirectly ionizing radiation, the
interaction may occur at locations other than those where secondary charged
particle energies are absorbed.
Equivalent dose (H) is expressed in millisieverts (mSv); effective dose (HE) is
likewise expressed in millisieverts (mSv).
The notion of effective dose was created as a tool for radiation safety in the
workplace and in the general population. It can be useful for comparing doses
from various diagnostic and interventional procedures. It also enables the
comparison of doses produced by different techniques or technologies used for
the same medical examination, as well as doses produced by similar procedures
conducted in different institutions.
representative patients used to calculate the effective dose are similar in terms of
sex, age, and body mass. The effective dose was not designed to provide a precise
estimate of the risk of radiation damage for a single person. For individual risk
assessment as well as for epidemiological studies, the organ dose (either absorbed
or equivalent organ dose) would be a more appropriate quantity.
For medical exposures, the collective effective dose is used for comparison
of estimated population doses, but it is not intended for predicting the occurrence
of health effects. It's calculated by multiplying a radiological procedure's mean
effective dose by the predicted number of procedures in a given population. To
depict global trends in the medical use of radiation, the total effective dose from all
radiological procedures for the entire population can be used. The sievert (Sv) is
the unit of both equivalent and effective dose. For diagnostic imaging a sievert is
a relatively large unit, and the millisievert (mSv) is frequently more convenient;
one sievert equals one thousand millisieverts. The collective effective dose is
expressed in person-sieverts (person-Sv).
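The calculation rule just described can be sketched in a few lines of Python; the
mean dose per procedure and the number of procedures are hypothetical values
chosen only for illustration:

    def collective_effective_dose_person_sv(mean_dose_msv, n_procedures):
        # Mean effective dose per procedure (in mSv) times the expected
        # number of procedures, converted from mSv to Sv (divide by 1000).
        return mean_dose_msv / 1000.0 * n_procedures

    # Hypothetical example: a procedure with a 7 mSv mean effective dose
    # performed 10,000 times in a population
    print(collective_effective_dose_person_sv(7.0, 10000))  # 70 person-Sv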
Radiation Therapy
X-ray and gamma-ray photons, as well as particles released by radioactive nuclei,
have energies that are significantly larger than the energies that bind electrons to
atoms and molecules. As a result, when such radiation passes through biological
materials, it can rip electrons from biological molecules, causing significant
structural changes. The ionized molecule may disintegrate or chemically interact
with another molecule to generate an unwanted combination. If a damaged
molecule is a critical component of a cell, the cell as a whole may perish. Radiation
also breaks up water molecules in the tissue into reactive fragments (H + OH).
These fragments bind to biological molecules and cause them to change in a
negative way. Furthermore, radiation travelling through tissue may simply give up
its energy and heat the tissue to an extremely harmful temperature. A large dose of
radiation can kill an organism by damaging so many cells. Smaller but still lethal
amounts can result in irreversible alterations such mutations, sterility, and cancer.
Radiation, on the other hand, can be used medically at controlled dosages. In the
treatment of some forms of cancer, an ampoule containing radioactive material
such as radium or cobalt-60 is inserted near the malignant tumor. The objective is
to destroy the cancer while causing minimal damage to healthy tissue by
carefully placing the radioactive material and controlling the dose.
Regrettably, some damage to healthy tissue is inevitable. As a result, radiation
sickness symptoms are frequently associated with this treatment (diarrhea, nausea,
loss of hair, loss of appetite, and so on). When long-lived isotopes are utilized in
therapy, the material must be removed after a certain amount of time. Short-lived
isotopes, like gold 198, decay quickly enough that they don't need to be eliminated
after treatment. Certain materials that are injected into or ingested by the body
tend to concentrate in specific organs; in radiation therapy, this phenomenon is
exploited to advantage. Phosphorus 32, a radioactive isotope with a half-life of
14.3 days,
accumulates in the bone marrow. Iodine 131, with a half-life of 8 days,
accumulates in the thyroid and is used to treat hyperthyroidism. Cancerous
tumors can also be
destroyed using an externally delivered beam of gamma rays or X-rays. The
benefit of this treatment is that it does not require surgery. By frequently changing
the direction of the beam travelling through the body, the effect of radiation on
healthy tissue can be decreased. Although the tumor is always in the path of the
beam, the dose delivered to healthy tissue is minimized.
Numerical problems for independent solution:
1. The power of bremsstrahlung X-rays can be approximately calculated by the
formula P = 10⁻⁶·I·U²·Z, where I is the current in mA, U is the voltage in kV,
and Z is the atomic number of the anode substance. Determine the efficiency of
an X-ray tube at a voltage of 100 kV.
2. Determine the equivalent dose of radiation to which human bones are exposed
during 1 year due to the content of ²³⁹₉₄Pu in them, with a maximum
permissible and unchanging radionuclide activity of 0.02 μCi. The skeleton
weighs 7 kg; the effective energy per decay is 270 MeV.
3. The dose rate of γ-radiation at a distance of 50 cm from a point source is 0.1 R
/ min. How long during a working day can a person remain at a distance of 10 m
from the source if the maximum permissible dose for a working day must not
exceed 17 mR?
Abbreviations
Å angstrom
av average
atm atmosphere
A ampere
C coulomb
CT computerized tomography
cos cosine
cps cycles per second
cm2 square centimeters
cm centimeter
deg degree
dB decibel
diam diameter
ECG electrocardiography
EEG electroencephalography
F farad
F/m Farad/meters
g gram
h hour
Hz hertz (cps)
J joule
km kilometer
km/h kilometers per hour
kg kilogram
KE kinetic energy
kph kilometers per hour
lim limit
liter/min liters per minute
μ micron
μA microampere
μV microvolt
μV/m microvolt per meter
mV millivolt
ms millisecond
m meter
m/sec meters per second
ml milliliter
min minute
max maximum
mA milliampere
MRI magnetic resonance imaging
N Newton
N·m Newton meters
NMR nuclear magnetic resonance
Ω ohm
PE potential energy
sin sine
sec second
tan tangent
V volt
UHF ultra high frequency
W watt
APPENDIX A
Table of Student's coefficients for calculating errors
Student's coefficients
n    Confidence factor
     0.6     0.8     0.95    0.99    0.999
2 1.376 3.078 12.706 63.657 636.61
3 1.061 1.886 4.303 9.925 31.598
4 0.978 1.638 3.182 5.841 12.941
5 0.941 1.533 2.776 4.604 8.610
6 0.920 1.476 2.571 4.032 6.859
7 0.906 1.440 2.447 3.707 5.959
8 0.896 1.415 2.365 3.499 5.405
9 0.889 1.397 2.306 3.355 5.041
10 0.883 1.383 2.262 3.250 4.781
11 0.879 1.372 2.228 3.169 4.587
12 0.876 1.363 2.201 3.106 4.437
13 0.873 1.356 2.179 3.055 4.318
14 0.870 1.350 2.160 3.012 4.221
15 0.868 1.345 2.145 2.977 4.140
16 0.866 1.341 2.131 2.947 4.073
17 0.865 1.337 2.120 2.921 4.015
18 0.863 1.333 2.110 2.898 3.965
19 0.862 1.330 2.101 2.878 3.922
20 0.861 1.328 2.093 2.861 3.883
21 0.860 1.325 2.086 2.845 3.850
APPENDIX B
Selected Prefixes used in the Metric System
Prefix   Abbreviation   Meaning   Example
Giga     G              10⁹       1 gigameter (Gm) = 1×10⁹ m
Mega     M              10⁶       1 megameter (Mm) = 1×10⁶ m
Kilo     k              10³       1 kilometer (km) = 1×10³ m
Deci     d              10⁻¹      1 decimeter (dm) = 0.1 m
Centi    c              10⁻²      1 centimeter (cm) = 1×10⁻² m
Milli    m              10⁻³      1 millimeter (mm) = 1×10⁻³ m
Micro    μ              10⁻⁶      1 micrometer (μm) = 1×10⁻⁶ m
Nano     n              10⁻⁹      1 nanometer (nm) = 1×10⁻⁹ m
Pico     p              10⁻¹²     1 picometer (pm) = 1×10⁻¹² m
Femto    f              10⁻¹⁵     1 femtometer (fm) = 1×10⁻¹⁵ m
APPENDIX C
Trigonometrical ratios table
θ        0°      30°     45°     60°     90°
sin θ    0       1/2     1/√2    √3/2    1
cos θ    1       √3/2    1/√2    1/2     0
APPENDIX D
Bradis table of sines and cosines
APPENDIX E
Tables of density of aqueous solutions of glycerin and speed of sound in different media
Literature and electronic resources:
1. Paul Davidovits. Physics in Biology and Medicine. 3rd edition, 2008.
2. Suzanne Amador Kane. Introduction to Physics in Modern Medicine. 2nd
edition. Haverford College, Pennsylvania, USA, 2009.
3. Patrick F. Dillon. Biophysics: A Physiological Approach. Cambridge
University Press, Cambridge, UK, 2012.
4. Kukurova Elena. Basics of Medical Physics and Biophysics for Electronic
Education of Health Professionals. Asklepios, Bratislava, 2013.
5. John R. Taylor. An Introduction to Error Analysis: The Study of
Uncertainties in Physical Measurements. 2nd edition, University Science
Books, 1997.
6. Philip R. Bevington, D. Keith Robinson. Data Reduction and Error
Analysis for the Physical Sciences. 2nd edition, WCB/McGraw-Hill, 1992.
7. https://www.researchgate.net/publication/228599963_Fundamental_of_EEG_Measurement
8. https://core.ac.uk/download/pdf/162012173.pdf
9. https://link.springer.com/chapter/10.1007%2F978-3-642-19525-9_1
10. https://www.electrical4u.com/biomedical-transducers-types-of-biomedical-transducers
11. http://www.refractometer.pl/Abbe-refractometer