
Ministry of Education and Science of the Republic of Kazakhstan

Kazakh National Medical University named after S.D. Asfendiyarov

Biophysics course

Abdrassilova V.O., Baydullaeva G.E., Ilyassova G.O.

Medical physics

HANDBOOK
for students of the 1st year

Almaty 2021
UDC: 577.3(075.8)
BBC: 28.071 я73
А-12

Authors:
Venera Abdrassilova, Master of Natural Science, Lecturer at the Department of
Normal Physiology with course of biophysics;
Gulzhakhan Baydullaeva, associate professor at the Department of Normal
Physiology with course of biophysics;
Gulzhan Ilyassova, Master of Natural Science, assistant at the Department of
Normal Physiology with course of biophysics.

Reviewers:
Abishev M., Professor, Al Farabi KazNU, Head of TNP department.
Ryspekova Sh. O., Candidate of Medical Sciences, associate professor and Head
of Department of Normal Physiology with course of biophysics.

Medical physics. Handbook for students of the 1st year of Medical University / created by Abdrassilova V.O., Baydullaeva G.E., Ilyassova G.O. — Almaty: KazNMU, 2021. — 179 pp.
ISBN ____________________ (ISBN number)

This educational-methodological complex is intended for 1st-year students of the Medical University in the specialties «General Medicine» and «Pediatrics». The handbook was created to help students and contains theoretical material for studying the topics provided for by the educational program of the subject, together with descriptions and procedures for the practical and laboratory works. For self-examination of the acquired knowledge, there are test tasks (MCQ) and situational tasks for independent solution.

ISBN ____________________ (ISBN number)

UDC: 577.3(075.8)
BBC: 28.071 я73
А-12
© Authors, 2021

CONTENT
1 Medical physics subject and methods of mathematical processing
of experimental data and calculation of the error………………… 7
2 Sound. Physical properties of sound. Acoustics. Medical
application of sound………………………………………………. 14
3 Structure and functions of biological membranes and methods for
studying membrane structure…………………………………….. 26
4 Transport of substances across membrane. Passive and active
transport…………………………………………………………... 36
5 Electrical excitability of tissues. Biopotentials. Resting and
action potentials of cells………………………………………….. 48
6 Physical fundamentals of electrocardiography…………………… 56
7 Physical fundamentals of electroencephalography……………….. 67
8 Multistage amplifiers in medicine………………………………... 76
9 The principles of converting biological (non-electrical) signals
into electrical ones. Thermoregulation. Calibration of temperature
sensors……………………………………………………………. 80
10 The effect of electromagnetic fields and currents on the body.
Galvanization and electrophoresis. The physical basis of
rheography……………………………………………………… 94
11 Physical issues of hemodynamics based on hydrodynamics.
Bioreology. Determination of the viscosity of liquids using a
viscometer………………………………………………………… 111
12 Biophysics of vision. Special techniques of microscopy and
polarization of biological objects………………………………… 121
13 Eye refraction and refractometric research methods in medicine.
Refractive indices of the eye and fluids. Introscopy……………… 137
14 Registration of superweak bioluminescence and brightness
enhancement of the x-ray image. Photoelectric converters,
photoelectronic amplifiers, electron-optical converters………….. 144
15 Biological effects and mechanisms of action of radiation.
Radioactivity. X-ray and dosimetry……………………………… 156
16 Abbreviations…………………………………………………….. 167
17 Appendix A Table of Student's coefficients for calculating errors 169
18 Appendix B Tables of Physical Constants, SI units and
prefixes…………………………………………………………… 170
19 Appendix C Trigonometrical ratios table ………………………. 171
20 Appendix D Bradis table of sines and cosines ………………… 172
21 Appendix E Tables of density of aqueous solutions of glycerin
and Speed of sound in different medium………………………. 175

PREFACE
The handbook on the subject "Medical Physics" is intended for 1st-year
students of the specialties "General Medicine" and "Pediatrics" to help them study
the course program, and can also be used by students for independent study of the
discipline.
The handbook contains theoretical material and different types of practical tasks (tests, situational tasks, working texts) for mastering the program; an algorithm for conducting the laboratory works and explanations for them are also given.
The purpose of studying the subject is the formation of basic knowledge in
the field of medical physics and biophysics, the acquisition of practical skills for
working with medical devices; the study of the foundations of applied physics,
which are addressed to solving medical problems and issues related to the physical
principles of the operation of medical equipment; the study of the physical laws
underlying the functioning of human organs and tissues, as well as the influence of
various physical factors on the human body.
Currently, many biophysical methods are widely used to elucidate the mechanisms by which factors of the external and internal environment act on the body, including factors of a physical, chemical and technogenic nature and the action of toxic agents.
Our subject is a prerequisite for the subject "Normal Physiology", therefore,
the study program includes the physical basics of the transport of substances
through the biological membrane, the mechanism of resting and action potential,
electrocardiography, electroencephalography and hemodynamics.
Using the knowledge gained, the graduate will be able to apply medical physics (devices, equipment and the physical factors of influence on a person used in medicine) in providing high-quality patient-centered treatment, and to interpret the results of functional research methods (ECG, EEG, viscometry, rheography, introscopy, UHF, electrophoresis) using modern equipment for diagnosing diseases.

INTRODUCTION
Despite the complexity and interconnection of various processes in the
human body, it is often possible to distinguish among them processes close to
physical ones. For example, such a complex physiological process as blood
circulation is fundamentally physical, since it is associated with the flow of fluid
(hydrodynamics), the propagation of elastic vibrations through the vessels
(oscillations and waves), the mechanical work of the heart (mechanics), the
generation of biopotentials (electricity), etc. Breathing is associated with the
movement of gas (aerodynamics), heat transfer (thermodynamics), evaporation
(phase transformations), etc.
In the body, in addition to physical macroprocesses, as in inanimate nature,
there are molecular processes that ultimately determine the behavior of biological
systems. Understanding the physics of such microprocesses is necessary for a
correct assessment of the state of the body, the nature of certain diseases, the action
of drugs, etc. In all these issues, physics is so connected with biology that it forms
an independent science - biophysics, which studies physical and physicochemical
processes in living organisms, as well as the ultrastructure of biological systems at
all levels of organization - from submolecular and molecular to the cell and the
whole organism.
Many diagnostic and research methods are based on the use of physical
principles and ideas. Most modern medical devices are structurally physical
devices. The mechanical quantity, blood pressure, is a metric used to evaluate a
number of diseases. Listening to sounds from within the body provides information
about normal or abnormal organ behavior. A medical thermometer based on the
thermal expansion of mercury is a very common diagnostic device. Over the past
decade, in connection with the development of electronic devices, a diagnostic
method based on the recording of biopotentials arising in a living organism has
become widespread. The best-known such method, electrocardiography, is the recording of biopotentials that reflect cardiac activity. The role of the microscope
for biomedical research is well known. Modern medical devices based on fiber
optics allow examining the internal cavities of the body. Spectral analysis is used
in forensic medicine, hygiene, pharmacology and biology; achievements of atomic
and nuclear physics - for well-known diagnostic methods: X-ray diagnostics and
the method of tagged atoms.
In the general complex of various methods of treatment used in medicine,
physical factors also find their place. Electric and electromagnetic influences are
widely used in physiotherapy. For therapeutic purposes, visible and invisible light
(ultraviolet and infrared radiation), X-ray and gamma radiation are used.
Medical bandages, instruments, electrodes, prostheses, etc. work under the
influence of the environment, including in the immediate environment of
biological media. To assess the possibility of operating such products in real
conditions, it is necessary to have information about the physical properties of the
materials from which they are made. For example, for the manufacture of
prostheses (teeth, vessels, valves, etc.), knowledge of mechanical strength,
resistance to repeated loads, elasticity, thermal conductivity, electrical
conductivity, and other properties is essential. In some cases, it is important to
know the physical properties of biological systems to assess their viability or
ability to withstand certain external influences.
By changing the physical properties of biological objects, it is possible to
diagnose diseases. A living organism functions normally only by interacting with
the environment. It reacts sharply to changes in such physical characteristics of the
environment as temperature, humidity, air pressure, etc. The effect of the external
environment on the body is taken into account not only as an external factor, it can
be used for treatment: climatotherapy and barotherapy. These examples
demonstrate that the physician must be able to assess the physical properties and
characteristics of the environment. The applications of physics listed above in
medicine constitute medical physics - a complex of branches of applied physics
and biophysics, in which physical laws, phenomena, processes and characteristics
are considered in relation to solving medical problems.
Modern medicine is based on the widespread use of a variety of equipment,
which is mostly physical in design, therefore, in the course of medical and
biological physics, the structure and principle of operation of the main medical
equipment are considered.

I
MEDICAL PHYSICS SUBJECT AND METHODS OF MATHEMATICAL
PROCESSING OF EXPERIMENTAL DATA AND CALCULATION OF
THE ERROR
Medical physics is the application of physics principles to medical practice.
It is most commonly used to refer to applications involving ionizing and non-ionizing radiation in medicine for diagnostic and therapeutic purposes. More broadly, medical physics can refer to the physics of the various forms of energy used in medical procedures, such as the electrical signals recorded in electrocardiography, laser light in surgery, and ultrasound.
Medical physics is the use of physics in medical diagnosis and treatment.
The major subfields of medical physics are
- diagnostic radiological physics,
- therapeutic radiological physics,
- medical nuclear physics, and
- medical health physics.
Biophysics is an interdisciplinary science that applies the principles of
physics and the methods of mathematical analysis and computer modeling to
understand how the mechanisms of biological systems work.
Biophysics uses mathematical and physical laws, as well as the latest
developments in computer technology as tools for studying phenomena on a wide
variety of scales, from the global human population to individual atoms in a
biomolecule. The appropriate modeling methodology ranges in scale from the angstrom level to the macroscopic, depending on the field of study (from atomistic to evolutionary effects) and the question of interest.
For large systems, the most common and reliable mathematical strategy is to develop systems of differential equations. At the molecular level, molecular dynamics is frequently used to describe biomolecules as a system of moving Newtonian particles with interactions specified by a force field, together with a wide range of approaches for addressing solvent effects. In some cases pure quantum mechanical approaches, which describe molecules using wave functions and electron densities, can and should be used, but the computing time and resource costs can be prohibitive, and hybrid classical-quantum methods are usually more appropriate.
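As a simple, self-contained illustration of the differential-equation strategy (not taken from the handbook; the growth rate, capacity and time step below are arbitrary example values), a population-level model can be integrated numerically with the Euler method:

```python
# Minimal sketch: logistic growth dN/dt = r*N*(1 - N/K), a simple example of a
# biological model formulated as a differential equation, integrated by the
# Euler method. r, K, the initial N and the time step are assumed example values.
r, K = 0.5, 1000.0      # growth rate (1/time) and carrying capacity
N, dt = 10.0, 0.01      # initial population and integration step

for _ in range(int(50 / dt)):        # integrate from t = 0 to t = 50
    N += r * N * (1 - N / K) * dt    # Euler update

print(f"Population after 50 time units: {N:.1f}")   # approaches K
```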
A process used to establish, disprove, or validate a hypothesis is known as
an experiment. Experiments reveal cause-and-effect relationships by illustrating
what happens when a particular component is changed. Experiments can have a
wide range of goals and scales, but they all rely on a repeatable technique and
logical analysis of the data.
There are always errors in any measurement; no physical quantity can be
measured with absolute certainty. This means that if we measure something and
then repeat the measurement, we will almost certainly get a different result the
second time. So, how can we determine the "true" value of a physical quantity? We
can't, to put it succinctly. However, by taking greater care in our measurements and
using ever more refined experimental methods, we can reduce errors and gain
greater confidence that our measurements are getting closer to the true value.
The study of errors in physical measurements is known as "error analysis"
and a thorough account would take far more time and space than we have in this
course. However, by taking the effort to master some fundamental error analysis
principles, we can:
1) know how to estimate experimental error,
2) know the types and sources of experimental mistakes,
3) report values of measurements and their uncertainties clearly and correctly, and
4) improve our measurement skills and design experimental procedures and approaches to eliminate experimental errors.

Accuracy and Precision


The discrepancy between a measurement and the correct value, or between two measured values, is known as experimental error. Experimental error is assessed in terms of accuracy and precision.
Accuracy is the degree to which a measured value is near to the true or accepted value. It is often impossible to verify the accuracy of a measurement because a true or accepted value for a physical quantity is unknown.
Precision is the degree to which two or more measurements agree. Precision is also known as "repeatability" or "reproducibility." A highly reproducible measurement tends to produce values that are quite near to each other.
Figure 1 defines accuracy and precision by analogy to the grouping of arrows in a target.

Figure 1. Accuracy and precision.

Rounding values
Rounding numbers during calculations should be avoided because
approximations from rounding will accumulate. Throughout the calculations, add
one or two significant figures to all variables. For intermediate results, employ
rounded numbers, but for subsequent processing, only use non-rounded data. For
your final solution, use a rounded figure. There should be no more than two
significant figures in your final quoted errors.

Experimental mistakes, types and sources


When scientists speak about experimental errors, they are not referring to what are commonly called mistakes, blunders, or miscalculations. Those are sometimes called "illegitimate," "human," or "personal" errors and can occur, for example, as a result of measuring a thickness when a length should have been measured, measuring the potential difference across the wrong portion of an electrical circuit, misreading a scale on an instrument, or forgetting to divide the diameter by two before calculating the area of a circle using the formula A = πr². Such errors are unquestionably serious, but they can be prevented if the experiment is repeated carefully the next time.
Experimental errors, on the other hand, are a natural part of the measuring process and cannot be avoided by merely replicating the experiment, no matter how meticulously it is performed. Systematic and random errors are the two forms of experimental errors.
Systematic Errors
Systematic mistakes are errors that affect the correctness of a measurement.
In the absence of other forms of errors, systematic errors are "one-sided" errors
because repeated measurements provide conclusions that differ by the same
amount from the real or accepted value. Repeating measurements that are prone to
systematic errors will not improve their accuracy. Systematic errors cannot be
assessed just through statistical analysis. Systematic errors are difficult to detect,

but if discovered, they can only be removed by fine-tuning the measuring method
or approach.
Erroneous calibration of measuring devices, poorly maintained instruments, or
faulty reading of instruments by the user are all common sources of systematic
errors. The term "parallax error" refers to a type of systematic error that occurs
when a user reads an instrument at an angle, resulting in a reading that is regularly
high or consistently low.
Random Errors
Random errors are errors that affect the precision of a measurement. Random errors are "two-sided" errors because, in the absence of other types of errors, repeated measurements produce values that fluctuate above and below the true or accepted value. Measurements prone to
random errors differ from one another due to random, unpredictable changes in the
measuring procedure. The precision of measurements that are susceptible to
random errors can be improved by repeating them. Random errors can be readily
analyzed using statistical analysis.
Problems estimating a number that lies between the graduations (the lines)
on an instrument and the inability to interpret an instrument because the readout
fluctuates during the measurement are common sources of random errors.

Calculating Experimental Error


When a scientist presents the results of an experiment, the report must
include information about the precision and accuracy of the measurements. The
following are some frequent methods to define precision and accuracy.
Significant Figures
The smallest unit that can be measured with the measuring instrument
determines the least significant digit in a measurement. The number of significant
digits with which a measurement is presented can subsequently be used to
determine its precision. In general, any measurement is reported to a precision of
1/10 of the measuring instrument's smallest graduation, and the measurement's
precision is referred to as 1/10 of the smallest graduation.
A length measurement taken with a meterstick with 1-mm graduations, for
example, will be reported with a precision of 0.1 mm. The precision of a volume
measurement made with a graduated cylinder with 1-ml graduations is 0.1 ml.
Different rules apply to digital instruments. The precision of measurements obtained with digital instruments is taken as 1/2 of the smallest unit of the instrument, unless the instrument maker specifies otherwise. A digital voltmeter, for example, reads 1.493 volts; the precision of this voltage value is 1/2 of 10⁻³ volts, or 5·10⁻⁴ volt.
Percent Error
The difference between a measured or experimental value E and a true or
accepted value A (also known as fractional difference) is used to calculate the
accuracy of a measurement. The following equation is used to compute the percent
error:
% Error = (|E − A| / A) · 100%    (1.1)

Percent Difference
The difference between the measured or experimental values E1 and E2
represented as a fraction of the average of the two values measures the precision of
two measurements. To compute the % difference, use the following formula:
% Difference = (|E₁ − E₂| / ((E₁ + E₂)/2)) · 100%    (1.2)
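As an illustration (the numbers are invented, not from the handbook), eqs. (1.1) and (1.2) can be evaluated directly in Python:

```python
# Minimal sketch: percent error, eq. (1.1), and percent difference, eq. (1.2).
def percent_error(experimental: float, accepted: float) -> float:
    return abs(experimental - accepted) / accepted * 100

def percent_difference(e1: float, e2: float) -> float:
    return abs(e1 - e2) / ((e1 + e2) / 2) * 100

# example: a measured value of g compared with the accepted 9.81 m/s^2,
# and the agreement between two repeated measurements
print(f"{percent_error(9.68, 9.81):.1f} %")        # accuracy of one measurement
print(f"{percent_difference(9.68, 9.74):.1f} %")   # precision of two measurements
```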

Mean and Standard Deviation


When a measurement is repeated several times, we see the measured values
are grouped around some central value. This grouping or distribution can be
described with two numbers: the mean, which measures the central value, and the
standard deviation which describes the spread or deviation of the measured values
about the mean.
The mean of a collection of N measured values for some quantity x is denoted by the symbol x̄ and determined using the formula:
x̄ = (Σᵢ₌₁ᴺ xᵢ) / N    (1.3)

where xi denotes x's i-th measured value. Simply divide the sum of the measured
values by the number of measured values to get the mean.
The standard deviation of the measured values is denoted by σx and is calculated using the following formula:

σx = √( Σᵢ₌₁ᴺ (xᵢ − x̄)² / (N(N − 1)) )    (1.4)

The standard deviation, often known as the "mean square deviation," is a measurement of how widely the measured values are distributed on each side of the mean.
When a scientist reports the outcome of an experimental measurement of a quantity x, the result is split into two parts. First, the best estimate of the measurement is presented; the mean x̄ of the measurements is typically provided as the best estimate of a collection of measurements. Second, the variation of the measurements is reported; the standard deviation σx of the measurements is frequently used to report this variation.
The best estimate for the measured quantity is equal to the mean, but it may reasonably range from x̄ + σx to x̄ − σx. All experimental results should then be presented in the following format:
x = x̄ ± σx    (1.5)
Relative error (RE)—when used as a measure of precision—is the ratio of
the absolute error of a measurement to the measurement being taken. In other
words, this type of error is relative to the size of the item being measured. RE is
expressed as a percentage and has no units:
ε = (Δx̄ / x̄) · 100%    (1.6)

Experiment for self-performance: determining the normal heart rate and calculating the error.
What is your pulse?
Your heart rate, or the number of times your heart beats in one minute, is your
pulse. Pulse rates differ from one person to the next. When you are at rest, your
pulse is lower; when you exercise, it rises (more oxygen-rich blood is needed by
the body when you exercise).
How to take your pulse.
1. Place the tips of your index, second, and third fingers below the base of your
thumb on the palm side of your other wrist. Alternatively, place the tips of
your index and second fingers on either side of your windpipe on your lower
neck.
2. Gently press your fingers together until you feel blood pulsing beneath your
fingertips. You may need to move your fingers up and down slightly to feel
the pulse.
3. Use a clock or a watch with a second hand.
4. Count the beats you feel for 60 seconds and fill in Table 1.
Table 1.
№    xᵢ    x̄    Δxᵢ    Δx̄    Δxᵢ²    σ    t(α,n)    δ    ε
1
2
3
...

t(α,n) is Student's coefficient, which depends on the confidence factor (α) and the number of measurements (n). δ is the confidence interval, calculated by the following formula:
δ = σ · t(α,n)    (1.7)
5. Write the result of the experiment in the following form (a calculation sketch is given after this list):
x = (x̄ ± δ) unit of measurement; ε%    (1.8)
6. Make a conclusion.
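A minimal Python sketch of this processing is given below. The pulse readings and the Student's coefficient are assumed example values; for your own data, take t(α, n) from Appendix A for your α and n.

```python
# Minimal sketch: filling in Table 1 for assumed pulse readings, using
# eqs. (1.3), (1.4), (1.6), (1.7) and reporting the result as in eq. (1.8).
import math

x = [72, 75, 70, 74, 73]   # counted beats per minute (example data)
t_alpha_n = 2.78           # Student's coefficient for alpha = 0.95, n = 5 (Appendix A)

n = len(x)
mean = sum(x) / n                                 # mean, eq. (1.3)
dev = [xi - mean for xi in x]                     # deviations Δx_i
mean_abs_dev = sum(abs(d) for d in dev) / n       # mean absolute deviation
sigma = math.sqrt(sum(d * d for d in dev) / (n * (n - 1)))   # σ, eq. (1.4)
delta = sigma * t_alpha_n                         # confidence interval δ, eq. (1.7)
epsilon = mean_abs_dev / mean * 100               # relative error ε, eq. (1.6)

print(f"x = ({mean:.1f} ± {delta:.1f}) beats/min;  ε = {epsilon:.1f} %")   # eq. (1.8)
```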

Note
After each calculation work, laboratory work or experiment, it is necessary to
write conclusions that include information about the result obtained.
In the conclusion, do not describe the procedure of the work or restate theoretical information; instead, explain the result you obtained, compare it with theoretical data (if available), and justify the correctness of your result using formulas or regularities.

II
SOUND. PHYSICAL PROPERTIES OF SOUND. ACOUSTICS.
MEDICAL APPLICATION OF SOUND
Our senses of hearing and sight provide us with the majority of information
about our physical surroundings. We get information about objects in both
circumstances without coming into physical contact with them. In the first
scenario, we receive information through sound, while in the second example, we
receive information through light. Sound and light are both waves, despite the fact
that they are completely different phenomena. A wave is a disturbance that
transports energy from one location to another without transferring mass. Our
sensory processes are stimulated by the energy conveyed by the waves.
Sound is a mechanical wave produced by vibrating bodies. When an object,
such as a tuning fork or the human vocal cords, is brought into vibrational motion,
the surrounding air molecules are disrupted and pushed to follow the vibrating
body's motion. The vibrational disturbance propagates away from the source as the
vibrating molecules transfer their motion to neighbouring molecules. When air
vibrations hit the eardrum, the eardrum vibrates, causing nerve impulses to be
produced, which are then interpreted by the brain.
To some extent, all matter transmits sound, but sound propagation requires a
material channel between the source and the receiver. The well-known experiment
of the bell in the jar demonstrates this. The sound of the bell when it is placed in
motion is heard. The sound of the bell fades as the air in the jar is expelled, and the
bell eventually becomes inaudible. Alternate compressions and rarefactions of the
medium, which are initially created by the vibrating sound source, constitute the
spreading disturbance in the sound-conducting medium. These compressions and
rarefactions are merely departures from the average density of the medium. In a
gas, the variations in density are equivalent to pressure changes. Two important
characteristics of sound are intensity, which is determined by the magnitude of
compression and rarefaction in the propagating medium, and frequency, which is
determined by how often the compressions and rarefactions take place. The unit
hertz, named after the scientist Heinrich Hertz, is used to measure frequency in cycles per second. Hz is the symbol for this unit (1 Hz equals one cycle per second).
Object vibrational motion can be quite complex, resulting in a complex sound
pattern. Still, analyzing the qualities of sound in terms of basic sinusoidal
vibrations, such as those produced by a vibrating tuning fork, is useful (see Fig.
2.1).

Figure 2.1. Sinusoidal sound wave produced by a vibrating tuning fork.

A pure tone is the type of basic sound pattern shown in Fig. 2.1. The pressure
differences caused by compressions and rarefactions are sinusoidal when a pure
tone travels through air.
We would witness pressure variations in space that are also sinusoidal if we took a
"snapshot" of the sound at a given point in time. (Special procedures are required
to obtain such images.) The wavelength is the distance between the closest equal
spots on the sound wave.
The speed of a sound wave, υ, is determined by the substance through which it travels. The speed of sound in air at 20 degrees Celsius is around 3.3×10⁴ cm/sec, while in water it is approximately 1.4×10⁵ cm/sec. The following equation describes the relationship between frequency, wavelength, and propagation speed in general:
υ = λf    (2.1)
For all types of wave motions, the link between frequency, wavelength, and
speed is true.
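For instance (the 1000 Hz tone is an arbitrary example), eq. (2.1) rearranged to λ = υ/f gives the wavelength in air and in water for the speeds quoted above:

```python
# Minimal sketch: wavelength from eq. (2.1), λ = υ / f, in CGS units.
def wavelength_cm(speed_cm_s: float, frequency_hz: float) -> float:
    return speed_cm_s / frequency_hz

f = 1000.0   # Hz, an example tone
print(f"λ in air:   {wavelength_cm(3.3e4, f):.0f} cm")    # ≈ 33 cm
print(f"λ in water: {wavelength_cm(1.4e5, f):.0f} cm")    # ≈ 140 cm
```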
The propagating sound's pressure changes are superimposed on the ambient
air pressure. As a result, the total pressure in a sinusoidal sound wave's course is of
the value
P = Pₐ + P₀ sin(2π·f·t)    (2.2)
where Pₐ is the ambient air pressure (which is 1.01×10⁵ Pa = 1.01×10⁶ dyn/cm² at sea level at 0 °C), P₀ is the greatest pressure change caused by the sound wave, and f is the sound frequency.
The intensity I of a sinusoidal sound wave is defined as the amount of energy
transmitted per unit time across each unit area perpendicular to the direction of
sound propagation:

I = P₀² / (2ρυ)    (2.3)

Here, ρ is the medium's density and υ is the sound propagation speed.
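A quick numerical check of eq. (2.3) in CGS units, with an assumed pressure amplitude close to the threshold of hearing in air, might look like this:

```python
# Minimal sketch: sound intensity from eq. (2.3), I = P0^2 / (2 ρ υ).
# The pressure amplitude, density and speed are assumed example values for air.
def intensity_cgs(p0_dyn_cm2: float, rho_g_cm3: float, v_cm_s: float) -> float:
    return p0_dyn_cm2 ** 2 / (2 * rho_g_cm3 * v_cm_s)   # erg / (s * cm^2)

I = intensity_cgs(p0_dyn_cm2=2.9e-4, rho_g_cm3=1.29e-3, v_cm_s=3.3e4)
print(f"I ≈ {I:.1e} erg/(s·cm²) ≈ {I * 1e-7:.1e} W/cm²")   # of the order of the hearing threshold
```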


Properties of Waves
Refraction, reflection, diffraction, and interference are all phenomena that occur in
waves, including sound and light. Most fundamental physics texts go into great
depth on these phenomena, which are crucial in both hearing and seeing. We will
simply go through them quickly here.
Reflection and refraction. When a wave travels from one medium into another, some of it is reflected at the boundary and some of it travels through the second medium. The reflection is specular (mirrorlike) if the interface between the two media is smooth on the scale of the wavelength (i.e., the imperfections of the interface surface are smaller than λ). The reflection is diffuse if the irregularities on the surface are bigger than the wavelength. Light reflected from paper is an example of diffuse reflection.
If the wave falls on the interface at an inclined angle, the direction of
propagation of the incident wave in the new medium is altered (fig. 2.2). This
process is called refraction. The angle of reflection is always equal to the angle of
incidence, but the angle of the refracted wave is, in general, a function of the
properties of the two media.

Figure 2.2. Illustration of reflection and refraction. (θ is the angle of incidence.)

The fraction of energy transferred from one medium to another is


determined by the media's characteristics as well as the angle of incidence. The
ratio of transmitted to incident intensity for a sound wave incident perpendicular to
the interface is given by
Iₜ / Iᵢ = 4ρ₁υ₁ρ₂υ₂ / (ρ₁υ₁ + ρ₂υ₂)²    (2.4)

where Iᵢ and Iₜ are the incident and transmitted intensities and the subscripted ρ and υ are the density and velocity in the different
media. The solution to Eq. 2.4 shows that when sound traveling in air is incident
perpendicular to a water surface, only about 0.1 percent of the sound energy enters
the water, while 99.9 percent is reflected. When the angle of incidence is oblique,
the fraction of sound energy entering the water is even lower. As a result, water
acts as an effective acoustic barrier.
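The air-to-water example can be reproduced from eq. (2.4) with typical values of density and sound speed (the numbers below are standard reference values, not taken from this text):

```python
# Minimal sketch: fraction of sound intensity transmitted at normal incidence, eq. (2.4).
def transmitted_fraction(rho1: float, v1: float, rho2: float, v2: float) -> float:
    z1, z2 = rho1 * v1, rho2 * v2           # acoustic impedances of the two media
    return 4 * z1 * z2 / (z1 + z2) ** 2

# air: ρ ≈ 1.29 kg/m³, υ ≈ 330 m/s;  water: ρ ≈ 1000 kg/m³, υ ≈ 1400 m/s
frac = transmitted_fraction(1.29, 330, 1000, 1400)
print(f"Transmitted fraction ≈ {frac * 100:.2f} %")   # about 0.1 %, as stated above
```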
Interference. When two (or more) waves travel in the same medium at the same
time, the total disturbance is the vectorial sum of the individual disturbances
caused by each wave at each point. Interference is the term for this phenomenon.
For example, if two waves are in phase, they add, increasing the wave disturbance
at each point in space. This is known as constructive interference (Fig. 2.3a).
When two waves are 180 degrees out of phase, the wave disturbance in the
propagating medium is reduced. This is known as destructive interference (Fig.
2.3b). The wave disturbance is completely cancelled if the magnitudes of two out-
of-phase waves are the same (Fig. 2.3c).

Figure 2.3 (a) interference of waves oscillating in one phase. (b, c) interference of waves
oscillating in antiphase. R is the result of propagating A and reflected B waves.

Two waves of the same frequency and magnitude traveling in opposite directions
cause a special type of interference. The resulting wave pattern is called a standing
wave because it is stationary in space. Standing sound waves are created in hollow
pipes like the flute. Standing waves in a given structure can be shown to exist only
at specific frequencies known as resonant frequencies.
Diffraction. As waves travel across a medium, they have a tendency to expand
out. As a result, when a wave hits an impediment, it spreads out into the area

behind it. Diffraction is the name for this occurrence. The degree of diffraction is
proportional to the wavelength: The larger the wave's spreading, the longer the
wavelength. Only when the size of the obstruction is smaller than the wavelength
does significant diffraction into the region behind it occur. The performer can be
heard by a person sitting behind a pillar in an auditorium because long wavelength
sound waves spread behind the pillar. However, because the wavelength of light is
different, the vision of the performance is hindered. It can be demonstrated that the
focused spot's diameter cannot be less than λ/2. These wave characteristics have
significant implications for the hearing process.
Frequency and Pitch. The human ear can detect sound at frequencies ranging
from 20 to 20,000 hertz. However, the ear's response is not uniform within this
frequency range. The ear is most sensitive to frequencies between 200 and 4000
Hz, and its response decreases as frequency increases. Individuals' frequency
responses differ greatly. Some people cannot hear sounds above 8000 hertz, while
others can hear sounds above 20,000 hertz. Furthermore, most people's hearing
deteriorates with age. The pitch sensation is related to the frequency of the sound.
The pitch rises as the frequency rises. Pitch and frequency, on the other hand, do
not have a simple mathematical relationship.
Intensity and Loudness
The ear can detect a wide range of intensities. The lowest intensity that the human
ear can detect at 3000 Hz is approximately 10−16 W/cm2. The loudest sound that
can be tolerated has an intensity of about 10−4 W/cm2. The threshold of hearing
and the threshold of pain are the two extremes of the intensity range. Sound
intensities above the pain threshold can permanently damage the eardrum and
ossicles.
The ear does not respond linearly to sound intensity; that is, a sound a million times more intense than another does not elicit a million times greater sensation of loudness. The ear's response to intensity is logarithmic rather than linear. Because
of the nonlinear response of the ear and the wide range of intensities involved in
the hearing process, sound intensity is best expressed on a logarithmic scale. The
sound intensity is measured relative to a reference level of 10−16 W/cm2 on this
scale (which is approximately the lowest audible sound intensity). The logarithmic
intensity is expressed in decibel (dB) units and is defined as
Logarithmic intensity = 10·log[ (sound intensity in W/cm²) / (10⁻¹⁶ W/cm²) ]    (2.5)

Thus, for example, the logarithmic intensity of a sound wave with an intensity of 10⁻¹² W/cm² is

Logarithmic intensity = 10·log(10⁻¹² / 10⁻¹⁶) = 40 dB    (2.6)

Intensities of some common sounds are listed in Table 2.1

Table 2.1
Source of sound          Sound level (dB)    Sound intensity (W/cm²)
Threshold of pain              120                 10⁻⁴
Riveter                         90                 10⁻⁷
Busy street traffic             70                 10⁻⁹
Ordinary conversation           60                 10⁻¹⁰
Quiet automobile                50                 10⁻¹¹
Quiet radio at home             40                 10⁻¹²
Average whisper                 20                 10⁻¹⁴
Rustle of leaves                10                 10⁻¹⁵
Threshold of hearing             0                 10⁻¹⁶
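A short sketch applying eq. (2.5) to two of the intensities in Table 2.1 (values taken from the table itself):

```python
# Minimal sketch: logarithmic intensity level, eq. (2.5), relative to the
# hearing threshold I0 = 1e-16 W/cm^2.
import math

def level_db(intensity_w_cm2: float, i0_w_cm2: float = 1e-16) -> float:
    return 10 * math.log10(intensity_w_cm2 / i0_w_cm2)

for name, intensity in [("Ordinary conversation", 1e-10), ("Busy street traffic", 1e-9)]:
    print(f"{name}: {level_db(intensity):.0f} dB")   # 60 dB and 70 dB, as in Table 2.1
```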

In the field of psychophysics, the Weber-Fechner laws are two related hypotheses known as Weber's law and Fechner's law. Both laws are concerned with human perception, specifically the relationship between actual and perceived changes in a physical stimulus.

L = ∫ from I₀ to I of k·dI/I = k·ln(I/I₀)    (2.7)

Loudness is the subjective perception of sound pressure in acoustics. It is defined


more formally as "that attribute of auditory sensation in terms of which sounds can
be ordered on a scale extending from quiet to loud." The relationship between
physical sound attributes and perceived loudness has physical, physiological, and
psychological components. The study of apparent loudness is part of the field of
psychoacoustics and employs psychophysical methods.
The sensitivity of the human ear varies with frequency, as illustrated by the equal-
loudness graph. Each line on this graph represents the sound pressure level
required for frequencies to be perceived as equally loud, and different curves
correspond to different sound pressure levels. It also demonstrates that humans
with normal hearing are most sensitive to sounds between 2 and 4 kHz, with
sensitivity decreasing on either side of this range. The integration of SPL by
frequency will be included in a complete model of the perception of loudness.

Figure 2.4. Equal-loudness contours

Clinical Uses of Sound


The most well-known clinical application of sound is the use of a stethoscope to
analyze body noises. A little bell-shaped chamber is linked to a flexible hollow
tube in this instrument. The bell is put above the source of the body sound on the
skin (such as the heart or lungs). The sound is subsequently transmitted through the
pipe to the examiner's ears, who assesses the organ's functionality. Two bells are
put on various areas of the body in a modified stethoscope. One ear receives the
sound from one bell, while the other ear receives the sound from the other bell.
The two sounds are then compared. It is possible, for example, to listen to the heartbeats of the fetus and the pregnant woman at the same time with this technique.
Ultrasonic Waves. Mechanical waves at very high frequencies, up to millions of
cycles per second, can be produced using special electronically driven crystals.
These waves are known as ultrasonic waves because they are simply the extension
of sound to higher frequencies. Because of their short wavelength, ultrasonic
waves can be focused and imaged in the same way that visible light can.
It is possible to create visible images of ultrasonic reflections and absorptions
using specialized techniques known as ultrasound imaging. As a result, structures
within living organisms can be examined using ultrasound in the same way that X-
rays can. Ultrasonic examinations are less dangerous than X-rays and can often
provide the same amount of information. Ultrasonic methods can show motion in
some cases, such as the examination of a fetus and the heart, which is very useful
in such displays. The frequency of sound detected by an observer is determined by
the source's and the observer's relative motion. The Doppler effect is the name
given to this phenomenon. It can be demonstrated that when the observer is
stationary and the source is moving, the frequency of the sound f' detected by the
observer is given by

f′ = f·υ / (υ ∓ υₛ)    (2.8)

where, f denotes the frequency in the absence of motion, 𝞋 denotes the speed of
sound, and 𝞋s denotes the speed of the source. When the source is approaching the
observer, use a minus sign in the denominator, and a plus sign when it is receding.
It is possible to measure motions within a body using the Doppler effect.
The ultrasonic flow meter, which generates ultrasonic waves that are scattered by
blood cells flowing through blood vessels, is one tool for obtaining such
measurements. The Doppler effect changes the frequency of the scattered sound.
The blood flow velocity is calculated by comparing the incident frequency to the
frequency of the scattered ultrasound.
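As a rough illustration of eq. (2.8) (the frequency, tissue sound speed and reflector speed below are assumed example values, not a clinical calculation):

```python
# Minimal sketch: Doppler shift from eq. (2.8) for a moving source and a stationary observer.
def doppler_frequency(f: float, v_sound: float, v_source: float, approaching: bool) -> float:
    denom = v_sound - v_source if approaching else v_sound + v_source
    return f * v_sound / denom

f0 = 2.0e6      # Hz, 2 MHz ultrasound
v = 1540.0      # m/s, speed of sound in soft tissue
vs = 0.5        # m/s, assumed speed of the moving reflector (e.g. blood)

shift = doppler_frequency(f0, v, vs, approaching=True) - f0
print(f"Doppler shift ≈ {shift:.0f} Hz")   # about 650 Hz for these values
```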
The mechanical energy in the ultrasonic wave is converted to heat within the
tissue. With enough ultrasonic energy, it is possible to heat specific parts of a
patient's body more efficiently and evenly than with conventional heat lamps. This type of treatment, diathermy, is used to relieve pain and promote the healing of injuries. It is possible to destroy tissue with extremely high-intensity ultrasound. Ultrasound is now commonly used to fragment kidney stones and gallstones (lithotripsy).
Audiometry is a branch of audiology that measures hearing acuity at different frequencies and thresholds, i.e., the ability to perceive variations in sound intensity and pitch as well as tonal purity. Audiometric exams use an audiometer to detect a subject's hearing levels, but they can also examine the capacity to discern between different sound intensities, recognize pitch, or separate speech from background noise. The results of audiometric testing are used to diagnose hearing loss or ear illnesses and are frequently recorded in the form of an audiogram.
Auscultation is the process of listening to the body's internal sounds with a
stethoscope. Auscultation is used to examine the circulatory and respiratory
systems, as well as the alimentary canal (heart and breath sounds).
Percussion is a technique for determining the underlying structures of a surface
and is used in clinical tests to check the status of the thorax and abdomen. It's done
by employing a wrist action to tap the middle finger of one hand on the middle
finger of the other. The pleximeter (non-striking finger) is firmly placed on the
body over tissue. There are two forms of percussion: direct and indirect.
Percussion sounds are classified as resonant, hyper-resonant, stone dull, or dull.
The presence of a solid mass beneath the surface is indicated by a dull sound.
Hollow, air-containing structures are indicated by a more resonant sound. They
produce varied sensations in the pleximeter finger as well as different sounds that
can be heard.

Test tasks to consolidate the studied material on the topic:


1. Sound waves propagate as
A) transverse waves
B) ultrashort waves
C) longitudinal waves
D) radio waves
E) photons

2. To evoke sound sensations, the wave must have some minimal intensity, called
A) pain sensation
B) noise
C) hormonal spectrum
D) threshold of noise
E) threshold of hearing

3. Hearing threshold value


A) 10¹² W/m²
B) 10⁻¹² W/m²
C) 1 W/m²
D) 10⁶ W/m²
E) 10 W/m²

4. The speed of sound depends on


a) time
b) density of the medium
c) ambient temperature
d) elastic properties of the medium
e) distances
A) a,b,c
B) b,c,d
C) d,e,a
D) c,d,e
E) c,d,a

5. The value that is directly proportional to the logarithm of the ratio of its intensity
to the value corresponding to the threshold of hearing called
A) volume level
B) threshold level
C) pain threshold
D) sound wave intensity
E) noise level

6. The distance between two adjacent nodes and antinodes of the standing wave is
equal to
A) length of traveling waves
B) half of the length of traveling waves
C) doubled wavelength of traveling waves
D) half the distance between nodes
E) doubled distance between nodes

7. Wave that occurs as a result of the interaction (interference) of the incident and
reflected waves
A) standing wave
B) longitudinal wave
C) transverse wave
D) perpendicular wave
E) reflected waves

Laboratory work: Determination of the speed of sound in air by the standing wave method

Purpose of work: to master the interference (standing wave) method of determining the speed of sound propagation, to determine the speed of sound propagation in air by this method, and to analyze the measurement errors.
Appliances and accessories: glass tube with movable piston and ruler, sound
generator with telephone diaphragm.

Work procedure

1. With the permission of the teacher or laboratory assistant, connect the sound generator to the electrical network.

2. Put the "Network" toggle switch on the generator panel to position "On".

3. After 2-3 minutes, turn the knob and set the frequency indicator (Hz) to the value indicated by the teacher.

4. Place the piston near the open end of the pipe and, by turning the knob of the output voltage regulator of the sound generator, set the sound level so that the signal can be heard.
5. Slowly and evenly pushing the piston away from the membrane, determine the coordinates of the nodes of the standing wave. The coordinates must be measured for 3-4 nodes.

6. From the difference in the coordinates of adjacent nodes, determine the length of the standing wave by the formula.

7. From the obtained values of the length of the standing wave, find the speed of sound propagation by the formula.

8. Set another frequency, specified by the teacher, on the sound generator, repeat the operations described in paragraphs 4-7, measure the wavelength and calculate the phase velocity of sound propagation.

9. Using the formulas below, calculate the relative and absolute errors of the measurement of the phase velocity of sound.

10. Enter the results of the measurements and calculations into the table.

№ , ℓ, ̅𝒊
=2ℓ  = , 𝝑 ̅
𝝑   i ̅̅̅̅
∆𝝑 i2 S 𝛅 𝛆
(Hz) (cm) (m) (m/sec)
1
1000

2
1500

3
2000

11. Perform an analysis of the experimental results (a calculation sketch is given after this list):

a) calculate the arithmetic mean value of the velocity:
ῡ = (υ₁ + υ₂ + ... + υₙ)/n = (Σᵢ₌₁ⁿ υᵢ)/n

b) the absolute error of a single measurement:
Δυᵢ = ῡ − υᵢ

c) determine the arithmetic mean value of the absolute error:
Δῡ = (Δυ₁ + Δυ₂ + ... + Δυₙ)/n = (Σᵢ₌₁ⁿ Δυᵢ)/n

d) determine the relative error of the experiment:
ε = (Δῡ / ῡ) · 100%

e) determine the mean square error:
S = √( Σᵢ₌₁ⁿ (Δυᵢ)² / (n(n − 1)) )

f) determine the confidence interval:
δ = S · t(α, n),
where t(α, n) is Student's coefficient, α is the confidence factor, and n is the number of measurements.

j) The final result can be written as: υ = ῡ ± δ m/sec.

12. Make a conclusion.
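A minimal Python sketch of steps a)-j) is given below; the node spacings are invented example values, and the Student's coefficient is taken for α = 0.95 and n = 3 (Appendix A).

```python
# Minimal sketch: processing assumed standing-wave data, λ = 2ℓ and υ = λν,
# followed by the error analysis of steps a)-j).
import math

freq_hz = [1000, 1500, 2000]
node_spacing_cm = [17.0, 11.3, 8.5]   # assumed distances between adjacent nodes
t_alpha_n = 4.30                       # Student's coefficient for alpha = 0.95, n = 3

v = [2 * (l / 100) * f for l, f in zip(node_spacing_cm, freq_hz)]   # υ = λν with λ = 2ℓ, in m/s
n = len(v)
v_mean = sum(v) / n                                        # step a)
dev = [v_mean - vi for vi in v]                            # step b)
eps = sum(abs(d) for d in dev) / n / v_mean * 100          # steps c)-d), relative error in %
S = math.sqrt(sum(d * d for d in dev) / (n * (n - 1)))     # step e), mean square error
delta = S * t_alpha_n                                      # step f), confidence interval

print(f"υ = ({v_mean:.0f} ± {delta:.0f}) m/sec;  ε = {eps:.1f} %")   # step j)
```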

Numerical problems for independent solution

1. (1) A diagnostic ultrasound imaging device operates at a frequency of 6.0 MHz.


What wavelength corresponds to soft tissues in the body? (2) In air, what is the
wavelength associated with this frequency? In soft tissues, the average velocity of
ultrasound propagation is 1540 m/s.
2. (1) An ultrasound wavelength of 0.50 mm is desired for imaging soft tissues in
the body. To achieve this value, what frequency should be used? (2) The
wavelength associated with an ultrasound frequency of 2.0 MHz in some unknown
material is 1.75 mm. What is the sound speed in that material?
3. The human ear is able to perceive sound vibrations with a frequency of 16 to
20,000 Hz as a musical tone. What range of wavelengths of sound can a person
perceive at a speed of sound of 340 m / s?
4. An observer located at a distance of 2 km 150 m from the sound source hears the
sound that came through the air 4.8 seconds later than the sound from the same
source that came through the water. Determine the speed of sound in water if the
speed of sound in air is 345 m / s.

III
STRUCTURE AND FUNCTIONS OF BIOLOGICAL MEMBRANES AND
METHODS FOR STUDYING MEMBRANE STRUCTURE
The cell membrane, also known as the plasma membrane, is a biological
membrane that divides all cells' interiors from the outside world. The cell
membrane controls the passage of substances in and out of cells by being
selectively permeable to ions and organic molecules. It essentially shields the cell
from outside influences. It is made up of a lipid bilayer with proteins inserted in it.
Cell membranes play a role in a variety of cellular activities such as cell adhesion,
ion conductivity, and cell signaling, as well as serving as the attachment surface for
a number of extracellular structures such as the cell wall, glycocalyx, and
intracellular cytoskeleton. Cell membranes can be reconstructed artificially.
The plasma membrane, like all other cellular membranes, is made up of both
lipids and proteins. The phospholipid bilayer, which forms a stable barrier between
two aqueous compartments, is the membrane's fundamental structure. These
compartments are the inside and outside of the cell in the case of the plasma
membrane. Proteins embedded within the phospholipid bilayer perform plasma
membrane functions such as selective molecule transport and cell-cell recognition.
Mammalian red blood cells (erythrocytes) plasma membranes have proven
to be a particularly helpful model for studying membrane structure. Because
mammalian red blood cells lack nuclei and internal membranes, they can easily be
separated for biochemical examination. The initial evidence that biological
membranes are made up of lipid bilayers came from studies of the red blood cell
plasma membrane. In 1925, two Dutch scientists (E. Gorter and R. Grendel)
isolated membrane lipids from a known number of red blood cells with a known
plasma membrane surface area. The surface area occupied by a monolayer of the
extracted lipid spread out at an air-water interface was then calculated. The lipid
monolayer's surface area was found to be double that of the erythrocyte plasma
membranes, indicating that the membranes were made up of lipid bilayers rather
than monolayers.
High-magnification electron micrographs clearly show the bilayer structure
of the erythrocyte plasma membrane. The plasma membrane appears as two dense
lines separated by an intervening space, a morphology known as a "railroad track"
appearance.

The binding of electron-dense heavy
metals used as stains in transmission
electron microscopy to the polar head
groups of the phospholipids results in
this image, which appears as dark lines.
The lightly stained interior portion of
the membrane, which contains the
hydrophobic fatty acid chains, separates
these dense lines.

Animal cell plasma membranes also include glycolipids and cholesterol in addition
to phospholipids. The glycolipids are only found in the plasma membrane's outer
leaflet, with their carbohydrate parts exposed on the cell surface. They are modest
membrane constituents, accounting for just around 2% of the lipids in most plasma
membranes. Cholesterol, on the other hand, is a key membrane constituent of
mammalian cells, with molar levels about equal to phospholipids.
Membrane function is dependent on two general characteristics of phospholipid
bilayers. The basic function of membranes as barriers between two aqueous
compartments is first and foremost due to the structure of phospholipids. The
membrane is impermeable to water-soluble molecules, such as ions and most
biological molecules, since the inside of the phospholipid bilayer is occupied by
hydrophobic fatty acid chains. Second, naturally occurring phospholipid bilayers
are viscous fluids rather than solids. Most natural phospholipids include one or
more double bonds in their fatty acids, which cause bends in the hydrocarbon
chains and make packing them together problematic. As a result of the lengthy
hydrocarbon chains of the fatty acids moving freely within the membrane, the
membrane is soft and flexible. Furthermore, both phospholipids and proteins are
free to migrate laterally within the membrane, which is an important characteristic
for many membrane functions.
Cholesterol plays a unique role in membrane construction due to its tight ring
shape. Cholesterol does not create a membrane on its own; instead, it inserts into a
bilayer of phospholipids with its polar hydroxyl group adjacent to the head groups
of the phospholipids. Cholesterol has different impacts on membrane fluidity
depending on the temperature. Cholesterol obstructs the mobility of the
phospholipid fatty acid chains at high temperatures, making the outer region of the
membrane less fluid and lowering its permeability to tiny molecules. Cholesterol,
on the other hand, has the opposite impact at low temperatures: Cholesterol
prevents membranes from freezing and preserves membrane fluidity by interfering
with fatty acid chain interactions. Although cholesterol is not found in bacteria, it
is a necessary component of the plasma membranes of mammalian cells. Plant

cells lack cholesterol as well, but they do have comparable chemicals (sterols) that
serve the same purpose.
According to recent research, not all lipids in the plasma membrane diffuse easily.
Instead, cholesterol and sphingolipids appear to be concentrated in specific
membrane domains (sphingomyelin and glycolipids). Sphingolipid and cholesterol
clusters are considered to create "rafts" that move laterally through the plasma
membrane and may bind to certain membrane proteins. Although the activities of
lipid rafts are unknown, they may play a role in processes such as cell signaling
and endocytosis, which involves the uptake of extracellular substances.
Proteins are responsible for carrying out specialized membrane tasks, while lipids
constitute the fundamental structural constituents of membranes. Most plasma
membranes are roughly 50 percent lipid and 50 percent protein by weight, with
glycolipid and glycoprotein carbohydrate components accounting for 5 to 10% of
the membrane composition. This ratio equates to around one protein molecule for
every 50 to 100 lipid molecules, due to the fact that proteins are significantly larger
than lipids. The fluid mosaic model of membrane construction, introduced by
Jonathan Singer and Garth Nicolson in 1972, is now widely acknowledged as the
primary paradigm for the architecture of all biological membranes. In this model,
membranes are viewed as two-dimensional fluids in which proteins are inserted
into lipid bilayers.

Figure 3.1. Fluid mosaic model.

The phospholipids have an amphiphilic nature. The hydrophilic end usually has a
negatively charged phosphate group, while the hydrophobic end has two "tails"
that are long fatty acid residues.
Phospholipids in aqueous solutions are driven by hydrophobic interactions, which
cause the fatty acid tails to aggregate in order to minimize interactions with water
molecules. The end result is frequently a phospholipid bilayer: a membrane made

up of two layers of oppositely oriented phospholipid molecules, with their heads
exposed to the liquid on both sides and their tails directed into the membrane.

Figure 3.2. Structure of phospholipid.

Singer and Nicolson distinguished two types of membrane-associated


proteins: peripheral membrane proteins and integral membrane proteins.
Peripheral membrane proteins were operationally defined as proteins that
dissociate from the membrane following treatments with polar reagents that do not
disrupt the phospholipid bilayer, such as solutions of extreme pH or high salt
concentration. Peripheral membrane proteins are soluble in aqueous buffers once
they have been dissociated from the membrane. These proteins are not inserted into
the lipid bilayer's hydrophobic interior. Instead, they are linked to membranes
indirectly via protein-protein interactions. These interactions frequently involve
ionic bonds, which are broken down by high pH or salt.
Integral membrane proteins, unlike peripheral membrane proteins, can only
be released when the phospholipid bilayer is disrupted. Only chemicals that disrupt
hydrophobic interactions can separate portions of these integral membrane proteins
that are integrated into the lipid bilayer. Detergents, which are tiny amphipathic
molecules with both hydrophobic and hydrophilic groups, are the most often used
reagents for solubilizing integral membrane proteins. The hydrophobic regions of
detergents bind to the hydrophobic portions of integral membrane proteins,
displacing membrane lipids. Because the other end of the detergent molecule is
hydrophilic, the detergent-protein complexes are soluble in aqueous solutions.
Many integral proteins are transmembrane proteins, which span the lipid bilayer with sections exposed on both sides of the membrane. These proteins can be seen in electron micrographs of plasma membranes prepared by the freeze-fracture technique, in which the membrane is split and separated into its two leaflets. Transmembrane proteins then appear as particles on the internal faces of the membrane.
Mobility of membrane proteins.
Membrane proteins and phospholipids are unable to move rapidly back and forth
between the membrane's inner and outer leaflets. Proteins and lipids can both
diffuse laterally through the membrane because they are introduced into a fluid
lipid bilayer. This lateral movement was first directly demonstrated in a 1970
experiment by Larry Frye and Michael Edidin, which supported the fluid mosaic
hypothesis. Frye and Edidin created human-mouse cell hybrids by fusing human
and mouse cells in vitro.
Human and mouse proteins were found in different halves of the hybrid cells
immediately after fusion. However, after a brief incubation at 37°C, the human and
mouse proteins were completely intermixed across the cell surface, indicating that
they moved freely across the plasma membrane.
To create hybrid cells, human and mouse cells were fused. The distribution of cell
surface proteins was then examined using fluorescently labeled anti-human and
anti-mouse antibodies (red and green, respectively).

Figure 3.3. Scheme of mixing proteins of human and mouse cells after 40 minutes of incubation.

Membrane lipids and proteins have high mobility, which allows them to diffuse
due to thermal motion. If their molecules move within the same membrane layer,
the process is known as lateral diffusion; if their molecules move from one layer to
another, the process is known as a “flip-flop” transition.

The frequency of molecule jumps due to lateral diffusion is
ν = 2√3 · D / A    (3.1)
where D is the coefficient of lateral diffusion and A is the area occupied by one molecule on the surface of the membrane.
The mean residence time of a molecule in one position is inversely proportional to the hopping frequency:
τ = 1/ν = A / (2√3 · D)    (3.2)
In this case, the mean quadratic displacement of molecules during time t is:
S = √(2Dt)    (3.3)
Lipid vesicles, also known as liposomes, are circular pockets surrounded by a lipid
bilayer. These structures are used in laboratories to study the effects of chemicals
on cells by delivering these chemicals directly to the cell and gaining a better
understanding of cell membrane permeability. Lipid vesicles and liposomes are
created by suspending a lipid in an aqueous solution and then sonicating the
mixture to create a vesicle. Researchers can better understand membrane
permeability by monitoring the rate of efflux from the inside of the vesicle to the
ambient solution. Vesicles can contain molecules and ions by creating the vesicle
with the desired molecule or ion present in the solution. Proteins can also be
embedded in the membrane by solubilizing the appropriate proteins in detergents
and then connecting them to the phospholipids that constitute the liposome. These
provide researchers with a tool to examine various membrane protein functions.
One property of a lipid bilayer is the relative mobility (fluidity) of the individual
lipid molecules and how this mobility changes with temperature (See fig.3.4). The
phase behavior of the bilayer is the name given to this response. A lipid bilayer can
exist in either a liquid or a solid phase at a given temperature. The solid phase is
sometimes known as the "gel" phase. Every lipid has a certain temperature at
which it transitions from the gel to the liquid phase (melt). Lipid molecules are
restricted to the two-dimensional plane of the membrane in both phases, although
in liquid phase bilayers, molecules can freely diffuse within this plane. As a result,
in a liquid bilayer, a particular lipid will rapidly switch places with its neighbor
millions of times per second and will migrate over great distances via a random
walk mechanism.
When a lipid membrane is heated through its transition temperature (about 35 degrees Celsius
for some phospholipids used in drug-delivery liposomes), bilayer permeability increases;
below that temperature the lipids remain in the solid (gel) phase and little drug release is expected.
Figure 3.4. Lipid layer phase behavior.
The biological membrane as a model of an electric capacitor.
The membrane's structure is similar to that of a flat capacitor: its plates are formed by the
surface proteins, and the lipid bilayer serves as the dielectric (see Fig. 3.5).

Figure 3.5. Biological membrane as capacitor.

The plasma membrane operates as a capacitor: the phospholipid bilayer is a thin insulator
with a certain dielectric permittivity that separates two electrolytic media, the
extracellular and intracellular fluids. The membrane capacitance is directly proportional
to the cell surface area and, together with the membrane resistance, defines the membrane
time constant, which determines how quickly the membrane potential responds to changes in
the ion channel currents.
Knowing the limits within which the membrane thickness varies, we can estimate the
dielectric constant of the hydrophobic and hydrophilic regions of the membrane using the
formula for a flat capacitor:
C = εε₀S / d        (3.4)

where ε is the dielectric constant, ε₀ is the electric constant, S is the area of the plates, and d is the distance between them.
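Formula (3.4), written per unit area, can be checked numerically. The sketch below uses the data of numerical problems 3 and 4 at the end of this chapter (d = 10 nm, C = 1.7 mF/m²; hydrophobic layer 3.9 nm, ε = 2); it is only an illustration.

```python
EPS0 = 8.85e-12   # electric constant, F/m

def specific_capacitance(eps, d):
    """Capacitance per unit area of a flat capacitor, C/S = eps*eps0/d (from eq. 3.4)."""
    return eps * EPS0 / d

def permittivity(c_spec, d):
    """Dielectric constant of the membrane estimated from its specific capacitance and thickness."""
    return c_spec * d / EPS0

# data of numerical problem 4: d = 10 nm, C = 1.7 mF/m^2
print(permittivity(1.7e-3, 10e-9))        # about 1.9

# specific capacitance of a bilayer with eps = 2 and a 3.9 nm hydrophobic layer (problem 3)
print(specific_capacitance(2, 3.9e-9))    # about 4.5e-3 F/m^2
```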
The physical properties of the membranes
• Density. The density of the lipid bilayer is about 800 kg/m³, which is lower than that of water.
• Dimensions. According to electron microscopy data, the membrane thickness L ranges from 4 to 13 nm, and different cell membranes are characterized by different thicknesses.
• Viscosity. The lipid layer of the membrane has a viscosity η = 30-100 mPa·s (corresponding to the viscosity of vegetable oil).
The membranes have a high electrical resistivity (about 10⁷ Ohm·m) and a high specific capacitance (approximately 0.5·10⁻² F/m²). The dielectric constant of membrane lipids is 2.
Membranes contain a large number of different proteins. Their number is so great that the surface tension of the membrane is closer to the surface tension at the protein-water interface (σ ≈ 10⁻⁴ N/m) than at the lipid-water interface (σ ≈ 10⁻² N/m). The concentration of membrane proteins depends on the type of cell.
Test tasks to consolidate the studied material on the topic:
1. Lateral diffusion is a transition of
A) ions through bilayer membrane
B) molecules from one lipid layer to another
C) molecules across biological membrane
D) molecules in the membrane within one layer
E) protein molecules from one lipid layer to another

2. The viscosity of the lipid layer of the membrane corresponds to the viscosity of
A) water
B) vegetable oil
C) human blood
D) plasma
E) air

3. Equivalent electrical model of the membrane


A) an inductor
B) a rheostat
C) a hydrodynamic element
D) a flat capacitor
E) a thermodynamic element
4. Polar lipid heads
A) have a charge, hydrophilic and directed to the outside
B) directed to the inner side of the membrane, have no charge
C) tend not to contact with water molecules
D) hydrophobic, directed to the inner side of the membrane
E) hydrophilic, tend not to contact with water molecules

5. How does the permeability coefficient change with increasing diffusion coefficient?
A) increases inversely
B) increases exponentially
C) decreases exponentially
D) increases in direct proportion
E) halved

6. Integral proteins are involved in


A) transport of substances
B) flip flop carry
C) lateral diffusion
D) barrier function
E) isolation

7. The transfer of molecules from one lipid layer to another


A) "flip - flop" transfer
B) light diffusion
C) active transport
D) lateral diffusion
E) passive transport

8. The ability of a membrane to allow certain substances to pass through it is


A) selective permeability
B) lateral diffusion
C) light diffusion
D) flip - flop transfer
E) full permeability

9. The function of the biological membrane, which ensures the delivery of nutrients,
the removal of the final products of metabolism, the creation of ionic gradients
A) transport
B) barrier
C) matrix
D) mechanical
E) energy
10. Physical value, the change of which leads to the transition of the membrane
from the liquid crystal to the gel state and back
A) temperature
B) pressure
C) weight
D) volume
E) square

Numerical problems for independent solution:


1. Calculate the sedentary (residence) time τ and the hopping frequency ν of the lipids of
the sarcoplasmic reticulum membrane if the coefficient of lateral diffusion is D = 45 µm²/sec
and the area of one phospholipid molecule is A = 1.9 nm².
2. Calculate the root mean square displacement of protein molecules in 2 sec if the
coefficient of lateral diffusion for them is approximately 10⁻¹² mm²/sec.

3. How will the electrical (specific) capacitance of the membrane change at its transition
from the liquid-crystal state to the gel state if it is known that in the liquid-crystal
state the thickness of the hydrophobic layer is 3.9 nm and in the gel state it is 4.7 nm?
The permittivity of the lipids is ε = 2.
4. Calculate the permittivity of membrane lipids if the thickness of the membrane is
d = 10 nm and the specific capacitance is C = 1.7 mF/m².

5. At the phase transition of membrane phospholipids from the liquid-crystal state to the
gel state, the thickness of the bilayer changes. How will the capacitance of the membrane
change? How will the electric field strength in the membrane change?
IV
TRANSPORT OF SUBSTANCES ACROSS MEMBRANE.
PASSIVE AND ACTIVE TRANSPORT.
When a drop of colored solution is added to a still liquid, the color spreads out
gradually through the liquid's volume. The dye molecules spread from the
high-concentration region (of the initially injected drop) to lower-concentration
areas. Diffusion is the term for this process.
Diffusion is the primary means of delivering oxygen and nutrients to cells,
as well as of removing waste products from them. Diffusive motion is relatively slow
on a large scale (it may take hours for the colored solution in our example to spread
over a few centimeters), but on the small scale of tissue cells it is rapid enough to
support cell life. Diffusion is the direct result of the random thermal motion of
molecules. Although a thorough explanation of diffusion is beyond the scope of this
handbook, several aspects of diffusive motion can be derived from basic kinetic theory.
The net flux from one region to another is determined by the difference in the
concentration of diffusing particles between the two regions. The flux increases as the
thermal velocity ϑ increases and decreases as the distance Δx between the two regions increases:

J = (D/Δx)·(C₁ − C₂)        (4.1)

where D is called the diffusion coefficient. In our case, the diffusion coefficient is simply

D = Lϑ/2        (4.2)

However, the diffusion coefficient is a more complex function in general because
the mean free path L is affected by the size of the molecule as well as the viscosity
of the diffusing medium. The diffusion coefficient calculated from Eq. (4.2) in our
previous illustration of diffusion through a fluid, where L= 10−8 cm and 𝓿=104
cm/sec, is 5 × 10−5 cm2/sec. For example, the measured diffusion coefficient of salt
(NaCl) in water is 1.09×10−5 cm2/sec. As a result, our straightforward calculation
yields a reasonable estimate of the diffusion coefficient. Of course, larger
molecules have a lower diffusion coefficient. The diffusion coefficients for
biologically important molecules range from 10−7 to 10−6 cm2/sec.
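The estimate quoted above can be reproduced in a few lines of Python; the values of L and ϑ are those given in the text, and the measured NaCl value is used only as a rough check.

```python
L = 1e-8    # mean free path, cm
v = 1e4     # thermal velocity, cm/s

D_estimate = L * v / 2    # eq. (4.2), cm^2/s
D_NaCl = 1.09e-5          # measured diffusion coefficient of NaCl in water, cm^2/s

print(f"estimated D = {D_estimate:.2e} cm^2/s")
print(f"measured D(NaCl) = {D_NaCl:.2e} cm^2/s  (same order of magnitude)")
```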

We've just talked about free diffusion through a fluid so far, but the cells that make
up biological systems are enclosed by membranes that prevent free diffusion. To
keep life functions running, oxygen, nutrients, and waste materials must travel
through these membranes. The biological membrane can be thought of as porous in
the simplest model, with the size and density of the pores determining diffusion
through the membrane. The only function of the membrane is to diminish the
effective diffusion area and hence the diffusion rate if the diffusing molecule is
smaller than the size of the holes. If the diffusing molecule is larger than the size of
the pores, the flow of molecules through the membrane may be barred.

Figure 4.1. Diffusion through a membrane

The net flux of molecules J flowing through a membrane is given in terms of the
permeability of the membrane P. We introduce the permeability coefficient P = DK/l,
where K is the distribution coefficient of the substance between the membrane and the
surrounding solution and l is the membrane thickness.

J= P (C1 − C2) (4.3)


This equation is similar to Eq. (4.1) except that the term D is replaced by the
permeability P, which includes the diffusion coefficient as well as the effective
thickness 𝛥x of the membrane. The permeability depends, of course, on the type of
membrane as well as on the diffusing molecule. Permeability may be nearly zero
(if the molecules cannot pass through the membrane) or as high as 10⁻⁴ cm/sec.
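A short sketch of eq. (4.3) together with P = DK/l; the numbers are not measured values, they are chosen to mirror numerical problem 3 at the end of this chapter.

```python
def permeability(D, K, l):
    """Permeability coefficient P = D*K/l (D - diffusion coefficient in the membrane,
    K - distribution coefficient, l - membrane thickness)."""
    return D * K / l

def membrane_flux(P, c1, c2):
    """Net flux through the membrane, J = P*(c1 - c2), eq. (4.3)."""
    return P * (c1 - c2)

# illustrative values (close to numerical problem 3): D in m^2/s, l in m, concentrations in mol/m^3
P = permeability(D=1.5e-10, K=30, l=8.1e-9)
J = membrane_flux(P, c1=45.0, c2=0.0)
print(f"P = {P:.3f} m/s,  J = {J:.1f} mol/(m^2*s)")
```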
Because permeability is dependent on diffusing species, the cell can
maintain a composition that differs from that of the surrounding environment.
Many membranes, for instance, are permeable to water but not to dissolved
molecules. As a result, water can enter the cell, but the cell's components cannot
leave it. This one-way passage of water is called osmosis. The movement of
the molecules in the type of diffusive motion we've discussed so far is caused by
their thermal kinetic energy. However, some compounds are transported across
membranes via electric fields generated by charge differences across the
membrane.
Biological membranes, particularly their lipids, are amphiphilic in nature,
forming bilayers with an internal hydrophobic layer and an outward hydrophilic
layer. This structure enables transport via simple or passive diffusion, which
involves the diffusion of molecules across the membrane without the need of
metabolic energy or transport proteins. If the transported substance has a net
electrical charge, it will move in response to an electrochemical gradient caused by
the membrane potential as well as a concentration gradient.
Table 4.1
Relative permeability of a phospholipid bilayer to various substances
Type of substance               | Examples                               | Behaviour
Gases                           | CO2, N2, O2                            | Permeable
Small uncharged polar molecules | Urea, water, ethanol                   | Permeable, totally or partially
Large uncharged polar molecules | Glucose, fructose                      | Not permeable
Ions                            | K+, Na+, Cl−, HCO3−                    | Not permeable
Charged polar molecules         | ATP, amino acids, glucose-6-phosphate  | Not permeable

Passive transport refers to the movement of biochemical and other atomic or molecular
substances across membranes. Unlike active transport, this process does not require
chemical energy, because transport across the membrane is always coupled with the growth
of the system's entropy. As a result, passive transport depends on the permeability of the
cell membrane, which in turn depends on the organization and properties of the membrane
lipids and proteins.
Diffusion, facilitated diffusion, filtration, and osmosis are the four main types of
passive transport.

Diffusion (simple)

Diffusion is defined as the net transfer of material from a high-concentration area
to a lower-concentration area. The concentration gradient is the difference in
concentration between the two locations, and diffusion will continue until the
gradient is abolished.

Figure 4.2. Diffusion.

Diffusion is described as transporting solutes "down the concentration gradient"
because it moves materials from a higher concentration area to a lower
concentration area (compared with active transport, which often moves material
from area of low concentration to area of higher concentration, and therefore
referred to as moving the material "against the concentration gradient"). Osmosis
and simple diffusion are comparable. Simple diffusion is the passive movement of
a solute from a high concentration to a lower concentration until the solute
concentration is uniform throughout and equilibrium is reached. Osmosis is similar
to simple diffusion in that it explains the passage of water (not the solute) across a
membrane until both sides of the membrane have an equal concentration of water
and solute. Both simple diffusion and osmosis are passive transport methods that
do not use any of the cell's ATP energy.

Facilitated diffusion

The movement of molecules across the cell membrane via special transport
proteins embedded within the cellular membrane is known as facilitated diffusion,
also known as carrier-mediated diffusion. Many large molecules, such as glucose,
are insoluble in lipids and are too large to pass through membrane pores.

Figure 4.3. Facilitated diffusion.

As a result, it will bind to its specific carrier proteins, and the resulting complex
will be bonded to a receptor site and moved through the cellular membrane.
However, keep in mind that facilitated diffusion is a passive process, and the
solutes continue to move down the concentration gradient.

Filtration
Filtration is the movement of water and solute molecules across the cell membrane
caused by the cardiovascular system's hydrostatic pressure. Only certain solutes
can pass through the membrane pores, depending on their size.
Figure 4.4. Filtration.

The membrane pores of the Bowman's capsule in the kidneys, for example, are
very small, and only albumins, the smallest of the proteins, have any chance of
passing through. The membrane pores of liver cells, on the other hand, are
extremely large, allowing a wide range of solutes to pass through and be
metabolized.

Tonicity is a measure of the effective osmotic pressure gradient, which is the
difference in water potential between two solutions separated by a semipermeable
cell membrane. To put it another way, tonicity is the relative concentration of
dissolved solutes in solution that determines the direction and extent of diffusion.
To describe whether a solution will cause water to move into or out of a cell, three
terms are used: hypertonic, hypotonic, and isotonic.
- if a cell is immersed in a hypertonic solution, there will be a net flow of
water out of the cell, resulting in volume loss. A solution is hypertonic to a
cell if its solute concentration exceeds that of the cell, and the solutes are
unable to cross the membrane.
- placing a cell in a hypotonic solution causes a net flow of water into the cell,
causing the cell to gain volume. If the solute concentration outside the cell is
lower than the solute concentration inside the cell, and the solutes are unable
to cross the membrane, the solution is hypotonic to the cell.
- when a cell is immersed in an isotonic solution, there is no net flow of water
into or out of the cell, and the volume of the cell remains constant. If the
concentration of solutes outside the cell is the same as the concentration
inside the cell and the solutes cannot cross the membrane, the solution is
isotonic to the cell.
Figure 4.5. Hypertonic, hypotonic and isotonic solutions.

When a cell is immersed in a hypertonic solution, water escapes and the cell
shrinks. There is no net water movement in an isotonic environment, so the cell
size does not change. When a cell is placed in a hypotonic environment, water
enters the cell, causing it to swell.

Active transport is the flow of molecules across a cell membrane in the
opposite direction of a gradient or other obstructive factor from an area of lower
concentration to a region of greater concentration (often a concentration gradient).
Active transport, in contrast to passive transport, which uses the kinetic energy and
natural entropy of molecules going down a gradient, employs cellular energy to
push molecules against a gradient, polar repulsion, or other barrier. The accumulation of
large concentrations of substances that the cell requires, such as ions, glucose, and
amino acids, is frequently accomplished by active transport.
Primary active transport is defined as a process that employs chemical energy,
such as adenosine triphosphate (ATP). An electrochemical gradient is used in
secondary active transport. The uptake of glucose in the intestines of humans and
the uptake of mineral ions into root hair cells of plants are both examples of active
transport.

Because the phospholipid bilayer of the membrane is impermeable to the substance
moved or because the substance is pushed against the direction of its concentration
gradient, specialized transmembrane proteins recognize the substance and enable it
to penetrate the membrane when it would not otherwise.

There are two forms of active transport, primary active transport and secondary
active transport.
The proteins involved in primary active transport are pumps that generally employ
chemical energy in the form of ATP. Primary active transport, also known as
direct active transport, transports molecules across a membrane directly using
metabolic energy.
Transmembrane ATPases are the most common enzymes that undertake this type
of transport. The sodium-potassium pump is a major ATPase found in all animals
and is responsible for maintaining cell potential. Redox energy and photon energy
are two further sources of energy for primary active transport (light).
There are several active transport systems in plasma membrane ion (ion pump):
1) The sodium-potassium pump.
2) Calcium pump.
3) A hydrogen pump.

One of the integral membrane proteins is the sodium-potassium pump. It has
enzymatic properties and can hydrolyze adenosine triphosphate (ATP), the primary
source and repository of energy metabolism in cells. This integral protein is known
as sodium-potassium ATPase. Adenosine diphosphate (ADP) and inorganic
phosphate are formed when an ATP molecule decomposes.
As a result, the sodium-potassium pump antiports sodium and potassium ions across the
membrane. The pump molecule exists in two primary conformations, and the transition between
them is driven by ATP hydrolysis; the two conformations determine whether the pump carries
sodium or potassium ions. After ATP is cleaved by the sodium-potassium ATPase, the inorganic
phosphate remains bound to the protein. In this state the sodium-potassium ATPase binds
three sodium ions, which are then pushed out of the cell. The inorganic phosphate is then
released from the protein, and the pump is transformed into a potassium transporter; as a
result, two potassium ions enter the cell. Thus, when one ATP molecule is split, three
sodium ions are pushed out and two potassium ions are brought into the cell. A
sodium-potassium pump may move 150 to 600 sodium ions per second through the membrane. Its
operation maintains the transmembrane gradients of sodium and potassium.

Figure 4.6. The operation of the sodium-potassium pump.

Secondary active transport, on the other hand, employs potential energy,
which is typically obtained by utilizing an electrochemical gradient. This is
accomplished through the use of pore-forming proteins, which form channels
across the cell membrane.
The distinction between active and passive transport is that active transport
requires energy and moves chemicals against their concentration gradient, whereas
passive transport does not and moves substances in the same direction as their
concentration gradient.
In an antiporter, one substrate is transported across the membrane in one direction while
another is cotransported in the opposite direction.

An antiporter (also known as an exchanger or counter-transporter) is a
cotransporter and integral membrane protein that is involved in the secondary
active transport of two or more different molecules or ions in opposite directions
across a phospholipid membrane such as the plasma membrane.

Figure 4.7. Types of secondary active transport.

Two substrates are transported across the membrane in the same direction by a
symporter. Secondary active transport is associated with antiport and symport processes,
in which one of the two substances is transported against its concentration gradient,
utilizing the energy derived from the transport of the other substance (mostly sodium,
calcium, or hydrogen ions) down its own
Specific transmembrane carrier proteins are necessary if substrate molecules are
migrating from areas of lower concentration to areas of higher concentration (in
the opposite direction of, or against, the concentration gradient). The receptors on
these proteins bind to certain chemicals (such as glucose) and transport them
across the cell membrane. This process is termed 'active' transport since it requires
energy. The sodium-potassium pump, which transports sodium out of the cell and potassium
into the cell, is an example of active transport. Active transport also occurs extensively
in the internal lining of the small intestine.
Mineral salts must be absorbed by plants from the soil or other sources, but these
salts exist in very dilute solution. Root hair cells can take up salts from this dilute
solution by actively transporting them in the opposite direction of the concentration
gradient.
Energy is used to transport molecules across a membrane in secondary active
transport, also known as coupled transport or co-transport; however, unlike
primary active transport, there is no direct coupling of ATP; instead, it relies on the
electrochemical potential difference created by pumping ions in and out of the cell.
Allowing one ion or molecule to flow down its electrochemical gradient, possibly against
its concentration gradient, increases entropy and can be used as a source of energy for
metabolism (e.g. in ATP synthase).

Figure 4.8. An example of secondary active transport.

Examples: metal ions, such as sodium, potassium, calcium and magnesium require
ion pumps or ion channels to cross membranes and distribute through the body.
Endocytosis is a type of active transport in which a cell engulfs molecules (such as
proteins) in an energy-consuming process. Because most chemical substances
important to cells are large polar molecules that cannot pass through the
hydrophobic plasma or cell membrane passively, endocytosis and its counterpart,
exocytosis, are used by all cells.
Pinocytosis (cell drinking) and phagocytosis (cell eating) are examples of
endocytosis.

Figure 4.9. Endocytosis.

Exocytosis is a type of active transport in which molecules (such as proteins) are


transported out of the cell by expelling them in an energy-consuming process.
Because most chemical substances important to cells are large polar molecules that
cannot pass through the hydrophobic portion of the cell membrane passively,
exocytosis and its counterpart, endocytosis, are used by all cells.
Membrane-bound secretory vesicles are transported to the cell membrane and their
contents (water-soluble substances like proteins) are secreted into the extracellular
environment during exocytosis. Because the vesicle transiently merges with the
outer cell membrane, this secretion is feasible. Exocytosis is also a method by
which cells can introduce membrane proteins, lipids, and other components into
the cell membrane (such as ion channels and cell surface receptors). Vesicles
containing these membrane components fully merge with the outer cell membrane
and become a part of it.

Figure 4.10. Exocytosis.

Active transport is possible only when the transport of a substance is coupled to the
ATP hydrolysis reaction. ATP energy is used to change the
conformation of the transport protein, which changes its affinity (binding constant)
to different ions. The binding constant of Na + in the cell transporter is much higher
than that of K+. As a result, sodium ions in the cell associate with proteins and are
transported to the outside environment. The penetration of these substances from
the environment into the cell against a concentration gradient is thought to be
related to active Na+ transport via carriers. As the concentration of these compounds in
the medium increases, the rate of their penetration into the cell increases only up to a
specific limit. If the electric potentials on
both sides of the membrane are equal, the ion flow created by active transport
systems is given by the equation:
j = (c₀·P / 2) · [ c₁/(k₁ + c₁) − c₂/(k₂ + c₂) ]        (4.4)

where c₀ is the intramembrane concentration of the ATPase, whose energy drives the ion
transfer; P is the permeability coefficient for the complex of the ion with the protein
molecule; c₁ and c₂ are the ion concentrations on the two sides of the membrane; k₁ and k₂
are the dissociation constants of the ion-protein complex on either side of the membrane.
This equation allows us to calculate the maximum concentration difference that the ion
pump can create.
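Equation (4.4) can be explored numerically. Every value in the sketch below (c₀, P, k₁, k₂ and the ion concentrations) is hypothetical; the point is only to show how the two saturating terms compete, and that the flux vanishes when they become equal, which corresponds to the maximum concentration difference the pump can maintain.

```python
def pump_flux(c0, P, c1, c2, k1, k2):
    """Ion flux created by an active transport system, eq. (4.4)."""
    return (c0 * P / 2) * (c1 / (k1 + c1) - c2 / (k2 + c2))

# hypothetical parameters, arbitrary units
print(pump_flux(c0=1.0, P=1e-6, c1=12.0, c2=145.0, k1=10.0, k2=150.0))
```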

Test tasks to consolidate the studied material on the topic:


1. By what type of transport the substance under the letter (a) passes through the
membrane?

A) through the protein pore


B) through the pore in the lipid bilayer
C) through lipid bilayer
D) light diffusion
E) primary - active transport

2. By what type of transport the substance under the letter (b) passes through the
membrane?

A) through the protein pore


B) through the pore in the lipid bilayer
C) through lipid bilayer
D) light diffusion
E) primary - active transport

3. What kind of transport does oxygen penetrate into the cell through the
membrane?
A) through the pore in the lipid bilayer
B) through lipid bilayer
C) by exocytosis
D) using primary - active transport
E) through the protein pore
4. How does the permeability coefficient change with increasing diffusion
coefficient?
A) increases inversely
B) increases exponentially
C) decreases exponentially
D) increases in direct proportion
E) halved

5. In simple diffusion, particles of matter move


A) through bilipid layer
B) using carriers
C) using sodium-potassium pump
D) primary active transport
E) secondary transport

6. What transport creates concentration gradients and electric potential gradients?


A) simple diffusion
B) facilitated diffusion
C) passive
D) active
E) osmosis

Numerical problems for independent solution:


1. Calculate the distribution coefficient K for a substance if, at a membrane thickness of
l = 18 nm, the diffusion coefficient is D = 0.2 cm²/sec and the permeability coefficient is
P = 210 cm/sec.
2. Calculate the permeability coefficient P for a substance whose flow through the membrane
is J = 5·10⁻⁵ mol/(m²·sec). The concentration of the substance inside the cell is
cᵢ = 1.8·10⁻⁴ mol/L, and outside the cell it is cₒ = 3·10⁻⁵ mol/L.

3. The difference in the concentrations of molecules across a cell membrane is 45 mmol/L,
the distribution coefficient between the membrane and the environment is K = 30, the
diffusion coefficient is 1.5·10⁻¹⁰ m²/sec, and the flow density is j = 25 mol/(m²·sec).
Calculate the thickness of the membrane.

V

ELECTRICAL EXCITABILITY OF TISSUES. BIOPOTENTIALS.
RESTING AND ACTION POTENTIALS OF CELLS.
The human body is a marvel of engineering, with mechanical, electrical, and
chemical systems that enable us to live and operate. The actin and myosin
filaments present in muscles that allow them to contract are an example of a
mechanical system in the body. Neurotransmitters, which are released by neurons
to communicate with other cells, are part of chemical systems. The electrical
potentials that propagate down nerve cells and muscle fibers are also included in
electrical systems. Brain function, muscle contractions or relaxation, heart
function, eye movements and many other body functions are controlled by
biopotentials. The movement of the ion flow into or out of the cell creates
bioelectrical potentials. The concentration gradient of ions on different sides of the
membrane is the main reason for the appearance of a potential difference across the
membrane. These potential differences are called biopotentials. Biopotentials can
be measured using microelectrodes and electronic measuring instruments to gain
insight into the functioning of various living biological systems.
Biopotential (bioelectric potential) is an energy property of the interaction
of charges in the examined live tissue, such as in different parts of the brain, cells,
and other structures. The potential difference between two sites of the tissue,
representing its bioelectric activity and the nature of metabolic activities, is
measured rather than the absolute potential. The biopotential is used to determine
the status and functionality of various organs.
Biopotentials emerge from biological tissue as potential differences between
compartments. In general, the compartments are separated by a (bio)membrane
that maintains ion concentration gradients via an active mechanism (e.g., the
Na+/K+ pump). Hodgkin and Huxley (1952) were the first to create an electronic
equivalent for a biopotential (the action potential in the squid giant axon). A
combination of ordinary differential equations (ODEs) and a model describing the
nonlinear behavior of ionic conductances in the axonal membrane nearly perfectly
described the results.
The resting membrane potential (RMP) is the relatively static membrane
potential of quiescent cells, as opposed to the specific dynamic electrochemical
phenomena known as action potential and graded membrane potential.
The resting membrane potential of cells varies depending on cell type; for neurons,
the resting potential is typically between -50 and -75 mV. This value is determined
by the types of ion channels that are open as well as the concentrations of various
ions in intracellular and extracellular fluids. K+ and organic anions are typically
found in greater concentrations within neurons than outside, whereas Na + and Cl-
are typically found in greater concentrations outside the cell.
When ion channels are open, this difference in concentration creates a
concentration gradient for them to flow down. Most neurons are permeable to K +,
Na+, and Cl- at rest, and as a result, they will all flow along their concentration
gradients, with K+ flowing out and Na+ and Cl- moving in. However, because the membrane is
most permeable to K+, this ion has the greatest influence on the resting membrane
potential, and the resting potential lies closest to the equilibrium potential of K+ (the
membrane potential at which an ion's concentration gradient is balanced) among the three
ions. The action of the Na+/K+ ATPase via active transport maintains these
concentration gradients, allowing the membrane potential to be maintained.

Figure 5.1. Ion distribution at rest potential.

An action potential (AP) occurs when the membrane potential of a specific cell
location rapidly rises and falls, causing adjacent locations to depolarize as well.
Action potentials are generated in a variety of animal cells known as excitable
cells, which include neurons, muscle cells, endocrine cells, glomus cells, and some
plant cells.
The presence of voltage-gated ion channels in a cell's membrane causes action
potentials. A voltage-gated ion channel is a protein cluster embedded in the
membrane that has three distinct properties:
1. It is capable of assuming more than one conformation.
2. At least one of the conformations creates a channel through the membrane that is
permeable to specific types of ions.
3. The transition between conformations is influenced by the membrane potential.
As a result, a voltage-gated ion channel is open for some membrane potential values and
closed for others. The relationship between membrane potential and channel state is, in
most cases, probabilistic and involves a time delay: ion channels switch between
conformations at unpredictable moments, and the rate of transitions and the probability of
each type of change per unit time are determined by the membrane potential.

Figure 5.2. Phases of the action potential of a neuron. The resting membrane potential is equal to -70 mV. In 1 ms the membrane potential rises to -55 mV (the threshold potential). After the threshold is passed, an action potential of about +40 mV develops within 2 ms. The dip to -90 mV corresponds to hyperpolarization, after which the cell returns to rest, and the resting potential of -70 mV is restored by about t = 5 ms.

Sodium ion channels open as the membrane potential rises, allowing sodium
ions to enter the cell. The opening of potassium ion channels, which allow
potassium ions to exit the cell, occurs next. The inflow of sodium ions raises the
concentration of positively charged cations in the cell, resulting in depolarization,
in which the cell's potential is higher than its resting potential. At the peak of the
action potential, sodium channels close, while potassium continues to leave the
cell. Potassium ion outflow lowers the membrane potential and hyperpolarizes the
cell. For small increases in voltage, the potassium current exceeds the sodium current,
and the voltage returns to its normal resting state of approximately -70 mV. The sodium
current takes over when the voltage rises above a critical threshold, which is typically
about 15 mV above the resting value. This causes a runaway situation in which the sodium
current's positive feedback activates even more sodium
channels. As a result of this, the cell fires, resulting in an action potential. A firing
rate, also known as a neural firing rate, is the frequency with which a neuron
evokes action potentials.
The potential that is created is governed by Nernst's equation, which expresses the
relationship between an ion's concentration outside and inside a cell and its equilibrium
potential V:

V = (RT / zF) · ln([X]ECF / [X]ICF)        (5.1)

where R is the gas constant, T is the temperature in kelvin, F is Faraday's constant
(approximately 96485 C/mol), z is the valence of the ion, and [X]ECF and [X]ICF are the
ionic concentrations in the extracellular and intracellular fluid, respectively.
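For illustration, the Nernst potentials for the frog skeletal muscle concentrations listed in Table-1 below can be evaluated with a few lines of Python; RT/F ≈ 26 mV is used here as an approximate value near room temperature.

```python
import math

RT_F = 26.0   # mV, approximate value of RT/F near room temperature

def nernst(z, c_out, c_in):
    """Equilibrium (Nernst) potential in mV, eq. (5.1)."""
    return (RT_F / z) * math.log(c_out / c_in)

# frog skeletal muscle concentrations (mmol/L) from Table-1 below
print(f"E_K  = {nernst(+1,   4, 155):7.1f} mV")   # about -95 mV
print(f"E_Na = {nernst(+1, 145,  12):7.1f} mV")   # about +65 mV
print(f"E_Cl = {nernst(-1, 120,   4):7.1f} mV")   # about -88 mV
```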
The Goldman–Hodgkin–Katz (GHK) equation accounts for the influence of the other ionic
species in the internal and external media; P is the permeability coefficient of the
membrane for a particular ionic species (K, Na, Cl):

E = (RT / F) · ln[ (PK[K⁺]o + PNa[Na⁺]o + PCl[Cl⁻]i) / (PK[K⁺]i + PNa[Na⁺]i + PCl[Cl⁻]o) ]        (5.2)

Table-1. Example concentrations of the major ion species in frog skeletal muscle

Species | Intracellular (mmol/L) | Extracellular (mmol/L)
Na+     | 12                     | 145
K+      | 155                    | 4
Cl−     | 4                      | 120

At rest, the membrane permeability for K+ ions is considerably greater than for Na+ and
also greater than for Cl−: P(K) >> P(Na), P(K) > P(Cl). For example, for the squid axon
the ratio of the permeability coefficients is
P(K) : P(Na) : P(Cl) = 1 : 0.04 : 0.45.
In the excited state:
P(K) : P(Na) : P(Cl) = 1 : 20 : 0.45,
i.e., compared with the non-excited state, the permeability for sodium increases 500-fold
upon excitation.
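A minimal sketch of the GHK equation (5.2). It combines the frog muscle concentrations of Table-1 with the squid-axon permeability ratios quoted above, so the resulting numbers are only illustrative, not measured values; RT/F ≈ 26.7 mV is taken as an approximate body-temperature value.

```python
import math

RT_F = 26.7   # mV, approximate value of RT/F at about 37 degrees Celsius

def ghk(P_K, P_Na, P_Cl, K_o, K_i, Na_o, Na_i, Cl_o, Cl_i):
    """Goldman-Hodgkin-Katz membrane potential in mV, eq. (5.2)."""
    num = P_K * K_o + P_Na * Na_o + P_Cl * Cl_i
    den = P_K * K_i + P_Na * Na_i + P_Cl * Cl_o
    return RT_F * math.log(num / den)

conc = dict(K_o=4, K_i=155, Na_o=145, Na_i=12, Cl_o=120, Cl_i=4)  # mmol/L, from Table-1

print("at rest:      %6.1f mV" % ghk(1, 0.04, 0.45, **conc))   # roughly -77 mV
print("when excited: %6.1f mV" % ghk(1, 20.0, 0.45, **conc))   # roughly +50 mV
```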

Figure 5.3. Examples of various action potentials.

We shall use some electrical engineering approaches to the examination of
the axon's electrical properties. However, the increased complexity is required for
a quantitative comprehension of the neurological system. Although the axon is
frequently compared to an electrical cord, the two are vastly different. Even yet,
examining the axon as an insulated electric cable buried in a conducting fluid can
provide some insight into its operation. We must consider the resistance of the
fluids both inside and outside the axon, as well as the electrical characteristics of
the axon membrane, in such an analysis. Capacitance and resistance are both
characteristics of the membrane since it is a leaky insulator. To specify the cable
qualities of the axon, we'll need four electrical parameters.
Table-2

Property                                                                 | Nonmyelinated axon | Myelinated axon
1. Axon radius                                                           | 5×10⁻⁶ m           | 5×10⁻⁶ m
2. Resistance per unit length of fluid both inside and outside axon (r) | 6.37×10⁹ Ohm/m     | 6.37×10⁹ Ohm/m
3. Conductivity per unit length of axon membrane (gm)                   | 1.25×10⁻⁴ S/m      | 3×10⁻⁷ S/m
4. Capacitance per unit length of axon (C)                               | 3×10⁻⁷ F/m         | 8×10⁻¹⁰ F/m

The method of pulse transmission in nerves with unmyelinated axons is dependent
on the lateral spreading of excitability by the electric field of stimulation itself. The
neighboring proteins in the membrane are triggered by a local action potential.
Because the proteins require a refractory period of several milliseconds after an
excitation to become excitable again, the impulse can only go in one way.
This propagation resembles the transmission of a voltage pulse through an electric cable.
Unlike in a wire, however, the time properties of the nerve pulse stay consistent even
after transmission over a considerable distance.
pulse transmission in a cable is obviously far faster than in a nerve axon. Many
vertebrate and a few invertebrate axons with a fatty sheath made of myelin take
advantage of this benefit of simple electrical conductivity. Myelinated nerves are
those that have been myelinated. The sheath is broken at millimeter intervals by
so-called Ranvier nodes. Simple electric conductivity of the pulse occurs in
myelinated regions, much like in a cable. Ranvier's nodes represent membrane
areas that are excitable in a normal manner. If one of Ranvier's nodes is activated,
the pulse propagates along the myelinated length by simple electric conduction and
excites the next node. This so-called saltatory conduction is a method of pulse
amplification that results in speedier data transmission.
We can describe the propagation of an impulse along nerve fibers using the telegraph
equation:

∂²φ/∂x² = (4ρa/D) · (CM·∂φ/∂t + φ/(ρM·l))        (5.3)

where D is the diameter of the fiber, l is the thickness of the membrane, CM is the
specific electrical capacitance of the membrane, ρa is the axoplasm resistivity, and ρM is
the membrane resistivity.
The solution of the telegraph equation is

φ = φ0·exp(−x/λ)        (5.4)

where x is the distance and φ0 is the potential at the point x = 0.
λ is the length constant of the fiber:

λ = √(D·l·ρM / (4·ρa))        (5.5)
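To get a feel for eqs. (5.4) and (5.5), the sketch below computes the length constant λ and the exponential decay of the potential for an assumed set of fiber parameters; the numbers are illustrative only and are not taken from Table-2.

```python
import math

D = 10e-6        # fiber diameter, m (assumed)
l = 8e-9         # membrane thickness, m (assumed)
rho_m = 1e7      # membrane resistivity, Ohm*m (assumed)
rho_a = 1.0      # axoplasm resistivity, Ohm*m (assumed)

lam = math.sqrt(D * l * rho_m / (4 * rho_a))   # length constant, eq. (5.5), m
print(f"length constant lambda = {lam * 1e3:.2f} mm")

phi0 = 100.0     # potential at x = 0, mV
for x_mm in (0.0, 0.5, 1.0, 2.0):
    phi = phi0 * math.exp(-x_mm * 1e-3 / lam)  # eq. (5.4)
    print(f"x = {x_mm:3.1f} mm:  phi = {phi:6.1f} mV")
```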

How Fast are Nerve Impulses?


• Action potentials can move along axons at speeds ranging from 0.1 to 100
m/s. This means that nerve impulses can travel from one part of the body to
another in a matter of milliseconds, allowing for quick responses to stimuli.
(Impulses are much slower than electrical currents in wires, which travel at
close to the speed of light, 3x108 m/s.)
The speed is affected by 3 factors:
• Temperature - The higher the temperature, the faster the speed. As a result,
homoeothermic (warm-blooded) animals respond faster than poikilothermic
(cold-blooded) animals.
• Axon diameter - The larger the diameter, the faster the speed. To speed up
their responses, marine invertebrates that live at temperatures close to 0°C
have developed thick axons. This explains why squid have such large axons.
• Myelin sheath - Only vertebrates have a myelin sheath that protects their
neurons. Voltage-gated ion channels are only found at Ranvier nodes, and
the myelin sheath acts as a good electrical insulator between the nodes. As a
result, the action potential can jump large distances (1mm) from node to
node, a process known as saltatory propagation. This dramatically increases
the speed of propagation, so that while nerve impulses in unmyelinated
neurons have a maximum speed of around 1 m/s, they travel at 100 m/s in
myelinated neurons.

Test tasks to consolidate the studied material on the topic:

1. Resting potential is created due to


A) having a period of refractoriness
B) the presence of the threshold value of the depolarizing potential
C) potassium ion concentration differences inside and outside the cell
D) differences in the concentration of sodium ions
E) flow of sodium ions inside the cell

2. When cells are excited


A) the flow of sodium ions into the cell decreases
B) the flow of potassium ions into the cell increases
C) sodium ion flux does not change
D) the flow of potassium and sodium ions does not change
E) the flow of sodium ions into the cell increases

3. The speed of impulse propagation along the nerve fibers depends on


A) axon length
B) thickness of myelin sheath
C) presence of myelin sheath
D) quantities of dendrites
E) number of Ranvier nodes

4. In which case can the value of the membrane potential change?

A) when the ratio of permeability coefficients are Pk: PCl = 1: 0.04


B) under the influence of irritants
C) when potassium ions inside the cell are larger than outside
D) when the ratio of the permeability coefficients are Pk: PNa = 1: 0.04
E) when all sodium channels are closed

5. In which fibers is the duration of the action potential longer?

A) in nerve cells
B) in cardiac muscle fibers
C) in skeletal muscle fibers
D) in axons
E) in myelinated fibers

Situational tasks and questions for independent work


1. Determine the Nernst potential of each of these ions at room temperature (25 °C) using
the table below of ionic concentrations inside and outside the cell. Assume that at
25 degrees Celsius RT/F simplifies to 26 mV. The valence of Na and K is +1, while that of
Cl is –1.

Ion | Outside | Inside
K   | 125 mM  | 410 mM
Na  | 450 mM  | 60 mM
Cl  | 380 mM  | 45 mM

2. A certain single valence ion has a Nernst potential of –60 mV. Assuming that
this measurement is being taken within the body (i.e. at normal body temperature),
calculate the ratio of this ion's concentration.
3. Which ion is most responsible for the depolarization of the nerve cell? And for
repolarization?
4. Sodium ion channels open much faster than potassium ion channels. What is the
significance of this? Can you imagine how the action potential would be different if
potassium channels opened significantly faster than sodium channels?

VI
PHYSICAL FUNDAMENTALS OF ELECTROCARDIOGRAPHY.
The electric fields linked with cell activities reach all the way to the animal's
surface. As a result, we can detect electric potentials throughout the skin's surface
that indicate the collective cell activity associated with various physiological
functions. Clinical techniques have been created based on this effect to obtain
information about the activity of the heart and the brain from the skin surface
(electrocardiography and electroencephalography, respectively). It is feasible to gather
information on the functioning of specific organs by measuring potential changes between
appropriate places on the body's surface. The surface potentials are usually very small
and, therefore, must be amplified before they can be displayed for examination.
Because the electrical impulses generated in this manner are frequently too
weak to operate the final equipment that displays the signals for our observation,
an amplifier is used to boost the signal's power and amplitude. The display device
is subsequently driven by the amplified signal. In some form or another, electrical
technology is used in most diagnostic equipment in medicine.
The Electrocardiograph.
A device that records the surface potentials associated with the electrical
activity of the heart is called an electrocardiograph (ECG). The electrical activity
of the muscles of the heart is recorded using electrodes that are placed on the
surface of the patient's skin. The electrodes are usually attached to special places
on the arms and legs and to the chest region above the heart. The potential difference
between two electrodes is measured simultaneously (Fig. 6.1).

Figure 6.1 Diagnostic equipment.

Each electrical stimulus takes the form of a wave and so patterns emerge made up
of a number of connected waves. A standard ECG is printed at 50 or 25 mm per
second. In this way it is possible to calculate the duration of individual waves.
Figure 6.2 shows the change in the potentials of ECG intervals under normal
conditions. This wave forms are identified by the letters P, Q, R, S, and T. The
shape of the ECG waves depends on the location of the electrodes. A competent
person can determine the work of the heart by the shapes and deviations of these
waves. The amplitude of the waves is fixed in mV (millivolt) and can be between
0.1 mV and 5 mV; the largest is the R wave (up to 5 mV). In diagnostics it is important to
know the amplitudes of the waves and the durations of the intervals.
The pacemaker, which is a specialized group of muscle cells towards the top of the
right atrium, starts the heart's rhythmic contractions. The action potential
propagates between the two atria immediately after the pacemaker fires. The
electrical activity that causes the atria to contract is connected with the P wave.
The action potential linked with the contraction of the ventricles causes the QRS
wave. Currents that bring about the recovery of the ventricle for the following
cycle cause the T wave.

Figure 6.2 An electrocardiogram.

When a cardiac myocyte depolarizes, the inside of one portion of the cell may be
electrically neutral, while the other section (which has not yet depolarized) has a
negative net electric charge. This denotes the presence of a net dipole moment
vector in the cell.
A system consisting of two equal in value but oppositely charged particles
separated by a distance l is called an electric dipole. The main characteristic of a
dipole is the dipole moment, denoted by the letter P and the direction of this vector
from negative to positive pole.
The electric dipole, as shown, consists of two equal and opposite charges, +q and –
q, separated by a distance l. The dipole moment P is defined: P=q·l
We define the vector dipole moment P as a vector whose magnitude is equal to
the dipole moment and that points from the negative charge to the positive one.

Figure 6.3. An electric dipole.

• A large electric dipole serves as the heart. The dipole's orientation and strength
change with each heartbeat.
• A boundary separates the negative charges of depolarized cells from the positive
charges of cells that have not yet depolarized in the heart at any given time.
• The heart's dipole electric field and potential extend throughout the body.

An electric dipole generates an electric field around itself, and the potential of a
point in that field can be calculated using the formula

φ = (1 / (4πεε₀)) · (P·cosθ / r²)        (6.1)

The work done in rotating the dipole in the field from angle φ₁ to angle φ₂ is

W = pE·∫ sinφ dφ (from φ₁ to φ₂) = pE·cosφ₁ − pE·cosφ₂        (6.2)

The magnitude of the torque M depends on the field strength E, the dipole moment P and the
angle α between these vectors:

M = P·E·sinα        (6.3)
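The sketch below evaluates the dipole formulas (6.1) and (6.3). The dipole moment of a water molecule and the field strength in the second line coincide with situational task 1 at the end of this chapter, so it can be used to check your own answer; the other numbers are purely illustrative (ε ≈ 81 is the standard permittivity of water).

```python
import math

EPS0 = 8.85e-12   # electric constant, F/m

def dipole_potential(p, r, theta, eps=1.0):
    """Potential of a point in the field of a dipole, eq. (6.1)."""
    return p * math.cos(theta) / (4 * math.pi * eps * EPS0 * r ** 2)

def max_torque(p, E):
    """Maximum torque on a dipole, M = p*E*sin(alpha) at alpha = 90 degrees, eq. (6.3)."""
    return p * E

# illustrative dipole of 1e-12 C*m observed 0.2 m away at 60 degrees, in water (eps ~ 81)
print(f"phi   = {dipole_potential(1e-12, 0.2, math.radians(60), eps=81):.2e} V")

# a water molecule (p = 3.7e-29 C*m) in a field of 20 kV/m, as in situational task 1
print(f"M_max = {max_torque(3.7e-29, 2e4):.2e} N*m")
```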

During cardiac depolarization, the extracellular fluid surrounding the myocardium becomes
more negative as the action potential spreads from the sinus node in the atria toward the
base of the heart and through the ventricles, because positively charged sodium and
calcium ions enter the cardiac myocytes.

Figure 6.4. Depolarisation of the heart.

Action potential of cardiomyocytes
Cardiomyocytes, or cardiac muscle cells, are the muscle cells that make up the heart
muscle. Within the heart, there are two types of cells: cardiomyocytes and cardiac
pacemaker cells. Cardiomyocytes make up the atria (the chambers where blood enters the
heart) and the ventricles (the chambers where blood is collected and pumped out of the
heart). These cells must have the
ability to shorten and lengthen their fibers, and the fibers must be stretchable.
These functions are crucial to the heart's appropriate shape when it beats.
The impulses that cause the heart to beat are carried by cardiac pacemaker cells.
They are found all across the heart and are responsible for a variety of functions.
For starters, they're in charge of being able to create and send electrical impulses
on their own. They must also be able to receive and respond to brain electrical
impulses. Finally, they must be capable of transmitting electrical impulses from
one cell to another.
Cellular bridges link all of these cells together. Intercalated discs are porous
junctions that generate connections between cells. They allow sodium, potassium,
and calcium to move freely across cells. This makes depolarization and
repolarization in the myocardium much easier. The cardiac muscle can work as a
single coordinated unit thanks to these connections and bridges.
Cardiomyocytes are about 100 μm long and 10-25 μm in diameter.
• Phase 0: Depolarization
• Phase 1: Brief repolarization
• Phase 2: Plateau
• Phase 3: Repolarization
• Phase 4: The resting potential is -90 mV. Gap junctions allow Na+ and Ca2+ ions from neighboring cells to leak into the cell, raising the membrane potential to -70 mV.

Because the wave of depolarization does not spread evenly and instantly, and
because the mass of the heart's walls is not equal, the least and most depolarized
mass of myocardium changes over time. As the wave of depolarisation travels
across the chambers of the heart, it resembles a rotating battery with a positive and
negative terminal spinning in three dimensions.

Figure 6.5. Action potential of cardiomyocytes

The cardiac dipole's movements generate a current that flows towards or away
from electrodes attached to the skin as it rotates around the heart. The direction in
which the dipole is pointed with respect to a pair of electrodes determines whether
the dipole creates an upward or downward deflection (or none at all). ECG
examines 12 leads:
1) Bipolar 3 standard leads – I, II, III (the potential difference between the two
limbs is measured);

Figure 6.6. Standard leads

2) Unipolar 6 chest leads – V1,V2, V3, V4, V5, V6. (the potential of the place
of the attached electrode on the chest is measured);
Figure 6.7. Chest leads

3) 3 augmented leads – aVR, aVL, aVF. (These are termed unipolar leads because there is a
single positive electrode that is referenced against a combination of the other limb
electrodes).

Figure 6.8. Augmented leads

A full 12-lead ECG depicts the movement of the cardiac dipole from a variety of
cleverly arranged points of view.
• The dipole exists because there is a charge difference between different areas of
the myocardium.
• An ECG reports the orientation and magnitude of the cardiac dipole from
multiple perspectives.
• There is no dipole and the ECG is flat when the heart is completely depolarised
or repolarised (isoelectric).
Einthoven’s triangle.

Einthoven's triangle, formed by the two shoulders and the pubis, is an imaginary
configuration of three limb leads in a triangle used in electrocardiography. With the
heart in the center, the shape forms an inverted equilateral triangle. It is named after
Willem Einthoven, the theorist who proposed its existence.
These measuring sites were employed as contacts for Einthoven's string
galvanometer, the first practical ECG machine, by submerging the hands and feet
in pails of salt water.
Standard limb lead configurations are used in bipolar recordings, as shown in the
diagram. Lead I, which has the positive electrode on the left arm and the negative
electrode on the right, monitors the potential difference between the two arms by
convention. An electrode on the right leg serves as a reference electrode for
recording purposes in this and the other two limb leads. The positive electrode is
on the left leg and the negative electrode is on the right arm in the lead II
configuration. The positive electrode on Lead III is on the left leg, whereas the
negative electrode is on the left arm. The heart is in the center of the equilateral
triangle formed by these three bipolar limb leads, which is known as Einthoven's
triangle after Willem Einthoven, who invented the ECG in the early 1900s. It
makes no difference in the recording whether the limb leads are attached to the end
of the limb (wrists and ankles) or the origin of the limb (shoulder or upper thigh),
because the limb can simply be considered as a long wire conductor originating
from a location on the trunk of the body.

Figure 6.9. Einthoven's triangle.


The ECG axis represents the major direction of the heart's overall electrical
activity. It can be normal, skewed to the left (left axis deviation, or LAD), skewed
to the right (right axis deviation, or RAD), or indeterminate (northwest axis). The
most important axis to determine is the QRS axis. The electrical axis of the heart
is the frontal-plane projection of the resulting vector of ventricular excitation (its
projection onto standard lead I). Normally it is directed downward and to the right
(normal values: 30°-70°), but it can lie outside these limits in tall people, people with
excess body weight, and children (a vertical electrical axis with an angle of 70°-90°, or
a horizontal one with an angle of 0°-30°).
Figure 6.10. Position of the cardiac axis

ECGs are normally printed on a grid. The horizontal axis represents time and
the vertical axis represents voltage. The standard values on this grid are shown in
the adjacent image:
• A small box is 1 mm × 1 mm and represents 0.1 mV × 0.04 seconds.
• A large box is 5 mm × 5 mm and represents 0.5 mV × 0.20 seconds.
The "large" box is represented by a heavier line weight than the small boxes.

Figure 6.11. Electrocardiogram grid.

A phonocardiogram (or PCG) is a high-fidelity recording of the heart's sounds
and murmurs using a machine called a phonocardiograph; thus,
phonocardiography is the recording of all the sounds made by the heart during a
cardiac cycle.
The ballistocardiograph (BCG) is a device that measures the heart's ballistic
forces. With each heartbeat, the downward passage of blood through the
descending aorta causes an upward rebound, propelling the body higher. The body
moves downward and upward in a recurring pattern as different segments of the
aorta expand and contract. Ballistocardiography is a graphical representation of
the repetitive motions of the human body caused by the rapid ejection of blood into
the major arteries with each heartbeat. It's a vital sign with a frequency range of 1–
20 Hz that's caused by the mechanical movement of the heart and can be detected
using noninvasive methods from the body's surface.
Experimental task: determination of the durations of intervals and amplitudes of the waves.
Task 1. Measurement work using an electrocardiogram.
An example of how to measure the duration of waves, intervals and segments:

PR interval 0.12-0.20 sec    QT interval 0.4-0.43 sec
QRS duration 0.08-0.10 sec   RR interval 0.6-1.0 sec
Using a real electrocardiogram that will be given by the teacher, do the following work:
1. Find the standard leads I, II, III.
2. Select one heart beat.
3. Measure the duration of the intervals indicated in the table.

4. Using the formula t = ℓ/υ, where ℓ is the distance between the corresponding waves and υ is the tape speed, calculate the duration of the intervals in three leads.
5. Compare the received data with the norm.
υ = 25 mm/sec
Table 1.

Interval | ℓI (mm) | ℓII (mm) | ℓIII (mm) | tI (sec) | tII (sec) | tIII (sec) | Normal duration (sec)
PQ       |         |          |           |          |           |            | 0.12-0.2
ST       |         |          |           |          |           |            | 0.10-0.18
QRS      |         |          |           |          |           |            | 0.06-0.10
RR       |         |          |           |          |           |            | 0.8-0.9
QT       |         |          |           |          |           |            | 0.36-0.4
Task 2. Calculate the potential difference in the leads
1. Measure the height (h) of the ECG waves for each lead and put the data into the table; calculate the potential difference (U) that corresponds to each wave from the measured height (h) and the sensitivity (S) by the formula U = h/S.
2. Compare the obtained results with the norm and make a conclusion.
S = 10 mm/mV
Table 2.

Wave | hI (mm) | hII (mm) | hIII (mm) | S (mm/mV) | UI (mV) | UII (mV) | UIII (mV) | Normal amplitude (mV)
P    |         |          |           |           |         |          |           | 0.2-0.3
Q    |         |          |           |           |         |          |           | 0.05-1
R    |         |          |           |           |         |          |           | 0.8-1.5
S    |         |          |           |           |         |          |           | 0.05-1

3. Using the calculated values of UI, UII and UIII, calculate tg α:

tg α = (1/√3) · (UII + UIII) / (UII − UIII)

4. Calculate the pulse rate:

P = 60 / (RR · t)

where RR is the distance between adjacent R waves (in mm) and t = 0.04 sec is the time corresponding to 1 mm of the tape.
5. Make a conclusion.
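The arithmetic of Tasks 1 and 2 can be collected in a short script. The measured distances and heights used below are placeholders to be replaced with your own readings; the tape speed, sensitivity and the 0.04 s per small box follow the values given above.

```python
import math

TAPE_SPEED = 25.0    # mm/s
SENSITIVITY = 10.0   # mm/mV

def interval_duration(distance_mm):
    """Duration of an interval, t = l / v (Task 1, step 4)."""
    return distance_mm / TAPE_SPEED

def wave_amplitude(height_mm):
    """Potential difference of a wave, U = h / S (Task 2, step 1)."""
    return height_mm / SENSITIVITY

def axis_angle(U2, U3):
    """Electrical axis angle, in degrees, from tg(alpha) = (U_II + U_III) / (sqrt(3)*(U_II - U_III))."""
    return math.degrees(math.atan((U2 + U3) / (math.sqrt(3) * (U2 - U3))))

def pulse_rate(rr_mm, t_small_box=0.04):
    """Heart rate in beats per minute, P = 60 / (RR * t)."""
    return 60.0 / (rr_mm * t_small_box)

# placeholder measurements (mm) -- replace with your own readings
print(interval_duration(4.0))   # a PQ distance of 4 mm        -> 0.16 s
print(wave_amplitude(12.0))     # an R wave of 12 mm            -> 1.2 mV
print(axis_angle(1.2, 0.4))     # UII = 1.2 mV, UIII = 0.4 mV   -> about 49 degrees
print(pulse_rate(20.0))         # 20 mm between R waves         -> 75 beats/min
```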
Situational tasks and questions for independent work
1. What is the maximum moment of force acting in an electric field of intensity
E = 20 kV/m on a water molecule with p = 3.7·10⁻²⁹ C·m? What is the difference between the
action of uniform and non-uniform fields on a molecule?
2. Find the potential of the field created by the dipole at point A, at a distance of r
= 0.5 m, in a direction at an angle of 30 degrees relative to the electric moment p
of the dipole. Medium is water. The dipole is formed by charges q = 2 * 10-7 C,
located at a distance of l = 0.5 cm.
3. Why do we compare the work of the heart with the work of an electric dipole?
4. Complete the sentences by putting words or phrases in the right place: (total
dipole moment, frontal plane, projections, an equilateral triangle, amplitude,
Einthoven triangle, cosine of the angle).
One of the significant problems in electrocardiography is determining the direction
of the electrical axis of the heart. It is determined by measuring the ………… of
the ECG deviations in standard Einthoven's leads. Standard leads make it possible
to study the projection of the electrical axis of the heart on the ………….
To determine the direction of the electrical axis of the heart, it is necessary to
introduce some simplifications:
- neglect the electrical resistance of the limbs;
- consider Einthoven's triangle as equilateral;
- consider that the heart is located in the center of …………...
The amplitude of each ECG deviation is equal to the ………… of the heart
multiplied by the ………… between the electrical axis of the heart and the axis of
the corresponding lead. These amplitudes can also be defined as the ……………
of the total dipole moment of the heart on the corresponding lead axes, which are
the sides of the………. .

VII

PHYSICAL FUNDAMENTALS OF ELECTROENCEPHALOGRAPHY


Electroencephalography (EEG) is a method of recording electrical activity in the
brain via electrophysiological monitoring. The electrodes are usually placed
along the scalp, making the method noninvasive. EEG monitors voltage variations caused by
ionic currents in the brain's neurons. In medicine, EEG refers to the
recording of the brain's spontaneous electrical activity over time using several
electrodes placed on the scalp.
In 1929, Hans Berger, a German physician, found that electrodes placed on the scalp could
detect distinct patterns of electrical activity. Scientists began studying
these "brain waves" after confirming that they were truly recordings from the brain
and not artifacts of muscle or scalp. EEG remains a medically important recording
of brain function today. The study of the relationship between specific
brain waves and sleep phases, emotional states, psychological profiles, and types
of mental activity is ongoing in medical and basic research.
The EEG signal is made up of different frequency components and the amplitude
of the signal varies in the different frequency bands: alpha, beta, delta, theta,
gamma.
Electrodes are placed on the subject's scalp to record their EEG. When comparing
and reporting results, the electrode placement is critical to guarantee that the data
are recorded from the correct region. The electrodes are put on the head in
particular areas, and an alphanumeric name is assigned to each electrode site. The
letters correspond to the electrode placement zones: F - frontal, C - central,
T - temporal, P - parietal, and O - occipital. The numbers indicate the
hemisphere: odd numbers denote the left half and even numbers the right.
Local current flows are formed when brain cells (neurons) are stimulated. The
currents that flow during synaptic excitations of the dendrites of numerous
pyramidal neurons in the cerebral cortex are usually measured by EEG. Combined
postsynaptic graded potentials from pyramidal cells form electrical dipoles
between the soma (the body of the neuron) and the apical dendrites (the neural
branches), causing electrical potential differences. Brain electrical current consists mostly of
Na+, K+, Ca2+, and Cl- ions that are pumped through channels in neuron membranes
in the direction governed by membrane potential. The detailed microscopic image
is more complex, with many types of synapses involving a number of
neurotransmitters. Electrical activity that can be recorded on the skull surface can
only be generated by huge populations of activated neurons. Current passes
through the skin, the skull, and several additional layers between the electrode and
neural layers. The scalp electrodes detect weak electrical signals, which are
enormously amplified and then printed on paper or saved to computer memory.

Synapses connect neurons in neural networks so that they can communicate with
one another.
The amount and location of synaptic inputs that pyramidal neurons receive
determine their ability to assimilate information. Approximately 30,000 excitatory
and 1700 inhibitory (IPSPs) inputs are received by a single pyramidal cell.
Inhibitory (IPSPs) inputs terminate on dendritic shafts, the soma, and even the
axon, whereas excitatory (EPSPs) inputs terminate primarily on dendritic spines.
The excitatory neurotransmitter glutamate can activate pyramidal neurons, while
inhibitory neurotransmitters such as GABA can inhibit them.
Neurons generate two main kinds of electrical activity: a pulsed discharge (action potential)
with a duration of about 1 ms, and slower (graded) oscillations of the membrane
potential - the inhibitory and excitatory postsynaptic potentials (PSPs).
The change in membrane potential causes the appearance of two dipoles in
pyramidal cells, differing in cytological localization. One of them, the somatic
dipole, arises between the soma and the dendritic trunk: it is created when the
membrane potential of the cell body changes, together with the current in the dipole
and in the external environment. During an excitatory PSP, or during pulse generation
in the body of the neuron, the dipole moment vector is directed from the soma along
the dendritic shaft, whereas an inhibitory PSP forms a somatic dipole with the dipole
moment in the opposite direction. The other dipole, called the dendritic dipole, arises
as a result of the generation of an excitatory PSP at the branching of the apical
dendrites in the first (plexiform) layer of the cortex; in this dipole the current flows
between the trunk and the dendritic branches. The dipole moment vector D_D of the
dendritic dipole is directed toward the soma along the dendritic shaft.

Figure 7.1. A cortical pyramidal neuron is depicted with an extracellular current dipole
connecting spatially separated excitatory (open bullet) and inhibitory synapses (filled bullet).
Neural in- and outputs are indicated by the jagged arrows. Dendritic current ID causes dendritic
field potential (DFP).

In studying the external electric field of the brain and interpreting the recorded EEG
signal, the constant (DC) component is generally not taken into account.
As can be seen in Figure 7.2, the EEG background activity of the brain is a very
complex dependence of the potential difference on time and looks like a collection
of random fluctuations of the potential difference. To characterize such chaotic
oscillations ("noise"), parameters known from probability theory are used: the mean
value and the standard deviation from the mean.

Figure 7.2. Fluctuations of the potential difference.

To find σ, an EEG portion is isolated and split into small regular intervals, and at
the end of each interval (t1, t2, ..., tm) a voltage U (U1, U2, ..., Um) is defined. The
standard deviation is calculated by the usual formula:

σ = √[ Σ(j=1..m) (Uj − Ū)² / (m − 1) ],     (7.1)

where Ū is the arithmetic mean of the potential difference and m is the number of U
samples. For EEG recorded from the dura mater, the value of σ for the background
activity is 50-100 μV.
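A small numerical illustration of formula (7.1), using randomly generated samples in place of a real EEG recording:

import random
import math

# Sketch of formula (7.1): sample standard deviation of EEG voltage samples.
# The samples here are random numbers standing in for a real recording.
random.seed(0)
U = [random.gauss(0.0, 50.0) for _ in range(500)]   # hypothetical samples, microvolts

U_mean = sum(U) / len(U)                            # arithmetic mean
sigma = math.sqrt(sum((u - U_mean) ** 2 for u in U) / (len(U) - 1))

print(f"mean = {U_mean:.1f} uV, sigma = {sigma:.1f} uV")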
A similar characteristic (the standard deviation) is used to describe the graded activity of
individual neurons (σN). In the study of rhythmic EEG, characterized by a certain
amplitude and frequency of the change of the potential difference, the amplitude of
these oscillations can serve as an indicator of the magnitude of the EEG.
At present, to model the electrical activity of the cerebral cortex, EEG research
examines the behavior of the aggregate of the current electric dipoles of single
neurons. Several models have been proposed to explain particular EEG features.
Consider the model of M.N. Zhadin, which takes the EEG recorded from the dura
mater as an example, in order to identify common patterns in the generation of the
total external electric field of the cortex.
The main assumptions of the model are: 1) the external field of the brain at a given
registration point is the integrated field generated by the current dipoles of cortical
neurons; 2) the genesis of the EEG is determined by the graded electrical activity of
pyramidal neurons; 3) the activity of different pyramidal neurons is to some extent
interrelated; 4) the neurons are distributed evenly in the cortex and their dipole
moments are perpendicular to the cortical surface; 5) the flat cortex has a finite
thickness, while its other dimensions are infinite; on the skull side the brain is
bounded by an infinite planar non-conducting medium.
The degree of coupling between the electrical activity of different pyramidal neurons
is essential for the creation of the EEG. If the graded change in membrane potential
over time occurred in each neuron completely independently of the other cells, the
variable component of their total external electric field would be small, because an
increase in the potential due to increased activity of one neuron would to a large
extent be compensated by the chaotic activity of other neurons.
The relatively high value of the EEG registered in experiments implies that
there is a positive association between the activities of the pyramidal neurons. This is
characterized quantitatively by a correlation coefficient. When there is no link
between the activities, this coefficient is 0; if the changes in membrane
potential (dipole moments) of the cells occur totally synchronously, it
is equal to one. In reality, an intermediate value shows that the activity of neurons is only
partially coordinated.
If the vectors of the dipole moments of the elementary current sources were
oriented chaotically in the cortex, the integrated field of the set of neuron dipoles
would be very weak even at high levels of synchronization, because of the strong
mutual compensation of the fields of individual neurons.
In reality, cytological evidence shows that the dendritic trunks of pyramidal cells in
the neocortex (which make up 75 percent of the cortical cells) are oriented
nearly identically, perpendicular to the cortex's surface. Fields created by the dipoles
of identically oriented cells are not compensated but add up. Calculations based on
these assumptions showed that for the EEG recorded from the dura mater,
σ ≈ k·h·ρ·σN·√(Rk),     (7.2)

where k is a coefficient numerically equal to the average density of pyramidal neurons in
the cortex; h is the thickness of the cortex; ρ is the resistivity of the cortex; σN is the
average standard deviation of the change in time of the dipole moment of the neurons;
Rk is the average pairwise correlation coefficient of neuronal activity. The parameters
in formula (7.2) can be found from independent experiments. For the rabbit,
h ≈ 0.0017 m, ρ ≈ 3 Ohm·m, k ≈ 4·10¹³. According to the calculations of M. Gutman,
σN ≈ 6·10⁻¹⁵ A·m. If we take Rk = 0.003, the calculated standard deviation of the EEG
is about 70 μV, i.e. very close to the actually observed values.
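A quick numerical check of formula (7.2), as reconstructed above, using the rabbit parameters quoted in the text:

import math

# Numerical check of formula (7.2): sigma = k * h * rho * sigma_N * sqrt(R_k),
# using the rabbit cortex parameters quoted in the text.
k = 4e13          # average density of pyramidal neurons, 1/m^3
h = 0.0017        # cortex thickness, m
rho = 3.0         # resistivity of the cortex, Ohm*m
sigma_N = 6e-15   # std. deviation of the neuronal dipole moment, A*m
R_k = 0.003       # average pairwise correlation coefficient

sigma = k * h * rho * sigma_N * math.sqrt(R_k)
print(f"sigma ~ {sigma * 1e6:.0f} microvolts")   # ~ 67 uV, close to the quoted ~70 uV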
As a result, the external electric field of the brain recorded as the EEG can be
considered the result of the addition of the fields of the pyramidal neurons of the
neocortex. The critical conditions for the formation of the EEG are the identical
dipole orientation of the neurons and a positive correlation of their graded electrical
activity; even extremely modest levels of pairwise correlation are sufficient.
The recorded EEG is compared with existing normative data to obtain information on
the state of the brain. By recording activity over distinct areas of the brain, local
abnormalities suggesting damage to the tissue in a specific region can be identified.
Based on data on how the electroencephalogram changes during a child's growth and
development, the rate of brain maturation of a particular young child can be estimated.
Subjects are instructed to close their eyes and relax in order to obtain basic brain
patterns. Brain patterns form sinusoidal wave forms. They are usually
measured from peak to peak and have an amplitude of 0.5 to 100 μV, which is
around 100 times lower than ECG signals. Five basic groups of brain waves
have been identified.
Table 7.1
Rhythm | Frequency, Hz | State
gamma  | >32           | Heightened perception, learning, problem-solving tasks
beta   | 14-30         | Alert, normal alert consciousness, active thinking
alpha  | 8-13          | Physically and mentally relaxed
theta  | 4-7           | Creativity, insight, dreams, reduced consciousness. It is perfectly normal in children up to 13 years and in sleep, but abnormal in awake adults.
delta  | 0.5-3.5       | Normal as the dominant rhythm in infants up to one year and in stages 3 and 4 of sleep
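As a simple illustration, a Python sketch that maps a dominant frequency to the band names of Table 7.1 (the table leaves small gaps between adjacent bands, so the cutoffs below are a slight simplification):

# Sketch: classify a dominant EEG frequency (Hz) into the bands of Table 7.1.
# The table leaves small gaps between bands; the cutoffs here are simplified.
def eeg_band(freq_hz: float) -> str:
    if freq_hz < 0.5:
        return "below delta range"
    if freq_hz <= 3.5:
        return "delta"
    if freq_hz <= 7:
        return "theta"
    if freq_hz <= 13:
        return "alpha"
    if freq_hz <= 30:
        return "beta"
    return "gamma"

for f in (2, 6, 10, 20, 40):
    print(f, "Hz ->", eeg_band(f))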

The alpha rhythm is the most well-known and studied rhythm of the human
brain. Alpha is frequently more prominent in the posterior and occipital areas, with
an amplitude of around 50 μV (peak-to-peak). Alpha activity is induced by closing the
eyes and relaxing, and it is inhibited by opening the eyes or by alerting by any
means (thinking, calculating). The majority of people are extremely sensitive to
the phenomenon of "eye closing": their wave pattern shifts from beta to alpha waves
when they close their eyes.
EEG recording techniques.
Encephalographic measurements employ a recording system consisting of:
- electrodes with conductive media;
- amplifiers with filters;
- A/D converter;
- recording device.
Electrodes read the signal from the head surface, amplifiers bring the microvolt
signals into a range where they can be accurately digitized, the converter
converts the signals from analog to digital, and the personal computer (or other
suitable device) saves and displays the data obtained. A set of the equipment is
shown in Figure 7.3.

Figure 7.3. Equipment for EEG recording.

EEG (electroencephalogram) recordings of neuronal activity in the brain
allow for the determination of potential changes over time in a basic electric circuit
conducting between the signal (active) electrode and the reference electrode. The
addition of a third electrode, referred to as the ground electrode, is required to
obtain a differential voltage by subtracting the identical voltages appearing at the
active and reference sites. One active electrode, one (or two carefully coupled
together) reference electrodes, and one ground electrode are the bare minimum for
mono-channel EEG measurement. Up to 128 or 256 active electrodes can be used
in multi-channel arrangements.
Recording electrodes. The proper function of the EEG recording electrodes is
essential for obtaining high-quality data for interpretation. There are many distinct
types of electrodes, each with its own set of features. Electrodes can be classified
into the following categories:
- disposable (gel-less, and pre-gelled types)
- reusable disc electrodes (gold, silver, stainless steel or tin)
- headbands and electrode caps
- saline-based electrodes
- needle electrodes

For multichannel montages, electrode caps are preferred, with a number of
electrodes installed on their surface (Figure 7.4).

Ag-AgCl disks, 1 to 3 mm in diameter, with long flexible leads that may be put
into an amplifier, are the most commonly used scalp electrodes. Even very modest
variations in potential can be accurately recorded using AgCl electrodes. Long
recordings are made with needle electrodes implanted invasively under the scalp.

Although skin preparation varies, it is typically recommended that oil be removed
from the surface and dried sections of skin be brushed away. With disposable and
disc electrodes, abrasive paste is used for minor skin abrasion.

Figure 7.4. EEG cap.

With cap systems, skin scraping is done with an abutting needle at the end of
the injection, which can cause irritation, pain, and infection. There is a risk of pain
and bleeding, especially when a person's EEG is monitored frequently and a cap is
applied at the same electrode spots.
When using silver-silver chloride electrodes, cover the gap between the
electrode and the skin with conductive paste to aid in adhesion. There is a small
hole in the cap systems for injecting conductive jelly. To ensure that contact
impedance at the electrode-skin interface is reduced, conductive paste and
conductive jelly are used as media. The International Federation in
Electroencephalography and Clinical Neurophysiology established the 10-20
electrode placement system in 1958 as a standard for electrode placement. The
physical location and designations of electrodes on the scalp were standardized
using this approach. To ensure adequate coverage of all parts of the brain, the head
is split into proportional distances from major skull features (nasion, preauricular
points, inion). The label 10-20 indicates the proportional distances, in percent,
between the ears and the nose at which electrode locations are chosen.

Figure 7.5. 10-20% electrode placement system.

Situational tasks and questions for independent work


1. Determine type of EEG rhythm on a given sketch

2. Show in the figure the directions of the vectors of somatic and dendritic
dipole moments.

3. Fill in the cells:
Electrical activity of neurons consists of two types:
____________ , which occur in ____________ ;
____________ , which occur in ____________ .

Numerical problems:
1. Thickness of cerebral cortex is h=1.7𝝻m, specific resistance of cortex
  2 Оhm* m , coefficient of average density of pyramidal neurons in the cortex is
5 *1013 , mean standard deviation of the dipole moment of neurons is
 N  7 *10 А * m . Calculate the standard deviation of the EEG of the pyramidal
15

neurons when Rk  0,004 .

2. Find the potential of the field produced by a dipole at a point A at a distance
of r = 0.5 m, in the direction at an angle of α = 30° relative to the electric dipole
moment p. The medium is water. The dipole consists of charges q = 2·10⁻⁷ C
separated by a distance of l = 0.5 cm.

VIII
MULTISTAGE AMPLIFIERS IN MEDICINE
For many applications, the performance obtainable from a single-
stage amplifier is often insufficient, hence several stages may be combined
forming a multistage amplifier. These stages are connected in cascade, i.e. output
of the first stage is connected to the input of second stage, whose output becomes
input of third stage, and so on.
A cascade amplifier is any two-port network constructed from a series of
amplifiers, where each amplifier sends its output to the input of the next amplifier
in a daisy chain.
The non-ideal coupling between stages due to loading complicates estimating the
gain of cascaded stages. Consider, for example, two common-emitter stages connected
in cascade: the total gain is not the product of the individual (separately measured)
stage gains, since the second stage's input resistance forms a voltage divider with the
first stage's output resistance.
A multistage amplifier's overall gain is the product of the gains of the individual
stages (ignoring potential loading effects):

Gain (A) = A1· A2· A3 · A4·... ·An. (8.1)

If the gain of each amplifier stage is expressed in decibels (dB), the total gain is
the sum of the individual gains:

Gain in dB (A) = A1 + A2 + A3 + A4 + ... An (8.2)

For true amplification, the amplifier should have the desired voltage gain and
current gain, as well as input and output impedances that match the source and
load, respectively. Because of the limitations of transistor/FET parameters, these
primary amplifier requirements are frequently not reached with single stage
amplifiers. In such cases, many amplifier stages are cascaded so that the input and
output stages offer some amplification while the remaining middle stages supply
the majority of the amplification.
When the amplification of a single stage amplifier is insufficient, or when the input or
output impedance is not of the correct magnitude, two or more amplifier stages are
connected in cascade for a specific application.

Multistage amplifier refers to an amplifier that has two or more stages.

Two Stage Cascaded Amplifier

Vi1 is the input of the first stage and Vo2 is the output of the second stage, so
Vo2/Vi1 is the overall voltage gain of the two-stage amplifier:

A = Vo2/Vi1 = Av1 · Av2     (8.3)

n-Stage Cascaded Amplifier

Voltage gain :
The resultant voltage gain of the multistage amplifier is the product of voltage
gains of the various stages.

Av = Avl· Av2· ...· Avn (8.4)

Gain in Decibels
In many cases, comparing two powers on a logarithmic scale is preferable to
comparing them on a linear scale. The decibel (abbreviated dB) is the unit of
measurement for this logarithmic scale. The number of decibels N by which a power
P2 exceeds a power P1 is defined as

N = 10 · log10(P2/P1)     (8.5)

Power ratio is denoted by the decibel, abbreviated as dB. Negative dB values


indicate that the power P2 is less than the reference power P1, while positive dB
values indicate that the power P2 is greater than the reference power P1.
In the case of an amplifier, P1 could represent input power and P2 could represent
output power.
Both can be given as

P1 = Vi²/Ri  and  P2 = Vo²/Ro     (8.6)
where Ri and Ro are the input and output impedances of the amplifier respectively.
If the input and output impedances of the amplifier are equal i.e. Ri = Ro= R, then

N = 10·log10(Vo²/Vi²) = 10·2·log10(Vo/Vi) = 20·log10(Vo/Vi)     (8.7)

Gain of Multistage Amplifier in dB

If the gain of each stage is known in dB, the gain of a multistage amplifier can be
easily calculated, as shown below:

20 log10 Av = 20 log10 Avl + 20 log10Av2 +… + 20 log10 Avn (8.8)

Thus, the overall voltage gain in dB of a multistage amplifier is the sum of the
individual stage voltage gains. It can be given as follows:

AvdB = AvldB + Av2dB + ... + AvndB (8.9)
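A short Python sketch of formulas (8.4) and (8.9), showing that multiplying linear stage gains and adding their dB values give the same overall gain (the stage gains are hypothetical values):

import math

# Sketch of formulas (8.4) and (8.9): overall gain of cascaded stages,
# ignoring loading effects, as a product of linear gains or a sum of dB gains.
stage_gains = [10.0, 20.0, 5.0]                    # hypothetical voltage gains per stage

A_total = math.prod(stage_gains)                   # A = A1 * A2 * ... * An
gains_db = [20 * math.log10(a) for a in stage_gains]
A_total_db = sum(gains_db)                         # dB gains simply add

print(A_total, round(A_total_db, 1))               # 1000.0 and 60.0 dB
print(round(20 * math.log10(A_total), 1))          # same 60.0 dB, cross-check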

Benefits of Gain Representation in Decibels

The logarithmic scale is preferred over the linear scale for representing
voltage and power gains for the following reasons:
- In multistage amplifiers, it allows for the addition of individual gains from
each stage to calculate overall gain;
- It enables both very small and very large quantities on a linear scale to be
represented with conveniently small figures.

The brain-computer interface (BCI) is a communication system that is independent
of the brain's usual peripheral nerve output channels. EEG activity of individual
cortical neurons obtained from implanted electrodes is used in BCI. BCI was
developed to record the subject's EEG data and process them for later use.
EEG source, amplifier, filters, A/D converter, and display make up a
comprehensive EEG signal acquisition system.
Because EEG signals have very low voltage levels, ranging from 1 μV to 100 mV,
they must be amplified in order to be compatible with other devices such as
displays, recorders, or A/C converters for computerized equipment. To boost signal
strength to an acceptable level as an input to recording devices, a specific high gain
amplifier (gain of 10,000 – 1,000,000) is required.
In addition to EEG, other important biopotentials in clinical tests such as
electrocardiogram (ECG), electromyogram (EMG), and electrooculogram (EOG)
are produced in the low voltage range. As a result, a variable gain amplifier can be
easily used to acquire various biopotential signals.
Figure 8.1. Overall block diagram of the BCI system.

Medical Electrodes
Medical devices use electrodes to convert the ionic currents in the body into electrical
current. These currents can be amplified and have been used to
diagnose a variety of disorders. A lead, a metal contact, and electrode conducting paste make
up medical electrodes. Internal ionic currents are quantified using medical
electrodes, which leads to the diagnosis of a variety of ophthalmic, neurological,
cardiac, and muscular problems. The device operates by establishing an electrical
connection between the monitoring apparatus and the patient.

Figure 8.2. Medical electrodes.

Reusable disc electrodes, disposable disc electrodes, headband electrodes, and
saline-based electrodes are all types of electrodes used in medical devices. ECG
Electrodes, Blood Gas Electrodes, EEG/EMG/ENG Electrodes, and Defibrillator
Electrodes are all examples of electrodes that can be categorised based on their
applications. Fetal Scalp Electrodes, Electrosurgical Electrodes, TENS Electrodes,
Pacemaker Electrodes, pH Electrodes, Nasopharyngeal Electrodes, and Ion-
selective Electrodes are some of the sub-categories.

IX
THE PRINCIPLES OF CONVERTING BIOLOGICAL SIGNALS
INTO ELECTRICAL SIGNALS. THERMOREGULATION.
CALIBRATION OF TEMPERATURE SENSORS.

Sensors are electronic devices that can convert non-electrical signals to
electrical signals. The biological sensor is an essential type of sensor. A sensor,
sometimes known as a transducer, is a device that can respond to a measured
object and convert it into detectable signals. A sensor is often made up of a
sensitive component that responds directly to the thing being measured, a
conversion component, and the related electronic circuits. Thanks to the development
of modern electronic, micro-electronic, and communication technology, electrical
signals are the most convenient form for processing, transferring, displaying, and
recording a variety of valuable signals.

Figure 9.1. Principle of sensors.

Sensors frequently offer data about a system's physical, chemical, or
biological condition. Measurement is defined as an operation aimed at obtaining
the quantity's measured value. Sensor detection technology, thus, is one that uses
sensors to convert measured quantities into physical quantities that are easy to
communicate and process, and then goes on to transform, communicate, display,
record, and analyze those physical quantities.
Biomedical sensors can be categorized either by the quantity they detect or by
their operating principle. Based on their operating principles, sensors are divided into
three categories: physical sensors, chemical sensors, and biological sensors.
Sensors developed according to physical nature and effect are referred to as
physical sensors. Metal resistance strain sensors, semiconductor piezoresistive
sensors, piezoelectric sensors, photoelectric sensors, and other sensors fall within
this category.
Chemical sensors: These are sensors that are produced based on the chemical
nature and effect. Various ion sensitive electrodes, ion sensitive tubes, humidity
sensors, and other sensors use ion-selective sensitive film to convert non-electricity
such as a chemical component, content, density, and so on to relevant electric
quantities.

Biological sensors, also known as biosensors, are molecular recognition systems
that use biological active materials. This sort of sensor usually employs an enzyme
to catalyze a biological process or examines the type and content of large molecule
organic compounds using a mixture of chemicals. Enzyme sensors, microbe
sensors, immunity sensors, tissue sensors, DNA sensors, and other sensors were
developed in the second part of the twentieth century.
Classified by detection type, there are displacement sensors, flow sensors,
temperature sensors, speed sensors, pressure sensors, etc. As for pressure sensors,
there are metal strain foil pressure sensors, semiconductor pressure sensors,
capacity pressure sensors and other sensors that can detect pressure. As for
temperature sensors, it includes thermal resistance sensors, thermocouple sensors,
PN junction temperature sensors and other sensors that can detect temperature.
Electronic equipment is used to monitor non-electrical factors such as temperature,
heart sound, and blood pressure from the human body. The devices that transform
biological parameters to electrical impulses are known as transducers.
Transduction is the process of conversion. Transducers are devices that transfer
one kind of energy into another.
Transducers are of two types.
1. Active transducers are devices that transfer one form of energy into another
without the use of external power. A good example is a photovoltaic cell. It is a
device that transforms light energy to electrical energy.

Types of Active Transducers


1. Magnetic Induction
2. Piezoelectric
3. Photovoltaic
4. Thermoelectric

Magnetic Induction Type Transducers. The magnetic flux through an electrical
conductor changes when it moves in a magnetic field. This results in a voltage that
is proportional to the rate of flux change. The induced EMF is given as

ε = −B · l · υ     (9.1)

where B is the magnetic induction, l is the length of the conductor, and υ is the
velocity of the moving conductor. The negative sign reflects Lenz's law: the induced
EMF opposes the change in magnetic flux that produces it.

It's also true that there's an inverse magnetic effect. When current flows via an
electrical conductor in a magnetic field, the conductor is subjected to mechanical
force F

𝑭=𝑩·𝑰·𝒍 (9.2)
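A minimal numerical sketch of formulas (9.1) and (9.2); all values are hypothetical and chosen only to illustrate the orders of magnitude:

# Sketch of formulas (9.1) and (9.2) with hypothetical values:
# induced EMF in a conductor moving through a magnetic field, and the
# force on a current-carrying conductor in that field.
B = 0.2    # magnetic induction, T
l = 0.05   # conductor length, m
v = 1.5    # conductor velocity, m/s
I = 0.01   # current, A

emf = B * l * v          # magnitude of the induced EMF, V  (eps = -B*l*v)
force = B * I * l        # force on the conductor, N        (F = B*I*l)

print(f"EMF = {emf*1e3:.1f} mV, F = {force*1e3:.2f} mN")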

Applications of Magnetic Induction Type Transducers

 Electromagnetic flow meter


 Heart sound Microphones
 Indicating instruments
 Pen motor in biomedical recorders

Piezoelectric Transducers. Charge separation happens in crystals when


compression or tension is applied to them. The Piezoelectric Effect is created as a
result of this electrical voltage. Piezoelectric transducers transform pressure or
displacement into an electrical signal. A few piezoelectric transducer materials are
barium titanate, Rochelle salt, and lithium niobate.

Applications of Piezoelectric Transducers


 Piezoelectric Transducer acts as a pulse sensor to measure the pulse rate of a
human.
Photovoltaic Transducers. When light or any other wavelength of radiation
strikes a metal or semiconductor surface, electrons are ejected. The Photoelectric
Effect is what we're talking about here. Photoelectric Transducers can be
photoemissive, photoconductive, or photovoltaic. Photovoltaic is an active
transducer that generates an electrical voltage proportional to the amount of
radiation it receives.

Applications of Photovoltaic Transducers

• Silicon photovoltaic cells are used as a pulse sensor in photoelectric plethysmography.
• Determination of the concentration of sodium and potassium ions in a sample using light absorption techniques.

Thermoelectric Transducers. The Seebeck Effect is used to power these


transducers. The Seebeck effect asserts that when two thermocouple junctions are
at different temperatures, a potential voltage is generated. The voltage created is
proportional to the temperature difference between the thermocouple's two
junctions.

Applications of Thermoelectric Transducers


 To measure physiological temperature in remote sensing circuits and
biotelemetry circuits.
 In the doctor’s cold box to store plasma, antibiotics, etc.

Passive Transducers: With the help of an external power source, it turns one kind
of energy into another. It is based on the regulation of DC voltage or an AC carrier
signal. Example: Strain Gauge, Load cell.
Types of Passive Transducers
1. Resistive -R
2. Inductive -L
3. Capacitive -C

Resistive Transducers
Resistive type passive transducers include strain gauges, photoresistor, photodiode,
phototransistor, and thermistor. They all work on the same principle, which
stipulates that the measured parameter causes a slight change in the transducer's
resistance. A Wheatstone bridge is used to assess resistance change.

Applications of Resistive Transducers


 Finger-mounted strain gauge measures small changes in blood volume
flowing via the finger.
 To measure intraarterial and intravenous pressure in the body.
 LDR or photoresistor measures the pulsatile blood volume changes.

Capacitive Transducers
There are two conducting surfaces on a capacitor. A dielectric medium functions as
a barrier between two surfaces, separating them. Changes in the area of conducting
plates, the thickness of the dielectric medium, and the distance between the plates
are all measured using capacitive transducers.

Applications of Capacitive Transducers


 Differential capacitive transducers measure blood pressure.

Inductive Transducers
The inductive transducer operates on the basis of the change in reluctance and the
number of turns in the coil. A Linear Variable Differential Transformer (LVDT) is
a type of inductive transducer that functions as a pressure sensor.

Application of Inductive Transducers


 To measure tremor in patients suffering from Parkinson’s disease.

Bioelectric potentials are a useful diagnostic tool for a variety of disorders. As a
result, it is critical first to register these potentials appropriately, and then to be able
to extract the necessary medical data from the gathered measurements.
The sensors can be classified into two types based on their functioning principles:
generator and parametric (modulation sensors).
The input value is directly converted into an electrical signal by generator sensors.

The input value of a parametric sensor is transformed into a change in any
electrical parameter of the sensor (R, L, or C).

Sensors requirements:
- Unambiguous dependence between input and output values;
- Stability of characteristics over time;
- high sensitivity;
- small size and mass;
- work during various exploitation environments;
- various montage variants.

In medical electronics, a sensor converts a non-electric value that is measured
or regulated into an electric value that is then observed. A sensor is characterized by its
conversion function: the functional dependence of the output value Y on the input
value X, given analytically as y = f(x) or as a graph. The sensitivity of a sensor
shows to what degree the output value reacts to changes in the input signal:

Z = ΔY/ΔX     (9.3)

Typical measuring units of sensor sensitivity are Ohm/mm or mV/K.
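A short numerical illustration of formula (9.3), estimating the sensitivity from two hypothetical calibration points:

# Sketch of formula (9.3): sensitivity of a sensor as Z = dY / dX,
# estimated from two hypothetical calibration points.
x1, y1 = 20.0, 4.0     # e.g. temperature in deg C, output in mV (hypothetical)
x2, y2 = 30.0, 9.0

Z = (y2 - y1) / (x2 - x1)   # sensitivity, here in mV per deg C
print(f"Z = {Z:.2f} mV/degC")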


Electrodes and sensors are the two types of pickup devices used in medical
electronics. Electrodes are specialized conductors that connect the measuring
circuit to the biological system.
Certain requirements are imposed on electrodes:
 they should be fixed quickly;
 they should be easily removable;
 they should have high stability of electrical parameters;
 they should be strong;
 they should not introduce interference;
 they should not irritate biological tissue.

Equivalent circuitry of a circuit including a biological system and electrodes.

Figure 9.2. Equivalent electrical circuit for measuring the biopotential from the skin surface.

Electrodes for the bioelectric signal measurements are divided into the following
groups:
1. For short-term use in the offices of functional diagnostics, for example, for one-
time electrocardiogram recording;
2. For long-term use, for example, with constant monitoring of seriously ill
patients in conditions of intensive care therapy;
3. For use on mobile subjects, for example, in sports or space medicine;
4. For emergency use, for example in ambulance conditions.

Two issues arise when employing electrodes in electrophysiological research.
One of them is the generation of a galvanic EMF when the electrodes come into
contact with biological tissue. The other is electrolytic polarization of the electrodes,
which occurs when current passes through them and results in the release of reaction
products on the electrodes; as a result, a counter-EMF opposing the main one appears.
In both cases, the resulting EMF distorts the useful bioelectric signal picked up by
the electrodes.
The change in current with time shows that biological systems have the ability to
become polarized.
The resistivity of a metallic conductor is increased when impurities are present.
Because the chaotic motion of a substance's particles grows more intense as it heats
up, the resistance to the directed motion of the current carriers increases. Over a wide
temperature range, the increase in metal resistivity is directly proportional to the
increase in temperature. If the resistivity at 0 °C is denoted by ρ0, and at temperature
t by ρt, then

ρt = ρ0 + α·ρ0·(t − 0),  or  ρt = ρ0(1 + αt)     (9.4)

The quantity characterizing how the resistivity of a given substance changes on
heating is called the temperature coefficient of resistance. It is measured by a number
indicating by what fraction of its value at 0 °C the resistivity changes when the
substance is heated by 1 °C:

α = (ρt − ρ0)/(ρ0 · t);   [α] = [°C⁻¹]     (9.5)
The “alpha” (α) constant is known as the temperature coefficient of resistance
and symbolizes the resistance change factor per degree of temperature change. Just
as all materials have a certain specific resistance (at 20°C), they also change
resistance according to temperature by certain amounts. This coefficient is positive
for pure metals, indicating that resistance increases as temperature rises. This
coefficient is negative for the elements carbon, silicon, and germanium, indicating
that resistance reduces as temperature rises. The temperature coefficient of
resistance for some metal alloys is very close to zero, implying that resistance
changes very little with temperature changes. The dependence of semiconductor
resistivity on temperature is

ρ = ρ0 · e^(ΔE/(2kT))     (9.6)

where ΔE is the width of the forbidden band (band gap); ρ0 is a proportionality
coefficient having the dimension of resistivity; k is the Boltzmann constant.
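A small sketch contrasting the two temperature dependences, formula (9.4) for a metal and formula (9.6) for a semiconductor; the numerical values are illustrative only (the band gap is roughly of the order of that of silicon):

import math

# Sketch comparing the temperature dependence of resistivity for a metal,
# rho_t = rho_0 * (1 + alpha*t), and for a semiconductor,
# rho = rho_0 * exp(dE / (2*k*T)). All numbers are illustrative, not material data.
k_B = 1.38e-23            # Boltzmann constant, J/K
alpha = 4.3e-3            # metal temperature coefficient, 1/degC (illustrative)
dE = 1.1 * 1.6e-19        # band gap, J (roughly the order of silicon)

for t_C in (0, 25, 50, 100):
    T = t_C + 273.15
    metal = 1.0 * (1 + alpha * t_C)              # relative to rho_0 at 0 degC: grows with t
    semi = math.exp(dE / (2 * k_B * T))          # exp(dE/2kT): falls as T rises
    print(f"{t_C:>3} degC  metal x{metal:.2f}   semiconductor x{semi:.3e}")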
The orderly movement of electric charges is referred to as electric current.
The electric current is called a conduction current when there is an orderly
movement of charges in a conductor. The quantitative characteristic of the current
is a number equal to the ratio of the charge transported through the conductor's
cross-section over a time interval to that time interval: i = Δq/Δt. This value is called
the current strength (amperage). If the current and its direction do not change with
time, it is called direct current. For DC, I = q/t.

The current density in an electrolyte is

j = qn(b₊ + b₋)E     (9.7)

where b₊ and b₋ are the mobilities of the ions of the corresponding signs, and E is
the electric field strength.
For the electric current to appear in a closed conductive circuit, it is necessary that
external forces act in the entire circuit or in its individual sections, i.e. non-
electrical forces.
Thermistor graduation
The property of metals and semiconductors is used to change the value of
resistance with temperature when building resistance thermometers. Resistance
increases with increasing temperature in metals, but decreases in semiconductors.
The conductor resistance can be calculated using the following formula:

Rt = R0(1 + αt)     (9.8)

where α is the temperature coefficient of resistance, which depends on the nature of
the substance and corresponds to the relative change in resistance for a temperature
change of 1 °C.
At the moment, semiconductor resistances, known as thermistors, have been
developed, and their resistance changes with temperature 10-20 times faster than
metals, and their resistance decreases with increasing temperature. Thermistors can
be very small in size due to the high resistivity of semiconductor materials.
Thermistors are used as electrothermometers in medicine. Because of their
extremely low thermal inertia, they allow temperature to be measured in a very
short period of time. Thermistors are shaped like small beads and are embedded in
the thin end of a plastic case resembling a pen, from which leads are brought out for
connection to a Wheatstone bridge circuit (Fig. 9.3).
A Wheatstone bridge is a type of electrical circuit that is used to measure an
unknown electrical resistance by balancing two legs of a bridge circuit, one of
which contains the unknown component.

Figure 9.3. Circuit for measuring an unknown resistance by the Wheatstone bridge balance method (left: Wheatstone bridge; right: thermistor).

In this scheme the thermistor serves as the temperature sensor and is located in
one of the bridge's arms. The resistances of the other arms are selected so that
the bridge is balanced at a specific initial temperature. The thermistor, which
has extremely low thermal inertia, instantly takes the temperature of the test
medium or object when it comes into contact with it. A change in the thermistor's
temperature causes a change in its resistance, so the balance of the bridge is
disturbed and the galvanometer needle is deflected. The galvanometer's scale
can be calibrated directly in degrees Celsius.

A thermistor must be calibrated before it can be used. Calibration consists in
determining the relationship between the thermistor resistance and temperature,
using the Wheatstone bridge circuit. In the schematic (Fig. 9.3), one bridge arm is the
thermistor, another is a resistance box, and the remaining two arms are constant
resistances R1 and R3. The bridge diagonal includes a galvanometer. When
measuring, the resistances are adjusted so that no current flows through the
galvanometer. This state of the measuring circuit is referred to as balance, or
equilibrium.
Thermocouple graduation
A thermocouple is an electrical device made up of two dissimilar electrical
conductors that come together to form an electrical junction. As a result of the
thermoelectric effect, a thermocouple generates a temperature-dependent voltage,
which can be interpreted to measure temperature. Thermocouples are a type of
temperature sensor that is widely used. Thermocouples are used to measure
temperatures ranging from about –270 to +1500 degrees Celsius.

Figure 9.4. Working principle of the thermocouple.

A thermocouple, also known as a thermal junction, thermoelectric thermometer, or
thermel, is a temperature-measuring device made up of two wires of different
metals that are connected at each end. The temperature is measured at one
junction, while the other is kept at a constant lower temperature. A measuring
instrument is included in the circuit. The temperature difference causes an
electromotive force (the Seebeck effect) to appear that is roughly
proportional to the temperature difference between the two junctions. Temperature
can be determined using standard tables or by calibrating the measuring instrument
to read temperature directly.

The thermoelectric effect is the use of a thermocouple to convert temperature
variations to electric voltage and vice versa. When the temperature on both sides of
a thermoelectric device differs, a voltage is generated. Heat is transmitted from one
side to the other when a voltage is applied to it, resulting in a temperature
difference. An imposed temperature gradient causes charge carriers in a material to
diffuse from the hot side to the cold side at the atomic level.
This phenomenon can be utilized to create power, measure temperature, and alter
item temperatures. The applied voltage affects the direction of heating and cooling,
hence thermoelectric devices can be employed as temperature controllers.

The Seebeck effect occurs when an electric potential builds up as a result of a
temperature gradient. A thermocouple is a device that monitors the potential
difference between two dissimilar materials at their hot and cold ends. The
temperature differential between the hot and cold ends determines the potential
difference. The effect is named after the Baltic German physicist Thomas Johann
Seebeck, who independently found it in 1821; it was first discovered in 1794 by
the Italian scientist Alessandro Volta. A closed loop created by two different metals
linked in two places, with an applied temperature difference between the joints,
was discovered to deflect a compass needle. This was due to the fact that the
electron energy levels in different metals moved differently, resulting in a potential
difference between the junctions, which caused an electrical current to flow
through the wires and, as a result, a magnetic field to form around the wires.
Because Seebeck was unaware that an electric current was involved, he coined the
term "thermomagnetic effect." Hans Christian Oersted, a Danish physicist,
corrected the error and coined the term "thermoelectricity."

The Seebeck effect is a typical electromotive force (EMF) that produces
quantifiable currents or voltages in the same manner that other EMFs do. The local
current density is given by

J = σ(−∇V + E_emf)     (9.9)

where V is the local voltage and σ is the local conductivity. In general, the
Seebeck effect is described locally by the creation of an electromotive field

E_emf = −S · ∇T     (9.10)

where S is the Seebeck coefficient (also known as the thermopower), a property of the
local material, and ∇T is the temperature gradient.
Seebeck coefficients depend on the temperature and composition of the
material from which the conductor is made. For pure materials at room
temperature, the Seebeck coefficient can range from -100 microvolts per Kelvin to
+1000 microvolts per Kelvin. Heat is evolved at one junction and absorbed at the
other junction when an electric current is carried through a thermocouple circuit.
The Peltier Effect is the name for this phenomenon.
The Peltier effect is defined as the presence of heating or cooling at an
electrified junction of two distinct conductors. It was discovered in 1834 by French
physicist Jean Charles Athanase Peltier.
To use a thermocouple to measure temperatures, it must first be calibrated,
that is, the relationship between the EMF present in the thermocouple circuit (or
the corresponding deviations of the galvanometer) and the temperature difference
between the heated junction and the junction of a constant temperature must be
established empirically. Galvanometers (millivoltmeters) or potentiometers are
used to measure thermo-electromotive force.
Calibration is done by measuring the galvanometer's deviations or the
thermo-EMF values at various temperatures of the heated junction. The calibration
results in a graph, with the temperatures of the heated junction plotted on the
horizontal axis and galvanometer deviations or thermo-EMF values plotted on the
vertical axis.
Within a small range of temperature changes, the thermo-EMF E is proportional to
the temperature difference of the junctions, t2 − t1:

E = K(t2 − t1)     (9.11)

As a result, within a limited temperature range the graph of the thermocouple EMF
against the temperature of the heated junction is a straight line. Over a wide range of
measured temperatures, most thermocouples deviate from this proportionality.
From the results of thermocouple calibration it is possible to calculate the
sensitivity (constant) of the thermocouple K, which depends on the nature
of the substances making up the thermocouple and corresponds to the thermo-EMF
produced when the temperature of the heated junction changes by 1 °C:

K = C·(RT + Rg + Radd)·(n2 − n1)/(t2 − t1)     (9.12)

where C is the division value of the galvanometer, so that the current measured by
the galvanometer is I = C·n; RT is the resistance of the thermocouple; Rg is the
resistance of the winding of the galvanometer; Radd is the additional resistance; and
n2 − n1 is the change in the galvanometer reading for the junction temperature
change t2 − t1.

Laboratory work order

Thermistor and thermocouple calibration


Task 1:
1. Assemble the scheme of installation.

2. Note the initial temperature of the water t0 in the vessel that contains the thermal
resistance and the heater, and balance the bridge.
3. Write the value of the thermal resistance Rt into Table 1.
4. While heating the water to boiling, measure the resistance of the thermistor
every 10 °C.
5. Write the obtained data into Table 1.

Table 1.
t, °C  |      |      |      |      |      |      |
R, Ohm |      |      |      |      |      |      |

6. Determine the temperature coefficient of resistance by the formula:

α = (Rt − R0)/(R0·t)

(a data-processing sketch is given after this task).
7. Draw the graph of the dependence of the conductor's resistance on temperature,
R = f(t, °C).
8. Make a conclusion.
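A possible way to process the Task 1 data is a least-squares straight-line fit of R(t) = R0(1 + αt); the sketch below uses invented resistance values purely for illustration:

# Sketch: estimating the temperature coefficient alpha from Task 1 data by a
# least-squares straight-line fit of R(t) = R0 * (1 + alpha*t).
# The (t, R) pairs below are invented for illustration only.
t = [20, 30, 40, 50, 60, 70, 80]                         # deg C
R = [135.0, 127.5, 120.0, 112.5, 105.0, 97.5, 90.0]      # Ohm

n = len(t)
t_mean = sum(t) / n
R_mean = sum(R) / n
slope = sum((ti - t_mean) * (Ri - R_mean) for ti, Ri in zip(t, R)) \
        / sum((ti - t_mean) ** 2 for ti in t)
R0 = R_mean - slope * t_mean              # extrapolated resistance at 0 deg C
alpha = slope / R0                        # temperature coefficient, 1/degC

# A negative alpha means the resistance falls with temperature (thermistor-like
# behaviour); a positive alpha is typical of a metallic resistance thermometer.
print(f"R0 = {R0:.1f} Ohm, alpha = {alpha:.2e} 1/degC")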

Task 2:
1. Immerse the ends of the thermocouple in the vessels with water and measure the
temperature in both vessels.
2. While heating the water to boiling, take a galvanometer reading every 10 °C and
write the number of divisions by which the galvanometer needle is deflected into Table 2.

Table 2.
t, °C |      |      |      |      |      |
n     |      |      |      |      |      |
I     |      |      |      |      |      |

3. Determine the sensitivity of the thermocouple by the formula:

K = C·(RT + Rg + Radd)·(n2 − n1)/(t2 − t1)

(a computational sketch is given after this task).
4. Draw the graph of the dependence n = f(t, °C).
5. Make a conclusion.
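A minimal sketch of the sensitivity calculation in step 3 of Task 2, with all numbers invented for illustration:

# Sketch of the thermocouple sensitivity formula used in Task 2:
# K = C * (R_T + R_g + R_add) * (n2 - n1) / (t2 - t1).
# All numbers below are invented for illustration only.
C = 1e-6        # galvanometer division value, A per division
R_T = 5.0       # thermocouple resistance, Ohm
R_g = 20.0      # galvanometer winding resistance, Ohm
R_add = 0.0     # additional resistance, Ohm

n1, n2 = 4, 24          # galvanometer readings, divisions
t1, t2 = 30.0, 40.0     # junction temperatures, deg C

K = C * (R_T + R_g + R_add) * (n2 - n1) / (t2 - t1)
print(f"K = {K*1e6:.1f} uV/degC")    # thermo-EMF per degree of junction heating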

Regulation of Body Temperature

Warm-blooded creatures, such as humans, must keep their body
temperatures relatively constant. A person's typical internal body temperature,
for example, is around 37 °C. A one- or two-degree variation in
either direction could indicate a problem. The protein structures are irreparably
destroyed if the temperature-regulating mechanisms fail and the body
temperature rises to 44 or 45 °C. Heart stoppage occurs when the body
temperature drops below 28 °C.

The temperature of the body is detected by specialized nerve centers in the
brain and by receptors on the body's surface. The body's numerous cooling and
heating processes are then triggered in response to the temperature. Muscles
have a maximum efficiency of 20% when doing external work. As a result, at
least 80% of the energy expended in performing a physical activity is
transformed into heat within the body. Furthermore, all of the energy used to
maintain basic metabolic activities is eventually transformed into heat. The body
temperature would swiftly rise to a dangerous level if this heat were not
removed. A 70-kg man, for example, may consume 260 Cal/hr during moderate
physical activity. At least 208 Cal of this is turned into heat. If this heat remained
inside the body, the temperature would rise by about 3 degrees Celsius per hour. Two
hours of such effort would result in utter exhaustion. Fortunately, the body has
a number of highly efficient mechanisms for managing heat flow out of the body
and so keeping a constant internal temperature.
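A quick arithmetic check of the quoted figure of about 3 °C per hour, assuming a 70-kg body and an average specific heat of body tissue of roughly 3.5 kJ/(kg·K) (an assumed textbook value used only for this estimate):

# Rough check of the quoted ~3 degC/hour rise if 208 Cal (kcal) of heat per hour
# were retained in a 70 kg body. The specific heat of body tissue (~3.5 kJ/kg/K)
# is an assumed, typical textbook value.
heat_per_hour_kJ = 208 * 4.186        # 208 kcal expressed in kJ
mass_kg = 70.0
c_body = 3.5                          # kJ/(kg*K), assumed average specific heat

dT_per_hour = heat_per_hour_kJ / (mass_kg * c_body)
print(f"temperature rise ~ {dT_per_hour:.1f} degC per hour")   # ~3.6, same order as quoted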

The majority of the heat produced by the body is created deep within the
body, away from the surface. This heat must first be delivered to the skin in
order to be removed. There must be a temperature differential between the two
places for heat to flow from one to the other. As a result, the skin's temperature
must be lower than the internal body temperature. In a warm environment, the
temperature of the human skin is about 35 °C. In a cold environment, the
temperature of some parts of the skin may drop to 27 °C.

The circulatory system transports the majority of the heat from the inside of
the body. Conduction transports heat from an inner cell to the bloodstream.
Because the distances between the capillaries and the heat-producing cells are
tiny, heat transfer by conduction is particularly fast in this scenario. The
circulatory system transports the warm blood close to the skin's surface.
Conduction then transfers the heat to the exterior surface. The circulatory
system not only transports heat from the inside of the body, but it also regulates
the thickness of the body's insulation. When the body's heat escapes too
quickly, the capillaries near the surface constrict, and blood flow to the surface
is dramatically reduced. This treatment creates a heat insulating barrier around
the inner body core because tissue without blood conducts heat poorly.

As previously stated, the temperature of the skin must be lower than the internal
body temperature for heat to flow out of the body. As a result, heat must be
evacuated from the skin at a fast enough rate to sustain this condition. Because air
has a limited heat conductivity, the quantity of heat released via conduction is
small if the air around the skin is confined—for example, by clothing. Convection,
radiation, and evaporation are the main methods of cooling the skin's surface.
However, if the skin is in contact with a good thermal conductor, such as a metal,
conduction can remove a significant quantity of heat.
Numerical problems for independent solution:
1. Determine the resistance of an aluminum wire with a length of 20 m and a
cross-sectional area of 2 mm² at 70 °C, given that the resistivity of aluminum at
20 °C is 2.7·10⁻⁸ Ohm·m and the temperature coefficient of resistance
is 4.6·10⁻³ °C⁻¹.

2. What is the smallest change in human body temperature that can be
determined using an iron-constantan thermocouple if the galvanometer's resistance is
R = 20 Ohm, its division value is a = 10⁻⁹ A/div, and the thermocouple's sensitivity
and resistance are equal to γ = 50 μV/K and r = 5 Ohm, respectively?
3. A Pb-Ag thermocouple produces a thermoelectromotive force of 3 μV at a
temperature difference of 1 K between the junctions. Is it possible to confidently
establish an increase in the temperature of a human body from 36.5 to 37.0°C
using such a thermocouple, if the potentiometer can measure the voltage with an
accuracy of μV?

X
THE EFFECT OF ELECTROMAGNETIC FIELDS AND CURRENTS ON
THE BODY.
GALVANIZATION AND ELECTROPHORESIS.
PHYSICAL BASIS OF RHEOGRAPHY.

Differences in voltage cause electric fields to form; the higher the voltage,
the stronger the resulting field. When electric current flows, magnetic fields are
formed; the greater the current, the stronger the magnetic field. Even if there is no
current flowing, an electric field will exist. If current flows, the magnetic field's
strength varies with power consumption, while the electric field's strength remains
constant.
The frequency or wavelength of an electromagnetic field (EMF) is one of the
most important features that defines it. Fields of various frequencies interact with
the human body in various ways. Electromagnetic waves can be thought of as a
series of incredibly regular waves traveling at the speed of light. The term
frequency basically refers to the number of oscillations or cycles per second, but
the term wavelength refers to the distance between each wave. As a result,
wavelength and frequency are inextricably linked: the shorter the wavelength, the
higher the frequency.
Another essential characteristic of an electromagnetic field, related to its wavelength
and frequency, is the energy of its quanta, the particles that carry electromagnetic
waves. Higher frequency (shorter wavelength) waves carry more energy in their quanta
than lower frequency (longer wavelength) waves. Some electromagnetic waves carry so much
energy per quantum that they are capable of breaking molecular bonds. Gamma
rays emitted by radioactive materials, cosmic rays, and X-rays all have this feature
and are referred to as 'ionizing radiation' in the electromagnetic spectrum. Non-
ionizing radiation refers to fields with insufficient quanta to disrupt molecular
bonds. Electricity, microwaves, and radiofrequency fields are examples of man-
made electromagnetic fields that are found at the relatively long wavelength and
low frequency end of the electromagnetic spectrum, and their quanta are unable to
break chemical bonds.
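A short numerical illustration of why low-frequency fields cannot break chemical bonds: the energy of one quantum, E = h·f, compared with typical bond energies of the order of a few electron-volts:

# Energy per quantum E = h*f for several frequencies, compared with a typical
# chemical bond energy of a few eV (order-of-magnitude illustration).
h = 6.626e-34        # Planck constant, J*s
eV = 1.602e-19       # J per electron-volt

for label, f in [("power line, 50 Hz", 50),
                 ("mobile phone, ~1 GHz", 1e9),
                 ("X-ray, ~1e18 Hz", 1e18)]:
    E_eV = h * f / eV
    print(f"{label:22s} E = {E_eV:.2e} eV")
# bond energies are of the order of 1-10 eV, so only the X-ray quantum can ionize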
Electromagnetic fields at low frequencies.
Electric fields exist whenever a positive or negative electrical charge is present.
They exert forces on other charges within the field. The strength of the electric
field is measured in volts per meter (V/m). When an electrical wire is charged, it
generates an electric field. Even when there is no current flowing, this field exists.
The greater the electric field at a certain distance from the wire, the higher the
voltage. Electric fields are strongest when they are close to a charge or a charged
conductor, and they weaken rapidly with distance. Metal and other conductors
shield them efficiently. Other materials, such as construction
materials and plants, offer some protection. As a result, walls, buildings, and trees
limit the electric fields from power lines outside the house. When power lines are
buried in the ground, the electric fields at the surface are hardly detectable.
Magnetic fields arise from the motion of electric charges. The strength of the
magnetic field is measured in amperes per meter (A/m); more commonly in
electromagnetic field research, scientists specify a related quantity, the flux density
(in microtesla, µT) instead. Unlike electric fields, magnetic fields are only created
when a device is turned on and current flows. The magnetic field becomes stronger
as the current increases. Magnetic fields, like electric fields, are strongest close to
their source and rapidly diminish as they travel further away. Magnetic fields are
not obstructed by conventional materials like building walls.
Electric fields:
1. Electric fields arise from voltage.
2. Their strength is measured in volts per metre (V/m).
3. An electric field can be present even when a device is switched off.
4. Field strength decreases with distance from the source.
5. Most building materials shield electric fields to some extent.
Magnetic fields:
1. Magnetic fields arise from current flow.
2. Their strength is measured in amperes per metre (A/m); EMF investigators commonly use a related measure, the flux density (in microtesla, µT, or millitesla, mT).
3. Magnetic fields exist as soon as a device is switched on and current flows.
4. Field strength decreases with distance from the source.
5. Magnetic fields are not attenuated by most materials.
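As a small numerical illustration of the two magnetic-field quantities in the list above, the sketch below converts field strength H (A/m) into flux density B (µT) using B = μ0·H, which holds in air and, to a good approximation, in biological tissue; the sample H values are assumptions chosen only for illustration.

```python
# A minimal sketch, assuming non-magnetic media (air, biological tissue):
# convert magnetic field strength H (A/m) to flux density B (microtesla)
# via B = mu_0 * H. The sample H values are illustrative only.
import math

MU_0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A

def flux_density_uT(h_a_per_m: float) -> float:
    """Return flux density in microtesla for a field strength given in A/m."""
    return MU_0 * h_a_per_m * 1e6

for h in (1.0, 80.0):
    print(f"H = {h:5.1f} A/m  ->  B = {flux_density_uT(h):6.1f} uT")
```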

Electrical appliances emit time-varying electromagnetic fields, which are examples
of extremely low frequency (ELF) fields. Frequencies up to 300 Hz are common in
ELF fields. Other technologies generate intermediate frequency (IF) fields between
300 Hz and 10 MHz, as well as radiofrequency (RF) fields between 10 MHz and
300 GHz. The effects of electromagnetic fields on the human body are influenced
by their frequency and energy as well as their field level. The main sources of ELF
fields are our electricity power supply and all appliances that use electricity; the
main sources of IF fields are computer screens, anti-theft devices, and security
systems; and the main sources of RF fields are radio, television, radar, and cellular
telephone antennas, as well as microwave ovens. These fields cause currents to
flow through the human body, which, depending on their amplitude and frequency
range, can cause a variety of consequences such as heating and electrical shock.
(However, to produce such effects, the fields outside the body would have to be
very strong, far stronger than present in normal environments.)
Low-frequency electric fields influence the human body just as they influence any
other material composed of charged particles. Electric fields acting on conductive
materials affect the distribution of electric charges at their surface.
They make current flow through the body and down to the ground. Within the
human body, low-frequency magnetic fields cause circulation currents. The
strength of these currents is determined by the external magnetic field's intensity.
These currents, if large enough, might stimulate neurons and muscles, as well as
alter other biological processes. Both electric and magnetic fields cause voltages
and currents in the body, although even just beneath a high-voltage transmission
line, the induced currents are negligible compared to shock and other electrical
effects thresholds.
Microwave thermotherapy is a method of treatment in which bodily tissue is
heated by microwave irradiation in order to harm and kill cancer cells or make
cancer cells more sensitive to the effects of radiation and certain anticancer
medications.
Diathermy is the use of high-frequency electromagnetic currents or electrically
produced heat in physical therapy and surgical treatments. Jacques Arsene
d'Arsonval produced the first observations on the effects of high-frequency
electromagnetic currents on the human body. In 1907, German physician Karl
Franz Nagelschmidt created the term diathermy, which is derived from the Greek
terms dia and therma and literally means "heating through."
Diathermy is a technique that is extensively used in medicine to relax muscles and
generate deep tissue warmth for therapeutic purposes. It's utilized in physical
therapy to target pathologic lesions in the body's deeper tissues with mild heat.
Diathermy is produced by three techniques: ultrasound (ultrasonic diathermy),
short-wave radio frequencies in the range 1–100 MHz (shortwave diathermy) or
microwaves typically in the 915 MHz or 2.45 GHz bands (microwave diathermy),
the methods differing mainly in their penetration capability. It exerts physical
effects and elicits a spectrum of physiological responses.
Hyperthermia treatment involves using the same procedures to raise tissue
temperatures in order to kill neoplasms (cancer and tumors), warts, and
infected tissues. In surgery, diathermy is used to cauterize blood vessels and
prevent excessive bleeding. The approach is very useful in
neurosurgery and ocular surgery.
UHF therapy
Ultra high frequency therapy (UHF therapy) is one of the methods of physical
therapy used to treat and restore health after injuries and illnesses.
This is an apparatus approach that uses ultra high frequency electromagnetic fields
to provide heat to the tissues and organs, causing a number of physicochemical
processes and thereby delivering a therapeutic effect.
The intensity of the produced and applied electromagnetic field influences the
body's physiological responses to UHF therapy. A low-intensity field, for example,
has an anti-inflammatory impact by increasing blood and lymph flow in tissues; a
higher-intensity field stimulates metabolic processes and promotes cell
nourishment and vital activity, but a high-intensity field, on the other hand,
increases inflammation. This is why the course of UHF therapy should be planned
individually, taking into account the severity of the disease and the stage of the
pathological process.

The effectiveness of UHF therapy has been demonstrated in practice, and it is now
used in practically every sector of medicine. Electromagnetic therapy, in addition
to its anti-inflammatory properties, promotes the regeneration of diseased or
traumatized tissues. UHF therapy enhances tissue metabolism, lowers vascular
permeability, relieves vascular and muscular spasm, alleviates pain, and helps restore
the patient's performance by creating a protective barrier around the inflammatory focus.

The procedure does not require any special preparation. The patient lies down
during the therapy, and a physician selects and applies electrodes corresponding to
the treated area and sets the dose of the electromagnetic field. Any metal objects
must be removed from the treated area: dentures, earrings, chains and piercings. The
procedure lasts from 5 to 16 minutes, and a treatment course consists of 10–15 sessions.
Sometimes it is necessary to increase the number of sessions during the course.
Sessions can be scheduled every day or every other day.

The procedure is effective in the treatment of various acute and chronic
inflammatory processes of the internal organs; degenerative processes; diseases of the
musculoskeletal system; of the ear, throat and nose; of the peripheral nervous system;
and of the female reproductive organs.

Body tissues are largely diamagnetic, like water. However, the body also contains
paramagnetic substances, molecules and ions.
Biocurrents arising in the body are a source of weak magnetic fields. In some
cases, the induction of such fields can be measured. So, for example, on the basis
of recording the time dependence of the magnetic field induction of the heart
(biocurrents of the heart), a diagnostic method was created –
magnetocardiography.

Investigation of the distribution of the electric field in space and the effect of
the UHF field on electrolytes and dielectrics.
UHF therapy is an effect on the body with therapeutic, prophylactic and
rehabilitative purposes with a continuous or pulsed ultra-high frequency electric
field (from 30 to 300 MHz). During UHF therapy, the electric field is supplied to
the body tissues using capacitor plates connected to the UHF oscillator.
Consider the physical processes that arise in conductors and dielectrics under the
influence of the UHF electric field. If a conductor is placed in an electric field, then
charged particles (charges) move in it in accordance with their polarity and field
direction, that is, an electric current arises, which is called the conduction current.
When a dielectric material is placed in an electric field, electric charges do not
flow through the material as they do in an electrical conductor but only slightly
shift from their average equilibrium positions causing dielectric polarization (see
fig.10.1). Positive charges are displaced in the direction of the field as a result of
dielectric polarization, while negative charges shift in the opposite direction (for
example, if the field is moving in the positive x-axis, the negative charges will shift
in the negative x-axis). This produces an internal electric field, which reduces the
overall field within the dielectric. When a dielectric is made up of weakly bonded
molecules, they become polarized and reorient so that their symmetry axes align
with the field.

The displacement current is the movement of electric charges caused by the
displacement of the charges of the dipoles in the dielectric. As a result, when an
electric field is applied to the conductors, a conduction current appears, and a
displacement current appears in the dielectric.

Heat release in tissues is determined by both the conduction current and the
displacement current in the frequency range up to 300 MHz, and at a frequency of
about 1 MHz, the conduction current takes the lead, while at a frequency of more
than 20 MHz, the displacement current takes the lead (for muscle tissue). Both of
these currents cause biological tissues to heat up.
Even in the most sensitive neuromuscular fibers, currents created in biological
tissues at frequencies above 100 kHz are incapable of causing the formation of an
action potential.
Dielectric properties are concerned with the storage and dissipation of electric and
magnetic energy in materials. Dielectrics play an important role in explaining a
variety of phenomena in cell biophysics.

Figure 10.1. A dielectric material in an electric field.

The action of the UHF electric field on electrolytes and dielectrics has
characteristic features. In electrolytes, ionic conductivity causes heating, and the energy of
the electric current is converted into internal thermal energy. The amount of heat
produced during this process:
q1 = σE²     (10.1)
where σ is the specific conductivity of the electrolyte and E is the effective value of
the electric field strength.
Continuous reorientation of the dipole molecules of a dielectric under the influence of
a high-frequency electric field releases heat, calculated by the formula:
q2 = ωE²εε0 tan δ     (10.2)
where ε is the relative dielectric constant of the dielectric, ε0 is the electric constant,
ω is the circular frequency, and δ is the angle of dielectric losses.
The body includes tissues that have the properties of both electrolytes and
dielectrics. Therefore, under the action of a high-frequency electric field, an
amount of heat is released in the tissues:
q = q1 + q2 = E²(ωεε0 tan δ + σ)     (10.3)
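A minimal sketch of formulas (10.1)–(10.3) is given below. The tissue parameters (conductivity, relative permittivity, loss angle) and the field strength are assumed, order-of-magnitude values chosen only for illustration, and the frequency is taken inside the 30–300 MHz range quoted above for UHF therapy.

```python
# A minimal sketch (illustrative, assumed parameter values) of Eqs. (10.1)-(10.3):
# heat released per unit volume of tissue in a UHF electric field, split into
# the conduction (q1) and displacement (q2) contributions.
import math

EPS_0 = 8.854e-12            # F/m, vacuum permittivity

def uhf_heating(E, f, sigma, eps_r, tan_delta):
    """Return (q1, q2, q_total) in W/m^3 for an effective field E (V/m)."""
    omega = 2 * math.pi * f
    q1 = sigma * E**2                                # ionic conduction losses, Eq. 10.1
    q2 = omega * eps_r * EPS_0 * tan_delta * E**2    # dielectric (dipole) losses, Eq. 10.2
    return q1, q2, q1 + q2                           # total heat, Eq. 10.3

# Assumed, order-of-magnitude tissue values for illustration only.
q1, q2, q = uhf_heating(E=100.0, f=40.68e6, sigma=0.6, eps_r=80.0, tan_delta=0.3)
print(f"q1 = {q1:.0f} W/m^3, q2 = {q2:.0f} W/m^3, total q = {q:.0f} W/m^3")
```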
During the orientation vibrations of large dipole molecules such as proteins,
electromagnetic energy is absorbed by an order of magnitude more strongly than
during the linear movement of ions. The maximum amount of heat is
formed in tissues with pronounced dielectric properties, which are poor in water,
due to the different absorption of the energy of the UHF field (nervous, bone and
connective tissue, subcutaneous fatty tissue, tendons and ligaments). Heat is

generated by an order of magnitude less in tissues and media with significant
electrical conductivity and water content (blood, lymph, muscle tissue) (Fig. 10.2).

Figure 10.2. Distribution of absorbed electromagnetic energy in body tissues during UHF
therapy (layer sequence S – M – B – M – S): S – skin, M – muscle tissue, B – bone tissue.

UHF generator.
There are various devices for UHF therapy. The apparatus is a device that
has electrodes and an alternating electric field appears between these electrodes
(figure 10.3). The absorption of the energy of the UHF electric field by biological
tissues is relatively low, due to which it has a pronounced penetrating ability and
penetrates through the part of the body located between the electrodes.

Figure 10.3. UHF generator.

Figure 10.4 shows an equivalent electrical diagram of a capacitive electrode
circuit and a portion of the patient's body. The patient's body impedance (resistance
R and capacitance C) in the UHF range is comparable to the capacitance of the
circuit section between the electrodes and the patient's body surface (capacitance
C0), the value of which is determined by the air gaps between them.

Figure 10.4. Equivalent electrical circuit of a chain of capacitive electrodes and a part of the
patient's body during UHF therapy

Because the area surrounding the electrodes, where there is the greatest
concentration of field lines of force, is located outside the patient's body, the
presence of gaps can considerably prevent unwanted heating of surface tissues.
Because there is no need to ensure contact between the electrode and the body, the
technique for administering UHF therapy is substantially simplified by the
presence of these gaps.
The heating of biological tissues in a UHF electric field is proportional to the
square of the field strength E². In real situations the field is inhomogeneous: its
intensity varies from point to point and is indicated by the concentration of the
field lines of force.
The field between the electrodes is most uniform in the center in the absence of the
patient's body, but the electric field lines bend towards the periphery due to the
edge effect (Fig. 10.5). The smaller the ratio of the distance between the electrodes
to their diameter, the wider the uniform field zone. When the patient is positioned
between the electrodes, the field lines are no longer uniform anywhere: because of
the inhomogeneous structure of the body, they bend in the central zone so that
the maximum field strength is under the electrodes.

Figure 10.5. Electric field lines formed by two plates:
a - when the distance between the plates is less than their diameter;
b - when the distance between the plates is greater than their diameter.

Two electrodes are used in a longitudinal and transverse layout when
performing UHF treatment techniques. The strength and absorbed energy of the
UHF electric field formed in the treatment area by capacitive electrodes varies
depending on the distance between the tissues and the electrode as well as on their
mutual position (Fig. 10.6). It is possible to produce a preferential effect on a
certain portion of the body by adjusting the electrode size, gap size, and inclination
of the electrode with regard to the body surface. If the electrodes are the same,
then the impact is more intense from the side of the electrode located with a
smaller gap (Fig. 10.6, a). The same takes place when using one smaller electrode
(Fig. 10.6, b).
The field is focused towards the edge of the electrode that is closer to the body
when the electrode is installed obliquely to the body's surface, resulting in selective
heating (Fig. 10.6, c).
When heating the folds of the body, such as between the cheek and the nose, this
approach is used.
When exposed to uneven surfaces of the body, the concentration of the field and
overheating occurs on its protruding parts. In this case, either the gap is increased
(Fig. 10.6, d), or flexible electrodes are used.
Metal objects in a UHF electric field do not heat up; however, near them,
especially in the presence of sharp edges and protrusions, there is a concentration
of field lines of force (Fig. 10.6, e), and, as a result, local overheating and burns
may occur. For this reason, the seat or bed used during UHF therapy
procedures should not have metal parts, and rings, pins, needles and other metal
objects in the patient's possession should be removed if they are located close to
the affected area.


Figure 10.6. Distribution of electric field lines during UHF therapy:
the degree of darkening of the object characterizes the intensity of heating.

During the procedure, the part of the patient's body on which the treatment is
carried out is rigidly fixed (Fig. 10.7, a, b). For the procedure, electrodes of the
appropriate size are selected, fixed in holders and installed in the position intended
for treatment.

Figure 10.7 - UHF therapy: a - ankle joint; b – eyeball.

Heating organs and tissues results in long-term, deep tissue hyperemia in the
affected area. Capillaries, in particular, expand rapidly, increasing in diameter by
3–10 times.
Strengthening regional blood and lymph outflow in affected tissues, changes in the
permeability of the microvasculature, tissue barriers, an increase in the number of
leukocytes and an increase in their phagocytic activity result in dehydration and
resorption of the inflammatory focus, as well as a reduction in pain caused by
edema. Activation of connective tissue stromal elements and cells of the
mononuclear phagocyte system (histiocytes and macrophages), an increase in the
dispersion of blood plasma proteins, an increase in the concentration of Ca2+ ions,
and activation of metabolism in the area of the lesion stimulate proliferative and
regenerative processes in the connective tissue surrounding the inflammatory
focus, and have been shown to stimulate proliferative and regenerative processes in
the connective tissue This allows UHF therapy to be used at various stages of the
inflammatory process.

Electrical characteristics of biological tissues


Biological tissues are conductors of electricity. Tissues are ionic conductors
(conductors of the second class), because ions are the current carriers in them. The effect
of electrical current on tissues varies depending on the current type. Direct, pulse
(impact dependent on impulse shape), and alternating currents are the three types
of current that can be distinguished.
Direct current effect on biological tissues. Direct current flows through
tissues under the influence of an applied constant voltage. Direct current is
characterized by the current strength (I) and the current density (J = I/S). The
current in tissues is determined by the applied voltage (V) and by the tissue's
specific resistance (ρ) or specific conductance (σ = 1/ρ); this resistance is called
active (ohmic) resistance. Positive ions shift to one side and accumulate in certain
regions of the tissue, whereas negative ions shift to the opposite side and
accumulate in other regions. The basic mechanism of the action of direct current on
biological tissues is therefore a shift in ion concentrations in different areas of the
tissue compared with their normal values; this phenomenon is called polarization.
Even if the applied voltage remains constant, the direct current flowing in the
tissue decreases with time and can drop to a very low level. The shift of ions and
the change of ion concentration in different parts of the exposed tissue give rise to
an internal electric field, called the electromotive force of polarization. This field is
directed opposite to the external field; it partially compensates the external field
and decreases the current.

Alternating current effect on biological tissues.
Alternating current is characterized by voltage, current strength, frequency (and
the related quantities – angular frequency and period), and phase. A harmonically
(sinusoidally) varying voltage induces an alternating current that varies in the same
way. Pulse currents are characterized by voltage, current strength, pulse shape, and
frequency; the typical effect of pulses with a unidirectional voltage change is
irritation (stimulation) of excitable tissue. The effects of alternating current on
living tissues depend on the frequency of the current. At low frequencies,
alternating current, like pulse current, irritates excitable tissues. At high
frequencies, when the displacement of charged particles in the tissues is small, a
calorific effect takes place, i.e. heat is released in the tissues as a result of the
current flow. For alternating currents, the conducting properties of tissues depend
on the frequency of the current, because tissues have pronounced capacitive
properties. For alternating current, tissue is characterized by its impedance (total
resistance).
Biological objects possess both active and reactive resistance; impedance is the
total resistance of the object. Impedance (symbol Z) is a measure of a circuit's
overall opposition to current, i.e. of how much it obstructs the flow of charge. It is
similar to resistance, but it also takes capacitance and inductance into account.
Impedance is measured in ohms (Ω). Because the effects of capacitance and
inductance vary with the frequency of the current traveling through the circuit,
impedance varies with frequency and is thus more complicated than resistance,
whose effect is the same at all frequencies.

Z = √(R² + (XL − XC)²)     (10.4)
where R is the active resistance, XL = ωL is the inductive reactance, and XC = 1/(ωC)
is the capacitive reactance.
The tissues of an organism conduct not only direct current but also alternating
current. There are no structures in an organism analogous to inductance coils, so its
inductance is close to zero. Biological membranes, and thus tissues as a whole,
have capacitive properties, so the impedance of an organism's tissues is
characterized solely by ohmic and capacitive resistance:

Z = √(R² + XC²)     (10.5)
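A minimal numerical sketch of Eq. (10.5) follows; the resistance and capacitance values are assumed for illustration and simply show how this impedance falls as the frequency of the alternating current grows.

```python
# A minimal sketch of Eq. (10.5): tissue impedance from ohmic resistance and
# capacitive reactance only (inductance neglected). R and C are assumed values.
import math

def tissue_impedance(R, C, f):
    """Impedance (ohm) of a series R-C model at frequency f (Hz)."""
    Xc = 1.0 / (2 * math.pi * f * C)     # capacitive reactance, ohm
    return math.sqrt(R**2 + Xc**2)       # Eq. 10.5

for f in (1e3, 30e3, 1e6):
    print(f"f = {f:8.0f} Hz -> Z = {tissue_impedance(R=1_000.0, C=20e-9, f=f):8.1f} ohm")
```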

The ohmic and capacitive properties of biological tissues can be modeled using
equivalent electrical circuits. An electrical circuit whose impedance depends on the
alternating current frequency consists of resistors and capacitors. The simplest
circuit of this type is shown in fig. 10.8. This circuit is called an electrical
equivalent of a biological tissue.

Figure 10.8. Equivalent electrical scheme of a biological tissue.
Figure 10.9. Muscle tissue impedance dependence on alternating current frequency in
different states.

Here the resistor Re corresponds to the extracellular fluid, the capacitor C
corresponds to the cell membranes, and the resistor Ri corresponds to the
intracellular contents.
Processes developing in tissues (inflammation, necrosis, etc.) alter the electrical
characteristics of these tissues. As a result, the values and the shape of the curve of
impedance versus alternating current frequency change (fig. 10.9).
The frequency dependence of impedance makes it possible to assess the viability of
an organism's tissues, which is vital to know for tissue and organ transplantation. In
dead tissue the membranes are destroyed, and the tissue has only ohmic resistance;
healthy and diseased tissues also differ in the frequency dependence of their
impedance.
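The sketch below models the equivalent circuit of fig. 10.8 – the extracellular resistance Re in parallel with the series combination of the membrane capacitance C and the intracellular resistance Ri – and prints the magnitude of the impedance at several frequencies. The component values are assumptions chosen only to show the characteristic fall of impedance with frequency.

```python
# A minimal sketch of the equivalent circuit in Fig. 10.8: Re in parallel with
# the series combination of membrane capacitance C and intracellular resistance Ri.
# Component values are assumed; the point is the fall of |Z| with frequency.
import math

def tissue_model_impedance(f, Re=2_000.0, Ri=500.0, C=50e-9):
    """Complex impedance of Re || (Ri + 1/(j*omega*C)) at frequency f (Hz)."""
    omega = 2 * math.pi * f
    z_branch = Ri + 1.0 / (1j * omega * C)    # intracellular path through membranes
    return (Re * z_branch) / (Re + z_branch)  # parallel combination with Re

for f in (100.0, 10e3, 1e6):
    print(f"f = {f:9.0f} Hz  |Z| = {abs(tissue_model_impedance(f)):7.1f} ohm")
```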
The physiological state of tissues and organs determines their impedance. In
particular, the impedance varies with the filling of the blood vessels in the course of
cardiac activity.
Rheography is a diagnostic procedure based on recording the changes in tissue
impedance during cardiac activity (impedance plethysmography).
With this technique, rheograms of the brain (rheoencephalography), heart
(rheocardiography), main vessels, lungs, liver, and extremities can be recorded.
Typically, measurements are made at a frequency of about 30 kHz.
Rheography is used to diagnose a variety of circulatory disorders of the brain,
limbs, lungs, heart, and liver. Rheography of the extremities is used to diagnose
disorders of the peripheral vessels, which are characterized by changes in their tone
and elasticity, or by narrowing or complete occlusion of the arteries.
The rheogram is recorded from symmetric sites of both extremities, to which
electrodes of identical area and of 10–20 mm width are applied.

Galvanization is a medical treatment that uses a steady electric current of low
voltage (up to 80V) and low intensity (up to 50 mA). Galvanization is now limited
to current obtained through rectification and smoothing of alternating mains
current.
Galvanic current passing through skin encounters strong resistance from the
epidermis. Most of the current's energy is spent overcoming this resistance, and
the most pronounced reactions to galvanization develop exactly where the most
energy is absorbed. First of all, these are hyperemia and a burning sensation under
the electrodes, induced by a change in the usual ratio of tissue ions, in the pH of
the medium, and by the heat produced by the current. In addition, the
release of biochemically active chemicals, the activation of enzymes, and
metabolic processes generate a reflex increase in blood flow to the exposed area.
When the current intensity and exposure time are increased, the burning feeling
and tingling become unbearable, and chemical burns emerge after a lengthy period
of use.
The action of direct current weakens from the surface towards the deeper tissues,
because the current spreads along tissues with high electrical conductivity and its
density falls rapidly with depth. Galvanization improves blood circulation and
lymphokinesia, promotes tissue resorption ability, accelerates metabolic and
trophic processes, boosts gland secretory capabilities, and has analgesic properties.
Galvanic current, particularly when applied to the hepatic area, enhances the
action of coagulants and prolongs their action in the body by 2–4 hours; this
potentiating effect persists for 4–6 hours after galvanization. The action of
anticoagulants given in small doses against the background of galvanization, on the
other hand, is weakened and develops later (after 30–120 min). Adrenaline,
acetylcholine, thrombin, and histamine all show an enhanced coagulant effect when
combined with galvanization.
Galvanic current, delivered by orbital and occipital methods, prolongs and
potentiates the effect of psychotropic medicines (haloperidol, seduxen, amizylum,
sodium oxybutyrate, and others), as well as lowering the quantity and frequency of
adverse reactions.
Because galvanic current enhances their effect and favorably influences
immunological processes and overall reactivity, the doses of desensitizing and
immunosuppressant drugs can be reduced during galvanization.
Indications for galvanization: diseases and lesions of different sections of
peripheral nervous system of infectious, toxic and traumatic origin (radiculitis,
plexitis, neuritis, neuralgia of different locality), consequences of diseases and
lesions of the brain, spinal cord and meninges, neuroses, vegetative and vascular
disorders, chronic joint inflammations (arthritis) of traumatic, rheumatic and
metabolic origin, etc.
The apparatus for galvanization and electrophoresis POTOK-01M (fig. 10.10) is
intended for acting on the human body with a constant electric current for
therapeutic and prophylactic purposes, as well as for iontophoresis, in clinics and
hospitals. It is a plastic case with a switch and handles for regulating the supplied
current, and it comes bundled with two electrodes. Additionally, the apparatus can
be equipped with various kinds of soft pads for physiotherapy. Its power
consumption is no more than 20 VA, and the maximum current in the patient
circuit at a load of 500 ohms is 50 ± 5 mA.

Figure 10.10. Galvanization apparatus.          Figure 10.11. Medical electrophoresis.

Electrophoresis is a method used in clinical and research laboratories for
separating molecules according to their size and electrical charge. An electric
current is passed through a medium that contains the mixture of molecules. Each
kind of molecule travels through the medium at a different rate, depending on its
electrical charge and molecular size. Separation of the molecules occurs based on
these differences. Medicinal electrophoresis is a therapeutic method that
combines the effect of a low-power direct current on the body and a drug
introduced with its help through the skin without injection using electrodes (fig.
10.11).
Laboratory work

Task A. RESEARCH OF THE SPATIAL DISTRIBUTION OF THE UHF ELECTRIC FIELD
The distribution of the electric field strength between the patient's electrodes
depends on the size of the electrodes, the distance between them, and their relative
location. This distribution can be studied using a dipole antenna (DA), which
consists of two conductors with a dielectric connected between them. The dipole
antenna is connected to a microammeter.
The current that occurs in the dipole antenna circuit is proportional to the strength
of the UHF electric field. The dipole antenna is located at the end of a wooden rail
that can move along the guides in two mutually perpendicular directions. The
guides are marked with divisions at each centimeter. This allows you to
determine the position of the dipole antenna relative to the patient's electrodes.

(Experimental setup: UHF generator and dipole antenna.)

1. The data obtained during the experiment should be written in the table.
l (cm) 1 2 3 4 5 6
I (А)

2. Use the table data to plot the dependence graph I = f(l).


3. Make a conclusion.

Task B. RESEARCH OF THE THERMAL IMPACT OF THE UHF FIELD ON ELECTROLYTES AND DIELECTRICS.

To study the thermal effect of the UHF electric field on electrolytes and dielectrics,
vessels with the studied liquids are installed between the electrodes. The
temperature change is measured by thermometers mounted in vessels.
Salt solution is used as an electrolyte, and castor oil is used as a dielectric.

1. The data obtained during the experiment should be written in the table.
Heating time, t (min) | Temperature of electrolyte, Tel | Temperature of dielectric, Tdielectric
0
3
6
9
12

2. Draw the graphs of the temperature of the studied liquids versus the time of
exposure to the UHF electric field: Tel = f(t), Tdielectric = f(t). For comparison, draw
both graphs on the same coordinate axes.

3. Make a conclusion.

Numerical problems for independent solution:


1. In an alternating current circuit with a voltage of U =220 V and a frequency
of 𝝼 = 50 Hz, an ohmic resistance R = 100 Ohm and a capacitor of
C = 10 μF are connected in series. Determine circuit impedance, current in
the chain and voltage across the capacitor.

2. Determine the active resistance of the coil of the electromagnetic relay in the
X-ray machine, if the inductance of the coil is 150 H, and the current in the
industrial frequency network at a voltage of 120 V is 2.5 mA.

3. Determine the amount of charge that passes through a section of the human
body in 2 minutes, if the current density during galvanization is 0.1 A/cm² and the
size of the electrodes is 4×6 cm². How much heat will be released in this case, if
the resistance of the body area is 2·10³ ohms?
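A minimal worked sketch for problem 1 above (a series R–C circuit with no inductance) is shown below; it simply evaluates Eq. (10.5) and Ohm's law for the stated data.

```python
# A minimal worked sketch for numerical problem 1: series R-C circuit driven
# by mains voltage - impedance, current and capacitor voltage.
import math

U, f = 220.0, 50.0          # applied RMS voltage (V) and frequency (Hz)
R, C = 100.0, 10e-6         # ohmic resistance (ohm) and capacitance (F)

omega = 2 * math.pi * f
Xc = 1.0 / (omega * C)              # capacitive reactance, ohm
Z = math.sqrt(R**2 + Xc**2)         # series impedance without inductance, Eq. 10.5
I = U / Z                           # RMS current in the circuit
Uc = I * Xc                         # RMS voltage across the capacitor

print(f"Xc = {Xc:.1f} ohm, Z = {Z:.1f} ohm, I = {I*1000:.0f} mA, Uc = {Uc:.0f} V")
# Expected: Z is roughly 334 ohm, I about 0.66 A, Uc about 210 V.
```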

Work on the text


Task: cross out unnecessary words in curly braces
UHF therapy (ultra high frequency therapy) is a physiotherapeutic method of
{treatment, diagnostics, surgical operation} that uses ultra high frequency
electromagnetic fields. UHF therapy is a treatment with {cold, boiling, heat,
current} which, with the help of special equipment, penetrates into human tissues
and organs.
UHF therapy is based on the action of an ultra-high frequency
electromagnetic field on a pathological focus. During the procedure, the patient
feels warmth, and the energy {absorbed, released} by the tissues of the body
improves microcirculation at the site of exposure.
Indications for UHF therapy may be as follows: {diseases of the gastrointestinal
tract, diseases of the ENT organs, diseases of organs of vision, various types of
cancerous tumors, skin diseases, upon detection of a neoplasm, diseases of the
musculoskeletal system}.

XI
PHYSICAL ISSUES OF HEMODYNAMICS BASED ON
HYDRODYNAMICS. BIOREOLOGY. DETERMINATION OF THE
VISCOSITY OF LIQUIDS USING A VISCOMETER.

Circulation of the Blood.


Blood circulation in the body is sometimes compared to a plumbing system,
with the heart acting as the pump and the veins, arteries, and capillaries acting as
the pipes. This analogy isn't completely accurate. Blood is more than just a fluid; it
contains cells that make the flow more difficult, especially when the pathways
become narrow. Furthermore, veins and arteries are not hard pipes, but rather
elastic structures that change shape in response to fluid forces. Even so, using the
principles developed for simple fluids flowing in rigid pipes, it is possible to study
the circulatory system with good accuracy.
The human circulatory system is depicted in Figure 11.1 as a diagram. The
circulatory system's blood transports oxygen, nutrition, and other necessary
substances to the cells while also removing metabolic waste products. The heart
pumps blood via the circulatory system, and it leaves the heart through arteries and
returns through veins.
The mammalian heart is made up of two separate pumps, each with two
chambers known as the atrium and ventricle. The valves that control the flow of
blood in and out of these chambers are organized to keep the blood flowing in the
right direction. The right atrium receives blood from all regions of the body except
the lungs; it then contracts and forces the blood into the right ventricle. The
ventricle then contracts, forcing blood into the lungs via the pulmonary artery. The
blood releases carbon dioxide and absorbs oxygen as it passes through the lungs.
The pulmonary vein then transports the blood to the left atrium. The blood is
forced into the left ventricle by the contraction of the left atrium; the left ventricle
then sends the oxygen-rich blood through the aorta into the arteries that lead to all
areas of the body except the lungs. As a result, the blood is pumped through the lungs by the
right side of the heart, while the blood is pumped through the rest of the body by
the left side.
The aorta, a major artery that delivers oxygenated blood away from the
heart's left chamber, branches into smaller arteries that travel to different areas of
the body. These branch off into even smaller arteries, the smallest of which are
known as arterioles. The arterioles, as we'll see later, serve a crucial function in
regulating blood flow to various parts of the body. The arterioles branch out into
thin capillaries that are often just broad enough for single blood cells to flow
through. Nearly every cell in the body is adjacent to a capillary because capillaries
are widely distributed throughout the tissue. Gases, nutrients, and waste products
are exchanged between the blood and the surrounding tissue via diffusion via the
capillary walls, which are very thin. The capillaries connect to form venules, which
then merge to form larger veins that return the oxygen-depleted blood to the right
atrium of the heart.
Figure 11.1 Schematic diagram showing various routes of the circulation.

The most vital biological fluid in the human body is blood. Blood is made
up of a concentrated suspension of cells, called corpuscles or formed elements,
divided into red blood cells (RBC), white blood cells, and platelets, suspended
under physiological conditions in a water-based fluid called plasma. Blood is a fluid
connective tissue that performs several critical activities in the body, including
carrying oxygen, nutrients, and hormones to cells and tissues throughout the body,
eliminating waste, and regulating body pH and temperature.
Blood Pressure.
Electrical pulses given simultaneously to the left and right halves of the heart cause
the heart chambers to contract. The atria contract first, driving blood into the
ventricles, followed by the ventricles contracting, forcing blood out of the heart.
Blood enters the arteries in spurts or pulses as a result of the heart's pumping
motion. The systolic pressure is the highest pressure pushing the blood at the apex
of the pulse. The diastolic pressure is the lowest blood pressure between pulses.
The systolic pressure of a young healthy person is around 120 torr (mm Hg), and
the diastolic pressure is around 80 torr. Therefore the average pressure of the
pulsating blood at heart level is 100 torr.
The initial energy generated by the heart's pumping motion is wasted by two
loss mechanisms as blood passes through the circulatory system: losses associated
with the expansion and contraction of artery walls, and viscous friction associated
with blood movement. The early pressure oscillations are smoothed out as the
blood moves away from the heart, and the average pressure reduces as a result of
these energy losses. The blood flow is smooth by the time it reaches the capillaries,
and the blood pressure is only approximately 30 torr. The pressure in the veins
continues to decline until it is near to zero shortly before it returns to the heart. The
contraction of muscles that squeeze the blood toward the heart aids the
transportation of blood through the veins in this final stage of the flow.
Unidirectional valves in the veins provide one-way flow.
The cardiovascular system contains a number of flow-control mechanisms
that can adjust for the substantial fluctuations in arterial pressure that occur when
the body is moved. Even so, the system may take a few seconds to adjust. As a
result, when a person jumps up from a prone posture, he or she may feel dizzy for a
brief while. This is due to a brief decrease in blood flow to the brain caused by an
abrupt drop in blood pressure in the brain arteries.
The same hydrostatic variables operate in veins as well, and their impact
may be greater than in arteries. In the veins, blood pressure is lower than in the
arteries. When a person remains still, the blood pressure is barely enough to drive
blood back to the heart from the feet. Blood collects in the veins of the legs when a
person sits or stands without moving their muscles. This raises the pressure in the
capillaries, which might cause transient leg edema.
Viscosity of fluids.
Viscosity is the physical property that characterizes the flow resistance of
simple fluids. Newton’s law of viscosity defines the relationship between the shear
stress and shear rate of a fluid subjected to a mechanical stress.
τ = μ (du/dy);   shear stress τ = F/A
where μ is the dynamic viscosity and du/dy is the rate of shear deformation.
The viscosity or coefficient of viscosity is a constant for a given temperature and
pressure and is defined as the ratio of shear stress to shear rate. Newtonian fluids
follow Newton's viscosity law. The shear rate has no effect on the viscosity.
Because non-Newtonian fluids do not obey Newton's law, their viscosity (the
ratio of shear stress to shear rate) is variable and depends on the shear rate. Blood
is not a Newtonian fluid. Its viscosity is substantially influenced by
the proportion of red cells in the total volume (the hematocrit).
Newton's law of viscosity defines dynamic viscosity as the coefficient of viscosity.
Kinematic viscosity is the dynamic viscosity divided by the density.
Kinematic viscosity: ν = μ/ρ, where ρ is the density of the fluid.
The viscosity of all fluids varies. The presence of cells, proteins, and
macromolecules in biological fluids has a substantial impact on viscosity. At low
flow rates, cells and macromolecules can clump together. As the shear rate
increases, these aggregations disintegrate. Some proteins and carbohydrates have
extended forms as well. These molecules align themselves with the flow direction.
Furthermore, as shear stress increases, cells can alter form, becoming more oval
and orienting their long axis in the direction of flow.

Viscosity and Poiseuille’s Law.


A frictionless flow is an idealization. Because the molecules in a real fluid attract each
other, relative motion between the fluid molecules is countered by a frictional force
known as viscous friction. For a particular fluid, viscous friction is proportional to
the velocity of flow and the coefficient of viscosity. The velocity of a fluid moving
through a pipe changes across the pipe due to viscous friction. The fluid's velocity
is highest in the center and diminishes as it approaches the pipe's walls; the fluid is
motionless at the pipe's walls. Laminar fluid flow is the name given to this type of
fluid flow. Figure 11.2 shows the velocity profile for laminar flow in a pipe. The
lengths of the arrows are proportional to the velocity across the pipe diameter.

Figure 11.2. Laminar flow diagram.

If viscosity is taken into account, it can be shown that the rate of laminar flow Q
through a cylindrical tube of radius R and length L is given by Poiseuille’s law,
which is

Q = πR⁴(P1 − P2) / (8ηL)     (11.1)

where (P1 − P2) is the difference between the fluid pressures at the two ends of the
cylinder and η is the coefficient of viscosity, measured in units of dyn·sec/cm²,
which is called a poise. In general, viscosity is a function of temperature and
increases as the fluid becomes colder.
The distinction between frictionless and viscous fluid flow is fundamental.
Without any external push, a frictionless fluid will flow continuously. This is
demonstrated by Bernoulli's equation, which states that if the fluid's height and
velocity remain constant, no pressure decrease occurs throughout the flow route.
However, Poiseuille's equation for viscous flow implies that viscous fluid flow is
always accompanied by a pressure drop. By rearranging Eq. 11.1, we can express
the pressure drop as
ΔP = 8ηLQ / (πR⁴)     (11.2)
The expression 𝛥P is the pressure drop that accompanies the flow rate Q along a
length L of the pipe. The force necessary to overcome the frictional forces that tend
to slow the flow in the pipe segment is the product of the pressure drop and the
pipe's area. For a given flow rate, the pressure drop necessary to overcome
frictional losses decreases as the fourth power of the pipe radius. Even though all
fluids are subject to friction, frictional losses and the resulting pressure drop are
modest and can be ignored if the flow area is large. Bernoulli's equation can then
be employed with little error.
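A minimal sketch of Poiseuille's law (Eqs. 11.1–11.2) follows; the aorta-like radius, viscosity and flow rate are illustrative values consistent with those used elsewhere in this chapter.

```python
# A minimal sketch of Poiseuille's law: pressure drop per unit length of a
# vessel for a given volumetric flow rate. Aorta-like values are assumed
# (radius ~1 cm, blood viscosity ~4e-3 Pa*s, resting flow 5 L/min).
import math

def pressure_drop(Q, R, L, eta):
    """Pressure drop (Pa) for laminar flow Q (m^3/s) through a tube of
    radius R (m), length L (m) and fluid viscosity eta (Pa*s)."""
    return 8.0 * eta * L * Q / (math.pi * R**4)

Q = 5.0e-3 / 60.0                                   # 5 liters per minute, m^3/s
dp = pressure_drop(Q, R=0.01, L=0.01, eta=4e-3)     # drop over 1 cm of aorta
print(f"dP over 1 cm of aorta at rest: {dp:.2f} Pa ({dp/133.32*1000:.1f} mtorr)")
```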
Bernoulli’s Equation.
If frictional losses are neglected, the flow of an incompressible fluid is governed
by Bernoulli’s equation, which gives the relationship between velocity, pressure,
and elevation in a line of flow. Bernoulli’s equation states that at any point in the
channel of a flowing fluid the following relationship holds:
P + ρgh + ½ρϑ² = constant     (11.3)
Here P is the pressure in the fluid, h is the height, ρ is the density, and 𝞋 is the
velocity at any point in the flow channel. The potential energy per unit volume of
the fluid owing to pressure is the first term in the equation. (It is worth noting that
the unit of pressure, dyn/cm², is equivalent to erg/cm³, i.e. energy per unit volume.) The
gravitational potential energy per unit volume is the second term, while the kinetic
energy per unit volume is the third. The law of energy conservation leads to
Bernoulli's equation. Because the three terms in the equation reflect the entire
energy in the fluid, their sum must remain constant in the absence of friction,
regardless of how the flow is changed.
We will illustrate the use of Bernoulli’s equation with a simple example.
Consider a fluid flowing through a pipe consisting of two segments with cross
sectional areas A1 and A2, respectively (see Fig. 11.3). The volume of fluid flowing
per second past any point in the pipe is given by the product of the fluid velocity
and the area of the pipe, A×𝞋. If the fluid is incompressible, in a unit time as much
fluid must flow out of the pipe as flows into it. Therefore, the rates of flow in
segments 1 and 2 are equal; that is

Figure 11.3. Flow of fluid through a pipe with two segments of different areas.

A1ϑ1 = A2ϑ2,  or  ϑ2 = (A1/A2)ϑ1     (11.4)
In our case A1 is larger than A2 so we conclude that the velocity of the fluid
in segment 2 is greater than in segment 1.
Bernoulli's equation states that the sum of the terms in Eq. 11.3 at any point
in the flow is equal to the same constant. Therefore the relationship between the
parameters P, ρ, h, and v at points 1 and 2 is
P1 + ρgh1 + ½ρϑ1² = P2 + ρgh2 + ½ρϑ2²     (11.5)
where the subscripts designate the parameters at the two points in the flow.
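The short sketch below combines the continuity relation (Eq. 11.4) with Bernoulli's equation (Eq. 11.5) for a horizontal vessel whose cross-section narrows; the input pressure, velocity and area ratio are assumed values chosen only for illustration.

```python
# A minimal sketch combining continuity (Eq. 11.4) with Bernoulli's equation
# (Eq. 11.5) for a horizontal vessel (h1 = h2). Input values are illustrative.
RHO = 1.05e3               # blood density, kg/m^3

def downstream_state(P1, v1, A1, A2):
    """Return (v2, P2) for incompressible, frictionless flow at equal heights."""
    v2 = v1 * A1 / A2                           # continuity, Eq. 11.4
    P2 = P1 + 0.5 * RHO * (v1**2 - v2**2)       # Bernoulli, Eq. 11.5
    return v2, P2

P1 = 100 * 133.32          # 100 torr expressed in pascals
v2, P2 = downstream_state(P1, v1=0.3, A1=1.0, A2=0.5)   # cross-section halved
print(f"v2 = {v2:.2f} m/s, P2 = {P2/133.32:.1f} torr (down from 100 torr)")
```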

Turbulent Flow.
If the velocity of a fluid is increased past a critical point, the smooth laminar flow
shown in Fig. 11.2 is disrupted. The flow becomes turbulent with eddies and whirls
disrupting the laminar flow (see Fig. 11.4). In a cylindrical pipe the critical flow
velocity 𝞋c above which the flow is turbulent, is given by
ϑc = Rη / (ρD)     (11.6)

Here D is the diameter of the cylinder, ρ is the density of the fluid, and η is the
viscosity. The Reynolds number, denoted by the letter R, is a number that ranges
from 2000 to 3000 for most fluids. In turbulent flow, frictional forces are larger
than in laminar flow. As a result, forcing a fluid through a conduit becomes more
difficult as the flow becomes turbulent.

Figure 11.4. Turbulent fluid flow.

At rest, a person's blood flow is around 5 liters per minute. This means that the
blood's average velocity through the aorta is 26.5 cm/sec. The blood in the aorta,
on the other hand, does not flow continuously. It only moves in bursts. The blood
velocity is roughly three times higher than the total average value during the flow
period.
Because the overall area of the arteries grows as they branch, the kinetic energy in
the smaller arteries is even lower. As a result, the flow velocity drops. When the
total flow rate is 5 liters per minute, the blood velocity in the capillaries is just 0.33
millimeters per second.
As the rate of blood flow increases, the kinetic energy of the blood becomes
increasingly significant. For example, if the blood flow rate increases to 25 liters
per minute during physical exercise, the kinetic energy of the blood is 83,300
erg/cm3, which is comparable to a pressure of 62.5 torr. When compared to resting
blood pressure, this energy is no longer insignificant. The increased velocity of
blood flow during physical exercise is not an issue in healthy arteries. Blood
pressure rises to compensate for the pressure loss during vigorous exertion.
Equation 11.6 demonstrates that when a fluid's velocity surpasses a certain
critical value, the flow becomes turbulent. Blood flow is laminar throughout the
most part of the circulatory system. Only in the aorta does the flow become
turbulent on occasion. The critical velocity for the commencement of turbulence in
the 2-cm-diameter aorta, assuming a Reynolds number of 2000, is, according to
Eq. 11.6
ϑc = Rη/(ρD) = (2000 × 0.04)/(1.05 × 2) = 38 cm/sec
The flow velocity in the aorta is less than this while the body is at rest. However, if
physical activity levels rise, the aorta's flow rate may exceed the critical rate and
become turbulent. Unless the channels are unusually constricted, the flow in the
rest of the body remains laminar. Laminar flow is calm, but turbulent flow
produces noises as a result of vibrations in the surrounding tissues, which signal
circulatory system problems. A stethoscope can detect these noises, which are
known as bruits, and can aid in the diagnosis of circulatory diseases.
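A minimal sketch of Eq. (11.6) is given below; it reproduces the critical-velocity estimate for the aorta in CGS units and checks the resting and exercise flow velocities quoted above against it.

```python
# A minimal sketch of Eq. (11.6): critical velocity for the onset of turbulence
# in the aorta (CGS units, matching the worked example above).
def critical_velocity(Re, eta, rho, D):
    """Critical velocity (cm/s) above which flow becomes turbulent."""
    return Re * eta / (rho * D)

v_c = critical_velocity(Re=2000, eta=0.04, rho=1.05, D=2.0)   # aorta, D = 2 cm
for label, v in (("rest", 26.5), ("exercise (5x flow)", 5 * 26.5)):
    print(f"{label}: v = {v:.1f} cm/s, turbulent: {v > v_c} (v_c = {v_c:.0f} cm/s)")
```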

Power Produced by the Heart.


The heart's pumping motion provides the energy in the flowing blood. We'll now
calculate the amount of energy the heart expends to maintain the blood circulating
through the circulatory system. The product of the flow rate Q and the energy E
per unit volume of blood is the power Ph produced by the heart:
𝑷𝒉 = 𝑸 · 𝑬

The rate of blood flow via the right ventricle, which pumps blood to the
lungs, is the same as the rate of blood flow through the left ventricle. However, the
blood pressure here is only one-sixth that of the aorta. As a result, the right
ventricle's power production is 0.25 W at rest and 4.5 W during vigorous physical
exercise. Depending on the intensity of the physical activity, the heart's overall
peak power output ranges between 1.9 to 14.6 W. While in fact the systolic blood
pressure rises with increased blood flow, in these calculations we have assumed
that it remains at 120 torr (the torr is unit of pressure based on an absolute scale,
1
defined as exactly of a standard atmosphere (101325 Pa) thus 1 torr is exactly
760
101325
pascals (≈133.32 Pa).
760

Measurement of Blood Pressure.
The arterial blood pressure is a key measure of a person's overall health. Blood
pressure readings that are unusually high or low suggest that there are problems in
the body that require medical treatment. High blood pressure, which can be
produced by constrictions in the circulatory system, indicates that the heart is
working harder than usual and that the extra load may be putting it at risk.
Inserting a vertical glass tube into an artery and watching the height to which the
blood rises is the most straightforward way to measure blood pressure. Reverend
Stephen Hales inserted a long vertical glass tube into an artery of a horse in 1733 to
measure blood pressure for the first time. Although complex variations of this
technique are still utilized in rare circumstances, it is clear that this method is
insufficient for normal clinical exams. The cut-off method is now the most often
used for routine blood pressure measurements. Although not as precise as direct
measurements, this procedure is straightforward and, in most instances, adequate.
A cuff containing an inflating balloon is tightened around the upper arm in this
procedure. A bulb inflates the balloon, and a pressure gauge measures the pressure
within the balloon. Because the initial pressure in the balloon is higher than the
systolic pressure, blood flow through the artery is stopped. By progressively
releasing some of the air, the observer allows the pressure in the balloon to decline.
She listens with a stethoscope over the artery downstream from the cuff as the
pressure lowers. Until the pressure in the balloon drops to the systolic pressure, no
sound is heard. Blood begins to flow through the artery just below this point;
however, because the artery is still somewhat restricted, the flow is turbulent and
accompanied by a distinctive sound. The systolic blood pressure is the pressure
measured at the start of sound. The artery grows to its usual size as the pressure in
the balloon lowers, the flow becomes laminar, and the noise goes away. The
diastolic pressure is the pressure at which the sound begins to diminish.
The variation of blood pressure along the body must be taken into account while
taking clinical measures. The cuff is put on the arm, approximately at heart level,
for the cut-off blood pressure measurement.
Whole blood is a circulating tissue made up of fluid plasma (55 percent by
volume) and formed elements (Red Blood Cells, White Blood Cells, and Platelets,
about 45 percent by volume), but it can be modeled as a suspension with
interacting deformable particles of various shapes and dimensions dispersed in a
complex solution called plasma. The exact content and dimensions of cells differ
slightly between people and depending on their health. Plasma, like other biofluids,
is a solution containing a variety of solutes (9.77 g/100ml) such as proteins,
carbohydrates, electrolytes, organic acids, lipids, and so on. The formed elements
are primarily erythrocytes (about 95%), leukocytes, and platelets, with sizes
ranging from 1 to 20 μm.

Laboratory work

Determination of the viscosity of a liquid by a capillary viscometer. Study of the
dependence of viscosity on concentration.

The purpose of the laboratory work : Determination of the viscosity of the liquid
using an Ostwald viscometer. Study of the dependence of viscosity on
concentration. Plot the concentration dependence of the viscosity and find the
unknown concentration from the graph.

Facilities: capillary viscometer, stopwatch, rubber bulb, distilled water (reference
fluid), investigated liquids (glycerol solutions of different concentrations).
The order of work performance
1. Pour the reference liquid into the wide tube of the viscometer.
2. Using the rubber bulb, draw the reference liquid up into the measuring reservoir
bounded by marks a and b.
3. Measure the flow time of the reference liquid out of the measuring reservoir with
a stopwatch.
4. Repeat the measurement 5 times according to steps 2–3.
5. Replace the water in the viscometer with the investigated liquid and measure its
flow time 5 times as in steps 2–4.
6. Record the results of the measurements in table 1.

№ H2O C= % C= % C= % C= X%
1
2
3
4
5
t

Here ρ denotes the densities of the different liquids; take the numerical values from
the table in appendix E.

1. Calculate K = η0 / (ρ0·t0), where η0 is the viscosity coefficient of water at room
temperature, ρ1 and ρ0 are the densities of the investigated liquid and of water
respectively, and t0 is the mean flow time of water. Take the numerical values from
the table in appendix E.
2. Calculate the viscosity of the investigated liquid by the formula:
η = K·ρliquid·t

3. Plot the dependence η = f(C%) and find the unknown concentration of the
liquid.
4. Conclusions.
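A minimal sketch of the calculation in steps 1–2 above is shown below; the flow times and densities are assumed sample readings, not real measurements, and serve only to illustrate how K and η are computed.

```python
# A minimal sketch of the Ostwald-viscometer calculation:
# K = eta0 / (rho0 * t0) from the reference liquid (water), then
# eta = K * rho * t for each investigated liquid. All readings are assumed.
ETA_WATER = 1.0e-3      # Pa*s, approximate water viscosity at room temperature
RHO_WATER = 998.0       # kg/m^3
T_WATER = 42.0          # s, assumed mean flow time of water

K = ETA_WATER / (RHO_WATER * T_WATER)

samples = {             # assumed glycerol solutions: (density kg/m^3, mean flow time s)
    "C = 10 %": (1022.0, 55.0),
    "C = 30 %": (1073.0, 95.0),
}
for name, (rho, t) in samples.items():
    eta = K * rho * t
    print(f"{name}: eta = {eta*1000:.2f} mPa*s")
```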

Numerical problems for independent solution:


1. When the blood flow rate is 25 liters per minute, calculate the drop in
pressure per centimeter length of the aorta. The aorta's radius is around 10⁻² m,
and the blood viscosity coefficient is 4·10⁻² poise.

2. Determine the reduction in blood pressure along a 30-cm artery with a


radius of 5 mm. Assume that the artery transports 8 liters of blood per
minute.

3. When the aorta's blood flow rate is 5 liters per minute, the blood velocity
in the capillaries is 0.33 mm/sec. Calculate the number of capillaries in the
circulatory system if the average diameter of a capillary is 0.008 mm.
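A minimal worked sketch for problem 3 above follows: the total capillary cross-section is obtained from the continuity relation Q = A·v, and the number of capillaries from dividing by the cross-section of a single capillary (CGS units).

```python
# A minimal worked sketch for numerical problem 3: estimate the number of
# capillaries from the total flow, capillary velocity and capillary diameter.
import math

Q = 5000.0 / 60.0            # aortic flow, cm^3/s (5 liters per minute)
v = 0.033                    # capillary blood velocity, cm/s
d = 8e-4                     # capillary diameter, cm (0.008 mm)

A_total = Q / v                          # combined capillary cross-section, cm^2
A_single = math.pi * (d / 2.0)**2        # cross-section of one capillary, cm^2
N = A_total / A_single
print(f"A_total = {A_total:.0f} cm^2, N ~ {N:.1e} capillaries")
```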

XII
BIOPHYSICS OF VISION. SPECIAL TECHNIQUES OF MICROSCOPY
AND POLARIZATION OF BIOLOGICAL OBJECTS.
Vision is our most important source of information about the external world.
It has been estimated that about 70% of a person’s sensory input is obtained
through the eye. The three components of vision are the stimulus, which is light;
the optical components of the eye, which image the light; and the nervous system,
which processes and interprets the visual images.
Nature of Light
Experiments conducted in the nineteenth century proved beyond a shadow of a
doubt that light possesses all of the features of wave motion. However, it was
demonstrated at the turn of the century that wave conceptions alone do not fully
explain the properties of light. Light and other electromagnetic radiation can
appear to be made up of little packets of energy (quanta) in some instances.
Photons are the units of energy that make up these packets. Each photon has a
fixed amount of energy E for a certain frequency f of radiation:
E= hf (12.1)
where h is Planck’s constant, equal to 6.62607015×10−34 J⋅s in SI units. Both of
these qualities of light must be considered in our examination of vision. All
phenomena related with the propagation of light through bulk matter are explained
by the wave characteristics, and the quantum nature of light must be invoked to
understand the effect of light on photoreceptors in the retina.
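As a small numerical illustration of Eq. (12.1), the sketch below computes the energy of a single photon of visible light; the 500 nm wavelength is an assumed, representative value.

```python
# A minimal sketch of Eq. (12.1): energy of a single photon of visible light.
PLANCK_H = 6.62607015e-34   # Planck's constant, J*s
C = 3.0e8                   # speed of light, m/s
EV = 1.602e-19              # joules per electron-volt

wavelength = 500e-9                     # assumed green light, m
f = C / wavelength                      # frequency, Hz
E = PLANCK_H * f                        # photon energy, Eq. 12.1
print(f"f = {f:.2e} Hz, E = {E:.2e} J = {E/EV:.2f} eV")
```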
Structure of the Eye
Figure 12.1 shows a depiction of the human eye. The eye is approximately spherical,
with a diameter of about 2.4 cm. The construction of all vertebrate eyes is essentially
the same, although the size varies. The cornea, a transparent part of the eyeball's outer
coating, allows light to enter the eye. The photosensitive retina, which covers the
rear surface of the eye, receives an inverted image of the light focussed by the lens
system of the eye. The light here causes nerve impulses, which send data to the
brain.
The curved surface of the cornea and the crystalline lens inside the eye work
together to focus light into a picture at the retina. The cornea's focusing power is
fixed. The crystalline lens' focus, on the other hand, may be adjusted, allowing the
eye to see objects from a variety of distances.

Figure 12.1. An illustration of the human eye.

The iris, which is located in front of the lens, regulates the size of the pupil, or
entrance aperture into the eye. The diameter of the aperture varies from 2 to 8 mm
depending on the intensity of the light. The eye's cavity is filled with two types of
fluid, both of which have a refractive index that is similar to that of water. The
aqueous humor is a watery fluid that fills the space between the lens and the
cornea in the front of the eye. The viscous vitreous humor fills the gap between
the lens and the retina.
The ciliary muscle, which may modify the thickness and curve of the lens,
controls the eye's focusing. Accommodation is the term for this focusing process.
The crystalline lens is rather flat when the ciliary muscle is relaxed, and the eye's
focusing strength is at its lowest. A parallel beam of light is focussed at the retina
in these conditions. The relaxed eye is focused to perceive far objects because light
from distant objects is approximately parallel. In this context, “distant” refers to
distances of 6 meters or more.
The ability to focus on closer objects necessitates a higher level of focusing
capability. As light from adjacent objects enters the eye, it is divergent, so it must
be concentrated more intensely to form a picture at the retina. The crystalline lens's
focusing capability, on the other hand, has a limit. A normal young adult's eye can
concentrate on things up to 15 cm away when the ciliary muscle is fully contracted.
Closer objects have a blurry appearance. The near point of the eye is the shortest
distance at which acute focus can be achieved.
The crystalline lens's focusing range declines with aging. A 10-year-old child's
near point is approximately 7 cm, but by the age of 40, the near point has shifted to
approximately 22 cm. Following that, the decline is quick. The near point shifts to
roughly 100 cm at the age of 60. Presbyopia is the term for a decline in the eye's
ability to accommodate as one gets older.
Defects in Vision
Myopia (nearsightedness), hyperopia (farsightedness), and astigmatism are
three common vision defects linked to the eye's focusing system. The first two of
these defects are best explained by considering how the eye handles parallel light.
The normal eye focuses parallel light onto the retina when it is relaxed (Fig. 12.2).
The lens system of a myopic eye focuses parallel light in front of the retina
(Fig. 12.3a). An elongated eyeball or an excessive curvature of the cornea are the
most common causes of this misfocusing. With hyperopia the situation is reversed (see
Fig. 12.4a): parallel light is focused behind the retina. Although the
hyperopic eye can see distant objects, its near point is farther away than normal. In
this respect, hyperopia resembles presbyopia. The two defects can be summarized as
follows: the myopic eye converges light too much, while the hyperopic eye converges
light insufficiently. Astigmatism is a vision defect caused by a nonspherical cornea.
Because an oval-shaped cornea is more strongly curved in one plane than in
the other, it cannot form sharp images of two perpendicular lines at the same
time: one of the lines is always out of focus, so the view is distorted.
Figure 12.2. Ray path in a normal eye

Figure 12.3. Ray path for myopia (a) and its correction (b)

Figure 12.4. Ray path for hyperopia (a) and its correction (b)
Lenses placed in front of the eye can correct all three of these defects. To
compensate for the excess refraction in the eye, myopia requires a diverging lens. A
converging lens, which increases the eye's focusing power, is used to correct
hyperopia. A cylindrical lens compensates for astigmatism's uneven corneal
curvature by focusing light along one axis but not the other.
Lens for Myopia
Let us assume that the farthest object a certain myopic eye can properly focus is 2
m from the eye. This is called the far point of the eye. Light from objects that are
further away is concentrated in front of the retina (Fig. 12.3a). The objective of the
corrective lens in this scenario is to make parallel light appear to come from the
eye's distant point (in this case, 2 m). The eye can construct images of objects all
the way to infinity with such a corrective lens. The focal length of the lens is
obtained by using the thin-lens equation

1/p + 1/q = 1/f     (12.2)
Here p is infinity, as this is the effective distance for sources of parallel light. The
desired location q for the virtual image is −200 cm. The focal length of the
diverging lens is, therefore,

1/f = 1/∞ + 1/(−200 cm),  or  f = −200 cm, corresponding to a focusing power of −0.5 diopters.
Lens for Presbyopia and Hyperopia
In presbyopia and hyperopia the eye cannot focus correctly on close objects because the
object lies closer than the eye's near point. The goal of the corrective lens is to make light
from close objects appear to come from the near point of the unaided eye. Assume that a particular
hyperopic eye has a near point of 150 cm. The ideal lens would allow the eye to
see objects at a distance of 25 cm. The focal length of the lens is again obtained
from equation (12.2), where p is the object distance at 25 cm and q is −150 cm,
which is the distance of the virtual image at the near point. The focal length f for
the converging lens is given by
1/f = 1/(25 cm) − 1/(150 cm),  or  f = 30 cm, corresponding to a focusing power of about +3.3 diopters.
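As a numerical cross-check of the two examples above, the short Python sketch below evaluates Eq. (12.2) for both corrective lenses. It is only an illustration, using the distances and sign convention given in the text.

```python
# A minimal sketch that reproduces the two corrective-lens calculations above
# with Eq. (12.2): 1/p + 1/q = 1/f.  Distances are in metres; a virtual image
# on the source side gets a negative q, as in the text.

def lens_power(p_m, q_m):
    """Return focusing power in diopters for object distance p and image distance q."""
    return 1.0 / p_m + 1.0 / q_m   # 1/f = 1/p + 1/q

# Myopia: parallel light (p -> infinity) must appear to come from the far point (q = -2 m).
myopia_power = lens_power(float("inf"), -2.0)    # -0.5 diopters
# Hyperopia: an object at 25 cm must appear to come from the near point (q = -1.5 m).
hyperopia_power = lens_power(0.25, -1.5)         # about +3.3 diopters

print(f"Myopia lens:    {myopia_power:+.2f} D")
print(f"Hyperopia lens: {hyperopia_power:+.2f} D")
```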
Extension of Vision
The eye's field of vision is limited. Because the images on the retina are too
small, details on distant objects cannot be seen. At a distance of 500 meters, the
retinal image of a 20-meter-high tree is only about 0.6 mm high. The leaves on this tree
are impossible to resolve with the naked eye. The eye's limited accommodation also
restricts the ability to observe small objects: because the eye cannot focus
light from objects closer than about 20 cm, its resolution is limited to roughly
100 μm.
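The retinal image sizes quoted here follow from simple similar triangles. The sketch below is a hedged illustration: it assumes an effective image distance of about 1.5 cm inside the eye, the value implied by the 0.6 mm tree image above, which the text does not state explicitly.

```python
# A hedged illustration: retinal image size from similar triangles,
# image_size / image_distance = object_size / object_distance.
# The effective image distance of ~1.5 cm is an assumption consistent with the tree example.

def retinal_image_size(object_size_m, object_distance_m, image_distance_m=0.015):
    return object_size_m * image_distance_m / object_distance_m

tree = retinal_image_size(20.0, 500.0)   # ~6e-4 m, i.e. ~0.6 mm, as quoted in the text
leaf = retinal_image_size(0.10, 500.0)   # ~3e-6 m, i.e. ~3 micrometres

print(f"Tree image: {tree*1e3:.2f} mm,  leaf image: {leaf*1e6:.1f} um")
```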
Two types of optical tools have been developed to broaden the range of
vision in the last 300 years: the telescope and the microscope. The telescope is
intended for the study of faraway things. The microscope is used to examine small
objects that are difficult to see with the naked eye. The magnifying properties of
lenses are used in both of these gadgets. The fiberscope, a third more contemporary
vision aid, uses total internal reflection to allow visualization of objects that are
ordinarily obscured from view.
Microscope
The object is magnified by a single lens in a basic microscope (Fig. 12.5). A
two-lens system compound microscope, as shown in Fig. 12.6, can produce better
results. A compound microscope, like a telescope, has an objective lens and an
eyepiece, but the microscope's objective has a short focal length. The eye sees the
final magnified picture I2 created by the eyepiece, which is a true image I1 of the
object. In the life sciences, the microscope is a crucial tool. Its creation in the
1600s marked the start of the cellular level study of life. The early microscopes
produced images that were greatly distorted, but after years of improvement, the
instrument was nearly improved to its theoretical optimum. The diffraction
properties of light, which limit resolution to about half the wavelength of light,
define the resolution of the best current microscopes. In other words, we can see
objects as small as half the wavelength of the illuminating light with a competent
modern microscope.
Figure 12.5. Magnification of the ordinary loupe.

Figure 12.6. Diagram of magnification of a compound microscope
An objective lens, ocular lens, lens tube, stage, and reflector are the major
components of a conventional biological microscope. An object placed on the stage is
magnified by the objective lens. When the instrument is focused, an enlarged image of
the object can be seen through the ocular lens.
The ability to resolve the size of microobjects determines how deeply the
microworld can be penetrated and studied. Large magnifications with high resolution are
feasible with modern optical microscopes. Optical microscopy has, however, reached the
limits of its capabilities due to phenomena generated by light's wave nature (diffraction,
interference). The inability to obtain an image of an object smaller than the wavelength of
the electromagnetic radiation is a fundamental restriction. A microscope's optical system
consists of a system of short-focus lenses, an objective and an eyepiece, which under typical
accommodation conditions produces a magnified, virtual, and inverted image seen by the eye.
The magnification of the microscope is its most important characteristic and is given by the formula

Γ = L·S / (f_objective · f_eyepiece)     (12.3)

where L is the optical tube length and S is the distance of best vision (about 25 cm).
The magnification of the microscope is numerically equal to the product of the
linear magnification of the objective and the angular magnification of the eyepiece:

Γ_microscope = Γ_objective · Γ_eyepiece     (12.4)
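A quick numerical illustration of Eq. (12.3) is given below; the focal lengths and tube length are assumed example values, not data from this handbook.

```python
# A small illustration of Eq. (12.3), magnification = L*S / (f_objective * f_eyepiece),
# where L is the optical tube length and S = 25 cm is the distance of best vision.
# All lengths in centimetres; the example values are assumptions.

def microscope_magnification(f_objective_cm, f_eyepiece_cm, tube_length_cm, s_cm=25.0):
    return (tube_length_cm * s_cm) / (f_objective_cm * f_eyepiece_cm)

# Example: a 0.4 cm objective, a 2.5 cm eyepiece and a 16 cm tube length give about 400x.
print(microscope_magnification(0.4, 2.5, 16.0))   # -> 400.0
```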
Figure 12.7. Principle that enables magnified observation with a biological microscope

With transmitted light, at very high magnifications point objects appear as fuzzy
discs encircled by diffraction rings, called Airy disks.
The resolving power of a microscope is defined as the ability to
discriminate between two closely spaced Airy disks (in other words, the ability
of the microscope to reveal adjacent structural detail as distinct and separate). The
capacity to resolve small details is hampered by diffraction effects. The
wavelength of light (λ), the refractive materials used to make the objective lens,
and the numerical aperture (NA) of the objective lens all influence the extent and
magnitude of the diffraction patterns. As a result, the diffraction limit is a finite
limit beyond which it is impossible to resolve individual points in the objective
field. The resolution Z can be expressed as follows, assuming that optical
aberrations in the entire optical set-up are negligible:
Z = λ / (2·NA)     (12.5)
Usually a wavelength of 550 nm is assumed, which corresponds to green light.
With air as the external medium, the highest practical NA is 0.95, and with oil, up
to 1.5. In practice the lowest value of Z obtainable with conventional lenses is
about 200 nm. A new type of lens using multiple scattering of light has allowed the
resolution to be pushed below 100 nm.
Since the numerical aperture is NA = n·sin U, the resolution can also be written as

Z = λ / (2·n·sin U)     (12.6)
A further improvement of the microscope is the use of an immersion lens. This is
a lens in which the space between the observed object and the front lens is
filled with a transparent liquid whose refractive index is close to that of glass (n = 1.45-1.65).
An immersion lens significantly increases the brightness of the image (because the
light travelling from the object to the lens passes through an optically homogeneous
medium and suffers no reflection losses). The numerical aperture NA = n·sin U of an
immersion lens (for a dry lens, NA = sin U) is marked on its barrel together with
its magnification.
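The diffraction limit of Eqs. (12.5)-(12.6) is easy to evaluate; the sketch below compares a dry objective with an oil-immersion objective (the NA values are illustrative assumptions).

```python
# A minimal numerical check of Eq. (12.5)/(12.6): Z = lambda / (2*NA), with NA = n*sin(U).
# The NA values (0.95 for a dry lens, 1.4 for a typical oil-immersion lens) are assumptions.

WAVELENGTH_NM = 550.0          # green light, as assumed in the text

def resolution_nm(numerical_aperture, wavelength_nm=WAVELENGTH_NM):
    return wavelength_nm / (2.0 * numerical_aperture)

print(f"Dry objective (NA=0.95): Z = {resolution_nm(0.95):.0f} nm")   # ~290 nm
print(f"Oil immersion (NA=1.40): Z = {resolution_nm(1.40):.0f} nm")   # ~196 nm
```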
There are a number of different types of microscopes and each of them solves
unique problems.
1. Stereo Microscope
2. Compound Microscope
3. Inverted Microscope
4. Metallurgical Microscope
5. Polarizing Microscope

Stereo Microscopes
Stereo microscopes are used to examine a wide
range of samples that can be held in your palm. A
stereo microscope produces a three-dimensional
image, or "stereo" image, with magnification
ranging from 10 to 40 times. Manufacturing,
quality control, coin collection, science, high
school dissection projects, and botany all employ
stereo microscopes. A stereo microscope can be
used to view a sample that does not allow light to
pass through it because it has both transmitted and
reflected illumination.
The following are samples often viewed under a
stereo microscope: coins, flowers, insects, plastic
or metal parts, printed circuit boards, fabric
weaves, frog anatomy, and wires.

Compound microscopes
A biological microscope is another name for a compound microscope. Compound
microscopes are used for histology and pathology in laboratories, schools,
wastewater treatment plants, and veterinary offices. Samples that will be seen
under a compound microscope must be flattened on a microscope slide using a
cover slip. Students frequently examine prepared slides under the microscope to
save time by obviating the need for slide preparation.
Blood cells, cheek cells, parasites, bacteria, algae, tissue, and thin portions of
organs are just a few of the items that may be viewed using a compound
microscope. Compound microscopes are used to examine samples that are invisible
to the naked eye. A compound microscope's magnification is often 40x, 100x,
400x, and occasionally 1000x. Microscopes with magnifications more than 1000x
should be avoided since they provide empty magnification with low resolution.
This image of mushroom spores was captured under a compound biological
microscope at 400x magnification.

Inverted Microscopes
Biological inverted microscopes and
metallurgical inverted microscopes are two types
of inverted microscopes.
Biological inverted microscopes typically have
magnifications of 40x, 100x, 200x, and 400x.
Living samples in a petri dish are
viewed with these biological inverted
microscopes. The objective lenses are housed
beneath the stage in an inverted microscope,
allowing the operator to position the petri dish on
a flat stage.
In-vitro fertilization, live cell imaging,
developmental biology, cell biology,
neuroscience, and microbiology all use inverted
microscopes. In research, inverted microscopes
are frequently used to examine and study tissues
and cells, particularly living cells.

Polarizing Microscopes
Polarizing microscopes study chemicals, rocks, and minerals using polarized light,
as well as transmitted and reflected illumination. On a regular basis, geologists,
chemists, and the pharmaceutical industry use polarizing microscopes.
Every polarizing microscope includes a polarizer and an analyzer. Only
light waves vibrating in particular planes are allowed to pass through the polarizer. The
amount and direction of the light that illuminates the sample are determined by the analyzer.
The polarizer confines the light to a single plane of vibration, which makes the
microscope ideal for viewing birefringent materials.
This is Vitamin C captured under a polarizing microscope at 200x magnification.
Light is an electromagnetic wave: oscillating electric and magnetic fields traveling
through space. A light wave's electric and magnetic vibrations are perpendicular to each
other: the electric field oscillates in one direction and the magnetic field in another,
but they are always mutually perpendicular. Thus we have an electric field in one plane, a
magnetic field perpendicular to it, and a direction of travel that is perpendicular to
both. The electric and magnetic vibrations can occur in a variety of planes.
A light wave that vibrates in more than one plane is called unpolarized light.
Unpolarized light sources include the light emitted by the sun, a lamp, and a tube
light. The direction of propagation is constant, but the planes in which the
vibrations occur change, as shown in Fig. 12.8.
Figure 12.8. Polarization of light.
The other type of wave is a polarized wave. Light waves that vibrate in a single
plane are known as polarized waves. Plane-polarized light is made up of waves that all
have the same direction of vibration. Plane-polarized light therefore vibrates
in only one plane, as shown in Fig. 12.8. Polarization is the process
of converting unpolarized light into polarized light.

Malus’ law states that the intensity of plane-polarized light that passes through an
analyzer varies as the square of the cosine of the angle between the plane of the
polarizer and the transmission axes of the analyzer.

I = I₀·cos²θ     (12.7)

where I is the intensity of the polarized light transmitted by the analyzer, I₀ is the
intensity of the plane-polarized beam incident on the analyzer (i.e. the beam leaving the
polarizer), and θ is the angle between the polarization planes of the polarizer and
the analyzer.
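A minimal numerical illustration of Malus' law is given below; the incident intensity and angles are arbitrary example values.

```python
# A small illustration of Malus' law, Eq. (12.7): I = I0 * cos^2(theta).
import math

def transmitted_intensity(i0, theta_deg):
    """Intensity of plane-polarized light after the analyzer (Malus' law)."""
    return i0 * math.cos(math.radians(theta_deg)) ** 2

i0 = 100.0                                  # arbitrary units
for angle in (0, 30, 60, 90):
    print(f"theta = {angle:2d} deg -> I = {transmitted_intensity(i0, angle):6.1f}")
# 0 deg passes everything; 90 deg (crossed polarizer and analyzer) passes nothing.
```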
Several birefringent (double-refracting) structures in the body can be visualized
using polarized light microscopy, including teeth, bone, striated muscle tissue,
neurons, spindles, and actomyosin fibers. By adding a dye, these structures can be
observed with great contrast; however, because these are living structures, such
staining results in cell death.
Polarized light microscopy therefore provides a non-invasive, high-contrast imaging
option for these tissues and cells. It produces high-contrast images without the need
for a contrast agent or dye, and it can be performed non-invasively.

Electron microscopy
This is a type of microscopy in which electron beams are utilized to make an
image of the object. Electron microscopes have a far higher magnification and, as a result,
a much higher resolution than light microscopes, allowing us to observe smaller specimens
in greater detail. The resolution can be increased because the wavelength of
electrons shortens as they travel faster, hence there is a direct relationship between
reducing wavelength and increasing resolution. Transmission and scanning
electron microscopes are the two types of electron microscopes used. TEM is a
technique that includes passing a high-voltage beam through a thin layer of
specimen and collecting data on the structure. SEM, on the other hand, creates
images by detecting secondary electrons emitted from the surface as a result of
excitation by the primary electron beam. The electron beam is scanned over the surface
in a raster pattern, and the detected signals are mapped to the beam position to build
up the image.
While TEM and SEM provide improved image quality and a wider range
of images, they also have drawbacks. Because an electron microscope is sensitive to
magnetic fields and requires a constant supply of cooling water running through the lens
system, it is unfortunately expensive to build and maintain.
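The statement that faster electrons have shorter wavelengths can be made quantitative with the de Broglie relation λ = h/√(2meV). This is standard physics rather than a formula from this handbook; the sketch below uses the non-relativistic form and standard SI constants.

```python
# A hedged sketch: de Broglie wavelength of an electron accelerated through a voltage V,
# lambda = h / sqrt(2*m*e*V) (non-relativistic; corrections matter above ~100 kV).
import math

H = 6.62607015e-34           # Planck's constant, J*s
M_E = 9.1093837e-31          # electron mass, kg
E_CHARGE = 1.602176634e-19   # elementary charge, C

def electron_wavelength_m(voltage_volts):
    return H / math.sqrt(2.0 * M_E * E_CHARGE * voltage_volts)

for kv in (10, 100):
    print(f"{kv:3d} kV -> lambda = {electron_wavelength_m(kv * 1e3) * 1e12:.1f} pm")
# Even at 10 kV the wavelength (~12 pm) is tens of thousands of times shorter than green light.
```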
An electron microscope takes the form of a tall, vertically mounted vacuum column. It
consists of the following elements:
1. Electron gun
 • The electron gun is a device that creates electrons by heating a tungsten filament.
2. Electromagnetic lenses
 • The electron beam is focused on the specimen by the condenser lens. The electrons
are formed into a thin, tight beam by a second condenser lens.
 • The electron beam exiting the specimen travels down the second set of magnetic
coils, known as the objective lens, which has a high power and produces the
intermediate magnified image; the final, further magnified image is produced by the
third set of magnetic coils, known as the projector (ocular) lenses.
 • Each of these lenses works as an image magnifier while retaining a high level of
detail and resolution.
3. Specimen holder
 • The specimen holder is a metal grid covered with an extremely thin film of carbon
or collodion.
4. Image viewing and recording system
 • The final image is projected on a fluorescent screen.
 • Below the fluorescent screen is a camera for recording the image.
Applications
• Electron microscopes are used to study the ultrastructure of microbes,
cells, big molecules, biopsy samples, metals, and crystals, among other
biological and inorganic objects.
• Electron microscopes are commonly used in industry for quality control
and failure analysis, and modern electron microscopes make electron
micrographs with the use of specialized digital cameras and frame grabbers.
• The electron microscope is responsible for much of the advancement of
microbiology. The study of microorganisms such as bacteria, viruses, and other
pathogens has greatly improved the treatment of disease.

Features
• Magnification is extremely high.
• Exceedingly high resolution
• The material is rarely deformed during preparation
• A greater depth of field can be investigated
• There are numerous applications

Limitations
• Living specimens cannot be observed.
• Because the electron beam's penetrating power is very low, the object must be
ultra-thin. Before examination, the specimen is dried and cut into ultra-thin
slices.
• The specimen must be entirely dry, because the EM operates in a vacuum.
• Expensive to construct and maintain.
• Requires training and experience on the part of the researcher.
• Artifacts in the image caused by specimen preparation.
• This is a huge, heavy microscope that is particularly sensitive to vibration and
external magnetic fields.
Special types of microscopy
- A stereoscopic binocular microscope is designed to obtain a three-
dimensional image of an object. It consists of two separate microscopic systems and
provides a modest magnification (up to 100×); it is used in the assembly of electronic
miniature components and in surgical operations.
- A luminescent microscope illuminates the sample with ultraviolet or blue
light. Absorbing this radiation, the sample emits visible luminescence light. Luminescent
microscopes (LM) are used in medicine for diagnosis.
- A phase-contrast microscope is used to obtain images of transparent living
objects (especially living cells). Using special devices, part of the light passing
through the microscope is phase-shifted by half the wavelength, thereby
achieving image contrast.
- An interference microscope - uses the phenomenon of interference. Each
beam entering the microscope bifurcates. One of the rays obtained is directed
through the observed particle, and the second past it (along the additional optical
branch of the microscope). In the ocular part of the microscope, both beams are
again connected and interfere with each other. This produces colored images that
provide very valuable information in the study of living objects.
Converging Lenses
A simple converging lens is shown in Fig. 12.9. This type of lens is called
a convex lens. Parallel rays of light passing through a convex lens converge at a
point called the principal focus of the lens. The distance of this point from the lens
is called the focal length f. Conversely, light from a point source at the focal point
emerges from the lens as a parallel beam. The focal length of the lens is
determined by the index of refraction of the lens material and the curvature of the
lens surfaces.
Figure 12.9. The convex lens illuminated (a) by parallel light, (b) by point source at the focus.

We adopt the following convention in discussing lenses.
1. Light travels from left to right.
2. The radius of curvature is positive if the curved surface encountered by the light
ray is convex; it is negative if the surface is concave.
It can be shown that for a thin lens the focal length is given by
1/f = (n − 1)·(1/R₁ − 1/R₂)     (12.8)
where R1 and R2 are the radii of curvature of the first and second surfaces, respectively
(Fig. 12.10). In Fig. 12.10, R2 is a negative number.
Figure 12.10. Radius of curvature defined for a lens.

Focal length is a measure of the converging power of the lens. The shorter the
focal length, the more powerful the lens. The focusing power of a lens is often
expressed in diopters defined as
focusing power (diopters) = 1 / f (meters)     (12.9)
If two thin lenses with focal lengths f1 and f2, respectively, are placed close
together, the focal length fT of the combination is
1/f_T = 1/f₁ + 1/f₂     (12.10)
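The sketch below ties Eqs. (12.8)-(12.10) together numerically; the refractive index and radii of curvature are arbitrary example values.

```python
# A brief sketch combining Eqs. (12.8)-(12.10): the lensmaker's equation, focusing
# power in diopters, and the power of two thin lenses in contact.

def lensmaker_focal_length_m(n, r1_m, r2_m):
    """1/f = (n - 1) * (1/R1 - 1/R2); signs follow the convention in the text."""
    return 1.0 / ((n - 1.0) * (1.0 / r1_m - 1.0 / r2_m))

def power_diopters(f_m):
    return 1.0 / f_m

# Biconvex lens, n = 1.5, |R1| = |R2| = 0.10 m (R2 is negative by convention).
f_single = lensmaker_focal_length_m(1.5, 0.10, -0.10)     # 0.10 m -> +10 D
# Two such lenses in contact: their powers simply add (Eq. 12.10).
combined_power = power_diopters(f_single) + power_diopters(f_single)   # +20 D
print(f"f = {f_single:.2f} m, single lens {power_diopters(f_single):.0f} D, pair {combined_power:.0f} D")
```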
Light from a point source located beyond the focal length of the lens is
converged to a point image on the other side of the lens (Fig. 12.11a). This type of
an image is called a real image because it can be seen on a screen placed at the
point of convergence. If the distance between the source of light and the lens is less
than the focal length, the rays do not converge. They appear to emanate from a
point on the source side of the lens. This apparent point of convergence is called a
virtual image (Fig. 12.11b).
Figure 12.11. Image formation by a convex lens: (a) real image, (b) virtual image.
For a thin lens, the relationship between the source and the image distances
from the lens is given by
1/p + 1/q = 1/f     (12.11)
Here p and q, respectively, are the source and the image distances from the lens.
By convention, q in this equation is taken as positive if the image is formed on the
side of the lens opposite to the source and negative if the image is formed on the
source side.
Light rays from a source very far from the lens are nearly parallel; therefore,
by definition we would expect them to be focused at the principal focal point of the
lens. This is confirmed by Eq. 12.11, which shows that as p becomes very large
(approaches infinity), q is equal to f.
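For a numerical feel for Eq. (12.11), the sketch below computes how the image distance changes as a point source is moved away from a lens; the focal length and object distances are example values, deliberately different from those in the problems at the end of the chapter.

```python
# A minimal numerical example of the thin-lens relation, Eq. (12.11):
# 1/p + 1/q = 1/f, so q = 1 / (1/f - 1/p).

def image_distance_m(f_m, p_m):
    return 1.0 / (1.0 / f_m - 1.0 / p_m)

f = 0.02                                   # assumed 2 cm focal length
q_near = image_distance_m(f, 1.0)          # object 1 m away
q_far = image_distance_m(f, float("inf"))  # parallel light: q = f
print(f"q(1 m) = {q_near*100:.3f} cm, q(inf) = {q_far*100:.3f} cm, "
      f"shift = {(q_near - q_far)*1000:.2f} mm")
```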
Diverging Lenses
An example of a diverging lens is the concave lens shown in Fig. 12.12. Parallel light
diverges after passing through a concave lens. The apparent source of origin for the
diverging rays is the focal point of the concave lens.
All the equations we have presented for the converging lens apply in this case
also, provided the sign conventions are obeyed. From Eq. 12.8, it follows that the
focal length of a diverging lens is always negative, and such a lens produces only
virtual images.

Figure 12.12. A diverging lens.
Numerical problems for independent solution:
1. Two lenses of the same shape are made of glasses with refractive indices n1
= 1.5 and n2 = 1.7. How are the focal lengths of these lenses different?

2. The optical power of a thin glass lens in air is D = 7 diopters. What is the
optical power of this lens immersed in water?

3. The microscope consists of an objective with a focal length 𝑓𝑜 = 0.2 cm and


an eyepiece with a focal length 𝑓𝑒𝑦𝑒𝑝𝑖𝑒𝑐𝑒 = 4 cm. The distance between the
objective and the eyepiece is L = 20.2 cm. Find the magnification given
by the microscope.

4. Compute the change in the position of the image formed by a lens with a
focal length of 1.5 cm as the light source is moved from its position at 6 m from
the lens to infinity.

5. Calculate the size of the retinal image of a 10-cm leaf from a distance of
500 m.
6. A third polaroid is placed between two crossed polaroids so that its principal
plane makes an angle of 30° with the principal plane of the first polaroid. How will the
intensity of a beam of natural light passing through such a device change?
Absorption is neglected.
XIII
Eye refraction and refractometric research methods in medicine. Refractive
indices of the eye and fluids. Introscopy
Geometric Optics
The properties of optical components such as mirrors and lenses can, in principle,
be derived entirely from the wave properties of light. However, because the wave
front must be tracked along every point on the optical component, such
comprehensive calculations are usually quite difficult. If the optical components
are substantially larger than the wavelength of light, the problem can be simplified.
The simplification consists of ignoring some of light's wave properties and treating
light as a ray traveling perpendicular to the wave front (Fig. 13.1). The ray of light travels in
a straight path in a homogeneous medium, changing direction only at the interface
between two media. This simplified approach is called geometric optics.
Figure 13.1. Light rays perpendicular to the wave front.
The speed of light depends on the medium in which it propagates. In
vacuum, light travels at a speed of 3 × 10⁸ m/s. In a material medium, the speed
of light is always less. The speed of light in a material is characterized by the index
of refraction (n), defined as

n = c / v     (13.1)

where c is the speed of light in vacuum and v is the speed in the material.
When light enters from one medium into another, its direction of propagation is
changed (see Fig. 13.2). This phenomenon is called refraction. The relationship
between the angle of incidence (θ1) and the angle of refraction (θ2) is given by

sin θ1 / sin θ2 = n2 / n1     (13.2)

The relationship in Eq. 13.2 is called Snell’s law. As shown in Fig. 13.2, some of
the light is also reflected. The angle of reflection is always equal to the angle of
incidence.
In Fig. 13.2, the angle of incidence θ1 for the entering light is shown to be greater
than the angle of refraction θ2. This implies that n2 is greater than n1, as would be
the case for light entering from air into glass. If, on the other hand, the light
originates in the medium of higher refractive index, as shown in Fig. 13.3, then the
angle of incidence θ1 is smaller than the angle of refraction θ2. At a specific value
of the angle θ1, called the critical angle (designated by the symbol θc), the light
emerges tangent to the surface, that is, θ2 = 90°. At this point, sin θ2 = 1 and,
therefore, sin θ1 = sin θc = n2/n1. Beyond this angle, that is, for θ1 > θc, light
originating in the medium of higher refractive index does not emerge from the
medium. At the interface, all the light is reflected back into the medium. This
phenomenon is called total internal reflection. For glass, the refractive index is typically
1.5, and the critical angle at the glass-air interface is given by sin θc = 1/1.5, or θc ≈ 42°.
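Equation (13.2) and the critical-angle condition are easy to evaluate numerically, as in the brief sketch below (refractive indices of 1.5 for glass and 1.0 for air, as in the text).

```python
# A small illustration of Snell's law, Eq. (13.2), and of the critical angle for
# total internal reflection, sin(theta_c) = n2/n1 (valid when n1 > n2).
import math

def refraction_angle_deg(theta1_deg, n1, n2):
    """Angle of refraction from Snell's law; returns None beyond the critical angle."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    return math.degrees(math.asin(s)) if s <= 1.0 else None   # None -> total internal reflection

def critical_angle_deg(n1, n2):
    return math.degrees(math.asin(n2 / n1))

print(f"Critical angle glass->air: {critical_angle_deg(1.5, 1.0):.1f} deg")     # ~41.8
print(f"30 deg in glass refracts to {refraction_angle_deg(30, 1.5, 1.0):.1f} deg in air")
print(f"50 deg in glass: {refraction_angle_deg(50, 1.5, 1.0)} (totally reflected)")
```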
Figure 13.2. Reflection and refraction of light.

Figure 13.3. Total internal reflection.
Glass and other transparent materials can be fashioned into lenses to change the
direction of light in a precise way. There are two types of lenses: converging lenses
and diverging lenses. The beams of light are brought together by a converging lens,
which changes the direction of light. A diverging lens, on the other hand, has the
opposite effect, spreading light rays apart.
We can calculate the size and shape of images created by optical components using
geometric optics, but we cannot predict the unavoidable blurring of images caused
by light's wave nature.
Refraction in the Eye

The ability of the eye to refract light is crucial to the vision process. Both the
cornea and the lens of the eye contribute to this refraction.
The light going through the cornea is the initial step in the vision process. Because
the cornea has a spherical surface, it serves as a converging lens, refracting light
rays as they pass through it. Because of the variations in refractive indices between
air (refractive index of roughly 1.00) and aqueous fluid (refractive index of 1.34),
the cornea provides the majority of the refractive power in the eye.
Refraction is the phenomenon that allows the eye, as well as cameras and other
lens systems, to produce images. Because the transition from air to cornea is the
largest change in index of refraction that light undergoes, the majority of the
refraction occurs at this first surface. The cornea accounts for around 80% of
refraction, while the inner crystalline lens accounts for 20%. While the inner lens
accounts for a smaller amount of the refraction, it is the sole source of the eye's
capacity to accommodate the focus for near viewing. The inner lens can modify the
total focal length of the eye by 7-8 percent in a typical eye. Common eye defects
are often called "refractive errors" and they can usually be corrected by relatively
simple compensating lenses.

Fiber Optics.
Fiber-optic devices are increasingly used in a variety of medical settings. Their
operation is based on a simple principle. If light moving through a substance with a high index of
refraction meets the boundary of a material with a lower refractive index at an angle greater
than the critical angle θc, it is completely reflected back into the material. As seen in Fig.
13.4, light can be limited to travel within a glass cylinder in this manner. Since the dawn
of optics, this occurrence has been well documented. Before the phenomena could be
extensively used, however, important improvements in materials technology were
required.
Figure 13.4. Light confined to travel inside a glass cylinder by total reflection.
Optical fiber technology, which was developed in the 1960s and 1970s,
allowed for the production of low-loss, thin, highly flexible glass fibers capable of
carrying light over vast distances. The diameter of a typical optical fiber is roughly
10 µm, and it is composed of high purity silica glass. A cladding is applied to the
fiber to maximize light trapping. Light may be carried along complex twisting
routes for several kilometers without considerable loss using such fibers.
Fiberscopes, often known as endoscopes, are the most basic of fiber-optic
medical instruments. Internal organs such as the stomach, heart, and bowels are
visualized and examined with them. A fiberscope is a flexible apparatus made up
of two bundles of optical fibers. Each bundle has roughly 10,000 fibers and is
about a millimeter in diameter. The bundles are thicker in some applications, up to
1.5 cm in diameter. The length of the bundles varies depending on their function,
ranging from 0.3 to 1.2 meters.
The two bundles are strung toward the organ to be studied by orifices, veins,
or arteries and delivered into the body as a unit. A high-intensity source of light,
such as a xenon arc lamp, is focussed into a single bundle that carries the light to
the organ to be investigated.
Each of the fibers in the other bundle collects light reflected from a small region of
the organ and carries it back to the observer. Here the light is focused into an image
which can be viewed by eye or displayed on a cathode-ray or some other type of
electronic screen. In the usual arrangement, the illuminating bundle surrounds the
light-collecting bundle. Most endoscopes now utilize attached miniature video
cameras to form images of the internal organs for display on TV monitors.

Figure 13.5. Endoscope principle.

By attaching fiber-optic devices to endoscopes and using remotely controlled small


instruments to do surgical operations without substantial incisions, the use of fiber-
optic devices has been greatly expanded. Fiber optics has recently been used to
assess pressure in arteries, bladders, and uteruses using optical sensors, as well as
laser surgery, in which a powerful laser light is delivered through one of the
bundles to selectively kill tissue.
Determining Concentrations of Solutions

The most common application of refractometry is to determine the concentration of


a solute in a solution. Refractometer-based methods for determining the amount of
sugar in fruits, juices, and syrups, the percentage of alcohol in beer or wine, the
salinity of water, and the concentration of antifreeze in radiator fluid, for example,
have been created. In several sectors, refractometer-based quality control systems
are used.

The fraction of dissolved solids in a solution is usually linearly (or nearly linearly)
related to the refractive index. The concentration of a solute can be calculated with
high accuracy by comparing the value of a solution's refractive index to that of a
standard curve. A "Brix" scale, which is calibrated to give the percentage of
sucrose dissolved in water, is found in several refractometers.
Refractometry
Refractometry is a technique for determining a substance's refractive index. The
index of refraction or refractive index (n) of a substance is defined as the ratio of
the speed of light in a vacuum to the speed of light in another substance.

140
An Abbe refractometer is used to make the measurement. The functioning
principle of the Abbe refractometer is based on the critical angle (Fig. 13.6). The
sample is sandwiched between two prisms, which illuminate and measure it. Light
enters the sample through the illuminating prism, is refracted at the critical angle at the
bottom surface of the measuring prism, and then the position of the border between
the bright and dark areas is measured using the telescope. The image is inverted by the
telescope, so the dark area appears at the bottom, even though it should be in the
upper half of the field of vision. Calculating the refractive index of the sample is
simple when the angle and the refractive index of the measuring prism are known.
The illuminating prism's surface is matted, allowing light to enter the sample
from all angles, including those practically parallel to the surface.

Figure 13.6. Abbe refractometer.

A substance's refractive index is a function of the wavelength of the light. If the light source is
not monochromatic (which it rarely is in simple devices), the light disperses and the
shadow boundary is not well defined, resulting in a hazy blue or red border instead
of a clean edge between white and black. In the vast majority of circumstances,
this means that measurements are either extremely inaccurate or impossible.
Abbe included two compensating Amici prisms in his design to counteract dispersion.
Not only can the position of the telescope be altered to measure the angle, but the
position of the Amici prisms can also be varied to correct dispersion. In practice,
the shadow boundary is then sharply defined and easy to locate.
Protocol for measuring the concentration of a solution using the
refractometer.
Materials:
 Abbe refractometer
 Distilled water (refractive index for water: n0 = 1.333)
 saline solutions: C1%, C2%, C3%, C4% and x%
 Cotton, filter paper
Procedure:
A. Measuring the refractive indices:
1. open the prism block and keep the lower prism horizontally
2. add two-three drops of distilled water on the surface of the lower prism so that a
thin film of solution is formed.
Attention! Please add the solution carefully to not scratch the surface of the prism.
3. close the prism block.
4. start the light source and adjust the mirror so that good illumination of the
visual field is obtained.
5. rotate the prism block (using the corresponding knob) until a dark region (lower)
and a bright region (upper) are obtained in the visual field.
6. adjust the dispersion using the compensator knob (no rainbow/ colored line
should be visible between two regions in the visual field).
7. bring the clear delimitation line localized between the bright and dark regions to
the center of the visual field (at the intersection of the rectangular wires).
8. read the refractive index with three decimals.
9. repeat the steps 1 to 8 for all saline solutions: C1%, C2%, C3%, C4% and x%
10. perform three measurements for each saline solution and write the results in a
table:
            C1 =       C2 =       C3 =       C4 =       Cx = ?
1
2
3
n̄
StDev
StErM

Observation:
Carefully clean the surfaces of the prisms with distilled water and cotton between
measurements.
11. reorganize the working place.
B. Calculating the unknown concentration:
1. calculate the average, standard deviation, standard error mean for each analyzed
solution.
2. draw a graph n = f(c) – refractive index function of concentration.
Attention!
 Start the refractive index scale with the value of refractive index of water (not from
zero)
 Select a larger unit for the refractive index scale, so that you will have enough
space to represent values with three decimals.
3. draw the regression line for the experimental points
4. identify the concentration for x% saline solution by using graph.
5. state one conclusion according to your results.
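For students who process the calibration data on a computer, the sketch below illustrates step B, fitting the regression line n = f(C) and reading off the unknown concentration; all numbers in it are hypothetical, not measured values.

```python
# A hedged sketch of step B: fitting the calibration line n = f(C) by least squares
# and reading the unknown concentration from it.  All data are made-up examples.
import numpy as np

conc_percent = np.array([2.0, 4.0, 6.0, 8.0])              # known saline concentrations
n_measured   = np.array([1.3365, 1.3400, 1.3435, 1.3470])  # hypothetical mean indices

slope, intercept = np.polyfit(conc_percent, n_measured, 1)  # regression line n = a*C + b

n_unknown = 1.3418                                          # hypothetical reading for the x% sample
c_unknown = (n_unknown - intercept) / slope                 # invert the calibration line
print(f"n = {slope:.5f}*C + {intercept:.4f};  Cx = {c_unknown:.1f} %")
```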

Numerical problems for independent solution:


1. Identify the liquid if, when light is reflected from its surface, the reflected
beam is completely polarized at an angle of refraction of 37°.

2. A ray of light passes from kerosene into the air. The limiting angle of total
internal reflection for this case is 42°23′. What is the speed of propagation of light
in kerosene?

3. The refractive index of the glass is 1.52. Find the limiting angles of total
internal reflection for the interfaces: a) glass-air, b) water-glass
XIV
REGISTRATION OF SUPERWEAK BIOLUMINESCENCE AND
BRIGHTNESS ENHANCEMENT OF THE X-RAY IMAGE.
PHOTOELECTRIC CONVERTERS, PHOTOELECTRONIC
AMPLIFIERS, ELECTRON-OPTICAL CONVERTERS.
X-rays are now widely employed in diagnostic and interventional medical
imaging all around the world. X-rays are frequently used in industry as well, for
example, in the field of non-destructive testing to check for very small flaws in
metal parts. A range of applications in medical imaging have been developed that
go well beyond traditional radiographic imaging. Fluoroscopy, for example,
provides for real-time X-ray sequences, which are frequently required in minimally
invasive procedures. At the same time, it was soon recognized that
X-rays can be damaging as well. The high-energy radiation delivered to the body
during an X-ray acquisition can cause ionization, which damages
deoxyribonucleic acid (DNA). In the vast majority of situations, the
DNA will be repaired by the cell. However, the repair process can sometimes fail,
resulting in uncontrolled cell division, which can lead to cancer. The dangers of
X-rays are now widely appreciated, and the patient dose delivered
during X-ray examinations has been greatly lowered in recent decades. An X-ray image of
a patient's thorax, or chest, is shown in Fig. 14.1.
Using a thin metal plate placed between the patient and the X-ray source, very low
energies of an X-ray spectrum are often eliminated prior to an interaction with the
patient in medical imaging. The reason is that the patient would absorb
practically all of the low-energy photons, resulting in a higher patient dose without
a significant gain in image quality. The metallic plate is also known as an X-ray
filter, which is not to be confused with image processing mathematical filters.
Although X-rays have the power to enter matter, the number of penetrating X-ray
photons varies depending on the material. Because of their ability to penetrate
human tissue, they can be used to obtain information on internal organs.
Figure 14.1. An example of an X-ray obtained from a patient's chest.

Higher or lower energy X-ray spectra are produced by varying tube voltages
between the cathode and the anode. When X-rays pass through matter in the
energy range utilized for medical imaging, there are three types of relevant
interactions that can occur:
 interaction with atomic electrons,
 interaction with nucleons,
 interaction with electric fields associated with atomic electrons and atomic
nuclei.
Consequently, the X-ray photons either experience a complete absorption, elastic
scattering or inelastic scattering.
The interaction employed in medical imaging is a drop in radiation intensity, which
is nothing more than a decrease in the number of photons arriving at the detector.
Attenuation is the term used to describe this process. A change in photon count,
photon direction, or photon energy are all examples of physical factors that
contribute to attenuation. All of these effects have one thing in common: they're all
based on interactions between single photons and the material they're passing
through, and the attenuation they cause is very energy-dependent.
According to the Beer–Lambert law, when a monochromatic X-ray beam
penetrates a homogeneous object with absorption coefficient μ, the measured intensity I is linked
to the intersection length x of the ray with the object:

I = I₀ · e^(−μx)     (14.1)
Here, I0 is the X-ray intensity at the source.
This empirical relationship, also known as the Beer–Lambert–Bouguer law,
connects light absorption to the properties of the substance through which the light is
passing:

I = I₀ · e^(−kCl)     (14.2)

The law states that the absorbance of light by a substance is proportional to the
product of the substance's absorption coefficient, its concentration C, and the distance the
light travels through the material (i.e. the path length) l (Fig. 14.2).
Figure 14.2. The light travels through the colored solution.
The Beer–Lambert law can also be formulated as follows: the optical density of the
sample is directly proportional to the concentration of material in the sample and
the length of the light path:

lg(I₀ / I) = D = εCl     (14.3)

where ε is called the molar absorption coefficient.
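A brief numerical illustration of Eqs. (14.2)-(14.3) follows; the molar absorption coefficient, concentration, path length and their units are assumed example values.

```python
# A brief numerical illustration of Eqs. (14.2)-(14.3): optical density and
# transmitted intensity of a colored solution, using the base-10 form D = epsilon*C*l.
def transmitted_fraction(epsilon, concentration, path_length):
    """Return (I/I0, D) from the Beer-Lambert law with I/I0 = 10**(-D)."""
    optical_density = epsilon * concentration * path_length
    return 10.0 ** (-optical_density), optical_density

# Example: epsilon = 0.02 L/(mmol*cm), C = 25 mmol/L, l = 2 cm  ->  D = 1.0
fraction, density = transmitted_fraction(0.02, 25.0, 2.0)
print(f"D = {density:.2f}, transmitted I/I0 = {fraction*100:.1f} %")   # D = 1 -> 10 %
```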

Photoelectric Effect
Einstein first described the photoelectric effect in the context of establishing the
quantized character of light. It occurs when the incident X-ray photon energy is
greater than the binding energy of an electron in the target material atom. To
liberate an electron from an inner-shell, the incident X-ray photon gives up all of
its energy. Photoelectron is the name for the expelled electron. After that, the
incident photon vanishes. The photo-electric process frequently leaves a vacancy in
the inner shell of the atom, which was previously occupied by the ejected electron.
As a result, an outer-shell electron fills the “hole” created in the inner shell.
Characteristic radiation is emitted because the outer-shell electron was in a higher energy
state. The photoelectric effect thus produces a positive ion, a photoelectron, and a
characteristic radiation photon. The binding energy of the K-shell
electrons in tissue-like materials is relatively low. As a result, the photoelectron
carries off nearly all of the energy of the X-ray photon.
When electromagnetic radiation, such as light, strikes a material, it causes electrons
to be emitted. Electrons emitted in this way are called photoelectrons.
The experimental results reveal that electrons are only dislodged when the light
frequency surpasses a particular threshold, regardless of the intensity or duration of
exposure. Albert Einstein proposed that a beam of light is not a wave propagating
through space but a collection of discrete wave packets known as photons, because a
low-frequency beam at a high intensity could not build up the energy required to
produce photoelectrons in the way it could if light's energy came from a continuous wave.

Figure 14.3. The emission of electrons from a metal plate caused by light quanta (photons).

In 1905, Einstein proposed a theory of the photoelectric effect based on Max
Planck's idea that light is made up of small energy packets known as photons or
light quanta. Each packet carries an energy hν that is proportional to the frequency of
the electromagnetic wave it corresponds to. The proportionality constant h has been
named the Planck constant. The maximum kinetic energy Kmax of the
electrons that were delivered this much energy before being removed from their
atomic binding is

Kmax = hν − W     (14.4)

where W is the smallest amount of energy required to remove an electron from the
material's surface. It is known as the work function of the surface. If the work function is
written in the form

W = hν₀     (14.5)

the formula for the maximum kinetic energy of the ejected electrons becomes

Kmax = h(ν − ν₀)     (14.6)
The photoelectric effect requires a positive kinetic energy, that is, ν > ν₀. The
frequency ν₀ is the material's threshold frequency. Above that frequency, the maximum
kinetic energy of the photoelectrons, and hence the stopping voltage measured in the
experiment,

V₀ = (h/e)(ν − ν₀)     (14.7)

rise linearly with the frequency and do not depend on the number of photons or the
intensity of the impinging monochromatic light.
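The photoelectric relations (14.4)-(14.7) can be evaluated directly, as in the sketch below; the work function (roughly that of sodium) and the wavelength are example values, not data from the handbook.

```python
# A small numerical illustration of Eqs. (14.4)-(14.7).
H = 6.62607015e-34           # Planck's constant, J*s
E_CHARGE = 1.602176634e-19   # elementary charge, C
C_LIGHT = 3.0e8              # speed of light, m/s

def photoelectric(wavelength_nm, work_function_eV):
    frequency = C_LIGHT / (wavelength_nm * 1e-9)
    k_max_eV = (H * frequency - work_function_eV * E_CHARGE) / E_CHARGE   # Eq. (14.4)
    stopping_voltage = k_max_eV                                           # Eq. (14.7): V0 = Kmax/e
    return k_max_eV, stopping_voltage

k, v0 = photoelectric(400.0, 2.3)     # violet light on a sodium-like surface (assumed values)
print(f"Kmax = {k:.2f} eV, stopping voltage = {v0:.2f} V")    # about 0.8 eV / 0.8 V
```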

Photomultipliers
These are, in effect, photocells with numerous stages of amplification. The
electron generated at the photocathode is propelled by a voltage drop towards an
electrode (the first dynode), which produces a shower of secondary electrons upon
collision (see Fig. 14.4). The process is repeated through as many as 9 dynodes (or
stages) until the anode is reached. The original photocathode current Ic is thereby
amplified to Ic·Eⁿ, where E is the secondary electron emission coefficient and n is
the number of dynodes. In most cases, the photocathode's quantum efficiency does
not exceed 40%.
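The gain Ic·Eⁿ quoted above is easy to evaluate; in the sketch below the secondary-emission coefficient and the number of dynodes are assumed example values.

```python
# A quick illustration of the photomultiplier gain Ic * E**n mentioned above.
def pmt_output_current(cathode_current_A, secondary_emission_coeff, n_dynodes):
    return cathode_current_A * secondary_emission_coeff ** n_dynodes

gain = pmt_output_current(1.0, 4.0, 9)        # gain factor E**n for Ic = 1 (assumed E = 4)
print(f"Gain E^n = {gain:.2e}")               # 4**9 ~ 2.6e5
print(f"Output for Ic = 1 pA: {pmt_output_current(1e-12, 4.0, 9):.2e} A")
```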

Figure 14.4. Photomultiplier tube principle.

Image Intensifiers
X-ray image intensifiers (X-ray image intensifiers) are vacuum tubes that
transform X-rays into visible light, i.e. a picture. Figure 14.5 depicts the process's
schematic principle. The incoming X-ray photons are first converted to light
photons using an input phosphor, which is a phosphorus substance. Using the
photoelectric effect within a photocathode, the generated light is further converted
to electrons. Using an electron optic system, these electrons are then accelerated
and focussed on the output phosphor. The electrons are transformed back to visible
light in the output phosphor, which can then be captured by film material or
television camera tubes.

Figure 14.5. The X-ray image intensifier is depicted in a schematic figure. First, X-rays
are turned to light, which is then converted to electrons. Electrons are accelerated towards a
fluorescent screen, which converts them to light, resulting in an image.

Until the development of image intensifiers in the late 1940s, fluoroscopic detection
systems consisted of a single phosphor screen in which X-rays were directly converted
to light. The mismatch between the huge number of required X-ray
quanta and the small number of emerging visible-light quanta, however,
resulted in extremely dark images and excessive radiation exposure. As a result,
the radiologists had to evaluate the photos in dim lighting and after a period of dark
adaptation. The most significant benefit of image intensifier systems is that the
brightness of the output image may now be controlled by the degree of acceleration
provided by the electron optics.
The widespread use of X-ray machines, as well as the desire to limit the radiation
exposure of the examination, posed new requirements for X-ray units. Conventional X-ray
exposure and operating-room X-ray investigations are associated with high radiation
loads on both the doctor and the patient. Because of the faint glow of the screen
during fluoroscopy, X-ray filming and X-ray television transmission at low doses of
radiation are not possible.
These issues were resolved with the introduction of X-ray image brightness amplifiers,
which allow transillumination in lighted rooms with a modest X-ray beam intensity.
The radiation burden on the patient is reduced in this situation, as is the scattered radiation.
Even with extensive X-ray investigations, the radiation risk does not dramatically
increase. The image is sharper when viewed with the image intensifier tube,
so the examination time is decreased. In many situations a radiograph can be replaced
by direct fluoroscopic viewing. The use of X-ray filming and television transmission is made
possible by the screen's high brightness.
Characteristics of X-ray image intensifiers. Low-intensity X-rays that have passed
through the object under investigation are amplified and converted into visible radiation
by an image converter.
The structure of the X-ray image brightness amplifier.
The device consists of two blocks: a unit for enhancing the brightness of the
X-ray image, containing an optical system and an electron-optical image converter, and a
high-voltage power supply unit for the image converter. These two blocks are
considered separately.
1. focus of the x-ray tube;
2. x-ray beam;
3. translucent object;
4. electron-optical converter;
5. photo cathode;
6. electrode for sharpening the image;
7. electron beam;
8. anode;
9. screen;
10. a beam of visible light;
11. a rotating prism;
12. button for changing the path of
light rays;
13. camera;
14. a prism that rotates the image by 180°;
15. viewing magnifier;
16. viewing position
Figure 14.6. X-ray image amplifier circuit

X-rays passing through the patient fall on the photocathode of the image converter.
Under the influence of X-rays, electrons come out of the photocathode, which are
accelerated and focused by an electric field. When electrons reach a high speed,
they fall onto a fluorescent screen, cause it to glow with visible light, and an image
of a translucent object appears on it. This image is projected using a rotating prism
onto the image receiver (periscope device of a television or movie camera). The
image of the object appearing on a fluorescent screen with a diameter of 20 - 25
mm, with the help of an eyepiece, is increased to its true size.

Radiography
The practice of obtaining two-dimensional projection images by exposing an
anatomy of interest to X-rays and measuring the attenuation they experience as
they travel through the object is known as radiography. It's a widespread type of X-
ray imaging that's employed in clinics all over the world.
The assessment of fractures and alterations in the skeletal system is the principal
application area. The high attenuation coefficient of bones in comparison to
surrounding tissue provides good contrast and allows for fracture identification and
classification. Radiography can also be used to detect changes in the consistency or
density of a bone, such as in the case of osteoporosis or bone cancer. Two X-ray
images of an arm with ulna and radius fractures are presented on the left in
Fig. 14.7. The figure also includes a color photograph of the arm following intervention,
as well as two more X-ray images of the treated arm, which show that the bones
have been internally fixed with metal plates.
Figure 14.7. X-ray image of a broken arm before and after surgery.

Fluoroscopy

The acquisition of a single or limited number of X-ray projection images for a
specific view is known as conventional radiography. Fluoroscopy, in contrast, provides
continuous, real-time sequences of X-ray images.
Fluoroscopy is especially important in minimally invasive procedures, when


catheters, endoscopes, and other equipment must be directed and operated without
having direct visual contact with the area where the intervention is being
performed. It is also the most important technology for using contrast agents to
visualize vessels such as arteries and veins.
Angiography

Angiography is the imaging of arteries (venography is the imaging of veins) to


determine their shape, size, lumen, and flow rate. The attenuation properties of
vessels are usually similar to those of the surrounding tissue, making X-ray
imaging difficult and yielding low contrast.

A contrast agent is frequently injected into the blood circulation to improve image
quality and contrast. A contrast agent is a liquid that has a higher attenuation
coefficient than typical soft tissue. Iodine and barium are common
contrast media, with the former being utilized for intravascular and the latter for
gastrointestinal studies.
Spectroscopy
Atoms and molecules have their own absorption and emission spectra, which
are unique to each species. They can be used to detect atoms and molecules in a
variety of substances. Spectroscopic techniques were first utilized in atom and
molecule research, but they were quickly embraced in a wide range of fields,
including the life sciences.
Spectroscopy is used in biochemistry to identify the results of complex
chemical reactions. Spectroscopy is commonly used in medicine to determine the
concentration of specific atoms and molecules in the body. A spectroscopic
analysis of urine, for example, can be used to assess the body's mercury level. The
level of blood sugar is determined by first causing a chemical reaction in the blood
sample, which produces a colored product. Absorption spectroscopy is used to
determine the concentration of this colored product, which is proportional to the
blood sugar level.
The fundamental principles of spectroscopy are straightforward. In emission
spectroscopy, the sample under investigation is excited by an electric current or a
flame, and the emitted light is then analyzed. In absorption spectroscopy, the
material is placed in the path of a white-light beam. The
missing wavelengths that identify the components in the substance are revealed by
examining the transmitted light. Both the absorption and emission spectra can be
used to determine the concentration of various components in a substance. The
intensity of emitted light in the spectrum is proportional to the number of atoms or
molecules in the given species in the case of emission. The amount of absorption
can be related to the concentration in absorption spectroscopy. A spectrometer is a
device that analyzes spectra. The intensity of light is measured as a function of
wavelength with this apparatus.
In its most basic form, a spectrometer consists of a focusing device, a prism,
and a light detector. A parallel beam of light is formed by the focusing device and
falls on the prism. The prism, which can be adjusted, divides the beam into its
individual wavelengths. The fanned-out spectrum can be photographed and
identified at this point. Typically, however, only a tiny portion of the spectrum is
recognized at a time. The small exit slit does this by intercepting only a fraction of
the spectrum. The entire spectrum is swept progressively past the slit as the prism
is turned. The wavelength impinging on the slit is used to calibrate the position of
the prism. A photodetector detects light passing through the slit and generates an
electrical output proportional to the light intensity. A chart recorder can display the
signal's strength as a function of wavelength.
Spectrometers used in everyday clinical practice are automated and can be
operated by anyone with little or no medical training. Identification and
interpretation of spectra produced by less well-known compounds, on the other
hand, necessitates extensive training and experience. Such spectra provide
information on the molecular structure in addition to identifying the molecule.
Colorimetry is a technique for determining the wavelength and intensity of
electromagnetic radiation in the visible range. It's widely used for identifying and
determining the quantities of light-absorbing compounds. Two fundamental laws
are used: Lambert's law, first formulated by the French scientist Pierre
Bouguer, which relates the amount of light absorbed to the distance it travels through
an absorbing medium; and Beer's law, which relates the amount of light absorbed
to the concentration of the absorbing substance. The two laws can be expressed
together by the equation

log (I₀/I) = kcd     (14.8)

where I₀ is the initial intensity, I is the light intensity after passing through the
sample, c is the concentration of the test substance, d is the thickness of the absorbing
layer, and k, the molar absorption coefficient, is a constant that depends on the nature
of the absorbing substance and on the wavelength. The transmittance T is commonly stated
as a percentage of the ratio I/I₀, which is the proportion of the incident light that
passes through unabsorbed. The optical density, log(I₀/I), is proportional to
the concentration of absorbing substances.
Working of Colorimeter
1) Step 1: It is necessary to calibrate the colorimeter before beginning the
experiment. It's done with the help of standard solutions containing the known
solute concentration to be determined. Fill the cuvettes with standard solutions and
set them in the colorimeter's cuvette holder. Set the zero value for the blank
solution using the proper wavelength filter based on the blank solution by adjusting
the knobs.
2) Step 2: A light ray of the wavelength selected for the assay is directed toward the test solution. The light passes through a succession of lenses and filters: the lenses guide the colored light, and the filter splits the beam into its different wavelengths, allowing only the required wavelength to pass through and reach the cuvette containing the standard test solution.
3) Step 3: When the light beam reaches the cuvette, it is partly transmitted, partly reflected, and partly absorbed by the solution. When the transmitted ray reaches the photodetector, the photodetector measures its intensity, converts it into an electrical signal, and sends the signal to the galvanometer.
4) Step 4: The galvanometer measures electrical signals, which are shown in digital
form.
5) Step 5: The substance concentration in the test solution is determined from the measured absorbance. The relation between the absorbance A, the concentration c (expressed in mol/m3) and the path length l (expressed in m) is given by the Beer-Lambert law.

Figure 14.8. Principle of photocolorimeter.
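In practice, the unknown concentration in a measurement like Task №2 below is read off a calibration graph. A minimal numerical sketch of the same idea is given here in Python; the transmittance values of the standards and of the unknown sample are invented purely for illustration.

    import math

    # Assumed calibration data (illustrative only, not real laboratory values)
    C_std = [2.0, 4.0, 6.0, 8.0, 10.0]        # standard concentrations, mmol/L
    T_std = [0.71, 0.50, 0.35, 0.25, 0.18]    # measured transmittances I/I0

    # Convert each transmittance to an optical density D = -log10(T)
    D_std = [-math.log10(t) for t in T_std]

    # Least-squares slope of D = a*C (a straight line through the origin,
    # as the Beer-Lambert law predicts for a fixed cuvette length)
    a = sum(c * d for c, d in zip(C_std, D_std)) / sum(c * c for c in C_std)

    # Unknown sample: measured transmittance -> concentration
    T_x = 0.42
    C_x = -math.log10(T_x) / a

    print(f"Calibration slope a = {a:.4f} L/mmol")
    print(f"Unknown concentration C_x = {C_x:.1f} mmol/L")   # about 5 mmol/L

This mirrors the graphical procedure: plotting the calibration points and reading the unknown concentration off the fitted line.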

Task №1. Determination of the dependence of the transmission coefficient on the thickness of the material.
1. Enter the results of the experiment in Table 1.
Table 1.

Cuvette length L, mm          l1 =          l2 =          l3 =          l4 = 50

The transmission
coefficient T (%)

2. Using the results, plot the dependence T = f(l).


3. Make a conclusion.

Task №2. Determination of the concentration of the colored solution with a photocolorimeter.

1. Enter the results of the experiment in Table 2.

Table 2.
Concentration          С1 =          С2 =          С3 =          С4 =          С5 =          X

The transmission
coefficient T (%)

2. Using the results, construct the graph T = f(C).
3. Determine the unknown concentration from the graph.
4. Make a conclusion.

Numerical problems for independent solution:


1. The transmittances for the three different solutions are 10%, 1% and 0.1%.
Determine the optical densities of these solutions.

2. After passing through a colored solution with a concentration of 30 mmol / L


in a three-centimeter cuvette, light is absorbed by 73%. Find the molar absorption
coefficient for a given substance.

3. One third of the incident light flux passes through a layer of aqueous solution with a thickness of 4.2 cm. Determine the concentration of this solution if the specific absorption index is 0.01 cm⁻¹·g⁻¹·mol. Assume that 10% of the light flux is reflected from the surface of the solution.

4. A 4% solution of the substance in a transparent solvent attenuates the light


intensity at a depth of 20 mm by 2 times. How many times is the light beam
attenuated at a depth of 30 mm in an 8% solution of the same substance?

XV
RADIOACTIVITY. X-RAY AND DOSIMETRY.
BIOLOGICAL EFFECTS AND MECHANISMS OF ACTION OF
RADIATION.
Despite the fact that all atoms of a given element have the same number of
protons in their nucleus, the number of neutrons might differ. Isotopes are atoms
with the same number of protons but differing numbers of neutrons. The nuclei of
oxygen atoms, for example, all have eight protons, but the number of neutrons in
the nucleus can be eight, nine, or ten. These are the isotopes of oxygen. They are designated ¹⁶₈O, ¹⁷₈O, and ¹⁸₈O. In this nuclear notation the number of protons in the nucleus is written as a subscript to the chemical symbol of the element, and the superscript is the sum of the numbers of protons and neutrons. The stability of the nucleus often depends on the number of neutrons.
Most naturally occurring atoms have stable nuclei: left alone, they do not change. Many unstable nuclei, on the other hand, undergo changes that result in the emission of energetic radiation.
The emissions from these radioactive nuclei are divided into three categories: (1) alpha (α) particles, which are high-speed helium nuclei consisting of two protons and two neutrons; (2) beta (β) particles, which are very high-speed electrons; and (3) gamma (γ) rays, which are high-energy photons.
A given element's radioactive nucleus does not produce all three radiations
at the same time. Alpha particles are emitted by some nuclei, whereas beta
particles are emitted by others, and gamma rays may accompany either process.
The transition of the nucleus from one element to another is linked to
radioactivity. When radium produces an alpha particle, for example, the nucleus is
converted into the element radon. Most physics textbooks go through the specifics
of the procedure.
A radioactive nucleus' decay or transmutation is a random process. Some
nuclei decay more quickly than others. When dealing with a large number of
radioactive nuclei, however, the rules of probability can be used to properly
forecast the aggregate decay rate. The half-life, which is the time it takes for half of
the initial nuclei to undergo transformation, characterizes this decay rate.
The half-lives of radioactive materials vary enormously. Some nuclei decay extremely quickly, with half-lives as short as a few microseconds; others have half-lives of thousands of years. Only the very long-lived radioactive elements occur naturally in the Earth's crust. One example is the uranium isotope ²³⁸₉₂U, which has a half-life of 4.51 × 10⁹ years.

By bombarding some stable elements with high-energy particles in accelerators, short-lived radioactive isotopes can be created. The nucleus of naturally occurring phosphorus, for example (³¹₁₅P), contains 15 protons and 16 neutrons. By bombarding sulfur with neutrons, the radioactive phosphorus isotope ³²₁₅P, with 17 neutrons, can be created. The reaction is

³²₁₆S + neutron → ³²₁₅P + proton

The half-life of this radioactive phosphorus is 14.3 days. In a similar way, radioactive isotopes of other elements can be produced. Many of these isotopes have proven to be very useful in biological and medical research.
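The half-life leads directly to the exponential decay law N(t) = N0·(1/2)^(t/T½). The short Python fragment below uses the 14.3-day half-life of ³²P quoted above; the initial number of nuclei is an arbitrary assumption chosen only to illustrate the calculation.

    # Radioactive decay: N(t) = N0 * (1/2)**(t / T_half)
    T_half = 14.3          # half-life of phosphorus-32, days (from the text)
    N0 = 1.0e12            # assumed initial number of radioactive nuclei

    def remaining(t_days):
        """Number of undecayed nuclei after t_days."""
        return N0 * 0.5 ** (t_days / T_half)

    for t in (14.3, 28.6, 60.0):
        frac = remaining(t) / N0
        print(f"after {t:5.1f} days: {frac * 100:5.1f} % of the nuclei remain")

After one half-life exactly 50 % of the nuclei remain, after two half-lives 25 %, and after about two months only a few per cent of the original ³²P is left.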
X-rays
Wilhelm Conrad Roentgen first announced the discovery of X-rays in 1895.
He discovered that when high-energy electrons collided with a medium like glass,
the material emitted radiation that penetrated light-opaque materials. He named this type of radiation X-rays. It was later discovered that X-rays are short-wavelength electromagnetic radiation emitted by highly excited atoms.
X-rays were shown by Roentgen to be capable of exposing film and producing
images of objects in opaque containers. If the container transmits X-rays more
readily than the object inside, such images are feasible. The shadow formed by the
object is visible on a film exposed by X-rays.
Two French physicians, Oudin and Barthelemy, obtained X-rays of bones in
a hand within three weeks of Roentgen's announcement. X-rays have since become
one of medicine's most significant diagnostic tools. Internal bodily organs that are
transparent to X-rays can now be viewed using current technology. This is
accomplished by injecting a fluid that is opaque to X-rays into the organ. The
organ's walls are then vividly visible due to the contrast.
X-rays have also been useful in determining the structure of biologically
significant substances. Crystallography is the method adopted in this case. X-rays
have a wavelength of around 10⁻¹⁰ m, which is about the same as the distance
between atoms in a molecule or crystal. When an X-ray beam passes through a
crystal, the transmitted rays generate a diffraction pattern, which provides
information on the crystal's structure and composition. The diffraction pattern is
made up of high and low X-ray intensity zones that appear as dots of varied
brightness when photographed (Fig. 15.1).

Figure 15.1. Arrangement for detecting diffraction of X-rays by a crystal.

Molecules that can be formed into a regular periodic crystalline array are the
most successful for diffraction research. Many biological compounds can be
crystallized if the right circumstances are met. The diffraction pattern, however, is
not a unique, unambiguous representation of the molecules in the crystal. The
pattern is a representation of the arranged molecules' collective effect on the X-
rays that travel through the crystal. The structure of each individual molecule must
be deduced from the diffraction pattern's indirect evidence.
When the crystal has a simple structure, such as sodium chloride, the X-ray
diffraction pattern is equally simple and straightforward. Diffraction patterns
produced by complicated crystals, such as those made from organic molecules, are
extremely intricate. Even in this scenario, however, some information about the
structure of the molecules that make up the crystal can be obtained. Diffraction
patterns must be created from thousands of different angles to resolve the
three-dimensional characteristics of the molecules. With the help of a computer, the
patterns are then examined. These types of investigations were crucial in
determining the structure of penicillin, vitamin B12, DNA, and a variety of other
biologically significant compounds.
X-rays are produced by extracting energy from electrons and turning it into
photons of sufficient energy. The x-ray tube is where this energy transfer takes
place. Adjusting the electrical quantities (kV, mA) and the exposure time, s, applied to the tube alters the quantity (exposure) and quality (spectrum) of the x-radiation produced.
An X-ray tube is an energy converter. It takes electrical energy and turns it into two different types of energy: x-rays and heat. The heat is an unwanted by-product. X-ray tubes are designed and built to produce as much x-radiation as possible while dissipating heat as quickly as possible.
An X-ray tube is a relatively simple electrical device that normally has two
main components: a cathode and an anode. The electrons lose energy when the
electrical current passes through the tube from cathode to anode, resulting in the
creation of X-radiation. Below is a cross-sectional picture of a standard X-ray tube.

Figure 15.2. X-ray tube.

The anode is the component that generates the x-radiation. It is a reasonably large
piece of metal that connects to the electrical circuit's positive side. The anode has
two purposes: (1) to convert electronic energy into x-rays, and (2) to remove the
heat generated during the process. The anode material is chosen to enhance these functions. Most anodes are shaped as beveled disks and attached to the shaft of an electric motor that rotates them at relatively high speed during x-ray production. Rotation of the anode is used to dissipate the heat.

The cathode's main job is to expel electrons from the electrical circuit and
concentrate them into a narrow beam aimed at the anode. In a typical cathode, a small coil of wire (the filament) is recessed within a cup-shaped section. Under ordinary conditions, electrons in an electrical circuit cannot leave the conductor material and escape into free space; they can, however, if given enough energy. Thermal energy (heat) is used to eject electrons from the cathode in a process known as thermionic emission. The filament of the cathode is heated by passing a current through it, just as a light-bulb filament is heated; this heating current is distinct from the current passing through the x-ray tube, which generates the x-rays. During tube operation the cathode is heated to a glowing temperature, and the heat energy expels some of the electrons from the cathode.

When electrons are emitted from the cathode, they are influenced by an electrical
force that pulls them toward the anode. This force causes them to accelerate,
increasing their velocity and kinetic energy. As the electrons travel from the
cathode to the anode, their kinetic energy increases while their electrical potential energy falls, being converted into kinetic energy. By the time the electron reaches the anode surface, all of its potential energy has been given up and its energy is entirely kinetic. At this point the electron is moving at a very high velocity determined by its kinetic energy: a 100-keV electron arrives at the anode surface at a speed of more than half the speed of light. When the electrons strike the anode surface, they are slowed dramatically and lose their kinetic energy, which is transformed into either x-radiation or heat.
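The statement that a 100-keV electron moves at more than half the speed of light can be checked with the relativistic energy relation KE = (γ - 1)mc². A minimal Python sketch of that check, using the standard electron rest energy of about 511 keV:

    import math

    # Relativistic speed of an electron with 100 keV of kinetic energy
    KE_keV = 100.0
    mc2_keV = 511.0                          # electron rest energy, ~511 keV

    gamma = 1.0 + KE_keV / mc2_keV           # from KE = (gamma - 1) * m * c^2
    beta = math.sqrt(1.0 - 1.0 / gamma**2)   # beta = v / c

    print(f"v = {beta:.2f} c")               # ~0.55 c, more than half the speed of light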

As seen in Fig. 15.3, electrons interact with individual atoms of the anode material. Radiation is produced by two types of interactions: interactions with the electron shells create characteristic x-ray photons, while interactions with the atomic nucleus produce Bremsstrahlung x-ray photons.

Figure 15.3. X-Ray photons are produced via electron-atom interactions

The Bremsstrahlung process is the one that produces the most photons.
Bremsstrahlung is a German word that means "braking radiation" and describes the
process well. Electrons that penetrate the anode material and pass close to a nucleus are deflected and slowed down by the attractive force of the nucleus, and the energy the electron loses in this interaction is emitted as an x-ray photon. Not all electrons produce photons of the same energy: electrons passing closest to the nucleus experience the strongest force, lose the most energy, and produce the highest-energy photons, whereas electrons in the outer zones interact more weakly and produce lower-energy photons. Although the zones are almost the same width, they have different areas; the size of a zone is determined by its distance from the nucleus. Because the area of a zone determines the number of electrons that pass through it, the outer zones capture more electrons and therefore produce more photons.

Characteristic radiation, mentioned above, is produced by collisions between the high-speed electrons and the orbital electrons of the anode atoms. The interaction can take place only if the incoming electron has a kinetic energy larger than the binding energy of the orbital electron within the atom. When this condition is met, the collision knocks the orbital electron out of the atom. The vacancy created is then filled by an electron from a higher energy level, and the filling electron gives off the energy difference in the form of an x-ray photon as it drops down to fill the vacancy.
In the example shown, the incoming electron dislodges a tungsten K-shell electron with a binding energy of 69.5 keV. An electron from the L shell, with a binding energy of 10.2 keV, fills the vacancy. The energy of the resulting characteristic x-ray photon is therefore equal to the difference between these two levels, 69.5 - 10.2 = 59.3 keV.
In fact, a particular anode material produces a variety of distinct x-ray energies.
This is because bombarding electrons can dislodge electrons at different energy
levels (K, L, etc.), and vacancies can be filled from different energy levels. The
electronic energy levels of tungsten, as well as some of the energy shifts that lead
to characteristic photons, are depicted below. Although photons are also produced when L-shell vacancies are filled, their energies are too low for diagnostic imaging. Each characteristic energy has a designation that identifies the shell in which the vacancy occurred, together with a subscript that indicates the origin of the filling electron: a subscript alpha (α) denotes filling by an L-shell electron, and beta (β) denotes filling from either the M or N shell.
The spectrum of tungsten's significant characteristic radiation is depicted below.
Bremsstrahlung creates a continuous spectrum of photon energies over a certain range, whereas characteristic radiation produces a line spectrum with a number of discrete energies. Because the probability of filling a K-shell vacancy differs from shell to shell, the number of photons generated at each characteristic energy also differs.

Figure 15.4. Distribution of energy levels of tungsten electrons and characteristic X-ray spectrum
corresponding to these levels

Only a small portion of the energy provided to the anode by the electrons is
converted to x-radiation; the majority is absorbed and transformed to heat by the
anode. The total x-ray energy represented as a fraction of the total electrical energy
transferred to the anode is the efficiency of x-ray production. The two factors that determine production efficiency are the voltage applied to the tube, kV, and the atomic number of the anode, Z. There is a rough relationship:
Efficiency ≈ kV × Z × 10⁻⁶
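To get a feel for this relationship, the snippet below evaluates it for an assumed tungsten anode (Z = 74) operated at 100 kV; the result, well under one per cent, shows why almost all of the electron energy ends up as heat.

    # Approximate x-ray production efficiency: efficiency ≈ kV * Z * 1e-6
    kV = 100          # tube voltage, kilovolts
    Z = 74            # atomic number of the anode material (tungsten assumed)

    efficiency = kV * Z * 1e-6
    print(f"efficiency ≈ {efficiency * 100:.2f} %")   # ≈ 0.74 %; the rest becomes heat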
The x-ray efficacy of the x-ray tube is defined as the amount of exposure, in
milliroentgens, delivered to a point in the center of the useful x-ray beam at a
distance of 1 m from the focal spot for 1 mAs of electrons passing through the
tube.
The efficacy value is a measure of a tube's ability to convert electron energy into x-ray exposure. Knowing the efficacy value for a particular tube allows patient and image-receptor exposures to be calculated by standard methods. As with the efficiency of x-ray energy production, the efficacy of a tube depends on a variety of factors, including kV, voltage waveform, anode material, filtration, tube age, and anode surface degradation.
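As an illustration of how an efficacy value might be used (the figure here is an assumption, not a measured tube property): if a tube delivered, say, 5 mR per mAs at 1 m, the exposure would scale in proportion to the mAs used, and the inverse-square law would give the value at other distances.

    # Hypothetical tube efficacy (assumed value): exposure per mAs at 1 m from the focal spot
    efficacy_mR_per_mAs = 5.0     # assumed, for illustration only
    mAs = 20.0                    # tube current-time product for the exposure
    distance_m = 1.5              # distance from the focal spot, metres

    # Exposure scales linearly with mAs and falls off as the inverse square of distance
    exposure_mR = efficacy_mR_per_mAs * mAs / distance_m**2
    print(f"exposure ≈ {exposure_mR:.0f} mR at {distance_m} m")   # ≈ 44 mR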

Special types of X-ray procedures


Mammograms are x-rays with fixed plates that are used to find malignancies
in the breasts. Dental x-rays are used to detect decay inside the tooth. To help
define internal organs like the intestines, a liquid called contrast material (for
example, barium) is sometimes utilized. The contrast substance absorbs x-rays,
allowing soft tissue to be seen more clearly on x-ray films. When taking x-rays of
the digestive system, contrast material is frequently employed. Depending on the part of the body being x-rayed, the contrast liquid may be swallowed or injected. This may cause some slight discomfort.

Fluoroscopy is a type of x-ray that produces images on a television monitor in real


time. During fluoroscopy, contrast material is injected into a blood vessel. The
physician can then monitor the contrast material's progress in real time to see if
there are any blockages in the circulation. Fluoroscopy is also utilized to assist in
the placement of catheters in the heart during cardiac catheterization and the
guidance of an endoscope during endoscopic surgery.

Fluorography is the formation of an x-ray image using relatively powerful (50-


1000 mA) pulsed x-ray exposures (pulses are of short duration and delivered at 1-
12 pulses/second). The resulting images have a reasonably high signal-to-noise ratio (SNR), meaning that they are of higher quality than fluoroscopy images but are obtained at higher doses. Fluorography can be used for diagnostic purposes, and it was once widely used for mass tuberculosis screening (chest photofluorography).

Computed tomography (CT) scans are similar to fixed-plate x-rays, except that the x-ray tube rotates around the patient, capturing hundreds of images that are subsequently processed by a computer to provide a two-dimensional cross-section of the body. Although a CT scan involves a large number of images, the total dose of radiation to which the patient is exposed is relatively small. Other imaging methods, such as MRI and ultrasound, do not use x-rays at all.

X-ray Computerized Tomography


The standard X-ray image does not show depth information. As the X-ray beam
passes through the object in its path, the image depicts the complete attenuation. A
traditional X-ray of the lung, for example, may disclose the presence of a tumor,
but it will not reveal how deep in the lung the tumor is placed. To obtain slice-
images within the body that provide depth information, several tomographic
procedures (CT scans) have been developed. (The name "tomography" comes from
the Greek word "tomos," which means "section"). X-ray computed tomography
(CT scan), which was developed in the 1960s, is currently the most widely utilized
of these. Figures 15.5a and b show the technique's underlying idea in its most basic
version. A narrow X-ray beam travels through the plane we want to see and is
detected by a detector on the opposite side. The X-ray source-detector combination is moved laterally, scanning the region of interest at a specific angle with respect to the object (in this case the head), as illustrated by the arrow in Fig. 15.5a. The detected signal at each position contains integrated information about the X-ray transmission properties of the whole path, in this case A-B. The angle is then changed by a small amount (approximately 1°) and the process is repeated around the object in a full circle. As shown in Fig. 15.5b, information on the intersection points of the X-ray beams can be obtained by rotating the source-detector combination.

Figure 15.5(a) schematic diagram of the operation of the tomograph. (b) Rotation of the source-
detector system makes it possible to obtain a layer-by-layer image of the patient's organ.

The scanning beam is shown schematically in Fig. 15.5b at two angles, each with
two lateral positions. While the detected signal at each position has integrated
information about the whole path, two paths that intersect share information about
the single point of intersection. Four such points are indicated in the diagram, at the intersections of the beams AB, A'B', CD, and C'D'. The numerous readings acquired by translation and rotation contain information on the X-ray transmission properties of each point inside the plane of the object being studied.
These signals are stored, and a point-by-point image of the narrow slice scanned within the body is created using a relatively complicated computer analysis. Typically, the slices imaged within the body in this way are around 2 mm thick. In more recent versions of the equipment the object is scanned by a fan-shaped beam rather than a narrow pencil beam, and the signal is recorded by an array of many detectors. In this way data collection is speeded up, producing an image in a matter of seconds.
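A toy numerical illustration of how intersecting ray sums determine point-by-point attenuation values is sketched below in Python. For an assumed 2 × 2 "slice", the measured ray sums form a small linear system that a least-squares solve can invert; real CT reconstruction uses thousands of projections and specialized algorithms (for example, filtered back-projection), so this is only a sketch of the underlying idea.

    import numpy as np

    # A toy 2x2 "slice" with assumed attenuation values (unknown to the scanner)
    true_slice = np.array([[0.2, 0.5],
                           [0.7, 0.1]])

    # Each measurement is the integrated attenuation along one ray:
    # two horizontal rays, two vertical rays, and one oblique (diagonal) ray.
    # Horizontal and vertical rays alone do not determine the values uniquely,
    # which is why a real scanner collects projections from many angles.
    measurements = np.array([
        true_slice[0, :].sum(),              # ray through the top row
        true_slice[1, :].sum(),              # ray through the bottom row
        true_slice[:, 0].sum(),              # ray through the left column
        true_slice[:, 1].sum(),              # ray through the right column
        true_slice[0, 0] + true_slice[1, 1]  # one diagonal ray
    ])

    # System matrix: which pixels (flattened p00, p01, p10, p11) each ray crosses
    A = np.array([[1, 1, 0, 0],
                  [0, 0, 1, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 0, 1]], dtype=float)

    # Least-squares solution recovers the pixel values from the ray sums
    recovered, *_ = np.linalg.lstsq(A, measurements, rcond=None)
    print(recovered.reshape(2, 2))           # approximately equal to true_slice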
Radiation dosimetry is the measurement, calculation, and evaluation of the ionizing radiation dose absorbed by an object, usually the human body, in the fields of health physics and radiation protection. The dose may be received internally, from ingested or inhaled radioactive compounds, or externally, from irradiation by external radiation sources.
External dosimetry is based on measurements using a dosimeter or extrapolated
from data obtained by other radiological protection instruments, whereas internal
dosimetry is based on a variety of monitoring, bio-assay, or radiation imaging
techniques.

Absorbed dose (D) is a dose quantity that represents the amount of energy deposited in matter per unit mass by ionizing radiation:
D = dE/dm
The absorbed dose is used both in radiation protection (assessment and reduction of harmful effects) and in describing dose uptake in living tissue. The dose rate is the change in dose per unit of time.
The gray (Gy) is the SI unit of absorbed dose; it is defined as one joule of energy absorbed per kilogram of mass.
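A minimal worked example of this definition (the numbers are assumptions chosen only for illustration): if 0.35 J of radiation energy is absorbed in 70 kg of tissue over two minutes, the absorbed dose and the dose rate follow directly.

    # Absorbed dose D = dE / dm, in gray (1 Gy = 1 J/kg)
    energy_J = 0.35      # assumed absorbed energy, joules
    mass_kg = 70.0       # assumed mass of the irradiated tissue, kilograms
    time_s = 120.0       # assumed exposure time, seconds

    D = energy_J / mass_kg          # absorbed dose, Gy
    dose_rate = D / time_s          # dose rate, Gy/s

    print(f"absorbed dose D = {D * 1000:.1f} mGy")           # 5.0 mGy
    print(f"dose rate       = {dose_rate * 1000:.3f} mGy/s")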
In the case of directly ionizing radiation, the energy of the particles is absorbed essentially at the point of interaction. In the case of indirectly ionizing radiation, the interaction may occur at a location other than the one where the energies of the secondary charged particles are absorbed.
Two further dose quantities are the equivalent dose (H), measured in millisieverts (mSv), and the effective dose (HE), also measured in millisieverts (mSv).

The notion of effective dose was created as a tool for radiation safety in the
workplace and in the general population. It can be useful for comparing doses from various diagnostic and interventional procedures. It also allows comparison of doses produced by different techniques or technologies used for the same medical examination, as well as doses produced by
similar procedures conducted in different institutions. The assumption is that the
representative patients used to calculate the effective dose are similar in terms of
sex, age, and body mass. The effective dose was not designed to provide a precise
estimate of the risk of radiation damage for a single person. For individual risk
assessment as well as for epidemiological studies, the organ dose (either absorbed
or equivalent organ dose) would be a more appropriate quantity.
For medical exposures, the collective effective dose is used for comparison
of estimated population doses, but it is not intended for predicting the occurrence
of health effects. It's calculated by multiplying a radiological procedure's mean
effective dose by the predicted number of procedures in a given population. To
depict global trends in the medical use of radiation, the total effective dose from all
radiological procedures for the entire population can be used. The sievert (Sv) is the unit of both equivalent and effective dose. For diagnostic imaging the sievert is a relatively large unit, and the millisievert (mSv) is frequently more convenient; one thousand millisieverts equal one sievert. The collective effective dose is expressed in person-sieverts (person-Sv).
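The collective-dose estimate described above is a simple multiplication. The sketch below shows it in Python for a hypothetical examination type; the mean effective dose and the number of procedures are assumed, illustrative figures, not survey data.

    # Collective effective dose = mean effective dose per procedure x number of procedures
    mean_effective_dose_mSv = 7.0       # assumed mean effective dose per procedure, mSv
    procedures_per_year = 50_000        # assumed number of procedures in the population

    collective_dose_person_Sv = mean_effective_dose_mSv * 1e-3 * procedures_per_year
    print(f"collective effective dose ≈ {collective_dose_person_Sv:.0f} person-Sv per year")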
Radiation Therapy
X-ray and gamma-ray photons, as well as particles released by radioactive nuclei,
have energies that are significantly larger than the energies that bind electrons to
atoms and molecules. As a result, when such radiation passes through biological
materials, it can rip electrons from biological molecules, causing significant
structural changes. The ionized molecule may disintegrate or chemically interact
with another molecule to generate an unwanted combination. If a damaged
molecule is a critical component of a cell, the cell as a whole may perish. Radiation
also breaks up water molecules in the tissue into reactive fragments (H + OH).
These fragments bind to biological molecules and cause them to change in a
negative way. Furthermore, radiation travelling through tissue may simply give up
its energy and heat the tissue to a damaging temperature. A large dose of radiation can kill an organism by damaging a great many cells. Smaller doses can result in irreversible alterations such as mutations, sterility, and cancer.
Radiation, on the other hand, can be utilized medically at controlled dosages. An
ampul containing radioactive material such as radium or cobalt 60 is inserted near
the malignant tumor in the treatment of some forms of cancer. The objective is to
eliminate the cancer while causing minimal damage to healthy tissue by carefully
placing the radioactive material and managing the dose.
Regrettably, some damage to healthy tissue is inevitable. As a result, radiation
sickness symptoms are frequently associated with this treatment (diarrhea, nausea,
loss of hair, loss of appetite, and so on). When long-lived isotopes are utilized in
therapy, the material must be removed after a certain amount of time. Short-lived
isotopes, like gold 198, decay quickly enough that they don't need to be eliminated
after treatment. Certain materials that are injected into or ingested by the body tend to concentrate in specific organs. In radiation therapy this phenomenon is used to advantage. Phosphorus-32, a radioactive isotope with a half-life of 14.3 days, accumulates in the bone marrow. Iodine-131 (half-life 8 days) accumulates in the thyroid and is used to treat hyperthyroidism. Cancerous tumors can also be
destroyed using an externally delivered beam of gamma rays or X-rays. The
benefit of this treatment is that it does not require surgery. By frequently changing
the direction of the beam travelling through the body, the effect of radiation on
healthy tissue can be decreased. Although the tumor is always in the path of the
beam, the dose delivered to healthy tissue is minimized.
Numerical problems for independent solution:
1. The power of bremsstrahlung X-rays can be approximately calculated by the formula P = 10⁻⁶·I·U²·Z, where I is the current in mA, U is the voltage in kV, and Z is the atomic number of the anode substance. Determine the efficiency of the X-ray tube at a voltage of 100 kV.

2. Determine the equivalent dose of radiation to which human bones are exposed over 1 year due to the content of ²³⁹₉₄Pu in them, assuming the maximum permissible and constant radionuclide activity of 0.02 μCi. The skeleton weight is 7 kg, and the effective energy per decay is 270 MeV.

3. The dose rate of 𝛾-radiation at a distance of 50 cm from the point source is 0.1 R
/ min. How long during a working day can a person remain at a distance of 10 m from
the source if the maximum permissible dose for a working day should not
exceed 17 mR?

Abbreviations
Å angstrom
av average
atm atmosphere
A ampere
C coulomb
CT computerized tomography
cos cosine
cps cycles per second
cm2 square centimeters
cm centimeter
deg degree
dB decibel
diam diameter
ECG electrocardiography
EEG electroencephalography
F farad
F/m Farad/meters
g gram
h hour
Hz hertz (cps)
J joule
km kilometer
km/h kilometers per hour
kg kilogram
KE kinetic energy
kph kilometers per hour
lim limit
liter/min liters per minute
μ micron
μA microampere
μV microvolt
μV/m microvolt per meter
mV millivolt
ms millisecond
m meter
m/sec meters per second
ml milliliter
min minute
max maximum
mA milliampere
MRI magnetic resonance imaging
N Newton
N·m Newton meters
NMR nuclear magnetic resonance
Ω ohm
PE potential energy
sin sine
sec second
tan tangent
V volt
UHF ultra high frequency
W watt

APPENDIX A
Table of Student's coefficients for calculating errors
Student's coefficients

n          Confidence factor
           0.6        0.8        0.95        0.99        0.999
2 1.376 3.078 12.706 63.657 636.61
3 1.061 1.886 4.303 9.925 31.598
4 0.978 1.638 3.182 5.841 12.941
5 0.941 1.533 2.776 4.604 8.610
6 0.920 1.476 2.571 4.032 6.859
7 0.906 1.440 2.447 3.707 5.959
8 0.896 1.415 2.365 3.499 5.405
9 0.889 1.397 2.306 3.355 5.041
10 0.883 1.383 2.262 3.250 4.781
11 0.879 1.372 2.228 3.169 4.587
12 0.876 1.363 2.201 3.106 4.437
13 0.873 1.356 2.179 3.055 4.318
14 0.870 1.350 2.160 3.012 4.221
15 0.868 1.345 2.145 2.977 4.140
16 0.866 1.341 2.131 2.947 4.073
17 0.865 1.337 2.120 2.921 4.015
18 0.863 1.333 2.110 2.898 3.965
19 0.862 1.330 2.101 2.878 3.922
20 0.861 1.328 2.093 2.861 3.883
21 0.860 1.325 2.086 2.845 3.850

APPENDIX B
Selected Prefixes used in the Metric System
Prefix    Abbreviation    Meaning    Example
Giga      G       10⁹      1 gigameter (Gm) = 1×10⁹ m
Mega      M       10⁶      1 megameter (Mm) = 1×10⁶ m
Kilo      k       10³      1 kilometer (km) = 1×10³ m
Deci      d       10⁻¹     1 decimeter (dm) = 0.1 m
Centi     c       10⁻²     1 centimeter (cm) = 1×10⁻² m
Milli     m       10⁻³     1 millimeter (mm) = 1×10⁻³ m
Micro     μ       10⁻⁶     1 micrometer (μm) = 1×10⁻⁶ m
Nano      n       10⁻⁹     1 nanometer (nm) = 1×10⁻⁹ m
Pico      p       10⁻¹²    1 picometer (pm) = 1×10⁻¹² m
Femto     f       10⁻¹⁵    1 femtometer (fm) = 1×10⁻¹⁵ m

Fundamental Quantities of the International System of Units


Fundamental Quantity SI unit
Name Symbol Name Symbol
Mass m kilogram kg
length l metre m
Time t second s
Current I ampere A
Temperature T kelvin K
Amount of substances n mole mol
Luminous Intensity Iv candela cd

Selected Physical Constants


Acceleration due to gravity    g     9.80665 m·s⁻²
Speed of light (in vacuum)     c     2.99792458×10⁸ m·s⁻¹
Gas constant                   R     8.314472 J·mol⁻¹·K⁻¹
Electron charge                e⁻    −1.602176462×10⁻¹⁹ C
Electron rest mass             me    9.10938188×10⁻³¹ kg
Planck's constant              h     6.62606876×10⁻³⁴ J·s
Faraday constant               F     9.64853415×10⁴ C·mol⁻¹
Avogadro number                NA    6.02214199×10²³ mol⁻¹

APPENDIX C
Trigonometrical ratios table
θ        0°           30°      45°      60°      90°
sin θ    0            1/2      1/√2     √3/2     1
cos θ    1            √3/2     1/√2     1/2      0
tan θ    0            1/√3     1        √3       Not defined
cot θ    Not defined  √3       1        1/√3     0

APPENDIX D
Bradis table of sines and cosines
APPENDIX E

Table of density of aqueous solutions of glycerin


Concentration, % Density, kg/m3 Concentration, % Density, kg/m3
5 1010 55 1140
10 1022 60 1153
15 1034 65 1167
20 1047 70 1181
25 1060 75 1194
30 1073 80 1208
35 1086 85 1221
40 1099 90 1235
45 1113 95 1248
50 1126 100 1261

Speed of sound in different medium


Medium Sound speed (m/s)
Air (0°C) 330
Air (20°C) 343
Water (25°C) 1493
Salt water (25°C) 1533
Rubber 1550
Gold 3240
Brick 3650
Wood 4000
Concrete 5000
Glass 5100
Steel 5790
Aluminum 6420

Material/tissue Velocity of sound (m/s)


Fat 1450
Average human soft tissue 1540
Brain 1540
Liver 1550
Kidney 1560
Blood 1570
Muscle 1580
Skull Bone 4080

Literature and electronic resources:
1. Paul Davidovits. Physics in Biology and Medicine. The 3rd edition, 2008.
2. Suzanne Amador Kane. Introduction to physics in modern medicine -- 2nd
ed. Haverford College Pennsylvania, USA, 2009.
3. Patrick F. Dillon. Biophysics. A Physiological Approach. Cambridge
University Press The Edinburgh Building, Cambridge CB2 8RU, UK. 2012
4. Kukurova Elena. Basics of Medical Physics and Biophysics for electronic
education of health professionals. Asklepios, Bratislava 2013.
5. John R. Taylor, An Introduction to Error Analysis: The Study of
Uncertainties in Physical Measurements, 2d Edition, University Science
Books, 1997.
6. Philip R. Bevington and D. Keith Robinson, Data Reduction and Error
Analysis for the Physical Sciences, 2d Edition, WCB/McGraw-Hill, 1992.
7. https://www.researchgate.net/publication/228599963_Fundamental_of_EEG
_Measurement.
8. https://core.ac.uk/download/pdf/162012173.pdf
9. https://link.springer.com/chapter/10.1007%2F978-3-642-19525-9_1.
10. https://www.electrical4u.com/biomedical-transducers-types-of-biomedical-transducers.
11.http://www.refractometer.pl/Abbe-refractometer.
