
Lecture 1.

Linear momentum. Impulse. Newton's Laws and the Conservation of Momentum.

The concept of momentum is extremely important in physics. Whenever we examine a moving object, we must consider both its mass and its velocity. The linear momentum of a body with mass m, traveling with velocity v, is defined to be the product of the mass and the velocity. Momentum is associated with an object's translational motion. Since mass is a scalar quantity and velocity is a vector quantity, their product, momentum (which we designate with the letter p), is a vector quantity:
p⃗ = m·v⃗
Earlier we introduced Newton's second law in the form

F⃗_net = m·a⃗

Recall that acceleration is the derivative of velocity with respect to time:

F⃗_net = m·(dv⃗/dt) = d(m·v⃗)/dt = dp⃗/dt
This is a more general form of Newton's second law, where F⃗_net is the net force applied to the object.
We can rewrite this equation as

F⃗_net·dt = dp⃗

The quantity on the left, F⃗·dt, is called the impulse. It is the product of the force F⃗ and the time interval dt over which the force acts. The change in momentum equals the impulse.

Even though the force occurs very briefly, the force is not usually constant over the time interval. For this reason, we often replace the force with the average force over the time of interaction Δt.
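The impulse-momentum relation above can be sketched numerically. All values here (a ball struck by a bat, the contact time, the average force) are hypothetical illustrations, not taken from the text:

```python
# Impulse-momentum theorem sketch: J = F_avg * dt = dp (all values hypothetical).
m = 0.145          # mass of a ball, kg
v_initial = -30.0  # velocity before the hit, m/s (toward the bat)
F_avg = 6000.0     # average force during contact, N
dt = 0.0015        # contact time, s

impulse = F_avg * dt               # J = F_avg * dt
p_initial = m * v_initial          # momentum before the hit
p_final = p_initial + impulse      # change in momentum equals the impulse
v_final = p_final / m

print(f"impulse = {impulse:.2f} N*s, final velocity = {v_final:.2f} m/s")
```

The ball reverses direction because the impulse exceeds the magnitude of its initial momentum.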

If a system of mass m is subject to zero net external force, F⃗_net = 0, Newton's second law states that the rate of change of momentum with time is zero.
The system's momentum remains constant when the net external force acting on it is zero.
That is, if

F⃗_net = dp⃗/dt = 0

then p⃗ = m·v⃗ = constant.
1. Conservation of Momentum in head-on collisions.

For a collision involving two bodies, the conservation law can be expressed symbolically as

m₁v₁ + m₂v₂ = m₁v′₁ + m₂v′₂

where unprimed quantities stand for values before the collision and primed quantities stand for values after the collision.
Note that we do not need to know anything about the details of the collision mechanism
itself.
The rule holds for collisions between hard elastic bodies, such as billiard balls or glass
spheres, as well as for collisions between soft bodies that do not "bounce" upon colliding, such
as blobs of putty. These cases in which the two objects stick together are an important class of
collisions, known as perfectly inelastic collisions.
First, let's simplify our discussion to head-on, or one-dimensional, perfectly inelastic collisions. Restricting the bodies to move along a single direction means that the vector equation for momentum conservation reduces to a single one-dimensional algebraic equation.

If the collision is perfectly inelastic, the two bodies stick together and v′₁ = v′₂ = v′. We can immediately determine the final velocity in terms of the masses and the initial velocities.

Solving for the final velocity v′, we get

v′ = (m₁v₁ + m₂v₂)/(m₁ + m₂)

In collisions that are not perfectly inelastic, knowing the masses and initial velocities is not enough to determine the final velocities. We need more information. For example, for elastic collisions we must also use the law of conservation of energy.
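The perfectly inelastic result above can be sketched in a few lines of Python; the masses and velocities below are made-up illustrative values:

```python
# Perfectly inelastic head-on collision: the bodies stick together,
# and momentum conservation alone fixes the final velocity.
def inelastic_final_velocity(m1, v1, m2, v2):
    """Common final velocity after a perfectly inelastic 1-D collision."""
    return (m1 * v1 + m2 * v2) / (m1 + m2)

# 2 kg body at +3 m/s hits a 1 kg body moving at -3 m/s (hypothetical numbers).
v = inelastic_final_velocity(2.0, 3.0, 1.0, -3.0)
print(v)  # (6 - 3) / 3 = 1.0 m/s
```

Note that total momentum is conserved by construction, while kinetic energy is not: some of it is converted to heat and deformation.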

Jet force is the force of the exhaust from a machine, especially an aircraft, propelling the machine itself in the opposite direction, as per Newton's third law. An understanding of jet force is intrinsic to the launching of drones, satellites, rockets, airplanes and other airborne machines.
Lecture 2. Damped Oscillations. Forced Oscillations. Resonance
1. Damped free mechanical and electromagnetic oscillations
The oscillatory motions we have considered so far have been for ideal systems, that is,
systems that oscillate indefinitely under the action of only one force, a linear restoring force. In
many real systems, nonconservative forces such as friction or air resistance also act and retard
the motion of the system. We will call these forces retarding forces. Consequently, the
mechanical energy of the system diminishes in time, and the motion is said to be damped.
Attenuation of oscillations in electrical circuits is caused by heat losses, by energy losses to
radiation of electromagnetic waves, and by heat losses in dielectrics and ferromagnets due
to electric and magnetic hysteresis.
One common type of retarding force is a force that is proportional to the speed of the
moving object and acts in the direction opposite to the velocity of the object with respect to the
medium. This retarding force is often observed when an object moves through air, for instance.
Because the retarding force can be expressed as R = -bv (where b is a constant called the
retarding coefficient) and the restoring force of the system, for example, is -kx, we can write
Newton’s second law as
ΣF_x = −kx − bv = m·a_x

−kx − b·(dx/dt) = m·(d²x/dt²)

d²x/dt² + (b/m)·(dx/dt) + (k/m)·x = 0    (1)
The solution to this equation requires mathematics that may be unfamiliar to you; we simply
state it here without proof. When the retarding force is small compared with the maximum
restoring force—that is, when the damping coefficient b is small—the solution to Equation (1) is
x = A₀·e^(−(b/2m)·t)·cos(ωt + φ) = A·cos(ωt + φ)    (2)
This result can be verified by substituting Equation (2) into Equation (1).
Here A = A₀·e^(−(b/2m)·t) is the amplitude of the damped oscillations; ω, the angular frequency of the damped oscillation, is

ω = √(k/m − (b/2m)²)
It is convenient to express the angular frequency of a damped oscillator in the form

ω = √(ω₀² − (b/2m)²)
where ω₀ = √(k/m) represents the angular frequency in the absence of a retarding force (the undamped oscillator, i.e. b = 0) and is called the natural frequency of the system.
Any system with exponentially decreasing
amplitude is known as a damped oscillator. The
dashed red lines in Figure, which define the
envelope of the oscillatory curve, represent the
exponential factor in Equation (2). This envelope
shows that the amplitude decays exponentially
with time.
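As a quick numerical check (with illustrative parameters, not from the text), one can substitute solution (2) into equation (1) using finite differences and confirm that the residual is essentially zero:

```python
import math

# Verify x(t) = A0*exp(-b t/(2m))*cos(w t) against equation (1)
# by central finite differences (parameters are illustrative).
m, k, b, A0 = 1.0, 100.0, 0.5, 1.0
w = math.sqrt(k / m - (b / (2 * m)) ** 2)   # damped angular frequency

def x(t):
    return A0 * math.exp(-b * t / (2 * m)) * math.cos(w * t)

t, h = 0.7, 1e-4
x1 = (x(t + h) - x(t - h)) / (2 * h)             # approximate dx/dt
x2 = (x(t + h) - 2 * x(t) + x(t - h)) / h ** 2   # approximate d2x/dt2
residual = x2 + (b / m) * x1 + (k / m) * x(t)    # left side of equation (1)
print(f"residual of eq.(1): {residual:.2e}")     # ~0, confirming the solution
```

The damped frequency w is slightly below the natural frequency sqrt(k/m), as equation for ω above predicts.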

2. Types of damped oscillations

When the magnitude of the retarding force is small such that b/2m < ωo, the system is said to
be underdamped. The resulting motion is represented by Figure above and the blue curve 3 in
the next Figure below.
The period of time during which the amplitude of the damped oscillations decreases by a factor of e is called the relaxation time:

τ = 2m/b
The reciprocal quantity is called the damping coefficient (attenuation constant):

δ = 1/τ = b/(2m)
Damping violates the periodicity of the oscillations. However, if the damping is small, it is possible to use the concept of a conditional period of damped oscillations, defined as the time interval between two successive peaks of the oscillating quantity, and to calculate it using the formula

T = 2π/ω = 2π/√(ω₀² − (b/2m)²) = 2π/√(ω₀² − δ²)

As the value of b increases, the amplitude of the oscillations decreases more and more rapidly. When b reaches a critical value b_c such that b_c/2m = ω₀, the system does not oscillate and is said to be critically damped. In this case, the system, once released from rest at some nonequilibrium position, approaches but does not pass through the equilibrium position. The graph of position versus time for this case is the red curve in the Figure.
If the medium is so viscous that the retarding force is large compared with the restoring force—that is, if b/2m > ω₀—the system is overdamped.
Again, the displaced system, when free to move, does not oscillate but rather simply returns to
its equilibrium position. As the damping increases, the time interval required for the system to
approach equilibrium also increases as indicated by the green curve in Figure above. For
critically damped and overdamped systems, there is no angular frequency ω and the solution in
Equation (2) is not valid.

3. Logarithmic Decrement, Relaxation Time and Quality factor.

The period of time during which the amplitude of the damped oscillations decreases by a factor of e is called the (amplitude) relaxation time:

τ_T = 2m/b    (3)

The reciprocal quantity is called the damping coefficient (attenuation constant):

δ = 1/τ_T = b/(2m)    (4)

Let's take the amplitudes of two successive oscillations, A(t) and A(t + T). The ratio of these amplitudes

A(t)/A(t + T) = A₀e^(−δt) / A₀e^(−δ(t+T)) = 1/e^(−δT) = e^(δT)

is called the decrement of damping, and its logarithm

Λ = ln[A(t)/A(t + T)] = δT = T/τ_T = 1/N_e    (5)

is called the logarithmic decrement of damping. Here N_e is the number of oscillations over the time during which the amplitude decreases by a factor of e (e ≈ 2.718, the base of natural logarithms).
The energy of the oscillations is proportional to the square of the amplitude:

W ~ A² = (A₀e^(−(b/2m)t))² = A₀²·e^(−(b/m)t)

We see that the relaxation time for energy is half the relaxation time for amplitude:

τ = τ_T/2 = 1/(2δ) = m/b    (6)
The quality factor (Q-factor) of an oscillatory system is a dimensionless quantity equal to the product of 2π and the ratio of the oscillation energy W(t) of the system at an arbitrary time t to the loss of energy during the time interval from t to t + T:
Q = 2π·W(t)/[W(t) − W(t + T)] = 2π·A(t)²/[A(t)² − A(t + T)²]

Q = 2π·A₀²e^(−2δt)/[A₀²e^(−2δt) − A₀²e^(−2δ(t+T))] = 2π·e^(−2δt)/[e^(−2δt) − e^(−2δ(t+T))]

Q = 2π/(1 − e^(−2δT)) ≈ 2π/(2δT) = ω₀/(2δ) = ω₀τ    (7)

The Q-factor is a measure of the “quality” of an oscillator (such as a bell): how long will it keep ringing once you hit it? Essentially, it is a measure of how many oscillations take place during the time the energy decays by a factor of 1/e. Strictly speaking, the Q-factor measures how many radians the oscillator goes through in the time τ. For a typical bell, τ would be a few seconds; if the note has, for example, frequency f = 256 Hz, then ω = 2πf = 2π·256, so Q would be of order a few thousand.
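The bell estimate above can be reproduced directly; the relaxation time of 3 s is a hypothetical stand-in for the "few seconds" in the text:

```python
import math

# Q-factor of a bell-like oscillator via Q = omega0 * tau (eq. 7).
f = 256.0                 # note frequency, Hz (from the text's example)
tau = 3.0                 # energy relaxation time, s (hypothetical "few seconds")
omega0 = 2 * math.pi * f  # natural angular frequency, rad/s
Q = omega0 * tau
print(f"Q = {Q:.0f}")     # on the order of a few thousand, as stated
```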

Note that oscillations in an LC circuit are damped because of the active resistance of the connecting wires and the coil. Real circuits are RLC circuits and can be represented by the scheme shown in the figure. If there is a resistance in the circuit, the current flowing in the circuit will produce ohmic losses to heat, and the energy of the circuit will decrease because of these losses. The rate of energy loss (heat emitted) is given by:

dQ/dt = −dW/dt = i²R
To derive the differential equation of these oscillations, we can write Kirchhoff's second rule for the RLC circuit:

ℰ_L = U_C + iR

Recall that U_C = q/C is the potential difference between the capacitor's plates, i = dq/dt, and ℰ_L is the emf of the coil. According to Faraday's law:

ℰ_L = −L·(di/dt)
Combining all of these, we get

d²q/dt² + (R/L)·(dq/dt) + q/(LC) = 0

The solution of this differential equation is:

q = q₀·e^(−(R/2L)·t)·cos(ωt + φ)

(a damped harmonic oscillation!), where

ω = √(ω₀² − (R/2L)²)   and   ω₀ = 1/√(LC)

is the natural angular frequency.
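A short sketch with made-up component values shows how the damped frequency of an RLC circuit differs from the natural frequency:

```python
import math

# Damped angular frequency of an RLC circuit: w = sqrt(1/(LC) - (R/2L)^2).
# Component values are illustrative, not from the text.
R, L, C = 10.0, 0.1, 1e-6                # ohms, henries, farads
omega0 = 1 / math.sqrt(L * C)            # natural angular frequency, rad/s
delta = R / (2 * L)                      # damping coefficient, 1/s
omega = math.sqrt(omega0 ** 2 - delta ** 2)
print(f"omega0 = {omega0:.1f} rad/s, damped omega = {omega:.1f} rad/s, delta = {delta:.1f} 1/s")
```

For these values the damping is weak (delta << omega0), so the damped frequency is only slightly below the natural one.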
4. Forced oscillations. The differential equations of forced mechanical and
electromagnetic oscillations.

We have seen that the mechanical energy of a damped oscillator decreases in time as a result
of the retarding force. It is possible to compensate for this energy decrease by applying a
periodic external force that does positive work on the system. At any instant, energy can be
transferred into the system by an applied force that acts in the direction of motion of the
oscillator. For example, a child on a swing can be kept in motion by appropriately timed
“pushes.” The amplitude of motion remains constant if the energy input per cycle of motion
exactly equals the decrease in mechanical energy in each cycle that results from retarding forces.
A common example of a forced oscillator is a damped oscillator driven by an external force
that varies periodically, such as F(t) = Fo cos ωt, where Fo is a constant and ω is the angular
frequency of the driving force.

Modeling an oscillator with both retarding and driving forces as a particle under a net force,
Newton’s second law in this situation gives:
ΣF_x = m·a_x:   F₀·cos ωt − kx − b·(dx/dt) = m·(d²x/dt²)

and finally:

m·(d²x/dt²) + b·(dx/dt) + kx = F₀·cos ωt    (8)

Equation (8) is the differential equation of forced oscillations. In mathematics there is a theorem that the general solution of an inhomogeneous linear second-order differential equation with constant coefficients is the sum of two expressions, x = x₀ + x₁, where x₀ is the general solution of the homogeneous equation (with the right side set to zero) and x₁ is any particular solution of the inhomogeneous equation.
In this situation, the solution of Equation (8) is:

x = x₀ + x₁ = A₀·e^(−δt)·cos(ω₀t + φ) + A·cos(ωt + φ)    (9)

where

x₀ = A₀·e^(−δt)·cos(ω₀t + φ)   and   x₁ = A·cos(ωt + φ)

After the driving force on an initially stationary object begins to act, the amplitude of the oscillation will increase. After a sufficiently long period of time, when the energy input per cycle from the driving force equals the amount of mechanical energy transformed to internal energy in each cycle, a steady-state condition is reached in which the oscillations proceed with constant amplitude.
In the steady state, the transient term of equation (9) has died out and the motion reduces to the function:

x = A·cos(ωt + φ)    (10)

where

A = (F₀/m) / √((ω₀² − ω²)² + 4δ²ω²)    (11)

where ω₀² = k/m (ω₀ is the natural frequency of the undamped oscillator, b = 0). Equations (10) and (11) show that the forced oscillator vibrates at the frequency of the driving force and that the amplitude of the oscillator is constant for a given driving force, because it is being driven in steady state by an external force.

The differential equation for forced electromagnetic oscillations, with an external voltage V_m·cos ωt as the driving force, has the form:

d²q/dt² + (R/L)·(dq/dt) + q/(LC) = (V_m/L)·cos ωt

5. Resonance.

For small damping (b/2m < ω₀, or δ < ω₀), the amplitude is large when the frequency of the driving force is near the natural frequency of oscillation, that is, when ω ≈ ω₀. The dramatic increase in amplitude near the natural frequency is called resonance.
More precisely, the resonance frequency is

ω_res = √(ω₀² − 2δ²)

and the resonance amplitude is

A_res = (F₀/m) / (2δ·√(ω₀² − δ²))

When ω0, amplitude approaches


limit value which is called statistic
deviation:
F o /m
A o= 2
ωo

When ω∞, amplitude approaches


zero.

In the case of weak damping, when δ² << ω₀², the resonance amplitude is

A_res = (F₀/m)/(2δω₀) = (ω₀/2δ)·(F₀/m)/ω₀² = Q·A₀

where Q is the quality factor (Q-factor) and A₀ is the static deviation. Thus, the larger the Q-factor, the greater the amplitude at resonance.
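The weak-damping relation between the resonance amplitude, the Q-factor and the static deviation can be checked numerically; the oscillator parameters below are illustrative:

```python
import math

# Forced-oscillator amplitude A(w) from eq. (11), and a check of the
# weak-damping relation A_res ~= Q * A0 (parameters are illustrative).
m, k, b, F0 = 1.0, 100.0, 0.2, 1.0
omega0 = math.sqrt(k / m)        # natural frequency
delta = b / (2 * m)              # damping coefficient

def amplitude(w):
    """Steady-state amplitude of the driven oscillator, eq. (11)."""
    return (F0 / m) / math.sqrt((omega0 ** 2 - w ** 2) ** 2
                                + 4 * delta ** 2 * w ** 2)

A_static = (F0 / m) / omega0 ** 2                 # static deviation (w -> 0)
w_res = math.sqrt(omega0 ** 2 - 2 * delta ** 2)   # resonance frequency
A_res = amplitude(w_res)
Q = omega0 / (2 * delta)
print(f"A_res = {A_res:.4f}, Q*A0 = {Q * A_static:.4f}")  # nearly equal
```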
As can be seen from the figure, the resonance amplitude decreases with decreasing Q-factor and increasing δ. The resonance frequency also decreases slightly with increasing δ.
The rate at which work is done on the oscillator by F equals the dot product F·v; this rate is the power delivered to the oscillator. Because the product F·v is a maximum when F and v are in phase, we conclude that at resonance the applied force is in phase with the velocity and the power transferred to the oscillator is a maximum. At resonance, the oscillations of the force therefore lead the oscillations of the displacement by a phase of π/2.
A resonant vibration you may have experienced is the “singing” of telephone wires in the
wind. Machines often break if one vibrating part is in resonance with some other moving part.
Soldiers marching in cadence across a bridge have been known to set up resonant vibrations in
the structure and thereby cause it to collapse. Whenever any real physical system is driven near
its resonance frequency, you can expect oscillations of very large amplitudes.

1. Wave process. Transverse and longitudinal waves. Wave graphs and wave properties.
Wave front and wave ray.
If we excite oscillations at any point in a medium (solid, liquid or gaseous), then, because of the interaction between the particles of the medium, these oscillations are transmitted from one point of the medium to another at a rate that depends on the properties of the medium.
In considering these oscillations we do not take into account the detailed structure of the medium; the medium is regarded as something continuously distributed in space that has elastic properties.
A wave process is the process of propagation of oscillations in a continuous medium. When a wave propagates, the particles of the medium oscillate around their equilibrium positions and do not travel along with the wave. A wave transmits only the state of the vibrational motion and its energy from one particle to another.
The main property of all waves is energy transfer without the transfer of matter.

The propagation of disturbances in an elastic medium is called an elastic (or mechanical) wave.
An elastic wave is called harmonic if the oscillations of the medium's particles are harmonic.

A traveling wave or pulse that causes the points of the disturbed medium to move
perpendicular to the direction of propagation is called a transverse wave.
A traveling wave or pulse that causes
the elements of the medium to move
parallel to the direction of propagation is
called a longitudinal wave.
Consider a harmonic wave traveling at speed v along the OX axis. Denote the displacement of the points of the medium by y = y(x, t). For a given time t, the relation between the displacement of the particles and the distance x of the particles from the oscillation source O can be represented as a wave graph.
The wave graph differs from the graph of a harmonic oscillation:
 the wave graph represents the dependence of the displacement of all the medium's particles on the distance from the oscillation source at a specific time, y = y(x, t = const). The curve represents a snapshot of the wave at some specific time t.
 the graph of a harmonic oscillation is the dependence of the displacement of one specific particle on time, y = y(x = const, t).

It is important to differentiate between the motion of the wave and the motion of the elements
of the medium.
A point in Figure at which the displacement of the element from its normal position is highest
is called the crest of the wave. The lowest point is called the trough. The distance from one crest
to the next is called the wavelength λ (Greek letter lambda). More generally, the wavelength is
the minimum distance between any two identical points on adjacent waves as shown in the next
Figure.
If you count the number of seconds between the arrivals of two adjacent crests at a given
point in space, you measure the period T of the waves. The period of the wave is the same as the
period of the simple harmonic oscillation of one element of the medium. The same information is
more often given by the inverse of the period, which is called the frequency f. In general, the
frequency of a periodic wave is the number of crests (or troughs, or any other point on the wave)
that pass a given point in a unit time interval. The frequency of a sinusoidal wave is related to the
period by the expression
1
f=
T

The frequency of the wave is the same as the frequency of the simple harmonic oscillation of
one element of the medium. The most common unit for frequency, as we learned before, is s⁻¹, or hertz (Hz). The corresponding unit for T is the second.
We can give another definition of wavelength, linking it with the velocity of the wave.
Wavelength is the distance which is covered by a harmonic wave for a time equal to the
period of oscillation T.
λ=vT
Since f = 1/T, we also have:
λ = v/f = 2πv/ω

where ω is the angular frequency.

A wave is called a traveling wave if all points of the wave move through space at a constant speed.
The locus of points that the wave reaches at some instant of time t, all oscillating with the same phase, is called the wave front. These points can be represented as an imaginary surface.
In general, a set of points oscillating with the same phase is called a wave surface. The wave front is also one of the wave surfaces. There are countless wave surfaces, but only one wave front. (Sometimes all wave surfaces are called wave fronts.) A wave is called plane (flat) if its wave surfaces are a set of planes parallel to each other. A wave is called spherical if its wave surfaces have the form of concentric spheres.

Wave rays are lines perpendicular to the wave surfaces; the tangent to a ray at each point coincides with the direction of wave propagation.

2. Wave function

a) Wave function of plane wave

Consider a sinusoidal wave and the oscillations of the point of the wave at position x = 0. Because the wave is sinusoidal, we expect the wave function at this point to be expressed as

y(0, t) = A·cos ωt,

where y is the displacement of the point, A the amplitude and ω the angular frequency.
The oscillations of a point B, situated at distance x from the origin, are described by the same function, but delayed by the time τ = x/v, because the wave needs this time to reach point B. The equation for the oscillations of the particles that lie in a plane passing through the point x is given by

y(x, t) = A·cos ω(t − x/v)

In general the wave function has the form:

y(x, t) = A·cos[ω(t − x/v) + φ₀]

where φ₀ is the initial phase of the wave and ω(t − x/v) + φ₀ is the phase of the wave.


We can express the wave function in a convenient form by defining another quantity, the angular wave number k (usually called simply the wave number):

k = 2π/λ = 2π/(vT) = ω/v    (1)

The wave number shows how many wavelengths fit into a distance of 2π units of length.
Wave function can be rewritten:
y (x, t) = A cos (ωt - kx +φo) (2)

We can also use the more advanced complex form

y(x, t) = A·e^(i(ωt − kx + φ₀))    (3)

b) Wave function of spherical wave


y(x, t) = (A₀/x)·e^(i(ωt − kx + φ₀))    (4)

where A₀ is the amplitude of the wave at its origin point and x is the distance from the origin to a given point. As can be seen, the amplitude of a spherical wave decreases with distance according to the law A = A₀/x.

3. Phase speed of a wave.

The phase speed of a wave is the rate at which the phase of the wave propagates in space.
Any given phase of the wave (for example, the crest) will appear to travel at the phase speed.
The phase velocity is given in terms of the wavelength λ and period T as

v = λ/T

Equivalently, in terms of the wave's angular frequency ω, which specifies the angular change per unit of time, and the wavenumber (or angular wave number) k, the phase velocity is v_p = ω/k.
To understand where this equation comes from, consider a basic sine wave, A·cos(kx − ωt). After time t, the source has produced ωt/2π = ft oscillations. In the same time, the initial wave front has propagated away from the source to a distance x that fits the same number of oscillations (x/λ = kx/2π = ft), so

kx = ωt.
Thus the propagation velocity is
v = dx/dt = ω/k.
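A minimal consistency check of v = λ/T = ω/k, with made-up numbers:

```python
import math

# Phase speed computed two equivalent ways (wavelength and period are illustrative).
lam, T = 2.0, 0.5             # wavelength, m; period, s
omega = 2 * math.pi / T       # angular frequency, rad/s
k = 2 * math.pi / lam         # angular wave number, rad/m
v1 = lam / T                  # v = lambda / T
v2 = omega / k                # v = omega / k
print(v1, v2)                 # both equal 4.0 m/s
```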

4. Wave equation.

Wave propagation in a homogeneous isotropic medium is, in general, described by the wave equation. The wave equation is a linear second-order partial differential equation.
For a plane wave propagating along the OX axis, the wave equation has the form:

∂²y/∂x² = (1/v²)·∂²y/∂t²    (5)

Any wave function, for example (2), (3) or (4), is a solution of the wave equation (5).
In three dimensions, the wave equation is usually written in the form:

∂²ξ/∂x² + ∂²ξ/∂y² + ∂²ξ/∂z² = (1/v²)·∂²ξ/∂t²    (6)

where we use the Greek letter ξ (xi) to denote the displacement of the points of the medium.
This equation is often written using the Laplace operator, or Laplacian:

Δ = ∂²/∂x² + ∂²/∂y² + ∂²/∂z²

Δξ = (1/v²)·∂²ξ/∂t²    (7)

Solutions of the wave equation (6) or (7) have forms such as:

ξ(r, t) = A·cos(ωt − kr + φ₀)   or   ξ(r, t) = A·e^(i(ωt − kr + φ₀))

ξ(r, t) = (A₀/r)·e^(i(ωt − kr + φ₀))

where A₀ is the amplitude of the wave at its origin point and r is the distance from the origin to a given point. As can be seen, the amplitude of a spherical wave decreases with distance according to the law A = A₀/r. The first two equations are for plane waves and the third one is for spherical waves.

5. The Rate of Energy Transfer by Sinusoidal Waves.

A medium with a propagating wave has a certain energy. The energy density at each point of the medium is determined by the formula:

w = W/V = ρA²ω²/2    (8)
where ρ= m/V - density of the medium; A -amplitude and ω - angular frequency of wave.
This energy is delivered from the oscillation source to different points of the medium by the wave itself. Thus a wave transports energy.
The amount of energy carried by a wave through a surface per unit time, ΔΦ = ΔW/Δt, is called the energy flux through the surface. To characterize the energy flow at different points of the medium, we introduce a vector quantity called the energy flux density:

j = ΔΦ/ΔA⊥ = ΔW/(ΔA⊥·Δt)    (9)

where ΔΦ is the energy flux, ΔA⊥ is the area perpendicular to the propagation of the wave, and Δt is the time interval. Since ΔW = w·ΔA⊥·v·Δt, substituting this into (9) we get

j = wv    (10)
Combining (10) with (8), we can write for the energy flux density:

j = wv = ½·ρA²ω²·v    (11)
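Equations (8), (10) and (11) can be evaluated for an illustrative sound-like wave in air; all the numerical values below are assumptions, not data from the text:

```python
import math

# Energy density w = rho*A^2*omega^2/2 (eq. 8) and flux density j = w*v (eq. 10-11).
rho = 1.2      # density of air, kg/m^3 (illustrative)
A = 1e-5       # displacement amplitude, m (illustrative)
f = 1000.0     # frequency, Hz
v = 340.0      # wave speed in air, m/s

omega = 2 * math.pi * f
w = 0.5 * rho * A ** 2 * omega ** 2   # energy density, J/m^3
j = w * v                             # energy flux density, W/m^2
print(f"w = {w:.3e} J/m^3, j = {j:.3e} W/m^2")
```

Note how strongly j depends on frequency: doubling f quadruples the flux density at fixed amplitude.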

Lecture 3. Superposition of waves. Sound and Ultrasound.


1. The principle of superposition.
Waves have a remarkable difference from particles in that waves can be combined at the
same location in space. To analyze such wave combinations, we make use of the superposition
principle:
If two or more traveling waves are moving through a medium, the resultant value of the
wave function at any point is the algebraic sum of the values of the wave functions of the
individual waves.
One consequence of the superposition principle is that two traveling waves can pass through
each other without being destroyed or even altered. For instance, when two pebbles are thrown
into a pond and hit the surface at different locations, the expanding circular surface waves from
the two locations simply pass through each other with no permanent effect. The resulting
complex pattern can be viewed as two independent sets of expanding circles.

2. Interference of waves.

The combination of separate waves in the same region of space to produce a resultant wave is
called interference.
For the two pulses shown in Figure, the displacement of
the elements of the medium is in the same phases for both
pulses, and the resultant pulse (created when the individual
pulses overlap) exhibits an amplitude greater than that of
either individual pulse. Because the displacements caused by
the two pulses are in the same direction, we refer to their
superposition as constructive interference.
Now consider two pulses where one pulse is inverted
relative to the other as illustrated in Figure. When these
pulses begin to overlap, the resultant pulse is given by y1 +
y2, but the values of y2 are negative where those of y1 are
positive, and vice versa.
Again, the two pulses pass through each other; because the
displacements caused by the two pulses are in opposite
directions, however, we refer to their superposition as
destructive interference.
In general, interference of waves is the superposition phenomenon in which the waves steadily (over time) reinforce one another at some points of space and weaken one another at others, depending on the relationship between the phases of these waves.

To produce interference that is stable in time, the waves must be coherent.
Waves are called coherent if they have the same frequency and the phase difference between them remains constant over time.
Consider the superposition of two coherent spherical waves with equal amplitude A₀, frequency ω, and a stable phase difference:

ξ₁ = (A₀/r₁)·cos(ωt − kr₁ + φ₁)   and   ξ₂ = (A₀/r₂)·cos(ωt − kr₂ + φ₂)
where r1 and r2 - distance from the wave sources to the point of investigation; k - wave
number, φ1 and φ2 - initial phases of corresponding waves.
We introduce the notation:
ΔΦ = (ωt − kr₂ + φ₂) − (ωt − kr₁ + φ₁) — the phase difference between the two waves at the point of investigation;
Δφ = φ₂ − φ₁ — the phase difference of the wave sources;
A₁ = A₀/r₁ and A₂ = A₀/r₂ — the amplitudes of the waves at the point of investigation.
To find the amplitude of the resulting oscillations, it is convenient to use a phasor diagram:

A² = A₁² + A₂² + 2A₁A₂·cos(ΔΦ) = A₀²·{1/r₁² + 1/r₂² + (2/(r₁r₂))·cos[k(r₁ − r₂) − (φ₁ − φ₂)]}
For coherent waves, the phase difference of the wave sources, Δφ = φ₂ − φ₁, is constant, so the result of the interference of the two waves depends on Δr = r₁ − r₂. We will call it the path difference.
When the path difference Δr = |r₂ − r₁| is either zero or some integer multiple of the wavelength λ, that is

Δr = nλ, where n = 0, 1, 2, 3, . . .
(equivalently, Δr = n·λ/2 for n zero or even, n = 0, 2, 4, 6, . . .),

the two waves reaching the point of investigation at any instant are in phase and interfere constructively.
Thus the maximum amplitude

A = A₀/r₁ + A₀/r₂

is observed at the points where

k(r₁ − r₂) − (φ₁ − φ₂) = ±2nπ

where n = 0, 1, 2, 3, . . . is called the order of the interference maximum.

If the path difference Δr = λ/2, 3λ/2, . . . equals an odd number of half wavelengths,

Δr = n·λ/2 (for n odd, n = 1, 3, 5, . . .),

the two waves are exactly π rad, or 180°, out of phase at the point of investigation and hence weaken each other. In this case destructive interference is detected.
The minimum amplitude

A = |A₀/r₁ − A₀/r₂|

is observed at the points where

k(r₁ − r₂) − (φ₁ − φ₂) = ±(2m + 1)π

where m = 0, 1, 2, 3, . . . is called the order of the interference minimum.
The phase difference may arise between two waves generated by the same source when they
travel along paths of unequal lengths.
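The maximum and minimum conditions can be verified with the phasor formula; the amplitudes and wavelength below are illustrative, and for simplicity the sources are taken to oscillate in phase (Δφ = 0):

```python
import math

# Resultant amplitude of two coherent waves from the phasor formula,
# with phase difference k*dr (sources in phase; values are illustrative).
def interference_amplitude(A1, A2, dr, lam):
    """A = sqrt(A1^2 + A2^2 + 2*A1*A2*cos(k*dr)), k = 2*pi/lam."""
    k = 2 * math.pi / lam
    return math.sqrt(A1 ** 2 + A2 ** 2 + 2 * A1 * A2 * math.cos(k * dr))

lam = 0.5
A_max = interference_amplitude(3.0, 2.0, 2 * lam, lam)    # dr = 2*lambda: constructive
A_min = interference_amplitude(3.0, 2.0, 1.5 * lam, lam)  # dr = 3*lambda/2: destructive
print(A_max, A_min)   # A1 + A2 = 5.0 and |A1 - A2| = 1.0
```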

3. The group velocity.

Any complex oscillation can be represented as a sum of simultaneously performed harmonic oscillations (Fourier decomposition). Therefore, any wave can be represented as a sum of harmonic waves, that is, in the form of a wave packet, or group of waves.
A wave packet is a superposition of waves that differ little from one another in frequency and occupy, at any one time, a restricted region of space.
For the propagation velocity of the wave packet we take the travel speed of its amplitude maximum (the center of the wave packet) and call it the group velocity u.
The group velocity u is the velocity of the center of the localized wave packet that the wave group forms at each instant in space. Its value is defined as

u = dx/dt = dω/dk

The relation between the group velocity u and the phase velocity v is

u = v − λ·(dv/dλ)

Recall that the phase velocity is v = ω/k.

4. Standing waves.

A standing wave is a wave formed by the superposition of two traveling waves of the same frequency moving in opposite directions.
We can analyze such a situation by considering the wave functions of two transverse sinusoidal waves having the same amplitude, frequency, and wavelength but traveling in opposite directions in the same medium:

ξ₁ = A·cos(ωt − kx)   and   ξ₂ = A·cos(ωt + kx)

Adding these two functions gives the resultant wave function:

ξ(x, t) = ξ₁ + ξ₂ = 2A·cos kx · cos ωt = 2A·cos(2πx/λ) · cos ωt

Notice that this equation does not contain a function of kx − ωt. Therefore, it is not an expression for a single traveling wave. When you observe a standing wave, there is no sense of motion in the direction of propagation of either original wave.
Every element of the medium oscillates in simple harmonic motion with the same angular
frequency ω (according to the cos ωt factor in the equation). The amplitude of the simple
harmonic motion of a given element (given by the factor 2A cos kx, the coefficient of the cosine
function) depends on the location x of the element in the medium, however.
The points of zero amplitude are called nodes. The element of the medium with the greatest
possible displacement from equilibrium has amplitude of 2A, which we define as the amplitude
of the standing wave. The positions in the medium at which this maximum displacement occurs
are called antinodes. The antinodes are located at positions for which the coordinate x satisfies
the condition cos kx = ±1, that is, when:

kx = (2π/λ)·x = ±mπ

Coordinates of the antinodes x_a:

x_a = ±m·λ/2   (m = 0, 1, 2, 3, . . .)

The coordinates of the nodes x_n we obtain from the condition cos kx = 0:

kx = (2π/λ)·x = ±(m + ½)π

and we get:

x_n = ±(m + ½)·λ/2   (m = 0, 1, 2, 3, . . .)
The distance between two neighboring nodes, or between two neighboring antinodes, is the same and equals half the wavelength λ of the traveling waves. This distance is called the wavelength of the standing wave: λ_st = λ/2.
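The node and antinode positions follow directly from the envelope 2A·cos kx; a small sketch (λ chosen arbitrarily) locates one of each:

```python
import math

# Standing-wave envelope |2A*cos(k x)|: antinodes at x = m*lambda/2,
# nodes at x = (m + 1/2)*lambda/2 (lambda is an arbitrary illustrative value).
lam = 1.0
A = 1.0
k = 2 * math.pi / lam

def envelope(x):
    """Local oscillation amplitude of the standing wave at position x."""
    return abs(2 * A * math.cos(k * x))

antinode = envelope(2 * lam / 2)          # m = 2: amplitude is 2A
node = envelope((2 + 0.5) * lam / 2)      # m = 2: amplitude is 0
print(antinode, node)
```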
The table below compares the properties of traveling and standing waves.

Amplitude of oscillations:
 Traveling wave: all points oscillate with the same amplitude.
 Standing wave: points oscillate with different amplitudes, depending on the coordinate.

Phase of oscillations:
 Traveling wave: the phase of the oscillations depends on the coordinate.
 Standing wave: all points between two neighboring nodes oscillate with the same phase. On passing through a node the phase changes by π; points lying on opposite sides of a node oscillate in antiphase.

Transfer of energy:
 Traveling wave: energy is transferred in the direction of propagation of the wave.
 Standing wave: no energy transfer; only within a distance λ/2 does mutual conversion of kinetic energy into potential energy (and vice versa) occur.

For example, standing waves can be obtained when a traveling wave is superposed with the wave reflected from the boundary between two media. If the reflecting medium is less dense, an antinode arises at the boundary; if the second medium is denser, a node arises at the boundary.
5. Standing Waves on a String. Waves in a Vibrating Column of Air.

Consider a string of length L fixed at both ends as shown in Figure. Waves can travel in both directions on the string. Therefore, standing waves can be set up in the string by a continuous superposition of waves incident on and reflected from the ends. Notice that there is a boundary condition for the waves on the string: because the ends of the string are fixed, they must necessarily have zero displacement and are therefore nodes by definition.
The boundary condition results in the string
having a number of discrete natural patterns of
oscillation, called normal modes, each of which has
a characteristic frequency that is easily calculated.
This situation in which only certain frequencies of
oscillation are allowed is called quantization.
Thus if the string is struck sharply and thereafter
allowed to vibrate on its own, only the resonant
frequencies will persist. The lowest resonant
frequency of vibration of the string (or other object)
is called its fundamental frequency.
The lowest frequency of vibration of the string is

f = v/λ = v/(2L)

where L is the length of the string and v is the speed of the wave along the string.
Nodes are points of zero vibrational amplitude, and the distance between adjacent nodes is always one half of the wavelength.
Halfway between each pair of nodes is an antinode, a point on the string that vibrates with
the greatest amplitude.
Thus the fundamental vibration of a string with fixed end points has a node at each end and a
single antinode in between.
Resonant frequencies that are integer multiples of the fundamental frequency are called
harmonic frequencies. For our string, the fundamental frequency is the first harmonic; the
frequency that is double this value is the second harmonic, and so on.
For a string of length L, the resonant wavelengths must be consistent with
L = n×λ/2 where n = 1, 2, 3, . . .
For fixed L, there is a wavelength λn associated with each integer n.
λn = 2L/n

The frequency is

fn = v/λn = n·v/(2L)
For flexible strings the wave speed is given in meters per second by

v = √(T/(m/L))

where T is the tension in the string in newtons and m/L is the linear mass density in kilograms per meter.
The resonant frequencies of a stretched flexible string are then given by:

fn = (n/(2L))·√(T/(m/L))
Increasing the tension in the string raises the speed of waves along it and thus raises the
natural vibrational frequencies.
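The relations above (fn = n·v/(2L) with v = √(T/(m/L))) combine into a short sketch; the numerical values below are illustrative, not taken from the text:

```python
from math import sqrt

# Resonant frequencies of a stretched string: f_n = (n / (2L)) * sqrt(T / mu),
# where mu = m/L is the linear mass density. Illustrative values only.

def string_harmonics(T, mu, L, n_max):
    v = sqrt(T / mu)  # wave speed on the string, v = sqrt(T / (m/L))
    return [n * v / (2 * L) for n in range(1, n_max + 1)]

# Example: T = 100 N, mu = 0.01 kg/m, L = 0.5 m gives v = 100 m/s, f1 = 100 Hz
print(string_harmonics(100.0, 0.01, 0.5, 3))  # [100.0, 200.0, 300.0]
```

Doubling the tension raises every harmonic by a factor of √2, which is how string instruments are tuned.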
Now consider standing wave in the pipe. Hollow pipes have long been used for making
musical sounds. Flutes, organ pipes, and children's whistles produce sounds in similar ways.
For the cases of two closed or two open ends of the pipe, the wavelength λn associated with each integer n is:

λn = 2L/n

The frequency is:

fn = v/λn = n·v/(2L),    n = 1, 2, 3, ...

For the case of one open and one closed end of the pipe, the formulas for wavelength and frequency are:

λn = 4L/n    fn = n·v/(4L)

where n = 1, 3, 5, ... (only odd numbers)
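The two pipe cases can be compared side by side in a small sketch; the speed of sound used here, 340 m/s, is an assumed round value:

```python
# Resonant frequencies of an air column of length L with sound speed v:
# both ends alike      -> f_n = n*v/(2L), n = 1, 2, 3, ...
# one open, one closed -> f_n = n*v/(4L), n = 1, 3, 5, ... (odd only)

def pipe_harmonics(v, L, n_max, closed_one_end=False):
    if closed_one_end:
        return [n * v / (4 * L) for n in range(1, n_max + 1, 2)]
    return [n * v / (2 * L) for n in range(1, n_max + 1)]

v = 340.0  # assumed speed of sound in air, m/s
print(pipe_harmonics(v, 0.5, 3))                       # [340.0, 680.0, 1020.0]
print(pipe_harmonics(v, 0.5, 5, closed_one_end=True))  # [170.0, 510.0, 850.0]
```

Note that the open-closed pipe of the same length sounds an octave lower and is missing the even harmonics.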

6. Sound waves. Hearing threshold and pain threshold


The minimum sound level that can be heard by the human ear is called the hearing threshold. The maximum sound level above which a person experiences pain is called the pain threshold.
The unit of sound intensity level is the bel (named in honor of the inventor of the telephone, Alexander Graham Bell). Usually we use the decibel, dB; the decibel is one-tenth of a bel. The intensity level IL in decibels is defined to be 10 times the logarithm of the ratio of two intensities I and I0; that is,
IL (in decibels) = 10 lg(I/I0)
For example, if the intensity I exceeds the reference intensity I0 by a factor of 4, the intensity level of I is about 6 decibels (6 dB) above I0:
IL = 10 lg 4 = 10(0.602) ≈ 6 dB.
Often I0 is taken as the threshold of sensitivity of the human ear.
The standard reference level of sound intensity is 10⁻¹² W/m², which is approximately the intensity that can just barely be heard by a person with good hearing. This intensity is called the threshold of hearing. The intensity corresponding to the pain threshold is about 1 W/m². The intensity level of a very quiet recording studio is about 20 dB, corresponding to a sound intensity 100 times greater than the quietest sound you can hear.
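The decibel formula and the numbers above can be checked with a few lines of Python (the helper name is illustrative):

```python
from math import log10

I0 = 1e-12  # W/m^2, threshold of hearing (reference intensity)

def intensity_level(I, I_ref=I0):
    """IL in decibels: 10 times the base-10 logarithm of I / I_ref."""
    return 10 * log10(I / I_ref)

print(round(intensity_level(4e-12), 1))  # 6.0 dB: intensity 4x the reference
print(round(intensity_level(1e-10)))     # 20 dB: the quiet recording studio
print(round(intensity_level(1.0)))       # 120 dB: pain threshold, 1 W/m^2
```

The logarithmic scale compresses the enormous 10¹² ratio between the hearing and pain thresholds into 120 dB.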
There are waves of different frequencies in the environment. However, not all of them can be heard by the human ear. The human ear can hear only waves whose frequencies range from 20 to 20,000 Hz; these are called sound waves. Waves with a frequency less than 20 Hz are called infrasound, and waves with a frequency greater than 20,000 Hz are called ultrasound. Sound is mainly characterized by its pitch (tone), intensity (loudness), and timbre. The pitch of a sound is determined by its frequency: the higher the frequency, the higher the pitch.
The intensity of sound has an energetic character: sound waves carry energy. The amount of energy E passing through a surface S (fig. 2.5) in time t defines the intensity:

J = E/(S·t)

We perceive the power of a sound as its loudness. As we have shown for harmonic oscillation, the energy of the oscillation is directly proportional to the square of its amplitude:

E = mω²A²/2 = ρVω²A²/2

Here ρ is the density of the medium, A is the amplitude of the oscillation, and ω is the frequency of the oscillation. The power of a sound is measured in watts.

Dividing the energy by the volume S·υ·Δt swept by the wave gives the energy per unit volume:

E/(SυΔt) = ρω²A²/2

The intensity J of a sound wave is this energy density multiplied by the wave speed υ: J = ρω²A²υ/2.

Pitch is the sound quality, subjectively determined by the person at the hearing, and
depending on the sound frequency. With increasing frequency, the pitch increases, i.e., the sound
becomes higher. The nature of the acoustic spectrum and the distribution of energy between the
frequencies determines the originality of sound sensations, called timbre of sound.
Sound waves are longitudinal waves. Calculations show that the speed of propagation of sound waves in a medium is determined as follows:

u = √(E/ρ)

Here E = σ/ε = (F·l)/(S·Δl) is the Young's modulus (elastic coefficient) of the medium and ρ is the density of the medium; σ = F/S is the mechanical stress, ε = Δl/l is the relative elongation.
In elastic media, the source of sound waves can be any object that oscillates with a sound frequency. An oscillating object makes the particles of the medium near it oscillate at that frequency. This oscillating motion is transmitted sequentially and propagates in the medium in the form of a wave with a frequency equal to the frequency of the source.
For a gas, the mechanical stress should be replaced by the pressure difference ΔP, and the relative elongation ε by the relative change in volume ΔV/V. In this case

E = ΔP/(ΔV/V)   or   E = −V·dP/dV

Air compression and rarefaction in the process of sound vibrations happen so quickly that the change of state of the gas can be considered adiabatic:
PV^γ = const
Here γ = CP/CV is called the adiabatic exponent. After some transformations, we get:

u = √(γRT/μ) = √(γP/ρ)
Thus, the speed of sound in a gas is directly proportional to the square root of the absolute temperature and depends on the type of gas.
The speed of propagation of sound waves in the atmosphere is influenced by the homogeneity of the environment (density and wind), humidity, and temperature. The speed of sound is higher in hot air than in cold air. For this reason, on a sunny day the speed of sound decreases with altitude above the warm surface of the Earth, and the sound rays are bent upwards; this is why sound is hard to hear on a sunny day. In the evening, the Earth's surface is relatively cold and the air above it is warm, so the sound rays are bent downwards. This explains why sounds carry well in the evening.
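A short sketch of u = √(γRT/μ) confirms the temperature dependence just described; γ = 1.4 and μ = 0.029 kg/mol are assumed textbook values for air:

```python
from math import sqrt

R = 8.314  # J/(mol*K), universal gas constant

def sound_speed(gamma, mu, T):
    """Speed of sound in an ideal gas, u = sqrt(gamma * R * T / mu)."""
    return sqrt(gamma * R * T / mu)

u_cold = sound_speed(1.4, 0.029, 273.15)  # air at 0 C
u_warm = sound_speed(1.4, 0.029, 303.15)  # air at 30 C
print(round(u_cold), round(u_warm))       # 331 349 (m/s): faster in warm air
```

The roughly 18 m/s difference over 30 degrees is what bends sound rays in a temperature-stratified atmosphere.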
7. Ultrasound and its application

By its nature, an ultrasound wave is an elastic wave, and in that respect it does not differ from sound. Ultrasound has high frequencies (f > 20 kHz) and, consequently, short wavelengths. It is therefore characterized by special properties that allow it to be distinguished as a separate class of phenomena. Due to their small wavelengths, ultrasonic waves can be produced as rigorously directed beams, like light.
To generate ultrasound, two phenomena are mainly used:
1. The inverse piezoelectric effect - the occurrence of deformation in a quartz plate cut in a certain way (recently barium titanate is used instead of quartz) under the influence of an electric field. If such a plate is placed in a high-frequency alternating field, forced oscillations can be produced in it. At the resonance frequency of the plate, a large vibration amplitude is obtained and, consequently, a large intensity of the emitted ultrasonic waves.
2. Magnetostriction - the occurrence of deformation of ferromagnets under the influence of a magnetic field. A ferromagnetic core (e.g., nickel or iron) is placed in a rapidly alternating magnetic field, which excites mechanical oscillations whose amplitude is maximum in the case of resonance.

 Ultrasounds are widely used in technology, for example in directional underwater signaling and in the detection of underwater objects and determination of depth (sonar, echo sounder).
 For example, a piezoelectric sounder mounted on a ship sends directed ultrasonic signals that reach the bottom, are reflected from it, and come back. Knowing the speed of sound in water and measuring the transit time (from emission to return) of the ultrasonic signal, the depth can be calculated. Echo reception is also done with the help of piezoelectric quartz: sound vibrations reaching the piezoquartz cause elastic vibrations in it. As a result, an electric potential difference is produced on the opposite surfaces of the quartz, and it can be measured.
 If an ultrasonic signal passes through a test part, it is possible to detect defects in it by the characteristic scattering of the beam and by the appearance of ultrasound shadows. On this principle a new branch of engineering was created - ultrasonic testing.
 The use of ultrasound also laid the basis for a new field of acoustics - acoustoelectronics, which allows the development of tools for processing signal information in microelectronics.
 Ultrasound is used to influence a variety of processes (crystallization, diffusion, mass and heat transfer in metallurgy, etc.) and biological processes (increasing the intensity of exchange processes, etc.), and to study the physical properties of substances (absorption, material structure, etc.).
 Ultrasound is also used for the machining of very hard and very brittle bodies.
 In medicine (diagnostics, ultrasound surgery, and so on).
General Physics Lecture 4. Fluid Mechanics
A fluid is a collection of molecules that are randomly arranged and held together by weak cohesive forces and by forces exerted by the walls of a container.
The attractive force between molecules that acts to hold a fluid together is called cohesion.
The attractive force between unlike molecules - say, between water and glass - is called
adhesion.
Both liquids and gases are fluids.
In our treatment of the mechanics of fluids, we’ll be applying principles and analysis models
that we have already discussed. First, we consider the mechanics of a fluid at rest, that is, fluid
statics, and then study fluids in motion, that is, fluid dynamics.

1. Pressure. Hydrostatic Pressure.

The pressure P is the force acting per unit area:

P = F/A

In SI the unit of pressure is given the name pascal: Pa = N/m². Other units of pressure and their relationship with the pascal:

Name                   Value (N/m² = Pa)
1 bar                  1.00 · 10⁵
1 atmosphere (atm)     1.01 · 10⁵
1 mm Hg                1.33 · 10²
Consider an upright cylinder containing a liquid. A force acts
on the bottom of the cylinder as a result of the weight of the
liquid inside. The pressure of the liquid on the bottom is:
w mg
P= =
A A
where m is the mass of the liquid, A is the area of the bottom
of the cylinder, and g is the acceleration of gravity. Note that
although force is a vector quantity, pressure is a scalar.

It is convenient to use the concept of density when discussing pressure. The density of a substance, d, was defined as its mass per unit volume, d = m/V. Pressure can be written in terms of density:

P = dgh   (hydrostatic pressure)

where h is the depth below the surface of the liquid. We can now see that pressure is directly proportional to both the density and the depth of the liquid.

2. Pascal’s Principle.
The pressure applied at one point in an enclosed fluid is transmitted undiminished to every
part of the fluid and to the walls of the container.
Pascal's principle holds for gases as well as for liquids, with some minor modifications due to
the change in volume of a gas when the pressure is changed.
Some Applications of Pascal's principle:
1) hydraulic lift: P1 = P2
A2 > A1
F1 = P1·A1; F2 = P2·A2
F2 > F1

2) pressure-measuring devices: a tire gauge with a spring-loaded piston, the liquid manometer, and the Torricelli experiment.

3. Archimedes’s Principle.

A body, whether completely or partially submerged in a fluid, is buoyed upward by a force that is equal to the weight of the displaced fluid:

Fbuoyant = dfluid·g·Vsubmerged

The buoyant force depends only on the volume of the submerged part of the body and on the density of the surrounding fluid, not on the body's mass or density.
1. If the body is denser than the fluid, the buoyant force alone cannot support it and the body sinks to the bottom.
2. If the fluid has a density greater than that of the body, the body floats.
Blimps, hot-air balloons, and other lighter-than-air craft furnish another example of
Archimedes' principle. They float in the air just as a submerged fish floats in the water. The
blimp obviously has individual parts that will not float in air.
However, the average density of the whole craft, including passengers, must be less than the
density of air to accomplish an unpowered takeoff. To meet this requirement in a practical way,
the blimp contains a large volume of helium. The same type of observation can be made about
large ships, which float even though they are made of steel and carry dense objects. The
explanation is that the average density of the entire ship, including all of the air spaces, is less
than the density of water.
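Archimedes's principle reduces to one multiplication; the sketch below uses assumed round values (g = 9.8 m/s², water density 1000 kg/m³) and illustrative function names:

```python
G = 9.8  # m/s^2, assumed value of the acceleration of gravity

def buoyant_force(fluid_density, submerged_volume, g=G):
    """Weight of the displaced fluid: F = d_fluid * g * V_submerged."""
    return fluid_density * g * submerged_volume

def floats_up(body_density, fluid_density):
    """A fully submerged body rises if it is less dense than the fluid."""
    return body_density < fluid_density

water = 1000.0  # kg/m^3
print(round(buoyant_force(water, 0.001), 3))  # 9.8 N on one liter of displaced water
print(floats_up(500.0, water))                # True: a wood-like body floats
print(floats_up(7800.0, water))               # False: a steel-like body sinks
```

For a ship or blimp, the comparison uses the average density of the whole craft, air spaces included.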

4. Streamlines and Equation of Continuity.


Consider the flow of a fluid. First assume the fluid 1) is incompressible, 2) has no internal friction, 3) has no viscosity. Figure shows a fluid flowing.
If the neighboring layers of fluid move past each other smoothly, each small element of fluid follows a path called a streamline.
If we draw all of the streamlines from the boundary of region A to some later position B, we outline a tube of flow (stream tube).
Fluid particles cannot flow into or out of the sides of this tube; if they could, the streamlines would cross one another.
The smooth streamline flow is known as laminar flow. When the fluid exceeds a certain critical velocity, the flow is no longer laminar but becomes turbulent and is characterized by an irregular, complex motion.

Let us consider an incompressible fluid flowing steadily through a tube from a region of cross-sectional area A1 to a region of area A2.
If the density of the fluid is d, then the mass of fluid that flows into the tube in time Δt is d·v1·A1·Δt. Similarly, the mass of fluid that flows out of the tube through A2 in the same time Δt is d·v2·A2·Δt. Since the mass of fluid entering is the same as the mass leaving:

d·v1·A1·Δt = d·v2·A2·Δt

Dividing out the density d, because it is constant for an incompressible fluid, we get

v1·A1 = v2·A2
This equation is called the equation of continuity and will be useful throughout our
discussion of fluids in motion. It states that the flow of material (mass) through a tube of
changing cross section is constant when the density of the fluid does not change.
Notice that the product vA is the volume rate of flow. In SI units the volume rate of flow is measured in m³/s.
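The continuity equation is easy to apply numerically; the areas and speed below are illustrative:

```python
# Equation of continuity, v1*A1 = v2*A2: what flows in must flow out.

def speed_downstream(v1, A1, A2):
    return v1 * A1 / A2

v1, A1 = 2.0, 0.01  # m/s, m^2
A2 = 0.005          # m^2, half the cross-section
print(speed_downstream(v1, A1, A2))  # 4.0 m/s: halving the area doubles the speed
print(v1 * A1)                       # 0.02 m^3/s, the (constant) volume rate of flow
```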

5. Bernoulli’s Equation.
As a fluid moves through a region where its speed or elevation above the Earth’s surface
changes, the pressure in the fluid varies with these changes. Consider the flow of a segment of an
ideal fluid through a nonuniform pipe in a time interval Δt as illustrated in Figure. We have
added two features: the forces on the outer ends of the ΔV portions of fluid and the heights of
these portions above the reference position h= 0.
We calculate the work done on a small element
of fluid moving along a tube of flow and then use
the work - energy theorem to equate the change in
kinetic energy to this work.
To move a small element of fluid through a distance of Δx1 at region 1 requires an amount of work P1·A1·Δx1 = P1·ΔV. (The amount of volume ΔV is equal at both regions, in accordance with the equation of continuity.)
At the same time, the same amount of fluid (given by ΔV = A2·Δx2) moves a distance Δx2 at region 2. The work in this case is −P2·A2·Δx2 = −P2·ΔV. The negative sign indicates that the element of fluid at region 2 moves against the force due to the pressure of the fluid to its left.
Net work is: W = (P1 - P2)ΔV
This work is spent on changing the potential energy and the kinetic energy of the mass m of fluid:

W = (P1 − P2)ΔV = ΔPE + ΔKE

(P1 − P2)ΔV = mg(h2 − h1) + (mv2²/2 − mv1²/2)
Recall that the mass of fluid is m = ρΔV, where ρ is the density of the fluid:

(P1 − P2)ΔV = ρΔV·g(h2 − h1) + ρΔV·(v2²/2 − v1²/2)

If we divide each term by the portion volume ΔV, this expression reduces to

P1 − P2 = ρg(h2 − h1) + ρ(v2²/2 − v1²/2)

Rearranging terms gives

P1 + ρv1²/2 + ρgh1 = P2 + ρv2²/2 + ρgh2
This expression is called Bernoulli's equation. It describes the relationship of a fluid's pressure, velocity, and height as it moves along a pipe or other tube of flow.
The first summand in Bernoulli's equation is called the static pressure (P), the second summand the dynamic pressure (½ρv²), and the third summand the hydrostatic pressure (ρgh).
Remember, that Bernoulli's equation is valid only for:
1) incompressible fluids of 2) negligible viscosity in 3) laminar flow.

Under these conditions, Bernoulli's equation expresses conservation of energy in a moving fluid.
Let us consider a horizontal pipe (h1 = h2).
The equation of continuity tells us that the fluid flows more rapidly in a constricted region of the pipe. If we combine the equation of continuity with Bernoulli's equation, we find that when a moving fluid enters a narrower section of pipe, its speed increases but the pressure on the fluid decreases: if v2 is greater than v1, then P2 is smaller than P1.
Bernoulli's equation gives us methods to measure the velocity of a flow and the different kinds of pressure in a flow.
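Combining continuity with Bernoulli's equation for a horizontal pipe gives the pressure drop directly; the sketch below uses an illustrative water-filled constriction where the cross-section is halved:

```python
# Horizontal pipe (h1 = h2): P1 + rho*v1^2/2 = P2 + rho*v2^2/2,
# with v2 = v1*A1/A2 from the equation of continuity.

def pressure_drop(rho, v1, A1, A2):
    """Returns P1 - P2 when the pipe narrows from A1 to A2."""
    v2 = v1 * A1 / A2
    return rho * (v2**2 - v1**2) / 2

dp = pressure_drop(1000.0, 2.0, 0.01, 0.005)  # water, speed goes 2 -> 4 m/s
print(dp)      # 6000.0 Pa: the pressure is lower where the flow is faster
assert dp > 0  # P1 > P2, as stated above
```

Measuring such a pressure difference across a known constriction is the working principle of the Venturi meter.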
Venturi meter

Bernoulli's equation helps to explain how airplanes can fly and why they do not fall.

6. Viscosity and Poiseuille’s Law.


Viscosity is a property of a fluid that indicates its internal friction. The more viscous a fluid,
the greater the force required to cause one layer of fluid to slide past another.
Viscosity is what prevents objects from moving freely through a fluid, or a fluid from flowing
freely in a pipe. The viscosity of gases is less than that of liquids.
The viscosity of different fluids can be expressed quantitatively by a coefficient of viscosity, η (the Greek lowercase letter eta), which is defined in the following way. A thin layer of fluid is placed between two flat plates. One plate is stationary and the other is made to move, as in the Figure. The fluid directly in contact with each plate is held to the surface by the adhesive force between the molecules of the liquid and those of the plate.
Thus the upper surface of the fluid moves with the same speed v as the upper plate, whereas
the fluid in contact with the stationary plate remains stationary. The stationary layer of fluid
retards the flow of the layer just above it, which in turn retards the flow of the next layer and so
on. Thus the velocity varies continuously from 0 to v, as shown. The increase in velocity divided
by the distance l over which this change is made, equal to v/l, is called the velocity gradient (more exactly dv/dx).
To move the upper plate requires a force which you can verify by moving a flat plate across a
puddle of syrup on a table. For a given fluid, it is found that the force required, F, is proportional
to the area of fluid in contact with each plate, A, and to the speed, v, and is inversely proportional
to the separation l of the plates. For different fluids, the more viscous the fluid, the greater is the required force. The proportionality constant for this equation is defined as the coefficient of viscosity, η:

F = η(v/l)A   or   F = η(dv/dx)A

Solving for η:

η = Fl/(vA)
The SI unit for η is N·s/m² = Pa·s (pascal·second). Viscosities are often given in centipoise (1 cP = 10⁻³ Pa·s).
For liquids, viscosity increases with decreasing temperature.
If there were no viscosity, the pressure in a fluid moving through a horizontal pipe of uniform cross section would be constant.
Viscosity acts like a sort of friction
(between fluid layers moving at slightly
different speeds), so a pressure difference
between the ends of a level tube is
necessary for the steady flow of any real
fluid, be it water or oil in a pipe, or blood in
the circulatory system of a human.
The French scientist J. L. Poiseuille
(1799–1869), who was interested in the
physics of blood circulation (and after
whom the “poise” is named), determined
how the variables affect the flow rate of an
incompressible fluid undergoing laminar
flow in a cylindrical tube.
His result, known as Poiseuille's equation, is:

Q = πR⁴·(P1 − P2)/(8ηL)

where Q = vA is the volume rate of flow in m³/s, η is the coefficient of viscosity, R is the radius of the pipe, P1 − P2 is the pressure difference between the test points, and L is the separation between the test points.

If R and L are given in meters and the pressure is given in pascals, the unit of the coefficient
of viscosity η is the pascal-second (Pa.s).
Poiseuille's law is often used experimentally to determine the coefficient of viscosity of a
liquid.
Poiseuille's equation tells us that the flow rate Q is directly proportional to the "pressure gradient," and it is inversely proportional to the viscosity of the fluid. This is just what we might expect. It may be surprising, however, that Q also depends on the fourth power of the tube's radius. This means that for the same pressure gradient, if the tube radius is halved, the flow rate is decreased by a factor of 16!
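The R⁴ dependence is easy to verify numerically; the viscosity below is an assumed value close to that of water:

```python
from math import pi

# Poiseuille's equation: Q = pi * R^4 * (P1 - P2) / (8 * eta * L).

def poiseuille_flow(R, dP, eta, L):
    return pi * R**4 * dP / (8 * eta * L)

eta = 1e-3  # Pa*s, roughly the viscosity of water at 20 C (assumed)
Q_full = poiseuille_flow(0.01, 100.0, eta, 1.0)   # tube of radius 1 cm
Q_half = poiseuille_flow(0.005, 100.0, eta, 1.0)  # radius halved
print(round(Q_full / Q_half))  # 16: halving the radius cuts the flow 16-fold
```

This fourth-power sensitivity is why a small narrowing of a blood vessel has a large effect on circulation.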

7. Stokes’s Law.
An object moving through a fluid experiences a resistive force, or drag, that is proportional to
 the viscosity of the fluid η
 the speed v (if the object is moving slowly enough)
For a spherical object of radius r, the force is:
F = 6πηrv
where η is the coefficient of viscosity. This equation is known as Stokes's law.
Stokes's law can be used to measure the viscosity of a fluid.

8. Terminal Velocity.

Consider a small solid sphere of radius r dropped into the top of a column of liquid.
At the top of the column the sphere accelerates downward under the
influence of gravity.
However, there are two additional forces, both acting upward:
 the constant buoyant force
 velocity-dependent retarding force.
When the sum of the upward forces is equal to the gravitational force,
the sphere travels with a constant speed vt , called the terminal velocity.
To determine this speed, we write the equation for the equilibrium of forces:
W = Fgrav = Fbuoyancy + Fdrag
Fgrav = (4/3)πr³ρg
Fbuoyancy = (4/3)πr³ρ′g
Fdrag = 6πηrvt
where ρ and ρ′ are the densities of the sphere and the liquid, respectively.
Combining these equations, we get the terminal velocity:

vt = 2r²g(ρ − ρ′)/(9η)
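The terminal-velocity formula can be evaluated for a concrete case; the material constants below (a steel-like sphere in a glycerin-like liquid) are assumed for illustration:

```python
G = 9.8  # m/s^2, assumed acceleration of gravity

def terminal_velocity(r, rho_sphere, rho_fluid, eta, g=G):
    """v_t = 2 r^2 g (rho - rho') / (9 eta), from the force balance above."""
    return 2 * r**2 * g * (rho_sphere - rho_fluid) / (9 * eta)

# r = 1 mm sphere of density ~7800 kg/m^3 in a liquid with
# density ~1260 kg/m^3 and viscosity ~1.5 Pa*s (assumed values)
vt = terminal_velocity(1e-3, 7800.0, 1260.0, 1.5)
print(round(vt, 4))  # 0.0095 m/s, about 1 cm/s
```

Timing such a slow, steady fall over a known distance is exactly how Stokes's law is used to measure viscosity.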
9. Laminar and Turbulent Flow.


The ratio of the inertial force to the viscous force, called the Reynolds number, is a useful parameter for describing fluid flow and for determining the onset of turbulence. It is given by

Re = ρvL/η

where ρ and η are the density and viscosity of the fluid, v is the velocity of the object, and L is a length characteristic of the moving object, or the diameter of the tube.
Reynolds numbers:
 less than about 2000 correspond to laminar flow
 greater than 3000 correspond to turbulent flow
 at values between 2000 and 3000 the flow is unstable and may change back and forth
from one type of flow to another.
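The classification above translates directly into code; the densities and viscosities used are illustrative water-like values:

```python
# Reynolds number Re = rho*v*L/eta and the flow-regime thresholds quoted above.

def reynolds(rho, v, L, eta):
    return rho * v * L / eta

def flow_regime(Re):
    if Re < 2000:
        return "laminar"
    if Re > 3000:
        return "turbulent"
    return "unstable"

# Water (rho = 1000 kg/m^3, eta = 1e-3 Pa*s) in a 1 cm diameter tube:
print(flow_regime(reynolds(1000.0, 0.1, 0.01, 1e-3)))  # Re = 1000 -> laminar
print(flow_regime(reynolds(1000.0, 1.0, 0.01, 1e-3)))  # Re = 10000 -> turbulent
```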

(* For Additional reading)


An important class of effects occurs whenever an object moves more rapidly than we assume
it does for the case in which Stokes's law holds.
As long as the speed is low enough so that the fluid flows smoothly around the object, the
retarding force, or drag, is proportional to the speed:
F= bv
b - proportionality coefficient.
But when the speed becomes so great that the streamline flow breaks up and the fluid swirls
around in eddies, the drag becomes more nearly proportional to the square of the velocity. The
retarding force can then be represented as a sum of two terms:
F = bv + cv2
where b and c are constants determined for each case being considered.

Lecture 5
Poisson's Ratio
When a material is stretched in one direction it tends to get thinner in the other two
directions.
When a sample of material is stretched in one direction it tends to get thinner in the lateral
direction - and if a sample is compressed in one direction it tends to get thicker in the lateral
direction.

Poisson's ratio is

 the ratio of the relative contraction strain (transverse, lateral or radial strain) normal to the
applied load - to the relative extension strain (or axial strain) in the direction of the
applied load
Poisson's Ratio can be expressed as

μ = - εt / εl (1)

where

μ = Poisson's ratio

εt = transverse strain (m/m, ft/ft)

εl = longitudinal or axial strain (m/m, ft/ft)

Strain is defined as "deformation of a solid due to stress".

Longitudinal (or axial) strain can be expressed as


εl = dl / L (2)

where

εl = longitudinal or axial strain (dimensionless - or m/m, ft/ft)

dl = change in length (m, ft)

L = initial length (m, ft)

Contraction (or transverse, lateral or radial) strain can be expressed as

εt = dr / r (2)

where

εt = transverse, lateral or radial strain (dimensionless - or m/m, ft/ft)

dr = change in radius (m, ft)

r = initial radius (m, ft)

Example - Stretching Aluminum


An aluminum bar with length 10 m and radius 100 mm (100·10⁻³ m) is stretched 5 mm (5·10⁻³ m).
The radial contraction in lateral direction can be calculated by combining eq. (1) and (2) to

μ = - (dr / r) / (dl / L) (3)

- and rearranging to

dr = - μ r dl / L (3b)

With Poisson's ratio for aluminum 0.334 - the contraction can be calculated as

dr = −0.334 · (100·10⁻³ m) · (5·10⁻³ m) / (10 m)

   = −1.7·10⁻⁵ m

   = −0.017 mm (a contraction of 0.017 mm)
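The worked example above can be reproduced in a couple of lines of Python (the function name is illustrative):

```python
# Lateral contraction from Poisson's ratio, eq. (3b): dr = -mu * r * dl / L.

def radial_contraction(mu, r, dl, L):
    return -mu * r * dl / L

dr = radial_contraction(0.334, 100e-3, 5e-3, 10.0)  # the aluminum bar above
print(round(abs(dr) * 1e3, 4))  # 0.0167 mm of contraction (~0.017 mm)
```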

Poisson's Ratios for Common Materials


For most common materials the Poisson's ratio is in the range 0 - 0.5. Typical Poisson's Ratios
for some common materials are indicated below.
Poisson's Ratio
Material
-μ-

Upper limit 0.5

Aluminum 0.334

Aluminum, 6061-T6 0.35

Aluminum, 2024-T4 0.32

Beryllium Copper 0.285

Brass, 70-30 0.331

Brass, cast 0.357

Bronze 0.34

Clay 0.41

Concrete 0.1 - 0.2

Copper 0.355

Cork 0

Glass, Soda 0.22

Glass, Float 0.2 - 0.27

Granite 0.2 - 0.3

Ice 0.33

Inconel 0.27 - 0.38

Iron, Cast - gray 0.211

Iron, Cast 0.22 - 0.30

Iron, Ductile 0.26 - 0.31

Iron, Malleable 0.271

Lead 0.431

Limestone 0.2 - 0.3

Magnesium 0.35

Magnesium Alloy 0.281

Marble 0.2 - 0.3

Molybdenum 0.307

Monel metal 0.315

Nickel Silver 0.322

Nickel Steel 0.291

Polystyrene 0.34

Rubber 0.48 - ~0.5

Sand 0.29

Sandy loam 0.31

Sandy clay 0.37

Stainless Steel 18-8 0.305

Steel, cast 0.265

Steel, Cold-rolled 0.287

Steel, high carbon 0.295

Steel, mild 0.303

Titanium (99.0 Ti) 0.32

Wrought iron 0.278

Z-nickel 0.36

Zinc 0.331
DEFORMATION OF ELASTIC BODIES
An external force acting on an object not only causes it to move, but also causes a change in its size and shape. This is because the ions or atoms that make up the crystal lattice of a solid shift their positions relative to each other under the influence of the external force, and as a result the object changes its shape and dimensions. The phenomenon of an object changing its size and shape as a result of an external force is called deformation.
There are two types of deformation: elastic and plastic. With plastic deformation, after the external force has ceased, the object cannot fully restore its previous shape. During plastic deformation, as a result of a slight displacement of the lattice planes (planes passing through the nodes), the former atomic or ionic bonds break and new atomic or ionic bonds are formed. This prevents the return to the original state, which leads to residual deformation. Under elastic deformation, the atoms or ions that make up the crystal lattice of a solid can return to their previous state; in other words, during this deformation the bonds between the ions are not broken and no new atomic or ionic bonds are formed. Deformation is mainly of 4 kinds in nature: tension or compression, sliding (shear), torsion, and bending.
Consider the simplest type of deformation - tensile deformation. Suppose that a steel rod of length l, fixed at one end, is subjected to a tensile force Fn at the other end (Fig. 4.1). If the length of the steel rod under the action of the force Fn has become l1, then the value l1 − l = Δl is called the absolute deformation. The larger the applied force, the larger the absolute deformation.
Deformation is also characterized by relative deformation. The ratio Δl/l of the increase in the length of an object to its former length l is called the relative deformation. Experiments show that the value of the relative deformation is directly proportional to the force acting on the body and inversely proportional to its cross-sectional area, i.e.:

Δl/l = α·Fn/S    (4.1)
The quantity Pn = Fn/S, the force per unit cross-sectional area of the body in the normal direction, is called the mechanical stress. Here α is the coefficient of longitudinal elasticity; its value is constant for a given body. The reciprocal of the elasticity coefficient α, E = 1/α = Pn/ε, is called Young's modulus or the modulus of elasticity. Then expression (4.1) becomes:

Δl/l = Pn/E    (4.2)
If l=l , then E=Pn. That is, Young's modulus is a physical quantity, numerically equal to the
stress, which can extend to twice the length of the body within the limits of elasticity. This is not
possible for most bodies because the body is already broken before being stretched.
The value of Young's modulus is different for different objects. For example: for copper and for
iron. This shows that in order to stretch soft iron of a given length l along its length within the
limits of elasticity, it is necessary to apply mechanical stress to it. Given this, Young's modulus is
not theoretical and is widely used in some engineering calculations.
The English scientist Robert Hooke found that the magnitude of the relative deformation within the limits of elasticity is directly proportional to the stress. Hooke's law

Fn = (S/(αl))·Δl = (ES/l)·Δl = k·Δl    (4.3)

can be expressed through the quantity k = ES/l, which is called the rigidity (stiffness).
Within the elastic limit, the absolute deformation is directly proportional to the applied force. The amount of relative deformation does not always change linearly with stress.
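Hooke's law (4.3) in numbers: the sketch below uses an assumed Young's modulus of 2·10¹¹ Pa, typical of steel, with illustrative dimensions:

```python
# Hooke's law for a stretched rod: F = k * dl, with stiffness k = E*S/l (eq. 4.3).

def stiffness(E, S, l):
    return E * S / l

def elastic_force(E, S, l, dl):
    return stiffness(E, S, l) * dl

# Steel-like rod: E = 2e11 Pa, cross-section 1 cm^2, length 1 m, stretched 0.1 mm
F = elastic_force(2e11, 1e-4, 1.0, 1e-4)
print(round(F))  # 2000 N
```

Note how the stiffness k grows with cross-section and shrinks with length, so a short thick rod is much harder to stretch than a long thin one of the same material.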
That is, for a given body, Hooke's law is satisfied only up to a
certain value of the stress; beyond the elastic limit, plastic (residual) deformation
appears. The dependence between relative deformation and stress is shown in figure 4.2. The
value of the relative deformation first increases in direct proportion to the stress. Since Hooke's
law is fully satisfied in the oa part of the graph, this region is called the region of direct
proportionality, and point a itself is the limit of direct proportionality. If the stress is
increased further, the body remains elastic for a while even though direct proportionality no
longer holds; this corresponds to the ab region of the graph. If the stress is completely removed
anywhere up to point b, the body regains its previous shape along the bao curve. Point b of the
curve is called the elastic limit: if the stress is increased beyond it, the body
undergoes plastic deformation. If the stress is raised to point c and then removed, the
body can no longer regain its previous shape, since it returns not along the
cbao curve but along co₁. As a result, a residual deformation oo₁ remains in
the body. If the stress is held constant for a long time starting from point c, the cross-
sectional area begins to decrease during this period, that is, the object continues to deform
gradually and plastically under constant stress (the cd zone of the graph).
If the stress is increased again from point d of the curve, the body deforms further
within the plastic range, and in this region (the df zone) the deformation again grows
roughly linearly with stress. Point f of this region corresponds to the strength limit.
At a further increase of stress beyond the strength limit, the object breaks
(disintegrates). There are materials whose strength limit coincides with the elastic limit; such
materials are called brittle. The wider the bd region of the curve, the more ductile (softer) the
material.
It should be noted that longitudinal stretching of a body causes its transverse compression, and
longitudinal compression causes transverse expansion. In this case

Δd/d = β·P_n

is the relative deformation of the thickness of the object. Here d is the transverse linear
dimension and β is the coefficient of transverse compression under longitudinal tension. The ratio
of the transverse compression coefficient β to the longitudinal elasticity coefficient α,

σ = β/α,

is called Poisson's ratio.
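A one-line numerical companion to Poisson's ratio: under stretching, the transverse strain is Δd/d = −σ·(Δl/l). The numbers below are assumptions (σ ≈ 0.3 is typical for metals).

```python
# Transverse strain from longitudinal strain via Poisson's ratio (assumed values).

sigma = 0.3          # Poisson's ratio sigma = beta/alpha (assumed)
long_strain = 1e-3   # longitudinal relative strain dl/l (assumed)

trans_strain = -sigma * long_strain  # dd/d: the minus sign marks shrinkage
print(trans_strain)  # -0.0003
```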
Work must be done to deform objects. If an object is stretched or shortened by an
external force F, then the work done by this force is

dA = F_x dx.

According to Hooke's law F_x = kx, so dA = kx dx. Then the work done by the elastic force
during a finite displacement is

A = ∫₀ˣ kx dx = kx²/2 = ESx²/(2l) = (E/2)(x/l)²·S·l = (Eε²/2)·V        (4.5)

Here x is the absolute strain and V is the volume of the deformed body. As can be seen, the work
done during the deformation is directly proportional to the square of the relative deformation.
During elastic deformation, the work done by the external forces goes into the
potential energy of the body. From formula (4.5), the quantity

ω = Eε²/2

is called the energy density of elastic deformation (the energy per unit volume).
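Formula (4.5) can be checked numerically; the sketch below verifies that the volume form (Eε²/2)·V agrees with the spring form kx²/2 for assumed rod parameters.

```python
# Elastic deformation work, Eq. (4.5): A = (E*eps^2/2) * V, with eps = x/l.
# All material and geometry values are illustrative assumptions.

E = 200e9      # Young's modulus, Pa (assumed)
l = 1.0        # rod length, m
S = 1e-4       # cross-section, m^2
x = 5e-4       # absolute elongation, m

eps = x / l    # relative strain
V = S * l      # rod volume, m^3

A = E * eps**2 / 2 * V   # stored elastic energy, J
w = E * eps**2 / 2       # energy density, J/m^3

k = E * S / l            # rigidity from Eq. (4.3)
assert abs(A - k * x**2 / 2) < 1e-9   # same energy as the spring form
print(A, w)   # 2.5 J, 25000.0 J/m^3
```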

Lecture 6
Gauss's Law in Dielectrics
Consider a parallel-plate capacitor with plate area A and vacuum between its plates. Let
+q and −q be the charges on the plates and E0 the uniform
electric field between the plates. Let PQRS be a Gaussian
surface.

By Gauss's law, ∮E0·dS = q/ε0.

Since E0 is parallel to dS and constant over the Gaussian surface,

∮E0·dS = E0∮dS = E0A.

Therefore E0A = q/ε0, or E0 = q/ε0A.        (1)

Suppose a material of dielectric constant K is introduced into
the intervening space between the two plates. The dielectric
slab becomes polarized. The charges −q' and +q' induced on the
surfaces of the dielectric are called 'induced charges' or 'bound charges',
while the charges +q and −q on the capacitor plates are
called free charges. The induced charges produce their
own field, which opposes the external field E0. Let E be the
resultant field within the dielectric. The net charge within the
Gaussian surface is q − q'.

By Gauss's law, ∮E·dS = (q − q')/ε0        (2)

or EA = (q − q')/ε0, so E = (q − q')/ε0A.        (3)

Now E0/E = K, where K is the dielectric constant, so Eq. (1) becomes E = q/Kε0A.

Inserting this in Eq. (3): q/Kε0A = (q − q')/ε0A,

or q − q' = q/K.

Substituting this value of q − q' in Eq. (2),

∮E·dS = q/Kε0,

∮K E·dS = q/ε0.        (4)

This is Gauss's law in the presence of a dielectric. Here we see that the flux integral contains a
factor K: the effect of the induced surface charge is accounted for by the dielectric
constant K. Often the symbol εr is used for the dielectric constant instead of K.
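The chain from Eq. (1) to Eq. (4) can be traced numerically. The sketch below uses an assumed plate charge, area, and dielectric constant, and checks that the field drops by the factor K and that the induced bound charge obeys q − q' = q/K.

```python
# Parallel-plate capacitor with and without a dielectric (assumed values).

EPS0 = 8.854e-12       # vacuum permittivity, F/m
q = 1e-8               # free charge on a plate, C (assumed)
A = 1e-2               # plate area, m^2 (assumed)
K = 5.0                # dielectric constant (assumed)

E0 = q / (EPS0 * A)        # field in vacuum, Eq. (1)
E = q / (K * EPS0 * A)     # field inside the dielectric
q_bound = q * (1 - 1 / K)  # induced (bound) charge q', from q - q' = q/K

assert abs(E0 / E - K) < 1e-9   # field weakened K times
print(E0, E, q_bound)
```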

Ferroelectricity
Ferroelectricity is a characteristic of certain materials that have a spontaneous electric
polarization that can be reversed by the application of an external electric field. All ferroelectrics
are also piezoelectric and pyroelectric, with the additional property that their natural electrical
polarization is reversible.
The polarization of a di- or ferroelectric is related to the electric field by
P=ε 0 χE=ε 0 ( ε −1 ) E

[Figure: polarization P versus applied field E, showing linear dielectric polarization, paraelectric polarization, and ferroelectric polarization.]
When most materials are electrically polarized, the polarization induced, P, is almost exactly
proportional to the applied external electric field E; so the polarization is a linear function. This
is called linear dielectric polarization (see figure). Some materials, known
as paraelectric materials, show a more enhanced nonlinear polarization (see figure). The
electric permittivity, corresponding to the slope of the polarization curve, is not constant as in
linear dielectrics but is a function of the external electric field.
In addition to being nonlinear, ferroelectric materials demonstrate a spontaneous nonzero
polarization (after entrainment, see figure) even when the applied field E is zero. The
distinguishing feature of ferroelectrics is that the spontaneous polarization can be reversed by a
suitably strong applied electric field in the opposite direction; the polarization is therefore
dependent not only on the current electric field but also on its history, yielding a hysteresis loop.
They are called ferroelectrics by analogy to ferromagnetic materials, which have
spontaneous magnetization and exhibit similar hysteresis loops.
Typically, materials demonstrate ferroelectricity only below a certain phase transition
temperature, called the Curie temperature (TC) and are paraelectric above this temperature: the
spontaneous polarization vanishes, and the ferroelectric crystal transforms into the paraelectric
state. Many ferroelectrics lose their pyroelectric properties above TC completely, because their
paraelectric phase has a centrosymmetric crystal structure.
METALS AND DIELECTRICS IN THE ELECTRICAL FIELD

According to their electrical conductivity, objects are divided into metals and dielectrics. Metals have
free electrons separated from the atoms. When a metal is placed in an electric field, its internal
charges redistribute (the metal becomes polarized). As a result, the resultant electric field inside the metal becomes zero.
In dielectrics the phenomena proceed differently. In a dielectric, since the total
positive charge of the nuclei is equal to the total negative charge of the electrons, the molecule
is neutral. So, if we denote the total positive charge of the nuclei of the molecule by +q
and the total negative charge of the electrons by −q, then this system of charges can be considered
an electric dipole.
Three types of polarization are observed in dielectrics, depending on the structure of
molecules: directed polarization, electronic polarization and ionic polarization observed in solid
dielectrics.

In one group of dielectrics (N2, He, O2, CO2, ...), the centers of gravity of the positive and
negative charges coincide in the absence of an external electric field, that is, the dipole moments
of the molecules of such substances are equal to zero. Dielectrics whose molecules are of this
type are called non-polar dielectrics. In such dielectrics, under the influence of an external
electric field the positive and negative charge centers shift, and as a result the molecule
behaves like an electric dipole. A system of two charges of equal magnitude and opposite sign
located at a certain distance from each other is called a dipole.
Since the arm (displacement) of the induced electric dipole depends on the external electric field, such
dipoles are called elastic dipoles. When the external electric field is removed, the asymmetry of
the molecule immediately disappears and it returns to its previous symmetric state.
In some dielectrics, the centers of the negative charges of the atoms are not the same as the
centers of the positive charge of the nucleus. Such molecules are called polar molecules, and
dielectrics are called polar dielectrics.
The dipole moment of a polar molecule does not depend on the external electric field
(deformation is neglected because it is very small), that is, its dipole moment is constant.
Normally, as a result of heat movement inside the substance, the dipole moments of polar
molecules are directed in different directions, so the total dipole moment of the dielectric is equal
to zero, that is, the dielectric becomes neutral.
When a polar molecule enters an external uniform electric field, a pair of forces directed in
opposite directions acts on it, and the molecule is aligned in the direction of the external field.
This is called the polarization of the dielectric.
Let's determine the intensity of the electric field inside the dielectric in the external electric
field. Let's assume that a dielectric is located in a homogeneous electric field created by two
parallel planes (Fig. 5.1). As a result of the influence of the external electric field, the dielectric
becomes polarized. Positive charges accumulate on the surface of the dielectric facing the
negative plate, and vice versa. These charges are called bound charges, and their surface density
is σ'; on the opposite surface of the dielectric there are bound charges of density −σ'. Part of
the lines of force of the external electric field is compensated by the field created by the bound
charges. As a result, the polarization of the dielectric leads to a weakening of the electric field
within the dielectric.
The electric field created by the bound
charges is opposite to the external electric field.
As a result, we obtain the following expression for
the intensity of the resultant electric field inside the
dielectric:

E = E0 − E' = E0 − σ'/ε0        (5.1)

Here E0 is the intensity of the external field, E' is the intensity of the additional electric field
formed inside the dielectric, and σ' is the surface density of the bound charges. Let us determine
the surface density of the bound charges. The total dipole moment of the dielectric plate is

P_total = P·S·d = q·d = σ'·S·d.

Here P = ε0χE is the total dipole moment of the molecules per unit volume of the dielectric and is
called the polarization vector, χ is the electric susceptibility of the dielectric, q is the amount of
bound charge on one surface of the dielectric plate, and d is the thickness of the dielectric
plate. From this last expression:

σ'·S·d = P·S·d, that is, we get σ' = P.

Then:

E = E0 − σ'/ε0 = E0 − P/ε0 = E0 − χE,

or

E0 = E + χE = (1 + χ)E.

Here ε = 1 + χ is called the dielectric constant (permittivity) of the medium. The
dielectric constant determines how many times the electric field in the medium is weakened
compared to the vacuum:

ε = E0/E
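The relations σ' = P = ε0χE and ε = 1 + χ can be illustrated with a short sketch; the susceptibility and external field below are assumed values.

```python
# Field inside a dielectric slab and the bound surface charge (assumed values).

EPS0 = 8.854e-12   # vacuum permittivity, F/m
chi = 4.0          # electric susceptibility (assumed)
E0 = 1e5           # external field, V/m (assumed)

eps = 1 + chi          # dielectric constant of the medium
E = E0 / eps           # resultant field inside the dielectric
P = EPS0 * chi * E     # polarization = bound surface charge density sigma'

assert abs(E0 - (1 + chi) * E) < 1e-6   # consistency with E0 = (1 + chi)E
print(eps, E, P)
```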
A second vector quantity, the electrostatic induction (displacement) vector, is also used to
characterize electrostatic fields. The relationship between the two vector quantities is defined
by the expression

D = εε0·E        (5.2)

The induction vector is also directed along the tangent to the lines of force of the
electric field. The lines of force of the field intensity start and end on both free and bound
charges, whereas the induction lines start and end only on free charges: induction lines pass
through the regions of the field where bound charges are located. The flux of the induction
vector through an arbitrary closed surface is

Φ_D = ∮_S D_n dS.

Then the Gauss theorem in a dielectric medium is written as follows:

∮_S D_n dS = Σᵢ qᵢ        (5.3)

The flux of the displacement vector (induction vector) through a closed surface is equal to the
algebraic sum of the free electric charges located inside this surface. In vacuum D = ε0·E_n, and
then

∮_S ε0·E_n dS = Σᵢ qᵢ        (5.4)

5.1. FERROELECTRICITY. PIEZO EFFECT.

The main property that distinguishes ferroelectrics from ordinary dielectrics is that they have
a high polarization ability even in the absence of an external electric field, within a certain
temperature range. Examples of ferroelectrics are Rochelle (Seignette) salt and barium titanate.
Within ferroelectrics there are small regions of strong polarization, which are called domains.
The direction of the electric moments of the domains is related to the symmetry of the crystal
structure. Under the influence of an external electric field the boundaries of the domains shift:
the linear dimensions of the domains whose dipole moments are close to the direction of the
external field increase, and the domains tend to align along the direction of the external field.

[Figure 5.2. Ferroelectric hysteresis curve: polarization P versus field E, with residual polarization P0.]
An increase in the intensity of the external electric field leads to saturation polarization: in this
case, the polarization vector remains constant regardless of further increase of the external electric
field. Dielectric hysteresis is observed in ferroelectrics (Fig. 5.2). As the figure shows, when the
intensity of the external electric field returns to zero, a residual polarization P0 remains in the
ferroelectric. For the residual polarization to disappear completely, a field of a certain intensity,
directed oppositely, must be applied to the dielectric; this intensity is called the coercive force.
The periodic reversal of the ferroelectric polarization causes the sample to heat up. The amount of
heat released during one period is proportional to the area of the hysteresis loop (Fig. 5.2). For
each ferroelectric there is a certain temperature above which it behaves like an ordinary polar
dielectric. This temperature is called the Curie point; at this temperature a phase transition of
the second kind occurs in the substance.
Ferroelectrics also exhibit the piezoelectric effect and electrostriction.
The generation of electric charges on opposite surfaces of a crystal as a result of
mechanical deformation is called the piezoelectric effect. It occurs in some crystals that
lack a center of symmetry.
There is also the inverse effect: in the inverse piezoelectric effect, mechanical deformation
arises in the crystal under the influence of an external electric field. The piezoelectric effect in
ferroelectrics is widely used in technology, for example in the conversion of sound signals
into electrical signals and vice versa.

Applications
The nonlinear nature of ferroelectric materials can be used to make capacitors with adjustable
capacitance. Typically, a ferroelectric capacitor simply consists of a pair of electrodes
sandwiching a layer of ferroelectric material. The permittivity of ferroelectrics is not only
adjustable but commonly also very high, especially when close to the phase transition
temperature. Because of this, ferroelectric capacitors are small in physical size compared to
dielectric (non-tunable) capacitors of similar capacitance.
The spontaneous polarization of ferroelectric materials implies a hysteresis effect which can be
used as a memory function, and ferroelectric capacitors are indeed used to make ferroelectric
RAM for computers and RFID cards. In these applications thin films of ferroelectric materials
are typically used, as this allows the field required to switch the polarization to be achieved with
a moderate voltage. However, when using thin films a great deal of attention needs to be paid to
the interfaces, electrodes and sample quality for devices to work reliably.
Ferroelectric materials are required by symmetry considerations to be also piezoelectric and
pyroelectric. The combined properties of memory, piezoelectricity, and pyroelectricity make
ferroelectric capacitors very useful, e.g. for sensor applications. Ferroelectric capacitors are used
in medical ultrasound machines (the capacitors generate and then listen for the ultrasound ping
used to image the internal organs of a body), high quality infrared cameras (the infrared image is
projected onto a two dimensional array of ferroelectric capacitors capable of detecting
temperature differences as small as millionths of a degree Celsius), fire sensors, sonar, vibration
sensors, and even fuel injectors on diesel engines.

Lecture 7
Ferroelectrics. Piezoelectric effects and its application.
The main property that distinguishes ferroelectrics from ordinary dielectrics is that they have a
high polarization ability even in the absence of an external electric field, within a certain
temperature range. Examples of ferroelectrics are Rochelle (Seignette) salt NaKC4H4O6·4H2O and
barium titanate BaTiO3. Within ferroelectrics there are small regions of strong polarization,
which are called domains. The direction of the electric moments of the domains is related to the
symmetry of the crystal structure. Under the influence of an external electric field the boundaries
of the domains shift: the linear dimensions of the domains whose dipole moments are close to the
direction of the external field increase, and the domains tend to align along the direction of the
external field.

[Figure 5.2. Ferroelectric hysteresis curve: polarization P versus field E, with residual polarization P0.]

An increase in the intensity of the external electric field leads to saturation polarization: in this
case, the polarization vector remains constant regardless of further increase of the external electric
field. Dielectric hysteresis is observed in ferroelectrics (Fig. 5.2). As the figure shows, when the
intensity of the external electric field returns to zero, a residual polarization P0 remains in the
ferroelectric. For the residual polarization to disappear completely, a field of a certain intensity,
directed oppositely, must be applied to the dielectric; this intensity is called the coercive force.
The periodic reversal of the ferroelectric polarization causes the sample to heat up. The amount of
heat released during one period is proportional to the area of the hysteresis loop (Fig. 5.2). For
each ferroelectric there is a certain temperature above which it behaves like an ordinary polar
dielectric. This temperature is called the Curie point; at this temperature a phase transition of
the second kind occurs in the substance.
Ferroelectrics also exhibit the piezoelectric effect and electrostriction.
The generation of electric charges on opposite surfaces of a crystal as a result of
mechanical deformation is called the piezoelectric effect. It occurs in some crystals that
lack a center of symmetry.
There is also the inverse effect: in the inverse piezoelectric effect, mechanical deformation
arises in the crystal under the influence of an external electric field. The piezoelectric effect in
ferroelectrics is widely used in technology, for example in the conversion of sound signals into
electrical signals and vice versa.
Superconductivity

In our previous course we learned that there is a class of metals and compounds known as
superconductors whose electrical resistance decreases to virtually zero below a certain
temperature Tc called the critical temperature. This phenomenon was discovered in 1911 by
Dutch physicist Heike Kamerlingh-Onnes (1853–1926) as he worked with mercury, which is a
superconductor below 4.2 K. Measurements have shown that the resistivity of superconductors
below their Tc values is less than 4·10⁻²⁵ Ω·m, or approximately 10¹⁷ times smaller than the
resistivity of copper. In practice, these resistivities are considered to be zero.
One truly remarkable feature of superconductors is that once a current is set up in them, it
persists without any applied potential difference (because R = 0). Steady currents have been
observed to persist in superconducting loops for several years with no apparent decay!

Superconducting materials such as those observed by
Kamerlingh-Onnes are metals; the more recently identified ones
are essentially ceramics with high critical temperatures.
If a room-temperature superconductor is ever identified, its effect
on technology could be tremendous.
It was found that the best ordinary conductors (platinum,
gold, copper) do not become superconducting.
A successful theory for superconductivity in metals was published in 1957 by John Bardeen,
L. N. Cooper (b. 1930), and J. R. Schrieffer (b. 1931); it is generally called BCS theory, based on
the first letters of their last names. This theory led to a Nobel Prize in Physics for the three
scientists in 1972.
In this theory, two electrons can interact via
distortions in the array of lattice ions so that
there is a net attractive force between the
electrons. A highly simplified explanation of
this attraction between electrons is as follows.
The attractive Coulomb force between one
electron and the surrounding positively
charged lattice ions causes the ions to move
inward slightly toward the electron. As a
result, there is a higher concentration of
positive charge in this region than elsewhere
in the lattice.
A second electron is attracted to this higher concentration of positive charge. Two electrons
can form a pair only if their spins are opposite. As a result, the two electrons are bound into an
entity called a Cooper pair, which behaves like a particle with integral spin. Particles with
integral spin are called bosons.
An important feature of bosons is that they do not obey the
Pauli exclusion principle. Consequently, at very low
temperatures, it is possible for all bosons in a collection of
such particles to be in the lowest quantum state. The entire
collection of Cooper pairs in the metal is described by a single
wave function. Above the energy level associated with this
wave function is an energy gap equal to the binding energy of
a Cooper pair. Under the action of an applied electric field,
the Cooper pairs experience an electric force and move
through the metal.
A random scattering event of a Cooper pair from a lattice ion would represent resistance to
the electric current: such a collision would change the energy of the Cooper pair, because some
energy would be transferred to the lattice ion. A Cooper pair, however, cannot absorb an energy
smaller than 2ΔE (ΔE per electron of the pair), where 2ΔE is the energy gap separating the
condensed Cooper pairs from the nearest permitted level. There are no available energy levels
below that of the Cooper pair (it is already in the lowest state) and none available above because
of the energy gap 2ΔE. As a result, such collisions do not occur, and there is no resistance to the
movement of Cooper pairs.
The size of a Cooper pair is about 10⁴ times greater than the interatomic distance; between the
two electrons of a pair there are many other electrons.

Josephson effect: the flow of a superconducting current through a thin dielectric layer
separating two superconductors (the so-called Josephson junction). It was predicted on the basis
of the theory of superconductivity by the English physicist B. Josephson in 1962 and discovered by
the American physicists P. Anderson and J. Rowell in 1963. Conduction electrons pass through the
dielectric (usually a metal-oxide film about 10 Å thick) due to the tunnel effect. If the current
through the Josephson junction does not exceed a certain value, called the critical current of the
junction, then there is no voltage drop across the junction. If, however, a current greater than
the critical current is passed through the junction, then a voltage drop U appears across it, and
the junction radiates electromagnetic waves. The radiation frequency ν is related to the junction
voltage by the relation ν = 2eU/h, where e is the electron charge and h is Planck's constant. The
radiation appears because the paired electrons that create the superconducting current acquire,
in passing through the junction, an excess energy of 2eU relative to the ground state of the
superconductor. The only way for a pair of electrons to return to the ground state is to emit a
quantum of electromagnetic energy hν = 2eU.
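The relation ν = 2eU/h is easy to evaluate; the sketch below uses the standard values of e and h and an assumed junction voltage of 10 µV.

```python
# Josephson radiation frequency nu = 2*e*U/h (U is an assumed example value).

e = 1.602176634e-19    # elementary charge, C
h = 6.62607015e-34     # Planck constant, J*s

U = 10e-6              # voltage across the junction, V (assumed)
nu = 2 * e * U / h     # radiation frequency, Hz

print(nu / 1e9)        # about 4.836 GHz
```

The enormous conversion factor 2e/h (about 483.6 GHz per millivolt) is what makes Josephson junctions useful as voltage-to-frequency converters.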

A similar effect is also observed when the superconductors are connected by a thin bridge
(a constriction or point contact) or by a thin layer of metal in the normal state. Such
systems, together with Josephson junctions, are called weakly coupled superconductors. Based
on the Josephson effect, superconducting interferometers containing two weak links connected in
parallel between superconductors have been created. The special quantum nature of the
superconducting state leads to interference of the superconducting currents that have passed
through the weak links. In this case, the critical current turns out to depend on the external
magnetic field, which makes it possible to use such a device for extremely accurate measurements
of magnetic fields. Weakly coupled superconductors can also be used as low-power generators,
sensitive detectors, amplifiers, and other devices in the microwave and far-infrared ranges that
can be easily tuned over a wide frequency range.
As the energy of the Cooper pair increases during the tunneling process, the phase difference
will also increase:

φ = 2πΔνt = 2π(2qUt/h).

Substituting this expression for the phase difference, we get the formula for the superconducting
part of the tunnel current passing through the junction:

j = j0 sin(2π·2qUt/h).

The Hall Effect

When a current-carrying conductor is placed in a magnetic field, a potential difference is


generated in a direction perpendicular to both the current and the magnetic field. This
phenomenon, first observed by Edwin Hall (1855–1938) in 1879, is known as the Hall effect.
The arrangement for observing the Hall
effect consists of a flat conductor carrying a
current I in the x direction as shown in Figure. A
uniform magnetic field B is applied in the y
direction. If the charge carriers are electrons
moving in the negative x direction with a drift
velocity vd, they experience an upward magnetic
force FB = qvd x B, are deflected upward, and
accumulate at the upper edge of the flat
conductor, leaving an excess of positive charge
at the lower edge. This accumulation of charge
at the edges establishes an electric field in the
conductor and increases until the electric force
on carriers remaining in the bulk of the
conductor balances the magnetic force acting on
the carriers.

The electrons can now be described by the particle in equilibrium model, and they are no
longer deflected upward.
A sensitive voltmeter connected across the
sample as shown in Figure can measure the
potential difference, known as the Hall voltage
ΔVH, generated across the conductor.
If the charge carriers are positive and hence
move in the positive x direction (for rightward
current) as shown in Figures, they also
experience an upward magnetic force qvd B,
which produces a buildup of positive charge on
the upper edge and leaves an excess of negative
charge on the lower edge.
Hence, the sign of the Hall voltage generated in the sample is opposite the sign of the Hall
voltage resulting from the deflection of electrons. The sign of the charge carriers can therefore be
determined from measuring the polarity of the Hall voltage.
In deriving an expression for the Hall voltage, first note that the magnetic force exerted on the
carriers has magnitude qvdB. In equilibrium, this force is balanced by the electric force qEH,
where EH is the magnitude of the electric field due to the charge separation (sometimes referred
to as the Hall field). Therefore,

qvdB = qEH, so EH = vdB.

If d is the width of the conductor, the Hall voltage is

ΔVH = EHd = vdBd        (10)

Therefore, the measured Hall voltage gives a value for the drift speed of the charge carriers if
d and B are known.
We can obtain the charge-carrier density n by measuring the current in the sample. From the
expression for the current, the drift speed is

v_d = I/(nqA),

where A is the cross-sectional area of the conductor. Substituting this into Equation (10) gives

ΔVH = IBd/(nqA)        (11)

Because A = td, where t is the thickness of the conductor, we can also express Equation (11)
as

ΔVH = IB/(nqt) = RH·IB/t        (12)

where RH = 1/nq is called the Hall coefficient. This relationship shows that a properly
calibrated conductor can be used to measure the magnitude of an unknown magnetic field.
Because all quantities in Equation (12) other than nq can be measured, a value for the Hall
coefficient is readily obtainable. The sign and magnitude of RH give the sign of the charge
carriers and their number density.
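Equation (12) is what makes the Hall effect a practical probe; the sketch below inverts it to obtain the carrier density n from a set of assumed measured values.

```python
# Carrier density from a Hall measurement, n = I*B/(q*t*dV_H), from Eq. (12).
# Every numeric value below is an assumed example, not measured data.

q = 1.602e-19   # carrier charge magnitude, C
I = 1.0         # current through the sample, A
B = 0.5         # magnetic field, T
t = 1e-4        # sample thickness, m
dV_H = 5e-6     # measured Hall voltage, V

n = I * B / (q * t * dV_H)   # carrier density, m^-3
R_H = 1 / (n * q)            # Hall coefficient, m^3/C

assert abs(R_H * I * B / t - dV_H) < 1e-18   # reproduces Eq. (12)
print(n)
```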

Lecture 8,9
ALTERNATING CURRENT AND ITS GENERATION

The phenomenon of electromagnetic induction is used to convert
mechanical energy into electrical energy. The device used for this purpose is called
a generator. Let us consider the rotation of a rectangular frame with uniform angular speed
(ω = const) in a uniform magnetic field (B = const) (Fig. 9.1). The magnetic flux
passing through a frame of area S at any instant is

Φ = BS cos α = BS cos ωt.

Here α = ωt is the rotation angle of the frame at time t.
As a result of the rotation of the frame, an induced e.m.f. appears:

ε_i = −dΦ/dt = BSω sin ωt = ε_m sin ωt        (9.1)

Fig. 9.1

Here ε_m = BSω is the maximum value of the induced e.m.f.


As can be seen, the induced e.m.f. changes periodically.
The periodic change of the induced e.m.f. creates in the frame a current that changes both in
value and in direction (Fig. 9.1). Alternating current can be viewed as a
quasi-stationary current: the time in which the electromagnetic field propagates along the wire
is much smaller than the time in which the current changes appreciably, so the instantaneous
value of the current can be considered the same in all sections of the circuit. For this reason,
Ohm's law and Kirchhoff's rules are satisfied for any part of the circuit.
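Equation (9.1) can be evaluated directly; the sketch below uses assumed frame parameters and checks the peak e.m.f. ε_m = BSω.

```python
# Induced emf of a rotating frame, eps_i = B*S*omega*sin(omega*t), Eq. (9.1).
# B, S and the rotation frequency are assumed illustrative values.

import math

B = 0.2                    # magnetic induction, T
S = 0.01                   # frame area, m^2
omega = 2 * math.pi * 50   # angular speed for 50 rev/s, rad/s

eps_m = B * S * omega      # peak emf, V

t = 0.005                  # a quarter period later, omega*t = pi/2
eps_i = eps_m * math.sin(omega * t)
assert abs(eps_i - eps_m) < 1e-9   # emf is maximal a quarter turn after flux max
print(eps_m)
```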
Let us look at the variation of voltage and current in a series-connected RLC current
circuit.
a) Only an ohmic resistance (resistor) is connected to the circuit (L = 0, C = 0):

I = (ε0/R) sin ωt = I0 sin ωt        (9.2)

In this case, the current and voltage are in the same phase.
b) Only an inductance is connected to the circuit (R = 0, C = 0).
When alternating current passes through the circuit, a self-induction e.m.f.
appears, that is:

ε0 sin ωt = L·dI/dt.

Integrating this expression, we get:

I = −(ε0/ωL) cos ωt = (ε0/ωL) sin(ωt − π/2) = (ε0/R_L) sin(ωt − π/2)        (9.3)

R_L = ωL is called the inductive resistance (reactance). The current lags behind the voltage by a
phase of π/2.
c) Only a capacitor is connected to the circuit (R = 0, L = 0):

q = ε0·C sin ωt, I = dq/dt = ε0·Cω cos ωt = ε0·Cω sin(ωt + π/2)        (9.4)

R_C = 1/(ωC) is called the capacitive resistance (reactance). The current leads the voltage by a
phase of π/2.
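The two reactances R_L = ωL and R_C = 1/(ωC) from Eqs. (9.3)-(9.4) can be computed side by side; the component values below are assumptions.

```python
# Inductive and capacitive reactance at a given frequency (assumed components).

import math

f = 50.0                  # supply frequency, Hz (assumed)
omega = 2 * math.pi * f   # angular frequency, rad/s
L = 0.1                   # inductance, H (assumed)
C = 10e-6                 # capacitance, F (assumed)

R_L = omega * L           # inductive reactance, ohm; current lags emf by pi/2
R_C = 1 / (omega * C)     # capacitive reactance, ohm; current leads emf by pi/2

print(round(R_L, 2), round(R_C, 2))
```

Note that R_L grows with frequency while R_C falls, which is why the two can cancel at one particular (resonant) frequency.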
WORK AND POWER OF ALTERNATING CURRENT

In an alternating current circuit, the instantaneous power is equal to the product of the
instantaneous values of current and voltage:

P(t) = J(t)·U(t).

Taking J = J_m cos(ωt − φ) and U = U_m cos ωt, where φ is the phase difference between the current
and the voltage in the circuit, and substituting these expressions into the power formula, after
transformations we get:

P(t) = J_m·U_m cos(ωt − φ) cos ωt = ½·J_m·U_m (cos φ + cos(2ωt − φ)).

The instantaneous value of the power is of little practical use; the average value of the power is
what matters. Since the time average of cos(2ωt − φ) is zero,

P̄ = ½·J_m·U_m cos φ        (9.15)
is taken. In the last statement, let's make substitutions.
Jm Um
=J ef =U ef
√2 and √2 (9.16)
we get the formula for quantities called the effective values of current and voltage
respectively in an alternating current circuit. It should be noted that the measuring
devices used in the alternating current circuit are adjusted according to the
effective values of current and voltage.
As can be seen from the power formula (9.15), the power depends on the applied voltage, the current and the phase difference between them. As is known,

cos φ = R / √(R² + (ωL − 1/ωC)²)   (9.17)

If the circuit contains only reactive resistances, R = 0 and cos φ = 0, then no power is dissipated in the circuit: P̄ = 0. If the circuit consists only of active resistance, cos φ = 1 and the power is maximum. In practice, cos φ < 1 always holds.
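The average-power and effective-value relations above can be checked with a short numeric sketch. The amplitude and phase values below are assumed purely for illustration:

```python
import math

def avg_power(J_m, U_m, phi):
    """Average AC power, Eq. (9.15): P = (1/2) * J_m * U_m * cos(phi)."""
    return 0.5 * J_m * U_m * math.cos(phi)

def effective(x_m):
    """Effective (rms) value of a sinusoidal amplitude, Eq. (9.16)."""
    return x_m / math.sqrt(2)

# Hypothetical values: J_m = 2 A, U_m = 310 V, phi = 60 degrees
J_m, U_m, phi = 2.0, 310.0, math.radians(60)
P = avg_power(J_m, U_m, phi)
# The same power expressed through effective values: P = J_ef * U_ef * cos(phi)
P_ef = effective(J_m) * effective(U_m) * math.cos(phi)
print(P, P_ef)  # both ~155 W
```

The agreement of `P` and `P_ef` is exactly why instruments are calibrated in effective values: the DC-style product J_ef·U_ef·cos φ reproduces the true average power.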
Lecture 9: SERIES CONNECTED RLC IN AC CIRCUIT.

Suppose a circuit consisting of a resistor (R), a capacitor (C) and an inductor


(L) is connected to an alternating current source (Figure). Let us assume that the e.m.f. of the source obeys the law

ε = ε₀ sin(ωt)   (9.5)

In an alternating current circuit, a self-induced e.m.f. appears in the part of the circuit with inductive resistance:

ε_i = −L dJ/dt   (9.6)

According to Kirchhoff's second law, the algebraic sum of the e.m.f.s in the circuit is equal to the algebraic sum of the potential drops, so we can write:

ε − L dJ/dt = JR + q/C

or

L dJ/dt + JR + q/C = ε₀ sin(ωt)   (9.7)
Differentiating this expression with respect to time and taking into account that J = dq/dt and J = J₀ sin(ωt + φ), expression (9.7) takes the form

L d²J/dt² + R dJ/dt + J/C = ε₀ω cos(ωt)   (9.8)

This is a second-order differential equation for the current as a function of time. Solving this differential equation for the amplitude value of the current gives

J₀ = ε₀ / √(R² + (R_L − R_C)²)   (9.9)

or

J₀ = ε₀ / √(R² + (ωL − 1/ωC)²)   (9.10)
Here R_L = ωL is the inductive resistance and R_C = 1/(ωC) is the capacitive resistance; together they are called reactive resistances. Expression (9.10) is Ohm's law for an AC circuit containing inductance, capacitance and ohmic resistance.

Z = √(R² + (R_L − R_C)²)

is called the total resistance (impedance) of the circuit. From the solution of equation (9.8) we can also find the phase shift between J₀ and ε₀. From the vector diagram (Fig. 6.3), for the phase shift between current and voltage

tg φ = (R_L − R_C)/R   (9.11)
As can be seen, the phase shift between the current and the e.m.f. depends on the relative values of the resistances R_L and R_C. If R_L > R_C, the e.m.f. leads the current; if R_L < R_C, the e.m.f. lags behind the current; and finally, if R_L = R_C, then tg φ = 0, that is, the current and the e.m.f. change with the same phase. In this case:

ωL = 1/(ωC)   (9.12)

This situation is called electrical resonance. From here we get that the frequency of electrical resonance is

ω = 1/√(LC)   (9.13)

and for the inductive and capacitive resistances at resonance we get:

R_L = R_C = ωL = L/√(LC) = √(L/C)   (9.14)
It should be noted that alternating current sources are also widely used in the food industry. When an alternating current is passed through meat samples, it softens the muscle fibers and thus increases the tenderness and quality of the meat, which is exploited in the preparation of meat products. Currently, cattle, pigs and poultry are first stunned and anesthetized with high-voltage alternating current so that they can be processed conveniently and safely. Alternating current is also used to dry bones that are not suitable for food products and to obtain technical oils from them. For this purpose, an electric current is passed through crushed bone samples of different types by the electrocontact method; according to the Joule-Lenz law, the amount of heat released in the sample depends on the current and the time. The fat separated from the bone as a result of heating is filtered off through one channel, and the dried bones are removed through another channel.

OSCILLATING CONTOUR

A circuit formed by a capacitor with a capacity C and a coil with an


inductance L is called an oscillation circuit. If the circuit closes after charging the
capacitor, free oscillations of the electric charge and the current in the coil are
observed in the oscillation contour. The resulting alternating electromagnetic field propagates at the speed of light (c = 3·10⁸ m/s). Since the field spreads so quickly, the instantaneous value of the current in the contour can be considered the same everywhere. Such an alternating current is called a quasi-stationary current. For the circuit, Ohm's law is written as follows:

JR = φ₁ − φ₂ + ε_i = −q/C − L dJ/dt   (10.1)

Given that J = dq/dt:

d²q/dt² + (R/L) dq/dt + q/(LC) = 0   (10.2)

Figure 10.1

This expression is the differential equation of the electric charge oscillations. If the active resistance of the circuit is neglected (R = 0), the equation of the free oscillations of the electric charge in the circuit is obtained:

d²q/dt² + q/(LC) = 0   (10.3)

Here ω₀ = 1/√(LC) is called the characteristic (natural) frequency of the contour, so that

d²q/dt² + ω₀² q = 0   (10.4)
Equation (10.4) is a second-order linear differential equation. Its solution is

q = q₀ sin(ω₀t + α)

From here we can write for the current:

J = dq/dt = J₀ cos(ω₀t + α)

where J₀ = q₀ω₀ = q₀/√(LC) is the amplitude value of the current.

For the period of the free oscillations in the contour we can write:

T = 2π/ω₀ = 2π√(LC)   (10.5)
(10.5) is called Thomson's formula.
Let us find the law of variation of the potential difference between the plates of the capacitor:

U = q/C = (q₀/C) sin(ω₀t + α) = U₀ sin(ω₀t + α),   U₀ = q₀/C

Let us determine the relationship between the amplitude values of the current and the voltage in the oscillating circuit:

J₀ = q₀/√(LC) = U₀C/√(LC) = U₀√(C/L)   (10.6)

By comparison with Ohm's law, √(L/C) plays the role of the resistance of the oscillating
circuit. The oscillations occurring in the oscillating contour are called
electromagnetic oscillations. The propagation of these oscillations in space is
called electromagnetic waves.
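Thomson's formula (10.5) and the amplitude relation (10.6) can be sketched numerically. The component values below (L = 1 mH, C = 1 µF, U₀ = 10 V) are assumed for illustration:

```python
import math

def thomson_period(L, C):
    """Thomson's formula, Eq. (10.5): T = 2*pi*sqrt(L*C)."""
    return 2 * math.pi * math.sqrt(L * C)

def current_amplitude(U0, L, C):
    """Amplitude relation, Eq. (10.6): J0 = U0 * sqrt(C/L)."""
    return U0 * math.sqrt(C / L)

# Assumed values: L = 1 mH, C = 1 uF, U0 = 10 V
L, C, U0 = 1e-3, 1e-6, 10.0
T = thomson_period(L, C)
J0 = current_amplitude(U0, L, C)
print(T, J0)  # period ~0.2 ms, current amplitude ~0.32 A
```

Note that U₀/J₀ = √(L/C) ≈ 31.6 Ω for these values, the quantity that plays the role of the resistance of the contour.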

Lecture 10
ELECTROMAGNETIC WAVES
The waves described before are mechanical waves. By definition, the propagation of
mechanical disturbances - such as sound waves, water waves, and waves on a string - requires
the presence of a medium. This lecture is concerned with the properties of electromagnetic
waves, which (unlike mechanical waves) can propagate through empty space.
We begin by considering Maxwell’s contributions in modifying Ampère’s law, which we
studied in our previous course. We then discuss Maxwell’s equations, which form the theoretical
basis of all electromagnetic phenomena. These equations predict the existence of electromagnetic
waves that propagate through space at the speed of light c according to the traveling wave
analysis model. Heinrich Hertz confirmed Maxwell’s prediction when he generated and detected
electromagnetic waves in 1887. That discovery has led to many practical communication
systems, including radio, television, cell phone systems, wireless Internet connectivity, and
optoelectronics.
The source of the electromagnetic field and of electromagnetic waves is a charge moving with acceleration, for example an oscillating charge.
The source of electromagnetic waves may in fact be any electrical oscillatory circuit, or a conductor through which an alternating electric current flows, since the excitation of electromagnetic waves in space requires the creation of an alternating electric field (a displacement current) or, correspondingly, an alternating magnetic field.

1. Displacement Current and the General Form of Ampère’s Law


In our previous course, we discussed using Ampère's law to analyze the magnetic fields created by currents:

∑ B Δs cos θ = μ₀I

or

∮ B·ds = μ₀I

In this equation, the line integral is over any closed path through which conduction current passes, where conduction current is defined by the expression I = dq/dt. (In this section, we use the term conduction current to refer to the current carried by charge carriers in the wire, to distinguish it from a new type of current we shall introduce shortly.)

We now show that Ampere’s law in this form is valid only if any electric fields present are
constant in time. James Clerk Maxwell recognized this limitation and modified Ampere’s law to
include time-varying electric fields.
As is known from Faraday's law of electromagnetic induction, any change of the magnetic flux passing through a closed circuit causes an induced e.m.f. Maxwell put forward the hypothesis that the induced e.m.f. (or induced current) in a closed circuit is caused by the time-varying magnetic field creating a new type of electric field in the surrounding space, one that is not electrostatic in nature - the eddy (vortex) electric field. The circulation of the intensity E_B of this eddy electric field along any closed contour is equal to the induced e.m.f.:

ε_i = ∮_L E_B·dl = −dΦ/dt

∮_L E_B·dl = −∫_S (∂B/∂t)·dS

As we know, the circulation of the intensity vector of the electrostatic field over a closed contour is equal to zero:

∮_L E_q·dl = 0

There is a fundamental difference between the E_q and E_B fields: unlike the circulation of the E_q vector, the circulation of the E_B vector is not equal to zero. Hence, the electric field created by the changing magnetic field is an eddy (vortex) field, like the magnetic field.
By studying various electromagnetic processes, Maxwell came to the conclusion that the opposite phenomenon must also occur: if a changing magnetic field creates a vortex electric field in space, then a change of the electric field must likewise create a vortex magnetic field in the surrounding space. Since the magnetic field is the basic signature of any current, Maxwell called the alternating electric field the displacement current. This current is different from the conduction current caused by the movement of charged particles (electrons, ions).
Let us look at a DC circuit with a capacitor connected. When the circuit is closed, a short-lived conduction current flows only during the first moments, while the capacitor is being charged. Outside the gap between the plates of the capacitor, this momentary conduction current exists in all other parts of the circuit. The lines of conduction current are interrupted at the surfaces of the capacitor plates, and a direct current cannot flow through such a circuit (Figure). After the capacitor is charged, if we connect the source in the opposite direction, the direction of the current changes and again a short-lived current appears in the circuit. Thus, every time the switch is opened and closed, a brief current flows in the circuit and a glow of the lamp is observed.
Maxwell hypothesized that at the surfaces of the capacitor plates the conduction current lines transition seamlessly into the displacement current lines. The instantaneous value of the conduction current is

I = dq/dt = (d/dt) ∫_S σ dS = ∫_S (∂σ/∂t) dS

so for the corresponding current density we obtain

j_cond = ∂σ/∂t

Here σ is the surface density of the charges and S is the area of one plate of the capacitor. Since the density of the displacement current and the density of the conduction current near the surfaces of the plates are equal,

j_cond = j_disp = ∂σ/∂t
Let us express this quantity through the parameters of the field between the plates of the capacitor. The field intensity between the plates of the capacitor is determined by the expression E₀ = σ/ε₀. From here σ = ε₀E₀ = D, where D is the electric induction (displacement) vector. Hence the displacement current density is

j_disp = ∂D/∂t

and for the displacement current

I_disp = ∫_S j_disp dS = ∫_S (∂D/∂t) dS
If there is an alternating current in a wire, there is also an alternating electric field inside the wire. Therefore, both a conduction current and a displacement current are created in the wire, and the magnetic field of the wire is determined by their sum, that is, by the total current. The total current density is expressed by the formula

j_total = j + ∂D/∂t
Note that the displacement current is determined not by the vector D itself but by its change. For example, in the electric field of a plane capacitor, D is directed from the positive plate to the negative plate (Fig.). As the electric field increases, ∂D/∂t (and correspondingly the displacement current) is directed from the positive plate to the negative plate, as shown. When the electric field weakens, ∂D/∂t is directed from the negative plate to the positive one.
Using the concept of the total current, Maxwell generalized the theorem about the circulation of the H vector:

∮_L H·dl = I = I_cond + I_disp

∮_L H·dl = ∫_S (j + ∂D/∂t)·dS

This expression is the generalized theorem about the circulation of the intensity vector of the magnetic field, or the total current law.
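The size of the displacement current between capacitor plates follows directly from I_disp = ∫(∂D/∂t)dS = ε₀A·dE/dt (in vacuum). A minimal sketch with assumed plate area and field ramp rate:

```python
EPS0 = 8.854e-12  # F/m, permittivity of free space

def displacement_current(area, dE_dt, eps_r=1.0):
    """I_disp = integral of dD/dt over the plate area = eps_r * eps0 * A * dE/dt."""
    return eps_r * EPS0 * area * dE_dt

# Assumed example: plate area 0.01 m^2, field rising at 1e12 V/(m*s)
I = displacement_current(0.01, 1e12)
print(I)  # ~0.0885 A
```

This is the same current that flows in the connecting wires while the capacitor charges, which is exactly Maxwell's point: the current lines close through the gap as displacement current.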

Maxwell's equations
Let us get acquainted with Maxwell's equations. According to Faraday's law of electromagnetic induction, the change of the magnetic flux linked with a circuit causes an induced e.m.f. in the circuit and, as a result, an induced current. The cause of an e.m.f. in any circuit is the presence of external (non-electrostatic) forces. Experiments show that the nature of these external forces is not related to heat or to chemical processes that may occur in the circuit, nor to the Lorentz force (the Lorentz force does not act on stationary charges). According to Maxwell, the induced current is caused by the changing magnetic field creating a changing electric field in the surrounding space; this generated electric field drives the induction current in the circuit. As is known, the induced e.m.f. caused by the change of the magnetic flux crossing the closed circuit is ε_i = −dΦ/dt; on the other hand, it is defined as

ε_i = ∮_l E_Bl dl

From the equality of the right-hand sides we obtain

∮_l E_Bl dl = −dΦ/dt

This expresses the vortex character of the electric field. Here E_Bl is the projection of the electric field intensity vector onto the direction of dl. Considering that Φ = ∫_S B dS, the expression can be written as follows:

∮_l E_Bl dl = −(d/dt) ∫_S B dS

If the surface we are looking at and the contour surrounding it are stationary,

∮_l E_Bl dl = −∫_S (∂B/∂t) dS

As is known, the circulation of the electrostatic field intensity is zero. Therefore, if we add the electrostatic field term to the left-hand side, the equation does not change:

∮_l (E_ql + E_Bl) dl = ∮_l E_l dl = −∫_S (∂B/∂t) dS

This expression is the integral form of Maxwell's first equation. The differential form of Maxwell's first equation is:

rot E = −∂B/∂t

The total current law for magnetic fields is used to derive Maxwell's second equation. Here J is the total current, which is equal to the sum of the conduction current and the displacement current in the circuit, i.e.:

J = J_cond + J_disp

To quantify the relationship between a change of the electric field and the magnetic field it produces, Maxwell introduced the concept of displacement current. According to Maxwell, an alternating current circuit is closed in all cases: in the non-conducting part of the circuit (for example, the capacitor), the current is carried by the displacement current. The displacement current creates an eddy magnetic field around itself just like a conduction current. Unlike the conduction current, the displacement current does not release Joule-Lenz heat.

J_disp = ∫_S j_disp dS = ∫_S (∂D/∂t) dS = (∂/∂t) ∫_S D_n dS = ∂Φ_D/∂t

where Φ_D = ∫_S D_n dS is the flux of the vector D. Then for the total current we get:

J = J_cond + ∂Φ_D/∂t

Generalizing the total current law, Maxwell wrote

∮_l H_l dl = J_cond + J_disp   or   ∮_l H_l dl = ∫_S (j_cond + ∂D/∂t) dS

This expression is the integral form of Maxwell's second equation. The circulation of the magnetic field intensity along a fixed, arbitrarily chosen closed contour in the electromagnetic field is equal to the sum of the displacement current and the algebraic sum of the conduction currents passing through the surface bounded by this contour. The differential form of this equation is:

rot H = j_cond + ∂D/∂t
Gauss's theorem for electric fields forms Maxwell's third equation:

∮_S D_n dS = q   or   ∮_S D·dS = ∫_V ρ dV

Here ρ is the volume density of the free electric charges.
The Gauss theorem for magnetic fields constitutes Maxwell's fourth equation:

∮_S B·dS = 0

Note that the physical quantities entering Maxwell's equations are connected by the following relations:

D = εε₀E,   B = μμ₀H,   j = σE

2. Differential equation of electromagnetic waves

Thus Maxwell showed that an electric field changing with time creates a changing magnetic field. In turn, the magnetic field changing with time creates an electric field, in accordance with Faraday's law of induction.
These interlinked oscillating electric and magnetic fields propagate in space in the form of electromagnetic (EM) waves. The wave equations of these fields are:

ΔE = εε₀μμ₀ ∂²E/∂t²   and   ΔH = εε₀μμ₀ ∂²H/∂t²   (3)

where

Δ = ∂²/∂x² + ∂²/∂y² + ∂²/∂z²

is the Laplace operator (Laplacian).
H is the strength of the magnetic field, B = μμ₀H; μ₀ = 4π·10⁻⁷ N/A² is the magnetic permeability of free space; μ is the magnetic permeability of matter; ε₀ = 8.85·10⁻¹² F/m is the permittivity of free space; ε is the permittivity of the dielectric.
If we consider a wave propagating in one direction, we can rewrite equations (3) as:

∂²E_y/∂x² = (1/v²) ∂²E_y/∂t²   and   ∂²H_z/∂x² = (1/v²) ∂²H_z/∂t²   (4)

where

v = 1/√(εε₀μμ₀) = (1/√(ε₀μ₀)) · (1/√(εμ)) = c/√(εμ)

is the phase speed of the electromagnetic wave in the medium, and

c = 1/√(ε₀μ₀) ≈ 2.99792458·10⁸ m/s ≈ 3·10⁸ m/s

is the speed of the electromagnetic wave in vacuum.
The solution to these two equations (4) shows that the speed at which electromagnetic waves
travel equals the measured speed of light. This result led Maxwell to predict that light waves are
a form of electromagnetic radiation.
Quantity n=√ εμ is called absolute refraction index of medium and speed of the wave in
medium v = c/n
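The relations n = √(εμ) and v = c/n are easy to sketch numerically. The permittivity value below (ε ≈ 1.77 for water at optical frequencies, μ ≈ 1) is an assumed illustrative input:

```python
import math

C_LIGHT = 2.99792458e8  # m/s, speed of light in vacuum

def refractive_index(eps, mu=1.0):
    """Absolute refraction index of a medium: n = sqrt(eps * mu)."""
    return math.sqrt(eps * mu)

def phase_speed(eps, mu=1.0):
    """Phase speed of an EM wave in the medium: v = c / n."""
    return C_LIGHT / refractive_index(eps, mu)

# Assumed example: water at optical frequencies, eps ~ 1.77, mu ~ 1
n = refractive_index(1.77)
v = phase_speed(1.77)
print(n, v)  # n ~ 1.33, v ~ 2.25e8 m/s
```

In vacuum (ε = μ = 1) the functions return n = 1 and v = c, recovering the limiting case above.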

The solutions of the wave equations (4) are the wave functions of plane waves:

E_y = E₀ cos(ωt − kx + φ),   H_z = H₀ cos(ωt − kx + φ)   (5)

where k = ω/v is the wave number.

3. Properties of electromagnetic waves

1) Electromagnetic waves are transverse. The vectors E and H are perpendicular to each other and perpendicular to the propagation direction:

E ⊥ H,   E ⊥ v,   H ⊥ v

If we rotate a right-hand screw from the vector E toward the vector H, its advance gives the direction of the vector v.

2) The vectors E and H in an EM wave always oscillate in phase, and the instantaneous values of E and H are related by the equality:

√(εε₀)·E = √(μμ₀)·H   (6)
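Relation (6) fixes the ratio E/H in any medium; in vacuum this ratio is the wave impedance √(μ₀/ε₀) ≈ 377 Ω. A small sketch (the field amplitude of 100 V/m is assumed for illustration):

```python
import math

EPS0 = 8.854187817e-12  # F/m, permittivity of free space
MU0 = 4 * math.pi * 1e-7  # N/A^2, permeability of free space

def H_from_E(E, eps=1.0, mu=1.0):
    """Eq. (6): sqrt(eps*eps0)*E = sqrt(mu*mu0)*H  =>  H = E * sqrt(eps*eps0 / (mu*mu0))."""
    return E * math.sqrt(eps * EPS0 / (mu * MU0))

# In vacuum the ratio E/H equals the wave impedance sqrt(mu0/eps0) ~ 377 Ohm
E = 100.0  # V/m (assumed amplitude)
H = H_from_E(E)
print(E / H)  # ~376.7 Ohm
```

The same function shows that in a dielectric (ε > 1, μ ≈ 1) the magnetic amplitude grows relative to the electric one, since H scales as √ε.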

3) Energy of electromagnetic waves

Electromagnetic waves carry energy. If we select an area oriented perpendicularly to the direction of wave propagation, then within a short time Δt energy flows through this area; its volume density is the sum of the volume densities of the electric and magnetic fields:

w = εε₀E²/2 + μμ₀H²/2

Using equation (6), we get:

w = εε₀E² = μμ₀H²

The rate of transfer of energy by an electromagnetic wave is described by a vector S, called the Poynting vector, which is defined by the expression:

S = v w = E × H = (1/μμ₀) E × B
The magnitude of the Poynting vector represents the rate at which energy passes through a
unit surface area perpendicular to the direction of wave propagation. Therefore, the magnitude of
S represents power per unit area. The direction of the vector is along the direction of wave
propagation.
The SI units of S are J/(s ∙ m2) = W/m2.
What is of greater interest for a sinusoidal plane electromagnetic wave is the time average of S over one or more cycles, which is called the wave intensity I. The average value of S (in other words, the intensity of the wave) is

I = S_avg = E_max B_max/(2μ₀) = E²_max/(2μ₀c) = cB²_max/(2μ₀) = (1/2)√(ε₀/μ₀) E²_max

where E_max and B_max are the amplitudes of the electric and magnetic fields.
As the formula shows, the intensity is directly proportional to E²_max; for dipole radiation E is proportional to ω², and therefore the intensity is proportional to ω⁴, where ω is the frequency of the EM wave.
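The last form of the intensity formula is convenient for quick estimates. The field amplitude below (~870 V/m, roughly that of bright sunlight at the Earth's surface) is an assumed illustrative input:

```python
import math

EPS0 = 8.854187817e-12  # F/m, permittivity of free space
MU0 = 4 * math.pi * 1e-7  # N/A^2, permeability of free space

def intensity(E_max):
    """Time-averaged Poynting magnitude: I = (1/2) * sqrt(eps0/mu0) * E_max**2, in W/m^2."""
    return 0.5 * math.sqrt(EPS0 / MU0) * E_max**2

# Assumed amplitude typical of bright sunlight, E_max ~ 870 V/m
I = intensity(870.0)
print(I)  # ~1.0e3 W/m^2
```

The result of about 1 kW/m² is consistent with the familiar order of magnitude of the solar irradiance, which is a useful sanity check on the formula.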

4) Momentum of electromagnetic wave


The momentum of a unit volume of an EM wave is equal to:

p = w/c

p⃗ = S⃗/c² = (1/c²) E × H

where w is the density of the EM energy and c is the speed of the EM wave in vacuum.

5) Pressure of electromagnetic wave


The pressure of an EM wave can be explained in two ways. First, since an EM wave carries its own momentum, its reflection from a surface is accompanied by a pressure on that surface. Second, the electric component of the electromagnetic wave generates weak currents in the second medium; these currents experience the force of the magnetic component of the wave, and this force creates a pressure on the second medium.

6) Reflection and refraction at the boundary of two dielectric mediums


If an EM wave is incident on the boundary between two dielectrics, we can observe two phenomena:
- Reflection: the reflection angle is equal to the incidence angle;
- Refraction:

sin i / sin r = n₂/n₁ = √(ε₂/ε₁) = v₁/v₂

where n₁, ε₁, v₁ are the refraction index, dielectric permittivity and speed of the EM wave for the first medium, and n₂, ε₂, v₂ are those for the second medium, respectively.

The Electromagnetic spectrum

Lecture 11
1. Experimental creation of electromagnetic waves
Hertz performed experiments that verified Maxwell’s
prediction. The experimental apparatus Hertz used to generate
and detect electromagnetic waves is shown schematically in
Figure. An induction coil is connected to a transmitter made up
of two spherical electrodes separated by a narrow gap. The coil
provides short voltage surges to the electrodes, making one
positive and the other negative. A spark is generated between
the spheres when the electric field near either electrode
surpasses the dielectric strength for air (3∙ 10 6 V/m). Free
electrons in a strong electric field are accelerated and gain
enough energy to ionize any molecules they strike. This
ionization provides more electrons, which can accelerate and
cause further ionizations. As the air in the gap is ionized, it
becomes a much better conductor and the discharge between
the electrodes exhibits an oscillatory behavior at a very high
frequency. From an electric-circuit viewpoint, this
experimental apparatus is equivalent to an LC circuit in which
the inductance is that of the coil and the capacitance is due to
the spherical electrodes.
Because L and C are small in Hertz’s apparatus, the frequency of oscillation is high, on the
order of 100 MHz. (Recall that ω=1/ √ LC for an LC circuit.) Electromagnetic waves are
radiated at this frequency as a result of the oscillation of free charges in the transmitter circuit.
Hertz was able to detect these waves using a single loop of wire with its own spark gap (the
receiver). Such a receiver loop, placed several meters from the transmitter, has its own effective
inductance, capacitance, and natural frequency of oscillation. In Hertz’s experiment, sparks were
induced across the gap of the receiving electrodes when the receiver’s frequency was adjusted to
match that of the transmitter. In this way, Hertz demonstrated that the oscillating current induced
in the receiver was produced by electromagnetic waves radiated by the transmitter. His
experiment is analogous to the mechanical phenomenon in which a tuning fork responds to
acoustic vibrations from an identical tuning fork that is oscillating nearby.
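The stated order of magnitude (~100 MHz from small L and C) follows from the LC-circuit frequency quoted above. The inductance and capacitance values below are assumed, chosen only to land near 100 MHz; they are not Hertz's actual apparatus parameters:

```python
import math

def lc_frequency(L, C):
    """Natural frequency of an LC circuit: f = 1 / (2*pi*sqrt(L*C)), in Hz."""
    return 1.0 / (2 * math.pi * math.sqrt(L * C))

# Assumed small values representative of a spark-gap apparatus:
L = 1e-7   # H  (a short loop of wire has a tiny inductance)
C = 2.5e-11  # F  (spherical electrodes have a tiny capacitance)
f = lc_frequency(L, C)
print(f)  # ~1e8 Hz, i.e. on the order of 100 MHz
```

This illustrates why the radiated frequency is so high: both L and C are many orders of magnitude smaller than in ordinary lumped circuits.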

2. Radiation of a Dipole
Stationary charges and steady currents cannot
produce electromagnetic waves. If the current in a
wire changes with time, however, the wire emits
electromagnetic waves. The fundamental
mechanism responsible for this radiation is the
acceleration of a charged particle. Whenever a
charged particle accelerates, energy is
transferred away from the particle by
electromagnetic radiation.
The simplest emitter of electromagnetic waves is
the electric dipole which electric moment p varies
harmonically with time: p = pocosωt. Remember
that we define vector of electric dipole moment as
p = qd
The current representing the movement of charges between the ends of the antenna produces magnetic field lines that form concentric circles around the dipole and are perpendicular to the electric field lines at all points. The magnetic field is zero at all points along the axis of the dipole. The Poynting vector S is directed radially outward, indicating that energy is flowing away from the antenna.
The source of this radiation is the continuous induction of an electric field by the time-varying magnetic field, as described by Faraday's law, together with the induction of a magnetic field by the time-varying electric field. The electric and magnetic fields produced in this manner are in phase with each other and vary as 1/r. The result is an outward flow of energy at all times.
The angular dependence of the radiation intensity
produced by a dipole is shown in Figure. Notice that the
intensity and the power radiated are a maximum in a
plane that is perpendicular to the dipole and passing
through its midpoint. Furthermore, the power radiated is
zero along the dipole's axis. A mathematical solution of Maxwell's equations for the dipole shows that the intensity of the radiation varies as

I ∝ (sin²θ)/r²

where θ is measured from the axis of the antenna.

3. Applications of electromagnetic waves

1) Electromagnetic waves in the centimeter and millimeter ranges are reflected by obstacles met on their path. This phenomenon underlies radar - the detection of objects (such as aircraft, ships, etc.) at great distances and the precise determination of their position. If an echo returns after a time t, the range is
R = ct/2
2) The phenomenon of diffraction of waves around various obstacles is typical of electromagnetic waves. Because of the diffraction of radio waves, steady radio communication is possible between remote objects separated by a convexity of the Earth.
3) Long wavelengths (hundreds or thousands of meters) are used in radio; short waves (a few meters or less) are used in television for the transmission of images over short distances (a little more than the direct line of sight).
4) In astronomy, to study the radio emission of celestial bodies.

It is almost impossible to give a full account of the uses of electromagnetic waves, since there is no area of science and technology where they are not applied.
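The radar range formula R = ct/2 from item 1 above divides by two because the pulse travels to the target and back. A minimal sketch with an assumed echo delay:

```python
C = 2.99792458e8  # m/s, speed of light

def radar_range(t_echo):
    """Radar range R = c*t/2: the pulse covers the distance twice (out and back)."""
    return C * t_echo / 2.0

# Example: an echo returning after 1 ms corresponds to ~150 km
print(radar_range(1e-3))  # ~1.499e5 m
```

The microsecond-scale delays involved are why radar sets need fast timing electronics even for targets tens of kilometers away.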
EXPERIMENTAL CREATION AND APPLICATIONS OF
ELECTROMAGNETIC WAVES
Radio waves are the result of the accelerated motion of charged particles when an alternating current flows through wires. These waves span the range λ = 0.1 m ÷ 10⁴ m. They are usually radiated by LC-type electronic devices and are used in radio and television communication systems.
Let us connect a pair of metal rods to a battery. (a) When the switch is open, there is no current and no electric or magnetic fields. (b) Immediately after we close the switch, the rods begin to charge, and since the current changes, the rods radiate alternating electric and magnetic fields. (c) After the rods are fully charged, the current stops; the electric field is then at its maximum and the magnetic field is zero. Now let us look at the radiation of an EM wave by an antenna. In this device, two rods are connected to an alternating current source. Under the influence of the source, charges move back and forth between the rods, and these accelerated charges cause the antenna to radiate.
The wavelength of microwaves varies in the interval λ = 10⁻⁴ ÷ 0.3 m. These waves can also be radiated by electronic devices. Because of their short wavelength, they are used in radar systems and for studying the atomic and molecular properties of matter. Microwave ovens are one of the interesting home applications of this type of waves.
The wavelength of infrared waves varies in the range from 10⁻³ m down to 7·10⁻⁷ m (the longest wavelength of visible light). These waves are emitted by molecules and by objects at room temperature and are easily absorbed by most substances. The energy absorbed by objects excites their atoms, intensifies their oscillatory and translational motions, and increases their internal energy (temperature). IR radiation has scientific and practical applications in many fields, e.g. physical therapy, IR photography, vibrational spectroscopy, night vision devices and thermal imaging devices.
As a result of the non-uniform heating of a surface, the thermal radiation of its different parts also differs, which makes it possible to record a picture of the temperature distribution. This picture can be made visible by assigning a color to each temperature on a display. The temperature resolution reaches 0.05 - 0.1 degree.
Since the Earth's atmosphere is more transparent for the range of 8-14
microns and 3-5.5 microns, thermal imagers work in this range and it
becomes possible to observe objects with a temperature of -50 to 500
degrees from greater distances. In this range, the influence of factors
such as fog, rain, snow, smoke on observation is minimal.
Visible light is the most familiar type of radiation on the electromagnetic wave scale; it is the type of radiation perceived by the human eye. Visible light is emitted when electrons make transitions in atoms and molecules. The wavelengths corresponding to the different colors range from red (7·10⁻⁷ m) to violet (4·10⁻⁷ m). The sensitivity of the human eye depends on the wavelength and is greatest at a wavelength of 5.5·10⁻⁷ m (the yellow-green region).

A significant part of the radiation of solid bodies heated to about 3000 degrees (for example, the filament of a powerful electric lamp) falls in the UV range, and this fraction increases with increasing temperature. In a gas discharge, the luminous path of the current is a source of strong UV radiation, for example in lightning or an arc discharge. The Sun and the stars are also sources of UV radiation. Only the long-wavelength part of their UV radiation reaches the Earth's surface; the short-wavelength UV radiation is absorbed 30-200 km above the Earth's surface, where it ionizes the air.
The applications of UV radiation are related to its main properties: high chemical activity (it accelerates chemical reactions and biological processes), its bactericidal effect (it destroys microorganisms), and the luminescence of substances (they emit light of different colors under UV). UV radiation is a powerful tool for art experts: fresher varnishes and paints appear darker in UV light. To protect the banknotes of different countries from counterfeiting, they are equipped with protective elements that are visible only in UV rays.
The wavelength of X-rays varies in the interval 10⁻⁸ ÷ 10⁻¹² m. A widespread source of X-rays is the braking (bremsstrahlung) of high-energy electrons bombarding metal targets. X-rays are used in medicine for diagnosis and for the treatment of a number of cancers. Since X-rays can destroy living tissue, it is necessary to protect against high doses of X-rays. The diffraction of X-rays by crystals is used to study crystal structure.
Gamma rays are emitted by radioactive nuclei (e.g. Co, Cs) as well as in certain nuclear reactions. High-energy gamma radiation is also present in the cosmic rays entering the Earth's atmosphere from space. The wavelength of gamma rays is of the order of 10⁻¹⁰ ÷ 10⁻¹⁴ m. These rays have great penetrating ability and are dangerous for living organisms when absorbed. Those working near such sources should be protected by thick shielding layers of lead.

DOPPLER EFFECT FOR ELECTROMAGNETIC WAVES


When the source and receiver of electromagnetic waves move
relative to each other, the Doppler effect is observed - i.e., a change in
the frequency of the wave received by the receiver. Unlike the Doppler
effect in acoustics, the regularity of the Doppler effect here can be
determined on the basis of the theory of relativity. When the source and
receiver move along a straight line, the frequency of the wave recorded
by the receiver is determined by the following expressions
When the source and receiver approach each other,

f = f0·√((1 + υ/c)/(1 − υ/c))

and when they recede from each other,

f = f0·√((1 − υ/c)/(1 + υ/c))
If we consider the non-relativistic case υ ≪ c and neglect terms of order υ²/c²,
the approach formula reduces to

f ≈ f0/(1 − υ/c) ≈ f0·(1 + υ/c)
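As a quick numerical illustration, the exact relativistic formula and its first-order approximation can be compared in a short script (the source frequency and speed below are illustrative values, not from the text):

```python
import math

def doppler_frequency(f0, v, c=3.0e8, approaching=True):
    """Relativistic longitudinal Doppler effect for EM waves.
    f0: emitted frequency (Hz); v: relative speed of source and
    receiver (m/s); approaching selects the first formula above."""
    beta = v / c
    if approaching:
        return f0 * math.sqrt((1 + beta) / (1 - beta))
    return f0 * math.sqrt((1 - beta) / (1 + beta))

# Non-relativistic limit: for v << c the approach formula reduces
# to f ~= f0 * (1 + v/c), as in the text.
f0 = 100e6                      # 100 MHz source (illustrative value)
v = 3.0e4                       # 30 km/s, so beta = 1e-4
exact = doppler_frequency(f0, v, approaching=True)
approx = f0 * (1 + v / 3.0e8)
```

For this speed the exact and approximate results differ only in the ninth significant figure, which is why the non-relativistic formula suffices for most laboratory sources.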

Lecture 12

Basic of photometry.
Energy is carried everywhere by electromagnetic waves radiating from the source. This energy is
evaluated by effect to the eye or to some receiving devices. Optical phenomena that
characterized quantitatively such quantities are called photometric quantities.
The branch dealing with the measurement of photometric quantities is called photometry, and
the devices used for this purpose are called photometers.
Photometers are subjective (visual, i.e. based on eye observation) and objective (eye
participation is not important). Determination of photometric quantities in objective photometers
based on photographic and electrical methods.
The basis of photographic methods is the darkening of the photographic plate that is
proportional to the light energy falling.
The principle of operation of electric photometers is based on the electrical effect of light
(photocell, photo amplifier, photo resistance, etc.). The simplest photoelectric photometer
consists of a photocell and a galvanometer. A sensitive galvanometer measures the photocurrent
produced by light. If the galvanometer scale is graduated in lux, it indicates the illumination directly. One
of the advantages of objective photometers is that they can also be used to determine photometric
quantities for invisible ultraviolet and infrared radiation.
The main photometric quantities are as follows.
Light flux: The amount of light energy emitted in a unit time from the closed surface surrounding
the light source is called the full light flux of that source.
If the amount of energy received by the light source from the outside per unit time remains
constant for any period of time, the total luminous flux of the source remains constant. However,
by changing the flux of light emitted in other directions by any means, it is also possible to
change the flux emitted by the source in a certain direction. Therefore, we are often talking about
a flux of light passing through a given surface.
Here, dW is the light energy passing through the given surface in time dt. The unit of
light flux is the same as the unit of power and can be determined from the amount of heat the
flux delivers to a body that absorbs it completely.
However, since the electromagnetic waves radiated by light sources are not only visible
light, the light flux is measured in other units; only the total radiation flux is
measured in units of power. The unit of light flux is defined with the help of the unit of the
photometric quantity called luminous intensity. The light flux radiated by a point light source
within a unit solid angle is called the luminous intensity:

Φ = dW/dt;  I = dΦ/dΩ (11.1)

In general, the light intensity (I) depends on the direction of radiation. If the spread of light
intensity does not depend on its direction, such sources are called isotropic sources. For isotropic
light sources light intensity
I = Φ/Ω (11.2)
is expressed in the form. Here, Φ is the total light flux from the spheric surface surrounding the
light source, and Ω is the value of the solid angle surrounding the point source. Sources with
different light intensity in different directions are called anisotropic sources.
The SI unit of luminous intensity is called the candela (cd) and is one of the seven base
units of measurement. At normal atmospheric pressure (101325 Pa), the intensity of light radiated
by a complete (blackbody) radiator whose temperature equals the solidification temperature of pure
platinum (2046.6 K) from every 1/600000 m² of its surface, in the direction perpendicular to this
surface, is called 1 candela. The unit of luminous flux can be given using the unit of luminous
intensity.
Illumination of a surface is characterized by the physical quantity

E = (I/r²)·cos φ (11.3)

Here r is the distance from the light source to the given point of the surface, and φ is the angle
between the radius vector and the surface normal. The unit of illumination is the lux: 1 lx = 1 lm/m².
The unit of luminous flux is a derived unit. According to expression (11.2) we can write

dΦ = I·dΩ (11.4)

The unit of luminous flux is called the lumen (lm): one lumen is the luminous flux emitted by a
point source of luminous intensity 1 candela within a solid angle of 1 steradian:
1 lm = 1 cd·sr
During practical work it is often more convenient to express the flux of light in
units of power. Let us therefore relate the lumen to the watt. However, it should be
noted that this relationship cannot be universal. Since the luminous flux of a given
source is only part of its total radiant flux, the number of lumens per watt is a
characteristic quantity of that source. This quantity is called the luminous efficacy of
the radiation flux and is measured in lm/W. The human eye shows different sensitivity to
different wavelengths of light; this dependence is characterized by the visibility function.
The normal human eye is most sensitive to the wavelength λ = 0.555 µm (yellow-green light).
The luminous efficacy at that value of the wavelength is therefore taken as the maximum;
measurements show that it is about 683 lm/W.
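A short sketch of how Eq. (11.3) and the flux relation Φ = IΩ are used in practice; the source intensity and distance below are illustrative values, not from the text:

```python
import math

def illuminance(I, r, phi_deg=0.0):
    """Illuminance E in lux from Eq. (11.3): E = (I / r^2) * cos(phi).
    I: luminous intensity (cd); r: distance (m); phi: angle between
    the ray and the surface normal, in degrees."""
    return I / r**2 * math.cos(math.radians(phi_deg))

# Illustrative numbers: a 100 cd isotropic source viewed from 2 m
E_normal = illuminance(100.0, 2.0)         # 25 lx at normal incidence
E_tilted = illuminance(100.0, 2.0, 60.0)   # halved at 60 degrees

# Total flux of an isotropic source over the full solid angle 4*pi sr
Phi_total = 100.0 * 4 * math.pi            # lumens, Phi = I * Omega
```

Note the inverse-square falloff built into Eq. (11.3): doubling r would cut E_normal by a factor of four.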

Lecture 13
1. Interference. Coherent sources. Two-source interference. Interference condition.
Interference in light waves from two sources was first demonstrated by Thomas Young in
1801. A schematic diagram of the apparatus Young used is shown in Figure.

Plane light waves arrive at a barrier that contains two slits S 1 and S2. The light from S1 and S2
produces on a viewing screen a visible pattern of bright and dark parallel bands called fringes
(Fig. b). When the light from S1 and that from S2 both arrive at a point on the screen such that
constructive interference occurs at that location, a bright fringe appears. When the light from the
two slits combines destructively at any location on the screen, a dark fringe results.
Next Figure shows some of the ways in which two waves can combine at the screen.

In Figure a, the two waves, which leave the two slits in phase, strike the screen at the central
point. Because both waves travel the same distance, they arrive at point in phase. As a result,
constructive interference occurs at this location and a bright fringe is observed. In Figure b, the
two waves also start in phase, but here the lower wave has to travel one wavelength farther than
the upper wave to reach the point. Because the lower wave falls behind the upper one by exactly
one wavelength, they still arrive in phase at P and another bright fringe appears at this location.
At point in Figure c, however, the lower wave has fallen half a wavelength behind the upper
wave and a crest of the upper wave overlaps a trough of the lower wave, giving rise to
destructive interference at this point. A dark fringe is therefore observed at this location.
If two lightbulbs are placed side by side so that light from both bulbs combines, no
interference effects are observed because the light waves from one bulb are emitted
independently of those from the other bulb. The emissions from the two lightbulbs do not
maintain a constant phase relationship with each other over time.
Light waves from an ordinary source such as a lightbulb undergo random phase changes in
time intervals of less than a nanosecond. Therefore, the conditions for constructive interference,
destructive interference, or some intermediate state are maintained only for such short time
intervals. Because the eye cannot follow such rapid changes, no interference effects are
observed. Such light sources are said to be incoherent. To observe interference of waves from
two sources, the following conditions must be met:
• The sources must be coherent; that is, they must maintain a constant phase with respect to
each other.
• The sources should be monochromatic; that is, they should be of a single wavelength.
Let’s look in more detail at the two-
dimensional nature of Young’s
experiment with the help of Figure. The
viewing screen is located a perpendicular
distance L from the barrier containing
two slits, S1 and S2. These slits are
separated by a distance d, and the source
is monochromatic. To reach any arbitrary
point P in the upper half of the screen, a
wave from the lower slit must travel
farther than a wave from the upper slit.
The extra distance traveled from the
lower slit is the path difference δ (Greek
letter delta). If we assume the rays
labeled r1 and r2 are parallel, which is
approximately true if L is much greater
than d, then δ is given by
δ = r2 − r1 = d sin θ
The value of δ determines whether the two waves are in phase when they arrive at point P. If
δ is either zero or some integer multiple of the wavelength, the two waves are in phase at point P
and constructive interference results. Therefore, the condition for bright fringes, or constructive
interference, at point P is
d sin θbright = mλ   m = 0, ±1, ±2, ±3, ...
The number m is called the order number. For constructive interference, the order number is
the same as the number of wavelengths that represents the path difference between the waves
from the two slits. The central bright fringe at θbright = 0 is called the zeroth-order maximum. The
first maximum on either side, where m = ±1, is called the first-order maximum, and so forth.
When δ is an odd multiple of λ/2, the two waves arriving at point P are 180° out of phase and
give rise to destructive interference. Therefore, the condition for dark fringes, or destructive
interference, at point P is
d sin θdark = (m + 1/2)λ   m = 0, ±1, ±2, ±3, ...
These equations provide the angular positions of the fringes. It is also useful to obtain
expressions for the linear positions measured along the screen from O to P.
From the triangle OPQ in Figure, we see that
tan θ = y/L
Using this result, the linear positions of bright and dark fringes are given by
ybright = L·tan θbright
ydark = L·tan θdark
When the angles to the fringes are small, the positions of the fringes are linear near the center
of the pattern. That can be verified by noting that for small angles tan θ ≈ sin θ, so the equations
give the positions of the bright fringes as ybright = L sin θbright and dark fringes ydark = L sin θdark.
Incorporating these equations gives

ybright = L·(mλ/d) (small angles)

ydark = L·(m + 1/2)·(λ/d) (small angles)
This result shows that ybright and ydark are linear in the order number m, so the fringes are equally
spaced for small angles. As demonstrated, Young’s double-slit experiment provides a method for
measuring the wavelength of light.
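The small-angle fringe positions can be sketched numerically; the slit separation, screen distance, and wavelength below are illustrative values, not from the text:

```python
def y_bright(m, lam, L, d):
    """Small-angle position of the m-th bright fringe: y = m*lam*L/d."""
    return m * lam * L / d

def y_dark(m, lam, L, d):
    """Small-angle position of the m-th dark fringe: y = (m + 1/2)*lam*L/d."""
    return (m + 0.5) * lam * L / d

# Illustrative setup: 550 nm light, slits 0.2 mm apart, screen at 1.5 m
lam, L, d = 550e-9, 1.5, 0.2e-3
spacing = y_bright(1, lam, L, d) - y_bright(0, lam, L, d)  # = lam*L/d
```

The fringe spacing λL/d shows directly how measuring the spacing, L, and d yields the wavelength, which is the point of Young's experiment.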

The use of interference in technology

The phenomenon of interference is widely used to create various measuring and control devices.
Interferometers are devices that extract information from interference. They are widely used in
science and industry for the measurement of microscopic displacements, refractive index changes
and surface irregularities. In the case with most interferometers, light from a single source is split into
two beams that travel in different optical paths, which are then combined again to produce
interference; two incoherent sources can also be made to interfere under some circumstances
though.[3] The resulting interference fringes give information about the difference in optical path
lengths. In analytical science, interferometers are used to measure lengths and the shape of optical
components with nanometer precision; they are the highest precision length measuring instruments
in existence. In Fourier transform spectroscopy they are used to analyze light containing features of
absorption or emission associated with a substance or mixture. An astronomical
interferometer consists of two or more separate telescopes that combine their signals, offering a
resolution equivalent to that of a telescope of diameter equal to the largest separation between its
individual elements.

1. There are special devices - interferometers, the operation of which is based on the phenomenon of
interference. Their purpose is to accurately measure wavelengths, refractive indices, linear expansion
coefficients, etc.

The operation of all interferometers is based on the same principle, and the interferometers differ only
in design. The figure shows a simplified diagram of the Michelson interferometer.

Fig. 28

A monochromatic beam of light from a source S is incident at an


angle of 45° onto a plane-parallel plate P1. The side of the plate
far from S is covered with a thin layer of silver in such a way that
it will let through half of the light beam and reflect half (a
translucent plate), i.e. here the beam is divided into two parts:
beam 1 is reflected from the silver-plated layer, beam 2 passes
through it. Beam 1 is reflected from the mirror M1 and,
returning back, again passes through the plate P1 (beam 1').
Beam 2 goes to mirror M2, is reflected from it, returns back and
is reflected from plate P1 (beam 2'). Since the first beam passes
plate P1 twice, to compensate for the path difference that has arisen, a plate P2 is placed in the path of
the second beam (exactly the same as P1, only not covered with a layer of silver).

Beams 1' and 2' are coherent, therefore, interference will be observed, the result of which depends on
the optical path difference of beam 1 from point O to mirror M1 and beam 2 from point O to mirror M2.
When one of the mirrors is moved to a distance of λ/4, the difference between the paths of both beams
will change by λ/2, and in the interference pattern the maximum will shift to the minimum, and vice
versa, i.e. the interference maximum will shift by half the distance between the fringes. Such a shift of
the fringes is clearly visible to the observer. Therefore, from a slight shift of the interference
pattern one can judge the small displacement of one of the mirrors and use the interferometer
for fairly accurate (~10⁻⁹ m) measurements of lengths (body lengths, light wavelengths,
determination of the temperature coefficient of linear expansion, etc.).
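The length-measurement idea above can be sketched as follows: counting N fringe shifts gives a mirror displacement of Nλ/2. The fringe count and wavelength below are illustrative values:

```python
def mirror_displacement(n_fringes, lam):
    """Michelson interferometer: each full fringe shift corresponds to a
    mirror displacement of lam/2, since the round-trip path difference
    changes by lam."""
    return n_fringes * lam / 2

# Illustrative: counting 100 fringe shifts with 633 nm (He-Ne) light
d = mirror_displacement(100, 633e-9)   # 3.165e-5 m, i.e. about 31.7 um
```

Because the count N can be made exact, the uncertainty of the method is set by the fraction of a fringe that can still be judged by eye, which is how the ~10⁻⁹ m precision quoted above arises.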

2. Using the phenomenon of interference, it is possible to assess the quality of the surface treatment of
the product with an accuracy of 10⁻⁶ cm. To do this, you need to create a thin wedge-shaped layer of air
between the surface of the sample and a very smooth reference plate. Surface irregularities will cause
noticeable curvature of the interference fringes formed by the reflection of light from the surface under
test and the bottom of the reference plate. Figure 29 shows the observed interference patterns when
deviating from the required machining accuracy and when the required machining accuracy of the flat
surface of the part D is reached.

Fig. 29

3. Antireflection coating of optics ("blooming"). A polished glass surface reflects about 4% of the light falling perpendicularly on it.
Modern optical devices consist of a large number of optical glasses - lenses, prisms, etc. Therefore, the
total loss of light in a camera lens is about 25%, in a microscope - 50%, etc. As a result, the illumination
of the image is low, and the quality of the image is also degraded.

Part of the light beam, after multiple reflections from internal surfaces, still passes through the optical
device, but is scattered and no longer participates in creating a clear image, and a "veil" is formed in the
photograph.

Thus, the intensity of the transmitted light is attenuated and the luminosity of the optical device
decreases. In addition, reflections from lens surfaces lead to glare, which often (for example, in military
technology) unmasks the position of the device.

To eliminate these shortcomings, so-called antireflection coating of the optics is carried out: thin
films with a refractive index lower than that of the lens material are applied to the free surfaces of the
lenses. When light is reflected from the air–film and film–glass interfaces, interference of coherent rays
1 and 2 occurs (Fig. 30).
AR layer

Fig. 30

The film thickness d and the refractive indices of the glass nc and the film n can be chosen so that the
waves reflected from both surfaces of the film cancel each other out. For this, their amplitudes must be
equal and the optical path difference must equal

Δ = (2m + 1)·λ0/2

The calculation shows that the amplitudes of the reflected rays are equal if

n = √nc (3.10)

Since nc, n and the refractive index of air n0 satisfy the condition nc > n > n0, the half-wave
loss occurs at both surfaces; hence the minimum condition (assuming the light is incident
normally, i.e. the angle of incidence i0 = 0) is

Δ = 2nd = (2m + 1)·λ0/2

where nd is the optical thickness of the film. Usually m = 0 is taken; then nd = λ0/4.

Thus, if this condition is satisfied and the optical thickness of the film equals λ0/4, the reflected
rays are quenched by interference. Since it is impossible to achieve simultaneous quenching for all
wavelengths, this is usually done for the wavelength to which the eye is most sensitive, λ0 = 0.55 μm.
Therefore, lenses with coated optics have a characteristic bluish-violet tint.
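A minimal sketch of the two design conditions above, n = √nc and nd = λ0/4; the glass index and design wavelength below are illustrative values:

```python
import math

def ar_coating(n_glass, lam0):
    """Ideal single-layer antireflection coating at normal incidence:
    film index n = sqrt(n_glass) (Eq. 3.10), and optical thickness
    n*d = lam0/4, so geometric thickness d = lam0 / (4*n)."""
    n_film = math.sqrt(n_glass)
    d = lam0 / (4 * n_film)
    return n_film, d

# Illustrative: crown glass n_c = 1.52, design wavelength 0.55 um
n_film, d = ar_coating(1.52, 0.55e-6)
optical_thickness = n_film * d          # equals lam0/4 by construction
```

In practice a real film material with exactly n = √nc may not exist, so a material with the nearest available index is used and the cancellation is only approximate.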

The creation of highly reflective coatings became possible only on the basis of multipath interference.
Unlike two-beam interference, which we have considered so far, multipath interference occurs when a
large number of coherent light beams are superimposed. The intensity distribution in the interference
pattern differs significantly; the interference maxima are much narrower and brighter than when two
coherent light beams are superimposed. Thus, the resulting amplitude of light oscillations of the same
amplitude at the intensity maxima, where the addition occurs in the same phase, is N times greater, and
the intensity is N2 times greater than from a single beam (N is the number of interfering beams). Note
that to find the resulting amplitude, it is convenient to use the graphical method, using the rotating
amplitude vector method. Multipath interference is carried out in a diffraction grating.
INTERFEROMETERS.

An interferometer is a measuring device that works on the basis of the interference


phenomenon of light. The working principle of the interferometer is as follows: light is separated
into two or more coherent bundles by means of one or another device. Light bundles pass
through different optical paths and fall on the screen, and as a result of their collection, an
interference picture is created. On the basis of the obtained interference picture, it is possible to
determine the phase difference of the interfered rays at any point. Interferometers are used to
measure small distances (for example, in instrumentation and machinery), as well as to evaluate
the quality of optical surfaces, to measure the refractive index and its variation (for example,
depending on pressure and temperature), and to check optical systems in general.
The most common types are the Michelson and Jamin interferometers. Interferometers are
mainly used to measure the dependence of the refractive index of gases on pressure and
temperature, and to accurately measure small distances (for example, linear expansion).
Michelson interferometer. This interferometer consists of two smooth-surfaced homogeneous
glass plates of the same thickness. Fig. shows the path of the rays in the Michelson
interferometer. Monochromatic rays from source S fall on glass plate P1 at an angle of 45˚. One
side of the P1 plate is polished with silver and becomes semi-transparent. Part of the beam is
reflected from this surface, and the second part passes through the glass.
Rays 1 and 2 are reflected from the plane mirrors M1 and M2 and, being coherent, interfere.
If we change the position of one of the mirrors by λ/4, the
difference in the paths of the rays will change by λ/2, and
small lengths can be determined very precisely with
the Michelson interferometer. Figure 11.1

Jamin interferometer. These interferometers are mainly used to determine the dependence of
the refractive indices of gases on pressure and temperature. The Jamin interferometer consists of
two identical glass plates of the same size placed parallel at a certain distance from each other.
The path of the beam in these interferometers is given in figure. To determine the refractive
index, two tubes of the same length l are taken. When beams 1 and 2 pass through gases with
known (n1) and unknown (nx) refractive indices, an additional path difference arises:

Δ = l(nx − n1)

Using the maximum condition: kλ = l(nx − n1)

The refractive index of the unknown gas can then be determined from this formula:

nx = n1 + kλ/l

For example, if a shift of the interference pattern by k = 1/5 of a fringe is registered, then we get
nx − n1 ≈ 10⁻⁶ for the case where l = 10 cm and λ = 0.5 μm. Interferometers thus measure the
refractive index with very high accuracy.
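The numerical example above follows directly from nx = n1 + kλ/l; the reference index used for air below is an assumed illustrative value:

```python
def unknown_index(n1, k, lam, l):
    """Jamin interferometer: n_x = n1 + k*lam/l, where k is the observed
    fringe shift, l the tube length (m), lam the wavelength (m)."""
    return n1 + k * lam / l

# The numerical example from the text: k = 1/5, l = 10 cm, lam = 0.5 um
n1 = 1.000292            # assumed reference index (air), for illustration
n_x = unknown_index(n1, 1 / 5, 0.5e-6, 0.10)
delta_n = n_x - n1       # = 1e-6, as stated in the text
```

The sensitivity scales with the tube length l, which is why long gas cells are used when very small index differences must be resolved.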
Lecture 14
Diffraction Grating.
A set of parallel slits of the same size, located at the same distance from each
other, is called a diffraction grating. The distance between the centers of two
adjacent slits (d ) is called the period of the diffraction grating or the grating
constant. A diffraction grating can be one-, two-, or three-dimensional (space
grating). A diffraction grating is used to determine the wavelength. When
explaining diffraction from a slit, we noted that the diffraction pattern on the
screen, that is, the intensity distribution, depends only on the direction of the
diffracted rays. In other words, moving
the slit to the left or right does not
affect the intensity distribution. Then, if
we look at diffraction from N slits, the
rays diffracted at the same angle from
all slits will be collected at a certain
point of the screen and increase the
total intensity:
J = N²·Jφ, where Jφ is the intensity produced by a single slit at that angle.

Note that the pattern observed with a diffraction grating is the result of the superposition
of coherent rays diffracted from all slits of the grating. The minimum intensity
condition that holds for diffraction from a single slit also holds for the diffraction grating:
if the rays leaving the edges of each slit at a certain angle produce darkness at a certain
point on the screen, then the rays from all slits together also produce darkness at that point.
The minimum intensity condition for the diffraction grating is:

b sin φ = kλ (k = 1, 2, 3, …)

If the path difference of the coherent rays emitted from the edges of two adjacent
slits equals a whole number of wavelengths, we get maximum illumination on the
screen. Then

(a + b) sin φ = kλ, or d sin φ = kλ (k = 1, 2, 3, …)

is the maximum condition of the diffraction grating. The maxima determined from
this expression are called main maxima. In addition, rays radiating from two
adjacent slits at a certain angle create additional minima on the screen. Between
the two additional minima lies an additional weak maximum. If the number of slits
of the diffraction grating is N, then the condition of additional minima:

d sin φ = k′·λ/N

Here k′ can take the values 1, 2, …, N−1, N+1, …, i.e. all integers except 0, N, 2N, etc.,
at which the main maxima are obtained. Thus, if a diffraction grating consists of N slits,
there are N−1 additional minima and N−2 additional weak maxima between any two
neighboring main maxima. The number of main maxima observed with the diffraction
grating is determined from the condition

k ≤ d sin φ/λ ≤ d/λ
A large number of grating slits leads to an increase in the intensity of the main
maxima. The position of the main maxima depends on the wavelength of the light.
If ordinary (white) light falls on the diffraction grating, all maxima except
the central maximum are spread into spectral colors, with the violet edge of each
spectrum nearest the central maximum. This property of the diffraction grating allows
it to be used as a spectral device.
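The maximum condition d sin φ = kλ can be sketched by listing every observable order; the grating period and wavelength below are illustrative values:

```python
import math

def principal_maxima(d, lam):
    """Diffraction angles of the principal maxima, d*sin(phi) = k*lam,
    for every order k with sin(phi) <= 1. Returns {k: phi in degrees}."""
    k_max = int(d / lam)
    return {k: math.degrees(math.asin(k * lam / d)) for k in range(k_max + 1)}

# Illustrative grating: 500 lines/mm, so d = 2 um; green light, 550 nm
d = 1e-3 / 500
angles = principal_maxima(d, 550e-9)   # orders k = 0, 1, 2, 3
```

The cutoff k ≤ d/λ appears here as the `int(d / lam)` bound: for this grating only orders up to k = 3 exist, since sin φ would exceed 1 beyond that.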

CHARACTERISTICS OF SPECTRAL DEVICES. RAYLEIGH CRITERION

As we mentioned, the diffraction grating is the most important element of


spectral devices used to measure the spectrum of radiation emitted by light sources.
If the difference between two wavelengths in the radiation is very small, the
intensity distributions in the diffraction patterns
corresponding to these waves overlap in such a way
that it is impossible to distinguish them on the screen.
Therefore, there is a minimal wavelength difference:
the waves can be distinguished in the spectrum only
when their wavelength difference exceeds this value.
The main characteristics of spectral devices (including diffraction
gratings) are dispersion and resolution. Dispersion characterizes the angular (or
linear) separation of two spectral lines of different wavelengths. The angular
dispersion is

D = dφ/dλ

Differentiating the maximum condition of the diffraction grating, d sin φ = kλ, gives
d cos φ·dφ = k·dλ, so for the angular dispersion we get

D = dφ/dλ = k/(d cos φ) ≈ k/d (for small φ, cos φ ≈ 1)

that is, the angular dispersion is inversely proportional to the grating constant.
Linear dispersion is defined as Dl = dl/dλ.

One of the main quantities characterizing an optical device, alongside its
dispersion, is its resolution. Depending on this characteristic of the device, two
wavelengths λ1 and λ2 with the same intensity and the same symmetrical
contour structure can be seen either as one line or as separate spectral lines. The
resolution of a spectral device is determined by the dimensionless quantity R = λ/δλ.
Here δλ is the smallest difference in the wavelengths of two spectral lines that can
still be observed separately. Rayleigh proposed the following criterion to determine
this quantity.
Two closely located identical point sources or spectral lines of equal
intensity can be distinguished if the central maximum of the diffraction
pattern of one point (line) coincides with the first minimum of the diffraction
pattern of the other point.
Let's find the relationship between the resolution and the parameters of the
diffraction grating. Suppose that, at the diffraction angle φ, the maximum of the wave
with wavelength λ2 falls on the minimum of the wave with wavelength λ1 (the Rayleigh
condition). The maximum condition is d sin φ = kλ2, and the minimum condition is:

d sin φ = kλ1 + λ1/N

According to the Rayleigh criterion, from these two expressions we get:

kλ1 + λ1/N = kλ2, or k(λ2 − λ1) = λ1/N

The resolution in this case is

R = λ1/δλ = kN, since δλ = λ2 − λ1 = λ1/(kN)

As can be seen from this expression, the resolution of the grating depends on the
number of slits; to get high resolution it is necessary to increase the number of
slits. Given that the intensity of the spectrum decreases sharply in high orders, it is
more appropriate to use the first- and second-order spectra when working with a
diffraction grating.
From here the maximum resolution of the grating can be determined. From the maximum
condition, k = d sin φ/λ; at sin φ = 1 (φ = π/2) we get kmax = d/λ. The maximum
resolution is then

Rmax = kmax·N = (d/λ)·N = l/λ

Here l = dN is called the working length of the grating. It follows that diffraction
gratings with different grating constants (d1, d2, …, dn) but with the same working
length (d1N1 = d2N2 = … = dnNn) have the same resolution.
A diffraction grating can be used as a spectral device. With its help, you can
get the spectrum of any chemical element (any body in an incandescent state). In
this way, the composition of the irradiated body can be studied (spectral analysis).
The spectral device used for this purpose is called a spectrograph. As can be seen
from the expression for D, at small values of the diffraction angle (cos φ ≈ 1) and
constant d, an increase in the number of slits N increases the intensity of the
spectral lines while sharply decreasing their angular widths.
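The relation R = kN can be illustrated with the classic sodium doublet (589.0 and 589.6 nm); the slit count below is an assumed illustrative value:

```python
def resolving_power(k, N):
    """Resolving power of a grating with N slits in order k: R = k*N."""
    return k * N

def min_delta_lambda(lam, k, N):
    """Smallest resolvable wavelength difference near lam, from the
    Rayleigh criterion: delta_lambda = lam / (k*N)."""
    return lam / resolving_power(k, N)

# Illustrative: can 500 slits separate the sodium doublet
# (589.0 and 589.6 nm, i.e. delta_lambda = 0.6 nm)?
d1 = min_delta_lambda(589e-9, 1, 500)   # ~1.18 nm: NOT resolved in order 1
d2 = min_delta_lambda(589e-9, 2, 500)   # ~0.59 nm: just resolved in order 2
```

This also illustrates the trade-off noted above: moving to a higher order improves resolution but costs intensity, so in practice one increases N rather than k.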

Lecture 15
MICROSCOPE

A microscope is a device for magnifying the image of the viewed object and for examining
details that cannot be seen with the naked eye. Its action is based on a special
arrangement of lenses.

The microscope-stand consists of a tube, an object chair, a gear called a rack, an eyepiece, a
revolver, and an objective.

The eyepiece is located at the top of the tube. When we look at an object in a microscope, we
place our eye at the eyepiece. The task of the eyepiece is to present to the observer's eye the
image formed inside the microscope. The eyepiece consists of 2 lenses and a frame that holds
them. The lens farthest from the tube (nearest the eye) is called the upper or "eye" lens, and
the other is called the lower (field) lens. The eyepiece is one of the two main magnifying parts
of a microscope and performs the function of a magnifying glass.

One of the two main magnifying parts of a microscope is the objective. The lens magnifies the
image by focusing on the object we want to capture. A microscope has 2-4 (mostly 3) lenses. Each lens
has a certain degree of magnification and consists of several lenses. Of these lenses, the task of the one
closest to the object under study is to magnify the image of the object, and the task of the others is to
eliminate the defects of the previous lens. The lenses are located on the rotating lower surface of the
revolver.

The revolver is located in the lower part of the tube in the microscope, above the objectives. The
objective surface of the revolver has the ability to rotate. Its main task is to move the lens to the right
place. There is a spring catch inside the revolver that engages when an objective is in the right
position for viewing the object, thus preventing the objective from moving.

Between the eyepiece and the revolver is the tube or sight tube. The tube has a guiding function:
it directs the light waves coming from the specimen towards the eye. The distance between the
objective and the eyepiece is called the optical length of the tube.

The object table is the part of the microscope where the preparation is placed and examined. There
are fixing springs on it. These clamping springs are elastic and press the object under examination to the
stage. The object table is included in the mechanical part of the microscope.

The resolving power of a microscope is its ability to show objects located at two close
points as separate, distinct images. Access to the microworld depends on the resolving power of
the device, which in turn depends on the wavelength of the light used in the microscope. The
limitation is that an object smaller than the wavelength can never be imaged; therefore, the
microworld can be described only by means of shorter-wavelength rays.

Depending on the required size, microscopes are divided into the following types:

Optical microscope, electron microscope, X-ray microscope, transmission atomic force microscope, laser
X-ray microscope (XFEL)

Lecture 16
Hydrogen-like atoms in quantum mechanics
Bohr combined ideas from Planck’s original quantum theory, Einstein’s concept of the
photon, Rutherford’s planetary model of the atom, and Newtonian mechanics to arrive at a
semiclassical structural model based on some revolutionary ideas. The structural model of the
Bohr theory as it applies to the hydrogen atom has the following properties:
1) The electron moves in circular orbits around the proton under the
influence of the electric force of attraction, as shown in the Figure.
2) (a) Only certain electron orbits are stable. When in one of these
stationary states, as Bohr called them, the electron does not emit
energy in the form of radiation, even though it is accelerating.
Hence, the total energy of the atom remains constant and
classical mechanics can be used to describe the electron’s
motion.
Bohr’s model claims that the centripetally accelerated electron does not continuously emit
radiation, losing energy and eventually spiraling into the nucleus, as predicted by classical
physics in the form of Rutherford’s planetary model.
(b) The atom emits radiation when the electron makes a transition from a more energetic initial
stationary state to a lower-energy stationary state. This transition cannot be visualized or
treated classically. In particular, the frequency f of the photon emitted in the transition is
related to the change in the atom’s energy and is not equal to the frequency of the electron’s
orbital motion. The frequency of the emitted radiation is found from the energy-conservation
expression
Ei - Ef = hf , (1)
where Ei is the energy of the initial state, Ef is the energy of the final state, and Ei > Ef . In
addition, energy of an incident photon can be absorbed by the atom, but only if the photon has
an energy that exactly matches the difference in energy between an allowed state of the atom
and a higher-energy state. Upon absorption, the photon disappears and the atom makes a
transition to the higher-energy state.
(c) The size of an allowed electron orbit is determined by a condition imposed on the electron’s
orbital angular momentum: the allowed orbits are those for which the electron’s orbital
angular momentum about the nucleus is quantized and equal to an integral multiple of ħ =
h/2π,
mevr = nħ (n = 1, 2, 3, ...) (2)
where me is the electron mass, v is the electron’s speed in its orbit, and r is the orbital radius.
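Combining the quantization condition (2) with Newton's second law for the circular orbit fixes the allowed radii and energies. A minimal numerical sketch, using standard CODATA constants (the constants themselves are not given in the lecture):

```python
from math import pi

# SI constants (CODATA, rounded)
hbar = 1.054571817e-34   # J*s
me   = 9.1093837015e-31  # kg
e    = 1.602176634e-19   # C
eps0 = 8.8541878128e-12  # F/m

def bohr_radius(n: int, Z: int = 1) -> float:
    """Orbit radius r_n from combining me*v*r = n*hbar with the Coulomb
    force equation me*v^2/r = Z*e^2 / (4*pi*eps0*r^2)."""
    return n**2 * hbar**2 * 4 * pi * eps0 / (Z * me * e**2)

def bohr_energy_eV(n: int, Z: int = 1) -> float:
    """Total orbit energy E_n = -Z^2 e^2 / (8 pi eps0 a0 n^2), in eV."""
    a0 = bohr_radius(1)
    return -Z**2 * e**2 / (8 * pi * eps0 * a0 * n**2) / e

print(f"a0 = {bohr_radius(1):.3e} m")      # the Bohr radius, ~5.29e-11 m
print(f"E1 = {bohr_energy_eV(1):.2f} eV")  # hydrogen ground state, ~-13.6 eV
```

The ground-state radius reproduces the Bohr radius a0 ≈ 0.529 Å and the ground-state energy the familiar −13.6 eV.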

The potential energy function for the hydrogen-like atom is that due to the electrical interaction between the electron and the nucleus:

U(r) = −Ze²/(4πε₀r) (3)

where Z is the number of protons in the nucleus and r is the radius of the electron's orbit.
The time-independent Schrödinger equation in three-dimensional rectangular coordinates is

−(ℏ²/2m)(∂²ψ/∂x² + ∂²ψ/∂y² + ∂²ψ/∂z²) + U(r)ψ = Eψ (4)

Equations of type (4) have solutions that satisfy the conditions that the wave function be finite, continuous, and single-valued only if the eigenenergies are

En = −Z²e²/(8πε₀a₀n²), n = 1, 2, 3, .... (5)

where a₀ is the Bohr radius.
The Figure shows an energy-level diagram with the energies of these discrete states and the corresponding quantum numbers n. The uppermost level corresponds to n = ∞ (or r = ∞) and E = 0.
Equations (1) and (5) can be used to calculate the frequency of the photon emitted when the electron makes a transition from an outer orbit to an inner orbit:

f = (Ei − Ef)/h = (e²/(8πε₀a₀h))(1/nf² − 1/ni²) (6)

Because the quantity measured experimentally is wavelength, it is convenient to use λ = c/f to express Equation (6) in terms of wavelength. Remarkably, the resulting expression, which is purely theoretical, is identical to the general form of the empirical relationships discovered by Balmer and Rydberg:

1/λ = RH (1/nf² − 1/ni²) (7)

provided the constant R = e²/(8πε₀a₀hc) is equal to the experimentally determined Rydberg constant (1.0973732×10⁷ m⁻¹).
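This agreement is easy to verify numerically. A short sketch (the constants are standard CODATA values, not given in the lecture):

```python
from math import pi

e    = 1.602176634e-19   # C
eps0 = 8.8541878128e-12  # F/m
h    = 6.62607015e-34    # J*s
c    = 2.99792458e8      # m/s
a0   = 5.29177210903e-11 # m, Bohr radius

# Theoretical Rydberg constant R = e^2 / (8 pi eps0 a0 h c)
R = e**2 / (8 * pi * eps0 * a0 * h * c)
print(f"R = {R:.6e} m^-1")  # close to the measured 1.0973732e7 m^-1

def emission_wavelength_nm(n_i: int, n_f: int) -> float:
    """Wavelength of the photon emitted in the transition n_i -> n_f."""
    inv_lam = R * (1 / n_f**2 - 1 / n_i**2)
    return 1e9 / inv_lam

# Balmer series (n_f = 2): the first line should come out near 656 nm (H-alpha)
print(f"H-alpha: {emission_wavelength_nm(3, 2):.1f} nm")
```

The computed constant matches the experimental Rydberg constant to within the rounding of the inputs, and the n = 3 → 2 transition lands on the red Balmer line.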

Spontaneous and stimulated emission. Lasers.


We have seen that an atom absorbs and emits electromagnetic radiation only at frequencies
that correspond to the energy differences between allowed states. At ordinary temperatures, most
of the atoms in a sample are in the ground state. If a vessel containing many atoms of a gaseous
element is illuminated with radiation of all possible photon frequencies (that is, a continuous
spectrum), only those photons having energy E2 - E1, E3 - E1, E4 - E1, and so on are absorbed by
the atoms. As a result of this absorption, some of the atoms are raised to excited states.
Once an atom is in an excited state,
the excited atom can make a transition
back to a lower energy level, emitting a
photon in the process as in Figure. This
process is known as spontaneous
emission because it happens naturally,
without requiring an event to trigger the
transition. Typically, an atom remains in
an excited state for only about 10⁻⁸ s.
In addition to spontaneous emission, stimulated emission occurs. Suppose an atom is in an
excited state E2 as in Figure. If the excited state is a metastable state—that is, if its lifetime is
much longer than the typical 10⁻⁸ s lifetime of excited states—the time interval until spontaneous
emission occurs is relatively long.
Let’s imagine that during that interval a photon of energy hf = E2 - E1 is incident on the atom.
One possibility is that the photon energy is sufficient for the photon to ionize the atom. Another
possibility is that the interaction between the incoming photon and the atom causes the atom to
return to the ground state and thereby emit a second photon with energy hf = E2 - E1. In this
process, the incident photon is not absorbed; therefore, after the stimulated emission, two
photons with identical energy exist: the incident photon and the emitted photon. The two are in
phase and travel in the same direction. These photons can stimulate other atoms to emit photons
in a chain of similar processes. The many photons produced in this fashion are the source of the
intense, coherent light in a laser (light amplification by stimulated emission of radiation).
The primary properties of laser light that make it useful in technological applications are the
following:
• Laser light is coherent. The individual rays of light in a laser beam maintain a fixed phase
relationship with one another.
• Laser light is monochromatic. Light in a laser beam has a very narrow range of wavelengths.
• Laser light has a small angle of divergence. The beam spreads out very little, even over large
distances.
• Stimulated-emission light is linearly polarized in the same polarization plane as the incident
radiation.
We have described how an incident photon can cause atomic energy transitions either upward
(stimulated absorption) or downward (stimulated emission). The two processes are equally
probable. When light is incident on a collection of atoms, a net absorption of energy usually
occurs because when the system is in thermal equilibrium, many more atoms are in the ground
state than in excited states. If the situation can be inverted so that more atoms are in an excited
state than in the ground state, however, a net emission of photons can result. Such a condition is
called population inversion.
Population inversion is, in fact, the fundamental principle involved in the operation of a laser.
For the stimulated emission to result in laser light, there must be a buildup of photons in the
system. The following three conditions must be satisfied to achieve this buildup:
• The system must be in a state of population inversion: there must be more atoms in an excited
state than in the ground state. That must be true because the number of photons emitted must
be greater than the number absorbed.
• The excited state of the system must be a metastable state, meaning that its lifetime must be
long compared with the usually short lifetimes of excited states, which are typically 10⁻⁸ s. In
this case, the population inversion can be established and stimulated emission is likely to
occur before spontaneous emission.
• The emitted photons must be confined in the system long enough to enable them to stimulate
further emission from other excited atoms. That is achieved by using reflecting mirrors at the
ends of the system. One end is made totally reflecting, and the other is partially reflecting. A
fraction of the light intensity passes through the partially reflecting end, forming the beam of
laser light.
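Why pumping is unavoidable can be sketched with the Boltzmann distribution: in thermal equilibrium the population ratio of two levels separated by ΔE is N2/N1 = exp(−ΔE/kT), which is always below 1, so inversion never arises spontaneously. The 1.8 eV transition energy below is an assumed, illustrative value:

```python
from math import exp

k_eV = 8.617333262e-5  # Boltzmann constant, eV/K

def boltzmann_ratio(delta_E_eV: float, T: float) -> float:
    """Equilibrium population ratio N2/N1 = exp(-dE / kT).
    In thermal equilibrium this is always < 1, so a population
    inversion (N2 > N1) can never arise without pumping."""
    return exp(-delta_E_eV / (k_eV * T))

# Hypothetical 1.8 eV laser transition (visible red) at room temperature:
ratio = boltzmann_ratio(1.8, 300)
print(f"N2/N1 = {ratio:.2e}")  # vanishingly small
```

Even at thousands of kelvin the ratio stays below 1, which is why a laser needs an external pumping mechanism rather than mere heating.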
The three-level diagram of optical pumping is shown in the figure. The lifetimes of levels E2 and E3 are ~10⁻³ s and ~10⁻⁸ s, respectively; level E2 is metastable. The transition between levels E3 and E2 is radiationless. The laser transition occurs between levels E2 and E1. In the ruby crystal, the levels E1, E2, and E3 belong to the chromium impurity atoms.
The ruby laser is pulsed at a wavelength of 694 nm. The radiation power per pulse can reach 10⁶–10⁹ W.
The helium–neon gas laser has a more complex level structure and working principle.
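A back-of-the-envelope sketch of the photon energy at the ruby wavelength (the photons-per-joule figure is illustrative; the lecture gives pulse power, not pulse energy):

```python
h = 6.62607015e-34   # J*s
c = 2.99792458e8     # m/s
e = 1.602176634e-19  # J per eV

lam = 694e-9  # ruby laser wavelength, m
E_photon = h * c / lam
print(f"photon energy: {E_photon:.3e} J = {E_photon / e:.2f} eV")

# A hypothetical 1 J pulse would contain N = 1 / E_photon photons:
print(f"photons per joule: {1 / E_photon:.2e}")
```

The ~1.8 eV photon energy matches the E2 − E1 spacing of the chromium laser transition.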

Types of lasers: gas, solid-state, semiconductor.

Modes of operation: pulsed, continuous. The radiation power can reach 10¹²–10¹³ W.

All lasers are composed of three main parts: 1. the active (working) medium; 2. the pumping
system (power source); 3. the optical resonator.

1 - active zone
2 - the source of pumping energy
3 - opaque mirror
4 - semitransparent mirror
5 - Laser Light

The main drawback of lasers is low efficiency (a few percent). However, recently developed semiconductor
lasers have an efficiency of about 50%. These lasers are essentially light-emitting diodes operating at high current density.

X-ray spectra
X-rays are emitted when high-energy electrons or any other
charged particles bombard a metal target. The x-ray spectrum
typically consists of a broad continuous band containing a series
of sharp lines as shown in Figure. In lecture 5, we mentioned that
an accelerated electric charge emits electromagnetic radiation.
The x-rays in Figure are the result of the slowing down of high-
energy electrons as they strike the target. It may take several
interactions with the atoms of the target before the electron gives
up all its kinetic energy. The amount of kinetic energy given up in
any interaction can vary from zero up to the entire kinetic energy
of the electron. Therefore, the wavelength of radiation from these
interactions lies in a continuous range from some minimum value
up to infinity.
It is this general slowing down of the electrons that provides the continuous curve in Figure,
which shows a cutoff of x-rays below a minimum wavelength λmin whose value depends on the
kinetic energy of the incoming electrons. X-ray radiation with its origin in the slowing down of
electrons is called bremsstrahlung, the German word for “braking radiation.”
The discrete lines in Figure, called characteristic x-rays and discovered in 1908, have a
different origin. Their origin remained unexplained until the details of atomic structure were
understood. The first step in the production of characteristic x-rays occurs when a bombarding
electron collides with a target atom. The electron must have sufficient energy to remove an
innershell electron from the atom. The vacancy created in the shell is filled when an electron in a
higher level drops down into the level containing the vacancy. The existence of characteristic
lines in an x-ray spectrum is further direct evidence of the quantization of energy in atomic
systems.
The characteristic spectra are notable for their simplicity. They consist of several series, designated by the letters K, L, M, N, and O (corresponding to the excitation of atomic levels with principal quantum number n = 1, 2, 3, 4, ..., respectively). Each series, in turn, contains a small set of individual lines designated, in descending order of wavelength, by the indices α, β, γ, ... (Kα, Kβ, Kγ, ... Lα, Lβ, Lγ, ...). The spectra of different elements have a similar character.
With increasing atomic number Z, the entire X-ray spectrum shifts to shorter wavelengths without changing its structure. This is because the electron transitions responsible for X-ray spectra occur in the inner parts of the atoms, which have a similar structure in all elements.
Exploring the X-ray spectra of the elements, G. Moseley established a relationship called Moseley's law:
f = R(Z − σ)²(1/m² − 1/n²)

where R is the Rydberg constant (in frequency units), m = 1, 2, 3, ... defines the X-ray series (K, L, M, ...), n is an integer starting from m + 1 (it defines the individual lines α, β, γ, ... of the corresponding series), and σ is the screening constant, which takes into account the shielding of the given electron from the atomic nucleus by the other electrons of the atom.
The meaning of the screening constant is that the electron making the transition is acted on not by the full nuclear charge Ze but by the effective charge (Z − σ)e, because the nuclear charge is weakened by the shielding effect of the other electrons.
Moseley's law made it possible to determine the nuclear charge from the measured wavelengths of the characteristic X-ray lines. It was X-ray studies that definitively established the arrangement of the elements in the periodic table.
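Moseley's law can be inverted to recover Z from a measured line. A sketch for the Kα line (m = 1, n = 2, with σ = 1, the standard choice for K-series screening; the ~154 pm copper Kα wavelength is a well-known measured value, not from the lecture):

```python
from math import sqrt

c   = 2.99792458e8  # m/s
R_f = 3.2898e15     # Rydberg constant in frequency units, Hz

def Z_from_K_alpha(wavelength_m: float) -> float:
    """Invert Moseley's law f = R (Z - sigma)^2 (1/1^2 - 1/2^2) for the
    K-alpha line, using the screening constant sigma = 1."""
    f = c / wavelength_m
    return 1 + sqrt(f / (R_f * (1 - 1 / 4)))

# Copper's K-alpha line is measured at roughly 154 pm; Moseley's law
# recovers the copper atomic number Z = 29:
print(f"Z = {Z_from_K_alpha(154e-12):.1f}")
```

This is exactly the procedure that let Moseley assign nuclear charges to the elements.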

Using Bragg's diffraction condition (derived below), research can be carried out in two directions based on the
diffraction pattern of X-rays observed in crystals:
1) if the wavelength λ of the incident X-rays is known, the structure of various crystals can be
determined by determining the interlayer distance d that characterizes the crystal lattice (X-ray
structural analysis) 2) if d is known in the crystals, then the wavelength λ of the incident X-rays
can be accurately determined (X-ray spectroscopy). Since the distance d (spatial crystal lattice
constant) is in the wavelength range of X-rays, the condition for diffraction from the crystal is
satisfied.
Suppose that X-rays of wavelength λ fall on a crystal with space-lattice constant d. Since rays 1 and 2 from the same source that have been diffracted by the crystal are coherent, if we collect them with the lens L we will observe an interference pattern on the screen E. The optical path difference Δ between the beams is taken to be

Δ = 2d sinθ

Here, d is the distance between the atomic planes of the given crystal, and θ is the angle between the beam and the lattice plane. On the other hand, applying the maximum condition Δ = kλ gives

2d sinθ = kλ

This expression is called Bragg's formula.
The structure of crystals and their chemical composition can be studied with the help of Bragg's formula. Given the wavelength λ of the incident beam and the measured angle θ corresponding to a given order k, the crystal-lattice spacing d can be found. This is the main parameter that determines the structure of the crystal. This method is called X-ray structural analysis.
Conversely, if the parameter d that determines the structure of the crystal is known, the value of λ can be found by measuring the value of θ corresponding to a certain value of k. This method is called X-ray spectral analysis.
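Both directions of analysis reduce to solving 2d sinθ = kλ for the unknown quantity. A minimal sketch (the 154 pm wavelength and 282 pm plane spacing are illustrative values):

```python
from math import asin, sin, radians, degrees

def bragg_angle_deg(wavelength_m: float, d_m: float, k: int = 1) -> float:
    """Glancing angle theta satisfying 2 d sin(theta) = k * lambda."""
    return degrees(asin(k * wavelength_m / (2 * d_m)))

def lattice_spacing(wavelength_m: float, theta_deg: float, k: int = 1) -> float:
    """X-ray structural analysis: recover d from a measured maximum."""
    return k * wavelength_m / (2 * sin(radians(theta_deg)))

# Illustrative numbers: 154 pm X-rays on a crystal with d = 282 pm
theta = bragg_angle_deg(154e-12, 282e-12)
print(f"first-order maximum at theta = {theta:.1f} deg")
print(f"recovered d = {lattice_spacing(154e-12, theta) * 1e12:.0f} pm")
```

Measuring θ for a known λ recovers d (structural analysis); with d known, the same relation yields λ (spectral analysis).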

Lecture 13. Band Theory of Solids

1. Formation of Bands
Let there be N initially isolated atoms of some substance. While they are isolated, they have completely identical schemes of energy levels. As the atoms approach one another, the interaction between them increases steadily, and the Pauli exclusion principle causes the levels to split: instead of a single level shared by all the electrons, N very close but distinct levels are formed. Thus, as the number of atoms grows, the number of combinations of wave functions grows, as does the number of possible energies. If we extend this argument to the large number of atoms found in solids (on the order of 10²³ atoms per cubic centimeter), we obtain a huge number of levels of varying energy so closely spaced that they may be regarded as a continuous band of energy levels, as shown in the Figures.

The splitting of the different levels is not the same. The levels of the valence electrons are strongly perturbed, while the levels filled by the inner electrons are only slightly perturbed.
The bands are separated by gaps in which no energy levels are possible. The allowed bands consist of very closely spaced energy levels. The number of possible states in a band is limited (equal to the number of atoms in the crystal), and the energy required for an electron transition from one allowed level to an adjacent one is negligible.

2. Metals, Insulators, and Semiconductors.


Band theory explains from a unified point of view the existence of metals, semiconductors, and dielectrics.
Depending on the degree of filling of the highest occupied band and on the width of the forbidden gap, the following cases are possible:
1) The last (highest) band is only partly filled. It may also happen that two bands overlap and create one half-empty band. These materials behave as metals.
2) The last band is completely full at a temperature of 0 K, but the energy gap between it and the next higher band is narrow (EG ~ 1 eV < 10²kT). These materials are called semiconductors.
3) Again the last band is completely full at 0 K, but the energy gap between it and the next higher band is large (EG > 3 eV ~ 10²kT). These materials are called insulators.

The lower, filled band is called the valence band, and the upper, empty or half-empty band is
the conduction band. (The conduction band is the one that is partially filled in a metal.) It is
common to refer to the energy separation between the valence and conduction bands as the
energy gap Eg of the material.
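The three cases above can be summarized as a toy classifier (the 3 eV boundary follows the lecture's criterion; real materials require a more careful treatment):

```python
from typing import Optional

def classify_material(gap_eV: Optional[float]) -> str:
    """Rough classification by band structure, following the lecture's
    criteria: no gap (partially filled or overlapping bands) -> metal;
    EG ~ 1 eV -> semiconductor; EG > 3 eV -> insulator.
    The boundaries are conventional, not sharp physical limits."""
    if gap_eV is None or gap_eV == 0:
        return "metal"
    return "semiconductor" if gap_eV < 3 else "insulator"

print(classify_material(None))  # metal (no gap at the Fermi level)
print(classify_material(1.1))   # semiconductor (silicon's gap)
print(classify_material(9.0))   # insulator (a wide-gap oxide, for example)
```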

3. Electrical Conduction in Semiconductors.

3.1 Intrinsic semiconductors.


Semiconductors have the same type of band structure as an insulator, but the energy gap is
much smaller, on the order of 1 eV. At T = 0 K, all electrons in these materials are in the valence
band and no energy is available to excite them across the energy gap. Therefore, semiconductors
are poor conductors at very low temperatures. The Fermi level is located near the middle of the
gap. Because EG is small (EG ~ 1 eV < 10²kT at room temperature), appreciable numbers of electrons are thermally
excited from the valence band to the conduction band. Because of the many empty levels above
the thermally filled levels in the conduction band, a small applied potential difference can easily
raise the electrons in the conduction band into available energy states, resulting in a moderate
current.
Charge carriers in a semiconductor can be
negative, positive, or both. When an electron
moves from the valence band into the conduction
band, it leaves behind a vacant site, called a hole,
in the otherwise filled valence band. This hole
(electron-deficient site) acts as a charge carrier in
the sense that a free electron from a nearby site
can transfer into the hole. Whenever an electron
does so, it creates a new hole at the site it
abandoned.
Therefore, the net effect can be viewed as the hole migrating through the material in the
direction opposite the direction of electron movement. The hole behaves as if it were a particle
with a positive charge +e.
A pure semiconductor crystal containing only one element or one compound is called an
intrinsic semiconductor. In these semiconductors, there are equal numbers of conduction
electrons and holes.

Such combinations of charges are called electron–hole pairs. In the presence of an external
electric field, the holes move in the direction of the field and the conduction electrons move in
the direction opposite the field (see Figure). Because the electrons and holes have opposite signs,
both motions correspond to a current in the same direction.

In a semiconductor, there is a dynamic equilibrium between two processes:
• the generation of free electrons and holes under the effect of thermal motion;
• recombination, when electrons and holes meet and mutually annihilate as free carriers.

3.2 Doped Semiconductors


When impurities are added to a
semiconductor, both the band structure of the
semiconductor and its resistivity are modified.
The process of adding impurities, called
doping, is important in controlling the
conductivity of semiconductors. For example,
when an atom containing five outer-shell
electrons, such as arsenic, is added to a Group
IV semiconductor, four of the electrons form
covalent bonds with atoms of the
semiconductor and one is left over (see
Figure).
This extra electron is nearly free of its parent atom and can be modeled as having an energy
level that lies in the energy gap, immediately below the conduction band (see Figure). Such a
pentavalent atom in effect donates an electron to the structure and hence is referred to as a donor
atom. Because the spacing between the energy level of the electron of the donor atom and the
bottom of the conduction band is very small (typically, approximately 0.05 eV), only a small
amount of thermal excitation is needed to cause this electron to move into the conduction band.
(Recall that the average thermal energy of an electron at room temperature is kT ≈ 0.026
eV.) Semiconductors doped with donor atoms are called n-type semiconductors because the
majority of charge carriers are electrons, which are negatively charged.
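The role of the small 0.05 eV donor spacing can be sketched with a Boltzmann factor, comparing it against excitation across a ~1 eV gap (the EG/2 effective exponent for the intrinsic case is the standard result; the numbers are illustrative):

```python
from math import exp

k_eV = 8.617333262e-5  # Boltzmann constant, eV/K

def excitation_factor(delta_E_eV: float, T: float) -> float:
    """Boltzmann factor exp(-dE / kT): a rough measure of how easily
    thermal motion supplies the energy dE."""
    return exp(-delta_E_eV / (k_eV * T))

T = 300  # room temperature, K (kT ~ 0.026 eV)

# Donor level ~0.05 eV below the conduction band, vs. a ~1 eV intrinsic
# gap (for which the standard effective exponent is EG/2 = 0.5 eV):
print(f"donor ionization factor:     {excitation_factor(0.05, T):.2e}")
print(f"intrinsic excitation factor: {excitation_factor(0.5, T):.2e}")
```

At room temperature the donor factor exceeds the intrinsic one by many orders of magnitude, which is why even light doping dominates the conductivity.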

If a Group IV semiconductor is doped with atoms containing


three outer-shell electrons, such as indium and aluminum, the
three electrons form covalent bonds with neighboring
semiconductor atoms, leaving an electron deficiency - a hole -
where the fourth bond would be if an impurity-atom electron
were available to form it (see Figure). This situation can be
modeled by placing an energy level in the energy gap,
immediately above the valence band, as in Figure before.
An electron from the valence band has enough energy at room temperature to fill this impurity
level, leaving behind a hole in the valence band. This hole can carry current in the presence of an
electric field. Because a trivalent atom accepts an electron from the valence band, such
impurities are referred to as acceptor atoms. A semiconductor doped with trivalent (acceptor)
impurities is known as a p-type semiconductor because the majority of charge carriers are
positively charged holes.
When conduction in a semiconductor is the result of acceptor or donor impurities, the material
is called an extrinsic semiconductor. The typical range of doping densities for extrinsic
semiconductors is 10¹⁴ to 10¹⁹ cm⁻³, whereas the electron density in a typical semiconductor is
roughly 10²¹ cm⁻³. The typical range of densities of electron–hole pairs for intrinsic
semiconductors at room temperature is 10⁹ to 10¹³ cm⁻³.
At low temperatures, the Fermi level is almost the same as the corresponding impurity level (donor or acceptor). At high temperatures, the impurity level is depleted and electrons are excited from the valence band to the conduction band, so intrinsic conductivity predominates. The Fermi level then shifts toward the center of the forbidden band, as in intrinsic semiconductors.
3.3 The dependence of semiconductor's conductivity on temperature

First consider intrinsic semiconductors.

At ordinary (room) temperatures, the energy of thermal excitation is much smaller than the width ΔE of the forbidden band (ΔE ~ 1 eV), so ΔE >> kT.

The Fermi–Dirac distribution function for electrons is

f(E) = 1/(e^((E−EF)/kT) + 1)

Here we use the fact that the Fermi level lies near the middle of the gap, so for electrons at the bottom of the conduction band E − EF ≈ ΔE/2 >> kT, and the exponential dominates:

f(E) ≈ e^(−ΔE/2kT)

The concentrations of free electrons ne in the conduction band and of holes nh in the valence band are proportional to the probability distribution function f(E):

ne ≈ nh ∝ e^(−ΔE/2kT)

This is the classical Boltzmann distribution. Thus, the electron gas in the semiconductor is classical, non-degenerate.
The current in the intrinsic semiconductor is composed of electron and hole currents. The specific conductivity is proportional to the concentration n of free carriers (γ ∝ n). Therefore:

γ = γ₀e^(−ΔE/2kT)

the resistivity is ρ = 1/γ:

ρ = ρ₀e^(ΔE/2kT)

and the resistance:

R = R₀e^(ΔE/2kT)
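The resulting exponential law is what makes thermistors so sensitive. A sketch using the standard intrinsic-semiconductor form R = R₀·exp(ΔE/2kT), with ΔE = 1 eV as an illustrative value:

```python
from math import exp

k_eV = 8.617333262e-5  # Boltzmann constant, eV/K

def resistance_ratio(delta_E_eV: float, T1: float, T2: float) -> float:
    """R(T2)/R(T1) for an intrinsic semiconductor, using the
    exponential law R = R0 * exp(dE / (2 k T)); R0 cancels."""
    x = delta_E_eV / (2 * k_eV)
    return exp(x / T2 - x / T1)

# A mere 10 K rise around room temperature roughly halves the resistance:
print(f"R(310 K)/R(300 K) = {resistance_ratio(1.0, 300, 310):.2f}")
```

A metal's resistance changes by only a few percent over the same interval, which is why semiconductor thermistors make such fast, sensitive thermometers.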

The strong temperature dependence of the resistivity of semiconductors is used in thermistors,
semiconductor devices for measuring temperature. Benefits:
 accuracy;
 small size;
 low heat capacity;
 rapid measurement (because the thermal equilibrium is reached quickly)

For doped semiconductors, the temperature dependence is more complicated.

At low temperatures, the Fermi level is almost the same as the impurity level.
At high temperatures, the impurity level is depleted and electrons are transferred from the valence band to the conduction band, giving intrinsic conductivity. The Fermi level moves to the center of the forbidden band, as in intrinsic semiconductors.
Thus, at low temperatures impurity conduction predominates, while at high temperatures intrinsic conductivity predominates.

Lecture 14. Contact phenomena

1. Contact phenomena in semiconductors


1.1. p-n junction
A fundamental unit of a semiconductor device is formed when a p-type semiconductor is
joined to an n-type semiconductor to form a p–n junction. A junction diode is a device that is
based on a single p–n junction. The role of a diode of any type is to pass current in one direction
but not the other. Therefore, it acts as a one-way valve for current.
The p–n junction shown in Figure consists of
three distinct regions: a p region, an n region, and
a small area that extends several micrometers to
either side of the interface, called a depletion
region or space charge region.
The depletion region may be visualized as
arising when the two halves of the junction are
brought together. The mobile n-side donor
electrons nearest the junction (pink area in Figure)
diffuse to the p side and fill holes located there,
leaving behind immobile positive ions. While this
process occurs, we can model the holes that are
being filled as diffusing to the n side, leaving
behind a region (blue area in Figure) of fixed
negative ions.
Because the two sides of the depletion region
each carry a net charge, an internal electric field
on the order of 10⁴ to 10⁶ V/cm exists in the
depletion region (see Figure).
This field produces an electric force on any remaining mobile charge carriers that sweeps
them out of the depletion region, so named because it is a region depleted of mobile charge
carriers. This internal electric field creates an internal potential difference ΔV that prevents
further diffusion of holes and electrons across the junction and thereby ensures zero current in
the junction when no potential difference is applied.
The operation of the junction as a diode is
easiest to understand in terms of the potential
difference graph shown in Figure. If a voltage ΔV
is applied to the junction such that the p side is
connected to the positive terminal of a voltage
source as shown in Figure B, the internal potential
difference ΔVo across the junction decreases; the
decrease results in a current that increases
exponentially with increasing forward voltage, or
forward bias.
For reverse bias (where the n side of the
junction is connected to the positive terminal of a
voltage source), the internal potential difference
ΔVo increases with increasing reverse bias.
The increase results in a very small reverse current (leakage current) that quickly reaches
saturation value Io.
The current–voltage relationship for an ideal diode is

I = Io(e^(eΔV/kT) − 1)
where the first e is the base of the natural logarithm, the second e represents the magnitude of
the electron charge, k is Boltzmann’s constant, and T is the absolute temperature. Figure shows
an I vs ΔV plot characteristic of a real p–n junction, demonstrating the diode behavior.
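The forward/reverse asymmetry of the junction follows directly from this equation. A sketch (Io = 1 pA is an assumed, illustrative saturation current):

```python
from math import exp

k = 1.380649e-23     # Boltzmann constant, J/K
e = 1.602176634e-19  # electron charge magnitude, C

def diode_current(dV: float, I0: float = 1e-12, T: float = 300.0) -> float:
    """Ideal-diode equation I = I0 (exp(e dV / kT) - 1).
    I0 is the reverse saturation current (1 pA is an assumed value)."""
    return I0 * (exp(e * dV / (k * T)) - 1)

# Forward bias: the current grows exponentially with dV
print(f"I(+0.6 V) = {diode_current(0.6):.3e} A")
# Reverse bias: the current saturates at -I0
print(f"I(-0.6 V) = {diode_current(-0.6):.3e} A")
```

At +0.6 V the current is on the order of 10⁻² A, while at −0.6 V it is pinned at about −10⁻¹² A: a one-way valve, as described above.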

1.2. Light-absorbing diodes.

1.2.1. Photoelectric effect with a barrier layer (photovoltaic cell).


The photovoltaic effect is the physical phenomenon of the creation of voltage or electric current in a material
upon exposure to light. The photovoltaic effect is closely related to the photoelectric effect. In
either case, light is absorbed, causing excitation of an electron or other charge carrier to a higher-
energy state. The main distinction is that the term photoelectric effect is now usually used when
the electron is ejected out of the material (usually into a vacuum) and photovoltaic effect used
when the excited charge carrier is still contained within the material. In either case, an electric
potential (or voltage) is produced by the separation of charges, and the light has to have a
sufficient energy to overcome the potential barrier for excitation. The internal field "pulls" the
charge carriers: the holes move toward the p-type semiconductor, and the electron moves toward
the n-type semiconductor.
A potential difference is created; it is called the photo-emf. In solar batteries, light
energy is directly converted into
electrical energy.

Benefits:
1. Ecological cleanliness;
2. Renewable alternative energy
source, in contrast to fossil - coal
and gas;
3. Can be used where there are no
power lines, and there is enough
sunlight (in deserts or on artificial
earth satellites).

Disadvantages of solar panels:


 low efficiency (12–16%)
 Fragility
 High cost

1.2.2. Photo-detector (photodiode)

The operation of the photodiode is


also based on the internal photoelectric
effect. An inverse voltage is applied to
the p-n junction. As a result, the depleted
layer becomes wider. The current
through the p-n junction is zero.
When a photon hits the photosensitive surface and is absorbed there,
an electron-hole pair (charge carriers) is formed in the depletion layer.
This leads to the appearance of a current through the p-n junction. The
aroused current as a signal is received by the corresponding electronic
circuit.
Semiconductor photodiodes replaced vacuum photodiodes with great
success.

1.3. Light-emitting diodes (LED)


The light-emitting diode is, as its name suggests, simply a diode. The working principle of the light-emitting diode is based on quantum theory: when an electron comes down from a higher energy level to a lower energy level, the energy is emitted as a photon. The photon energy is equal to the energy gap between these two energy levels.
If the p–n junction diode is forward biased, electrons and holes move across the junction and recombine constantly, removing one another. Soon after an electron moves from the n-type into the p-type silicon, it combines with a hole and disappears. This produces a complete, more stable atom and releases a little burst of energy in the form of a tiny packet, or photon, of light.
The quantum energy depends on the width of the forbidden gap. With a band gap of 1.7 to 3.4 eV, the energy of the emitted quanta corresponds to the visible spectral range, with wavelengths from about 700 to 400 nm.
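The quoted correspondence between band gap and wavelength follows from λ = hc/EG; a quick check:

```python
h = 6.62607015e-34   # J*s
c = 2.99792458e8     # m/s
e = 1.602176634e-19  # J per eV

def led_wavelength_nm(gap_eV: float) -> float:
    """Emitted wavelength lambda = h c / E_G for a photon whose energy
    equals the band gap."""
    return h * c / (gap_eV * e) * 1e9

# The lecture's range: gaps of 1.7-3.4 eV span roughly 700-400 nm
print(f"1.7 eV -> {led_wavelength_nm(1.7):.0f} nm (red end)")
print(f"3.4 eV -> {led_wavelength_nm(3.4):.0f} nm (violet end)")
```

The computed values (about 729 nm and 365 nm) bracket the visible range, in agreement with the approximate 700-400 nm span stated above.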
Radiated light spreads in all directions. To focus the radiation, a plastic lens is used.
The light of an LED is not monochromatic; its color depends on the composition of the semiconductor. To obtain white light, RGB color mixing is used: red, blue, and green LEDs are densely arranged in one matrix, and their radiation is mixed by an optical system such as lenses. The result is white light.

Advantages of LEDs:
1. Lifetime measured in decades;
2. Operate at low voltage, that is, they are electrically safe;
3. Absence of components harmful to the environment (mercury, etc.), unlike fluorescent
lamps;
4. High mechanical strength, vibration resistance;
5. Instant switching on of the LEDs after applying voltage to them makes it possible to
turn them on and off almost at an unlimitedly high frequency;
6. The latest achievements in LED manufacturing technology make it possible to obtain all
colors of the visible spectrum;
7. Compactness, small size.

Disadvantages:
1. High cost
2. Narrow spectral range of light
3. The current must be stabilized (due to the steepness of the current-voltage
characteristic)

1.4. Transistor.
The invention of the transistor by John Bardeen (1908–1991), Walter Brattain (1902–1987),
and William Shockley (1910–1989) in 1948 totally revolutionized the world of electronics. For
this work, these three men shared the Nobel Prize in Physics in 1956. By 1960, the transistor had
replaced the vacuum tube in many electronic applications. The advent of the transistor created a
multitrillion-dollar industry that produces such popular devices as personal computers, wireless
keyboards, smartphones, electronic book readers, and computer tablets.
a) A junction transistor (bipolar transistor) consists of
a semiconducting material in which a very narrow n region is
sandwiched between two p regions (p-n-p transistor) or a p
region is sandwiched between two n regions (n-p-n
transistor). In either case, the transistor is formed from two
p–n junctions. These types of transistors were used widely in
the early days of semiconductor electronics.

b) Field-effect transistor (FET)


During the 1960s, the electronics
industry converted many electronic
applications from the junction transistor to
the field-effect transistor, which is much
easier to manufacture and just as effective.
Figure shows the structure of a very
common device, the MOSFET, or metal-
oxide-semiconductor field-effect
transistor. You are likely using millions
of MOSFET devices when you are
working on your computer.
There are three metal connections (the M in MOSFET) to the transistor: the source, drain, and
gate. The source and drain are connected to n-type semiconductor regions (the S in MOSFET) at
either end of the structure. These regions are connected by a narrow channel of additional n-type
material, the n channel.
The source and drain regions and the n
channel are embedded in a p-type substrate
material, which forms a depletion region, as in
the junction diode, along the bottom of the n
channel. (Depletion regions also exist at the
junctions underneath the source and drain
regions, but we will ignore them because the
operation of the device depends primarily on
the behavior in the channel.)
The gate is separated from the n channel by a layer of insulating silicon dioxide (the O in
MOSFET, for oxide). Therefore, it does not make electrical contact with the rest of the
semiconducting material.
Imagine that a voltage source VSD is applied across the source and drain as shown in Figure
(negative to the source, positive to the drain). In this situation, electrons flow through the upper
region of the n channel. Electrons cannot flow through the depletion region in the lower part of
the n channel because this region is depleted of charge carriers. Now a second voltage VSG is
applied across the source and gate as in Figure. The positive potential on the gate electrode
results in an electric field below the gate that is directed downward in the n channel (the field in
“field-effect”).
This electric field exerts upward forces on electrons in the region below
the gate, causing them to move into the n channel. Consequently, the
depletion region becomes smaller, widening the area through which there is
current between the top of the n channel and the depletion region. As the
area becomes wider, the current increases. Thus, the gate acts as a valve.
If a varying voltage, such as that generated from music stored in the memory of a smartphone,
is applied to the gate, the area through which the source–drain current exists varies in size
according to the varying gate voltage. A small variation in gate voltage results in a large
variation in current and a correspondingly large voltage across the load resistor in Figure.
Therefore, the MOSFET acts as a voltage amplifier. A circuit consisting of a chain of such
transistors can result in a very small initial signal from a microphone being amplified enough to
drive powerful speakers at an outdoor concert.
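The voltage-amplification argument above can be sketched numerically with a small-signal model. The transconductance and load-resistor values below are illustrative assumptions, not figures from the text:

```python
# Toy small-signal model of the MOSFET amplifier stage described above.
g_m = 2e-3      # transconductance, A/V (assumed for illustration)
R_load = 10e3   # load resistor, ohm (assumed for illustration)

def output_swing(delta_v_gate):
    """Gate-voltage change -> drain-current change -> voltage across the load."""
    delta_i = g_m * delta_v_gate   # current variation in the channel
    return delta_i * R_load        # resulting voltage variation across the load

print(output_swing(1.0))  # a 1 V gate swing gives a 20 V output swing
```

With these assumed values the voltage gain is simply g_m·R_load = 20, which is how a small gate-voltage variation becomes a large voltage across the load resistor.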

The Integrated Circuit


Invented independently by Jack Kilby (1923–2005, Nobel Prize in Physics, 2000) at Texas
Instruments in late 1958 and by Robert Noyce (1927–1990) at Fairchild Camera and Instrument
in early 1959, the integrated circuit has been justly called “the most remarkable technology ever
to hit mankind.” Integrated circuits have indeed started a “second industrial revolution” and are
found at the heart of computers, watches, cameras, automobiles, aircraft, robots, space vehicles,
and all sorts of communication and switching networks.
In simplest terms, an integrated circuit is a collection of
interconnected transistors, diodes, resistors, and capacitors
fabricated on a single piece of silicon known as a chip.
Contemporary electronic devices often contain many integrated
circuits.
State-of-the-art chips easily contain several million components within a 1-cm² area, and the
number of components per square inch has increased steadily since the integrated circuit was
invented. The dramatic advances in chip technology can be seen by looking at microchips
manufactured by Intel. The 4004 chip, introduced in 1971, contained 2 300 transistors. This
number increased to 3.2 million 24 years later in 1995 with the Pentium processor. Sixteen years
later, the Core i7 Sandy Bridge processor introduced in November 2011 contained 2 270 million
transistors.
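As a rough check of these figures, the growth from the 4004 to the Pentium implies a doubling time close to the two years of Moore's law:

```python
import math

# Doubling time implied by the transistor counts quoted above:
# Intel 4004 (1971): 2,300 transistors; Pentium (1995): 3.2 million.
n0, n1 = 2_300, 3_200_000
years = 1995 - 1971
doublings = math.log2(n1 / n0)

print(years / doublings)  # roughly 2.3 years per doubling
```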

2. Contact phenomena in metals. Work function. Contact potential difference

In solid-state physics, the work function is the minimum thermodynamic work (i.e. energy)
needed to remove an electron from a solid to a point in the vacuum immediately outside the solid
surface. Here "immediately" means that the final electron position is far from the surface on the
atomic scale, but still too close to the solid to be influenced by ambient electric fields in the
vacuum.
When leaving the metal, electrons must overcome the potential barrier
at the metal-vacuum interface; its height is equal to the work function.
The work function is not a characteristic of a bulk material, but rather a
property of the surface of the material (it depends on the crystal face and
on contamination).
The work function W for a given surface is defined by the difference
W = −eϕ − EF
where −e is the charge of an electron, ϕ is the electrostatic potential in
the vacuum near the surface, and EF is the Fermi level (the electrochemical
potential of the electrons) inside the material. The term −eϕ is the energy of an
electron at rest in the vacuum near the surface. In words, the work function is thus
the thermodynamic work required to remove an electron from the material to a state
at rest in the vacuum near the surface.
If two different metals are brought into contact, a contact potential difference arises between them.

The First Law of Volta: the contact potential difference depends only on the chemical composition of the metals and on the temperature.

A metal is a potential well for an electron. In the isolated state, the two metals share the same vacuum level but have different Fermi levels. When the metals are brought into contact, the Fermi levels align: electrons of one metal pass from its higher-lying filled levels to the lower-lying free levels of the second metal, so the first metal becomes charged positively and the second negatively. The electron energy in the immediate vicinity of the surface of the right metal is then lower than near the left one.
This energy difference gives the external contact potential difference.

Between interior points of the metals an internal contact potential difference is also established, due to the difference between the Fermi energies of the metals:

Δϕin = (kT/e)·ln(n1/n2)

where Δϕin is the internal contact potential difference, k is the Boltzmann constant, T is the absolute temperature, and n1, n2 are the electron number densities in metals 1 and 2. Its order of magnitude is 0.01-0.1 V. The internal contact potential difference thus originates in the difference in electron concentrations of the two metals.
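This estimate is easy to reproduce with the classical expression Δϕin = (kT/e)·ln(n1/n2); the electron densities below are assumed for illustration, not taken from the text:

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K
e = 1.602176e-19   # elementary charge, C

def internal_contact_pd(n1, n2, T=300.0):
    """Internal contact potential difference (kT/e) * ln(n1/n2), in volts."""
    return (k * T / e) * math.log(n1 / n2)

# Illustrative electron densities (assumed): a factor-of-three ratio
print(internal_contact_pd(8.5e28, 2.8e28))  # ~0.03 V, within the 0.01-0.1 V range
```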

Volta Series.
A series in which the contact potentials of metals decrease: Al > Zn > Sn > Pb > Fe > Cu > Ag > Pt.
The Second Law of Volta: the potential difference does not depend on the intermediate metals, provided all contacts are at the same temperature. If we form a chain of several dissimilar metals 1, 2 and 3, the potential difference between the ends of the chain is the same as for direct contact of 1 and 3.

3. Thermoelectric phenomena
3.1. Seebeck effect
If a chain of dissimilar metals is closed and all contacts are at the same temperature, there is no current, since the total contact potential difference is zero. If the temperatures of the contacts differ, the total contact potential difference (the emf in the circuit) is nonzero; this is the thermo-emf:

ε = α(T1 − T2)

where α is the Seebeck coefficient and T1, T2 are the temperatures of the contacts.

The thermoelectric (Seebeck) effect is the occurrence of a current in a closed circuit composed of dissimilar conductors whose contacts are at different temperatures.
The first reason for the occurrence of the thermo-emf is the dependence of the Fermi energy on temperature. The second reason is the diffusion of electrons due to the uneven heating of the conductor.
The Seebeck coefficient characterizes the potential difference produced in a conductor whose ends are maintained at different temperatures:
α = Δϕ/ΔT

The thermoelectric effect is used in thermocouples (thermoelements) for accurate temperature measurements. A thermocouple is a temperature sensor consisting of two dissimilar conductors and a measuring device (a galvanometer). In the figure: 1 is the measuring instrument; 2 are the contact wires; 3 and 4 are the two different metals. The contacts of the thermocouple are maintained at different temperatures T1 and T2.
The Seebeck coefficient is greater for a contact of two semiconductors.
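A minimal sketch of a thermocouple reading, assuming the linear relation emf = α(T1 − T2); the Seebeck coefficient below is a typical metal-pair value of tens of microvolts per kelvin, an assumption for illustration rather than a figure from the text:

```python
# Thermo-emf of a thermocouple from emf = alpha * (T1 - T2).
alpha = 41e-6  # Seebeck coefficient of the pair, V/K (assumed)

def thermo_emf(t_hot, t_cold):
    """Thermo-emf in volts for contact temperatures t_hot and t_cold (K)."""
    return alpha * (t_hot - t_cold)

# Hot junction in boiling water, cold junction in melting ice:
print(thermo_emf(373.15, 273.15) * 1e3)  # ~4.1 mV for a 100 K difference
```

The millivolt-scale output is why a sensitive galvanometer (or amplifier) is needed in the measuring circuit.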

3.2. Peltier effect


The Peltier effect is the inverse of the thermoelectric (Seebeck) effect. When current passes through the contact of two dissimilar conductors, additional heat, beyond the Joule-Lenz heat, is released (or absorbed, depending on the direction of the current), directly proportional to the current: Q ∝ I.
Because of the contact potential difference in the p-n junction, a contact electric field exists in it.
If the electrons move against this field as the current passes, the field accelerates them, and they give up the surplus energy to the crystal lattice: the junction heats up.
If the electrons move along the field as they cross the junction, the field slows them down, and they take the missing energy from the ions in collisions with them: this junction cools.
When the direction of the current is reversed, the Peltier heat also changes sign. The magnitude of the Peltier heat generated at the junction is Q = αTIΔt, where α is the Seebeck coefficient, T the absolute temperature in kelvin, I the current, and Δt the time.
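The relation Q = αTIΔt can be evaluated directly; all numerical values below are illustrative assumptions, not figures from the text:

```python
# Peltier heat at a junction, Q = alpha * T * I * dt.
alpha = 41e-6   # Seebeck coefficient of the junction pair, V/K (assumed)
T = 300.0       # junction temperature, K (assumed)
I = 2.0         # current, A (assumed)
dt = 60.0       # duration, s (assumed)

Q = alpha * T * I * dt
print(Q)  # ~1.5 J released or absorbed, on top of the Joule heat
```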
3.3. Thomson effect
The phenomenon was discovered in 1854 by William Thomson (Lord Kelvin). He found that a reversible heat flow occurs into or out of a current-carrying conductor, its direction depending on whether the current flows from the colder to the warmer part of the metal or from the warmer to the colder. Any temperature gradient previously existing in the conductor is thus modified when a current is turned on. This heat is in addition to the Joule heat associated with the conductor's resistance. The Thomson effect does not occur in a current-carrying conductor that is initially at uniform temperature.

Lecture 15. The Atomic Nucleus. Radioactivity


1. The composition of the atomic nucleus.
All nuclei are composed of two types of particles: protons and
neutrons. The only exception is the ordinary hydrogen nucleus,
which is a single proton. We describe the atomic nucleus by the
number of protons and neutrons it contains, using the following
quantities:
• the atomic number Z, which equals the number of protons in
the nucleus (sometimes called the charge number)
• the neutron number N, which equals the number of neutrons
in the nucleus
• the mass number A = Z + N, which equals the number of
nucleons (neutrons plus protons) in the nucleus
A nuclide is a specific combination of atomic number and mass number that represents a
nucleus. In representing nuclides, it is convenient to use the symbol zAX to convey the numbers of
protons and neutrons, where X represents the chemical symbol of the element.
The proton carries a single positive charge e, equal in magnitude to the charge -e on the
electron (e =1.6∙10-19 C). The neutron is electrically neutral as its name implies. Because the
neutron has no charge, it was difficult to detect with early experimental apparatus and
techniques. Today, neutrons are easily detected with devices such as plastic scintillators.
The atomic mass unit u is defined in such a way that the mass of one atom of the isotope 12C
is exactly 12u, where 1 u is equal to 1.660 539∙10 -27 kg. According to this definition, the proton
and neutron each have a mass of approximately 1u.
It is often convenient to express the atomic mass unit in terms of its rest-energy equivalent.
For one atomic mass unit,
E=mc2 =(1.660 539∙10-27 kg)(2.997 92∙108 m/s)2 =931.494 MeV
where we have used the conversion 1 eV = 1.6∙10-19 J.
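The conversion above is easy to reproduce:

```python
# Rest-energy equivalent of one atomic mass unit, E = m * c^2.
u = 1.660539e-27      # atomic mass unit, kg
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176e-19     # joules per electron volt

E_MeV = u * c**2 / eV / 1e6
print(E_MeV)  # ~931.494 MeV
```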
Models of nucleus
In 1936, Bohr proposed treating nucleons like molecules in a drop of liquid. In this liquid-
drop model, the nucleons interact strongly with one another and undergo frequent collisions as
they jiggle around within the nucleus. This jiggling motion is analogous to the thermally agitated
motion of molecules in a drop of liquid.
The shell model of the nucleus, also called the independent-particle model, was developed
independently by two German scientists: Maria Goeppert-Mayer in 1949 and Hans Jensen
(1907–1973) in 1950. In this model, each nucleon is assumed to exist in a shell, similar to an
atomic shell for an electron. The nucleons exist in quantized energy states. The quantized states
occupied by the nucleons can be described by a set of quantum numbers.
1. Nuclear forces. The mass defect and the binding energy of the nucleus.
You might expect that the very large repulsive Coulomb forces between the close packed
protons in a nucleus should cause the nucleus to fly apart. Because that does not happen, there
must be a counteracting attractive force. The nuclear force is a very short-range attractive force, with a range of about 2 fm (1 fm = 10⁻¹⁵ m, the femtometer, sometimes called the fermi), that acts between all nuclear particles.
The nuclear force is independent of charge. In other words, the forces associated with the
proton–proton, proton–neutron, and neutron–neutron interactions are the same.
Eventually, the repulsive Coulomb forces between protons cannot be compensated by the
addition of more neutrons. This point occurs at Z = 83, meaning that elements that contain more
than 83 protons do not have stable nuclei.
The total mass of a nucleus is less than the sum of the masses of its individual nucleons.
Δm = Zmp + Nmn - Mnuc(zAX) = ZM(H) + Nmn - M(zAX)
where M(H) is the atomic mass of the neutral hydrogen atom, m n is the mass of the neutron,
Mnuc(zAX) - mass of isotope zAX, M(zAX) represents the atomic mass of an atom of the isotope zAX,
and the masses are all in atomic mass units. The mass of the Z electrons included in M(H)
cancels with the mass of the Z electrons included in the term M(zAX) within a small difference
associated with the atomic binding energy of the electrons.
This difference in energy is called the binding energy of the nucleus and can be interpreted as the energy that must be added to a nucleus to break it apart into its components.
Conservation of energy and the Einstein mass-energy equivalence relationship show that the binding energy Eb in MeV of any nucleus is
Eb = [ZM(H) + Nmn - M(zAX)]∙931.494 MeV/u
A plot of binding energy per nucleon Eb/A as a function of mass number A for various stable nuclei is shown in Figure. Notice that the binding energy in Figure peaks in the vicinity of A = 60. The nucleus 62Ni (Z = 28) has the largest binding energy per nucleon, 8.794 5 MeV.
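As a worked example of the binding-energy formula, here is helium-4; the atomic masses below are standard tabulated values, quoted here from memory:

```python
# Binding energy E_b = [Z*M(H) + N*m_n - M(X)] * 931.494 MeV/u for helium-4.
M_H = 1.007825    # atomic mass of 1H, u
m_n = 1.008665    # neutron mass, u
M_He4 = 4.002603  # atomic mass of 4He, u (Z = 2, N = 2)

E_b = (2 * M_H + 2 * m_n - M_He4) * 931.494
print(E_b)      # ~28.3 MeV total binding energy
print(E_b / 4)  # ~7.07 MeV per nucleon
```

Note that the electron masses cancel here exactly as described in the text, since M(H) and M(4He) are both atomic (not nuclear) masses.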
2. Natural radioactivity. The law of radioactive decay.
In 1896, Becquerel accidentally discovered that uranyl potassium sulfate crystals emit an
invisible radiation that can darken a photographic plate even though the plate is covered to
exclude light. This process of spontaneous emission of radiation by uranium was soon to be
called radioactivity.
Additional experiments, including Rutherford’s famous work on alpha-particle scattering,
suggested that radioactivity is the result of the decay, or disintegration, of unstable nuclei.
Three types of radioactive decay occur in radioactive substances: alpha (α) decay, in which
the emitted particles are 4He nuclei; beta (β) decay, in which the emitted particles are either
electrons or positrons; and gamma (γ) decay, in which the emitted particles are high-energy
photons. A positron is a particle like the electron in all respects except that the positron has a
charge of +e.
The decay process is probabilistic in nature and can be described with statistical calculations
for a radioactive substance of macroscopic size containing a large number of radioactive nuclei.
If N is the number of undecayed radioactive nuclei present at some instant, the rate of change of
N with time is
dN/dt = −λN  or  dN/N = −λ dt
where λ, called the decay constant, is the probability of decay per nucleus per second. The negative sign indicates that dN/dt is negative; that is, N decreases in time. Upon integration we get
N = No·e^(−λt)
The decay rate R (often referred to as the activity of a sample) is the number of decays per second:
R = |dN/dt| = λN
R = Ro·e^(−λt)
where Ro = λNo is the decay rate at t = 0.


The SI unit of activity is the becquerel (Bq):1 Bq = 1 decay/s
Another parameter useful in characterizing nuclear decay is the half-life T1/2:
T1/2 = ln 2 / λ

N = No·2^(−t/T1/2)
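The decay law and the half-life relation can be sketched together; the 10-minute half-life below is an arbitrary illustrative choice:

```python
import math

# Radioactive decay N(t) = N0 * exp(-lambda * t), with lambda = ln2 / T_half.
T_half = 600.0              # half-life, s (assumed: 10 minutes)
lam = math.log(2) / T_half  # decay constant, 1/s
N0 = 1.0e6                  # initial number of undecayed nuclei

def N(t):
    """Number of undecayed nuclei remaining at time t (seconds)."""
    return N0 * math.exp(-lam * t)

print(N(600.0) / N0)   # 0.5 after one half-life
print(N(1800.0) / N0)  # 0.125 after three half-lives
```

The activity follows the same exponential: R(t) = λN(t), so it too halves every T1/2.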

3. Types of radioactivity. Mutual transformations of nucleons.


As described in the previous section, radioactive nuclei decay by alpha (α), beta (β), or gamma (γ) emission. The displacement rules below summarize how each type of decay changes the atomic number Z and the mass number A of the parent nucleus.
Displacement rules
Alpha decay: zAX → z−2(A−4)Y + α
Beta-minus decay (an electron is emitted): zAX → z+1AY + β⁻
Beta-plus decay (a positron is emitted): zAX → z−1AY + β⁺
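The displacement rules are simple bookkeeping on (Z, A), which a short helper makes explicit:

```python
# Displacement rules: how each decay mode changes a parent nuclide (Z, A).
def decay(Z, A, mode):
    """Return the (Z, A) of the daughter nuclide for a given decay mode."""
    if mode == "alpha":   # emit a 4He nucleus
        return Z - 2, A - 4
    if mode == "beta-":   # a neutron converts to a proton, emitting an electron
        return Z + 1, A
    if mode == "beta+":   # a proton converts to a neutron, emitting a positron
        return Z - 1, A
    raise ValueError(f"unknown decay mode: {mode}")

# Uranium-238 (Z = 92) alpha-decays to thorium-234 (Z = 90):
print(decay(92, 238, "alpha"))  # (90, 234)
# Carbon-14 (Z = 6) beta-minus-decays to nitrogen-14 (Z = 7):
print(decay(6, 14, "beta-"))    # (7, 14)
```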

Gamma-ray photons, like their X-ray counterparts, are a form of ionizing radiation;
when they pass through matter, they usually deposit their energy by liberating electrons
from atoms and molecules. At the lower energy ranges, a gamma-ray photon is often
completely absorbed by an atom and the gamma ray’s energy transferred to a single
ejected electron (see photoelectric effect). Higher-energy gamma rays are more likely to
scatter from the atomic electrons, depositing a fraction of their energy in each scattering
event (see Compton effect). Standard methods for the detection of gamma rays are
based on the effects of the liberated atomic electrons in gases, crystals, and
semiconductors (see radiation measurement and scintillation counter).

Gamma rays can also interact with atomic nuclei. In
the process of pair production, a gamma-ray photon
with an energy exceeding twice the rest mass energy
of the electron (greater than 1.02 MeV), when passing
close to a nucleus, is directly converted into an
electron-positron pair (see photograph). At even
higher energies (greater than 10 MeV), a gamma ray
can be directly absorbed by a nucleus, causing the
ejection of nuclear particles (see photodisintegration)
or the splitting of the nucleus in a process known
as photofission.
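The 1.02 MeV pair-production threshold quoted above is just twice the electron rest energy:

```python
# Pair-production threshold: the photon must supply at least 2 * m_e * c^2
# to create an electron-positron pair.
m_e = 9.1093837e-31   # electron mass, kg
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electron volt

threshold_MeV = 2 * m_e * c**2 / eV / 1e6
print(threshold_MeV)  # ~1.022 MeV, matching the value quoted above
```

Half of this, 511 keV, is the energy of each photon produced in electron-positron annihilation, which is what PET scanners detect.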

Medical applications of gamma rays include the valuable imaging technique of positron emission tomography (PET) and effective radiation therapies to
treat cancerous tumours. In a PET scan, a short-lived positron-emitting radioactive
pharmaceutical, chosen because of its participation in a particular physiological process
(e.g., brain function), is injected into the body. Emitted positrons quickly combine with
nearby electrons and, through pair annihilation, give rise to two 511-keV gamma rays
traveling in opposite directions. After detection of the gamma rays, a computer-
generated reconstruction of the locations of the gamma-ray emissions produces an
image that highlights the location of the biological process being examined.

As a deeply penetrating ionizing radiation, gamma rays cause significant biochemical
changes in living cells (see radiation injury). Radiation therapies make use of this
property to selectively destroy cancerous cells in small localized tumours. Radioactive
isotopes are injected or implanted near the tumour; gamma rays that are continuously
emitted by the radioactive nuclei bombard the affected area and arrest the development
of the malignant cells.

Airborne surveys of gamma-ray emissions from Earth’s surface search for minerals
containing trace radioactive elements such as uranium and thorium. Aerial and ground-
based gamma-ray spectroscopy is employed to support geologic mapping, mineral
exploration, and identification of environmental contamination. Gamma rays were first
detected from astronomical sources in the 1960s, and gamma-ray astronomy is now a
well-established field of research. As with the study of astronomical X-rays, gamma-ray
observations must be made above the strongly absorbing atmosphere of Earth—typically
with orbiting satellites or high-altitude balloons (see telescope: Gamma-ray telescopes).
There are many intriguing and poorly understood astronomical gamma-ray sources,
including powerful point sources tentatively identified as pulsars, quasars,
and supernova remnants. Among the most fascinating unexplained astronomical
phenomena are so-called gamma-ray bursts—brief, extremely intense emissions from
sources that are apparently isotropically distributed in the sky.
