
L. Yaroslavsky


DIGITAL IMAGE PROCESSING:
APPLICATIONS

LECTURE NOTES


Course 0510.7211, Semester B



Tel Aviv University
Faculty of Engineering,
Department of Interdisciplinary Studies


L. Yaroslavsky. Course 0510.7211 Digital Image Processing: Applications
Lecture I. Images and Imaging Devices

Images are signals produced by special devices, imaging devices.
Designing and perfecting imaging devices have been among the main tasks of
modern science since Galileo Galilei and Newton.

Brief history of imaging devices
Eye-glasses (1st century A.D. or earlier)
Pliny the Elder (23-79 A.D.) wrote:
"Emeralds are usually concave so that they may concentrate the visual rays. The
Emperor Nero used to watch in an Emerald the gladiatorial combats."

The modern reinvention of spectacles occurred around 1280-1285 in Florence, Italy.
It is uncertain who the inventor was. Some give credit to a nobleman named Amati
(Salvino degli Armati, 1299). It has been said that he made the invention but told
only a few of his closest friends.
Camera obscura (pinhole camera) (Ibn al-Haytham, 10th century):


Printing devices (Gutenberg). The art of painting. Origin of the
theory of imaging (Renaissance painters: Leonardo da Vinci, Albrecht
Dürer, 15th-16th century)

A woodcut by Albrecht Dürer showing a perspectograph, an instrument used by painters in the 16th century.
Dürer was for a while credited with inventing this instrument; it goes, however, back to Leonardo da Vinci.
Magnifying glass (around 1600), Microscope (Joannes and
Zacharios Jansen, 1595; A. van Leeuwenhoek, R. Hooke, around 1660)

The inventor of the optical microscope is not known. Credit for the first microscope is usually
given to the Dutch spectacle-maker Joannes (from Middleburg, Holland, according to some
sources) and his son Zacharios Jansen. While experimenting with several lenses in a tube,
they discovered (around the year 1595) that nearby objects appeared greatly enlarged
(partly adapted from [1]). That was the forerunner of the compound microscope and
of the telescope.

The father of microscopy, Anthony
Leeuwenhoek of Holland (1632-1723),
started as an apprentice in a dry goods
store where magnifying glasses were used
to count the threads in cloth. He taught
himself new methods for grinding and
polishing tiny lenses of great curvature
which gave magnifications up to 270, the
finest known at that time. These led to the
building of his microscopes and the
biological discoveries for which he is
famous. He was the first to see and
describe bacteria, yeast plants, the teeming
life in a drop of water, and the circulation
of blood corpuscles in capillaries.
Robert Hooke, the English father of
microscopy, re-confirmed Anthony van
Leeuwenhoek's discoveries of the
existence of tiny living organisms in a drop
of water. Hooke made a copy of
Leeuwenhoek's microscope and then
improved upon his design.














Microscope of Hooke (R. Hooke,
Micrographia, 1665)


Modern Zeiss microscope

Telescope (Galileo, 1609)












Newton's reflecting telescope










Hubble space telescope: in principle, the same optics as in Newton's telescope

In 1609, Galileo, father of modern
physics and astronomy, heard of these
early experiments, worked out the
principles of lenses, and made a much
better instrument with a focusing device.
The scientific impetus produced by the
great discoveries made with the
telescope can be gauged from the
enthusiastic manner in which Huygens
in the Dioptrica speaks of these
discoveries. He describes how Galileo
was able to see the mountains and
valleys of the moon, to observe sun-spots
and determine the rotation of the
sun, to discover Jupiter's satellites and
the phases of Venus, to resolve the
Milky Way into stars, and to establish
the differences in apparent diameter of
the planets and fixed stars (after E.
Mach, The principles of Physical Optics,
Dover Publ., 1926).

Huygens (Dioptrica, de telescopiis) held the view that
only a superhuman genius could have invented the telescope on the basis
of theoretical considerations, but that the frequent use of spectacles and lenses
of various shapes over a period of 300 years contributed to its chance
invention.
Photography (Niépce, 1826; Daguerre, 1836; W. F. Talbot, 1844.
The first public report was presented by F. Arago on 19.8.1839 at a meeting of
L'Institut, Paris, France)

In the 19th century, scientists began to explore ways of fixing the image thrown by
a glass lens (J. N. Niépce, 1826; L. J. M. Daguerre, 1836; W. F. Talbot, 1844).
The first method of light writing was developed by the French commercial
artist Louis Jacques Mandé Daguerre (1787-1851). The daguerreotype was made on a
sheet of silver-plated copper, which could be inked and then printed to produce
accurate reproductions of original works or scenes. The surface of the copper was
polished to a mirrorlike brilliance, then rendered light-sensitive by treatment with
iodine fumes. The copper plate was then exposed to an image sharply focused by the
camera's well-ground, optically correct lens. The plate was removed from the camera
and treated with mercury vapors to develop the latent image. Finally, the image was
fixed by removal of the remaining photosensitive salts in a bath of hyposulfite and
toned with gold chloride to improve contrast and durability. Color, made of powdered
pigment, was applied directly to the metal surface with a finely pointed brush.
Daguerre's attempt to sell his process (the daguerreotype) through licensing
was not successful, but he found an enthusiastic supporter in François Arago, an
eminent member of the Académie des Sciences in France. Arago recommended that
the French government compensate Daguerre for his considerable efforts, so that the
daguerreotype process could be placed at the service of the entire world. The French
government complied, and the process was publicized by F. Arago at a meeting of
L'Institut, Paris, on August 19, 1839, as a gift to the world from
France.
Astronomers were among the first to employ the new imaging techniques. In
1839-1840, John W. Draper, professor of chemistry at New York University, made the
first photographs of the Moon, in the first application of daguerreotypes to astronomy. The
photoheliograph, a device for taking telescopic photographs of the sun, was unveiled
in 1854.
In 1840, optical means were used to reduce daguerreotype exposure times to 3-5
minutes. In 1841, William Henry Fox Talbot patented a new process involving the creation of
paper negatives. By the end of the 19th century, photography had become an important
means of scientific research and also a commercial item that entered people's everyday
life. It has kept this status until very recently.

The invention of photography (the combination of imaging optics and a photosensitive
material) was a revolutionary step. Image formation and image display were
separated. The photographic plate/film combines three basic imaging functions: image
recording, image storage and image display.


X-ray imaging (Wilhelm Conrad Röntgen, Nov. 8, 1895; Institute of
Physics, University of Würzburg, Germany; the first Nobel Prize in Physics, 1901)

A new type of radiation for imaging was discovered:
X-ray point source + photographic film or photo-luminescent screen


Wilhelm Conrad Röntgen. One of the first medical X-ray images (a
hand with small shot in it).


Fluorography 1907

Fluorography 2000

Marie Curie, the discoverer of radium, operated the first X-ray imaging machines in the
French army during the First World War and taught others to operate them.

Photography had played a decisive role in
the discovery of X-Rays.
It had played a decisive role in yet
another revolutionary discovery, the
discovery of radioactivity.

In 1896, Antoine Henri Becquerel
accidentally discovered radioactivity
while investigating phosphorescence in
uranium salts.

This discovery eventually led, along
with others, to new imaging techniques
such as radiography.

Antoine Henri Becquerel

Modern gamma camera:
gamma-ray collimator + gamma-ray-to-light converter + photosensitive array + CRT
as a display. The collimator separates rays coming from different object points.


Electronic imaging
Electron microscope (Ernst Ruska, 1931, The Nobel Prize, 1986)
Electron optics + luminescent screen or electron sensitive array + CRT display








Transmission electron microscope image: atoms of
gold (Au clusters) on MoS2.

Scanning electron microscope image (from
http://www.sst.ic.ac.uk/intro/AFM.htm)
Electronic television
Video camera: imaging optics + electron optics + scanning + photo-electronic
converter; image display: CRT tube.
An important step: image discretization.

~1910, Boris Lvovich Rosing, St. Petersburg, Russia: the cathode ray tube as a display
device
~1920, his former student, US émigré Vladimir Kozmich Zvorykin: conversion of an
optical image into an electric signal and the inverse conversion: iconoscope & kinescope.
David Sarnoff, son of a rabbi from Belorussia, US émigré and former telegraph
operator, President of RCA at that time, invited Zvorykin to RCA and gave him
$100,000 for the development of a commercial electronic television system
~1935: first regular TV broadcasting in Britain, Germany, USA
~ end of the 1940s: commercial TV broadcasting
~ end of the 1950s: colour television



One of the first TV images (V. K. Zvorykin, 1933)

Radars (around 1935-40), Sonars.
The principle of active vision was invented and implemented:

radio-wave beam-forming antenna + space-scanning mechanism + receiver + CRT as
a display

Acoustic microscope (1950s; after R. Bracewell, Two-dimensional Imaging,
Prentice Hall, 1995):



























A monochromatic sound pulse can be focused to a point on the solid surface of an
object by a lens (a sapphire rod), and the reflection will return to the lens to be
gathered by a receiver. The strength of the reflection depends on the acoustical
impedance looking into the solid surface relative to the impedance of the propagating
medium. If the focal point performs a raster scan over the object, a picture of the surface
impedance is formed. The acoustic impedance of a medium depends on its density and
elastic rigidity. Acoustic energy that is not reflected at the surface but enters the solid
may be only lightly attenuated and then reflect from sub-surface discontinuities to reveal
an image of the invisible interior. With such a device, a resolution comparable to that of
optical microscopes can be achieved. A major application is in the semiconductor
industry for inspecting integrated circuits.

The idea of focusing an acoustic beam was originally suggested by Rayleigh. The
application of scanning acoustic microscopes goes back to 1950.

A scanning optical microscope can also be made on the same principle. It has value as
a means of imaging an extended field without aberrations associated with a lens.


Schematic of the scanning acoustic microscope: electric oscillator, receiver, sapphire (Al2O3) rod, piezo-electric transducer (niobium ...), movable specimen immersed in a liquid.
Scanned-proximity probe (SPP) microscopes.
SPP microscopes work by measuring a local property - such as height, optical
absorption, or magnetism - with a probe or "tip" placed very close to the sample. The
small probe-sample separation (on the order of the instrument's resolution) makes it
possible to take measurements over a small area. To acquire an image, the microscope
raster-scans the probe over the sample while measuring the local property in question.
Scanned-probe systems do not use lenses, so the size of the probe rather than
diffraction effects generally limits their resolution.

Tunneling microscope (1980s; The Nobel Prize, 1986)

Schematic of the physical principle and initial
technical realization of the Scanning Tunneling Microscope.
(a) shows the apex of the tip (left) and the sample surface
(right) at a magnification of about 10^8. The solid
circles indicate atoms, the dotted lines electron density
contours. The path of the tunnel current is given by the
arrow. (b) Scaled down by a factor of 10^4. The tip (left)
appears to touch the surface (right). (c) STM with
rectangular piezo drive X, Y, Z of the tunnel tip at left
and the "louse" L (electrostatic motor) for rough
positioning (µm to cm range) of the sample S (from G.
Binnig, H. Rohrer: Physica 127B, 37, 1984)




Scanning tunnel microscope image
of silicon surface. The image shows
two single layer steps (the jagged
interfaces) separating three terraces.
Because of the tetrahedral bonding
configuration in the silicon lattice,
dimer row directions are orthogonal
on terraces joined by a single layer
step. The area pictured is 30x30 nm

A conductive sample and a sharp metal tip, which acts as a local probe, are brought within
a distance of a few Ångströms, resulting in a significant overlap of the electronic wave
functions (see the figure). With an applied bias voltage (typically between 1 mV and 4 V), a
tunnelling current (typically between 0.1 nA and 10 nA) can flow from the occupied electronic
states near the Fermi level of one electrode into the unoccupied states of the other electrode.
By using a piezo-electric drive system for the tip and a feedback loop, a map of the surface
topography can be obtained. The exponential dependence of the tunnelling current on the tip-
to-sample spacing has proven to be the key to the high spatial resolution which can be
achieved with the STM. Under favorable conditions, a vertical resolution of hundredths of an
Ångström and a lateral resolution of about one Ångström can be reached. Therefore, the STM
can provide real-space images of surfaces of conducting materials down to the atomic scale.
(from R. Wiesendanger and H.-J. Güntherodt, Introduction, Scanning Tunneling Microscopy
I, General Principles and Applications to Clean and Adsorbate-Covered Surfaces, Springer
Verlag, Berlin, 1994)
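As a compact reminder of this exponential dependence (a standard textbook approximation, not written out in the source), the tunnelling current behaves as

$$ I \;\propto\; V\,\exp(-2\kappa d), \qquad \kappa = \frac{\sqrt{2m\phi}}{\hbar}, $$

where d is the tip-to-sample spacing, V the bias voltage, m the electron mass and phi the effective barrier height. For typical work functions of a few eV, kappa is of the order of 1 per Ångström, so changing d by a single Ångström changes the current by roughly an order of magnitude.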
Atomic force microscope (after http://stm2.nrl.navy.mil/how-afm/how-afm.html).

The atomic force microscope is one of about two dozen types of scanning probe
microscopes. AFM operates by measuring attractive or repulsive forces between a tip
and the sample. In its repulsive "contact" mode, the instrument lightly touches a tip at
the end of a leaf spring or "cantilever" to the sample. As a raster-scan drags the tip
over the sample, some sort of detection apparatus measures the vertical deflection of
the cantilever, which indicates the local sample height. Thus, in contact mode the
AFM measures hard-sphere repulsion forces between the tip and sample. In
noncontact mode, the AFM derives topographic images from measurements of
attractive forces; the tip does not touch the sample.
AFMs can achieve a resolution of 10 pm and, unlike electron microscopes, can
image samples in air and under liquids. To achieve this, most AFMs today use the
optical lever. The optical lever (Figure 1) operates by reflecting a laser beam off the
cantilever. Angular deflection of the cantilever causes a twofold larger angular
deflection of the laser beam. The reflected laser beam strikes a position-sensitive
photodetector consisting of two side-by-side photodiodes. The difference between the
two photodiode signals indicates the position of the laser spot on the detector and thus
the angular deflection of the cantilever. Image acquisition times are of the order of one
minute.
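As a toy illustration of how the split-photodiode signals are turned into a deflection estimate (hypothetical names and calibration, not taken from the source), the normalized difference of the two photocurrents is, for small angles, proportional to the laser-spot displacement and hence to the cantilever deflection:

```python
def cantilever_deflection_nm(sig_a, sig_b, nm_per_unit):
    """Estimate vertical cantilever deflection from the two photodiode signals
    of an AFM optical lever (small-angle approximation; illustrative only).

    sig_a, sig_b -- photocurrents of the two halves of the detector
    nm_per_unit  -- experimentally calibrated sensitivity (nm of deflection per
                    unit of normalized difference); it absorbs the optical-lever
                    gain, including the factor of two between cantilever and
                    beam angles mentioned above
    """
    normalized_difference = (sig_a - sig_b) / (sig_a + sig_b)
    return nm_per_unit * normalized_difference
```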


Atomic force microscope, University of Konstanz (May 1991)

The ability of AFM to image at atomic resolution, combined with its ability to image
a wide variety of samples under a wide variety of conditions, has created a great deal
of interest in applying it to the study of biological structures. Images have appeared in
the literature showing DNA, single proteins, structures such as gap junctions, and
living cells.

Linear tomography (~1930s)
























Schematic diagram of linear tomography.

Due to the synchronous movement of the X-ray source and the X-ray sensor, a certain
planar cross-section of the object is always projected onto the same place on the sensor,
while the others are projected with a displacement and therefore appear blurred in
the resulting image.


Application in dentistry

Figure labels: moving X-ray point source; object points O1, O2, O3; focal plane; moving stage with the X-ray sensor; images 1-3.
Laminography
The principle of laminography (http://lca.kaist.ac.kr/Research/2000/X_lamino.html)

An X-ray point source moving in the source plane over a circular trajectory projects the
object onto the X-ray detector plane. The detector moves synchronously with the source in
such a way as to ensure that a specific object layer is projected onto the same place on
the detector array for every position of the source. The plane of this selected layer
is called the focal plane. Projections of other object layers located above or beneath
the focal plane will, for different positions of the source, be displaced. Therefore, if
one sums up all projections obtained for the different positions of the source, the projections
of the focal-plane layer accumulate coherently, producing a sharp image of
this layer, while the other layers, projected with different displacements in different
projections, produce a blurred background image. The more projections are
available, the lower is the contribution of this background to the high-frequency
components of the output image.
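As a purely illustrative sketch of this shift-and-add principle (synthetic data and a hypothetical geometry table, not taken from the source), each projection is shifted so that the chosen layer stays registered before summation; layers at other depths remain misregistered and blur out:

```python
import numpy as np

def shift_and_add(projections, layer_shifts, focal_layer):
    """Refocus a set of projections on one object layer by shift-and-add.

    projections  -- list of 2-D arrays, one per X-ray source position
    layer_shifts -- layer_shifts[k][d] = (dy, dx): integer displacement of
                    layer d in projection k, determined by the acquisition
                    geometry (source and detector positions)
    focal_layer  -- index of the layer to bring into focus
    """
    accumulated = np.zeros_like(projections[0], dtype=float)
    for k, proj in enumerate(projections):
        dy, dx = layer_shifts[k][focal_layer]
        # Undo the displacement of the chosen layer before accumulating;
        # all other layers stay misregistered and average into a blur.
        accumulated += np.roll(proj, shift=(-dy, -dx), axis=(0, 1))
    return accumulated / len(projections)
```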





Illustration of the restoration of different layers of a printed circuit board (panels: two projections and the corresponding accumulated images).
TRANSFORM IMAGING TECHNIQUES

All the above imaging devices belong to the class of direct image-plane imaging devices.
They produce images that can be directly viewed by the eye. The fundamental drawback of
direct image-plane imaging techniques is that they require direct access to the individual
object locations to be resolved.

Transform imaging devices collect, in a certain form, the information needed for
image reconstruction rather than object images directly. In transform imaging,
image information retrieval and image formation (reconstruction) for display are
essentially separated.

Probably the very first example of an indirect imaging method was that of X-ray
crystallography (Max von Laue, 1912; Nobel Prize, 1914)


In 1912 Max von Laue and his two students (Walter Friedrich and Paul Knipping)
demonstrated the wave nature of X-rays and the periodic structure of crystals by
observing the diffraction of X-rays from crystals of zinc sulfide. This discovery gave
rise to crystallography as a new imaging technique. In crystallography, the geometrical
parameters and the intensity distribution of the pattern of diffraction spots are used for
the calculation of the spatial distribution and density of atoms in crystals.



The discovery of the diffraction of X-rays had a decisive value in the development of the
physics and biology of the 20th century. One of the most remarkable scientific achievements
based on X-ray crystallography was the discovery by J. Watson and F. Crick of the
double-helix structure of DNA (1953; Nobel Prize, 1962)






Holography (1948, D. Gabor, The Nobel Prize, 1971)
The invention of holography by D. Gabor (1948) was motivated by the desire to
improve the resolving power of electron microscopes, which was limited by the
fundamental limitations of electron optics. The term holography originates from the
Greek word holos (whole). By this, the inventor of holography intended to emphasize
that in holography the full information about the light wave, both amplitude and phase, is
recorded by means of the interference of two beams, the object beam and the reference one.
Because sources of coherent electron radiation were not available at that time, Gabor
carried out model optical experiments to demonstrate the feasibility of the method.
However, powerful sources of coherent light were also not available at the time, and
holography remained an optical paradox until the invention of lasers. The very first
implementations of holography were demonstrated in 1961 by the radio engineers E. Leith
and J. Upatnieks at the University of Michigan and by the physicist Yu. Denisyuk at the
State Optical Institute, St. Petersburg, Russia.
In holography, the interference pattern between the optical wave reflected by or
transmitted through the object and a special reference wave is recorded.





















Principle of recording holograms: source of coherent light, mirror, object, object beam, reference beam, recording medium.

Principle of hologram playback: source of coherent light, mirror, reconstructing beam, recording medium, reconstructed object (real and virtual/imaginary images). For reconstruction, the hologram is illuminated by the same reference beam that was used for recording.
Optical information processing (Maréchal, Vander Lugt,
1964-66)

The invention of lasers and holography stimulated work on optical information
processing, based on the capability of optical lenses to perform image
Fourier transforms at the speed of light.















Optical system for image restoration (4F scheme): coherent illumination, input image, Fourier lens, spatial filter in the Fourier plane, Fourier lens, output image.

Vander Lugt optical correlator (4F scheme): coherent illumination, input image, Fourier lens, matched filter of the target object in the Fourier plane, Fourier lens, correlation plane.
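A digital counterpart of the Vander Lugt correlator (a minimal sketch with synthetic data; names and sizes are illustrative assumptions) performs the same three steps numerically: forward FFT, multiplication by the conjugate spectrum of the target (the matched filter), inverse FFT to the correlation plane:

```python
import numpy as np

def matched_filter_correlation(scene, target):
    """Digital analogue of the 4F Vander Lugt correlator: correlate a scene
    with a target by multiplication in the Fourier domain."""
    scene_spec = np.fft.fft2(scene)
    target_spec = np.fft.fft2(target, s=scene.shape)   # zero-pad target
    corr = np.fft.ifft2(scene_spec * np.conj(target_spec))
    return np.abs(corr)                                # correlation plane

# Toy usage: the brightest point of the correlation plane marks where the
# target pattern sits in the scene.
scene = np.zeros((128, 128))
scene[40:48, 70:78] = 1.0                              # target placed at (40, 70)
target = np.ones((8, 8))
peak = np.unravel_index(np.argmax(matched_filter_correlation(scene, target)),
                        scene.shape)                   # -> (40, 70)
```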
Digital holography (1968-1971)

At the end of the 1960s and the beginning of the 1970s it was suggested (A. Lohmann,
J. Goodman, T. Huang) to use digital computers for the reconstruction and synthesis of
holograms and as a replacement for optical information processing.

Computer reconstruction of holograms
Computer synthesis of holograms for 3-D visualization
Computer synthesis of diffractive optical elements for
optical image processing and optical metrology


Digital holography reflects, in the most purified way, the informational pith and
marrow of holography and imaging that motivated the two most famous inventors
of holography, D. Gabor and Yu. N. Denisyuk.
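A minimal numerical illustration of hologram recording and computer reconstruction (a synthetic Fourier hologram with an off-axis reference wave; the sizes and the tilt are arbitrary assumptions, not parameters from the source):

```python
import numpy as np

N = 256
obj = np.zeros((N, N))
obj[96:128, 112:144] = 1.0                      # toy object transmittance

# "Recording": the object spectrum interferes with a tilted plane reference
# wave, and only the resulting intensity pattern (the hologram) is kept.
field = np.fft.fft2(obj)                        # object wave in the hologram plane
y, x = np.mgrid[0:N, 0:N]
reference = np.exp(2j * np.pi * 0.3 * x)        # off-axis reference wave
hologram = np.abs(field + reference) ** 2       # recorded intensity only

# "Playback": a single inverse FFT of the hologram. Its magnitude contains a
# zero-order term around the origin plus the reconstructed object and its
# twin image, displaced sideways by the reference-wave tilt.
reconstruction = np.abs(np.fft.ifft2(hologram))
```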



























Latest development: digital holographic microscopy (end of the 1990s)





Schematic of a digital holographic microscope: laser, beam spatial filter, collimator, lens, microscope, object table, object beam and reference beam, hologram sensor (a digital photographic camera), computer performing digital reconstruction of the electronically recorded holograms.
Synthetic aperture radar (C. Wiley, USA, 1951)








In synthetic aperture radar imaging, the amplitude and phase of radio waves reflected
by the object are recorded in the course of the plane's flight past the object. These flight
data are then used for the reconstruction of the wave reflectivity distribution over the object
surface. The reconstruction is carried out either optically or, at present, in digital
computers.

It is not accidental that E. Leith and J. Upatnieks, who produced the first optical laser
holograms, were radio engineers who had been working on synthetic aperture radars, a
highly classified subject at that time.










One of the most recent examples
of the application of SAR
imaging: radar map of
Venus




If the thick clouds covering Venus were removed, how would the surface appear?
Using an imaging radar technique, the Magellan spacecraft was able to lift the veil
from the Face of Venus and produce this spectacular high resolution image of the
planet's surface. Red, in this false-color map, represents mountains, while blue
represents valleys. This 3-kilometer resolution map is a composite of Magellan
images compiled between 1990 and 1994. Gaps were filled in by the Earth-based
Arecibo Radio Telescope. The large yellow/red area in the north is Ishtar Terra
featuring Maxwell Montes, the largest mountain on Venus. The large highland
regions are analogous to continents on Earth. Scientists are particularly interested in
exploring the geology of Venus because of its similarity to Earth.


Coded aperture (multiplexing) techniques (1970s)

The pinhole camera (camera obscura) has a substantial advantage over lenses - it has
infinite depth of field, and it doesn't suffer from chromatic aberration. Because it
doesn't rely on refraction, a pinhole camera can be used to form images from X-rays and
other high-energy radiation, which are normally difficult or impossible to focus.













The biggest problem with pinhole cameras is that they let very little light through to
the film or other detector. This problem can be overcome to some degree by making
the hole larger, which unfortunately leads to a decrease in resolution. The smallest
feature which can be resolved by a pinhole is approximately the same size as the
pinhole itself. The larger the hole, the more blurred the image becomes. Using
multiple, small pinholes might seem to offer a way around this problem, but this gives
rise to a confusing montage of overlapping images. Nonetheless, if the pattern of
holes is carefully chosen, it is possible to reconstruct the original image with a
resolution equal to that of a single hole.











In coded aperture imaging, image projections obtained through a set of special
binary masks are recorded and used for image reconstruction carried out in
computers.
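A toy simulation of this idea (random binary mask, arbitrary sizes, purely illustrative): with a random mask the decoding shown here is only approximate, whereas specially designed masks such as uniformly redundant arrays make it essentially exact.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
obj = np.zeros((N, N))
obj[20:28, 30:44] = 1.0                           # toy source distribution
mask = (rng.random((N, N)) < 0.5).astype(float)   # binary coding mask m(x, y)

F, iF = np.fft.fft2, np.fft.ifft2

# Encoding: in this simplified (circular) model every source point casts a
# shifted shadow of the mask, so the detector records obj convolved with mask.
detector = np.real(iF(F(obj) * F(mask)))          # recorded data b(x, y)

# Decoding in the computer: correlate the detector data with a balanced
# version of the mask; the result approximates the original distribution.
decoding = 2.0 * mask - 1.0
reconstruction = np.real(iF(F(detector) * np.conj(F(decoding))))
```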

Figure labels: pinhole camera (source of irradiation, image-plane detector); coded-aperture camera (coding mask m(x, y), detector array b(x, y)).
Computer tomography (Hounsfield, 1973; The Nobel Prize,
1979)



















Schematic diagram of parallel beam projection tomography

In computer tomography, a set of object projections taken at different observation
angles is measured and used for subsequent reconstruction of the object.
Computer tomography became the first full-scale example of digital imaging.
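As a minimal sketch of this measure-and-reconstruct scheme (synthetic phantom; scipy's image rotation is used as a stand-in for exact projection geometry, and real scanners use filtered backprojection or algebraic methods rather than the plain backprojection shown here):

```python
import numpy as np
from scipy.ndimage import rotate

def project(image, angles_deg):
    """Parallel-beam projections: for each angle, rotate the object and
    sum along one axis (a simple discrete model of the line integrals)."""
    return [rotate(image, a, reshape=False, order=1).sum(axis=0)
            for a in angles_deg]

def backproject(projections, angles_deg, size):
    """Unfiltered backprojection: smear each projection back across the
    image plane along its viewing direction and sum.  This yields a blurred
    reconstruction; practical scanners apply a ramp filter to each
    projection first (filtered backprojection)."""
    recon = np.zeros((size, size))
    for p, a in zip(projections, angles_deg):
        smear = np.tile(p, (size, 1))             # constant along the ray direction
        recon += rotate(smear, -a, reshape=False, order=1)
    return recon / len(projections)

# Toy usage (hypothetical phantom): a bright disc reconstructed from 60 views.
N = 128
yy, xx = np.mgrid[0:N, 0:N]
phantom = ((xx - 70) ** 2 + (yy - 50) ** 2 < 15 ** 2).astype(float)
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
recon = backproject(project(phantom, angles), angles, N)
```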



Surface rendering of a fly head reconstructed using a SkyScan micro-CT scanner
Model L1072 (Advanced imaging, July 2001, p. 22)


Figure labels: object Obj(x, y); parallel beam of X-rays; X-ray-sensitive line sensor array recording the projection Proj(., .); coordinate axes x, y.
MRI: Magnetic resonance tomography (Felix Bloch and
Edward Purcell, The Nobel Prize, 1952, for the discovery of the magnetic
resonance phenomenon in 1946; Richard Ernst, The Nobel Prize in Chemistry,
1991, for his achievements in pulsed Fourier Transform NMR and MRI; Paul C.
Lauterbur, USA, and Sir Peter Mansfield, UK, the Nobel Prize in Physiology or
Medicine, 2003).



























Schematic diagram of NMR imaging

MRI is based on the principles of nuclear magnetic resonance (NMR), a spectroscopic
technique capable of obtaining microscopic chemical and physical information about
molecules. An effect is observed when an atomic nucleus is exposed to radio waves in the
presence of a magnetic field. A strong magnetic field causes the magnetic moment of the
nucleus to precess around the direction of the field, only certain orientations being allowed by
quantum theory. A transition from one orientation to another involves the absorption or
emission of a photon, the frequency of which is equal to the precessional frequency. With
magnetic field strengths customarily used, the radiation is in the radio-frequency band. If
radio-frequency radiation is supplied to the sample from one coil and is detected by another
coil, while the magnetic field strength is slowly changed, radiation is absorbed at certain field
values, which correspond to the frequency difference between orientations. An NMR
spectrum consists of a graph of field strength against detector response. This provides
information about the structure of molecules and the positions of electrons within them, as the
orbital electrons shield the nucleus and cause them to resonate at different field strengths
(adapted from The Macmillan Encyclopedia 2001, Market House Books Ltd, 2000).
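The precessional frequency referred to above is the Larmor frequency; in standard notation (not written out in the source)

$$ \omega_0 = \gamma B_0, $$

where gamma is the gyromagnetic ratio of the nucleus and B0 the static field strength (for protons, gamma/2pi is about 42.6 MHz/T, i.e. about 64 MHz at 1.5 T). In MRI, superimposed gradient fields make the resonance frequency position-dependent, omega(x) = gamma (B0 + G x), which is what allows an image to be reconstructed from the received RF signal.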

Figure labels: object in a strong magnetic field; magnet and gradient coils; RF impulse generator and RF inductor; RF receiver; reconstruction and display; coordinate axes x, y, z.
Nobel prizes for new imaging devices and principles of imaging

Wilhelm Conrad Röntgen, Germany, Munich University, Munich, Germany, b. 1845,
d. 1923. The Nobel Prize in Physics 1901 "in recognition of the extraordinary services he
has rendered by the discovery of the remarkable rays subsequently named after him"

Gabriel Lippmann, France, Sorbonne University, Paris, France, b. 1845 (in Hollerich,
Luxembourg), d. 1921. The Nobel Prize in Physics 1908 "for his method of reproducing
colours photographically based on the phenomenon of interference"

Max von Laue, Germany, Frankfurt-on-the-Main University, Frankfurt-on-the-Main,
Germany, b. 1879, d. 1960. The Nobel Prize in Physics 1914 "for his discovery of the
diffraction of X-rays by crystals"

Patrick Maynard Stuart Blackett, United Kingdom, Victoria University,
Manchester, United Kingdom, b. 1897, d. 1974. The Nobel Prize in Physics 1948 "for his
development of the Wilson cloud chamber method, and his discoveries therewith in the
fields of nuclear physics and cosmic radiation"

Cecil Frank Powell, United Kingdom, Bristol University, Bristol, United Kingdom,
b. 1903, d. 1969. The Nobel Prize in Physics 1950 "for his development of the
photographic method of studying nuclear processes and his discoveries regarding mesons
made with this method"

Frits (Frederik) Zernike, the Netherlands, Groningen University, Groningen, the
Netherlands, b. 1888, d. 1966. The Nobel Prize in Physics 1953 "for his demonstration of
the phase contrast method, especially for his invention of the phase contrast microscope"

Donald Arthur Glaser, USA, University of California, Berkeley, CA, USA, b. 1926.
The Nobel Prize in Physics 1960 "for the invention of the bubble chamber"

Dennis Gabor, United Kingdom, Imperial College of Science and Technology London,
United Kingdom b.1900 (in Budapest, Hungary), d.1979, The Nobel Prize in Physics
1971 "for his invention and development of the holographic method"

Allan M. Cormack, USA, Tufts University Medford, MA, USA, b.1924 (in
Johannesburg, South Africa) d.1998

Godfrey N. Hounsfield, United Kingdom, Central Research Laboratories, EMI,
London, United Kingdom, b. 1919. The Nobel Prize in Physiology or Medicine, 1979 "for
the development of computer assisted tomography"
Ernst Ruska, Germany, Fritz-Haber-Institut der Max-Planck-Gesellschaft, Berlin,
b. 1906, d. 1988. The Nobel Prize in Physics 1986 "for his fundamental work in electron
optics, and for the design of the first electron microscope"

Gerd Binnig, Germany, b. 1947, IBM Zurich Research Laboratory, Switzerland;
Heinrich Rohrer, Switzerland, b. 1933, IBM Zurich Research Laboratory, Switzerland.
The Nobel Prize in Physics 1986 "for their design of the scanning tunneling microscope"

Paul C. Lauterbur, USA, and Peter Mansfield, UK. The Nobel Prize 2003 in Physiology or
Medicine for their discoveries concerning magnetic resonance imaging.
Digital imaging and image processing: the highest
level of the evolution of imaging techniques

New qualities that are brought to imaging systems by digital computers and
processors:

- Flexibility and adaptability. The most substantial advantage of digital
computers as compared with analog electronic and optical information
processing devices is that no hardware modifications are necessary to
reprogram digital computers to solve different tasks. With the same
hardware, one can build an arbitrary problem solver by simply selecting or
designing an appropriate code for the computer. This feature also makes digital
computers an ideal vehicle for processing image signals adaptively, since,
with the help of computers, imaging systems can adapt rapidly and easily to varying
signals, tasks and end-user requirements.

- Digital computers integrated into imaging systems enable them to perform
not only element-wise and integral signal transformations, such as spatial and
temporal Fourier analysis, signal convolution and correlation, that are
characteristic of analog optics, but any operations needed. This removes the
major limitation of optical information processing and makes optical
information processing integrated with digital signal processing almost
unlimited in its capabilities.

- Acquiring and processing the quantitative data contained in images as signals,
and connecting imaging systems to other informational systems and networks,
is most natural when data are handled in digital form. In the same way as
currency is a general equivalent in economics, digital signals are the general
equivalent in information handling. A digital signal within the computer
represents, so to say, the purified information carried by image signals, deprived
of its physical integument. Thanks to its universal nature, the digital signal is
an ideal means for integrating different informational systems.

The only limitations of digital imaging and image processing are
memory and processing speed capacities of computers.

IMAGE PROCESSING:

SOLVING a GIGO PROBLEM

























Three types of end users in image processing:

Collective user (as in commercial photography, TV broadcasting,
Multimedia)
Expert user (as in air and space photo reconnaissance, radiology
diagnostics, etc.)
Automatic devices (computer vision)
Block diagram: OBJECTS of STUDY -> IMAGING DEVICE -> raw IMAGE ("Garbage In") -> IMAGE PROCESSOR -> ("Gold Out") -> USER
If it's green or it wriggles, it's biology
If it doesn't work, it's physics
To err is human, but to really foul things up requires a computer

Murphy's handy guide to modern science

IMAGE PROCESSING TASKS

Image processing tasks, by type of end user (collective user, individual expert user, automata):

Image formation (image reconstruction)
Image perfecting (image restoration)
Interactive image interpretation (image preparation, image enhancement)
Image quantification (parameter estimation)
Automated image analysis
Image modeling (virtual imaging)
Image coding for transmission and storage
Simulating imaging devices


Digital Image Processing: Applications

An ounce of application is worth a ton of abstraction
Booker's law, in: A. Bloch, Murphy's Law, Price Stern Sloan, L.A., 1977

Es gibt nichts Praktischeres als eine gute Theorie
(There is nothing more practical than a good theory)
Saying

Grau, teurer Freund, ist alle Theorie
Und grün des Lebens goldner Baum
(Gray, dear friend, is every theory
And green the golden tree of life)
Goethe, Faust

La précision n'est pas la fidélité
(Accuracy is not fidelity)
H. Matisse
Syllabus of the course:
Image digitization and coding
Image interpolation, resampling and geometrical transforms.
Statistical image and noise models and texture synthesis and analysis.
Image quantification and parameter estimation techniques
Object detection and localization in images. Accuracy and reliability of the
localization. Adaptive and local adaptive filters for reliable target localization.
Image reconstruction, perfecting and restoration. Optimal, adaptive and local
adaptive linear and rank filters.
Interactive image processing and enhancement.
Multi-component and multi-modal image processing.
Efficient computational algorithms and parallel neuro-morphic networks

Text books:

1. L. Yaroslavsky, Digital Holography and Digital Image processing: Principles, Methods,
Algorithms, Kluwer Scientific Publishers, 2004
2. L. Yaroslavsky, M. Eden. Fundamentals of Digital Optics. Birkhauser, Boston, 1996
3. L. Yaroslavsky. Digital Picture Processing. An Introduction, Springer Verlag, Heidelberg,
New York, 1985

Additional references

1. L. Yaroslavsky, Digital Signal Processing in Optics and Holography, Moscow, Radio i Svyaz,
1987 (In Russian).
2. L. P. Yaroslavsky, N. S. Merzlyakov, Digital Holography, Nauka Publ., Moscow, 1982 (In
Russian).
3. L. P. Yaroslavskii, N. S. Merzlyakov, Methods of Digital Holography, Consultant Bureau,
N.Y., 1980
4. L. Yaroslavsky, Introduction to Digital Image processing, Moscow, Sov. Radio, Moscow,
1979 (In Russian).
5. J. S. Lim, Two-dimensional Signal and Image Processing, Prentice Hall, New Jersey, 1990.
6. R. C. Gonzalez, R. E. Woods, Digital Image Processing, 2-nd ed. Prentice Hall, Inc., 2002
7. R. N. Bracewell, Two-Dimensional Imaging, Prentice Hall Int. Inc., 1995
